- From [why ELT wins](https://www.fivetran.com/blog/etl-vs-elt-why-a-post-load-process-wins-every-time#:~:text=Faster%20load%20times%3A%20Because%20ETL,sources%20to%20the%20destination%20system.)
  - Historically, the ETL process (Extract-Transform-Load) made the most sense for [data transformation](https://www.fivetran.com/blog/what-is-data-transformation) because the costs of computation, storage and bandwidth were all high – and transforming your data before loading it into the data warehouse reduced all three. However, in the last decade, [cloud data warehouses](https://www.fivetran.com/blog/warehouse-benchmark) such as Snowflake, Amazon Redshift and Google BigQuery have become ubiquitous, driving down storage costs and increasing processing power exponentially.
  - This has made storing raw data in the data warehouse much less of a concern – and opened up the potential for a different way of transforming data: post-load rather than pre-load. Known as [ELT (Extract-Load-Transform)](https://www.fivetran.com/blog/etl-vs-elt), this post-load transformation process has a number of advantages over traditional ETL.
  - Advantages
    - Faster turnaround: because analysts can perform transformations within the data warehouse environment without relying on [data engineers](https://www.fivetran.com/blog/what-wasting-data-engineering-talent-really-costs-you), turnaround times shorten for all analytics projects and insights are delivered sooner.
    - Perpetual access to raw data: the same data source often needs to be mined for different purposes. In ETL, when your query needs change, you have to rebuild your ETL pipelines – costly, time-consuming and dependent on data engineering expertise. In ELT, the raw data is already sitting in the warehouse, so you simply write a new transformation against it.
    - Moreover, with the ELT process, [data pipelines](https://www.fivetran.com/blog/what-is-a-data-pipeline) can be automated. This eliminates data engineering time spent not only building custom pipelines but also maintaining them, and it allows the entire process – from extraction to load to transformation – to be handled by a data analyst rather than requiring an engineer.
    - ELT simplifies [data integration](https://www.fivetran.com/blog/what-is-data-integration), results in lower failure rates, allows for flexible scaling and moves the transformation process to the warehouse, where skills such as SQL are enough to transform the data (see the first sketch below).
- Interesting comment on #stackoverflow about ECCD (sketched in the second example below):
  - "I prefer multi-step ETL -- ECCD (Extract, Clean, Conform, Deliver) whenever possible. I also keep intermediate csv files after each extract, clean, and conform step; takes some disk space, but is quite useful. Whenever DW has to be re-loaded due to bugs in etl, or DW schema changes, there is no need to query source systems again -- it is already in flat files. It is also quite convenient to be able to *grep*, *sed* and *awk* through flat files in the staging area when needed. In the case when there are several source systems which feed into the same DW, only extract steps have to be developed (and maintained) for each of the source systems -- clean, conform, and deliver steps are all common."
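To make the ELT flow concrete, here is a minimal Python sketch: raw rows land in the warehouse untouched, and the transformation happens afterwards as plain SQL inside it. `sqlite3` stands in for a real cloud warehouse (Snowflake, Redshift, BigQuery), and the table and column names are illustrative assumptions, not anything from the Fivetran post.

```python
# Minimal ELT sketch. sqlite3 is a stand-in for a cloud warehouse;
# raw_orders / daily_revenue are made-up names for illustration.
import sqlite3

raw_orders = [  # pretend this came straight from the extract step
    ("2024-01-05", "alice", "12.50"),
    ("2024-01-05", "bob", "7.25"),
    ("2024-01-06", "alice", "3.00"),
]

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE raw_orders (order_date TEXT, customer TEXT, amount TEXT)")

# Load: raw data goes into the warehouse as-is, with no pre-load transformation.
con.executemany("INSERT INTO raw_orders VALUES (?, ?, ?)", raw_orders)

# Transform: an analyst writes SQL against the raw table, post-load.
con.execute("""
    CREATE TABLE daily_revenue AS
    SELECT order_date, SUM(CAST(amount AS REAL)) AS revenue
    FROM raw_orders
    GROUP BY order_date
""")

for row in con.execute("SELECT * FROM daily_revenue ORDER BY order_date"):
    print(row)  # ('2024-01-05', 19.75) then ('2024-01-06', 3.0)
```

The point of the shape: because `raw_orders` stays in the warehouse, changing the question later means writing a new `CREATE TABLE ... AS SELECT`, not rebuilding the pipeline.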
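And a minimal sketch of the ECCD pattern from the comment: each step reads the previous checkpoint and writes its own CSV in a staging directory, so a DW re-load never has to touch the source systems again, and the flat files stay grep/sed/awk-friendly. The paths and the toy clean/conform rules are assumptions for illustration.

```python
# ECCD sketch: Extract -> Clean -> Conform -> Deliver, with a CSV
# checkpoint after each step. File names and rules are illustrative.
import csv
from pathlib import Path

STAGING = Path("staging")
STAGING.mkdir(exist_ok=True)

def write_step(name, rows):
    """Checkpoint one step's output as a flat file in the staging area."""
    with (STAGING / name).open("w", newline="") as f:
        csv.writer(f).writerows([["customer", "amount"], *rows])

def read_step(name):
    with (STAGING / name).open(newline="") as f:
        return list(csv.reader(f))[1:]  # skip header row

# Extract: one such step per source system; everything below is shared.
write_step("extracted.csv", [["Alice ", "12,50"], ["", "7.25"], ["bob", "3.00"]])

# Clean: drop rows with no customer, strip stray whitespace.
cleaned = [[c.strip(), a] for c, a in read_step("extracted.csv") if c.strip()]
write_step("cleaned.csv", cleaned)

# Conform: normalise to warehouse conventions (lowercase names, '.' decimals).
write_step("conformed.csv", [[c.lower(), a.replace(",", ".")] for c, a in cleaned])

# Deliver: a real pipeline would bulk-load conformed.csv into the DW.
print(read_step("conformed.csv"))  # [['alice', '12.50'], ['bob', '3.00']]
```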