Over the past year, I’ve been writing about several back-to-the-basics topics, like extracting data via APIs and SFTP, and building solid data models.
Nice article.
I'm a beginner, but I read it with the help of some AI to understand it from the ground up.
Thanks for your effort in writing this article.
On the T layer: a big challenge I see is efficient incremental updates for the semantic layer / model refresh. If we end up with an E(tltltltl)L pattern, how do we burn fewer resources and reduce the bill?
1. "Shift left"
2. Headless data & zero-copy architecture
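Beyond those two directions, one common way to keep an E(tl)*L chain from rebuilding everything on each refresh is watermark-based incremental processing: only rows newer than the last processed timestamp are merged into the model. A minimal sketch, assuming an in-memory stand-in for a warehouse table (the `SOURCE` rows, keys, and function names here are hypothetical):

```python
from datetime import datetime, timezone

# Hypothetical source rows; in practice this would be a warehouse table.
SOURCE = [
    {"id": 1, "updated_at": datetime(2024, 1, 1, tzinfo=timezone.utc), "amount": 10},
    {"id": 2, "updated_at": datetime(2024, 1, 2, tzinfo=timezone.utc), "amount": 20},
    {"id": 3, "updated_at": datetime(2024, 1, 3, tzinfo=timezone.utc), "amount": 30},
]

def incremental_refresh(model: dict, watermark: datetime):
    """Merge only source rows newer than the last watermark, so each
    refresh touches a fraction of the data instead of a full rebuild."""
    new_rows = [r for r in SOURCE if r["updated_at"] > watermark]
    for r in new_rows:
        model[r["id"]] = r["amount"]  # upsert by key
    # Advance the watermark to the newest row we just processed.
    new_watermark = max((r["updated_at"] for r in new_rows), default=watermark)
    return model, new_watermark

# First run: epoch watermark, so everything loads. Second run: no-op,
# because nothing is newer than the watermark — no wasted compute.
model, wm = incremental_refresh({}, datetime(1970, 1, 1, tzinfo=timezone.utc))
model, wm = incremental_refresh(model, wm)
```

Tools like dbt formalize exactly this pattern with incremental materializations, which is one practical answer to the cost question above.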