
    From ‘Dataslows’ to Dataflows: The Gen2 Performance Revolution in Microsoft Fabric

By ProfitlyAI · January 13, 2026


In the sea of announcements from the recent FabCon Europe in Vienna, one that may have gone under the radar concerned the improvements in performance and cost optimization for Dataflows Gen2.

Before we delve into explaining how these improvements impact your current Dataflows setup, let's take a step back and provide a brief overview of Dataflows. For those of you who are new to Microsoft Fabric: a Dataflow Gen2 is the no-code/low-code Fabric item used to extract, transform, and load data (ETL).

A Dataflow Gen2 provides numerous benefits:

• Leverage 100+ built-in connectors to extract data from a myriad of data sources
• Leverage the familiar GUI from Power Query to apply dozens of transformations to the data without writing a single line of code, a dream come true for many citizen developers
• Store the output of the data transformation as a Delta table in OneLake, so that the transformed data can be used downstream by various Fabric engines (Spark, T-SQL, Power BI…)

However, simplicity usually comes at a price. In the case of Dataflows, that price was significantly higher CU consumption compared to code-first solutions, such as Fabric notebooks and/or T-SQL scripts. This was already well explained and examined in two great blog posts written by my fellow MVPs, Gilbert Quevauvilliers (Fourmoo): Comparing Dataflow Gen2 vs Notebook on Costs and usability, and Stepan Resl: Copy Activity, Dataflows Gen2, and Notebooks vs. SharePoint Lists, so I won't waste time discussing the past. Instead, let's focus on what the present (and future) brings for Dataflows!

Changes to the pricing model

Image generated by author

Let's briefly examine what's displayed in the illustration above. Previously, every second of a Dataflow Gen2 run was billed at 16 CUs (CU stands for Capacity Unit, representing a bundled set of resources, CPU, memory, and I/O, used in synergy to perform a specific operation). Depending on the Fabric capacity size, you get a certain number of capacity units: F2 provides 2 CUs, F4 provides 4 CUs, and so on.

Going back to our Dataflows scenario, let's break this down using a real-life example. Say you have a Dataflow that runs for 20 minutes (1,200 seconds)…

• Previously, this Dataflow run would have cost you 19,200 CUs: 1,200 seconds * 16 CUs
• Now, this Dataflow run will cost you 8,100 CUs: 600 seconds (first 10 minutes) * 12 CUs + 600 seconds (after the first 10 minutes) * 1.5 CUs

The longer your Dataflow needs to execute, the greater the CU savings you potentially make.
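
For a quick sanity check of this arithmetic, here is a minimal Python sketch of the two billing models described above (the rates and the 10-minute threshold come from the article; the helper names are mine):

```python
def old_cost(duration_s: int) -> float:
    """Pre-change billing: flat 16 CUs per second."""
    return duration_s * 16


def new_cost(duration_s: int) -> float:
    """Post-change billing: 12 CUs/s for the first 10 minutes, 1.5 CUs/s after."""
    first_600 = min(duration_s, 600)
    remainder = max(duration_s - 600, 0)
    return first_600 * 12 + remainder * 1.5


for minutes in (5, 20, 60):
    seconds = minutes * 60
    print(f"{minutes:>2} min: old = {old_cost(seconds):>9,.0f} CUs, "
          f"new = {new_cost(seconds):>8,.0f} CUs")
# 20 min: old = 19,200 CUs, new = 8,100 CUs, matching the example above
```

Note how a 5-minute run saves modestly, while a 60-minute run costs a fraction of its former price: the savings grow with duration.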

That's great on its own, but there's still more to it. I mean, it's nice to be charged less for the same amount of work, but what if we could turn those 1,200 seconds into, let's say, 800 seconds? Then we wouldn't just save CUs, but also reduce the time-to-analysis, since the data would be processed faster. And that's exactly what the next two improvements are all about…

Modern Evaluator

The new preview feature, named Modern Evaluator, allows using the new query execution engine (running on .NET Core version 8) for running Dataflows. As per the official Microsoft docs, Dataflows running the modern evaluator can provide the following benefits:

• Faster Dataflow execution
• More efficient processing
• Scalability and reliability

Image generated by author

The illustration above shows the performance differences between various Dataflow "flavors". Don't worry, we'll challenge these numbers soon in a demo, and I'll also show you how to enable these latest improvements in your Fabric workloads.

    Partitioned Compute

Previously, Dataflow logic was executed in sequence. Hence, depending on the complexity of the logic, certain operations could take a while to complete, so other operations in the Dataflow had to wait in the queue. With the Partitioned Compute feature, a Dataflow can now execute parts of the transformation logic in parallel, thus reducing the overall time to complete.

At the moment, there are certain limitations on when partitioned compute will kick in. Specifically, only the ADLS Gen2, Fabric Lakehouse, Folder, and Azure Blob Storage connectors can leverage this new feature. Again, we'll explore how it works later in this article.
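
To build some intuition for why this matters, here is a toy Python sketch, purely an illustration and not how Fabric implements partitioning, that compares processing file partitions sequentially versus in parallel:

```python
import time
from concurrent.futures import ThreadPoolExecutor


def transform(partition: int) -> int:
    """Stand-in for a per-partition transformation step."""
    time.sleep(0.5)  # simulate I/O-bound work on one partition
    return partition


partitions = list(range(8))

start = time.perf_counter()
_ = [transform(p) for p in partitions]  # one partition at a time: ~4 s
print(f"sequential: {time.perf_counter() - start:.1f}s")

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    _ = list(pool.map(transform, partitions))  # 4 partitions at once: ~1 s
print(f"parallel:   {time.perf_counter() - start:.1f}s")
```

The same total work completes in roughly a quarter of the wall-clock time, which is exactly the effect partitioned compute aims for on eligible connectors.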

3, 2, 1…Action!

Okay, it's time to challenge the numbers provided by Microsoft and check if (and to what degree) there is a performance gain between the various Dataflow types.

Here is our scenario: I've generated 50 CSV files that contain dummy data about orders. Each file contains roughly 575,000 records, so there are ca. 29 million records in total (roughly 2.5 GB of data). All the files are already stored in a SharePoint folder, allowing for a fair comparison, as Dataflow Gen1 doesn't support a OneLake lakehouse as a data source.
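
The generation script itself isn't shown in this article, but a comparable file set can be produced with a few lines of Python; the column names and value ranges below are illustrative assumptions:

```python
import csv
import random
from datetime import date, timedelta

# Generate 50 CSV files with ~575,000 dummy order records each (~29M total).
for file_no in range(50):
    with open(f"orders_{file_no:02}.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["ord_id", "ord_dt", "cust_info", "amt"])
        for row_no in range(575_000):
            cust_id = random.randint(1, 10_000)
            writer.writerow([
                file_no * 575_000 + row_no,
                date(2024, 1, 1) + timedelta(days=random.randint(0, 364)),
                f"{cust_id};Customer-{cust_id}",  # delimited column to split later
                round(random.uniform(5.0, 500.0), 2),
            ])
```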

Image by author

I plan to run two series of tests: first, I'll include the Dataflow Gen1 in the comparison. In this scenario, I won't be writing the data into OneLake using Dataflows Gen2 (yeah, I know, it defeats the purpose of the Dataflow Gen2), as I want to compare "apples to apples" and exclude the time needed for writing data into OneLake. I'll test the following four scenarios, in which I perform some basic operations to combine and load the data, applying some basic transformations (renaming columns, etc.):

1. Use Dataflow Gen1 (the old Power BI dataflow)
2. Use Dataflow Gen2 without any additional optimization improvements
3. Use Dataflow Gen2 with only the Modern Evaluator enabled
4. Use Dataflow Gen2 with both the Modern Evaluator and Partitioned Compute enabled

In the second series, I'll compare only the three flavors of Dataflow Gen2 (points 2-4 from the list above), with writing the data to a lakehouse enabled.

Let's get started!

    Dataflow Gen1

The entire transformation process in the old Dataflow Gen1 is fairly basic: I simply combined all 50 files into a single query, split columns by delimiter, and renamed columns. So, nothing really advanced happens here:

    Image by author

    The same set of operations/transformations has been applied to all three Dataflows Gen2.
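
For readers who think in code, here is roughly what that combine/split/rename logic would look like in pandas. This is a hypothetical equivalent for illustration only, not the Dataflow itself, and it assumes the dummy column layout from the generator sketch above:

```python
import glob

import pandas as pd

# Combine all 50 CSV files into a single table.
orders = pd.concat(
    (pd.read_csv(path) for path in sorted(glob.glob("orders_*.csv"))),
    ignore_index=True,
)

# Split the delimited customer column into two separate columns.
orders[["CustomerID", "CustomerName"]] = (
    orders["cust_info"].str.split(";", n=1, expand=True)
)

# Drop the combined column and rename the rest to friendlier names.
orders = orders.drop(columns="cust_info").rename(
    columns={"ord_id": "OrderID", "ord_dt": "OrderDate", "amt": "Amount"}
)
```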

Please keep in mind that with Dataflow Gen1, it's not possible to output the data as a Delta table in OneLake. All transformations are persisted within the Dataflow itself, so when you need this data, for example, in a semantic model, you need to take into account the time and resources needed to load/refresh the data in the import-mode semantic model. But, more on that later.

    Dataflow Gen2 without enhancements

    Let’s now do the same thing, but this time using the new Dataflow Gen2. In this first scenario, I haven’t applied any of these new performance optimization features.

    Image by author

    Dataflow Gen2 with Modern Evaluator

    Ok, the moment of truth — let’s now enable the Modern Evaluator for Dataflow Gen2. I’ll go to the Options, and then under the Scale tab, check the Allow use of the modern query evaluation engine box:

    Image by author

    Everything else stays exactly the same as in the previous case.

    Dataflow Gen2 with Modern Evaluator and Partitioned Compute

    In the final example, I’ll enable both new optimization features in the Options of the Dataflow Gen2:

    Image by author

Now, let's proceed to testing and analyzing the results. I will execute all four dataflows in sequence from a Fabric pipeline, so we can be sure that each of them runs in isolation from the others.

    Image by author

    And, here are the results:

    Image by author

Partitioning obviously didn't contribute much in this particular scenario, and I will investigate how partitioning works in more detail in one of the following articles, with different scenarios in place. The Dataflow Gen2 with the Modern Evaluator enabled outperformed all the others by far, achieving 30% time savings compared to the old Dataflow Gen1 and ca. 20% time savings compared to the regular Dataflow Gen2 without any optimizations! Don't forget, these time savings also translate into CU savings, so the estimated CU cost for each of the solutions used is the following (since every run finished within the first 10 minutes, the flat 12 CUs-per-second rate applies):

• Dataflow Gen1: 550 seconds * 12 CUs = 6,600 CUs
• Dataflow Gen2 with no optimization: 520 seconds * 12 CUs = 6,240 CUs
• Dataflow Gen2 with Modern Evaluator: 368 seconds * 12 CUs = 4,416 CUs
• Dataflow Gen2 with Modern Evaluator and Partitioning: 474 seconds * 12 CUs = 5,688 CUs

    However, I wanted to double-check and confirm that my calculation is accurate. Hence, I opened the Capacity Metrics App and took a look at the metrics captured by the App:

    Image by author

Although the overall picture reflects the numbers displayed in the pipeline execution log, the exact number of CUs used according to the App is different:

• Dataflow Gen1: 7,788 CUs
• Dataflow Gen2 with no optimization: 5,684 CUs
• Dataflow Gen2 with Modern Evaluator: 3,565 CUs
• Dataflow Gen2 with Modern Evaluator and Partitioning: 4,732 CUs
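
As a quick sanity check, here is a small Python snippet that puts the duration-based estimates and the App's readings side by side (all values copied from above):

```python
# Duration-based estimates vs. Capacity Metrics App readings, in CUs.
estimated = {"Gen1": 6600, "Gen2": 6240, "Gen2 + ME": 4416, "Gen2 + ME + PC": 5688}
reported = {"Gen1": 7788, "Gen2": 5684, "Gen2 + ME": 3565, "Gen2 + ME + PC": 4732}

for flavor, est in estimated.items():
    rep = reported[flavor]
    print(f"{flavor:<15} estimated={est:>5}  reported={rep:>5}  "
          f"delta={rep - est:>+5}  vs. Gen1={rep / reported['Gen1']:.0%}")
# Gen2 + ME lands at ~46% of Gen1's reported consumption, i.e. less than half.
```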

    So, according to the Capacity Metrics App, a Dataflow Gen2 with Modern Evaluator enabled consumed less than 50% of the capacity compared to the Dataflow Gen1 in this particular scenario! I plan to create more test use cases in the following days/weeks and provide a more comprehensive series of tests and comparisons, which will also include a time to write the data into OneLake (using Dataflows Gen2) versus the time needed to refresh the import mode semantic model that is using the old Dataflow Gen1.

    Conclusion

    When compared to other (code-first) options, Dataflows were (rightly?) considered “the slowest and least performant option” for ingesting data into Power BI/Microsoft Fabric. However, things are changing rapidly in the Fabric world, and I love how the Fabric Data Integration team makes constant improvements to the product. Honestly, I’m curious to see how Dataflows Gen2’s performance and cost develop over time, so that we can consider leveraging Dataflows not only for low-code/no-code data ingestion and data transformation requirements, but also as a viable alternative to code-first solutions from the cost/performance point of view.

    Thanks for reading!

Disclaimer: I don't have any affiliation with Microsoft (except being a Microsoft Data Platform MVP), and I haven't been approached/sponsored by Microsoft to write this article.


