
Migrating from ADF Mapping Data Flows to Fabric Dataflow Gen2 - A Practical Guide

April 16, 2026 · 8 min read · Michael Ridland

If you've been building data transformations in Azure Data Factory's Mapping Data Flows, you've probably been wondering what happens to all that work when Fabric becomes the default. The good news is that most of your transformation logic has a direct equivalent in Fabric's Dataflow Gen2. The less good news is that the interface is completely different, and some of the patterns you've built muscle memory around need to be relearned.

We've been helping clients work through this transition at Team 400, and the biggest friction point isn't usually technical. It's cognitive. You know what you want to do - you've been doing it in Mapping Data Flows for years - but you can't find the button. Everything is in a different place, with different names, using different metaphors.

This post maps every common Mapping Data Flow transformation to its Dataflow Gen2 equivalent. Think of it as a translation dictionary for your data engineering brain.

The Interface Shift - Power Query Online

Before we get into specific transformations, you need to understand the fundamental difference. Mapping Data Flows uses a visual canvas with a DAG (directed acyclic graph) of transformations. Dataflow Gen2 uses Power Query Online, which is the same engine behind Power BI dataflows and Excel's Get & Transform.

If you've ever used Power Query in Power BI Desktop, Dataflow Gen2 will feel familiar immediately. If you haven't, there's a learning curve. The Power Query interface is organised around a ribbon with tabs (Home, Transform, Add column) and a query list on the left side. Transformations are applied as steps that you can see and edit in a sequence.

One tip that's saved me time: use the Global search box (Alt + Q) when you can't find something. It searches across connectors, transformations, and queries. When you're first learning where everything lives, this shortcut is worth memorising.

Multiple Inputs and Outputs

This is where the conceptual model differs most from Mapping Data Flows.

Branching (New Branch becomes Reference)

In Mapping Data Flows, you create a new branch from any transformation to split your pipeline into parallel paths. In Dataflow Gen2, the equivalent is Reference. Right-click a query in the query list and select Reference. This creates a new query that points to the same upstream data, and you can apply different transformations from that point.
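Under the hood, a reference is just a new query whose first step points at the original. A minimal sketch, assuming a hypothetical existing query named CleanedSales:

```m
// Hypothetical: "CleanedSales" is an existing query in the dataflow.
// A reference is a new query whose Source step is that query;
// transformations from here on diverge from the original.
let
    Source = CleanedSales,
    HighValue = Table.SelectRows(Source, each [Amount] > 1000)
in
    HighValue
```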

It works, but it feels different. In Mapping Data Flows, branches are visual - you can see the split on the canvas. In Power Query, references are more like pointers in the query list. You lose some visual clarity but gain the ability to build more complex query dependencies.

Joins (Join becomes Merge Queries)

Mapping Data Flows' Join transformation maps to Merge queries in Dataflow Gen2. You'll find it on the Home tab. There are two flavours:

  • Merge queries - merges into the current query
  • Merge queries as new - creates a new query from the merge result

The join kind options (inner, left outer, right outer, full outer, left anti, right anti) are all there. The Lookup transformation from Mapping Data Flows also maps to Merge queries - just select "Left outer" as the join kind.
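In the generated M, a merge is a Table.NestedJoin step followed by an expand of the nested column. A sketch with hypothetical Orders and Customers queries:

```m
// Hypothetical tables Orders and Customers, joined on CustomerID.
// JoinKind.LeftOuter gives you the Lookup-style behaviour.
let
    Merged = Table.NestedJoin(
        Orders, {"CustomerID"},
        Customers, {"CustomerID"},
        "Customers", JoinKind.LeftOuter
    ),
    // The merge produces a nested table column; expand the fields you need.
    Expanded = Table.ExpandTableColumn(Merged, "Customers", {"Name", "Region"})
in
    Expanded
```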

One thing I genuinely prefer about the Power Query approach: the merge preview shows you a sample of the result immediately. In Mapping Data Flows, you had to run a debug session to see what your join produced. Here, you get instant feedback.

Union (Union becomes Append Queries)

Straightforward rename. Union becomes Append queries, also on the Home tab. Same two flavours - append into current or append as new.
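Behind the dialog, an append is a single Table.Combine step over a list of queries. A sketch with hypothetical monthly extracts:

```m
// Hypothetical: combine three monthly extract queries into one table.
// Columns are matched by name; missing columns come through as null.
let
    Combined = Table.Combine({SalesJan, SalesFeb, SalesMar})
in
    Combined
```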

Conditional Split (becomes Reference with Filters)

This one doesn't have a single-click equivalent. In Mapping Data Flows, Conditional Split lets you route rows to different outputs based on conditions. In Dataflow Gen2, you achieve the same thing by creating multiple References to a query and then applying different filter conditions to each.

It's more manual but equally functional. The main downside is that if your conditions are complex, you need to maintain them across separate queries rather than seeing them side by side in a single transformation.
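As a sketch, a two-way split becomes two referencing queries, each with its own filter step (query and column names hypothetical):

```m
// Query: DomesticOrders - references the hypothetical "Orders" query
let
    Source = Orders,
    Filtered = Table.SelectRows(Source, each [Country] = "AU")
in
    Filtered

// Query: InternationalOrders - a second reference with the inverse condition
let
    Source = Orders,
    Filtered = Table.SelectRows(Source, each [Country] <> "AU")
in
    Filtered
```

Note that the conditions must be kept complementary by hand - nothing enforces that the two filters cover every row exactly once.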

Schema Modification Transformations

This category covers the bread and butter of data transformation - adding columns, renaming, aggregating.

Derived Column becomes Custom Column

The workhorse Derived Column transformation maps to several options in Dataflow Gen2, all under the Add column tab:

  • Custom column - write an expression to create a new column. This is your primary replacement for Derived Column.
  • Column from examples - provide example values and let Power Query infer the expression. Surprisingly powerful for string manipulation.
  • Conditional column - create a column with if/then/else logic. Cleaner than writing a full custom expression for simple conditions.

The expression language is M (Power Query formula language) rather than Data Flow Expression Language. This is probably the biggest adjustment for experienced Mapping Data Flow users. M is a functional language with different syntax for type handling, string operations, and conditional logic. Budget time for this learning curve.
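To give a flavour of the shift, here's a hypothetical Derived Column-style expression rewritten in M. Note M is case-sensitive, uses [Column] to reference the current row's fields, and if/then/else instead of iif:

```m
// Hypothetical: the Data Flow expression iif(total > 100, 'High', 'Low')
// becomes an if/then/else inside Table.AddColumn.
let
    AddedCustom = Table.AddColumn(
        Source, "Priority",
        each if [Total] > 100 then "High" else "Low",
        type text
    )
in
    AddedCustom
```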

Select becomes Choose Columns and Remove Columns

Mapping Data Flow's Select transformation does three things: picks columns, drops columns, and renames them. In Dataflow Gen2, these are separate operations:

  • Choose columns (Home tab) - pick which columns to keep
  • Remove columns (Home tab) - explicitly drop specific columns
  • Rename (Transform tab) - rename individual columns

I actually prefer having these as separate operations. In Mapping Data Flows, Select could get confusing when you were doing all three at once, especially with "Name as" mappings mixed in with drops.
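Each ribbon action becomes its own step in the generated M. A sketch with hypothetical column names:

```m
// Hypothetical source and columns. Choose columns -> Table.SelectColumns,
// Rename -> Table.RenameColumns. (Remove columns would be Table.RemoveColumns.)
let
    Source  = RawSales,
    Kept    = Table.SelectColumns(Source, {"OrderID", "CustAmt", "Dt"}),
    Renamed = Table.RenameColumns(Kept, {{"CustAmt", "Amount"}, {"Dt", "OrderDate"}})
in
    Renamed
```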

Aggregate becomes Group By

Same concept, different name. The Group by transformation on the Transform tab lets you define grouping columns and aggregate functions. The configuration dialog is clear and supports multiple aggregations in one step.
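The dialog generates a single Table.Group step; multiple aggregations are just extra entries in the list. A sketch with hypothetical columns:

```m
// Hypothetical: group by Region with two aggregations in one step.
let
    Grouped = Table.Group(
        Source, {"Region"},
        {
            {"TotalSales", each List.Sum([Sales]), type number},
            {"OrderCount", each Table.RowCount(_), Int64.Type}
        }
    )
in
    Grouped
```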

Surrogate Key becomes Index Column

Need to generate sequential keys? Index column on the Add column tab creates an auto-incrementing column. You can start from 0, 1, or a custom value with a custom increment. Not quite as feature-rich as the Mapping Data Flow Surrogate Key (which supported key ranges for parallel execution), but sufficient for most scenarios.
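The generated step is Table.AddIndexColumn, with the start value, increment, and column type as arguments:

```m
// Start at 1, increment by 1; the final argument types the new column.
let
    Indexed = Table.AddIndexColumn(Source, "SurrogateKey", 1, 1, Int64.Type)
in
    Indexed
```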

Pivot and Unpivot

Both map directly. Pivot column and Unpivot columns are both on the Transform tab. The Unpivot option gives you three modes:

  • Unpivot columns - unpivots the selected columns
  • Unpivot other columns - unpivots every column except the selected ones
  • Unpivot only selected columns - unpivots exactly the selection, and nothing else, even if new columns appear later

These options cover the same ground as Mapping Data Flow's unpivot, just with different selection mechanics.
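As a sketch, "Unpivot other columns" with a hypothetical Product column held fixed generates:

```m
// Keep "Product" fixed and unpivot everything else into attribute/value
// pairs. Table.UnpivotOtherColumns is resilient to new month columns
// appearing in the source later.
let
    Unpivoted = Table.UnpivotOtherColumns(Source, {"Product"}, "Month", "Sales")
in
    Unpivoted
```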

External Call becomes Custom Column with Web.Contents

If you were using External Call in Mapping Data Flows to hit REST APIs mid-transformation, the equivalent in Dataflow Gen2 is creating a custom column that uses the Web.Contents M function. It works, though the error handling and retry behaviour are different. Test thoroughly if your transformations depend on external API calls.
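A minimal sketch, with a hypothetical endpoint and column names. Wrapping the call in try/otherwise gives basic per-row error handling; unlike External Call, there's no built-in retry policy:

```m
// Hypothetical API and fields. Each row issues a request; failures fall
// through to null rather than failing the whole query.
let
    Enriched = Table.AddColumn(Source, "GeoData", each
        try Json.Document(
            Web.Contents("https://api.example.com/geo", [
                Query = [postcode = Text.From([Postcode])]
            ])
        ) otherwise null
    )
in
    Enriched
```

Also be aware that per-row web calls can run into privacy-level and refresh-performance issues at scale, so test with production-sized data.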

Row Modification and Formatting

Filter becomes Filter Rows

Direct mapping. Filter rows on the Home tab. The filter dialog supports multiple conditions with AND/OR logic.

Sort stays Sort

Home tab, Sort. No surprises here.

Flatten becomes Expand Column

When working with structured data types (JSON, lists), Mapping Data Flow's Flatten transformation becomes the expand option that appears when you have structured data in a column. Click the expand icon in the column header and select which nested fields to extract.
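If the column arrives as JSON text rather than an already-structured value, you parse first, then expand. A sketch with a hypothetical Customer column:

```m
// Hypothetical: "Customer" holds JSON text. Parse each value into a record,
// then expand the fields you need - the equivalent of Flatten.
let
    Parsed   = Table.TransformColumns(Source, {{"Customer", Json.Document}}),
    Expanded = Table.ExpandRecordColumn(Parsed, "Customer", {"Name", "Email"})
in
    Expanded
```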

Parse stays Parse

Text parsing operations are under Transform > Text column > Parse. Same concept, and the options sit roughly where you'd expect.

Flowlets and Destinations

Flowlets become Custom Functions

Mapping Data Flow's Flowlets - reusable transformation snippets - map to Power Query's Custom functions. The concept is similar: define a transformation once, reuse it across multiple queries. The implementation is different (M functions vs visual flowlet design) but the reuse pattern is the same.
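A custom function is just a query whose body is a function expression. A hypothetical reusable cleanup function, saved as its own query (say, CleanPhone):

```m
// Hypothetical function query "CleanPhone": strips everything except digits.
(phone as nullable text) as nullable text =>
    if phone = null then null
    else Text.Select(phone, {"0".."9"})
```

Any other query can then invoke it, for example with `Table.TransformColumns(Source, {{"Phone", CleanPhone}})`.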

Sink becomes Add Data Destination

The final destination for your transformed data. In Mapping Data Flows, you configured a Sink. In Dataflow Gen2, you use Add data destination on the Home tab. Fabric supports writing to lakehouses, warehouses, Azure SQL, and other destinations directly from the dataflow.

What's Not Supported (Yet)

A few Mapping Data Flow transformations don't have Dataflow Gen2 equivalents:

  • Assert - data quality assertions
  • Alter Row - upsert/delete/insert policies
  • Stringify - converting complex types to strings
  • Window - window function calculations

The Window one hurts the most, in my opinion. Window functions are genuinely useful for running calculations and ranking operations that go beyond simple Group By. The Rank column in Dataflow Gen2 covers some of these cases, but not all.
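One common workaround for per-group ranking is group, index within each group, then expand. A sketch with hypothetical columns - validate the within-group ordering on your own data, since it depends on the sort applied before grouping:

```m
// Hypothetical substitute for a Window rank: sort, group by Region,
// add an index inside each nested group table, then expand back out.
let
    Sorted  = Table.Sort(Source, {{"Sales", Order.Descending}}),
    Grouped = Table.Group(Sorted, {"Region"},
        {{"Rows", each Table.AddIndexColumn(_, "RankInRegion", 1, 1), type table}}),
    Expanded = Table.ExpandTableColumn(Grouped, "Rows",
        {"Product", "Sales", "RankInRegion"})
in
    Expanded
```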

Microsoft has a Fabric Ideas portal where you can vote for these features. Based on the velocity of Fabric releases, I'd expect Window and Assert to appear within the next few release cycles.

Should You Migrate Now?

If your Mapping Data Flows are working and you're not feeling pressure to move to Fabric, there's no rush. This isn't a situation where ADF is being deprecated tomorrow.

But if you're building new data transformation workflows, I'd start them in Dataflow Gen2. The integration with the broader Fabric ecosystem (lakehouses, warehouses, Power BI semantic models) is tighter than anything you can get with standalone ADF. And the M expression language, while different from Data Flow expressions, is more widely documented and has a larger community.

For existing Mapping Data Flows, my recommendation is to migrate opportunistically. When you need to modify a data flow, consider rebuilding it in Dataflow Gen2 instead of updating the ADF version. This gives you a gradual migration path without the risk of a big-bang cutover.

The Microsoft documentation on this topic has the full transformation mapping table with screenshots. Keep it bookmarked - you'll reference it often during the transition.

If you need help planning your Fabric migration or want a structured assessment of your existing ADF estate, get in touch with our team. We've walked through this process with enough clients to know where the pitfalls are, and more importantly, where the quick wins hide.