Migrating Azure Data Factory to Fabric Data Factory - A Practical Guide
The conversation has shifted. Six months ago, clients would ask us "should we move to Fabric?" Now they're asking "how do we move to Fabric?" The decision's been made at the enterprise level, and the data engineering teams need to figure out the actual migration.
Azure Data Factory has been a workhorse for Australian organisations running data integration workloads on Azure. Thousands of pipelines, hundreds of linked services, years of accumulated logic. Moving all of that to Fabric Data Factory isn't a weekend project. But it's also not as scary as it looks, if you plan properly.
Microsoft has published detailed migration guidance that covers the technical paths. This post is about the practical side - what we've seen work, what catches people off guard, and how to think about the sequencing.
Why Move at All?
Let's be honest about the motivation. If your ADF pipelines are running fine and your team knows the platform inside out, "because Microsoft says so" isn't a compelling reason on its own.
The real reasons we see organisations making the jump:
Unified governance is genuinely useful. In ADF, your pipelines live in one management plane, your data lake in another, your Power BI models in a third, and your Synapse workspaces in yet another. Fabric brings all of this under one roof. For organisations that have spent years trying to get consistent access control and lineage across their analytics stack, this actually matters.
OneLake simplifies the storage layer. Instead of configuring linked services to reach blob storage, ADLS Gen2, and various other endpoints, everything writes to OneLake. Lakehouse and Warehouse sit right there. The number of configuration touch-points drops significantly, and with it, the number of things that can silently break at 2am.
Built-in CI/CD without the Git plumbing. ADF's Git integration works, but getting deployment pipelines right across dev/test/prod requires ARM templates or Azure DevOps pipelines that someone has to build and maintain. Fabric's deployment pipelines handle this natively. It's not perfect, but it's considerably less setup.
New activities that actually save time. Email and Teams activities for notifications, semantic model refresh activities that replace convoluted PowerShell scripts, and Copilot to help with pipeline authoring. These aren't flashy, but they eliminate a lot of the glue code we see in mature ADF environments.
The Three Migration Paths
Microsoft outlines three approaches, and in practice, most organisations end up using a combination of all three.
Path 1 - Mount Your ADF in Fabric
This is the "start here" option and it's genuinely smart. You add your existing Azure Data Factory as an item in a Fabric workspace. Your pipelines keep running on ADF infrastructure, but you can see them, organise them, and govern them from within Fabric.
This doesn't migrate anything. It gives you visibility. Think of it as putting all your ADF assets on a board so you can see what you're working with before you start moving things.
We recommend this as the first step for every migration, regardless of size. Catalogue everything. Figure out which pipelines are still in use (you'll be surprised how many aren't). Identify the owners. Group them by domain. This inventory work makes everything that follows smoother.
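The inventory step doesn't need fancy tooling. If you export an ARM template from ADF Studio, a short script can summarise what you're dealing with. This is a sketch, not an official utility: it assumes the standard ARM export structure, where each pipeline appears as a resource of type `Microsoft.DataFactory/factories/pipelines` with its activities under `properties`.

```python
from collections import Counter

# Rough inventory pass over a parsed ADF ARM template export
# ("Export ARM template" in ADF Studio). Resource type string
# is the one ADF emits for pipeline resources.
PIPELINE_TYPE = "Microsoft.DataFactory/factories/pipelines"

def inventory_pipelines(template: dict) -> dict:
    """Return {pipeline_name: Counter mapping activity type -> count}."""
    inventory = {}
    for resource in template.get("resources", []):
        if resource.get("type") != PIPELINE_TYPE:
            continue
        activities = resource.get("properties", {}).get("activities", [])
        inventory[resource["name"]] = Counter(a["type"] for a in activities)
    return inventory
```

Run it over each factory's export and you have a per-pipeline breakdown of activity types, which feeds directly into the complexity classification later on.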
One thing to watch: mounting gives you a read-only view. You can't edit ADF pipelines from Fabric. It's for discovery and planning, not for day-to-day development.
Path 2 - The PowerShell Conversion Tool
Microsoft provides a PowerShell module that converts ADF pipeline definitions into Fabric-native format. For straightforward pipelines - Copy activities, Lookups, Stored Procedures, basic control flow - it works surprisingly well. You feed it your ADF JSON, and it produces Fabric pipeline definitions.
But here's the thing: treat the output as a starting point, not a finished product.
The tool handles the common patterns well. Copy activities translate cleanly. Lookup and Stored Procedure activities come across with most of their configuration intact. Parameters and variables carry over. Basic control flow (ForEach, If Condition, Switch) works.
Where it falls short:
- Custom connectors don't have direct Fabric equivalents. You'll need to find alternative approaches or use the Azure Function activity as a workaround.
- Complex expressions sometimes need manual adjustment. The expression language is largely the same, but there are edge cases around how connections and datasets are referenced.
- Data flows (the visual ETL designer) have varying levels of parity. Simple mappings convert fine. Anything with complex joins, derived columns using window functions, or custom sinks will need manual work.
- Integration runtimes don't exist in Fabric the same way. Self-hosted IRs for on-premises data access need a different solution in Fabric.
Our advice: run the conversion tool in batches. Start with your simplest, highest-volume pipelines. Get those working in Fabric, validate the outputs against your ADF results, and build confidence before tackling the complex ones.
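Building those batches is easier if you triage pipelines by the activity types they contain before touching the converter. The sketch below reuses the inventory from the mounting phase; the "clean" and "risky" sets reflect the limitations listed above, but the exact bucketing is an illustration to tune for your environment, not an official compatibility matrix.

```python
from collections import Counter

# Activity types the converter generally handles cleanly (per the
# patterns above) vs. types that signal manual work. Illustrative
# sets -- adjust based on what you find in your own estate.
CLEAN = {"Copy", "Lookup", "SqlServerStoredProcedure",
         "ForEach", "IfCondition", "Switch", "Wait", "SetVariable"}
RISKY = {"ExecuteDataFlow", "Custom"}

def triage(inventory: dict) -> dict:
    """inventory maps pipeline name -> Counter of activity types."""
    buckets = {}
    for name, counts in inventory.items():
        types = set(counts)
        if types & RISKY:
            buckets[name] = "manual"          # redesign or workaround needed
        elif types <= CLEAN:
            buckets[name] = "convert-first"   # good batch-conversion candidate
        else:
            buckets[name] = "convert-review"  # convert, then inspect by hand
    return buckets
```

The "convert-first" bucket becomes your first batch; everything in "manual" goes straight onto the redesign backlog instead of wasting converter cycles.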
Path 3 - Manual Migration
For complex pipelines, heavily customised environments, or situations where you want to take the opportunity to modernise your architecture, manual migration is the way to go.
This sounds worse than it is. "Manual" doesn't mean "rebuild everything from scratch in the UI." It means you're intentionally redesigning your data integration layer to take advantage of Fabric's capabilities instead of just replicating what you had in ADF.
Practical example: we had a client with an ADF pipeline that orchestrated data movement into a Synapse dedicated SQL pool, ran a series of stored procedures for transformation, then triggered a Power BI dataset refresh via a web activity calling the Power BI REST API. In Fabric, that becomes a pipeline that loads data into a Lakehouse, runs a Spark notebook for transformation, and uses the native semantic model refresh activity. Fewer moving parts, less configuration, no REST API authentication to manage.
Manual migration is also your opportunity to clean house. That pipeline someone built three years ago with 47 activities and no documentation? You don't have to replicate it. You can rebuild it properly.
A Practical Migration Sequence
Based on what we've seen work across Microsoft Fabric consulting engagements, here's the sequence we recommend:
Phase 1 - Inventory and assess (1-2 weeks). Mount your ADF in Fabric. Catalogue every pipeline, linked service, and integration runtime. Classify pipelines by complexity (simple/medium/complex) and business criticality (high/medium/low). Identify what's actually running and what's been abandoned.
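One way to make the Phase 1 classification actionable is to map each pipeline's complexity and criticality rating straight to a target phase. The rules below mirror the sequence in this post; they're a suggested default, not a standard.

```python
def migration_phase(complexity: str, criticality: str) -> str:
    """complexity: simple|medium|complex; criticality: high|medium|low."""
    if complexity == "simple":
        # Low-criticality simple pipelines are often retirement candidates --
        # check whether anyone still uses them before converting.
        if criticality == "low":
            return "phase 2 (or retire)"
        return "phase 2"
    if complexity == "medium":
        return "phase 3"
    return "phase 4"
```

Attach the result to each row of your inventory and the project plan practically writes itself.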
Phase 2 - Quick wins (2-4 weeks). Take your simple, high-value pipelines and run them through the PowerShell converter. These are usually straightforward Copy activity pipelines that move data from source systems into your analytics layer. Validate the outputs carefully. Run them in parallel with the ADF originals until you're confident.
Phase 3 - Medium complexity (4-8 weeks). Tackle pipelines with moderate logic - parameterised Copy activities, some control flow, Lookup activities for dynamic configuration. Use the PowerShell tool as a starting point and expect to do manual cleanup.
Phase 4 - Complex redesign (ongoing). This is where manual migration lives. Take your most complex pipelines and redesign them for Fabric. Don't try to replicate the exact same logic. Ask "what was this pipeline trying to achieve?" and build the Fabric-native solution for that outcome.
Throughout all phases: Keep the ADF versions running in parallel until the Fabric versions are validated. Don't cut over under pressure. The mounted ADF in Fabric makes it easy to compare results side by side.
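Side-by-side comparison can be as simple as fingerprinting both outputs. The sketch below assumes both pipelines land CSV files somewhere you can read them, and compares row count plus an order-independent content hash; adapt the reading logic to whatever format your pipelines actually produce.

```python
import csv
import hashlib

def fingerprint(csv_path: str) -> tuple:
    """Row count plus an order-independent SHA-256 of the file's rows."""
    with open(csv_path, newline="") as f:
        rows = sorted(",".join(row) for row in csv.reader(f))
    digest = hashlib.sha256("\n".join(rows).encode()).hexdigest()
    return len(rows), digest

def outputs_match(adf_output: str, fabric_output: str) -> bool:
    """True when both runs produced the same rows, regardless of order."""
    return fingerprint(adf_output) == fingerprint(fabric_output)
```

It won't catch everything (floating-point formatting differences, for instance), but it's a cheap first gate before you invest in column-level reconciliation.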
Things That Catch People Off Guard
Global parameters need conversion. ADF uses global parameters for values shared across pipelines. Fabric uses Variable Libraries instead. The concepts map to each other, but the migration isn't automatic. Plan for it, especially if you have dozens of global parameters.
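At minimum you can script the extraction side. In an ARM export (with global parameters included), they sit on the factory resource itself under `properties.globalParameters`. This sketch pulls them into a flat dict you can use to seed a Variable Library by hand; the Fabric side of the mapping stays manual.

```python
FACTORY_TYPE = "Microsoft.DataFactory/factories"

def extract_global_parameters(template: dict) -> dict:
    """Return the factory's globalParameters block from a parsed ARM export.

    Each entry is typically {"type": ..., "value": ...}; returns {} if the
    export was generated without global parameters included.
    """
    for resource in template.get("resources", []):
        if resource.get("type") == FACTORY_TYPE:
            return resource.get("properties", {}).get("globalParameters", {})
    return {}
```

Even a dump like this beats clicking through the ADF portal parameter by parameter when you're recreating dozens of values.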
Linked services become connections. The naming is different, and the management model is slightly different. Connections in Fabric are workspace-scoped by default, which is usually what you want but can surprise teams used to ADF's more centralised linked service management.
Triggers need rebuilding. ADF's schedule triggers, tumbling window triggers, and event triggers don't migrate automatically. You'll need to recreate your scheduling in Fabric. The good news is Fabric's scheduling is simpler. The bad news is "simpler" sometimes means "less flexible," so check that your specific trigger patterns are supported.
Self-hosted integration runtimes. If you're using self-hosted IRs to reach on-premises data sources, this is your biggest planning item. Fabric has its own on-premises data gateway story, but it works differently. Test your on-prem connectivity early.
What About the Data?
A migration from ADF to Fabric Data Factory is a pipeline migration - you're moving the orchestration and transformation logic. But it's worth thinking about the data layer at the same time.
If your ADF pipelines currently load data into Azure SQL Database, a Synapse SQL pool, or blob storage, you have an opportunity to consolidate into Fabric's Lakehouse or Warehouse. This isn't required - Fabric Data Factory can still connect to external data stores - but it simplifies the architecture.
For organisations already using or planning to use Power BI heavily, having your data in OneLake alongside your semantic models is genuinely convenient. Direct Lake mode in Power BI, which reads Delta tables in OneLake directly without an import step, is one of the most compelling features of the Fabric stack.
Don't Boil the Ocean
The biggest mistake we see is trying to migrate everything at once. Someone creates a project plan that says "migrate all 200 ADF pipelines to Fabric in Q2" and it immediately falls behind because the second pipeline they attempt has some obscure custom activity that doesn't have a Fabric equivalent.
Migrate incrementally. Start with the easy wins. Build confidence and expertise. Let the team get comfortable with Fabric's authoring experience before tackling the hard stuff.
And be honest about what "done" means. You might end up running some workloads in ADF and some in Fabric for months. That's fine. Microsoft supports running ADF alongside Fabric, and the mounted ADF item in Fabric means you still have a single pane of glass for governance even during the transition.
The organisations that migrate successfully are the ones that treat it as a process, not a project. They chip away at it, learn as they go, and don't panic when something doesn't convert cleanly on the first try.
Fabric Data Factory is a genuine improvement over ADF for most workloads. The migration to get there just requires patience and planning.