
Upgrading Azure Data Factory Pipelines to Fabric - A Practical Guide

April 1, 2026 · 9 min read · Michael Ridland


Microsoft has quietly shipped something that a lot of data teams have been waiting for - a built-in migration experience that lets you move Azure Data Factory pipelines to Fabric Data Factory directly from the ADF portal. It's in Preview right now, and it's a big deal for anyone managing an ADF estate that's been growing for years.

I've been watching this space closely and helping clients plan their migration strategies. Here's what you need to know, including the parts Microsoft's marketing material glosses over.

Why This Matters Now (Even Without a Deadline)

Let me be clear about something first. Microsoft has not announced a deprecation date for Azure Data Factory. You can run ADF and Fabric Data Factory side by side indefinitely. Nobody is forcing you to migrate.

So why think about it now?

Because the direction of investment is obvious. Microsoft is putting its engineering effort into Fabric. New features are landing in Fabric Data Factory first. The monitoring experience is better. The integration with lakehouses, warehouses, and Power BI is tighter. Copilot for pipeline authoring is a Fabric-only feature.

If you're starting new data pipelines today, there's very little reason to build them in ADF. And if you have a growing ADF estate, every new pipeline you add there is one more pipeline you'll eventually need to move.

The migration tool being built into ADF itself tells you where Microsoft sees this going. They're making it easy to move because they expect you to move.

The New Migration Experience - How It Actually Works

The migration tool is now accessible directly from the ADF authoring canvas. You click "Migrate to Fabric (Preview)" and it walks you through a structured process. Here's what happens at each step.

Step 1 - Assessment

The tool scans every pipeline in your factory and categorises each one with a readiness status:

  • Ready - fully supported, safe to migrate as-is
  • Needs review - requires minor changes, like parameter adjustments or configuration tweaks
  • Coming soon - Fabric support is planned but not available yet
  • Not compatible - no Fabric equivalent exists, redesign required

You can drill into individual pipelines and see which specific activities are causing issues. The whole assessment is read-only - it doesn't touch your factory. You can also export results to CSV, which is handy for larger estates where you need to share the findings with your team.

This is genuinely useful. Before this tool existed, figuring out migration readiness meant manually reviewing every pipeline against the Fabric feature matrix. Now you get a clear picture in minutes.
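Once you've exported the CSV, a few lines of scripting give you a readiness summary for the whole estate. Here's a minimal sketch - note the column names (`PipelineName`, `ReadinessStatus`) and the sample rows are hypothetical, so check them against the actual export before reusing this:

```python
import csv
import io
from collections import Counter

# Hypothetical export shape -- the real CSV's column names may differ.
sample_export = """\
PipelineName,ReadinessStatus
IngestSales,Ready
LoadCustomers,Needs review
SyncSAP,Not compatible
DailyFlows,Coming soon
"""

def summarise(csv_text: str) -> Counter:
    """Count pipelines per readiness status."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return Counter(row["ReadinessStatus"] for row in reader)

counts = summarise(sample_export)
print(counts)  # each readiness status mapped to its pipeline count
```

A summary like this is also a useful artefact to circulate before any migration planning meeting.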

Step 2 - Mounting

After the assessment, you select a Fabric workspace and "mount" your ADF instance to it. Mounting doesn't migrate anything. It creates a reference to your factory inside Fabric so you can see its structure and continue the migration from the Fabric side. Think of it as parking your ADF in the Fabric garage before you start unpacking.

Step 3 - Pipeline Selection and Migration

From the Fabric workspace, you pick which pipelines to migrate. You don't have to do everything at once. Migrate a batch, validate them, then come back for more.

Step 4 - Connection Mapping

This is where ADF linked services get converted into Fabric connections. For common connectors - Azure Blob Storage, ADLS Gen2, Azure SQL Database, Cosmos DB, PostgreSQL, MySQL - the tool handles the mapping automatically. The authentication methods translate fairly cleanly. Account keys, SAS tokens, service principals, and system-assigned managed identities all have Fabric equivalents.

For less common connectors or custom authentication setups, you'll need to manually create the Fabric connections and map them yourself. If you skip this step entirely, pipelines still migrate but the affected activities are deactivated until you sort out the connections.
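Conceptually, the mapping step behaves like a lookup table with a manual fallback. This sketch illustrates the idea - the type names are illustrative, not the tool's actual internal identifiers:

```python
# Hypothetical mapping of ADF linked service types to Fabric connection
# types -- illustrative names only, not the migration tool's identifiers.
AUTO_MAPPED = {
    "AzureBlobStorage": "Azure Blob Storage",
    "AzureBlobFS": "Azure Data Lake Storage Gen2",
    "AzureSqlDatabase": "Azure SQL Database",
    "CosmosDb": "Azure Cosmos DB",
    "PostgreSql": "PostgreSQL",
    "MySql": "MySQL",
}

def plan_connection(linked_service_type: str) -> str:
    """Return the Fabric connection to map to, or flag it for manual setup."""
    fabric_type = AUTO_MAPPED.get(linked_service_type)
    if fabric_type is None:
        # Unmapped connections leave the affected activities deactivated
        # after migration until you create and map them by hand.
        return "MANUAL: create a Fabric connection and map it yourself"
    return f"AUTO: {fabric_type}"

print(plan_connection("AzureSqlDatabase"))
print(plan_connection("Salesforce"))
```

Running a pass like this over your linked services before migrating tells you how much manual connection work to budget for.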

Step 5 - Validation

Migrated pipelines land in your Fabric workspace with triggers disabled by default. This is intentional. You get a chance to review everything, validate connections, and run test executions before anything goes live.

Pipeline names get prefixed with the source factory name to avoid conflicts, which is a sensible default.

What Migrates Cleanly

If your ADF pipelines stick to bread-and-butter activities - Copy, ForEach, If Condition, Lookup, Switch, Until, Set Variable, Execute Pipeline - you're in good shape. These map directly to their Fabric equivalents.

Standard connectors with standard authentication methods migrate smoothly. If you're pulling data from Azure SQL, Blob Storage, or Data Lake into your pipelines, the connection mapping handles most of the work.

Schedule triggers migrate automatically (though disabled). Simple pipelines that move data from A to B with basic control flow? Those are straightforward migrations.
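If you want a quick self-check outside the assessment tool, you can diff a pipeline's activity types against the clean-migration set above. A sketch, using a hypothetical trimmed-down pipeline definition:

```python
# Activities the article lists as mapping directly to Fabric equivalents.
SUPPORTED = {
    "Copy", "ForEach", "IfCondition", "Lookup",
    "Switch", "Until", "SetVariable", "ExecutePipeline",
}

def unsupported_activities(pipeline: dict) -> list[str]:
    """Return activity types that fall outside the clean-migration set."""
    return sorted(
        a["type"] for a in pipeline["activities"] if a["type"] not in SUPPORTED
    )

# Hypothetical pipeline, trimmed to the fields the check needs.
pipeline = {
    "name": "LoadCustomers",
    "activities": [
        {"type": "Lookup"},
        {"type": "ForEach"},
        {"type": "ExecuteDataFlow"},  # mapping data flow -- not supported yet
    ],
}

print(unsupported_activities(pipeline))  # ['ExecuteDataFlow']
```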

What Doesn't Migrate (and Needs Redesign)

Here's where honesty matters. The "out of scope" list is longer than you might expect.

Self-hosted integration runtimes can't migrate. If you're using SHIR to connect to on-premises data sources, you'll need to set up a Fabric On-Premises Data Gateway instead. Similar concept, different implementation, separate setup process.

Managed virtual network integration runtimes don't carry over. Fabric has its own virtual network gateway model, but it works differently and requires reconfiguration. If your security posture depends on managed VNet in ADF, this needs careful planning.

SSIS integration runtimes are out entirely. No migration path. If you're running SSIS packages through ADF, that workload stays in ADF for now.

Mapping data flows aren't supported yet. Microsoft says "coming soon," but if you have significant investment in mapping data flows, you'll need to wait or rethink those as Dataflow Gen2 or Spark notebooks.

Tumbling window triggers need redesign. Fabric has interval-based scheduling, but the dependency chaining and backfill semantics of tumbling window triggers don't translate directly.

Dynamic linked services (parameterised connections) don't migrate. If you've built metadata-driven pipelines that dynamically switch connections based on parameters, each permutation needs to become a separate Fabric connection. This is a real pain point for teams with heavily parameterised architectures.
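The pain compounds because the number of static connections you need is the product of the parameter values, not the sum. A small sketch with hypothetical parameter sets makes the arithmetic concrete:

```python
from itertools import product

# Hypothetical parameter values for one dynamic linked service.
servers = ["sql-eastus", "sql-aueast"]
databases = ["sales", "finance"]

# Every server/database combination the dynamic linked service could
# resolve to becomes its own static Fabric connection.
required_connections = [f"{s}/{d}" for s, d in product(servers, databases)]
print(len(required_connections))  # 4 connections to create by hand
```

Two servers and two databases is four connections; add an environment dimension and it doubles again. Teams with heavily metadata-driven designs should count these permutations before committing to a timeline.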

SAP connectors, marketing SaaS connectors (HubSpot, Google Ads, Xero, Shopify), and change data capture workloads are all out of scope.

Global parameters need manual recreation. You'll recreate them as Fabric variable libraries, which is actually an upgrade in functionality but still requires manual effort.

What Fabric Data Factory Does Better

It's not just about parity. There are genuine improvements worth knowing about.

No publish step. In ADF, you author then publish. In Fabric, you save and run. This removes a friction point that's tripped up every ADF team I've worked with.

Unified monitoring. The Fabric monitoring hub shows you pipelines, dataflows, notebooks, and everything else in one place. ADF's monitoring is fine for individual pipelines, but the cross-workspace view in Fabric is a real improvement.

Variable libraries. These replace global parameters and give you a more structured way to manage configuration across pipelines. It's a better model.

Copilot integration. You can use natural language to generate pipeline expressions and troubleshoot failures. It's not magic, but it saves real time on expression syntax and error investigation.

No datasets. Fabric eliminates the dataset object entirely. You define data properties inline within activities. Fewer objects to manage, less indirection to understand.

Native lakehouse and warehouse integration. If your data lands in Fabric lakehouses or warehouses, pipelines can reference them directly without connector setup.

Where Fabric Data Factory Is Still Catching Up

Connector breadth. ADF has more connectors. If you need SAP, Oracle, or niche SaaS connectors, check the Fabric connector list carefully before committing to migration.

Private networking. ADF's managed VNet is more mature than Fabric's virtual network gateway. For organisations with strict network isolation requirements, this gap is real.

Tumbling window trigger parity. The interval scheduling in Fabric doesn't yet match the full feature set of ADF's tumbling window triggers, particularly around dependency chains between triggers.

Mapping data flow support. If you've invested heavily in ADF's mapping data flows, the transition to Dataflow Gen2 or Spark notebooks is a redesign, not a migration.

Practical Advice for Planning the Migration

Based on what we've seen working with clients across Australia, here's how I'd approach this.

Run the assessment first, no matter what. Even if you're not planning to migrate soon, run the built-in assessment. It takes minutes and gives you a clear picture of your readiness. Export the CSV and file it away. When the conversation comes up (and it will), you'll have data instead of guesses.

Migrate in waves, not all at once. Start with your simplest, least critical pipelines. Get comfortable with the Fabric experience. Work out your connection mapping patterns. Then tackle the more complex stuff.
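One way to pick the first wave is to sort your assessed inventory by readiness and business criticality - ready, low-criticality pipelines first. A sketch with a hypothetical inventory:

```python
# Hypothetical inventory: (name, readiness, criticality 1=low .. 3=high).
inventory = [
    ("IngestSales", "Ready", 3),
    ("ArchiveLogs", "Ready", 1),
    ("LoadCustomers", "Needs review", 2),
    ("SyncSAP", "Not compatible", 3),
]

# Wave 1: ready pipelines, least critical first. Pipelines that aren't
# ready stay in ADF until a later wave.
wave_1 = sorted(
    (p for p in inventory if p[1] == "Ready"),
    key=lambda p: p[2],
)
print([p[0] for p in wave_1])  # ['ArchiveLogs', 'IngestSales']
```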

Don't migrate pipelines that use unsupported features. If a pipeline depends on SHIR, managed VNet, SSIS, or mapping data flows, leave it in ADF for now. Running both platforms in parallel is explicitly supported and expected.

Validate in a non-production workspace first. The migration tool makes it easy to target a dev or test workspace. Use that. Don't migrate directly to production.

Budget time for connection mapping. The automatic conversion handles common scenarios well. But if you have custom authentication, parameterised linked services, or non-standard connectors, connection setup will take real effort.

Plan for global parameter recreation. If you rely heavily on global parameters, set aside time to recreate them as variable libraries. This is manual work.

When to Migrate vs When to Wait

Migrate now if:

  • Your pipelines use standard activities and common connectors
  • You're already using Fabric for other workloads (lakehouses, Power BI, warehouses)
  • Your ADF estate is relatively straightforward
  • You want access to Fabric-only features like Copilot and improved monitoring

Wait if:

  • You depend on SSIS integration runtimes
  • Your security model requires managed virtual networks
  • You've invested heavily in mapping data flows
  • Your pipelines use heavily parameterised, metadata-driven patterns
  • You rely on connectors that don't exist in Fabric yet

Run both in parallel if:

  • Some pipelines are ready and others aren't
  • You want to start building new work in Fabric while keeping existing ADF pipelines stable

There's no shame in waiting. ADF isn't going anywhere, and Microsoft is actively narrowing the feature gaps. But if your assessment comes back mostly green, there's real value in starting the move now rather than letting the backlog grow.

For the full technical walkthrough of the migration steps, see Microsoft's official migration guide.

If you're working through a migration or just trying to figure out whether it makes sense for your organisation, our Microsoft Data Factory consultants have been doing this work across a range of Australian enterprises. We also work across the broader Microsoft Fabric platform, and our AI for business intelligence practice can help you think about how your data pipelines fit into the bigger picture of your analytics strategy.