Azure Data Factory vs Fabric Data Factory - Which to Choose Now

April 20, 2026 · 7 min read · Michael Ridland

If you're building or modernising data pipelines in 2026, you've probably noticed that Microsoft now has two Data Factory products: Azure Data Factory (ADF), the standalone service that's been around since 2017, and Fabric Data Factory, which lives inside the Microsoft Fabric platform.

They share a name. They share a lot of the same interface. But they are not the same product, and choosing the wrong one can cost you months of rework.

We've helped dozens of Australian organisations make this decision over the past two years, and the right answer depends on where you are today, where you're headed, and how your team works. Here's what we've learned.

What Is Azure Data Factory?

Azure Data Factory (ADF) is a standalone cloud-based ETL/ELT service on Azure. It orchestrates data movement and transformation across cloud and on-premises sources. You build pipelines using a visual designer or code, connect to hundreds of data sources, and schedule or trigger runs.

ADF is deployed as a standalone resource in your Azure subscription. You pay for pipeline activity runs, data movement, and integration runtime hours. It's mature, well-documented, and has a large ecosystem of connectors and community knowledge.

Key characteristics:

  • Standalone Azure service with its own resource group
  • Pay-per-use pricing (activity runs, data movement units, integration runtime hours)
  • Over 100 built-in connectors
  • Self-hosted integration runtime for on-premises connectivity
  • Mapping data flows for code-free transformations
  • Deep integration with Azure Synapse, Azure SQL, Azure Data Lake

What Is Fabric Data Factory?

Fabric Data Factory is the data integration layer within Microsoft Fabric. It provides similar pipeline and dataflow capabilities, but everything runs inside the Fabric workspace and uses Fabric capacity units (CUs) instead of standalone Azure billing.

Fabric Data Factory is essentially the next generation of ADF, rebuilt to work natively with the Fabric lakehouse, OneLake, and the broader Fabric ecosystem (Power BI, Real-Time Intelligence, Data Warehouse, Notebooks).

Key characteristics:

  • Part of the Microsoft Fabric platform
  • Capacity-based pricing (Fabric CUs, not per-pipeline billing)
  • Native integration with OneLake, Fabric Lakehouse, and Power BI
  • Dataflows Gen2 for self-service data preparation
  • Pipelines that look and feel like ADF pipelines
  • Shares workspace governance and security with other Fabric items

Feature Comparison Table

| Feature | Azure Data Factory | Fabric Data Factory |
| --- | --- | --- |
| Pricing model | Pay-per-use | Capacity-based (Fabric CUs) |
| On-premises connectivity | Self-hosted IR (mature) | On-premises data gateway |
| Number of connectors | 100+ | Growing, but fewer than ADF today |
| Code-free transforms | Mapping data flows | Dataflows Gen2 |
| Orchestration | Pipelines | Pipelines (similar UI) |
| Native lakehouse support | Via ADLS Gen2 | Direct OneLake integration |
| Power BI integration | Indirect (via data stores) | Native (same workspace) |
| Git integration | Azure DevOps, GitHub | Fabric Git integration |
| CI/CD maturity | Mature (ARM templates, APIs) | Improving but less mature |
| Monitoring | Azure Monitor, ADF Monitor | Fabric Monitor Hub |
| Hybrid scenarios | Strong (self-hosted IR) | Limited compared to ADF |

When to Choose Azure Data Factory

ADF remains the better choice in several common scenarios.

You have significant on-premises data sources. ADF's self-hosted integration runtime is battle-tested. It handles complex networking scenarios - VPNs, ExpressRoute, private endpoints - that many Australian enterprises rely on. If you're connecting to on-premises SQL Server, Oracle, SAP, or file systems, ADF gives you more flexibility and reliability today.

You need granular cost control. Pay-per-use pricing means you only pay for what runs. For organisations with unpredictable or spiky workloads, this can be significantly cheaper than reserving Fabric capacity. We've seen mid-sized Australian businesses save 30-40% on data integration costs by staying with ADF when their pipeline volumes don't justify a Fabric capacity reservation.

Your data platform isn't moving to Fabric yet. If you're running Azure Synapse, Databricks, or a custom data lake architecture and there's no plan to migrate to Fabric in the next 12-18 months, ADF integrates perfectly with your existing stack. There's no benefit to Fabric Data Factory if you're not using Fabric.

You need mature CI/CD. ADF's deployment model using ARM templates and the ADF SDK is well-established. Teams with existing DevOps pipelines for ADF can keep their workflows intact. Fabric's deployment story is improving, but it's not at parity yet.

When to Choose Fabric Data Factory

Fabric Data Factory is the right pick in these situations.

You're building a new data platform on Microsoft Fabric. If you're starting fresh or migrating your entire analytics stack to Fabric, there's no reason to use standalone ADF. Fabric Data Factory integrates directly with OneLake, the Fabric lakehouse, and Power BI. Everything lives in one workspace, one security model, one governance layer.

Your organisation has committed to Fabric capacity. If you're already paying for Fabric capacity for Power BI Premium or other Fabric workloads, you can run data pipelines within that same capacity at no additional per-pipeline cost. We've seen this reduce the effective cost of data integration by 50% or more for organisations already on Fabric.

You want end-to-end lineage and governance. Fabric provides unified lineage from source data through transformation to Power BI reports. If data governance is a priority - and for Australian financial services, healthcare, and government organisations, it usually is - this end-to-end visibility is a genuine advantage.

Your team includes business analysts doing self-service data prep. Dataflows Gen2 in Fabric are designed for Power Query users. If part of your data integration strategy involves business users preparing and blending their own data, Fabric's model is much more accessible than ADF's mapping data flows.

Pricing Comparison for Australian Organisations

This is where most of the decision-making happens in practice.

Azure Data Factory pricing (approximate, AUD)

  • Pipeline activity runs: ~$1.50 per 1,000 runs
  • Data movement: ~$0.38 per DIU-hour
  • SSIS integration runtime: ~$0.22 per hour (standard), ~$0.44 per hour (enterprise)
  • Self-hosted IR: Free (you pay for the VM)

For a typical mid-market organisation running 50-100 pipelines daily with moderate data volumes, expect to pay $800-$2,500/month AUD.
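To make that estimate concrete, here's a minimal sketch of the arithmetic, using the approximate AUD rates quoted above. The workload numbers (75 pipelines a day, 10 activities each, 2,000 DIU-hours of data movement) are illustrative assumptions, not a benchmark:

```python
def adf_monthly_cost_aud(activity_runs: int, diu_hours: float,
                         rate_per_1k_runs: float = 1.50,
                         rate_per_diu_hour: float = 0.38) -> float:
    """Rough monthly ADF bill from activity runs and data-movement DIU-hours.

    Rates are the approximate AUD figures quoted in this article; check the
    official Azure pricing page for your region before budgeting.
    """
    return (activity_runs / 1000) * rate_per_1k_runs + diu_hours * rate_per_diu_hour

# Assumed workload: ~75 pipelines/day, ~10 activities each, over 30 days,
# plus ~2,000 DIU-hours of data movement.
runs = 75 * 10 * 30  # 22,500 activity runs per month
cost = adf_monthly_cost_aud(runs, diu_hours=2000)
print(f"~${cost:,.2f}/month AUD")  # lands near the low end of the range above
```

Note how activity runs are almost a rounding error here - data movement (DIU-hours) usually dominates an ADF bill.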

Fabric Data Factory pricing (approximate, AUD)

Fabric uses capacity units (CUs). Data Factory workloads consume CUs from your Fabric capacity:

  • F2 capacity: ~$330/month AUD
  • F4 capacity: ~$660/month AUD
  • F8 capacity: ~$1,320/month AUD
  • F16 capacity: ~$2,640/month AUD
  • F64 capacity: ~$10,560/month AUD

Your data pipelines share this capacity with other Fabric workloads (Power BI, Warehouse, Lakehouse). If you're already paying for Fabric capacity, the incremental cost of running data pipelines is often near zero.

The break-even calculation: If your ADF bill is under $1,500/month AUD and you're not using other Fabric services, standalone ADF is usually cheaper. If you're already on Fabric or your ADF costs exceed $3,000/month, Fabric Data Factory typically works out better.
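The break-even logic above can be sketched as a simple comparison. The tier prices are the approximate AUD figures from this article, and which tier a given workload actually needs depends on real CU consumption, so treat the tier argument as an assumption you'd validate with a capacity trial:

```python
# Approximate monthly AUD prices per Fabric capacity tier (from this article).
FABRIC_TIERS_AUD = {"F2": 330, "F4": 660, "F8": 1320, "F16": 2640, "F64": 10560}

def cheaper_option(adf_monthly_aud: float, required_tier: str,
                   already_on_fabric: bool) -> str:
    """Name the cheaper option for a given ADF bill and required Fabric tier.

    If you already pay for Fabric capacity, the incremental pipeline cost is
    treated as zero, per the sharing model described above.
    """
    fabric_cost = 0 if already_on_fabric else FABRIC_TIERS_AUD[required_tier]
    return "Fabric Data Factory" if fabric_cost < adf_monthly_aud else "Azure Data Factory"

print(cheaper_option(1200, "F8", already_on_fabric=False))  # low volume: ADF
print(cheaper_option(3500, "F8", already_on_fabric=False))  # at scale: Fabric
print(cheaper_option(900, "F4", already_on_fabric=True))    # capacity already paid for
```

This deliberately ignores the shared-capacity upside (Power BI and Warehouse workloads riding the same CUs), which in practice tips marginal cases further toward Fabric.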

Migration Path - ADF to Fabric Data Factory

Microsoft has made migration relatively straightforward because the pipeline model is nearly identical:

  1. Pipelines copy directly. Most ADF pipelines work in Fabric with minimal changes.
  2. Linked services become connections in Fabric. The configuration is similar but not identical.
  3. Mapping data flows need to be converted to Dataflows Gen2 or Fabric notebooks. This is the most labour-intensive part.
  4. Triggers work differently in Fabric. Schedule and event-based triggers need reconfiguration.
  5. Self-hosted integration runtimes are replaced by on-premises data gateways. This requires testing, particularly for complex networking setups.

In our experience, a typical migration for 50-100 pipelines takes 4-8 weeks with proper planning. The pipelines themselves migrate quickly - it's the testing, networking, and trigger reconfiguration that takes time.
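A useful first step in that planning is an inventory pass over your exported ADF pipeline definitions (from a Git-integrated repo or the authoring UI), counting activity types so you can size the data-flow conversion work up front. The sketch below assumes the standard ADF pipeline JSON shape, with activities under `properties.activities` and mapping data flow executions appearing as `ExecuteDataFlow` activities:

```python
import json
from collections import Counter

def activity_counts(pipeline_json: str) -> Counter:
    """Count activity types in one exported ADF pipeline definition."""
    pipeline = json.loads(pipeline_json)
    return Counter(a["type"] for a in pipeline["properties"]["activities"])

# Hypothetical exported pipeline, for illustration only.
sample = json.dumps({
    "name": "daily-load",
    "properties": {"activities": [
        {"name": "copy-orders", "type": "Copy"},
        {"name": "transform-orders", "type": "ExecuteDataFlow"},
    ]},
})

counts = activity_counts(sample)
# ExecuteDataFlow activities are the ones needing conversion to Dataflows Gen2
# or notebooks - the labour-intensive step 3 above.
print(dict(counts), "- activities needing conversion:", counts["ExecuteDataFlow"])
```

Run this across every pipeline in the repo and the `ExecuteDataFlow` total gives you a rough proxy for the hardest part of the migration estimate.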

Our Recommendation for 2026

For most Australian organisations we work with, the answer is becoming clearer:

  • New projects should default to Fabric Data Factory unless you have specific requirements (heavy on-premises integration, granular cost control) that ADF serves better.
  • Existing ADF implementations should stay on ADF unless you're migrating your broader analytics platform to Fabric. Don't migrate Data Factory in isolation - it only makes sense as part of a larger Fabric adoption.
  • Hybrid is fine. Microsoft supports running both. Some of our clients use ADF for on-premises heavy lifting and Fabric Data Factory for cloud-native pipelines. This isn't a hack - it's a legitimate architecture pattern.

How Team 400 Can Help

We're Microsoft Data Factory consultants who've worked with both products since their early days. We help Australian businesses evaluate, implement, and migrate between ADF and Fabric Data Factory based on actual requirements - not marketing slides.

Whether you're choosing between the two, planning a migration, or optimising an existing implementation, our team can help you move quickly and avoid the common pitfalls.

We also work across the broader Microsoft Fabric and Power BI ecosystem, so we can assess your data platform as a whole, not just the integration layer.

Get in touch to discuss your Data Factory requirements, or learn more about our AI consulting and data services.