
How to Plan a Microsoft Fabric Migration from Azure Synapse

April 18, 2026 · 10 min read · Michael Ridland

If you're running Azure Synapse Analytics today, Microsoft is nudging you toward Fabric. The Synapse Dedicated SQL Pool isn't going away tomorrow, but investment and feature development have clearly shifted to Fabric. The question for most organisations isn't whether to migrate, but when and how.

We've led several Synapse-to-Fabric migrations for Australian organisations, and the process is more nuanced than Microsoft's marketing suggests. This post covers what a well-planned migration actually looks like, what to watch out for, and how to set realistic expectations.

Why Migrate from Synapse to Fabric

Let's start with the honest reasons, not the marketing reasons:

Cost savings are real. Azure Synapse Dedicated SQL Pools are expensive, especially if you're running them 24/7. A DW200c (a common mid-market configuration) costs roughly $3,600 AUD/month. Equivalent query performance in Fabric Warehouse or Lakehouse SQL endpoint often costs less, especially when you factor in the consolidation of other services.

Feature development has moved to Fabric. Microsoft is putting its engineering effort into Fabric. Synapse is in maintenance mode - it still works, it still gets security patches, but new capabilities are landing in Fabric. If you want features like Direct Lake for Power BI, real-time analytics, or OneLake storage, they're Fabric-only.

Operational simplification. If you're running Synapse Dedicated Pool, Synapse Serverless, Synapse Pipelines, and Power BI Premium as separate services, Fabric consolidates the management overhead. One capacity, one monitoring tool, one governance model.

Microsoft's licensing direction. We've seen Microsoft account teams offering attractive Fabric pricing to incentivise migration from Synapse. If you're coming up for EA renewal, this is a good time to negotiate.

What You're Actually Migrating

A Synapse environment typically has these components that need to move:

  1. Dedicated SQL Pool (data warehouse) - Tables, views, stored procedures, functions, users, and security
  2. Synapse Pipelines - ETL/ELT orchestration (similar to Azure Data Factory)
  3. Synapse Serverless SQL Pool - Ad-hoc query capabilities over data lake files
  4. Spark Pools - Notebooks and Spark jobs for data engineering
  5. Linked Services and Integration Runtimes - Connections to source systems
  6. Power BI connections - Reports and datasets that connect to Synapse

Each component has a different migration path to Fabric, and some are more straightforward than others.

Step 1 - Assess Your Current Synapse Environment

Before you plan anything, you need a clear picture of what you're working with. We run a structured assessment that covers:

Workload inventory:

  • How many tables in the Dedicated Pool? What's the total data volume?
  • How many stored procedures and views? How complex are they?
  • How many pipelines? What do they connect to?
  • How many Spark notebooks? What libraries do they depend on?
  • How many Power BI reports connect to Synapse?
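
Most of these counts can be pulled straight from the Dedicated Pool's catalog views. Here's a rough sketch: the queries use standard SQL Server catalog views (which dedicated pools expose), while INVENTORY_QUERIES and inventory_summary are just illustrative names we've made up for collecting the results.

```python
# Catalog-view queries for the workload inventory. These are standard
# SQL Server system views, available in a Synapse Dedicated SQL Pool.
INVENTORY_QUERIES = {
    "tables": "SELECT COUNT(*) FROM sys.tables",
    "views": "SELECT COUNT(*) FROM sys.views",
    "stored_procedures": "SELECT COUNT(*) FROM sys.procedures",
    "user_functions": "SELECT COUNT(*) FROM sys.objects WHERE type IN ('FN','TF','IF')",
}

def inventory_summary(counts):
    """Format collected counts into one line per item for the assessment doc."""
    return [f"{name}: {counts.get(name, 'n/a')}" for name in INVENTORY_QUERIES]

# Example with counts already pulled from the pool
summary = inventory_summary(
    {"tables": 120, "views": 45, "stored_procedures": 60, "user_functions": 8}
)
```

Run each query with whatever SQL client you already use against the pool; pipeline and notebook counts come from Synapse Studio rather than T-SQL.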

Usage patterns:

  • When do ETL jobs run? How long do they take?
  • What's the peak concurrent query load? When does it happen?
  • Which tables are queried most frequently?
  • Are there any real-time or near-real-time requirements?
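
Peak concurrency is easy to compute once you've exported query start and end times (for a dedicated pool, sys.dm_pdw_exec_requests is the usual starting point). A minimal sketch, assuming you have the timings as (start, end) pairs:

```python
from datetime import datetime

def peak_concurrency(intervals):
    """Given (start, end) pairs for query executions, return the peak number
    of simultaneously running queries and the time it was first reached."""
    events = []
    for start, end in intervals:
        events.append((start, 1))   # a query begins
        events.append((end, -1))    # a query ends
    # Sort ends before starts at identical timestamps so back-to-back
    # queries don't count as overlapping.
    events.sort(key=lambda e: (e[0], e[1]))
    current = peak = 0
    peak_at = None
    for ts, delta in events:
        current += delta
        if current > peak:
            peak, peak_at = current, ts
    return peak, peak_at

# Example: three overlapping queries from an exported execution log
runs = [
    (datetime(2026, 3, 2, 9, 0), datetime(2026, 3, 2, 9, 10)),
    (datetime(2026, 3, 2, 9, 5), datetime(2026, 3, 2, 9, 15)),
    (datetime(2026, 3, 2, 9, 7), datetime(2026, 3, 2, 9, 8)),
]
peak, peak_at = peak_concurrency(runs)  # peak is 3, reached at 09:07
```

The peak number and when it occurs tells you what Fabric capacity you need to size for, and whether smoothing ETL schedules could reduce it.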

T-SQL compatibility:

  • Does your code use features that Fabric Warehouse doesn't support yet?
  • Common blockers include: materialized views, result set caching, workload management, certain data types, and cross-database queries
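
A simple script can flag most of these before you start. The pattern list below is indicative only, not an official compatibility matrix, so check every hit against current Fabric Warehouse documentation:

```python
import re

# Indicative blocker patterns. This is NOT an exhaustive or official list
# of Fabric Warehouse limitations; verify hits against current docs.
BLOCKER_PATTERNS = {
    "materialized view": re.compile(r"\bMATERIALIZED\s+VIEW\b", re.IGNORECASE),
    "result set caching": re.compile(r"\bRESULT_SET_CACHING\b", re.IGNORECASE),
    "workload management": re.compile(r"\bWORKLOAD\s+GROUP\b", re.IGNORECASE),
    "identity column": re.compile(r"\bIDENTITY\s*\(", re.IGNORECASE),
}

def scan_tsql(sql_text):
    """Return the potential Fabric blockers found in one T-SQL script."""
    return [name for name, pattern in BLOCKER_PATTERNS.items()
            if pattern.search(sql_text)]

hits = scan_tsql("CREATE MATERIALIZED VIEW dbo.mv_sales AS SELECT ...")
```

Point it at the scripted definitions of your views and stored procedures (from SSMS or sys.sql_modules) and you get a blocker count per object for the assessment.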

Dependencies:

  • What source systems feed into Synapse?
  • What downstream systems consume data from Synapse?
  • Are there any direct connections from applications to the Dedicated Pool?

This assessment typically takes 1-2 weeks and produces a migration complexity score for each component.
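
The scoring itself doesn't need to be sophisticated. Here's a sketch of the kind of weighting we mean; the weights and bands below are illustrative assumptions, not a published methodology:

```python
# Illustrative component scoring. Weights and thresholds are assumptions
# you should tune to your own environment, not a formal standard.

def complexity_score(tables, stored_procs, pipelines, notebooks, blockers):
    """Rough 0-100 migration complexity score for a Synapse component set.
    `blockers` counts T-SQL features with no direct Fabric equivalent."""
    score = 0
    score += min(tables // 10, 20)        # volume of objects to move
    score += min(stored_procs // 5, 25)   # procedural logic needing review
    score += min(pipelines // 5, 20)      # orchestration to re-point
    score += min(notebooks // 5, 15)      # Spark assets to re-test
    score += min(blockers * 5, 20)        # features requiring refactoring
    return score

def complexity_band(score):
    if score < 30:
        return "low"
    if score < 60:
        return "medium"
    return "high"

score = complexity_score(tables=120, stored_procs=40, pipelines=30,
                         notebooks=10, blockers=3)  # 43, i.e. "medium"
```

The point of the score is sequencing: low-complexity components become Phase 1 quick wins, high-complexity ones get budgeted refactoring time.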

Step 2 - Choose Your Fabric Target Architecture

You have several options for where Synapse workloads land in Fabric:

Option A - Fabric Warehouse

The most direct replacement for Synapse Dedicated SQL Pool. Fabric Warehouse supports T-SQL, so many stored procedures and views can be migrated with minimal changes. It's the right choice when:

  • Your team thinks in T-SQL
  • You have significant stored procedure logic
  • Your data model follows a traditional star schema

Option B - Fabric Lakehouse with SQL Endpoint

A Lakehouse stores data as Delta Parquet files in OneLake, with an auto-generated SQL endpoint for querying. It's the right choice when:

  • You want to access the same data from both SQL and Spark
  • You're planning to use Direct Lake mode in Power BI
  • Your data engineering team prefers Python/Spark over T-SQL

Option C - Hybrid (Warehouse + Lakehouse)

Many organisations use both. Raw and transformed data lives in the Lakehouse (accessible via Spark), while curated business models live in the Warehouse (accessible via T-SQL). This is the architecture we recommend most often because it gives you flexibility without forcing a team to change their preferred tools.

Pipeline Migration

Synapse Pipelines map closely to Data Factory pipelines in Fabric. The pipeline definition format is compatible, so in many cases you can export pipeline JSON from Synapse and import it into Fabric with minor modifications. The main differences:

  • Some connectors available in standalone Azure Data Factory or Synapse aren't yet available in Fabric Data Factory
  • Integration Runtime behaviour differs slightly
  • Managed private endpoints work differently in Fabric
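
Because connector gaps are the main risk, it's worth scanning the exported pipeline JSON for activity types before you import anything. A sketch (activity_types is our illustrative helper; the nesting keys are the ones Data Factory uses for ForEach/Until and If Condition containers):

```python
import json

def activity_types(pipeline_json):
    """Collect every activity type used in an exported Synapse/Data Factory
    pipeline definition, including activities nested in containers."""
    doc = json.loads(pipeline_json)
    found = set()

    def walk(activities):
        for activity in activities:
            found.add(activity.get("type", "Unknown"))
            props = activity.get("typeProperties", {})
            # ForEach/Until nest under "activities"; If Condition splits
            # into true/false branches.
            for key in ("activities", "ifTrueActivities", "ifFalseActivities"):
                walk(props.get(key, []))

    walk(doc.get("properties", {}).get("activities", []))
    return found

exported = json.dumps({
    "name": "pl_load_sales",
    "properties": {"activities": [
        {"name": "copy_raw", "type": "Copy"},
        {"name": "per_table", "type": "ForEach",
         "typeProperties": {"activities": [{"name": "lk", "type": "Lookup"}]}},
    ]},
})
types_used = activity_types(exported)  # {"Copy", "ForEach", "Lookup"}
```

Diff the resulting set against what you've confirmed Fabric Data Factory supports today, and you know exactly which pipelines need rework before cutover.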

Spark Migration

Synapse Spark notebooks can generally be moved to Fabric Spark with modest changes. The main considerations:

  • Library and dependency management is different in Fabric
  • Fabric uses a different Spark runtime version - check for compatibility
  • Pool configuration and autoscaling work differently
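
For the library question, a quick diff of what your notebooks install against what you've confirmed exists in your Fabric Spark environment catches most surprises. A sketch, assuming you've collected the requirements (from %pip cells or a requirements.txt) into a list:

```python
# Hedged sketch: compares by package name only; version-pin compatibility
# still needs a manual check against the Fabric runtime.

def missing_libraries(notebook_requirements, fabric_available):
    """Return requirements whose package name isn't in the Fabric set."""
    def name(req):
        # "pandas==1.5.3" -> "pandas"; "azure-identity>=1.12" -> "azure-identity"
        for sep in ("==", ">=", "<=", "~=", ">", "<"):
            if sep in req:
                return req.split(sep)[0].strip().lower()
        return req.strip().lower()
    available = {a.strip().lower() for a in fabric_available}
    return sorted(r for r in notebook_requirements if name(r) not in available)

missing = missing_libraries(
    ["pandas==1.5.3", "great-expectations>=0.15"],
    ["pandas", "numpy"],
)  # ["great-expectations>=0.15"]
```

Anything on the missing list either gets added to the Fabric environment definition or refactored out of the notebook before migration.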

Step 3 - Build a Migration Plan with Realistic Timelines

Here's what we've found works in practice:

Small Environment (under 50 tables, under 20 pipelines)

  • Assessment: 1 week
  • Architecture design: 1 week
  • Migration and testing: 3-4 weeks
  • Parallel running and cutover: 2 weeks
  • Total: 7-8 weeks

Medium Environment (50-200 tables, 20-100 pipelines)

  • Assessment: 2 weeks
  • Architecture design: 1-2 weeks
  • Migration and testing: 6-10 weeks
  • Parallel running and cutover: 2-4 weeks
  • Total: 11-18 weeks

Large Environment (200+ tables, 100+ pipelines, Spark workloads)

  • Assessment: 2-3 weeks
  • Architecture design: 2-3 weeks
  • Migration and testing: 12-20 weeks
  • Parallel running and cutover: 4-6 weeks
  • Total: 20-32 weeks

These timelines assume a dedicated migration team (2-4 people) working primarily on the migration. If migration is competing with BAU work, double the elapsed time.

Step 4 - Execute the Migration in Phases

We never recommend a "big bang" migration. Instead, migrate in phases ordered by business value and risk:

Phase 1: Foundation and quick wins

  • Set up Fabric capacity and workspaces
  • Migrate the simplest pipelines and tables first
  • Move one Power BI report end-to-end (pipeline, table, report) as a proof of concept
  • Validate data quality between Synapse and Fabric

Phase 2: Core workloads

  • Migrate the main data warehouse tables and transformation logic
  • Move the primary ETL pipelines
  • Reconnect Power BI reports to Fabric data sources
  • Run parallel processing (both Synapse and Fabric) for validation

Phase 3: Complex and edge cases

  • Migrate complex stored procedures (may require refactoring)
  • Move Spark notebooks and jobs
  • Handle any connector or integration runtime changes
  • Migrate security model and row-level security

Phase 4: Cutover and decommission

  • Switch all consumers to Fabric
  • Monitor for issues during a stabilisation period (2-4 weeks)
  • Decommission Synapse resources
  • Update documentation and runbooks

Common Pitfalls We've Seen

T-SQL Incompatibilities

Fabric Warehouse supports a broad subset of T-SQL, but it's not 100% compatible with Synapse Dedicated Pool. Watch out for:

  • CREATE TABLE AS SELECT (CTAS) - Works differently in Fabric. You'll need to adjust patterns that rely on CTAS for table management.
  • Distribution and indexing - Synapse uses hash distribution and columnstore indexes that you configure explicitly. Fabric Warehouse manages distribution automatically. This is actually simpler, but means your existing DDL scripts need modification.
  • Materialized views - Not supported in Fabric Warehouse as of early 2026. If you rely on these for performance, you'll need alternative approaches.
  • Dynamic data masking and column-level security - Check current Fabric support before migrating security models.

Data Movement Underestimation

Moving terabytes of data from Synapse Dedicated Pool to OneLake takes time. We've seen organisations underestimate the data movement phase. For large datasets, plan for:

  • Exporting data from Synapse Dedicated Pool (export throughput scales with the pool's size, so a small pool can make this slow)
  • Ingesting into OneLake (typically via COPY INTO or Data Factory pipelines)
  • Validating row counts and data quality post-migration
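
The validation step is worth automating from day one. A minimal sketch, assuming you've already pulled per-table row counts from each side (e.g. SELECT COUNT(*) per table) into dictionaries:

```python
# Compares post-migration row counts between source and target.
# Row counts are a first gate only; follow up with checksums or
# column-level comparisons on critical tables.

def validate_row_counts(synapse_counts, fabric_counts):
    """Return per-table discrepancies between source and target counts."""
    issues = {}
    for table, expected in synapse_counts.items():
        actual = fabric_counts.get(table)
        if actual is None:
            issues[table] = "missing in Fabric"
        elif actual != expected:
            issues[table] = f"expected {expected}, got {actual}"
    for table in fabric_counts.keys() - synapse_counts.keys():
        issues[table] = "unexpected extra table in Fabric"
    return issues

checks = validate_row_counts(
    {"dbo.FactSales": 1000000, "dbo.DimDate": 3650},
    {"dbo.FactSales": 1000000, "dbo.DimDate": 3649},
)  # {"dbo.DimDate": "expected 3650, got 3649"}
```

An empty result is your signal to move that batch of tables to the next phase; anything else goes back to the pipeline team before cutover.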

Power BI Report Reconnection

Changing the data source for Power BI reports from Synapse to Fabric isn't always a simple connection string swap. If your reports use Import mode, you'll need to update the data source and refresh. If they use DirectQuery, you'll need to repoint and validate that query performance is acceptable on Fabric.

The move to Direct Lake mode is worth the effort for most organisations, but it depends on your data being stored as Delta tables in OneLake. Check current Direct Lake support for your chosen target (Lakehouse versus Warehouse) before you commit, as Fabric's coverage here has been expanding. Plan your architecture accordingly.

Security Model Differences

Synapse Dedicated Pool has a rich security model with database roles, schema-level permissions, row-level security, and dynamic data masking. Not all of these map directly to Fabric. Plan to audit and rebuild your security model, especially for sensitive data.

Performance Regression on Specific Queries

In our experience, 80-90% of queries run the same or faster on Fabric compared to Synapse Dedicated Pool. But 10-20% may run slower, especially complex multi-join queries that benefited from Synapse's specific distribution strategies. Identify these queries early and plan for optimisation work.

Running Synapse and Fabric in Parallel

During migration, you'll run both environments simultaneously. This costs more in the short term but is essential for risk management. Tips for the parallel period:

  • Set a firm end date for the parallel period. Without a deadline, it drags on indefinitely and you're paying double.
  • Don't maintain both environments equally. Fabric should be primary and Synapse should be read-only or limited to the workloads that haven't migrated yet.
  • Use the parallel period for performance comparison. Run the same queries against both environments and document the differences. This builds confidence for cutover.
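
That comparison is simple to mechanise. A sketch, assuming you've captured median runtimes (in seconds) per named query from each environment; the 1.2x threshold is an assumption you should tune:

```python
# Flags queries that regressed during the parallel period. Inputs are
# median runtimes in seconds keyed by a query name or ID.

def find_regressions(synapse_times, fabric_times, threshold=1.2):
    """Return queries where Fabric is more than `threshold`x slower,
    mapped to the slowdown ratio."""
    regressions = {}
    for query, syn in synapse_times.items():
        fab = fabric_times.get(query)
        if fab is not None and syn > 0 and fab / syn > threshold:
            regressions[query] = round(fab / syn, 2)
    return regressions

slow = find_regressions(
    {"daily_sales": 10.0, "churn_model": 30.0},
    {"daily_sales": 9.0, "churn_model": 45.0},
)  # {"churn_model": 1.5}
```

The output becomes your optimisation backlog for cutover: each flagged query either gets tuned on Fabric or accepted with a documented rationale.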

Estimated Cost of a Synapse-to-Fabric Migration

Beyond the Fabric licensing costs, you'll need to budget for migration effort:

Company Size               Internal Effort      External Consulting (if applicable)
Small (under 50 tables)    200-400 hours        $30,000-60,000 AUD
Medium (50-200 tables)     500-1,000 hours      $75,000-150,000 AUD
Large (200+ tables)        1,000-2,000+ hours   $150,000-300,000+ AUD

These figures include assessment, migration, testing, and parallel running. They don't include ongoing Fabric licensing or any data engineering improvements you make along the way.

How Team 400 Approaches Synapse-to-Fabric Migrations

We've built a structured migration methodology from our experience with Australian organisations. Our Microsoft Fabric consulting team handles migrations end-to-end, from assessment through cutover and post-migration support.

What makes our approach different:

  • We start with an honest assessment. If Fabric isn't ready for your workloads today, we'll tell you. Sometimes the right answer is to wait 6 months for a specific feature.
  • We migrate in business-value order. The first workloads we move are the ones that will show the biggest improvements in cost, performance, or capability.
  • We transfer knowledge throughout. Your team should be able to operate Fabric independently by the time we finish. We don't build dependency.

We also work with Azure Data Factory and Power BI as part of migration projects, since these are almost always intertwined with a Synapse migration.

If you're considering a migration from Synapse to Fabric and want to understand what's involved for your specific environment, contact our team for an initial assessment. We'll scope the effort, estimate costs, and give you a realistic timeline.

Learn more about our full range of data and AI services or read about our broader AI consulting approach.