Microsoft Fabric for Real-Time Analytics - What Works

April 19, 2026 · 10 min read · Michael Ridland

Real-time analytics has been one of those areas where the promise always outpaced the delivery. Every data platform claims to support "real-time," but for most mid-market organisations the actual experience has ranged from painful to barely functional. Azure Event Hubs, Azure Stream Analytics, Azure Data Explorer - stitching these together required specialist skills and a significant budget.

Microsoft Fabric changes this. The Real-Time Analytics workload in Fabric is one of the platform's genuine strengths, and it's accessible in a way that previous streaming solutions weren't. But it's not magic, and there are clear boundaries around what it does well and where it falls short.

In this post, I'll share what we've seen work in practice across our consulting engagements with Australian organisations.

What Real-Time Analytics in Fabric Actually Includes

Fabric's Real-Time Analytics capability is built on the same engine as Azure Data Explorer (Kusto). If you've used ADX before, the technology is familiar. If you haven't, here's what you're working with:

Eventstreams - The ingestion layer. Eventstreams connect to real-time data sources (Azure Event Hubs, IoT Hub, custom apps, Azure SQL CDC, and others) and route data into Fabric. Think of it as the plumbing that gets streaming data into the platform.

KQL Database - The storage and query engine. Data lands in a KQL (Kusto Query Language) database optimised for time-series and log-style data. KQL is purpose-built for querying large volumes of timestamped records quickly.

KQL Querysets - Saved queries that you can share and reuse. Similar to saved SQL queries in other environments.

Real-Time Dashboards - Visualisation layer built on top of KQL databases. These dashboards auto-refresh and are designed for operational monitoring rather than traditional BI reporting.

Reflexes (Data Activator) - An alerting layer that monitors streaming data and triggers actions when conditions are met. Think "send a Teams notification when this metric exceeds a threshold."

Use Cases Where Fabric Real-Time Analytics Shines

Based on our project experience, these are the scenarios where Fabric's real-time capabilities deliver genuine value:

1. Operational Monitoring and Alerting

Monitoring application performance, infrastructure health, or business process metrics in real time. Examples we've implemented:

  • Logistics company tracking delivery vehicle GPS positions and triggering alerts when vehicles deviate from expected routes or stop unexpectedly
  • Financial services firm monitoring transaction processing rates and flagging when throughput drops below acceptable levels
  • Manufacturing business tracking production line sensor data and alerting when quality metrics drift outside tolerances

For these use cases, the combination of Eventstreams (ingestion), KQL Database (storage and querying), and Data Activator (alerting) works well. The latency from event occurring to alert firing is typically under 30 seconds, which is adequate for most operational monitoring.
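As a sketch of how the logistics example could work, the condition "vehicle stopped unexpectedly" can be expressed as a KQL query that Data Activator evaluates; the table and column names here are hypothetical:

```kql
// Vehicles that have reported telemetry in the last 10 minutes but not moved
VehicleTelemetry
| where Timestamp > ago(10m)
| summarize MaxSpeed = max(SpeedKmh), LastSeen = max(Timestamp) by VehicleId
| where MaxSpeed < 1
```

A Data Activator rule watching this query can then fire a Teams notification for each vehicle it returns.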

2. IoT Telemetry Analysis

If you're collecting data from sensors, devices, or equipment, Fabric's KQL engine is built for this. It handles high-volume, high-velocity time-series data efficiently. The query language includes built-in functions for time-series analysis, anomaly detection, and aggregation over time windows.

We worked with an Australian resources company that was ingesting telemetry from 500+ field sensors at 1-second intervals. KQL handled the query volume without issues at an F32 capacity tier, and the team could run ad-hoc analysis across weeks of historical data in seconds.
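To illustrate the built-in time-series functions mentioned above, here is a sketch of anomaly detection over sensor telemetry using `make-series` and `series_decompose_anomalies`; the `SensorReadings` table and its columns are hypothetical:

```kql
SensorReadings
| where Timestamp > ago(7d)
| make-series AvgTemp = avg(Temperature) default = 0.0
    on Timestamp from ago(7d) to now() step 1h by SensorId
| extend Anomalies = series_decompose_anomalies(AvgTemp, 1.5)
| mv-expand Timestamp to typeof(datetime), AvgTemp to typeof(real), Anomalies to typeof(int)
| where Anomalies != 0
| project SensorId, Timestamp, AvgTemp
```

The second argument to `series_decompose_anomalies` controls sensitivity; teams usually tune it against a few weeks of known-good data.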

3. Clickstream and User Behaviour Analytics

Tracking website or application user behaviour in real time is another natural fit. Eventstreams can ingest events from web applications (via Event Hubs), and KQL's support for session analysis, funnel queries, and cohort analysis makes it straightforward to build real-time user analytics.
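A minimal example of the kind of behaviour query this enables, assuming a hypothetical `PageViews` table with a `SessionId` column:

```kql
// Top pages by views and unique sessions over the last hour
PageViews
| where Timestamp > ago(1h)
| summarize Views = count(), Sessions = dcount(SessionId) by PageUrl
| top 10 by Views
```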

4. Log Analytics and Troubleshooting

Application logs, audit logs, and system logs are a great fit for KQL databases. The query language was originally designed for log analysis (it powers Azure Monitor and Microsoft Sentinel), so patterns like searching across billions of log entries, correlating events, and building diagnostic dashboards are first-class capabilities.
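A typical triage query in this style, sketched against a hypothetical `AppLogs` table, groups recent errors by correlation ID and pulls a sample message for each:

```kql
// Error spike triage: which operations are failing, and what do the errors say?
AppLogs
| where Timestamp > ago(24h) and Level == "Error"
| summarize Errors = count(), Sample = take_any(Message) by CorrelationId
| order by Errors desc
| take 20
```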

Use Cases Where Fabric Real-Time Falls Short

Not every real-time scenario is a good fit:

Complex Event Processing (CEP)

If you need to detect patterns across multiple event streams in real time - for example, "alert me when Event A happens on stream 1 and Event B happens on stream 2 within 5 minutes, but only if Event C hasn't happened on stream 3" - Fabric's current capabilities are limited. This kind of complex event processing is better served by Apache Kafka with a stream processing framework, or Azure Stream Analytics for simpler patterns.

Sub-Second Latency Requirements

If your use case demands true sub-second latency from event to action (high-frequency trading, real-time fraud detection on individual transactions, or real-time game state), Fabric isn't the right tool. The ingestion-to-query path typically has 2-15 seconds of latency, which is fine for monitoring but too slow for millisecond-level requirements.

Massive Scale Streaming (millions of events per second)

Fabric can handle impressive throughput, but if you're processing millions of events per second across hundreds of sources, you'll likely need a dedicated streaming platform like Apache Kafka or Confluent, potentially with Fabric as the analytics layer downstream.

Transactional Workloads

KQL databases are optimised for append-only, time-series data. They're not designed for OLTP-style workloads with frequent updates and deletes. If your real-time requirement involves updating individual records in real time (like a live order status), you need a different approach.

Architecture Patterns That Work

Pattern 1 - Simple Streaming Analytics

Best for: Monitoring dashboards, basic alerting

Source (Event Hub / IoT Hub)
    → Eventstream
        → KQL Database
            → Real-Time Dashboard
            → Data Activator (alerts)

This is the simplest pattern and covers a surprising number of use cases. Data flows from your source through an Eventstream into a KQL database, where it's immediately queryable. Real-Time Dashboards provide live visualisation, and Data Activator triggers alerts.

Setup time: 2-4 hours for a basic implementation. A production-ready version with proper error handling and monitoring takes 1-2 weeks.

Pattern 2 - Streaming Plus Batch Enrichment

Best for: Operational analytics where real-time data needs context from batch data

Real-time source → Eventstream → KQL Database
                                      ↕ (join)
Batch source → Data Factory → Lakehouse → SQL Endpoint
                                      ↓
                              Power BI (unified view)

In this pattern, streaming data lands in KQL for real-time queries, but you also join it with dimensional data from your Lakehouse (customer details, product information, reference data). This gives your real-time dashboards business context.

We use this pattern frequently. One example: a retail client receives point-of-sale transaction events in real time, but enriches them with product category and store location data from the Lakehouse to build a live sales-by-category dashboard.
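The enrichment join in that retail example can be sketched in KQL like this, assuming a hypothetical `SalesEvents` stream and a `ProductDim` dimension table (which could be ingested reference data or surfaced from the Lakehouse):

```kql
// Live sales by category: streaming facts joined to a product dimension
SalesEvents
| where Timestamp > ago(15m)
| lookup kind=leftouter ProductDim on ProductId
| summarize Revenue = sum(Amount) by Category, bin(Timestamp, 1m)
```

`lookup` is the efficient choice here because the dimension side is small relative to the stream.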

Pattern 3 - Stream Processing with Landing in Lakehouse

Best for: When you need real-time analytics AND the same data in your batch data warehouse

Source → Eventstream → KQL Database (real-time queries)
                    → Lakehouse (batch storage)
                          ↓
                    Fabric Warehouse (dimensional model)
                          ↓
                    Power BI (batch reporting)

Eventstreams can route data to multiple destinations simultaneously. This lets you serve real-time dashboards from KQL while also landing the raw data in your Lakehouse for batch processing, historical analysis, and traditional BI reporting.

This is the pattern we recommend most often because it eliminates the common problem of having real-time and batch data in separate silos.

Getting KQL Right - Practical Tips

KQL is a powerful query language, but it's different from SQL. Here are the things we've seen trip up teams new to real-time analytics in Fabric:

Think in pipes, not joins. KQL uses a pipe-forward syntax where you chain operations: TableName | where Timestamp > ago(1h) | summarize count() by bin(Timestamp, 5m). It reads left-to-right, top-to-bottom, which is actually more intuitive than SQL once you get used to it.
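Spelling that pipe-forward style out a little further, a realistic monitoring query (table and column names hypothetical) might look like:

```kql
TelemetryEvents
| where Timestamp > ago(1h)                     // filter early to limit data scanned
| extend Region = tostring(Properties.region)   // pull a field out of a dynamic column
| summarize EventCount = count(), P95Latency = percentile(DurationMs, 95)
    by Region, bin(Timestamp, 5m)
| order by Timestamp asc
```

Each stage receives the previous stage's output, so you can build and debug the query one pipe at a time.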

Use materialised views for recurring aggregations. If your dashboard repeatedly runs the same aggregation (hourly counts, daily averages), create a materialised view. It pre-computes the aggregation and updates incrementally, which is much more efficient than running the full query every time.
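As a sketch (with a hypothetical `RawEvents` table), creating a materialised view for an hourly count looks like this:

```kql
.create materialized-view HourlyEventCounts on table RawEvents
{
    RawEvents
    | summarize EventCount = count() by EventType, bin(Timestamp, 1h)
}
```

Dashboards then query `HourlyEventCounts` instead of scanning `RawEvents`.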

Set retention policies. KQL databases can grow quickly with high-volume streaming data. Set appropriate retention policies (e.g., keep raw data for 30 days, aggregated data for 1 year). This controls storage costs and query performance.
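The 30-day/1-year split above translates into retention policy commands like these (run one at a time; table names are the hypothetical ones from the materialised-view example):

```kql
// Keep raw events for 30 days
.alter-merge table RawEvents policy retention softdelete = 30d

// Keep pre-aggregated data for a year
.alter-merge table HourlyEventCounts policy retention softdelete = 365d
```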

Use update policies for data transformation. Rather than transforming data in your application before sending it to Fabric, ingest the raw data and use update policies to transform it on arrival. This keeps your ingestion pipeline simple and moves transformation logic into the platform where it's easier to manage.
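A hedged sketch of that approach, assuming hypothetical `RawEvents` and `ParsedEvents` tables where raw JSON arrives in a `Payload` column:

```kql
// 1. A function that shapes raw JSON payloads into typed columns
.create-or-alter function ParseRawEvents() {
    RawEvents
    | extend d = todynamic(Payload)
    | project Timestamp, DeviceId = tostring(d.deviceId), Temperature = toreal(d.temp)
}

// 2. An update policy that runs the function whenever RawEvents receives new data,
//    landing the transformed rows in ParsedEvents
.alter table ParsedEvents policy update
@'[{"IsEnabled": true, "Source": "RawEvents", "Query": "ParseRawEvents()", "IsTransactional": false}]'
```

Your application keeps sending raw payloads, and the schema logic lives in the database where it can be versioned and changed without touching producers.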

Don't try to make KQL into SQL. Teams that try to write SQL patterns in KQL struggle. KQL has its own idioms for time-series analysis, anomaly detection, and pattern matching. Invest a few days in learning KQL properly - Microsoft's official tutorials are good.

Capacity Planning for Real-Time Workloads

Real-time analytics can be CU-hungry in Fabric. Here's what to expect:

Ingestion consumes CUs proportional to your event volume and the complexity of any transformation applied during ingestion. Simple passthrough ingestion is cheap. Complex parsing and routing is more expensive.

KQL queries consume CUs based on the data volume scanned and the complexity of the query. Well-designed materialised views can reduce query cost by 10-50x compared to scanning raw data.

Real-Time Dashboards consume CUs every time they auto-refresh. A dashboard set to refresh every 10 seconds across 20 tiles generates significant query load. Be deliberate about refresh rates - most operational dashboards work fine at 30-60 second refresh intervals.

Data Activator consumes CUs to evaluate conditions continuously. The cost depends on how many conditions you're monitoring and how frequently they're evaluated.

For a mid-market real-time analytics workload (10,000-100,000 events per minute, 5-10 dashboard users, a handful of alerts), we typically see real-time workloads consume 20-40% of an F32 capacity. Budget accordingly, especially if you're sharing that capacity with batch workloads.

Real-Time Analytics vs Power BI Streaming - Which to Use

Fabric gives you two ways to build real-time visuals:

Real-Time Dashboards (KQL-based) are better for:

  • High-volume data (millions of events)
  • Historical context alongside real-time (query data from the last hour, day, or month)
  • Complex calculations and aggregations
  • Sharing with users who need to drill into data

Power BI streaming datasets are better for:

  • Simple counters and gauges
  • Very low-latency visual updates (under 5 seconds)
  • Embedding real-time tiles in existing Power BI dashboards
  • Scenarios where the audience already uses Power BI heavily

In practice, we use Real-Time Dashboards for operational teams (operations centres, production floors, logistics hubs) and Power BI streaming for executive dashboards that show one or two real-time KPIs alongside traditional batch reporting.

Getting Started with Real-Time Analytics in Fabric

If you're considering real-time analytics, here's the approach we recommend:

  1. Start with one use case. Don't try to build a real-time platform for everything. Pick one high-value scenario (the one where latency matters most to the business) and build that first.

  2. Use the Fabric trial. Microsoft's trial capacity is sufficient for a proof of concept. Build your Eventstream, load some test data, and validate the experience before committing budget.

  3. Invest in KQL skills. Send one or two team members through Microsoft's KQL training. It's a small investment that pays off quickly.

  4. Plan for capacity. Monitor CU consumption from day one using the Capacity Metrics app. Real-time workloads can surprise you with their consumption if you're not watching.

  5. Design for both real-time and batch. Use architecture Pattern 3 above so your streaming data also feeds your batch data platform. This avoids creating a silo.

How Team 400 Helps with Real-Time Analytics

We've implemented real-time analytics solutions across logistics, manufacturing, financial services, and retail in Australia. Our Microsoft Fabric consulting team has deep experience with KQL, Eventstreams, and the broader Fabric platform.

Typical real-time analytics engagements include:

  • Architecture design for streaming workloads
  • Eventstream configuration and source integration
  • KQL database design with optimised materialised views and retention policies
  • Real-Time Dashboard and Data Activator setup
  • Integration with existing Power BI reporting environments
  • Capacity planning and performance tuning

We also help organisations that need real-time analytics as part of a broader AI and data strategy, where streaming data feeds ML models or powers intelligent alerting.

If you're exploring real-time analytics in Fabric and want to understand what's realistic for your use case, reach out to our team. We'll help you determine whether Fabric's real-time capabilities fit your requirements or if a different approach makes more sense.

Explore our full services or learn how real-time data fits into broader data pipeline architectures.