Power BI DirectQuery - When to Use It and How to Make It Work
Most Power BI projects start with Import mode, and honestly, that's the right call for the majority of use cases. But every few months a client comes to us with a scenario where Import just won't cut it - the dataset is too large, the data needs to be truly real-time, or there are compliance requirements around not duplicating data outside the source system. That's when DirectQuery enters the conversation.
The problem is that DirectQuery gets misunderstood. Some teams avoid it entirely because they've heard it's slow. Others adopt it for everything because "real-time data" sounds impressive in a stakeholder meeting. Neither approach is right. Microsoft's official DirectQuery model guidance covers the technical detail well, but here's what we've actually learned from deploying DirectQuery models for Australian organisations.
What DirectQuery Actually Does
In Import mode, Power BI pulls a snapshot of your data into its own in-memory engine. Queries run fast because they're hitting compressed, columnar data sitting right there in the Power BI service. The trade-off is that your data is only as current as your last refresh.
DirectQuery flips this. No data is stored in Power BI. Every time a user interacts with a report - opens a page, changes a filter, clicks a slicer - Power BI generates a SQL query and sends it back to your source database. The result comes back, the visual renders, and then it's gone. Next interaction, new query.
This means your report performance is entirely dependent on how fast your source database can respond. We can't stress this enough. We've seen DirectQuery reports that feel instant against a properly tuned Azure SQL database, and we've seen reports that take 45 seconds per visual against an under-resourced on-premises SQL Server sitting behind a data gateway with a dodgy network connection.
When DirectQuery Actually Makes Sense
After working with dozens of DirectQuery deployments, here are the scenarios where it genuinely works well:
Large datasets that exceed Import capacity. Power BI Pro has a 1GB model size limit. Premium goes higher, but even then, if you're dealing with billions of rows of transactional data, Import isn't practical. DirectQuery lets you report against massive datasets without trying to squeeze them into memory.
Near real-time requirements that actually matter. Some industries need data that's minutes old, not hours old. We've built DirectQuery dashboards for logistics companies tracking shipments and for financial services firms monitoring transaction volumes. In those cases, the 30-minute refresh cycle of Import mode genuinely isn't good enough.
Data sovereignty and compliance. Some organisations - particularly in government and healthcare - have restrictions on data duplication. DirectQuery keeps the data in the source system. It never sits in the Power BI service. For some compliance frameworks, that distinction matters.
Reporting against operational databases where a staging layer isn't justified. Not every organisation has a data warehouse. If you're a mid-size company with a decent SQL database and you just need a few reports, DirectQuery can be simpler than building an entire ETL pipeline just to get data into Import mode.
When It Doesn't Make Sense (Even Though It Sounds Like It Should)
"We want real-time dashboards." This is the most common justification we hear, and about half the time, the actual requirement is data that's a few hours old, not seconds old. Import with scheduled refresh handles that fine and performs much better.
Complex analytical models with lots of measures and calculations. DirectQuery pushes computation to the source database. If your DAX measures are complex, Power BI translates them into SQL queries that your database might struggle with. We've seen single measure calculations generate SQL queries that are 200+ lines long. Your DBA will not thank you.
Reports with many visuals per page. Each visual generates its own query. A page with 15 visuals means 15 simultaneous queries hitting your source database every time someone opens the page or changes a filter. This can overwhelm an underprepared database quickly.
Optimising the Source Database
This is where most DirectQuery projects succeed or fail, and it's the part that gets the least attention in the planning phase. The model is only as good as what's underneath it.
Indexing matters more than ever. In Import mode, Power BI's VertiPaq engine handles compression and indexing internally. In DirectQuery, your database's indexes are doing all the work. Look at the columns used in filters, slicers, and group-by operations across your reports. Those columns need indexes. For SQL Server and Azure SQL, columnstore indexes can be particularly effective for aggregation-heavy report queries.
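As a rough sketch of what this looks like in practice - assuming a hypothetical fact table dbo.FactSales that reports commonly filter by order date and store:

```sql
-- Rowstore index on a common filter/slicer column, covering the
-- measures the reports aggregate so lookups avoid the base table
CREATE NONCLUSTERED INDEX IX_FactSales_OrderDate
ON dbo.FactSales (OrderDate)
INCLUDE (StoreKey, Quantity, UnitPrice);

-- Alternatively, a clustered columnstore index suits aggregation-heavy
-- report queries (SQL Server 2016+ / Azure SQL; this replaces the
-- table's rowstore storage, so test against your workload first)
CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales
ON dbo.FactSales;
```

Which of the two pays off depends on the query mix: the columnstore option tends to win for wide scans and aggregations, the targeted rowstore index for selective filters.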
Materialise your calculations. If your reports frequently show computed values - revenue as quantity times unit price, for example - add computed columns to your source tables rather than calculating them in DAX. Computed columns in the database can be indexed. DAX calculations in DirectQuery translate to expressions in the SQL query, which the database optimiser may or may not handle efficiently.
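Using the revenue example above, a minimal sketch (table and column names are illustrative; PERSISTED stores the value on disk so it can be indexed):

```sql
-- Persisted computed column: calculated once at write time,
-- not re-evaluated inside every report query
ALTER TABLE dbo.FactSales
ADD Revenue AS (Quantity * UnitPrice) PERSISTED;

-- The computed column can now be indexed like any other column
CREATE NONCLUSTERED INDEX IX_FactSales_Revenue
ON dbo.FactSales (Revenue);
```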
Create indexed views for common aggregations. If your reports consistently aggregate line-level data up to monthly or weekly summaries, create indexed views at that grain in your database. This gives the query optimiser a pre-built path to the answer instead of scanning millions of rows each time.
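A sketch of a monthly-grain indexed view, again using the hypothetical dbo.FactSales table and assuming the aggregated columns are NOT NULL (a requirement for SUM in indexed views):

```sql
-- Schema-bound view at monthly grain
CREATE VIEW dbo.vMonthlySales
WITH SCHEMABINDING
AS
SELECT
    StoreKey,
    YEAR(OrderDate)  AS OrderYear,
    MONTH(OrderDate) AS OrderMonth,
    SUM(Quantity * UnitPrice) AS Revenue,
    COUNT_BIG(*)     AS RecordCount   -- required when the view uses GROUP BY
FROM dbo.FactSales
GROUP BY StoreKey, YEAR(OrderDate), MONTH(OrderDate);
GO

-- The unique clustered index is what actually materialises the view
CREATE UNIQUE CLUSTERED INDEX CIX_vMonthlySales
ON dbo.vMonthlySales (StoreKey, OrderYear, OrderMonth);
```

On Enterprise and Azure SQL the optimiser can match report queries to the view automatically; on other editions, point the Power BI table at the view directly.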
Build a proper date table in your database. Don't use Power BI's auto date/time feature with DirectQuery. It generates additional queries that can be inefficient. Instead, create a date dimension table in your source database with columns for year, quarter, month, week, and any other time periods your reports need. This is standard dimensional modelling practice, but it matters even more here.
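A minimal date dimension along these lines (column names illustrative; GENERATE_SERIES requires Azure SQL or SQL Server 2022+ - on older versions, populate from a numbers table instead):

```sql
CREATE TABLE dbo.DimDate (
    DateKey      date       NOT NULL PRIMARY KEY,
    [Year]       smallint   NOT NULL,
    [Quarter]    tinyint    NOT NULL,
    [MonthNum]   tinyint    NOT NULL,
    [MonthName]  varchar(9) NOT NULL,
    [WeekOfYear] tinyint    NOT NULL
);

-- Populate once, covering the date range your reports need
INSERT INTO dbo.DimDate
SELECT d,
       YEAR(d), DATEPART(QUARTER, d), MONTH(d),
       DATENAME(MONTH, d), DATEPART(WEEK, d)
FROM (SELECT DATEADD(DAY, value, '2015-01-01') AS d
      FROM GENERATE_SERIES(0, 5843)) AS days;   -- roughly 2015 through 2030
```

Mark this table as the date table in Power BI and relate it to your fact tables, rather than relying on auto date/time.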
Use referential integrity where possible. When your source database enforces foreign key relationships between fact and dimension tables, Power BI can generate more efficient SQL using inner joins instead of outer joins. This is a setting on the relationship in Power BI (assume referential integrity), but it only works correctly if your data actually maintains integrity. If it doesn't, you'll get incorrect results - missing rows from the fact table.
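Before enabling "assume referential integrity" on a relationship, it's worth enforcing the constraint in the database and checking for orphaned rows - a sketch, with hypothetical table names:

```sql
-- Enforce the fact-to-dimension relationship; WITH CHECK validates
-- existing rows as well as new ones
ALTER TABLE dbo.FactSales WITH CHECK
ADD CONSTRAINT FK_FactSales_DimStore
FOREIGN KEY (StoreKey) REFERENCES dbo.DimStore (StoreKey);

-- Quick audit for orphaned fact rows that would silently disappear
-- from visuals once inner joins are generated
SELECT COUNT(*) AS OrphanedRows
FROM dbo.FactSales f
LEFT JOIN dbo.DimStore s ON s.StoreKey = f.StoreKey
WHERE s.StoreKey IS NULL;
```

If the audit returns anything other than zero, fix the data (or leave the setting off) before trusting the report numbers.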
Optimising the Power BI Model
Even with a well-tuned source database, the model design itself makes a difference.
Keep Power Query transformations minimal. Ideally, each table in your DirectQuery model maps directly to a single table or view in the source database. When Power BI can't fold your Power Query transformations into the source query, it falls back to local processing, which defeats the purpose. Use the "View Native Query" option in Power Query to check whether your transformations are being folded.
Avoid relative date filters in Power Query. Filtering for "last 12 months relative to today" in Power Query generates inefficient SQL with hardcoded date boundaries that get recalculated on every query. Apply date filtering through report-level filters or slicers instead.
Consider dual storage mode. Composite models let you mix Import and DirectQuery tables. This is genuinely useful. Import your slowly changing dimension tables (products, customers, regions) while keeping your large, frequently updated fact tables in DirectQuery. The dimension queries run fast from memory, and you only hit the source database for the fact table queries.
Reduce the number of visuals per page. This applies to all Power BI reports, but it's especially impactful with DirectQuery. Fewer visuals mean fewer simultaneous queries. Aim for 7-8 well-designed visuals per page rather than cramming 20 onto a single screen.
The Team Dynamic That Makes or Breaks It
Here's something Microsoft's documentation mentions but doesn't emphasise enough: successful DirectQuery deployments almost always involve close collaboration between the Power BI developer and the database administrator. The Power BI developer needs to understand what queries their model generates. The DBA needs to understand the query patterns so they can optimise indexes and resources accordingly.
We've worked on projects where this collaboration was seamless - the DBA would monitor query patterns, add targeted indexes, and the reports would get faster over time. And we've worked on projects where the Power BI team and the database team barely talked to each other. The latter always ended with complaints about "Power BI being slow," when the real issue was nobody optimising the database for the actual query load.
If you're planning a DirectQuery deployment, budget time for this collaboration from the start. It's not a "build it and forget it" model. Your query patterns will change as users explore the data, and the database tuning needs to keep pace.
Our Recommendation
For most Australian organisations we work with, the pattern that works best is Import mode for the majority of reports, with DirectQuery reserved for specific scenarios where real-time data or data residency requirements make it genuinely necessary. When you do use DirectQuery, invest in database optimisation upfront rather than trying to patch performance issues after deployment.
If you're unsure which approach is right for your data, we're happy to look at your specific situation. Our Power BI consulting team works with organisations across Australia on everything from initial model design through to performance optimisation. We also offer broader business intelligence solutions that can help you think about where Power BI fits into your wider data architecture.
And if you're finding that your data preparation is getting complex enough that you need a proper pipeline before it reaches Power BI, it might be worth looking at Microsoft Fabric as part of the solution. Getting the data right before it reaches the reporting layer is consistently the highest-impact thing you can do for report performance.