Using Copilot to Auto-Generate Measure Descriptions in Power BI
If you've ever opened someone else's Power BI semantic model and tried to figure out what "Measure 7" actually calculates, you know the pain. Measure descriptions are one of those things that everyone agrees are important and nobody actually writes. It's like commenting code - in theory every team does it, in practice most models are a graveyard of cryptic measure names with blank description fields.
Microsoft's answer to this is a Copilot feature that reads your DAX formula and generates a plain-English description automatically. I've been watching this play out across several of our Power BI consulting engagements, and it's one of those quiet features that saves more time than it probably gets credit for.
How It Works
The feature lives in the Model view of Power BI Desktop (and in the web-based data model editor too). When you select a measure and look at its properties, there's a "Create with Copilot" button sitting right under the Description textbox. Click it, Copilot reads the DAX formula, and generates a natural language description. You review it, click "Keep it," and the description is saved to your model.
That's it. No complex setup, no prompt engineering, no configuring API keys. It reads the formula, writes a description, and you approve or reject it.
If you later update your DAX formula, you can hit the button again and Copilot will regenerate the description to match the new logic. This matters more than it sounds - stale descriptions that don't match current logic are arguably worse than no descriptions at all.
Microsoft has the full official documentation here if you want the technical specifics.
Why This Actually Matters in Practice
I'll be honest - when I first heard about auto-generated measure descriptions, I thought it was a gimmick. Another AI feature bolted onto a product for marketing purposes. But after seeing it used in real projects, I've changed my mind. Here's why.
Report authors can only see names and descriptions. When someone is building a report from a shared semantic model, they see the measure name and the description. That's it. They can't easily see the DAX formula behind it. So if your description field is blank - which it is in roughly 90% of the models we encounter during client onboarding - the report author is guessing based on the name alone. "Revenue YTD" could mean gross revenue, net revenue, or revenue after adjustments. Without a description, you're rolling the dice.
Documentation debt is real. Most organisations we work with in Australia have Power BI models that have grown organically over months or years. Someone creates a measure for a specific report, another person adds a calculated column, a third person builds a complex time intelligence measure. Nobody documents any of it. By the time the original author leaves or changes roles, the institutional knowledge walks out the door. We see this pattern repeatedly at organisations across Melbourne, Sydney, and Brisbane.
Manual description writing doesn't scale. I've tried the "let's have a documentation sprint" approach with clients. It works for about two days. Then priorities shift, deadlines hit, and the documentation effort dies. Having a one-click option that generates a reasonable first draft changes the economics of documentation entirely.
What It Gets Right
The descriptions Copilot generates are surprisingly readable. For straightforward measures - SUMX, CALCULATE with basic filters, time intelligence functions like SAMEPERIODLASTYEAR - it produces descriptions that a business analyst could actually understand. Not just restating the DAX in English, but explaining what the calculation does in business terms.
For example, a measure like CALCULATE(SUM(Sales[Revenue]), FILTER(ALL('Date'), 'Date'[Year] = YEAR(TODAY()))) might get described as something like "Calculates the total revenue for the current calendar year, removing any other date filters." That's useful. That's the kind of description that helps someone decide whether this is the right measure for their report.
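Formatted as a full measure definition, that example looks like this. The measure name and the description shown in the comment are illustrative, not output captured from Copilot:

```dax
-- Illustrative measure name; 'Date'[Year] assumes a standard date table.
Revenue CY =
CALCULATE (
    SUM ( Sales[Revenue] ),
    FILTER (
        ALL ( 'Date' ),
        'Date'[Year] = YEAR ( TODAY () )
    )
)
-- A Copilot-style description for this measure might read:
-- "Calculates the total revenue for the current calendar year,
--  removing any other date filters."
```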
It also handles nested calculations reasonably well. Measures that use variables, multiple CALCULATE modifiers, or conditional logic via SWITCH statements come back with descriptions that follow the logic step by step. Not always perfectly, but close enough to be a useful starting point.
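To make "nested calculations" concrete, here's the kind of measure I mean - variables feeding a SWITCH over threshold conditions. The table, column, and threshold values are made up for illustration; the point is the shape of the logic Copilot has to walk through:

```dax
-- Hypothetical example: Sales[Revenue], Sales[Cost], and the margin
-- thresholds are illustrative, not from a real model.
Margin Status =
VAR Revenue = SUM ( Sales[Revenue] )
VAR Cost = SUM ( Sales[Cost] )
VAR MarginPct = DIVIDE ( Revenue - Cost, Revenue )
RETURN
    SWITCH (
        TRUE (),
        MarginPct >= 0.40, "Healthy",
        MarginPct >= 0.25, "Watch",
        "At Risk"
    )
```

For a measure like this, a good generated description walks the variables in order - revenue, cost, margin percentage - and then describes the branching, which is roughly what Copilot produces.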
Where It Falls Short
A few consistent limitations worth knowing about before you rely on this heavily.
Comments in your DAX are ignored. If you've been diligent enough to comment your DAX formulas (good on you), Copilot doesn't use those comments when generating descriptions. This is a missed opportunity. Those comments often contain the "why" that the formula itself doesn't capture - business rules, edge cases, reasons for specific filter choices. Copilot only looks at the DAX syntax.
Text in double quotes is skipped. Copilot also ignores anything inside double quotation marks - string literals used in comparisons, status values, labels returned by SWITCH branches, and the like. In most cases this doesn't matter much, but for measures that rely heavily on string comparisons or categorical filtering, the descriptions can miss important context.
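Here's a sketch of both blind spots in one measure. The table and column names are hypothetical; the comments and the quoted "Active" value mark exactly the parts Copilot doesn't read:

```dax
-- Hypothetical measure for illustration only.
Active Customer Sales =
-- Business rule: "Active" follows the finance team's definition,
-- which excludes trial accounts. Copilot ignores this comment entirely.
CALCULATE (
    SUM ( Sales[Revenue] ),
    Customer[Status] = "Active"   -- the quoted "Active" is also skipped
)
```

The generated description will say something like "total revenue filtered to a specific customer status" - mechanically accurate, but the business rule lives only in the comment, which never makes it into the output.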
It doesn't capture business intent. This is the fundamental limitation. Copilot can tell you what the formula does mechanically, but it can't tell you why it exists or when to use it versus a similar measure. If you have "Revenue Adjusted" and "Revenue Normalised" in the same model, Copilot can describe the mechanics of each, but it can't explain the business context for when you'd pick one over the other. That context still needs to come from a human.
Error-state measures get nothing. If your measure has an error - even a minor one like a broken reference - Copilot won't generate a description. Fair enough, but it means you can't use this feature to help diagnose what a broken measure was supposed to do.
Practical Advice From Client Work
After rolling this out across several client engagements, here's how I'd recommend approaching it.
Batch it as a model hygiene task. Don't try to generate descriptions one by one as you think of it. Set aside an hour, open Model view, and work through your measures systematically. Start with the ones that are most widely used in reports - those are the descriptions that deliver the most value.
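One way to build that worklist is to query the model's own metadata for measures with blank descriptions. This is a sketch that assumes your build of Power BI Desktop supports the DAX INFO functions in DAX query view; column names may vary between versions, so verify against your environment:

```dax
// Run in DAX query view: lists measures whose Description is blank,
// giving you a worklist for a description-generation pass.
EVALUATE
FILTER (
    SELECTCOLUMNS (
        INFO.MEASURES (),
        "Measure", [Name],
        "Description", [Description]
    ),
    ISBLANK ( [Description] )
)
```

Sort the resulting list by how widely each measure is used in reports and work down from the top.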
Edit after generating. Treat Copilot's output as a first draft, not a finished product. Read each description and add the business context that Copilot can't know. "Calculates total sales after returns" is what Copilot gives you. "Total sales net of returns, used for monthly executive reporting - does not include warranty replacements" is what your report authors actually need.
Use it during model handovers. This is where the feature really earns its keep. When we're taking over management of a client's Power BI environment, one of the first things we do now is run through all the measures and generate descriptions. Even the imperfect descriptions give us a faster understanding of the model than reading raw DAX. We can then refine and add context as we learn the business logic.
Combine it with the DAX explain feature. We wrote about using Copilot for DAX queries recently, and the explain feature there pairs well with measure descriptions. Use the description generator for the quick summary, then use the explain feature when you need the detailed walkthrough of how a complex measure actually works.
Prerequisites You Should Know About
A few things that trip people up:
- You need a workspace with Fabric capacity. This isn't available on standalone Power BI Pro without Fabric.
- The measure needs to be in a valid state - no errors in the DAX formula.
- It works both in Power BI Desktop and in the web-based model editor. Desktop tends to be faster in my experience.
- Your tenant admin needs to have Copilot features enabled. If you're hitting a greyed-out button, that's probably why.
The Bigger Picture
This feature sits in a broader trend Microsoft is pushing - using AI to reduce the friction of data model governance. Between Copilot for measure descriptions, Copilot for DAX queries, and the data lineage features in Fabric, Microsoft is trying to make it easier to build, document, and maintain semantic models at scale.
For Australian organisations running Power BI, the practical takeaway is this: if you've been meaning to document your semantic models and never got around to it, the barrier just got a lot lower. It's not going to write your data dictionary for you, but it removes the biggest excuse people have for not doing it - that writing descriptions for dozens of measures takes too long.
If you're dealing with undocumented Power BI models or thinking about scaling your BI environment, our Power BI consulting team can help you get things in order. We also work with organisations on broader business intelligence strategies that go beyond individual reports and dashboards.
The feature isn't perfect. But perfect documentation that nobody writes is worth less than good-enough documentation that actually exists in your model.