
Azure AI Foundry for Enterprise - Governance and Security Considerations

April 8, 2026 · 11 min read · Michael Ridland

Enterprise AI governance isn't optional anymore. Australian regulators are paying attention, boards are asking questions, and the organisations that get governance right from the start avoid expensive rework later. We've seen too many AI projects get killed - not because the technology didn't work, but because the governance and security story wasn't strong enough for the risk committee.

Azure AI Foundry has a solid governance model, but you need to know how to configure it properly. Here's what we've learned from implementing Azure AI Foundry across Australian enterprises with real compliance requirements.

Why Governance Matters More for AI Than Traditional Software

Traditional software does what the code tells it to do. AI systems have a probabilistic element - they can produce different outputs for similar inputs, and those outputs can sometimes be wrong, biased, or inappropriate. That's not a flaw to be fixed; it's a characteristic to be managed.

For Australian enterprises, this means:

  • Regulatory exposure: If your AI system makes a decision that affects a customer (credit, insurance, employment), you need to explain how and why
  • Reputational risk: A customer-facing AI that says something inappropriate can make the news in hours
  • Data obligations: The Privacy Act 1988, APPs, and sector-specific regulations (APRA, ASIC) all have implications for how AI processes personal data
  • Board-level accountability: Directors increasingly face questions about AI governance, and "we'll figure it out later" isn't an acceptable answer

Azure AI Foundry provides the building blocks for addressing these concerns. But building blocks aren't the same as a finished house - you need to configure them deliberately.

The Hub and Project Model - Your Governance Foundation

Azure AI Foundry uses a two-level organisational structure: hubs and projects. Getting this right from the start saves significant rework.

Hubs

A hub is the top-level container. It owns the shared infrastructure: the storage account, key vault, networking configuration, and default policies. Think of it as the governance boundary.

How to structure hubs:

  • Single hub: small to mid-size organisations with consistent governance requirements
  • Hub per business unit: large enterprises where different divisions have different compliance needs
  • Hub per environment: when dev/test/prod need different security configurations
  • Hub per data classification: when you have different sensitivity levels (public, internal, confidential)

Most Australian mid-market organisations we work with start with a single hub and add separation as they grow. Large enterprises with existing data classification frameworks usually go with hub-per-classification from the start.

Projects

Projects sit within hubs and represent individual AI initiatives. Each project gets its own:

  • Model deployments
  • Data connections
  • Prompt flows
  • Evaluation results
  • Access control (inherited from hub, with project-level overrides)

The governance value of projects: They create natural boundaries for access control and cost tracking. When the CFO asks "how much are we spending on the document processing AI?", you can answer immediately if it's in its own project.
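That per-project cost answer falls out naturally if resources are tagged with their project. A minimal sketch of the rollup, assuming a hypothetical `project` tag and a simplified line-item shape (a real Azure Cost Management export carries many more columns):

```python
from collections import defaultdict

def cost_by_project(line_items):
    """Roll up cost line items by their 'project' tag.

    `line_items` is an illustrative shape for rows exported from
    Azure Cost Management; untagged resources land in "untagged".
    """
    totals = defaultdict(float)
    for item in line_items:
        project = item.get("tags", {}).get("project", "untagged")
        totals[project] += item["cost_aud"]
    return dict(totals)

items = [
    {"tags": {"project": "doc-processing"}, "cost_aud": 420.50},
    {"tags": {"project": "doc-processing"}, "cost_aud": 130.00},
    {"tags": {"project": "support-copilot"}, "cost_aud": 89.90},
    {"cost_aud": 12.00},  # no tags -> surfaces as "untagged" for cleanup
]
print(cost_by_project(items))
```

The "untagged" bucket is deliberate: it makes resources that escaped your tagging policy visible instead of silently disappearing into a shared total.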

Access Control - Getting RBAC Right

Azure AI Foundry uses Azure Role-Based Access Control (RBAC), and getting this right is one of the most important governance steps.

Built-in Roles

  • Azure AI Developer: create and manage projects, deploy models, run experiments. For data scientists and AI developers.
  • Azure AI Inference Deployment Operator: deploy and manage model endpoints only. For MLOps engineers and deployment pipelines.
  • Reader: view resources but not modify them. For stakeholders, auditors, and managers.
  • Contributor: full management access. For team leads and platform administrators.
  • Owner: full access including RBAC management. Hub administrators only.

Recommendations We Give Every Client

Principle of least privilege: Give people the minimum access they need. Developers don't need Owner access. Stakeholders don't need Contributor access. This seems obvious, but in practice, we see "everyone is Contributor" far too often.

Use Entra ID groups, not individual assignments: Create security groups like "AI-Developers", "AI-Reviewers", "AI-Admins" and assign roles to groups. When someone joins or leaves a team, you update group membership once rather than modifying multiple role assignments.

Separate dev and prod access: Your development environment should have broader access so people can experiment. Your production environment should be locked down. We typically set up separate projects for dev and prod with different RBAC policies.

Service principals for automation: Deployment pipelines and automated processes should use managed identities or service principals, not personal accounts. This ensures deployments work even when individual people leave the organisation.
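The recommendations above can be captured as a desired-state model and linted before anything is applied. A sketch, assuming two rules: roles go to Entra ID groups (never individuals), and Owner is reserved for the admin group. Group names here are illustrative, not a real tenant's configuration:

```python
# Hypothetical desired-state RBAC model for a hub.
ASSIGNMENTS = [
    {"principal": "AI-Developers", "type": "group", "role": "Azure AI Developer"},
    {"principal": "AI-Operators",  "type": "group", "role": "Azure AI Inference Deployment Operator"},
    {"principal": "AI-Reviewers",  "type": "group", "role": "Reader"},
    {"principal": "AI-Admins",     "type": "group", "role": "Owner"},
]

def lint_assignments(assignments, admin_group="AI-Admins"):
    """Flag assignments that break the two least-privilege rules."""
    problems = []
    for a in assignments:
        if a["type"] != "group":
            problems.append(f"{a['principal']}: assign roles to groups, not individuals")
        if a["role"] == "Owner" and a["principal"] != admin_group:
            problems.append(f"{a['principal']}: Owner is reserved for {admin_group}")
    return problems

print(lint_assignments(ASSIGNMENTS))  # a clean config produces no findings
```

Running a check like this in the deployment pipeline turns "everyone is Contributor" from something an auditor discovers into something the build fails on.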

Data Protection and Privacy

How Azure AI Foundry handles your data is one of the first questions every enterprise client asks. Here's the straightforward answer:

Your Data Stays Yours

  • Data you send to Azure AI Foundry models is not used to train Microsoft's models
  • Your prompts, completions, and fine-tuning data remain within your Azure tenant
  • You control where your data is stored through region selection

Data Residency for Australian Organisations

If you deploy to the Australia East region:

  • Your storage, key vault, and AI Search data physically reside in Sydney data centres
  • Model inference still happens in your selected deployment region

Important distinction: The model compute and the data storage can be in different regions. If you deploy a model in East US (for broader model availability) but store your documents in Australia East, your data travels to the US for inference and back. For organisations with strict data sovereignty requirements, this matters.

Our recommendation for data-sensitive Australian organisations:

  1. Store all source data in Australia East
  2. Deploy models in Australia East where available
  3. For models only available in US regions, conduct a privacy impact assessment before proceeding
  4. Document your data flow architecture so you can demonstrate compliance
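A review like step 3 is easy to automate across deployments. A hedged sketch, with illustrative field names and the region list limited to the two Australian regions discussed above:

```python
SOVEREIGN_REGIONS = {"australiaeast", "australiasoutheast"}

def residency_review(deployments):
    """Flag deployments where data stored in an Australian region is
    sent to offshore model compute for inference, i.e. the cases that
    need a privacy impact assessment before go-live.
    """
    flags = []
    for d in deployments:
        data_local = d["data_region"] in SOVEREIGN_REGIONS
        compute_local = d["model_region"] in SOVEREIGN_REGIONS
        if data_local and not compute_local:
            flags.append(f"{d['name']}: data in {d['data_region']} but inference "
                         f"in {d['model_region']} - assess before proceeding")
    return flags

deployments = [
    {"name": "rag-search", "data_region": "australiaeast", "model_region": "australiaeast"},
    {"name": "vision-poc", "data_region": "australiaeast", "model_region": "eastus"},
]
print(residency_review(deployments))
```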

Encryption

  • At rest: All data encrypted with AES-256 by default. You can bring your own keys (BYOK) through Azure Key Vault for additional control.
  • In transit: TLS 1.2+ for all communications between services
  • In processing: Data is encrypted in memory during model inference on confidential-computing-capable hardware (available on selected VM sizes)

Network Security

For enterprises with strict network requirements, Azure AI Foundry supports several isolation patterns:

Private Endpoints

You can configure AI Foundry to be accessible only through private endpoints within your Azure virtual network. This means:

  • No public internet access to your AI models
  • Traffic stays within the Azure backbone
  • Your corporate firewall rules apply

Setup recommendation: Enable private endpoints from the start if your organisation requires them. Retrofitting private networking after you've built applications is possible but disruptive.

Managed Virtual Networks

Azure AI Foundry can create a managed virtual network for each hub, providing network isolation without you managing the underlying infrastructure. This is the simplest approach for most enterprises and what we recommend as the starting point.

VPN and ExpressRoute

For organisations that connect to Azure through VPN or ExpressRoute, AI Foundry resources are accessible through these existing connections when private endpoints are configured. No special configuration beyond standard Azure networking.

Responsible AI - Content Safety and Guardrails

Azure AI Foundry includes several responsible AI features that matter for enterprise governance.

Content Safety

Built-in content filters that detect and block:

  • Hate speech and discriminatory content
  • Violent or self-harm content
  • Sexual content
  • Jailbreak attempts (users trying to bypass model instructions)

These filters are enabled by default and can be customised. In our experience, keep them on. The reputational cost of a single inappropriate AI response far exceeds any inconvenience from overly cautious filtering.

Configuration tip: Set up custom blocklists for terms specific to your organisation or industry. We've seen cases where industry jargon triggered false positives in content filters - custom configuration resolves this.
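In Azure AI Content Safety the blocklist itself is configured as a resource (via the portal or API); the matching behaviour is simple enough to sketch locally. Terms here are hypothetical placeholders:

```python
# Illustrative custom blocklist layered on top of the built-in filters.
BLOCKLIST = {"project sunrise", "codename falcon"}

def blocklist_hits(text, blocklist=BLOCKLIST):
    """Return the blocked terms found in `text`, case-insensitively."""
    lowered = text.lower()
    return sorted(term for term in blocklist if term in lowered)

print(blocklist_hits("Summarise the Project Sunrise board paper"))
```

The same idea works in reverse for the false-positive problem: maintain an allowlist of industry jargon and test both lists against a sample of real prompts before tightening production filters.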

Groundedness Detection

For RAG applications, groundedness detection checks whether the model's response is actually supported by the retrieved documents. This is critical for enterprise use cases where accuracy matters.

In our projects, we configure groundedness detection as a hard gate - if the model's response isn't grounded in the source documents, it returns an "I don't have enough information to answer that" response rather than speculating.
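The hard gate itself is a few lines. A sketch, where `groundedness_score` stands in for the score a groundedness-detection call would return, and the 0.8 threshold is illustrative and should be tuned against your own evaluation set:

```python
FALLBACK = "I don't have enough information to answer that."

def gated_answer(answer, groundedness_score, threshold=0.8):
    """Hard gate: pass the model's answer through only when the
    groundedness score clears the threshold; otherwise return the
    fixed fallback instead of a speculative response.
    """
    if groundedness_score >= threshold:
        return answer
    return FALLBACK

print(gated_answer("The policy covers flood damage.", 0.93))
print(gated_answer("The policy probably covers earthquakes.", 0.41))
```

Keeping the fallback as a fixed string (rather than asking the model to apologise) also makes refusals trivially easy to count in your monitoring.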

Abuse Monitoring

Azure AI Foundry logs prompts and completions for abuse monitoring purposes. For enterprises, this creates an audit trail that can be reviewed if issues arise. You can configure retention periods and access controls for these logs.

Audit and Compliance

Activity Logging

Every action in Azure AI Foundry is logged through Azure Monitor and Activity Logs:

  • Model deployments and configuration changes
  • API calls (with configurable detail levels)
  • Access control modifications
  • Data connections and changes

These logs integrate with your existing SIEM or log analytics solution through Azure Monitor.
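Once the logs land in your analytics workspace, the first thing an auditor usually asks for is the role-assignment history. A local sketch of that filter, using the real `Microsoft.Authorization/roleAssignments/write` operation name but a simplified, hypothetical record shape (real Activity Log records carry many more fields):

```python
def rbac_changes(log_entries):
    """Pull out role-assignment operations from activity-log entries."""
    return [
        e for e in log_entries
        if e["operation"].startswith("Microsoft.Authorization/roleAssignments/")
    ]

entries = [
    {"operation": "Microsoft.Authorization/roleAssignments/write",
     "caller": "admin@contoso.com"},
    {"operation": "Microsoft.CognitiveServices/accounts/deployments/write",
     "caller": "pipeline-sp"},
]
for change in rbac_changes(entries):
    print(change["caller"], change["operation"])
```

In practice you would run the equivalent query in Log Analytics and alert on it, rather than post-processing exports, but the filter is the same.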

Compliance Certifications

Azure AI Foundry inherits Azure's compliance certifications, including:

  • ISO 27001, 27017, 27018
  • SOC 1, SOC 2, SOC 3
  • IRAP (Australian Government - PROTECTED level)
  • PCI DSS (for payment card data)

For Australian Government clients, the IRAP assessment is particularly relevant. Azure's Australia East and Australia Southeast regions are assessed to PROTECTED level, which covers most government AI use cases.

Regulatory Considerations for Australian Enterprises

Financial services (APRA-regulated): CPS 234 (Information Security) applies to AI systems that process customer data. The key requirements are around information asset classification, implementation of controls, and incident management. Azure AI Foundry's logging and access controls support CPS 234 compliance, but you need to configure them deliberately.

Healthcare: The My Health Records Act and Privacy Act have specific requirements around health data. If you're building AI systems that process health information, ensure your data residency configuration keeps health data within Australia.

Government: The Australian Government's AI Ethics Framework and the voluntary AI Safety Standard both have implications for how AI systems are designed and operated. Azure AI Foundry's responsible AI features support compliance with these frameworks.

Governance Checklist for Enterprise Deployments

Before deploying Azure AI Foundry in production, walk through this checklist:

Access and Identity

  • Entra ID groups created for AI roles (Developers, Operators, Reviewers, Admins)
  • RBAC configured at hub and project level
  • Service principals created for automated pipelines
  • Multi-factor authentication enforced for all human accounts
  • Conditional access policies reviewed for AI resources

Data Protection

  • Data classification completed for all AI training and inference data
  • Data residency requirements documented and configured
  • Encryption configuration reviewed (BYOK if required)
  • Data retention policies defined
  • Privacy impact assessment completed (if processing personal data)

Network Security

  • Network isolation strategy chosen (private endpoints, managed VNet, or public)
  • Firewall rules configured
  • DNS resolution tested for private endpoints
  • ExpressRoute/VPN connectivity verified (if applicable)

Responsible AI

  • Content safety filters configured and tested
  • Custom blocklists added for industry-specific terms
  • Groundedness detection enabled for RAG applications
  • Abuse monitoring configured with appropriate retention
  • Human review process defined for edge cases

Monitoring and Audit

  • Azure Monitor configured for all AI Foundry resources
  • Alerts set for anomalous usage patterns
  • Cost alerts configured at project level
  • Activity logs routed to central log analytics
  • Compliance evidence collection automated where possible

Operational

  • Incident response plan includes AI-specific scenarios
  • Model update and redeployment process defined
  • Rollback procedure tested
  • Business continuity plan covers AI service dependencies

The Cost of Getting Governance Wrong

We've seen two common failure modes:

Too little governance: An AI project goes to production without proper access controls, content safety, or monitoring. Something goes wrong (it always does eventually), and the organisation has no audit trail, no ability to explain what happened, and no quick way to fix it. The project gets shut down entirely, and the organisation becomes gun-shy about AI for the next 12 months.

Too much governance: The governance framework is so heavy that it takes 6 months to get approval for a proof of concept. By the time anything is built, the business has moved on and the project loses its sponsor. We've seen this at several large Australian organisations where the AI governance committee meets monthly and has a 12-step approval process.

The right approach is somewhere in the middle. Start with sensible defaults, add governance controls proportional to the risk, and make the approval process fast for low-risk experiments while maintaining rigorous oversight for production systems.

How Team 400 Helps With Enterprise AI Governance

We've implemented Azure AI Foundry governance frameworks for organisations across financial services, government, and professional services. We know what auditors ask for, what risk committees care about, and how to set up governance that protects the organisation without killing the project.

Our Azure AI Foundry consulting includes governance architecture as a standard part of every engagement. We also work with clients' existing security and compliance teams to ensure AI governance integrates with their broader framework.

If you're planning an enterprise Azure AI Foundry deployment and governance is a concern (it should be), get in touch. We'll review your requirements and help you build a governance framework that's proportionate to your risk profile.

You can also explore our broader AI consulting services or learn more about working with us as your Azure AI consulting partner.