
Azure AI Services Security Features - What Australian Businesses Need to Get Right

April 8, 2026 · 8 min read · Michael Ridland


Every conversation I have with an Australian enterprise about adopting AI services eventually arrives at the same question. Not "how does it work?" or "what does it cost?" but "how do we keep it secure?"

Fair enough. When you're sending customer data, internal documents, or proprietary information to an AI endpoint for processing, you need to know that the pipeline is locked down properly. And Azure AI Services - now part of the broader Azure AI Foundry platform - actually has a solid set of security controls. The problem is that most teams only configure about half of them, usually because the getting-started tutorials skip straight past security to get to the fun stuff.

I've spent enough time reviewing AI deployments to know what gets missed. Here's what you should actually be paying attention to.

Transport Layer Security - The Baseline

All Azure AI Services endpoints enforce TLS 1.2 at minimum, with TLS 1.3 available as an option. This is the floor, not the ceiling. If your client application is somehow still trying to connect over older TLS versions, the connection will be rejected.

For .NET developers (and we do a lot of .NET work), this is usually handled automatically by the framework. But I've seen older applications running on .NET Framework 4.5 that default to TLS 1.0 unless you explicitly configure the security protocol. It's a simple fix but easy to miss, especially when you're migrating existing applications to use AI capabilities.

The practical takeaway: make sure your client applications, SDKs, and any intermediate services (API gateways, reverse proxies) are all configured to use TLS 1.2 or 1.3. Don't assume it's the default in every environment.
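For .NET Framework apps the fix is usually a one-liner setting `ServicePointManager.SecurityProtocol`. To show the same idea in a runnable form, here's a Python sketch that pins the client-side TLS floor to 1.2; the endpoint in the comment is a placeholder, not a real resource:

```python
import ssl

# Build a client-side TLS context that refuses anything below TLS 1.2.
# Azure AI Services endpoints reject older versions anyway, but pinning
# the floor on the client side surfaces misconfiguration early.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# Pass this context to whatever HTTP client you use, e.g.:
# urllib.request.urlopen("https://<your-resource>.cognitiveservices.azure.com/",
#                        context=context)
```

The same principle applies whatever the stack: set the minimum explicitly rather than trusting the platform default.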

Authentication - Stop Using Just API Keys

This is the biggest gap I see in production deployments. By default, Azure AI Services uses subscription keys for authentication. You get two keys per resource, you stick one in your application config, and you're off to the races.

The problem is that API keys are a shared secret. Anyone with the key has full access to the resource. They get emailed around, committed to source control, stored in plain text config files. We've walked into client environments where the same API key was being used across development, staging, and production, and it had been the same key for over a year.

Microsoft Entra ID (formerly Azure Active Directory) with managed identities is the way to go for production workloads. Here's why it matters:

Managed identities mean your application authenticates to Azure AI Services without any secrets in your code or configuration. The identity is managed by Azure itself. No keys to rotate, no secrets to leak, no credentials stored anywhere your developers might accidentally expose them.

Role-based access control lets you define exactly what each identity can do. Maybe your web application only needs to call the text analytics API, not manage the resource itself. You can scope the permissions accordingly.

Conditional access policies can restrict which networks, devices, or conditions allow authentication, adding another layer that API keys simply don't support.

If you're running in Azure, there's really no good reason not to use managed identities. The SDK support is there, the setup takes maybe fifteen minutes, and you eliminate an entire category of security risk.

That said, API keys still have their place. For quick prototyping, local development, and testing - they're fine. Just don't let them leak into production without a conversation about whether managed identity would be better. It almost always is.
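In Python, the managed identity pattern looks something like this. It's a sketch, not a drop-in: it assumes the `azure-identity` and `azure-ai-textanalytics` packages, and the endpoint is a placeholder for your own resource URL. `DefaultAzureCredential` resolves to the managed identity when the code runs inside Azure, and to your developer login (Azure CLI, environment variables) locally.

```python
def make_text_analytics_client(endpoint: str):
    """Build a Text Analytics client that authenticates with a managed
    identity instead of an API key. Requires azure-identity and
    azure-ai-textanalytics; `endpoint` is your resource's URL."""
    from azure.identity import DefaultAzureCredential
    from azure.ai.textanalytics import TextAnalyticsClient

    # DefaultAzureCredential tries managed identity first when running in
    # Azure -- no secret ever appears in code or configuration.
    credential = DefaultAzureCredential()
    return TextAnalyticsClient(endpoint=endpoint, credential=credential)

# client = make_text_analytics_client(
#     "https://<your-resource>.cognitiveservices.azure.com/")
```

Note there's no key anywhere in that code, which is the whole point: there is nothing to rotate, leak, or commit to source control.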

Key Rotation

Even if you are using API keys (and sometimes you have to, for third-party integrations or legacy systems), Azure gives you two keys per resource for a reason. The idea is that you rotate them regularly - switch your application to use key 2, regenerate key 1, then later switch back and regenerate key 2.

The number of organisations that actually do this? In my experience, very few. Most set the key once and forget about it until something goes wrong.

Automating key rotation through Azure Key Vault is the right approach. Store your API keys in Key Vault, reference them from your application, and set up a rotation policy. When a key gets rotated, your app picks up the new one automatically. It takes some initial setup, but once it's in place, you don't think about it again.
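To make the swap-and-regenerate dance concrete, here's a toy model of the two-key pattern in Python. The class is purely illustrative (my own names, not an Azure SDK type); against a real resource the regenerate step is `az cognitiveservices account keys regenerate`:

```python
import secrets


class TwoKeyResource:
    """Toy stand-in for an Azure AI resource's key pair -- illustration
    only. Both keys are valid simultaneously, which is what makes
    zero-downtime rotation possible."""

    def __init__(self):
        self._keys = {"key1": secrets.token_hex(16),
                      "key2": secrets.token_hex(16)}

    def get(self, name: str) -> str:
        return self._keys[name]

    def regenerate(self, name: str) -> None:
        self._keys[name] = secrets.token_hex(16)


resource = TwoKeyResource()
old_key1 = resource.get("key1")   # the key currently baked into the app

# Step 1: switch the application over to key 2 (no downtime -- both work).
app_key = resource.get("key2")
# Step 2: regenerate key 1, invalidating the old value everywhere it leaked.
resource.regenerate("key1")

assert resource.get("key1") != old_key1
assert app_key == resource.get("key2")
```

Next rotation cycle, you do the mirror image: move the app back to key 1, then regenerate key 2.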

Virtual Networks - Restricting Who Can Call Your AI Endpoints

By default, your Azure AI Services resource is accessible from anywhere on the internet. If someone has your API key, they can call it from any IP address, any country, any network.

Virtual network rules let you lock this down. You can configure your resource to only accept traffic from specific Azure virtual networks, specific IP address ranges, or a combination of both. Everything else gets rejected.

For Australian businesses with data sovereignty concerns, this is particularly relevant. You can ensure that only your applications running in your Azure subscription, within your configured network boundaries, can access the AI service. No external access, no unexpected traffic from unknown sources.

We typically recommend setting this up even in development environments. It forces you to design your network architecture properly from the start, rather than bolting on network restrictions later and discovering that half your integration tests break because they were calling the endpoint from outside the allowed network.

The one thing to watch out for: if you're using Azure AI Services containers (running inference locally), the container still needs outbound connectivity to the billing endpoint. Make sure your network rules don't accidentally block that.
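With the Azure CLI, the rules look roughly like this. The resource, group, VNet, and subnet names are placeholders, and the subnet needs a Cognitive Services service endpoint enabled; check the current CLI reference before copying, as flags shift between versions:

```shell
# Allow calls only from a specific VNet subnet...
az cognitiveservices account network-rule add \
    --resource-group my-rg --name my-ai-resource \
    --vnet-name my-vnet --subnet app-subnet

# ...and from a known office IP range. Everything else is rejected once
# the resource's default network action is set to Deny.
az cognitiveservices account network-rule add \
    --resource-group my-rg --name my-ai-resource \
    --ip-address 203.0.113.0/24
```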

Customer-Managed Keys

Azure encrypts data at rest by default using Microsoft-managed keys. For most workloads, this is sufficient. But some industries - financial services, government, healthcare - have requirements around managing their own encryption keys.

Customer-managed keys (CMK) let you bring your own encryption keys via Azure Key Vault. Your data gets encrypted with your key, which you control. You can rotate it, revoke it, or audit access to it independently of Microsoft.

Not all Azure AI Services support CMK, so check the documentation for your specific service before planning around it. Azure OpenAI supports it, as do the Translator service and several of the language services.

My honest assessment: unless you have a specific regulatory requirement for CMK, the Microsoft-managed encryption is fine. The added operational overhead of managing your own keys - making sure they don't expire, ensuring key vault availability, handling key rotation - is real. Don't take it on unless you need to.

Data Loss Prevention

This one flies under the radar, but it's worth understanding. Some Azure AI Services accept URIs as inputs - you pass a URL and the service fetches and processes the content at that URL. The data loss prevention feature lets you restrict what types of URIs the service will accept.

Why does this matter? Consider an internal scenario where an employee (or a compromised application) submits a URL containing sensitive data in the query string to an Azure AI service. The service processes it, and now that data has effectively been exfiltrated from your network through a legitimate-looking API call.

By configuring data loss prevention, you can restrict the service to only accept URIs from approved domains or patterns. It's a niche feature, but if you're in a regulated industry, it's the kind of control that security auditors appreciate seeing.
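The enforcement happens service-side (an allowed-FQDN list on the resource), but conceptually it's just a hostname allowlist. Here's a rough sketch of the same check in Python - the domains are made-up examples, and this is an illustration of the logic, not the Azure implementation:

```python
from urllib.parse import urlparse

# Hypothetical approved domains -- in Azure this lives on the resource itself.
ALLOWED_DOMAINS = {"contoso.com", "intranet.contoso.com"}


def uri_permitted(uri: str) -> bool:
    """Accept a URI only if its host is an approved domain or a
    subdomain of one; everything else is rejected."""
    host = (urlparse(uri).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in ALLOWED_DOMAINS)


assert uri_permitted("https://contoso.com/report.pdf")
assert uri_permitted("https://files.contoso.com/doc.docx")
assert not uri_permitted("https://evil.example/steal?payload=secret")
```

Note the last case: the sensitive payload rides in the query string of an otherwise valid-looking request, which is exactly the exfiltration path the feature closes off.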

Customer Lockbox

Customer Lockbox gives you a formal approval process for any situation where a Microsoft support engineer needs to access your data while resolving a support ticket. Instead of Microsoft engineers having standing access, each data access request goes through an approval workflow that you control.

This matters for compliance. If you need to demonstrate to auditors that no third party - including your cloud provider - can access customer data without explicit, documented approval, Customer Lockbox provides that audit trail.

It's available for Azure OpenAI, Translator, and several language understanding services. For other services like the Speech service, you can achieve similar controls through the bring-your-own-storage (BYOS) capability, where your service data lives in a storage account you own and control.

Putting It Together - A Practical Security Checklist

Here's the order I'd recommend implementing these features for a new Azure AI Services deployment:

  1. Enable managed identity authentication and remove API keys from your application code. This is the highest-impact change with the least effort.

  2. Configure virtual network rules to restrict access to known networks. Do this early, before you've built integrations that assume open access.

  3. Set up key rotation through Key Vault for any remaining API key usage.

  4. Enable Customer Lockbox if you're in a regulated industry.

  5. Evaluate CMK based on your specific compliance requirements. Don't implement it "just in case."

  6. Configure data loss prevention if your services accept URI inputs and you're concerned about data exfiltration.

How This Fits Into Your AI Security Posture

Security for AI services isn't a one-off configuration exercise. It's part of a broader approach that includes model evaluation, data handling policies, access controls, and monitoring.

If you're building AI capabilities on Azure and want to make sure the security foundations are right, our Azure AI consulting team can help you design an architecture that meets your compliance requirements without slowing down your development teams. We work with organisations across Australia on exactly these kinds of decisions - balancing security with velocity.

For the complete security feature reference, see Microsoft's Azure AI Services security documentation.