AI Data Privacy Requirements in Australia - What You Need to Know

April 15, 2026 · 9 min read · Michael Ridland

What are the data privacy requirements for AI in Australia?

It's a question we get asked in almost every initial conversation with clients. And the answer is more straightforward than most people expect - existing Australian privacy law already applies to AI, and it applies broadly.

If your AI system collects, stores, uses, or discloses personal information, you have obligations under the Privacy Act 1988 (Cth) and the Australian Privacy Principles (APPs). The fact that it's an AI doing the processing doesn't create an exemption. If anything, AI introduces new risks that make compliance harder.

Here's what you actually need to know.

Does the Privacy Act Apply to AI Systems?

Yes. The Privacy Act applies to how personal information is handled, regardless of whether a human or an algorithm is doing the handling. The Office of the Australian Information Commissioner (OAIC) has been clear on this.

If your AI system processes personal information - names, email addresses, transaction histories, health data, biometric data, or anything else that could identify an individual - you need to comply with the APPs.

This includes:

  • AI systems trained on datasets containing personal information
  • AI agents that interact with customers and collect data
  • Automated decision-making systems that use personal data
  • AI analytics tools that process customer behaviour data
  • Large language models that ingest or generate content about individuals

In our experience working with Australian businesses, many don't initially realise how much personal information their AI systems actually touch. A thorough data mapping exercise is always the first step.

What Are the Australian Privacy Principles That Matter Most for AI?

There are 13 APPs, but several are particularly relevant when you're deploying AI.

APP 1 - Open and Transparent Management

You need a privacy policy that covers your AI use. If you're using AI to process personal information, your privacy policy should say so. Vague language about "automated systems" isn't sufficient.

What to include:

  • What AI systems you use that handle personal information
  • What personal information those systems process
  • How AI-processed data is stored and protected
  • Whether data is sent offshore for processing (more on this below)

APP 3 - Collection of Solicited Personal Information

You can only collect personal information that is reasonably necessary for your business functions. AI systems that hoover up data "just in case it's useful later" create problems here.

Practical steps:

  • Define exactly what data your AI needs to function
  • Don't collect more than is necessary
  • Document why each data element is required
  • Review collection practices when AI models are updated

APP 5 - Notification of Collection

Individuals must be told when you collect their personal information and what you'll do with it. If your AI chatbot is collecting data from customers, they need to know.

What this means for AI:

  • Inform users when they're interacting with an AI system
  • Explain what data the AI collects during the interaction
  • Describe how that data will be used, including for model training
  • Provide clear opt-out mechanisms where feasible

APP 6 - Use or Disclosure of Personal Information

Personal information can only be used for the purpose it was collected for, unless an exception applies. This is where many AI projects run into trouble.

Common pitfall: Collecting customer data for service delivery, then using it to train an AI model. The training purpose may not be covered by the original collection notice.

APP 8 - Cross-Border Disclosure

If your AI system sends personal information overseas - including to cloud-based AI services hosted in the US, Europe, or Asia - you need to comply with APP 8. You remain accountable for the overseas recipient's handling of that data.

This catches many businesses off guard. Using OpenAI's API, Google Cloud AI, or AWS AI services means data is likely leaving Australia. You need to:

  • Know where data is processed and stored
  • Ensure the overseas recipient handles data consistently with the APPs
  • Consider whether Australian-hosted alternatives are appropriate for sensitive data
  • Include cross-border disclosure in your privacy notices

APP 11 - Security of Personal Information

You must take reasonable steps to protect personal information from misuse, interference, loss, and unauthorised access. For AI systems, this includes protecting training data, model inputs, model outputs, and the models themselves.

What About Automated Decision-Making?

This is where Australian law is evolving. Currently, there's no standalone right to explanation for automated decisions under the Privacy Act - unlike the EU's GDPR, which has specific provisions.

However, the Australian Government's review of the Privacy Act proposed introducing a right to know when a substantially automated decision has been made about you, and to request meaningful information about how the decision was made.

While this isn't law yet, we recommend our clients prepare for it. In practice, this means:

  • Document how your AI makes decisions - what data inputs, what logic, what outputs
  • Build explainability into your AI systems from the start - it's much harder to retrofit
  • Maintain human review for high-impact decisions - employment, credit, insurance, healthcare
  • Keep records of automated decisions - you may need to explain them later
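The record-keeping point above can be sketched as a minimal append-only decision log. The field names, model version, and input keys here are hypothetical; the point is to capture what went in, what came out, which model version produced it, and whether a human reviewed it:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One auditable record per automated decision (illustrative schema)."""
    model_version: str        # which model/logic version made the decision
    inputs: dict              # minimised inputs used, not a raw PII dump
    output: str               # the decision itself
    reviewed_by_human: bool   # was a human in the loop for this decision?
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_decision(record: DecisionRecord, log: list) -> None:
    # Append-only in this sketch; in production this would go to
    # durable, tamper-evident storage with a retention policy.
    log.append(asdict(record))

audit_log: list = []
log_decision(
    DecisionRecord(
        model_version="credit-scorer-v3.2",   # hypothetical version tag
        inputs={"income_band": "B", "postcode_region": "metro"},
        output="refer_to_human",
        reviewed_by_human=True,
    ),
    audit_log,
)
```

If a transparency right lands in the Privacy Act, a log like this is what lets you answer "how was this decision made about me?" without reverse-engineering your own system.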

The direction of travel is clear. Automated decision-making transparency requirements are coming. Building for them now is cheaper than rebuilding later.

Data Handling Obligations for AI Training

Training AI models on personal information raises specific privacy questions.

Can You Use Customer Data to Train AI Models?

It depends on your collection notice and consent arrangements. If you told customers you'd use their data to "improve services," that may or may not extend to training an AI model. The safer approach:

  1. Review your existing privacy notices and consent mechanisms
  2. Assess whether AI training falls within the stated purpose of collection
  3. If it doesn't, either update notices and obtain fresh consent, or anonymise the data before training
  4. Document your assessment and reasoning

De-identification and Anonymisation

De-identified data is not personal information, so the APPs don't apply to it. But de-identification must be genuine - if the data can be re-identified, it's still personal information.

For AI training data:

  • Remove direct identifiers (names, addresses, email, phone numbers)
  • Remove or generalise indirect identifiers (dates of birth, postcodes, job titles)
  • Test whether individuals can be re-identified from the remaining data
  • Document your de-identification process
  • Consider the risk that combining de-identified datasets could enable re-identification

We've seen organisations assume that removing names is sufficient de-identification. It isn't. A combination of age, postcode, and profession can often identify individuals.
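The first two steps above, dropping direct identifiers and generalising indirect ones, can be sketched per-record like this. The field names are hypothetical, and this is deliberately simplistic: real de-identification requires re-identification risk testing across the whole dataset, not per-record rules.

```python
def deidentify(record: dict) -> dict:
    """Drop direct identifiers and generalise indirect ones.

    Illustrative sketch only; genuine de-identification must be
    assessed against the risk of re-identifying the full dataset.
    """
    DIRECT_IDENTIFIERS = {"name", "email", "phone", "address"}
    out = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

    # Generalise indirect identifiers instead of keeping exact values:
    # a date of birth becomes a decade band, a postcode a broad region.
    if "date_of_birth" in out:
        out["birth_decade"] = (int(out.pop("date_of_birth")[:4]) // 10) * 10
    if "postcode" in out:
        out["postcode_region"] = out.pop("postcode")[:2] + "xx"
    return out

customer = {
    "name": "Jane Citizen",
    "email": "jane@example.com",
    "date_of_birth": "1987-03-14",
    "postcode": "2041",
    "plan": "premium",
}
safe = deidentify(customer)
```

Even after this, you still need to test whether the remaining fields, alone or combined with other datasets, can single out an individual.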

Industry-Specific Privacy Requirements

Financial Services

APRA-regulated entities have additional obligations under CPS 234 (Information Security) and prudential guidance. AI systems handling financial data need to meet these standards, which go beyond the Privacy Act. See our detailed guide on AI compliance for financial services.

Healthcare

Health information receives additional protections under the Privacy Act. AI systems processing health records, diagnostic data, or patient information face stricter requirements. The Therapeutic Goods Administration (TGA) also regulates software-based medical devices, which can capture AI used in diagnosis and clinical decision support.

Government

Government agencies are covered by the Privacy Act and may have additional obligations under agency-specific legislation and the Digital Transformation Agency's guidance.

Practical Privacy Compliance Checklist for AI Projects

Here's the checklist we use with our clients at Team 400 before any AI deployment:

Before Development:

  • Conduct a Privacy Impact Assessment (PIA) for the AI system
  • Map all personal information the AI will collect, use, store, and disclose
  • Identify the legal basis for each data use
  • Assess cross-border data flows
  • Review and update privacy notices
  • Determine consent requirements

During Development:

  • Implement data minimisation - collect only what's needed
  • Build in de-identification where personal data isn't required
  • Design for explainability from the start
  • Implement access controls and encryption
  • Create audit trails for data processing
  • Test for data leakage in model outputs

Before Deployment:

  • Complete the PIA and address identified risks
  • Update privacy policies to reflect AI use
  • Implement monitoring for privacy incidents
  • Train staff on privacy obligations related to the AI system
  • Establish a process for handling access and correction requests
  • Document data retention and deletion procedures

Ongoing:

  • Regular privacy reviews as the AI system evolves
  • Monitor for changes in privacy law and guidance
  • Review third-party AI service providers annually
  • Maintain incident response procedures
  • Update PIAs when systems change materially

Common Mistakes We See

1. Treating AI as a technology project, not a data project. AI is fundamentally about data. Privacy needs to be involved from the beginning, not consulted after the system is built.

2. Ignoring vendor data practices. When you use a third-party AI service, you need to understand what they do with your data. Do they use it for model training? Where is it stored? Who has access?

3. Assuming consent covers everything. Broad consent clauses may not extend to new AI uses. Review your consent mechanisms against specific AI use cases.

4. Forgetting about model outputs. AI models can sometimes reproduce personal information from their training data. Test for this and implement guardrails.

5. Not planning for data subject requests. Individuals have rights to access and correct their personal information. If that information is embedded in an AI model, how do you handle those requests?

What's Coming Next

The Australian Government is actively considering stronger AI-specific privacy requirements. Based on current consultations and proposals:

  • Mandatory Privacy Impact Assessments for high-risk AI systems are likely
  • Automated decision-making transparency requirements are being developed
  • Children's privacy protections specific to AI are under consideration
  • AI-specific data handling standards may emerge from the ongoing Privacy Act review

We recommend treating these as likely future requirements and building for them now. The cost of compliance is always lower when it's designed in rather than bolted on.

How Team 400 Helps

At Team 400, we build AI systems for Australian businesses with privacy compliance built in from day one. Our approach includes Privacy Impact Assessments, data mapping, and architecture decisions that make compliance practical rather than painful.

We work across financial services, healthcare, and enterprise clients who need AI systems that meet Australian privacy requirements.

If you're planning an AI project and need to get privacy right, talk to our team. We'll help you build AI that works within Australian law, not around it.