
AI and the Privacy Act - What Australian Companies Must Do

April 17, 2026 · 10 min read · Michael Ridland

Does the Privacy Act apply to your AI system?

Almost certainly, yes. If your AI system touches personal information in any way - collecting it, using it, storing it, disclosing it, or processing it - the Privacy Act 1988 (Cth) applies. There's no AI exemption, no technology carve-out, and no "but the algorithm did it" defence.

We work with Australian companies every week on this question. Here's what you actually need to do to comply.

How the Privacy Act Applies to AI

The Privacy Act regulates how "APP entities" (most private sector organisations with annual turnover above $3 million, plus all government agencies) handle "personal information." Personal information is information about an identified or reasonably identifiable individual.

AI systems interact with personal information in multiple ways:

Collection: AI chatbots collecting customer details. AI forms gathering user data. AI systems receiving data feeds from other systems.

Use: AI analysing customer behaviour. AI scoring credit applications. AI categorising support tickets that contain personal details.

Storage: Training datasets containing personal information. AI-generated profiles. Conversation logs.

Disclosure: AI outputs shared with third parties. Data sent to cloud AI services. Cross-border data transfers to AI providers.

The Privacy Act doesn't care whether a human or a machine is doing the handling. The obligations are the same.

The APPs That Matter Most for AI

APP 1 - Transparency

You must manage personal information in an open and transparent way. For AI, this means:

  • Your privacy policy must describe your AI use where it involves personal information
  • You should be clear about what AI decisions affect individuals
  • You need to explain, at a reasonable level, how personal information is used in AI systems

What we tell our clients: Update your privacy policy now. Don't wait for a regulator to ask. If you're using AI to process personal information and your privacy policy doesn't mention it, you have a gap.

APP 2 - Anonymity and Pseudonymity

Individuals must have the option of not identifying themselves, or using a pseudonym, when dealing with you - unless it's impractical or the law requires identification.

AI implication: If your AI system requires personal identification where it's not strictly necessary, you may have a problem. Can your AI chatbot help anonymous users? If not, is there a good reason?

APP 3 - Collection

You can only collect personal information that is "reasonably necessary" for your functions or activities. This is a real constraint on AI.

Common problems:

  • Collecting extensive data "because the AI model might need it"
  • Gathering data for AI training that goes beyond the original purpose
  • AI systems that collect more data than necessary to function

The test: For each piece of personal information your AI collects, can you explain why it's reasonably necessary? If not, don't collect it.
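One practical way to enforce that test is a field allowlist applied before any record reaches the AI system. This is a minimal sketch with hypothetical field names; the real allowlist would come from your data-mapping exercise, with a documented purpose for each field.

```python
# Sketch: enforce data minimisation with a field allowlist applied before
# records reach the AI system. Field names here are hypothetical.

ALLOWED_FIELDS = {"postcode", "account_type", "ticket_text"}

def minimise(record: dict) -> dict:
    """Keep only fields with a documented 'reasonably necessary' purpose."""
    dropped = set(record) - ALLOWED_FIELDS
    if dropped:
        # Log what was stripped so the minimisation decision is auditable.
        print(f"Dropped fields not on allowlist: {sorted(dropped)}")
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

raw = {"name": "Jane Citizen", "dob": "1990-01-01",
       "postcode": "2000", "ticket_text": "Can't log in"}
clean = minimise(raw)
```

The point of the logging is that "we only send what's necessary" becomes demonstrable, not just asserted.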

APP 5 - Notification

When you collect personal information, you must tell individuals specific things - who you are, why you're collecting it, who you'll share it with, and whether the information will go overseas.

For AI systems, notify individuals about:

  • The fact that AI will process their information
  • What the AI will do with their information
  • Whether information will be sent to overseas AI services for processing
  • How to access and correct information held by the AI system
  • How to complain

This is where many organisations fall short. We've reviewed AI deployments where customers had no idea their data was being processed by AI, let alone by an AI service hosted overseas.

APP 6 - Use and Disclosure

Personal information can only be used or disclosed for the purpose of collection, or a directly related secondary purpose the individual would reasonably expect.

This is the APP that creates the most problems for AI projects. Here's why:

You collect customer data to provide a service. Later, you want to use that data to train an AI model. Is AI training a "directly related secondary purpose" that the customer would "reasonably expect"? Maybe. Maybe not. It depends on the context, the data, and the purpose of the AI.

Safe approaches:

  1. Collect consent specifically for AI training use
  2. De-identify the data before using it for training (truly de-identified data isn't personal information)
  3. Ensure your original collection notice is broad enough to cover AI training (but be cautious - overly broad notices can be challenged)
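The second approach can be sketched in code. This illustrative example strips direct identifiers and replaces the customer ID with a salted hash so records stay linkable within the training set. Field names and the salt handling are assumptions, and true de-identification requires more than this (including a re-identification risk assessment), so treat it as a starting point only.

```python
import hashlib

# Sketch: strip direct identifiers before AI training. A salted hash
# replaces the customer ID so rows can still be linked within the
# training set. Field names are hypothetical; true de-identification
# also needs a re-identification risk review.

DIRECT_IDENTIFIERS = {"name", "email", "phone", "address"}
SALT = b"rotate-and-store-separately"  # placeholder; use a secrets store

def de_identify(record: dict) -> dict:
    out = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    if "customer_id" in out:
        digest = hashlib.sha256(SALT + str(out.pop("customer_id")).encode())
        out["pseudo_id"] = digest.hexdigest()[:16]
    return out

record = {"customer_id": 42, "name": "Jane Citizen",
          "email": "jane@example.com", "spend_band": "high"}
training_row = de_identify(record)
```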

APP 8 - Cross-Border Disclosure

If personal information leaves Australia, you must take reasonable steps to ensure the overseas recipient handles it consistently with the APPs. You remain accountable.

This catches many businesses using cloud AI. If you're using:

  • OpenAI's API (data processed in the US)
  • Google Cloud AI services (various locations)
  • AWS AI services (various locations)
  • Microsoft Azure AI (configurable, but check)

...then personal information is likely crossing borders. You need to:

  1. Know where data is processed
  2. Assess the recipient's privacy practices
  3. Consider contractual protections (Data Processing Agreements)
  4. Disclose the cross-border transfer in your privacy notice
  5. Consider Australian-hosted alternatives for sensitive data
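Step 1 can be made concrete with a residency guard in the calling code. This sketch refuses to send personal information to any AI endpoint outside an approved region list. The endpoint-to-region mapping below is entirely illustrative; verify where each service actually processes data against your provider's own documentation.

```python
# Sketch: a residency guard that blocks personal information from going
# to AI endpoints outside an approved region list. The region mapping is
# hypothetical; confirm actual processing locations with each provider.

APPROVED_REGIONS = {"australiaeast", "australiasoutheast"}

ENDPOINT_REGIONS = {
    "https://my-aoai.openai.azure.com": "australiaeast",  # assumed
    "https://api.openai.com": "us",                       # assumed
}

def check_residency(endpoint: str, contains_personal_info: bool) -> bool:
    region = ENDPOINT_REGIONS.get(endpoint)
    if not contains_personal_info:
        return True  # APP 8 applies to personal information
    if region in APPROVED_REGIONS:
        return True
    raise ValueError(
        f"Refusing to send personal information to {endpoint} "
        f"(region={region}): cross-border disclosure not approved"
    )
```

A guard like this turns a policy ("no personal data offshore without approval") into something the system enforces by default.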

At Team 400, we use Azure AI Foundry with Australian-hosted infrastructure where appropriate, giving our clients more control over data residency.

APP 11 - Security

You must take reasonable steps to protect personal information from misuse, interference, loss, unauthorised access, modification, and disclosure.

For AI systems, this includes protecting:

  • Training data containing personal information
  • Data in transit to and from AI services
  • AI model inputs and outputs
  • Logs and audit trails
  • The AI models themselves (which may encode personal information)

Reasonable steps for AI security:

  • Encryption in transit and at rest
  • Access controls and authentication
  • Monitoring and logging
  • Regular security assessments
  • Incident response plans
  • Vendor security assessments

APP 13 - Correction

Individuals can request correction of their personal information. If that information has been used to train an AI model, complying with a correction request can be complex.

Practical approaches:

  • Maintain records of personal information used in training, separate from the model
  • Be prepared to retrain models if correction requests require it
  • For high-risk systems, consider architectures that allow information to be updated without full retraining
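The first approach above amounts to keeping a lineage index, separate from the model, that maps record IDs to the training runs that used them. This is a minimal sketch with hypothetical IDs: when a correction request arrives, you can identify which model versions are affected and decide whether retraining is warranted.

```python
from collections import defaultdict

# Sketch: a lineage index kept separately from the model, mapping record
# IDs to the training runs that used them. IDs are hypothetical.

class TrainingLineage:
    def __init__(self):
        self._runs_by_record = defaultdict(set)

    def register_run(self, run_id: str, record_ids: list) -> None:
        for rid in record_ids:
            self._runs_by_record[rid].add(run_id)

    def affected_runs(self, record_id: str) -> set:
        """Training runs (and hence model versions) touched by a record."""
        return set(self._runs_by_record[record_id])

lineage = TrainingLineage()
lineage.register_run("model-v1", ["cust-001", "cust-002"])
lineage.register_run("model-v2", ["cust-002", "cust-003"])
```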

Automated Decision-Making Under the Privacy Act

Currently, the Privacy Act doesn't have a standalone provision specifically about automated decision-making (unlike the EU's GDPR, which has Article 22).

However, the 2022 review of the Privacy Act recommended introducing:

  • A right to know when a substantially automated decision has been made
  • A right to request meaningful information about how the decision was reached

While these recommendations haven't been enacted yet, the direction is clear. We advise our clients to prepare now.

Practical preparation:

  • Document how your AI systems make decisions
  • Build explainability into AI systems from the design stage
  • Maintain human oversight for decisions that significantly affect individuals
  • Keep records that would allow you to explain specific decisions
  • Have a process for individuals to request human review of automated decisions
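The record-keeping and human-review steps above can be sketched as a per-decision audit record. The field choices here are illustrative, not a legal standard, but capturing inputs, model version, outcome, and key factors at decision time is what later lets you explain a specific decision and route it to human review.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Sketch: an audit record captured for each substantially automated
# decision. Field choices are illustrative, not a legal standard.

@dataclass
class DecisionRecord:
    subject_id: str
    model_version: str
    inputs: dict
    outcome: str
    key_factors: list          # top features or rules behind the outcome
    human_reviewed: bool = False
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def request_human_review(self) -> None:
        self.human_reviewed = True

record = DecisionRecord(
    subject_id="cust-001",
    model_version="credit-score-v3",
    inputs={"income_band": "B", "repayment_history": "clean"},
    outcome="approved",
    key_factors=["repayment_history"],
)
record.request_human_review()
```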

The organisations that prepare now will have a significant advantage when these requirements become law.

Privacy Impact Assessments for AI

A Privacy Impact Assessment (PIA) is your primary tool for identifying and managing privacy risks in an AI system. The OAIC recommends PIAs for any project that involves personal information, and they're especially important for AI.

What an AI PIA should cover:

  1. Description of the AI system - what it does, what data it uses, who is affected
  2. Data flows - where personal information comes from, how it's processed, where it goes
  3. Legal basis - which APPs apply and how you'll comply
  4. Privacy risks - what could go wrong from a privacy perspective
  5. Mitigations - how you'll address each identified risk
  6. Residual risks - what remains after mitigations
  7. Recommendations - actions needed before deployment

When to conduct a PIA:

  • Before developing an AI system that handles personal information
  • When materially changing an existing AI system
  • When changing how an AI system uses personal information
  • When changing AI service providers

We conduct PIAs for every AI project we deliver at Team 400. In our experience, the earlier in the project the PIA happens, the cheaper it is to address the findings. PIAs conducted after the system is built often result in expensive redesign.

The Notifiable Data Breaches Scheme

If your AI system is involved in a data breach - whether through a security incident, accidental disclosure, or AI malfunction that exposes personal information - the Notifiable Data Breaches (NDB) scheme applies.

You must notify the OAIC and affected individuals if:

  • There is unauthorised access to, or disclosure of, personal information
  • The breach is likely to result in serious harm
  • You haven't been able to prevent the likely risk of serious harm through remedial action

AI-specific breach scenarios:

  • An AI chatbot discloses one customer's information to another
  • Training data containing personal information is exposed
  • An AI system is manipulated to reveal data it shouldn't
  • A cloud AI service experiences a breach affecting your data
  • An AI model inadvertently memorises and reproduces personal information

Preparation steps:

  • Include AI-specific scenarios in your data breach response plan
  • Monitor AI systems for data leakage
  • Test AI outputs for inadvertent personal information disclosure
  • Know how to disable AI systems quickly if a breach is detected
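Testing AI outputs for inadvertent disclosure can start with something as simple as a regex screen run before an output reaches a user or a log. The patterns below (emails and Australian-style mobile numbers) are illustrative only; a production screen would use a proper PII-detection service and a much broader pattern set.

```python
import re

# Sketch: a simple regex screen over AI outputs for obvious personal
# information before the output reaches a user or a log. Patterns are
# illustrative; production use would need a real PII-detection service.

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"(?:\+61|0)4\d{2}\s?\d{3}\s?\d{3}"),
}

def find_pii(text: str) -> dict:
    """Return any pattern hits, keyed by pattern label."""
    hits = {}
    for label, pattern in PII_PATTERNS.items():
        found = pattern.findall(text)
        if found:
            hits[label] = found
    return hits

output = "Sure - you can reach Jane at jane@example.com or 0412 345 678."
leaks = find_pii(output)
```

Wired into your chatbot pipeline, a non-empty result would block the response and raise an alert rather than deliver it.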

Practical Compliance Checklist

Here's the compliance checklist we work through with our clients:

Governance:

  • AI use is covered in your privacy policy
  • Privacy team is involved in AI projects from the start
  • Privacy Impact Assessments are conducted for AI systems handling personal information
  • Staff are trained on privacy obligations related to AI

Data handling:

  • Data minimisation is applied - AI only accesses what it needs
  • Data mapping is completed for all AI systems
  • Cross-border data flows are identified and managed
  • De-identification is used where full personal data isn't required
  • Training data is managed with appropriate controls

Transparency:

  • Individuals are informed when AI processes their information
  • The purposes of AI processing are communicated clearly
  • Cross-border disclosures are described in privacy notices
  • Automated decision-making is disclosed where applicable

Security:

  • AI systems are included in your information security framework
  • Access controls are in place for AI systems and data
  • AI outputs are monitored for inadvertent data disclosure
  • AI vendors have been assessed for security practices
  • AI-specific scenarios are in your breach response plan

Individual rights:

  • Processes exist for handling access requests related to AI data
  • Correction requests can be handled for data in AI systems
  • Complaint-handling processes cover AI-related complaints
  • Human review is available for significant automated decisions

What's Coming Next

The Privacy Act is being reformed. Based on the 2022 review and subsequent government consultations, expect:

  • Mandatory Privacy Impact Assessments for high-risk processing, which will include many AI applications
  • Automated decision-making rights - transparency and explanation requirements
  • Children's privacy provisions that will affect AI systems used by or about children
  • Stronger enforcement with higher penalties

The organisations that treat current best practice as future minimum requirements will be well-positioned when these changes arrive.

How Team 400 Helps

At Team 400, privacy compliance is built into every AI project we deliver. We conduct Privacy Impact Assessments, design for data minimisation, and build transparency and explainability into our AI systems from the architecture stage.

We've delivered AI systems across financial services, healthcare, and enterprise clients where privacy compliance is non-negotiable.

If you need help understanding your Privacy Act obligations for AI, or want to ensure your AI project is compliant from the start, talk to our team. We'll help you get it right.