AI in Healthcare - Compliance and TGA Considerations for Australian Companies

April 18, 2026 · 10 min read · Michael Ridland

Can you use AI in healthcare in Australia without running afoul of the TGA?

Yes, but you need to understand where the regulatory lines are. The Therapeutic Goods Administration (TGA) regulates software as a medical device (SaMD), and many AI applications in healthcare fall into this category. Get it wrong, and you could face serious regulatory consequences. Get it right, and AI can significantly improve patient outcomes and operational efficiency.

We've worked with Australian healthcare organisations deploying AI, and the regulatory questions come up in every project. Here's what you need to know.

When Does the TGA Regulate AI in Healthcare?

The TGA regulates therapeutic goods in Australia, including medical devices. Under the Therapeutic Goods Act 1989 and the Therapeutic Goods (Medical Devices) Regulations 2002, software can be a medical device if it meets certain criteria.

AI software is likely a medical device if it:

  • Is intended for diagnosis, prevention, monitoring, treatment, or alleviation of disease
  • Makes clinical recommendations or predictions about individual patients
  • Analyses medical images, pathology, or other clinical data to support clinical decisions
  • Monitors patients and generates alerts based on clinical parameters

AI software is likely NOT a medical device if it:

  • Performs administrative functions (scheduling, billing, record management)
  • Provides general health information (not personalised clinical advice)
  • Supports operational efficiency without clinical decision-making
  • Is used for research purposes only (not patient care)
  • Simply stores or transfers clinical data without analysis

The line between regulated and unregulated can be subtle. An AI system that organises patient records is administrative software. An AI system that analyses those records and flags patients at risk of deterioration is likely a medical device.
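The criteria above lend themselves to a first-pass screening checklist. Here's a minimal sketch of that idea - the attribute names and the `likely_medical_device` function are ours for illustration, not part of any TGA framework, and a "yes" result means "seek formal regulatory advice", not "this is a medical device":

```python
from dataclasses import dataclass

@dataclass
class SoftwareProfile:
    """Illustrative attributes for a first-pass SaMD screen (not a TGA tool)."""
    diagnoses_or_treats: bool       # intended for diagnosis/prevention/monitoring/treatment
    patient_specific_output: bool   # makes recommendations about individual patients
    analyses_clinical_data: bool    # analyses images/pathology/records to support decisions
    generates_clinical_alerts: bool # alerts based on clinical parameters

def likely_medical_device(p: SoftwareProfile) -> bool:
    """True if any regulated-use criterion applies - a trigger for formal assessment."""
    return any([
        p.diagnoses_or_treats,
        p.patient_specific_output,
        p.analyses_clinical_data,
        p.generates_clinical_alerts,
    ])

# The record-organiser vs deterioration-flagging contrast from above:
organiser = SoftwareProfile(False, False, False, False)
risk_flagger = SoftwareProfile(False, True, True, True)
print(likely_medical_device(organiser))     # False - administrative software
print(likely_medical_device(risk_flagger))  # True - likely a medical device
```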

TGA Classification of AI Software

If your AI is a medical device, you need to determine its classification. The TGA uses a risk-based classification system:

Class I (lowest risk)

  • General-purpose software tools
  • Most administrative and operational AI falls here (if it's a medical device at all)
  • Requires inclusion on the ARTG (Australian Register of Therapeutic Goods) but has a lighter assessment process

Class IIa

  • Software that provides information used to make clinical decisions where an incorrect output is unlikely to lead to serious harm
  • Many clinical decision support tools fall here
  • Requires conformity assessment evidence (from the TGA or a recognised overseas body, such as an EU notified body)

Class IIb

  • Software intended to monitor vital physiological parameters where variations could result in immediate danger to the patient
  • AI that influences treatment decisions for serious conditions
  • Requires more rigorous conformity assessment

Class III (highest risk)

  • Software that directly controls or influences the operation of life-supporting devices
  • AI in critical care monitoring and decision-making
  • Requires the most stringent assessment

How classification works in practice:

An AI system that helps radiologists by highlighting potential abnormalities in X-rays (with radiologist review before diagnosis) might be Class IIa. The same system that autonomously diagnoses conditions without human review would likely be classified higher.

The intended purpose and level of autonomy drive classification. We always recommend getting this assessment right early - reclassification after development is expensive.
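The way intended purpose and autonomy drive the tiers can be sketched as a rough heuristic. This is purely illustrative - real classification follows the rules in the Therapeutic Goods (Medical Devices) Regulations 2002 and should be confirmed with the TGA:

```python
def indicative_class(informs_clinical_decision: bool,
                     serious_harm_possible: bool,
                     autonomous: bool,
                     controls_life_support: bool) -> str:
    """Rough heuristic mirroring the risk tiers described above.

    Illustrative only - not a substitute for the classification rules.
    """
    if controls_life_support:
        return "Class III"
    if serious_harm_possible and (autonomous or informs_clinical_decision):
        return "Class IIb"
    if informs_clinical_decision:
        return "Class IIa"
    return "Class I"

# Radiologist-assist tool: informs decisions, human review before diagnosis.
print(indicative_class(True, False, False, False))  # Class IIa

# Same model diagnosing serious conditions autonomously: classified higher.
print(indicative_class(True, True, True, False))    # Class IIb
```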

The Regulatory Pathway

Step 1 - Determine If Your AI Is a Medical Device

Work through the TGA's decision framework:

  • What is the intended purpose of the software?
  • Does it meet the definition of a medical device?
  • If yes, what classification applies?

If you're uncertain, the TGA offers a pre-submission process where you can discuss your product with the regulator before committing to a pathway.

Step 2 - Comply With Essential Principles

Medical devices must satisfy the Essential Principles of safety and performance. For AI software, key considerations include:

Safety:

  • The AI must be safe for its intended purpose
  • Risks must be identified, assessed, and controlled
  • The benefit-risk ratio must be acceptable
  • Foreseeable misuse must be considered

Performance:

  • The AI must perform as intended
  • Clinical performance must be validated
  • Accuracy, sensitivity, and specificity must be demonstrated
  • Performance must be maintained over the product lifecycle
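The performance metrics above are typically derived from a validation confusion matrix. A minimal sketch, using hypothetical counts:

```python
def clinical_performance(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Core metrics typically reported as SaMD performance evidence."""
    return {
        "sensitivity": tp / (tp + fn),            # true positive rate
        "specificity": tn / (tn + fp),            # true negative rate
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "ppv": tp / (tp + fp),                    # positive predictive value
    }

# Hypothetical validation-set counts (not real data):
m = clinical_performance(tp=90, fp=20, tn=880, fn=10)
print(f"sensitivity={m['sensitivity']:.2f}, specificity={m['specificity']:.2f}")
# sensitivity=0.90, specificity=0.98
```

Note that point estimates alone are rarely enough - conformity assessment will generally expect confidence intervals and performance broken down by relevant patient subgroups.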

Design and manufacture:

  • Quality management system (typically ISO 13485)
  • Software development lifecycle (IEC 62304)
  • Risk management (ISO 14971)
  • Usability engineering (IEC 62366)

Step 3 - Clinical Evidence

You need clinical evidence that your AI system works. The level of evidence depends on the classification and risk.

Types of clinical evidence:

  • Literature review (for well-established approaches)
  • Clinical investigation (prospective study of the AI system)
  • Post-market clinical follow-up (ongoing evidence collection)
  • Performance evaluation (analytical and clinical performance data)

For AI, clinical validation should include:

  • Performance on representative Australian patient populations
  • Testing across relevant clinical scenarios
  • Comparison with current clinical practice or existing tools
  • Assessment of failure modes and their clinical impact

Step 4 - ARTG Inclusion

To supply a medical device in Australia, it must be included on the Australian Register of Therapeutic Goods (ARTG).

The pathway depends on classification:

  • Class I: Manufacturer self-declaration, notification to TGA
  • Class IIa/IIb: Conformity assessment (through TGA or a recognised Notified Body)
  • Class III: Full TGA assessment

Step 5 - Post-Market Obligations

Once on the market, ongoing obligations include:

  • Adverse event reporting
  • Post-market surveillance
  • Periodic safety update reports
  • Management of field safety corrective actions
  • Maintaining conformity with Essential Principles

AI-Specific TGA Considerations

Continuous Learning Systems

AI systems that learn and update from new data present a specific regulatory challenge. If the model changes, does it need re-assessment?

The TGA's position is evolving, but the current expectation is:

  • Pre-defined, locked algorithms that don't change after deployment are treated as standard software
  • Algorithms that continuously learn and change may need additional controls and potentially re-assessment when changes are material

Practical approach:

  • Lock your model for deployment
  • Retrain and update through a controlled change management process
  • Assess whether each update changes the safety or performance profile
  • Re-validate when updates are material
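The controlled change-management process above can be sketched as a release gate: each retrained candidate is compared against the locked, validated baseline, and anything that moves outside an agreed margin is held for re-validation. The metrics, tolerance, and gate logic here are illustrative assumptions, not TGA requirements:

```python
# Locked baseline from the validated release (illustrative values):
BASELINE = {"sensitivity": 0.90, "specificity": 0.97}
TOLERANCE = 0.01  # assumed non-inferiority margin agreed through the QMS

def update_is_material(candidate: dict, baseline: dict = BASELINE,
                       tol: float = TOLERANCE) -> bool:
    """Flag a retrained model if any validated metric drops below the
    locked baseline by more than the agreed margin."""
    return any(candidate[k] < baseline[k] - tol for k in baseline)

def release_gate(candidate: dict) -> str:
    if update_is_material(candidate):
        return "HOLD: material change - re-validate before deployment"
    return "OK: within change-management tolerance - document and release"

print(release_gate({"sensitivity": 0.91, "specificity": 0.97}))
print(release_gate({"sensitivity": 0.85, "specificity": 0.97}))
```

A real gate would also treat large improvements as changes worth reviewing, since they can indicate a shift in the model's behaviour profile.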

Explainability

The TGA expects that clinical AI systems can be understood by their users. This doesn't necessarily mean full mathematical transparency, but clinicians need to understand:

  • What the AI system is designed to do
  • What inputs it uses
  • What its limitations are
  • How to interpret its outputs
  • When to override or question its recommendations

Cybersecurity

The TGA requires medical device cybersecurity to be addressed. For AI systems, this includes:

  • Protection of patient data
  • Protection against adversarial attacks on the AI model
  • Secure update mechanisms
  • Incident detection and response

The TGA references the IEC 81001-5-1 standard for health software cybersecurity.
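One concrete piece of the "secure update mechanisms" requirement is verifying that the model artifact being loaded is the one that was released. A minimal integrity-check sketch (production systems would use cryptographic signatures on top of a plain hash):

```python
import hashlib
import tempfile
from pathlib import Path

def verify_model_artifact(path: Path, expected_sha256: str) -> bool:
    """Refuse to load a model file whose digest doesn't match the released build."""
    return hashlib.sha256(path.read_bytes()).hexdigest() == expected_sha256

# Demo with a throwaway file standing in for a model artifact:
with tempfile.TemporaryDirectory() as d:
    artifact = Path(d) / "model.bin"
    artifact.write_bytes(b"model weights")
    released_digest = hashlib.sha256(b"model weights").hexdigest()
    print(verify_model_artifact(artifact, released_digest))  # True
    print(verify_model_artifact(artifact, "0" * 64))         # False - tampered or wrong build
```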

Privacy Requirements in Healthcare AI

Health information receives special treatment under the Australian Privacy Act.

Heightened Protections

Health information is classified as "sensitive information" under the Privacy Act. This means:

  • Collection generally requires consent
  • Use and disclosure are more restricted
  • Higher security standards apply
  • Cross-border disclosure faces stricter requirements

My Health Records Act

If your AI system interacts with the My Health Record system, the My Health Records Act 2012 applies. This imposes additional restrictions on how health information from My Health Records can be used, including specific prohibitions on use by insurers and employers.

State and Territory Health Records Legislation

Each state and territory has its own health records legislation. If your AI system operates across jurisdictions, you may need to comply with multiple privacy regimes.

We always recommend a thorough privacy assessment for healthcare AI. The intersection of federal privacy law, state health records legislation, and TGA requirements creates a complex compliance environment. Getting expert advice early saves time and money.

Clinical Safety

Beyond TGA compliance, healthcare AI needs to be clinically safe.

Clinical Governance

Your AI system should sit within clinical governance frameworks:

  • Clinical oversight of AI system design and deployment
  • Clinical review of AI outputs (where appropriate)
  • Incident reporting and management
  • Clinical audit of AI performance

Human Oversight

For most healthcare AI applications, human oversight is expected and appropriate:

  • Clinicians review AI recommendations before acting on them
  • AI assists, rather than replaces, clinical judgment
  • Clear escalation paths exist for AI uncertainty
  • Override mechanisms are available and documented
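Documented override mechanisms imply an audit trail: every AI recommendation is actioned by a named clinician, and overrides are captured with a reason. A minimal sketch of that record-keeping, with invented field names:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Recommendation:
    patient_id: str   # de-identified reference, not raw patient data
    suggestion: str
    confidence: float

@dataclass
class OversightLog:
    entries: list = field(default_factory=list)

    def record(self, rec: Recommendation, clinician: str,
               accepted: bool, reason: str = "") -> None:
        """Log who actioned each recommendation, and why overrides occurred."""
        self.entries.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "patient": rec.patient_id,
            "suggestion": rec.suggestion,
            "confidence": rec.confidence,
            "clinician": clinician,
            "accepted": accepted,
            "override_reason": reason,
        })

log = OversightLog()
rec = Recommendation("anon-001", "flag for deterioration review", 0.62)
log.record(rec, clinician="Dr A", accepted=False,
           reason="recent surgery explains the vitals trend")
print(len(log.entries), log.entries[0]["accepted"])  # 1 False
```

Records like these feed directly into the clinical audit and incident-investigation processes described above.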

Patient Safety Reporting

AI-related patient safety events should be reported through existing patient safety reporting mechanisms. If your AI system contributes to an adverse event, it needs to be investigated like any other clinical incident.

Practical Implementation Checklist

Here's what we work through with our healthcare clients:

Regulatory assessment:

  • Determine if the AI is a medical device under TGA definitions
  • If yes, determine the classification
  • Identify the regulatory pathway (ARTG inclusion)
  • Engage with TGA if the pathway is unclear
  • Understand post-market obligations

Clinical evidence:

  • Define clinical performance requirements
  • Plan and conduct clinical validation
  • Ensure testing on representative Australian populations
  • Document clinical evidence
  • Plan post-market clinical follow-up

Quality management:

  • Implement or extend ISO 13485 quality management system
  • Follow IEC 62304 for software development lifecycle
  • Apply ISO 14971 for risk management
  • Document design history and decisions

Privacy and data:

  • Conduct Privacy Impact Assessment for health information
  • Assess compliance with federal and state privacy laws
  • Implement appropriate security controls
  • Address cross-border data flows
  • Plan for My Health Records compliance if applicable

Clinical safety:

  • Integrate AI into clinical governance frameworks
  • Define human oversight requirements
  • Establish clinical monitoring procedures
  • Set up patient safety reporting for AI events
  • Train clinical users on AI system capabilities and limitations

Operational readiness:

  • Develop user training materials
  • Create operating procedures
  • Establish maintenance and update procedures
  • Plan for business continuity
  • Set up performance monitoring

Common Pitfalls

Building first, regulating later. We've seen organisations develop AI systems and then discover they need TGA approval. This is expensive and time-consuming to retrofit. Assess the regulatory requirements before development begins.

Underestimating clinical validation. Clinical evidence takes time to generate. If you need a clinical study, plan for it from the start.

Ignoring state and territory variations. Privacy requirements vary by jurisdiction. A system deployed nationally needs to comply everywhere.

Treating AI like traditional software. AI has unique characteristics - non-deterministic behaviour, data dependency, potential for drift. Regulatory and clinical governance must account for these.

Not engaging clinicians. Healthcare AI built without clinical input often fails in practice. Clinicians understand the clinical workflow, the patient safety implications, and the practical usability requirements that technology teams may miss.

How Team 400 Helps

At Team 400, we help Australian healthcare organisations deploy AI that meets TGA, privacy, and clinical safety requirements. Our experience with healthcare AI means we understand the regulatory environment and build compliance into our delivery process.

We work with healthcare providers, health technology companies, and life sciences organisations on:

  • AI regulatory assessment and strategy
  • AI development within healthcare compliance frameworks
  • Clinical validation planning and support
  • Privacy Impact Assessments for health AI

If you're exploring AI in healthcare and need to understand the compliance requirements, contact our team. We'll help you identify the regulatory pathway and build an AI system that's both effective and compliant.