AI Governance and Compliance for Australian Business
"Are we allowed to use AI for this?"
I hear this question increasingly from Australian executives. The regulatory landscape is evolving, and getting governance right matters—both for compliance and for building AI systems that actually work.
Here's the current state of AI governance in Australia and what it means for your business.
The Australian AI Landscape
Australia doesn't yet have comprehensive AI legislation like the EU AI Act. But that doesn't mean it's the wild west.
Current Regulatory Framework
Existing laws apply to AI:
- Privacy Act 1988 (data handling)
- Competition and Consumer Act (misleading conduct, consumer guarantees)
- Fair Work Act (workplace implications)
- Discrimination legislation (algorithmic bias)
- Industry-specific regulations (APRA, ASIC, TGA, etc.)
AI-specific guidance (voluntary but influential):
- AI Ethics Framework (Department of Industry, Science and Resources)
- Voluntary AI Safety Standard
- Industry codes of practice
Coming soon:
- Mandatory guardrails for high-risk AI (consultation ongoing)
- Likely regulation of AI in specific domains
The government's position: existing laws apply, new guidance will come, high-risk uses will face specific requirements.
APRA's Position
For financial services, APRA has been clearest:
CPS 234 (Information Security) applies to AI systems handling financial data.
Upcoming prudential guidance on AI will likely require:
- Board accountability for AI risk
- Model risk management frameworks
- Explainability requirements
- Human oversight of material decisions
If you're APRA-regulated, treat AI governance as a prudential matter, not just a tech decision.
A Practical Governance Framework
You don't need perfect governance to start using AI. But you need appropriate governance. Here's a practical framework:
Level 1: Basic Hygiene (All AI Use)
Even for simple AI tool usage:
Acceptable use policy: What AI tools are approved? What data can be input? What review is required?
Data classification: What data is too sensitive for AI tools? (A rough pre-submission screen is sketched below.)
Output verification: Who checks AI outputs before they're used/sent?
Vendor assessment: Do your AI vendors meet basic security and privacy requirements?
This covers employees using ChatGPT, AI features in existing software, and simple automations.
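To make the data classification point concrete, here's a minimal sketch of a pre-submission screen in Python. The patterns and categories are illustrative only; a real deployment would use a proper data loss prevention (DLP) tool rather than ad hoc regexes.

```python
import re

# Illustrative patterns only: a real deployment would use a proper
# DLP tool, not ad hoc regexes like these.
SENSITIVE_PATTERNS = {
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "tax_file_number": re.compile(r"\b\d{3}\s?\d{3}\s?\d{3}\b"),  # TFN-like
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def check_before_submission(text: str) -> list[str]:
    """Return the categories of sensitive data detected in the text.

    An empty list means the draft passed this rough screen and may be
    pasted into an approved AI tool under the acceptable use policy.
    """
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

draft = "Please summarise: the customer's TFN is 123 456 789."
findings = check_before_submission(draft)
print(f"Blocked: contains {', '.join(findings)}" if findings else "OK to submit")
```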
Level 2: Formal Oversight (AI in Business Processes)
When AI is embedded in business processes:
AI inventory: What AI systems are in use? What do they do? (A minimal inventory record is sketched after this list.)
Risk assessment: For each AI system, what's the risk if it goes wrong?
Monitoring: How do you know if the AI is performing correctly?
Change management: How are AI systems updated and tested?
Incident response: What happens when AI makes a mistake?
This covers AI agents, automated decision support, and AI-powered workflows.
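As a concrete starting point for the inventory, here's a minimal sketch in Python of what one record might hold. The fields and the vendor name are illustrative, not a standard schema; adapt them to your own risk taxonomy and reporting needs.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

@dataclass
class AISystemRecord:
    """One entry in the AI inventory. Fields are illustrative."""
    name: str
    purpose: str                      # what the system does
    owner: str                        # accountable business owner
    vendor: str | None = None         # None for in-house systems
    risk_tier: RiskTier = RiskTier.LOW
    decisions_influenced: list[str] = field(default_factory=list)
    last_reviewed: str | None = None  # ISO date of last governance review

inventory = [
    AISystemRecord(
        name="support-chatbot",
        purpose="First-line customer service responses",
        owner="Head of Customer Operations",
        vendor="ExampleVendor",       # hypothetical vendor name
        risk_tier=RiskTier.MEDIUM,
        decisions_influenced=["refund escalation"],
        last_reviewed="2025-01-15",
    ),
]

# Simple oversight query: which systems have never been reviewed?
unreviewed = [s.name for s in inventory if s.last_reviewed is None]
```

Even a spreadsheet with these columns is a workable start; the point is that every system has a named owner, a risk tier, and a review date.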
Level 3: Rigorous Governance (High-Risk AI)
For AI that makes or influences significant decisions:
Model documentation: How does the model work? What data was it trained on?
Bias testing: Have you tested for discriminatory outcomes?
Explainability: Can you explain individual decisions?
Human review: How are edge cases and appeals handled?
Audit trail: Can you reconstruct how decisions were made? (A simple decision log is sketched after this list.)
Regular review: Are outcomes monitored and models retrained?
This covers credit decisions, hiring recommendations, insurance underwriting, clinical support, and similar high-stakes applications.
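For the audit trail specifically, here's a minimal sketch of decision logging, assuming a hypothetical credit-scoring model. A production system would log to tamper-evident storage and capture feature values and explanation artefacts as well, not just inputs and outputs.

```python
import json
import uuid
from datetime import datetime, timezone

def log_decision(model_version: str, inputs: dict, output: str,
                 reviewer: str | None,
                 path: str = "decision_log.jsonl") -> str:
    """Append one decision record to a JSON Lines audit log.

    Enough to reconstruct what the system saw and decided; a production
    log would also capture explanation artefacts and live in
    tamper-evident storage, not a local file.
    """
    record = {
        "decision_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "human_reviewer": reviewer,   # None if fully automated
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["decision_id"]

decision_id = log_decision(
    model_version="credit-scorer-2.3",   # hypothetical model name
    inputs={"application_id": "A-1001", "declared_income": 85000},
    output="refer_to_human",
    reviewer=None,
)
```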
Risk-Based Classification
Not all AI needs the same governance. Classify your AI use cases:
Low Risk
- AI writing assistants (human review before sending)
- Search and summarisation of internal documents
- Schedule optimisation for internal operations
- Marketing content generation (human approval)
Governance: Basic hygiene. Acceptable use policy. Output review.
Medium Risk
- Customer service chatbots
- Sales lead prioritisation
- Content recommendation
- Operational automation
Governance: Formal oversight. Performance monitoring. Escalation paths. Regular review.
High Risk
- Credit decisions
- Insurance underwriting
- Employment decisions
- Clinical recommendations
- Fraud detection with automatic action
- Child safety applications
Governance: Rigorous framework. Bias testing. Explainability. Human oversight. Audit trails.
Prohibited
Some uses probably shouldn't happen regardless of governance:
- Social scoring of customers/employees
- Emotional manipulation in marketing
- Deceptive AI (pretending to be human when it matters)
- Uses that violate human rights
Building an AI Governance Program
Step 1: Inventory What You Have
You can't govern what you don't know about.
Survey:
- What AI tools are employees using?
- What AI is embedded in business software?
- What AI systems have you built?
- What's in development?
This often reveals more AI use than expected.
Step 2: Classify by Risk
For each AI system:
- What decisions does it influence?
- Who is affected?
- What's the harm if it's wrong?
- Can decisions be reversed?
Use this to prioritise governance effort; a rough scoring sketch follows.
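Those four questions can be turned into a crude screening function. The thresholds below are illustrative; calibrate them to your own risk appetite, and treat borderline cases as the higher tier.

```python
def classify_risk(influences_significant_decisions: bool,
                  affects_external_parties: bool,
                  harm_if_wrong: str,   # "low", "moderate", or "severe"
                  reversible: bool) -> str:
    """Map the four screening questions to a governance tier.

    Thresholds are illustrative: calibrate to your own risk appetite,
    and when in doubt, treat the system as the higher tier.
    """
    if harm_if_wrong == "severe" or (
            influences_significant_decisions and not reversible):
        return "high"
    if affects_external_parties or harm_if_wrong == "moderate":
        return "medium"
    return "low"

# Internal schedule optimiser: internal only, low harm, reversible.
print(classify_risk(False, False, "low", True))     # -> low
# Automated credit decision: significant, external, severe, hard to reverse.
print(classify_risk(True, True, "severe", False))   # -> high
```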
Step 3: Assign Accountability
AI governance needs clear ownership:
- Board/Executive: Overall AI risk appetite and strategy
- Business owners: Accountability for specific AI systems
- Tech teams: Implementation and operation
- Risk/Compliance: Oversight and assurance
Avoid "everyone's responsible" which means no one is.
Step 4: Implement Controls
Based on risk level (a minimal tier-to-controls mapping is sketched after this list):
- Policies and standards
- Technical controls (access, monitoring, testing)
- Human review processes
- Training and awareness
- Documentation requirements
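One way to make the risk linkage explicit is a simple tier-to-controls mapping that checklists or tooling can consume. The control names below just mirror the three levels described earlier; substitute your own policy language.

```python
# Illustrative mapping from risk tier to minimum required controls.
# Control names mirror the levels described above; tailor them to
# your own policy framework.
REQUIRED_CONTROLS = {
    "low": [
        "acceptable_use_policy",
        "output_review_before_use",
    ],
    "medium": [
        "acceptable_use_policy",
        "output_review_before_use",
        "performance_monitoring",
        "escalation_path",
        "incident_response_plan",
    ],
    "high": [
        # everything in medium, plus:
        "model_documentation",
        "bias_testing",
        "explainability_review",
        "human_review_of_decisions",
        "audit_trail",
    ],
}

def controls_for(tier: str) -> list[str]:
    """Return the minimum control set for a system's risk tier."""
    base = REQUIRED_CONTROLS["medium"] if tier == "high" else []
    return base + REQUIRED_CONTROLS[tier]

print(controls_for("high"))
```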
Step 5: Monitor and Review
Governance isn't set-and-forget:
- Regular performance monitoring (a simple drift check is sketched below)
- Incident tracking and response
- Periodic risk reassessment
- Governance framework review (annual minimum)
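Monitoring can start very simply: pick an outcome you can actually measure and alert when it drifts from baseline. A minimal sketch, with the metric and tolerance as placeholders you'd set during the system's risk assessment:

```python
def check_performance(recent_error_rate: float,
                      baseline_error_rate: float,
                      tolerance: float = 0.05) -> str:
    """Flag an AI system whose recent error rate has drifted above baseline.

    Metric and threshold are placeholders: use whatever outcome you can
    actually measure (complaint rate, override rate, escalations) and set
    the tolerance as part of the system's risk assessment.
    """
    if recent_error_rate > baseline_error_rate + tolerance:
        return "ALERT: performance degraded, trigger incident response"
    return "OK"

# Example: chatbot baseline error rate 8%, last week measured at 15%.
print(check_performance(recent_error_rate=0.15, baseline_error_rate=0.08))
```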
Practical Challenges
Shadow AI
Employees using ChatGPT without approval. AI features in SaaS products. Contractors using AI tools.
You can't govern what you can't see. Discovery is ongoing work.
Vendor AI
Your vendors are adding AI. That CRM now has "AI-powered insights." That support platform has an AI chatbot.
Your governance framework needs to extend to vendor AI. Include AI in vendor assessments.
Pace of Change
AI capabilities evolve fast. Your governance framework needs to be:
- Principle-based (not just rule-based)
- Reviewed regularly
- Adaptable to new use cases
Talent Gaps
Many organisations lack AI expertise in risk and compliance functions. Options:
- Train existing staff
- Hire AI-literate risk professionals
- Partner with external experts
- Build cross-functional AI governance teams
Getting Started
If you don't have AI governance today:
- Start with inventory: Know what AI you're using
- Establish basic policy: Acceptable use, data limits, review requirements
- Classify by risk: Focus governance effort where it matters
- Build incrementally: Perfect governance isn't required to start
For AI strategy and implementation, we help clients build appropriate governance from day one—not as an afterthought.
AI governance that's proportionate to risk enables responsible AI adoption. Governance that's excessive for the risk level just slows you down.
Talk to us about your AI governance needs.