
AI Insurance and Liability - Who Is Responsible When AI Fails

April 19, 2026 · 8 min read · Michael Ridland

When your AI system gives a customer wrong advice and they lose money, who is responsible?

It's a question we get asked in almost every engagement now. And the honest answer is that Australian law hasn't fully caught up with AI deployment. But that doesn't mean you can wait for the law to sort itself out. You need a plan today.

After working with dozens of Australian organisations deploying AI across finance, insurance, field service, and professional services, here's what we've learned about managing AI liability and getting the insurance side right.

The Core Question - Who Pays When AI Gets It Wrong?

AI systems fail. Not hypothetically. They actually fail. Language models hallucinate. Classification systems misidentify. Recommendation engines suggest the wrong thing. This isn't a defect - it's the nature of probabilistic systems.

The question isn't whether your AI will make a mistake. It's what happens when it does.

In Australia, liability typically falls across three parties:

The deploying organisation - If you put AI in front of your customers or use it to make business decisions, you're the one they'll come after. Under Australian Consumer Law, the organisation delivering the service is responsible for the quality of that service, regardless of whether AI was involved.

The AI vendor or developer - If the AI system itself was defective - not just wrong on a particular input, but fundamentally flawed - the vendor may share liability. This is where your vendor contracts matter enormously.

The end user - In some cases, if the user was warned about AI limitations and chose to rely on the output anyway, there may be shared responsibility. But don't count on this as your primary defence.

In our experience, the deploying organisation carries the majority of practical risk. That's you.

What Australian Law Actually Says Right Now

There's no specific AI liability legislation in Australia as of early 2026. But existing law covers a lot of ground.

Australian Consumer Law (ACL) - If you're providing goods or services, they must be fit for purpose, of acceptable quality, and match any description given. An AI system that gives unreliable advice while being marketed as reliable could breach these guarantees.

Privacy Act 1988 - AI systems that process personal information must comply with the Australian Privacy Principles. The 2024 amendments introduced requirements around automated decision-making that directly affect AI deployments.

Professional standards legislation - If AI is used in regulated professions (financial advice, healthcare, legal), the professional obligations don't disappear just because a machine gave the recommendation. The licensed professional remains responsible.

Tort law (negligence) - The standard negligence framework applies. If you deploy AI without reasonable care - poor testing, no monitoring, insufficient safeguards - and someone suffers loss, you could face a negligence claim.

ASIC and APRA guidance - Financial services and insurance organisations face additional regulatory requirements around AI use, including expectations around explainability and fairness.

The bottom line is that existing law provides plenty of basis for AI-related claims. The absence of specific AI legislation doesn't create a liability vacuum.

The Insurance Gap Most Businesses Don't Know About

Here's what catches many organisations off guard: their existing insurance policies may not cover AI-related losses.

We've seen this play out in several ways:

Professional indemnity policies often cover advice given by qualified professionals. If an AI gives the advice, the insurer may argue it falls outside the policy's definition of "professional services."

Product liability policies typically cover physical products. An AI system that causes financial loss (not physical harm) may not trigger coverage.

Cyber insurance covers data breaches and system failures, but an AI giving wrong advice isn't a cyber incident. It's an operational failure.

General liability policies have exclusions that can catch AI-related claims. Technology-specific exclusions, professional services exclusions, and "your product" exclusions can all create gaps.

The practical advice: review your policies specifically with AI deployment in mind. Don't assume coverage exists until your broker confirms it in writing.

How to Structure AI Liability in Vendor Contracts

When you engage an AI vendor or developer, the contract should explicitly address what happens when things go wrong.

Key areas to negotiate:

Performance warranties - What accuracy or reliability standards does the vendor commit to? "Best efforts" isn't enough. Define measurable performance thresholds and what happens when they're not met (there's a sketch of what "measurable" can look like at the end of this section).

Indemnification - Who pays for third-party claims arising from AI errors? Push for vendor indemnification for defects in the core AI system. Accept responsibility for how you deploy and configure it.

Limitation of liability - Most tech contracts cap vendor liability at the contract value. For AI systems making consequential decisions, this cap may be completely inadequate. Negotiate higher limits for AI-specific risks.

Data ownership and model training - If the vendor uses your data to improve their model, and that model later causes harm to another customer, what's the chain of responsibility? Get this in writing.

Audit and explainability - You need the right to audit how the AI reaches its conclusions. Without this, defending against a liability claim becomes extremely difficult because you can't explain what happened.

Insurance requirements - Require your vendor to carry appropriate insurance and provide certificates of currency.

In our consulting work, we always recommend clients have AI-specific terms reviewed by a technology-literate lawyer before signing. Generic software agreements don't adequately address AI risk.
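To make "measurable performance thresholds" concrete, here's the shape of an acceptance check we'd want runnable before sign-off. This is a minimal Python sketch - the threshold values and function names are illustrative assumptions, not terms from any real contract:

```python
# Minimal sketch: checking a contractual performance threshold against
# your own held-out evaluation set. All names and numbers are illustrative.

MIN_ACCURACY = 0.95       # e.g. "95% accuracy on the agreed evaluation set"
MAX_CRITICAL_ERRORS = 0   # e.g. "zero errors in the high-harm category"

def evaluate(predictions, labels, critical_ids):
    """Return (accuracy, critical_error_count) for one evaluation run."""
    correct = sum(p == l for p, l in zip(predictions, labels))
    accuracy = correct / len(labels)
    critical_errors = sum(
        1 for i, (p, l) in enumerate(zip(predictions, labels))
        if p != l and i in critical_ids  # critical_ids: indices of high-harm cases
    )
    return accuracy, critical_errors

def meets_contract(predictions, labels, critical_ids):
    accuracy, critical_errors = evaluate(predictions, labels, critical_ids)
    return accuracy >= MIN_ACCURACY and critical_errors <= MAX_CRITICAL_ERRORS
```

The point is that "accuracy" only becomes enforceable when the evaluation set and the pass/fail rule are both written down and repeatable.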

Building an AI Risk Framework

Rather than treating AI liability as a purely legal problem, we recommend building a practical risk framework that reduces the likelihood and impact of AI failures.

Tier Your AI by Risk Level

Not all AI deployments carry the same risk:

Low risk - Internal productivity tools, content suggestions, data summarisation where humans review all outputs. If the AI is wrong, someone catches it before it matters.

Medium risk - Customer-facing information, process automation where errors cause operational problems but are recoverable. Think a wrong delivery estimate, or an incorrect classification that gets corrected downstream.

High risk - Financial advice, medical recommendations, legal guidance, safety-related decisions. Errors cause direct, potentially irreversible harm.

Each tier should have different governance, testing, monitoring, and insurance requirements.
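One way to make the tiers stick is to encode them, so every deployment has to declare a tier and inherits its controls automatically. A minimal Python sketch - the control flags and their defaults are our illustrative assumptions, not a standard:

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

@dataclass(frozen=True)
class TierControls:
    human_review_required: bool      # must a person approve each output?
    incident_plan_required: bool     # must an AI incident plan exist?
    insurance_review_required: bool  # broker sign-off before go-live?

# Illustrative defaults - tune these to your own governance framework.
CONTROLS = {
    RiskTier.LOW: TierControls(False, False, False),
    RiskTier.MEDIUM: TierControls(False, True, True),
    RiskTier.HIGH: TierControls(True, True, True),
}

def controls_for(tier: RiskTier) -> TierControls:
    return CONTROLS[tier]
```

Even a mapping this simple forces the right pre-deployment conversation: which tier is this, and who signed off on it?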

The Human-in-the-Loop Question

The single most effective liability mitigation strategy is keeping humans in the decision chain. But this isn't as simple as it sounds.

A human who rubber-stamps every AI recommendation without genuine review isn't providing meaningful oversight. For a human-in-the-loop defence to hold up, you need to demonstrate that:

  • The human had the expertise to evaluate the AI's output
  • The human had time and context to make an informed judgment
  • The human actually reviewed the output (not just clicked "approve")
  • The human had authority to override the AI

If your "human review" process involves someone clicking through 200 AI decisions per hour, that's not oversight. That's a liability trap.
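If you want that evidence to exist when a claim arrives, capture it at review time - and refuse to log a rubber stamp. A minimal sketch; the field names and the ten-second floor are illustrative assumptions, not a recommended standard:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

MIN_REVIEW_SECONDS = 10.0  # illustrative floor - tune per risk tier

@dataclass
class ReviewRecord:
    decision_id: str
    reviewer_id: str
    reviewer_qualified: bool  # reviewer has expertise for this output type
    seconds_spent: float      # time actually spent on this item
    overrode_ai: bool         # did the human change the AI's answer?
    can_override: bool        # did the reviewer have authority to change it?
    reviewed_at: datetime

def record_review(decision_id, reviewer_id, reviewer_qualified,
                  seconds_spent, overrode_ai, can_override):
    # Reject reviews that could not plausibly have been genuine oversight.
    if not reviewer_qualified:
        raise ValueError("reviewer lacks expertise for this output type")
    if not can_override:
        raise ValueError("review without override authority is not oversight")
    if seconds_spent < MIN_REVIEW_SECONDS:
        raise ValueError("too fast to be a genuine evaluation")
    return ReviewRecord(decision_id, reviewer_id, reviewer_qualified,
                        seconds_spent, overrode_ai, can_override,
                        datetime.now(timezone.utc))
```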

Monitoring and Incident Response

When an AI system makes a consequential error, how quickly you respond affects both the damage and your legal exposure.

Build monitoring that catches (see the sketch after this list):

  • Outputs that fall outside expected ranges
  • Patterns of errors (even small ones that individually seem harmless)
  • User complaints or override rates
  • Model performance degradation over time
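Here's that sketch - a rolling window over recent decisions that flags an elevated human-override rate, one of the cheapest early-warning signals. The window size and threshold are illustrative:

```python
from collections import deque

WINDOW = 500              # illustrative: look at the last 500 decisions
MAX_OVERRIDE_RATE = 0.05  # illustrative: alert if humans override > 5%

class OverrideMonitor:
    """Tracks how often humans override the AI over a rolling window."""

    def __init__(self):
        self.recent = deque(maxlen=WINDOW)

    def record(self, overridden: bool) -> None:
        self.recent.append(overridden)

    def should_alert(self) -> bool:
        if len(self.recent) < WINDOW:
            return False  # not enough data for a stable rate yet
        return sum(self.recent) / len(self.recent) > MAX_OVERRIDE_RATE
```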

Have an incident response plan specifically for AI failures:

  1. Detect the failure
  2. Stop the AI from making the same mistake (circuit breaker - see the sketch after this list)
  3. Assess the scope of impact
  4. Notify affected parties if required
  5. Document everything
  6. Fix and verify before resuming
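Step 2 is the one teams most often skip, so here's a minimal circuit-breaker sketch. When tripped, requests route to a fallback (a manual queue, say) instead of the model; the serve() wrapper and fallback are our hypothetical integration points:

```python
class AICircuitBreaker:
    """Gate in front of the model: when tripped, stop serving AI answers
    and route requests to a human or manual fallback instead."""

    def __init__(self, fallback):
        self.tripped = False
        self.fallback = fallback  # e.g. enqueue for manual handling

    def trip(self, reason: str) -> None:
        self.tripped = True
        print(f"AI serving halted: {reason}")  # swap in real alerting

    def serve(self, request, model_fn):
        if self.tripped:
            return self.fallback(request)
        return model_fn(request)

# Wiring it to the override monitor from the previous sketch:
#   if monitor.should_alert():
#       breaker.trip("override rate exceeded threshold")
```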

Insurance Options for AI Deployment

The insurance market for AI is evolving rapidly. Here's what's available now:

AI-specific insurance products - Several insurers now offer policies specifically designed for AI liability. These are relatively new and premiums vary widely based on the use case and risk profile.

Technology E&O (Errors and Omissions) - Professional indemnity for technology companies. If you're deploying AI as part of a service, this is your starting point.

Algorithmic liability endorsements - Some insurers offer endorsements to existing policies that extend coverage to AI-specific scenarios.

Parametric insurance - For high-volume AI systems, some organisations use parametric insurance that pays out based on measurable triggers (e.g., error rate exceeding a threshold) rather than individual claims.

Self-insurance - For lower-risk AI deployments, setting aside reserves to cover potential claims may be more cost-effective than commercial insurance.

Our recommendation: work with a broker who understands technology risk, not just general business insurance. The difference in coverage quality is substantial.

Practical Steps for Australian Businesses Deploying AI

Here's the checklist we work through with our clients:

  1. Classify your AI by risk tier before deployment
  2. Review existing insurance for AI coverage gaps
  3. Update vendor contracts with AI-specific liability terms
  4. Implement meaningful human oversight proportionate to risk
  5. Build monitoring and incident response specifically for AI failures
  6. Document your risk assessment process (regulators want to see your thinking)
  7. Get legal advice from a lawyer who understands both technology and Australian regulatory requirements
  8. Review and update quarterly as regulations evolve

What's Coming in Australian AI Regulation

The Australian government has signalled it will introduce AI-specific regulation. The 2024 voluntary AI Safety Standard is likely a precursor to mandatory requirements.

Expect to see:

  • Mandatory risk assessments for high-risk AI applications
  • Transparency requirements (telling people they're interacting with AI)
  • Accountability frameworks that assign clear responsibility
  • Sector-specific rules for financial services, healthcare, and government

Organisations that build good governance now will be ahead when regulations arrive. Those that wait will face expensive retrofitting.

How Team 400 Approaches AI Risk

At Team 400, we build AI liability considerations into every project from the beginning, not as an afterthought. Our AI consulting engagements include risk assessment as a standard phase, and our AI integration work includes monitoring and circuit-breaker patterns by default.

We've helped organisations across financial services, insurance, and professional services deploy AI systems that deliver value while managing risk appropriately. We can't give you legal advice - that's for your lawyers - but we can make sure the technology is built with liability management in mind.

If you're deploying AI and haven't addressed the liability question, let's have that conversation before it becomes urgent.