AI Governance Failures Expose Organizations to Professional Liability Risks

Published 2025-10-22 14:48:37 | riskandinsurance.com

Recent incidents in Australia show how poor oversight of AI tools can lead to costly errors and privacy violations, exposing organizations to significant risk, particularly in the tech and consultancy sectors.

Two recent incidents demonstrate that the primary risk from artificial intelligence stems not from the technology itself but from inadequate governance and quality assurance around AI-assisted work, according to a commentary from Lockton. The incidents reveal a pattern of organizational failure in managing AI tools effectively, writes Mark Luckin, national manager of Cyber & Technology for Lockton Australia.

In the first case, a consulting firm produced a report using Azure OpenAI that contained non-existent references and fabricated court quotes, leading to corrections and a partial refund to the client. The second incident saw a New South Wales government department contractor upload a spreadsheet containing thousands of rows of sensitive flood victim data directly into ChatGPT, creating a significant privacy breach.

These cases underscore how organizations across the tech and consultancy sectors are rushing to adopt AI for efficiency gains without establishing proper safeguards, Luckin writes. The commentary identifies three critical risk areas emerging from such failures: uncontrolled data leakage through AI prompts, lack of oversight on where data resides when processed by external AI systems, and the potential for inaccurate AI outputs to cause client losses and reputational damage.

For the risk management and insurance industry, these incidents present immediate challenges in determining appropriate coverage. Traditional cyber insurance policies may need explicit updates to cover AI-related data leakage, particularly when sensitive information is shared with AI vendors.

Organizations must implement comprehensive AI governance frameworks that go beyond simple usage policies. Practical measures include a QA checklist for AI-assisted deliverables that requires two-person verification of quotations and references, human review of all numerical claims, and documentation of prompts and drafts to evidence due diligence; a sketch of how such a checklist could work as a release gate appears below.
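To make the checklist concrete, the following is a minimal, hypothetical Python sketch of such a release gate. It is not drawn from Lockton's guidance; all names (DeliverableRecord, qa_gate, the reviewer identifiers) are invented for illustration. It simply blocks a deliverable until two people have verified quotations and references, a named human has reviewed the numerical claims, and the prompts and drafts are on record.

```python
# Hypothetical QA gate for AI-assisted deliverables (illustrative only;
# names and structure are assumptions, not taken from the commentary).
from dataclasses import dataclass, field

@dataclass
class DeliverableRecord:
    """Audit trail for one AI-assisted deliverable."""
    title: str
    prompts: list[str] = field(default_factory=list)   # every prompt sent to the AI tool
    drafts: list[str] = field(default_factory=list)    # intermediate drafts, kept as due-diligence evidence
    citation_checkers: set[str] = field(default_factory=set)  # staff who verified quotes/references
    numbers_reviewed_by: str | None = None             # human who re-checked all numerical claims

    def log_prompt(self, prompt: str, draft: str) -> None:
        """Record a prompt/draft pair so the work can be evidenced later."""
        self.prompts.append(prompt)
        self.drafts.append(draft)

def qa_gate(record: DeliverableRecord) -> list[str]:
    """Return blocking issues; an empty list means the deliverable may ship."""
    issues = []
    if len(record.citation_checkers) < 2:
        issues.append("Quotations and references need two-person verification.")
    if record.numbers_reviewed_by is None:
        issues.append("Numerical claims need a named human reviewer.")
    if not record.prompts or not record.drafts:
        issues.append("Prompts and drafts must be documented.")
    return issues

# Example: the report fails the gate until all three controls are satisfied.
report = DeliverableRecord(title="Client assurance report")
report.log_prompt("Summarise the 2024 audit findings", "draft v1 ...")
report.citation_checkers.update({"a.reviewer", "b.reviewer"})
report.numbers_reviewed_by = "c.analyst"
print(qa_gate(report))  # [] -> no blocking issues
```

The point of the sketch is that each control in the checklist becomes an auditable, machine-checkable condition rather than a policy statement, which is exactly the kind of evidence insurers and clients are likely to ask for after an incident.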