The AI mistake that should terrify every risk & governance executive
What happens when your AI lies? You pay. Literally.
In late 2022, Jake Moffatt went to Air Canada’s website to book a flight home for his grandmother’s funeral. He asked the airline’s chatbot if he could apply for a bereavement refund after his trip. The chatbot said yes. So he booked the full fare and dealt with his grief.
Weeks later, when he followed the instructions to claim his refund, the airline denied him. Their policy didn’t allow it, and they argued they weren’t responsible for what the chatbot said. In court, Air Canada went so far as to suggest the bot was a “separate legal entity.”
The British Columbia Civil Resolution Tribunal didn’t buy it. It ordered the airline to pay C$812.02 in damages, noting that customers have every reason to trust what is published, or said, on a company’s website. The dollar amount was small, but the case is a clear warning: the monetary, reputational and brand consequences of AI governance failures can be significant.
This wasn’t a deepfake. It wasn’t a hallucinated image or synthetic voice clone. It was a basic automation failure, and it cost real money, reputation, and credibility. For Australian risk and governance leaders, the lesson is stark: AI governance can’t be an afterthought.
AI Governance: Part of the System, Not a Side Project
At Murrumbidgee Irrigation, Data Science Manager Eloy Gonzales is crystal clear about where AI governance belongs. “AI governance should be seamlessly interwoven into our organisation's existing risk and compliance frameworks, acting as a critical thread rather than a standalone patch.”
Gonzales believes that treating AI governance separately risks “overlooking the interconnected nature of AI risks with broader organisational risks like data security, regulatory compliance (e.g., GDPR, CCPA), ethical considerations, and operational resilience.”
Instead, his team extends existing governance structures to handle emerging AI risks – things like algorithmic bias, explainability, misuse, and data vulnerabilities. It’s a full-stack approach: risk policies, compliance rules, data governance, monitoring, and audit all working together. “By embedding AI governance within our established risk and compliance infrastructure, we can ensure a more consistent, comprehensive, and efficient approach to managing the risks and harnessing the benefits of AI.”
Over at AIA Australia, Wayne Blackshaw, Head of Technology Risk, Data Protection and Privacy – Enterprise Risk and Compliance, sees the same need, but he cautions against assuming current systems are enough. “AI governance is another dimension or tool which fits with any others requiring a risk and governance approach to enable effectively. The rapidly evolving, fast moving and not well understood elements of AI do warrant AI governance practices to become more responsive and agile.”
How to Know Which AI Projects Need the Most Scrutiny
At Murrumbidgee, identifying risk isn’t a guessing game; it’s a structured evaluation. Gonzales explained: “Our organisation employs a multi-faceted process to identify and assess the risk level of different AI systems, allowing us to strategically prioritise our governance efforts.” That includes inventorying all AI systems, conducting cross-functional risk reviews, and analysing each use case for potential harm, including privacy, bias, misuse, and reputational fallout. Gonzales’ team also evaluates data sensitivity, explainability, the level of autonomy, and the model’s origin (internally built or vendor-supplied). Based on these factors, each AI system is given a risk rating: low, medium, high, or critical. The results then determine its governance requirements.
“High and critical risk systems... receive the most intensive governance oversight, including rigorous testing and validation, detailed explainability requirements, strict data governance policies, regular audits, and potentially human-in-the-loop decision-making processes.”
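To make the idea concrete, here is a minimal sketch, in Python, of how factor scores like these could roll up into a single risk tier. The field names, scales and thresholds are invented for illustration; they are not Murrumbidgee Irrigation’s actual framework.

```python
# Illustrative only: hypothetical factor names, scales and thresholds.
from dataclasses import dataclass

@dataclass
class AISystemAssessment:
    name: str
    data_sensitivity: int    # 1 (public data) .. 4 (highly sensitive / personal)
    autonomy: int            # 1 (advisory only) .. 4 (acts without review)
    explainability_gap: int  # 1 (fully interpretable) .. 4 (opaque vendor model)
    potential_harm: int      # 1 (negligible) .. 4 (severe privacy or reputational harm)

def risk_tier(a: AISystemAssessment) -> str:
    """Combine factor scores into a single low/medium/high/critical rating."""
    score = a.data_sensitivity + a.autonomy + a.explainability_gap + a.potential_harm
    if score >= 14:
        return "critical"
    if score >= 11:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

# Example: a customer-facing chatbot that answers policy questions unsupervised.
chatbot = AISystemAssessment("refund-chatbot", data_sensitivity=2, autonomy=4,
                             explainability_gap=3, potential_harm=3)
print(risk_tier(chatbot))  # -> "high": would trigger the most intensive oversight
```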
Blackshaw adds that at AIA, they are still building their governance models, but the principles are similar. “The primary focus areas are on accuracy and quality of information used by AI models as well as ensuring accuracy and explainability when used. The latter is heavily driven by the expectations of our regulators and customers/partners.”
Keeping Humans in the Loop
One of the most critical safeguards is ensuring AI doesn’t act on its own without human oversight, something Moffatt never got from Air Canada’s bot.
Gonzales outlines his team’s safeguards. “Outputs from our AI models, particularly those that trigger actions, are not directly executed. Instead, they serve as recommendations that require review, validation, and explicit approval by a human expert before any action is taken.” He adds that “We define the roles and responsibilities of human reviewers and approvers for each AI application... and we encourage and facilitate feedback from human reviewers on the AI’s recommendations.”
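As a sketch of that pattern, the snippet below shows an approval gate in which a model’s output is stored as a recommendation and nothing executes until a named reviewer explicitly approves it. The class and function names are hypothetical, not Murrumbidgee’s actual system.

```python
# Illustrative only: a minimal human-in-the-loop gate.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Recommendation:
    system: str                       # which AI model produced this
    action: str                       # proposed action, e.g. "approve_refund"
    rationale: str                    # model explanation shown to the reviewer
    approved: bool = False
    reviewer: str | None = None
    reviewer_feedback: str | None = None
    decided_at: datetime | None = None

def human_review(rec: Recommendation, reviewer: str, approve: bool,
                 feedback: str = "") -> Recommendation:
    """Record an explicit human decision; nothing executes until approved is True."""
    rec.approved = approve
    rec.reviewer = reviewer
    rec.reviewer_feedback = feedback
    rec.decided_at = datetime.now(timezone.utc)
    return rec

def execute(rec: Recommendation) -> None:
    # Hard gate: an unapproved recommendation can never trigger an action.
    if not rec.approved:
        raise PermissionError(f"{rec.action} requires explicit human approval")
    print(f"Executing {rec.action} (approved by {rec.reviewer})")
```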
Blackshaw’s approach is policy driven. At AIA, any medium or high-risk use case must meet a checklist of mandatory controls. “Regular monitoring for hallucination, bias and drift are included within the areas to be monitored as is ensuring any material decisions are made by a human.”
Explainability Isn’t Optional Anymore
“I regard explainability, transparency and interpretability as essential controls for any AI strategy,” says Fernando Mourão, Head of Responsible AI at Seek. He sees explainability as the key to effective communication between AI providers and all stakeholders. “This builds trust (what we cannot trust cannot create value), enhances system control, ensures business clarity, supports informed decision-making, and enables regulatory compliance.”
Similarly, Gonzales’ team invests in detailed model documentation, explainability frameworks, and staff training. “We strive to deploy AI models that offer a degree of explainability... we prioritise techniques that provide insights into the key factors influencing the AI’s output, enabling more informed human decisions.”
At AIA, Blackshaw notes that explainability is already expected. “Even without AI, any calculated or derived result must be able to be explained by the owner of the tool.”
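For teams wondering what “insights into the key factors influencing the AI’s output” can look like in practice, permutation importance is one widely used technique. The sketch below uses scikit-learn on a public dataset; it is illustrative only, not a description of either organisation’s tooling.

```python
# Illustrative only: one common explainability technique (permutation importance).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top5 = sorted(zip(X.columns, result.importances_mean), key=lambda p: -p[1])[:5]
for name, score in top5:
    print(f"{name}: {score:.3f}")
```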
Embedding Ethics Into the AI Lifecycle
Ethics isn’t a last-minute patch at Murrumbidgee; it’s embedded from the beginning.
Gonzales explains that “We are developing and refining a comprehensive Ethical AI Framework that explicitly defines our core ethical principles... and provides actionable guidelines for their application across the AI lifecycle.” That includes bias detection, privacy-preserving techniques, ethical impact assessments, stakeholder feedback channels, and training.
Mourão puts it this way: “We become what we measure, so redesigning business success metrics creates incentives that propel AI governance forward.” He goes on to note that “People precede processes, and processes precede technology; upskill your people to ask the right questions, empower them with appropriate processes to translate abstract ethical principles into clear behavioural expectations.”
Blackshaw says the ethical principles are already in place; what matters is ensuring AI follows them. “These ethical dimensions are core principles within our business so the use of AI will have these applied as a matter of course.”
Who Owns the System When Something Goes Wrong?
At Murrumbidgee, Gonzales explains that “For each AI system, we explicitly assign ownership to a specific business unit or functional team that is the primary beneficiary or user of the system.”
Ownership is broken down across data owners, model validators, executive sponsors, and SLAs between technical and business teams. “Ensuring clear accountability at the executive level has involved several key learnings... Specifically, linking AI initiatives to business outcomes and establishing regular reporting mechanisms.”
Blackshaw explains AIA’s model similarly. “At a system level, we have a technology system owner in addition to a business service owner... Under APRA guidelines the Board holds ultimate accountability which is exercised through robust governance and reporting from executive management.”
Governance Doesn’t End at Deployment
For Gonzales, ongoing monitoring is just as important as upfront planning. “We implement comprehensive MLOps pipelines that continuously monitor key performance indicators... Techniques to detect data drift will be used... anomaly detection mechanisms will be implemented... and we establish automated or semi-automated pipelines for retraining models on new data.”
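One concrete example of the kind of drift check such a pipeline might run is a two-sample statistical test comparing a feature’s live distribution against its training baseline. The sketch below is illustrative, assuming a hypothetical numeric feature and threshold; it is not Murrumbidgee’s actual implementation.

```python
# Illustrative only: a simple data-drift check via a Kolmogorov-Smirnov test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
baseline = rng.normal(loc=50.0, scale=5.0, size=10_000)  # feature at training time
live = rng.normal(loc=53.0, scale=5.0, size=2_000)       # same feature in production

stat, p_value = ks_2samp(baseline, live)
if p_value < 0.01:
    # In a real pipeline this would raise an alert and possibly queue retraining.
    print(f"Data drift detected (KS statistic={stat:.3f}, p={p_value:.2g})")
else:
    print("No significant drift detected")
```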
His team also runs annual evaluation workshops across every model, considering safety, alignment, risk, and business relevance. “The evaluation results in clear recommendations for each AI system, ranging from continued operation to performance improvements, risk mitigation strategies, or retirement.”
Blackshaw notes AIA is earlier in its maturity curve. “We are early in this journey with significant and higher risk AI use cases generally in a proof of concept or pilot phase and, as such, we are still developing our thinking and approaches in this regard.”
The Bottom Line
Jake Moffatt didn’t sue Air Canada for an AI failure. He just wanted a refund. But in trying to cut costs and streamline service with automation, the airline accidentally stepped into a governance gap, and paid for it. The AI didn’t go to court. The company did.
“Trust is built through transparency,” Mourão says. “What we cannot trust cannot create value.”
And what you can’t govern? That might just cost you far more than you bargained for.
Connect with and hear from AI, risk & governance leaders, including Fernando Mourão, Wayne Blackshaw & Eloy Gonzales, at the AI Governance Summit 2025 on 7 August in Sydney. Learn more.
