AI can trigger liability faster than governance and insurance can keep up.

Last week, an AI risk story making the rounds (i.e., it landed in my inbox) described a Fortune 100 company whose AI agent autonomously executed 127 contracts totaling $43 million before anyone caught that it had exceeded its authority and locked the business into terms the legal team would never have approved (including uncapped indemnity liability). The story claimed it would cost the company $8.7 million to try to unwind what could be unwound, while carrying ongoing additional exposure on what couldn’t be. The article goes on to discuss how AI governance can help bridge the resulting insurance gap, since insurers are taking steps to carve out automated decisions and AI agents from D&O policies.
I want to be careful here! I couldn’t find independent reporting that substantiates those specific figures or identifies the company. The only public source of the claim I found is an opinion/newsletter post offering the anecdote without documentation. Maybe the newsletter has nonpublic information, but I don’t want to risk presenting this as a verified incident if it’s not! Especially since the article appears to rely on a hallucinated piece of case law to substantiate the claim that federal courts are ruling that AI-executed contracts are binding. A whole lot of this newsletter article doesn’t pass the smell test, but its basic premise, urging businesses to adopt AI governance, is sound.
Even if these facts are complete hogwash, the pattern is absolutely plausible. Organizations are deploying systems that can act at machine speed, create commitments, and trigger liability faster than governance and insurance can keep up.
To add some spice to this quagmire of a liability stew, the U.S. regulatory story is getting noisier rather than simpler. In December 2025, the White House issued an executive order explicitly aimed at challenging certain state AI laws (including through the use of its own “AI Litigation Task Force”), evaluating “onerous” state laws, and even tying some federal funding conditions to whether states enforce certain AI rules. If you’re an executive looking for certainty, this kind of federal posture can feel like a signal that AI compliance pressure will ease. But… an executive order doesn’t automatically erase state laws or eliminate the underlying harms AI can cause, and the legal teeth of preemption-by-EO are limited. Even if state-level AI legislation slows down while states work through this EO, your business risk does not.
That’s why AI governance belongs in a modern privacy program. You don’t need to spout off a bunch of fabricated facts to know that AI governance is a corporate resilience strategy that creates a competitive advantage for your business when it’s time to land new clients or customers, attract investors, and defend your valuation as you take your business to market. So, let’s dig into some actual facts…
When companies think “AI risk,” they often picture hallucinations (see above), bias, or model drift. Those are real problems. But a bigger operational issue is authority and automation. The more you use AI to draft, negotiate, route approvals, respond to customers, or execute workflows, the more you create situations where the system’s outputs (particularly automated outputs) may be treated as the company’s stance, decision, or action, especially when customers, vendors, and partners reasonably believe the system is speaking “for the company.”
We’re already seeing the legal system treat AI outputs as the company’s responsibility in public-facing contexts. In a widely reported example, Air Canada was held responsible for misinformation provided by its chatbot and ordered to compensate a customer, despite attempts to distance itself from the chatbot’s statements. That case wasn’t about an autonomous contracting agent, but it’s a clean illustration of the broader principle that if the AI is deployed in your business process, you don’t get to shrug and call it “the bot’s fault.” (See Moffatt v. Air Canada, 2024 BCCRT 149 (CanLII))
The practical takeaway is simple. AI tools need a permissions model, not just a prompt. If an AI system can “do,” not just “suggest,” then you need governance controls that look a lot like financial controls, including authority limits, approvals, auditability, and escalation paths. Implementing this may not be so simple. Even if the tool itself has some of these processes built in, that can’t be the end of the story, because technology is not infallible and, as we know, we need human review over technology and AI processes.
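To make “permissions model, not just a prompt” concrete, here is a minimal sketch of what an authority-limit gate could look like, written in Python purely for illustration. Every name and threshold in it is hypothetical (I made them up for this example), and your real controls would live wherever your workflow automation lives. The shape is the point: limits that sit outside the prompt, approvals that are enforced rather than suggested, and a record of every decision.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical, illustrative limits. The point is that they live outside the
# prompt and are enforced before anything executes.
AUTO_EXECUTE_LIMIT_USD = 10_000      # below this, the AI may act alone
HUMAN_APPROVAL_LIMIT_USD = 250_000   # up to this, a human must approve
PROHIBITED_TERMS = {"uncapped indemnity", "exclusivity", "auto-renewal"}

@dataclass
class ProposedAction:
    description: str
    amount_usd: float
    contract_terms: set[str] = field(default_factory=set)

audit_log: list[dict] = []  # in practice: append-only and producible on request

def gate(action: ProposedAction) -> str:
    """Decide whether the AI may execute, must escalate, or is blocked."""
    if action.contract_terms & PROHIBITED_TERMS:
        decision = "blocked: prohibited terms, route to legal"
    elif action.amount_usd <= AUTO_EXECUTE_LIMIT_USD:
        decision = "execute: within AI authority"
    elif action.amount_usd <= HUMAN_APPROVAL_LIMIT_USD:
        decision = "escalate: human approval required"
    else:
        decision = "escalate: senior leadership and legal approval required"
    # Auditability: record what was proposed and what the control decided.
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "proposed": action.description,
        "amount_usd": action.amount_usd,
        "decision": decision,
    })
    return decision
```

The specifics will differ for every business. The design choice that matters is that the AI cannot rewrite its own limits, and every decision leaves a trail you can hand to a claims adjuster, a regulator, or your own legal team.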
Even if the legal landscape feels fragmented, the insurance market is converging on a single point: AI risk is hard to model, potentially systemic, and therefore something many carriers want to limit or exclude. Recent reporting by the Financial Times and Barron’s describes major insurers moving to exclude or restrict AI-related liabilities across corporate policies, driven by concerns about large-scale, correlated losses. (See the Barron’s article from November 24, 2025.)
And standard-setting bodies are moving too! Verisk’s ISO has discussed new general liability endorsements addressing generative AI exposures, with filings pointing to January 1, 2026 as a proposed effective date in at least some contexts. If you’ve lived through the “silent cyber” era, this should feel familiar. We went through it with cyber insurance a decade or two ago. Coverage that used to be assumed becomes contested, narrowed, or carved out (often at renewal and often without the business realizing what changed).
At the same time, the Lloyd’s market is also signaling that this is a distinct category of risk. Insurers at Lloyd’s have supported new products designed to cover losses from AI tool errors (including chatbot errors), explicitly positioning them as a response to the growing volume of real-world incidents and third-party claims. When new coverage products are being created for a risk, that’s usually a sign that the risk isn’t reliably handled by existing policies, or that there is friction over where the risk belongs.
If you’re an executive, AI governance isn’t just about regulators. It’s about whether your risk transfer strategy still works when something goes wrong. And for many companies, that question remains unanswered until you do the hard work of mapping AI use cases to policy language and combing through your exclusions to confirm what is and is not covered.
If your organization operates internationally (or sells into the EU), AI governance can’t be treated as a U.S.-only policy exercise. It’s true that the EU’s proposed AI Liability Directive has been politically uncertain, and the European Commission has indicated it was withdrawing that proposal as part of its 2025 work program.
But that does not mean the EU is backing away from liability exposure. In 2024, the EU adopted a new Product Liability Directive (Directive 2024/2853), taking effect December 9, 2026, which modernizes strict-liability rules and explicitly treats software (including AI systems) as within scope. If your product (or service-as-product) relies on software/AI, the EU liability landscape is evolving in ways that can increase exposure and lower friction for claimants.
So even if a particular AI-specific liability vector isn’t moving, the broader direction still matters: AI risk is being treated as product risk, consumer risk, and governance risk.
In Resilience by Design, AI governance is not a 40-page policy that no one reads. It’s a set of operating rules that reduce the probability of an incident and reduce the blast radius when one happens.
Here are the foundations that matter in the real world:
Step 1: You inventory where AI is used (including shadow AI in business units), and you categorize use cases by risk—especially anything that can (a) create external commitments, (b) touch regulated data, or (c) materially impact individuals.
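As a sketch of what that categorization can look like in practice (the use-case names, owners, and the three-tier scheme below are all invented for illustration), the (a)/(b)/(c) criteria translate into something a business unit can actually fill in:

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    name: str                             # e.g., a hypothetical "contract drafting assistant"
    owner: str                            # the accountable business unit
    creates_external_commitments: bool    # criterion (a)
    touches_regulated_data: bool          # criterion (b)
    materially_impacts_individuals: bool  # criterion (c)

def risk_tier(uc: AIUseCase) -> str:
    """Deliberately simple illustrative tiering: any hit on (a)-(c) raises the tier."""
    hits = sum([uc.creates_external_commitments,
                uc.touches_regulated_data,
                uc.materially_impacts_individuals])
    if hits == 0:
        return "low: standard controls"
    if hits == 1:
        return "elevated: governance review required"
    return "high: human-in-the-loop and legal sign-off required"

# Hypothetical inventory entries, including the shadow AI you find in business units.
inventory = [
    AIUseCase("marketing copy assistant", "Marketing", False, False, False),
    AIUseCase("contract negotiation agent", "Sales Ops", True, False, False),
    AIUseCase("claims triage model", "Operations", True, True, True),
]

for uc in inventory:
    print(f"{uc.name}: {risk_tier(uc)}")
```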
Step 2: You set clear “human-in-the-loop” requirements for the high-risk bucket, define authority limits (what the AI can and cannot do), and require audit logs that can actually be produced when a claim, regulator, or insurer asks what happened.
Ask yourself the following questions every time someone hands you a business case: What can the AI do on its own? What requires a human sign-off before it becomes the company’s action? Who can change those limits? And can you prove, after the fact, what happened and who approved it?
This might look a lot like a signing authority policy because it is!
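Because it really is a signing authority policy, one way to write the authority limits down is literally as a signing-authority-style matrix. The action types, thresholds, and approver roles below are placeholders I invented for the sketch, not recommended values:

```python
# Hypothetical signing-authority-style matrix for AI-assisted actions.
AI_AUTHORITY_MATRIX = {
    "send customer response":   {"ai_alone": True,  "approver": None},
    "issue refund":             {"ai_alone": True,  "approver": "team lead", "ai_limit_usd": 500},
    "execute contract":         {"ai_alone": False, "approver": "legal + budget owner"},
    "modify contract terms":    {"ai_alone": False, "approver": "legal"},
    "commit to delivery dates": {"ai_alone": False, "approver": "operations lead"},
}

def requires_human(action_type: str, amount_usd: float = 0.0) -> bool:
    """True if a human must approve before the AI's output becomes the company's action."""
    rule = AI_AUTHORITY_MATRIX.get(action_type)
    if rule is None:
        return True  # unknown action types default to human review
    if not rule["ai_alone"]:
        return True
    limit = rule.get("ai_limit_usd")
    return limit is not None and amount_usd > limit
```

Note the default: anything the matrix doesn’t recognize goes to a human. That single choice prevents a lot of “the agent found a workflow we never thought about” surprises.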
Step 3: You also connect AI governance to vendor oversight. If a vendor’s model is trained on your data, if a tool retains prompts, if subprocessors are involved, or if cross-border transfers occur, those are privacy and contracting issues. Your contracts should reflect this reality with specific AI clauses, security requirements, and incident notification obligations.
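If it helps, those Step 3 diligence questions can be tracked as structured data instead of living in email threads. The fields below are a hypothetical starting point drawn from the issues named above (training on your data, prompt retention, subprocessors, cross-border transfers, and incident notification), not an exhaustive checklist:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VendorAIReview:
    vendor: str
    tool: str
    trains_on_customer_data: Optional[bool] = None     # if yes: contractual carve-out or opt-out?
    retains_prompts: Optional[bool] = None              # retention period and deletion terms?
    uses_subprocessors: Optional[bool] = None           # listed, with flow-down obligations?
    cross_border_transfers: Optional[bool] = None       # transfer mechanism documented?
    ai_clauses_in_contract: bool = False                # specific AI terms, not just boilerplate
    incident_notification_hours: Optional[int] = None   # whatever you negotiate, in writing

    def open_questions(self) -> list[str]:
        """Anything still None is an unanswered diligence question."""
        fields_to_check = ("trains_on_customer_data", "retains_prompts",
                           "uses_subprocessors", "cross_border_transfers",
                           "incident_notification_hours")
        return [f for f in fields_to_check if getattr(self, f) is None]

# Hypothetical example entry
review = VendorAIReview(vendor="ExampleVendor", tool="drafting-assistant")
print(review.open_questions())  # everything is still open until diligence is done
```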
Step 4: You channel evidence, not vibes. Take the evidence-first approach you should already be using with your privacy audits, incident response readiness, and insurance underwriting. That means gathering and inventorying artifacts: your AI use-case inventory, risk assessments, documented authority limits and approvals, audit logs, and vendor contracts with their AI clauses.
Step 5: You pressure-test it. If you’re running tabletop exercises (and you should), add an AI inject: “The assistant sent the wrong file,” “the agent executed outside approval limits,” “a vendor’s model leaked training data,” “a regulator asks for your AI risk assessment,” “the insurer asks whether you used excluded AI functions.” These are no longer edge cases. They’re exactly the gaps organizations discover only when DEATH!! (money, I mean money…) is on the line.
Step 6: You retain human oversight where it matters most. This is not an anti-AI stance, and I think companies taking a totally anti-AI stance are shooting themselves in the foot. AI is a productivity powerhouse for an increasingly large number of people and businesses, which means that completely barring it is quickly going to go from impractical to impossible. So, take the risk-based stance. Underwriters and governance frameworks alike draw a nice red line between fully autonomous decision making and a meaningful human in the loop. This is particularly critical where decisions affect money, rights, safety, or binding commitments. If this feels a lot like Step 2, well, it bears repeating!