Jan 6, 2026

Uninsured and ungoverned: how to get visibility over AI risks

Oliver Gould
Senior Software Engineer

Insurers are sending a clear warning about AI risk

Across the insurance industry, one message is coming through loud and clear: the risks from generative AI are widespread, immediate, and extremely hard to quantify. When actuaries and risk specialists, whose job it is to model liabilities, decide that a risk is too volatile to underwrite, organizations should take notice.

Recent policy filings, such as those shared by Hunton Andrews Kurth LLP, and broader market movements show many insurers responding to AI risk by broadly excluding AI-related claims from standard liability coverage. In some cases, exclusions bar coverage entirely for “any actual or alleged use, deployment, or development of artificial intelligence” across directors & officers, errors & omissions, and fiduciary liability policies.

Demand is real, but coverage is constrained

Insurers are updating policies with broad-reaching AI exclusions that give them significant leeway in coverage decisions. The trigger for an AI-related exclusion can be as broad as any workflow that includes AI.

The reasoning is simple: insurers treat AI as an unpredictable black box. If they cannot model its behavior, they cannot underwrite the consequences. The result is exclusions broad enough to match that level of uncertainty.

For companies, this creates a very real operational shift: the liability for AI incidents is moving back in-house.

A messy and risky landscape for businesses 

The core problem insurers are responding to is the same weakness many organizations face: companies don’t yet have reliable ways of observing, quantifying, and controlling AI behavior in production. Shadow AI usage, uncontrolled workflows, and agentic actions all contribute to this uncertainty. Examples include:

  • Shadow AI that employees adopt outside IT policies, making monitoring and control extremely difficult.
  • MCP (Model Context Protocol) server usage that injects dynamic workflows into AI agents beyond traditional API controls.
  • Agentic sequences in which AI performs multiple steps after a single human prompt, so each unreviewed step compounds the risk (see the sketch below).
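
To make the agentic-sequence risk concrete, here is a minimal, hypothetical sketch in Python. Everything in it (the Action shape, the plan_next_action placeholder, the tool registry) is invented for illustration; the point is that one approved prompt can trigger several unreviewed machine decisions.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Action:
        name: str
        args: dict

    def plan_next_action(context: list[str]) -> Action:
        # Placeholder for the LLM call: in a real agent the model picks
        # the next tool and its arguments, and that choice is opaque to
        # traditional monitoring.
        return Action(name="finish", args={"output": context[-1]})

    def run_agent(prompt: str, tools: dict[str, Callable], max_steps: int = 5) -> str:
        """One human prompt fans out into up to max_steps autonomous actions."""
        context = [prompt]
        for _ in range(max_steps):
            action = plan_next_action(context)
            if action.name == "finish":
                return action.args["output"]
            # Each tool call (database query, email send, MCP server call)
            # runs without fresh human approval; a bad output at step 1
            # becomes trusted input to steps 2..N, so risk compounds.
            context.append(str(tools[action.name](**action.args)))
        return context[-1]

    print(run_agent("Reconcile last quarter's invoices", tools={}))

Each loop iteration is a liability decision made by software rather than a person, which is precisely what makes the pattern hard for an underwriter to price.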

Insurers see these patterns as ambiguous loss drivers they cannot credibly price into coverage. As a result, companies may face higher premiums, narrower coverage, or outright exclusions tied to their use of generative AI.

If organizations want to remain insurable and control their exposure, they must do more than rely on legacy security tooling or hope that AI usage behaves itself. Today’s AI ecosystem isn’t designed for accountability:

  • Traditional DLP tools can miss sensitive content embedded in prompts rather than files (see the sketch after this list).
  • CASBs can restrict access to apps but don’t interpret AI behavior.
  • IAM governs identity but not AI decision logic or agent actions.
  • MCP workflows operate outside established security heuristics.
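
The DLP gap is the easiest of these to see in miniature. The toy check below is invented for illustration (the pattern, names, and workflow are all assumptions, and real DLP products are far more sophisticated), but the structural gap is the same: file scanners inspect documents, while sensitive values pasted into a prompt travel inside an API request body that a file-oriented control never touches.

    import re

    # Toy detector: a naive pattern for US Social Security numbers.
    SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

    def scan_file(path: str) -> bool:
        """File-oriented DLP: inspect documents at rest or as attachments."""
        with open(path, encoding="utf-8") as f:
            return bool(SSN_PATTERN.search(f.read()))

    # Sensitive data pasted into a chat prompt never touches a file: it
    # travels in the JSON body of a request to an AI endpoint, so a
    # file-scanning control never sees it.
    prompt = "Summarize this record: name=J. Doe, ssn=123-45-6789"
    print(bool(SSN_PATTERN.search(prompt)))  # True: catching it requires
                                             # inspecting the request itself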

These gaps mean many organizations are blind to the risks insurers are trying to avoid.

How an AI control plane fits in

This is exactly where SurePath AI fits.

SurePath AI provides the visibility and control that most enterprise AI stacks lack, helping companies demonstrate real risk management to insurers and internal stakeholders alike. When incidents happen, these capabilities give organizations the evidence and context needed to respond and remediate effectively:

  • Detailed audit logging tracks administrative actions, configuration changes, and sensitive information access.
  • Request logging shows who, when, and how AI was used, creating a verifiable record of AI involvement in workflows.
  • Violation data pinpoints sensitive exposures, policy violations, and risk events with clear labeling and context.
  • Integrated telemetry export allows automated delivery of enriched logs to SIEM and analytics tools, eliminating manual data collection during incident response (a record shape is sketched below).
  • MCP policy controls provide both broad governance and fine-grained access control for tools, models, and server interactions.
Organizations don’t just hope they’re secure; they can demonstrate observable, enforceable, auditable controls around their AI behavior.
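
As a rough illustration of what that evidence can look like, here is a hypothetical enriched request-log record. The field names and values are invented, not SurePath AI’s actual schema; they simply show the “who, when, and how” a SIEM needs in order to treat AI usage as a first-class security event.

    import json
    from datetime import datetime, timezone

    # Hypothetical record shape (all fields invented for illustration).
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": {"user": "j.doe@example.com", "group": "finance"},
        "model": "example-llm-v1",                  # invented model name
        "action": "chat.completion",
        "policy": {"decision": "redact", "rule": "pii-ssn"},
        "violation": {"type": "sensitive-data", "label": "SSN", "count": 1},
        "mcp": {"server": "crm-tools", "tool": "lookup_customer"},
    }

    # As structured JSON, a SIEM can alert on this like any other security
    # event, instead of an analyst reconstructing AI usage by hand.
    print(json.dumps(record, indent=2))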

Insurers are responding proactively to a rapidly expanding risk surface that neither they nor most organizations can reliably measure today. The message is simple: if you rely on generative AI, you need built-in controls and visibility that match the scale and speed of AI-driven workflows.

SurePath AI gives organizations the oversight, policy enforcement, and incident response capabilities that insurers expect and that modern AI adoption demands.