What Your Cyber Insurer Now Asks About AI: Five Renewal Questions and How to Answer Them

Brandon Sneider | March 2026


Executive Summary

  • Cyber insurance renewal applications are no longer AI-agnostic. Carriers including WR Berkley, AIG, Great American, Hiscox, and AXA XL have added AI-specific questionnaire sections to 2026 renewal applications. The CISO or CFO who last renewed with a standard security checklist will encounter new questions about AI governance, data handling, and human oversight that did not exist 12 months ago.
  • Five questions appear with increasing frequency across carrier applications. They cover AI tool inventory, acceptable use policies, data exposure through AI tools, human oversight of AI outputs, and AI-specific incident response. Companies that answer all five with documentation qualify for affirmative coverage. Companies that cannot answer them face sublimits, exclusions, or declination.
  • The questions are not hypothetical — they reflect real claims patterns. Coalition’s analysis of nearly 200 cyber insurance claims (2023-2025) found chatbots cited in 5% of web privacy claims. Over 40% of all cyber claims are denied, with 44% of denials tied to inadequate documentation (CompareCheapSSL Cyber Insurance Statistics, 2025-2026). The underwriter asking these questions is pricing actual loss history, not speculating.
  • Answering these questions costs less than failing to answer them. The governance documentation that satisfies all five questions — an AI tool registry, an acceptable use policy, data classification rules, human review protocols, and an AI incident response addendum — represents 15-30 hours of work for a mid-market IT and legal team. The premium penalty for not answering: 15-20% increases versus flat-to-negative renewals for well-documented accounts (S&P Global Ratings; Forrester Research, 2025-2026).
  • The questions your insurer asks today become the controls your insurer requires tomorrow. MFA went from an application question in 2020 to a non-negotiable coverage requirement by 2023. AI governance is on the same trajectory. The company that documents answers now locks in favorable coverage terms before the market hardens further.

The Five Questions and What They Actually Mean

These five questions are distilled from carrier application updates, broker renewal guidance (Founder Shield, WTW, Amwins, Marsh), and insurer public commentary through March 2026. Not every carrier asks all five. But every carrier asks at least two, and the trend line points in one direction.

Question 1: “What AI tools does your organization use, and what data do they process?”

What the underwriter is really asking: Do you know what AI is running inside your company, or are you going to discover it during a breach investigation?

This is the inventory question. Carriers have learned from a decade of shadow IT claims that the most expensive incidents involve tools the company did not know existed. Shadow AI compounds this: CyberArk (2026) reports machine identities outnumber human identities 82-to-1 on enterprise networks, and 88% of organizations still define “privileged user” as human-only — leaving AI agents entirely unmanaged.

The Geneva Association (2025-2026) found 90% of firms want dedicated AI cyber coverage, but carriers cannot price what they cannot see. The inventory is the prerequisite to everything else.

How to answer it well:

  • Tool registry: named tool, vendor, department, use case, approval date
  • Data classification: which tools touch PII, PHI, financial data, or client confidential information
  • Shadow AI discovery: SSO logs, network monitoring, or employee attestation documenting how unauthorized tools are detected
  • Update cadence: quarterly review cycle with a named owner

What gets you flagged: “We use Microsoft Copilot” with no further detail. The underwriter reads this as “we have no visibility into what else is running.”

Question 2: “Does your organization have a written AI acceptable use policy?”

What the underwriter is really asking: If an employee pastes customer PII into ChatGPT tonight, does your company have a documented rule that says they should not have — and can you prove the employee knew it?

This is the governance baseline question. It matters because employee misuse of public AI tools creates a breach notification obligation. If an employee uploads personally identifiable or health information to a public AI tool, insurers and regulators treat it as an unauthorized disclosure — a potential breach that triggers the same response as a traditional data incident (IAPP, 2025).

ISACA (2025) identifies the absence of an AI acceptable use policy as a primary underwriting blind spot. Insurers sum up the ungoverned-AI problem in a single phrase: “no policy can cover what it cannot see.”

How to answer it well:

  • Written policy: dated, signed by an executive, and distributed to all employees
  • Employee acknowledgment: signed receipt or electronic acknowledgment on file for each employee
  • Specific prohibitions: named categories of data that cannot be entered into AI tools
  • Annual review: documented review date and update cycle
  • Enforcement mechanism: what happens when an employee violates the policy, with at least one example of enforcement

What gets you flagged: A policy that exists on a shared drive but was never distributed. Underwriters now distinguish between “we have a policy” and “we deployed a policy.”

Question 3: “Do AI agents or automated systems have access to sensitive data, customer records, or core business systems?”

What the underwriter is really asking: Can a compromised or malfunctioning AI system reach your most valuable data without a human approving the access?

This question has escalated rapidly in 2025-2026 as agentic AI deployments expand. CyberArk reports that insurers now ask specifically about AI agent governance:

  • Do AI agents have unique identities with least-privilege access enforced?
  • How are authentication methods (API keys, certificates, tokens) vaulted, rotated, and revoked?
  • Do you log and monitor all agent actions in real time?
  • What happens if an agent behaves unexpectedly — do you have a rapid shutdown process?

The Saviynt/Cybersecurity Insiders CISO AI Risk Report (n=235, 2026) quantifies why underwriters care:

  • AI has access to core business systems: 71%
  • Organizations lack full visibility into AI identities: 92%
  • Cannot detect AI misuse: 95%
  • Do not enforce access policies for AI: 86%
  • Cannot contain a compromised AI agent: 95%

How to answer it well: Demonstrate that AI systems are treated like untrusted insiders — with standing access to nothing and just-in-time privilege elevation for specific tasks. Document which systems AI can reach, what data it can access, and who reviews access permissions quarterly.

What gets you flagged: “Our AI tools use the same service accounts as other applications.” The underwriter hears: god-mode permissions with no monitoring.
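The “untrusted insider” posture described above reduces to a simple rule: an agent has standing access to nothing, and each task requires a scoped, time-boxed grant. A minimal sketch of that default-deny, just-in-time model follows; the agent identity, scope names, and TTL are hypothetical, and a production system would vault credentials and log every check:

```python
from datetime import datetime, timedelta

# Standing access is empty by design; every grant is task-scoped and expires.
GRANTS: dict[str, list[dict]] = {}

def grant_just_in_time(agent_id: str, scope: str, now: datetime,
                       ttl_minutes: int = 15) -> None:
    """Record a time-boxed grant for one agent and one scope."""
    GRANTS.setdefault(agent_id, []).append(
        {"scope": scope, "expires": now + timedelta(minutes=ttl_minutes)}
    )

def is_allowed(agent_id: str, scope: str, now: datetime) -> bool:
    """Default deny: allow only if an unexpired grant matches the scope."""
    return any(g["scope"] == scope and now < g["expires"]
               for g in GRANTS.get(agent_id, []))

now = datetime(2026, 3, 1, 9, 0)
grant_just_in_time("invoice-agent", "read:billing", now)

allowed_now = is_allowed("invoice-agent", "read:billing", now)
other_scope = is_allowed("invoice-agent", "read:hr", now)
after_expiry = is_allowed("invoice-agent", "read:billing",
                          now + timedelta(minutes=20))
```

The design choice the underwriter cares about is visible in the first line: the grant table starts empty, so a malfunctioning agent that asks for nothing can reach nothing.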

Question 4: “Is there human oversight for AI-generated outputs used in business decisions, client deliverables, or customer-facing communications?”

What the underwriter is really asking: When your AI produces something wrong, harmful, or legally actionable — will a human catch it before it reaches someone who can sue you?

This question spans cyber, E&O, and professional liability policies. WTW’s March 2026 analysis reports underwriters “generally support a human in the loop for critical AI decisions” and may stipulate it as a binding policy condition. The question is not philosophical. Real claims drive it: Air Canada’s chatbot promised an incorrect discount to a passenger. Arup lost funds when deepfake videos of employees were used in a fraud scheme. Google’s AI Overviews feature falsely identified a Minnesota company as a lawsuit defendant (Coalition/IAPP, 2023-2025).

Underwriter expectations for human-in-the-loop review are converging on specific categories: employment decisions (hiring, promotion, termination tools), customer-facing content, financial calculations, regulated filings, and legal advice.

How to answer it well:

  • Review workflow: documented process specifying who reviews AI outputs, what standard applies, and how the review is recorded
  • Use-case boundaries: defined categories where AI assists versus categories where AI is prohibited
  • Error handling: escalation path and remediation when a human reviewer identifies an AI error
  • Bias testing: for employment-related AI, documented bias testing and disparate impact analysis

What gets you flagged: “Our employees review AI outputs.” Without a documented workflow, this is indistinguishable from “nobody reviews anything.”

Question 5: “Do you have an incident response plan that addresses AI-specific scenarios?”

What the underwriter is really asking: If your AI system leaks data, produces harmful output, or gets compromised tomorrow — do you know what to do in the first 48 hours, or will you be figuring it out during the crisis?

Carrier Management (March 2026) reports carriers are introducing “AI Security Riders” that require documented AI-specific incident response as a prerequisite for coverage. This is the fastest-moving requirement: it appeared on fewer than 10% of applications in 2024 and is now standard on most major carrier questionnaires.

Marsh McLennan’s Cyber Risk Intelligence Center (n=thousands of organizations, August 2025) provides the actuarial basis: organizations with tested incident response plans are 13% less likely to experience material cyber events, and breaches without tested plans cost 55% more on average.

AI-specific scenarios the plan should address:

  • Employee uploads sensitive data to a public AI tool (breach notification required?)
  • AI-generated output causes customer or client harm (E&O trigger)
  • AI agent is compromised and accesses unauthorized data (containment protocol)
  • Deepfake attack impersonates an executive (social engineering response)
  • Vendor AI system suffers a breach affecting your data (third-party response)

How to answer it well: Add an AI-specific annex to the existing incident response plan. Include the five scenarios above, test at least one through a tabletop exercise within the past 12 months, and document the results.

What gets you flagged: “Our general incident response plan covers all technology incidents.” Underwriters are specifically looking for AI scenario documentation. A general plan without AI-specific scenarios reads as “we have not thought about this.”

Key Data Points

  • Global cyber insurance premiums: $16B (2025), projected $33B (2026) (Munich Re; CompareCheapSSL, 2025-2026)
  • Over 40% of cyber claims denied; 44% of denials due to inadequate documentation (CompareCheapSSL Cyber Insurance Statistics, 2025-2026)
  • 82% of denied claims involved organizations without MFA (Coalition, 2024)
  • Machine identities outnumber humans 82:1; 88% of orgs define “privileged user” as human-only (CyberArk, 2026)
  • 92% of CISOs lack visibility into AI identities; 95% cannot contain a compromised AI agent (Saviynt/Cybersecurity Insiders, n=235, 2026)
  • 90% of firms want dedicated AI cyber coverage; two-thirds willing to pay 10%+ premium (Geneva Association, 2025-2026)
  • Chatbots cited in 5% of web privacy claims (n=~200 claims, 2023-2025) (Coalition/IAPP, 2025)
  • Tested IR plans correlate with 13% fewer material cyber events; breaches cost 55% more without them (Marsh McLennan, n=thousands, August 2025)
  • Well-documented accounts: flat to -10% premium renewals; undocumented: 15-20% increases (S&P Global Ratings; Forrester; CRC Cyber REDY Index, 2025-2026)
  • WR Berkley absolute AI exclusion covers any claim “arising out of” AI use across D&O, E&O, fiduciary (WR Berkley form PC 51380, 2025)
  • Verisk standardized AI exclusion (CG 40 47 01 26) now available to every US carrier (Verisk/ISO, January 2026)

What This Means for Your Organization

The renewal conversation has changed. Twelve months ago, a mid-market CISO or CFO walked into the renewal meeting prepared to discuss MFA deployment, backup architecture, and incident response plans. Those controls remain non-negotiable — but they are no longer sufficient. The five questions above are appearing on applications today, and the company that cannot answer them with documentation faces a binary outcome: higher premiums or coverage gaps that become visible only at claims time.

The practical path forward is not a six-month governance project. It is a focused sprint: build the AI tool registry (Question 1), formalize the acceptable use policy (Question 2), document AI access controls (Question 3), create human review workflows (Question 4), and add AI scenarios to the existing IR plan (Question 5). For a 300-person company, this is 15-30 hours of work across IT, legal, and operations — a fraction of the cost of discovering the coverage gap during a claim.

The timing matters. MFA followed the same adoption curve with insurers: application question in 2020, recommended control in 2021, required control by 2023, claims denied without it by 2024. AI governance is on that same trajectory, accelerated by the Verisk standardized exclusion and WR Berkley’s absolute AI exclusion entering the market simultaneously. The company that documents answers before the next renewal locks in favorable terms. The company that waits may find the market has hardened around them. If this raised questions specific to your renewal timeline, I would welcome the conversation — brandon@brandonsneider.com.

Sources

  1. CyberArk, “How AI Agent Privileges Are Redefining Cyber Insurance Expectations,” 2026. https://www.cyberark.com/resources/blog/how-ai-agent-privileges-are-redefining-cyber-insurance-expectations — Independent security vendor research. CyberArk has a commercial interest in identity management, but the 82:1 ratio and insurer questionnaire items are sourced from publicly available carrier documentation.

  2. Saviynt/Cybersecurity Insiders, “CISO AI Risk Report” (n=235 CISOs), 2026. — Industry survey, moderate sample size. Self-reported by security leaders; likely overrepresents large enterprises. The directional findings align with CyberArk’s independent data.

  3. Coalition/IAPP, “How AI Liability Risks Are Challenging the Insurance Landscape,” 2025. https://iapp.org/news/a/how-ai-liability-risks-are-challenging-the-insurance-landscape — Independent analysis by a privacy trade association. IAPP is a neutral research body. Coalition data are drawn from ~200 claims over three years; a small sample, but the only public claims-level AI data available.

  4. CompareCheapSSL, “Cyber Insurance Statistics 2025-2026: Claim Rates, Denials, Premium Trends,” 2025-2026. https://comparecheapssl.com/cyber-insurance-statistics/ — Aggregator of industry data compiled from multiple primary sources (Coalition, Munich Re, S&P Global). Cross-referenced against primary sources for accuracy.

  5. Carrier Management, “How Artificial Intelligence Is Changing Cyber Risk in 2026,” March 2026. https://www.carriermanagement.com/features/2026/03/09/285417.htm — Independent insurance trade publication serving the insurance industry directly; editorially independent of policyholder and vendor interests.

  6. Marsh McLennan, Cyber Risk Intelligence Center, August 2025 (n=thousands of organizations). — Largest insurance broker’s proprietary claims data. Marsh has commercial interest in selling brokerage services but the dataset is the largest publicly referenced source for control-to-outcome correlation.

  7. Geneva Association, AI and Insurance Survey, 2025-2026. — Independent insurance industry think tank. The Geneva Association represents the global insurance industry but operates as a research body, not a commercial entity.

  8. Insurance Business Magazine, “AI Exclusions Are Creeping into Insurance,” 2025-2026; “Cyber Insurance Enters the AI Risk Era,” March 2026. https://www.insurancebusinessmag.com/us/news/cyber/cyber-insurance-enters-the-ai-risk-era-as-limits-wording-and-underwriting-models-shift-565329.aspx — Independent insurance trade media reporting on carrier filings and policy language changes.

  9. WTW (Willis Towers Watson), “Cyber Risk: A Look Ahead to 2026,” February 2026. https://www.wtwco.com/en-us/insights/2026/02/cyber-risk-a-look-ahead-to-2026 — Major insurance broker analysis. WTW has a commercial interest in brokerage, but the market data and underwriting trend analysis reflect its position as the fourth-largest global broker.

  10. S&P Global Ratings; Forrester Research; CRC Cyber REDY Index, 2025-2026. — Mix of independent analyst and market data. Premium projections from S&P (independent credit rating agency) and Forrester (independent analyst firm). CRC index reflects actual market pricing.

  11. AI Certs News, “AI Cyber Insurance Riders Reshape Underwriting and Security,” 2026. https://www.aicerts.ai/news/ai-cyber-insurance-riders-reshape-underwriting-and-security/ — Industry news aggregator. Data cross-referenced with Swiss Re and Geneva Association primary sources.

  12. Petronella Cybersecurity, “Cybersecurity Insurance: What Underwriters Check in 2026,” 2026. https://petronellatech.com/blog/cybersecurity-insurance-what-underwriters-check-in-2026/ — Managed security provider with a commercial interest in selling security services. Technical control requirements verified against Marsh McLennan and Coalition primary data.


Brandon Sneider | brandon@brandonsneider.com | March 2026