The AI Insurance Application Playbook: 20 Questions Your Underwriter Will Ask and What Answers Get the Best Rates
Brandon Sneider | March 2026
Executive Summary
- The insurance application is no longer a formality. Self-attestation is dead. Carriers now demand technical validation — screenshots, policies, logs, proof of tested controls, and documented AI governance. A CFO who last renewed with a two-page questionnaire will face a fundamentally different process at the next cycle.
- AI-specific questions are appearing across all four policy types. Cyber, D&O, E&O, and professional liability underwriters are now asking about AI tool inventories, governance frameworks, human oversight protocols, and board-level AI reporting. Companies that cannot answer these questions face exclusions, sublimits, or declination.
- Governance documentation is the new currency of insurability. Companies with documented AI governance programs qualify for affirmative coverage and premium reductions estimated at 10-20% relative to ungoverned peers (industry analyst projections, 2025-2026). Companies without governance face the Verisk ISO CG 40 47 01 26 standardized exclusion, now available to every carrier in the country.
- Three carriers are writing affirmative AI coverage. Coalition’s AI endorsement covers AI-related security events including deepfake fraud. Embroker’s AI Coverage Endorsement provides full-limit Technology E&O coverage for AI-assisted services, including $150,000 for algorithm removal expenses. Armilla/Lloyd’s offers standalone AI liability up to $25 million. The governance documentation that qualifies for one qualifies for all.
- The evidence is quantified. Marsh McLennan’s Cyber Risk Intelligence Center (n=thousands of organizations, August 2025) finds phishing-resistant MFA correlates with 9% lower breach likelihood; each 25% increase in EDR deployment correlates with 10% lower breach likelihood; tested incident response plans reduce material cyber events by 13%. These are the numbers your underwriter is using.
The 20 Questions: What Underwriters Are Asking About AI in 2026
The questions below are compiled from carrier application updates, broker renewal guidance (Founder Shield, WTW, Amwins, Marsh), and insurer public commentary through March 2026. No single carrier asks all 20. But a company that can answer all 20 has assembled the documentation package that satisfies every underwriter across every policy line.
Category 1: AI Inventory and Disclosure
These questions establish the scope of AI exposure. Underwriters cannot price what they cannot see.
| # | Question | What the Underwriter Wants | What Gets the Best Rate |
|---|---|---|---|
| 1 | What AI tools does your organization use, by department, and for what purpose? | Complete inventory. Shadow AI is the blind spot that kills coverage. | A maintained registry with tool name, vendor, department, use case, data inputs, and approval date. Updated quarterly. |
| 2 | Which AI tools process customer, employee, or client data? | Data flow mapping. This determines whether a claim hits cyber, E&O, or professional liability. | Classification matrix showing which tools touch PII, PHI, financial data, or client confidential information — with data flow diagrams. |
| 3 | Do you use AI in client-facing or revenue-generating activities? | Determines professional liability and E&O exposure scope. | Honest disclosure with documented human review workflows for AI-assisted deliverables. Dishonest answers void coverage at claims time. |
| 4 | Do you develop proprietary AI models, or do you use third-party SaaS AI tools? | Risk profile differs dramatically. Proprietary models carry IP, bias, and training data risk. SaaS tools carry vendor dependency and data processing risk. | Clear categorization with vendor risk assessments for SaaS tools and model documentation for proprietary systems. |
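The inventory that questions 1-3 demand can be kept as a simple structured registry, from which the question-2 classification matrix falls out mechanically. A minimal sketch (field names, tool names, and dates are illustrative assumptions, not a carrier requirement):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIToolRecord:
    """One row in the AI tool registry underwriters ask for (Q1-Q2)."""
    tool: str
    vendor: str
    department: str
    use_case: str
    data_inputs: list[str]        # e.g. ["PII", "client_confidential"]
    approved_on: date
    client_facing: bool = False   # flags E&O / professional liability exposure (Q3)

# Illustrative entries only; a real registry would be maintained quarterly.
registry = [
    AIToolRecord("ChatGPT Enterprise", "OpenAI", "Marketing",
                 "draft copy", ["public"], date(2025, 6, 1)),
    AIToolRecord("Harvey", "Harvey AI", "Legal",
                 "contract review", ["client_confidential"], date(2025, 9, 15),
                 client_facing=True),
]

# Q2 classification: which tools touch sensitive data categories?
SENSITIVE = {"PII", "PHI", "financial", "client_confidential"}
sensitive_tools = [r.tool for r in registry if set(r.data_inputs) & SENSITIVE]
print(sensitive_tools)
```

The point of the structure is that one maintained artifact answers questions 1 through 4 at once: the same rows feed the inventory, the data flow classification, and the client-facing disclosure.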
Category 2: Governance and Policy
These questions determine whether AI use is managed or ad hoc. The distinction drives the coverage/exclusion decision.
| # | Question | What the Underwriter Wants | What Gets the Best Rate |
|---|---|---|---|
| 5 | Does your organization have a written AI acceptable use policy? | Dated, signed, distributed to employees. A policy that exists on a shared drive but was never rolled out does not count. | A policy with effective date, executive signature, acknowledgment records showing employee receipt, and annual review schedule. |
| 6 | Does your organization have an AI governance framework or committee? | Evidence of structured oversight — not a one-off meeting, but an operating rhythm. | Named accountable person (or committee charter), meeting cadence, documented decisions, and escalation procedures. |
| 7 | Does the board receive reporting on AI risk? | This is the D&O question. WTW’s March 2026 analysis finds two-thirds of directors report limited AI knowledge, and fewer than 1 in 4 companies have board-approved governance policies. | Board meeting minutes showing AI governance discussion — even one documented conversation materially reduces D&O exposure. Annual board AI risk briefing. |
| 8 | How do you prevent and detect unauthorized AI use (shadow AI)? | ISACA (2025) identifies shadow AI as a primary underwriting blind spot. Insurers warn that “no policy can cover what it cannot see,” a particular concern for AI services embedded in third-party vendor products. | SSO enforcement for approved tools, expense report monitoring, employee survey or attestation process, and documented discovery methodology. |
Category 3: Security Controls and Data Protection
These are the technical controls that determine cyber coverage eligibility. AI amplifies both the attack surface and the documentation burden.
| # | Question | What the Underwriter Wants | What Gets the Best Rate |
|---|---|---|---|
| 9 | What data classification and DLP controls exist for AI inputs? | Prevention of sensitive data entering AI tools. This is the data exfiltration vector specific to AI deployment. | Documented data classification scheme, DLP rules for AI tool inputs, and monitoring evidence. |
| 10 | Do you have AI-specific incident response protocols? | Carrier Management (March 2026) reports carriers introducing “AI Security Riders” requiring documented AI-specific safeguards as prerequisites for underwriting. | AI-specific incident scenarios in the IR plan, tested through tabletop exercises. The Marsh McLennan data shows tested IR plans correlate with 13% fewer material cyber events. |
| 11 | What vendor risk assessment process exists for AI providers? | Third-party AI risk is where the ISACA “black box” concern lives. Underwriters describe AI systems as producing outputs “neither deterministic nor consistent.” | Vendor assessment checklist covering data usage rights, sub-processor disclosure, model training restrictions, SLA terms, and audit rights. Completed for each AI vendor. |
| 12 | How do you manage training data quality and provenance for proprietary AI? | Munich Re specifically identifies training data sources and permissions as a key underwriting consideration. | Documented data sourcing, licensing, bias testing, and quality assurance procedures. |
Category 4: Human Oversight and Quality Control
These questions address the E&O and professional liability exposure created when AI touches work product.
| # | Question | What the Underwriter Wants | What Gets the Best Rate |
|---|---|---|---|
| 13 | Is there a human-in-the-loop review process for AI-generated outputs used in business decisions or client deliverables? | WTW (2026) notes underwriters “generally support a human in the loop for critical AI decisions” and may stipulate it as a binding condition. | Documented review workflow specifying who reviews, what standard applies, what gets escalated, and how the review is recorded. |
| 14 | How do you validate AI outputs before they reach customers, clients, or regulators? | The “natural persons” problem: E&O policies may limit covered services to those provided by humans. If AI contributed and the output was not reviewed, coverage may be denied. | Quality assurance protocol with documented validation steps, error rates, and remediation procedures. |
| 15 | Do you use AI in making decisions that affect customers, employees, or third parties? How do you prevent bias? | WTW reports underwriters specifically ask: “Do you use AI in making decisions? How do you prevent bias? What if the AI fails? Is there human override?” | Bias testing documentation, defined use-case boundaries for AI decision support, human override procedures, and complaint/appeal mechanisms. |
Category 5: Compliance and Disclosure
These questions connect AI use to regulatory and contractual obligations.
| # | Question | What the Underwriter Wants | What Gets the Best Rate |
|---|---|---|---|
| 16 | What regulatory frameworks does your AI governance program align with? | Wiley (2026) predicts carriers will increasingly require “alignment with recognized AI risk management frameworks as a baseline for ‘reasonable security.’” | NIST AI RMF mapping document. ISO 42001 alignment evidence. State-specific compliance documentation (Colorado AI Act, Texas RAIGA, California ADMT regulations). |
| 17 | Do your client contracts address AI use in service delivery? | The seller’s contract gap: companies using AI in deliverables face liability exposure that existing MSAs may not allocate. | AI addendum in standard contracts, disclosure language in engagement letters, and liability allocation clauses. |
| 18 | What AI-related disclosures appear in your securities filings, marketing materials, or investor communications? | The D&O AI-washing question. AI-related securities class actions doubled from 7 (2023) to 14 (2024), with 53 through H1 2025 (Stanford Law/DLA Piper). | Accurate, conservative AI disclosures reviewed by counsel. Internal review process preventing overstatement of AI capabilities. |
Category 6: Training and Organizational Readiness
| # | Question | What the Underwriter Wants | What Gets the Best Rate |
|---|---|---|---|
| 19 | Have employees received AI-specific training, including risks of AI misuse and social engineering? | Founder Shield (2026) identifies AI misuse awareness and social engineering training as emerging underwriting criteria. | Dated training records with attendance documentation, training content covering approved tools, prohibited uses, and deepfake/social engineering awareness. |
| 20 | What is your process for evaluating and approving new AI tools before deployment? | Prevents the tool sprawl that creates ungoverned exposure. Delinea (2026) reports AI governance frameworks must be “thoroughly documented” for insurer satisfaction. | Documented approval workflow: request, risk assessment, security review, data classification, legal review, approval, and registry update. |
The Baseline Controls That Are No Longer Optional
Before the AI-specific questions, the application process requires documented evidence of foundational security controls. Without these, the AI governance conversation never starts.
| Control | Status in 2026 | Evidence Required |
|---|---|---|
| Phishing-resistant MFA (FIDO2 or smart cards) | Non-negotiable. Coalition’s 2024 data: 82% of claims involved organizations without MFA. Marsh McLennan: phishing-resistant MFA correlates with 9% lower breach likelihood. | Technical validation, not self-attestation. Deployment logs showing coverage of privileged, executive, and remote access. |
| Endpoint Detection and Response (EDR/XDR) | Non-negotiable. Legacy antivirus no longer qualifies. Marsh McLennan: each 25% increase in EDR deployment correlates with 10% lower breach likelihood. | Deployment evidence across workstations, servers, and mobile devices. Active response capability documentation. |
| Air-gapped, immutable backups | Required. Must be segregated from operational networks. | Backup architecture documentation and tested recovery evidence. |
| Tested incident response plan | Marsh McLennan (n=thousands, August 2025): organizations with tested IR plans are 13% less likely to experience material cyber events. Breaches without tested plans cost 55% more on average. | IR plan document plus tabletop exercise logs with dates, participants, and lessons learned. |
| Third-party risk management program | Continuous monitoring expected, not annual reviews. | Vendor assessment records, contractual security requirements, and ongoing monitoring evidence. |
The Premium Math: What Governance Is Worth at Renewal
The insurance market has created a measurable financial incentive for AI governance. The numbers are not theoretical.
The premium landscape in 2026:
- Clean accounts with strong controls: primary-layer premiums flat to -10% (CRC Cyber REDY Index, Q3 2025; Founder Shield, March 2026)
- Accounts without documented controls: hard market persists, with 15-20% premium increases projected (S&P Global Ratings; Forrester Research)
- AI governance programs: analyst projections estimate 10-20% premium reduction relative to ungoverned competitors for affirmative coverage qualification
The coverage landscape:
- Companies with governance: qualify for affirmative AI coverage from Coalition, Embroker, or Armilla/Lloyd’s
- Companies with legacy policies: “silent” coverage that may or may not respond to AI claims — and disappears at the next renewal
- Companies without governance: face the Verisk ISO exclusion (CG 40 47 01 26) or WR Berkley’s absolute exclusion (PC 51380) — no coverage for any claim “arising out of” AI use
The governance investment vs. premium impact: The $15,000-$45,000 governance program documented in prior research produces the documentation that satisfies every underwriter question above. For a mid-market company paying $50,000-$200,000 annually across cyber, D&O, E&O, and professional liability, a 10-20% premium differential on even one policy type recoups the governance investment within one renewal cycle.
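The payback arithmetic can be checked directly. A sketch using mid-range figures from this article (all inputs are the article's estimates, not carrier quotes; actual quotes vary by line and account quality):

```python
# Mid-range assumptions drawn from the figures above (estimates, not quotes).
governance_cost = 30_000    # one-time program build: middle of the $15k-$45k range
annual_premiums = 125_000   # combined cyber + D&O + E&O + PL: middle of $50k-$200k
differential = 0.15         # middle of the 10-20% governed-vs-ungoverned gap

annual_savings = annual_premiums * differential
payback_cycles = governance_cost / annual_savings   # renewal cycles to recoup

print(f"Annual premium differential: ${annual_savings:,.0f}")
print(f"Payback: {payback_cycles:.1f} renewal cycles")
```

Swapping in the high end of each range (larger premium base, 20% differential, $15,000 program) shortens payback to well under one cycle; at the low end it stretches to several. The sensitivity is worth running against your own premium schedule before budgeting the program.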
The Application-Day Playbook: A 6-Week Pre-Renewal Sprint
Week 1-2: Assemble the Evidence Package
The AI governance dossier — one package, four underwriters:
- AI tool inventory — every tool, department, purpose, data inputs, approval date
- AI acceptable use policy — signed, dated, distributed, acknowledged
- Data classification matrix for AI inputs — PII, PHI, financial, confidential
- Employee AI training records — dates, attendance, content summary
- Board meeting minutes showing AI governance discussion
- Vendor AI risk assessments — completed for each AI provider
- Human review workflow documentation — for any AI-assisted client deliverables
- AI-specific incident response protocol — with tabletop exercise evidence
- Shadow AI discovery evidence — SSO logs, expense audits, employee attestations
- Regulatory compliance mapping — NIST AI RMF, applicable state laws
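The shadow-AI discovery item above can be partly automated by diffing the apps visible in SSO/OAuth grant logs against the approved registry. A minimal sketch (the app names, user counts, and log shape are hypothetical; real SSO exports differ by identity provider):

```python
# Hypothetical inputs: apps seen in SSO/OAuth grant logs vs. the approved registry.
approved = {"ChatGPT Enterprise", "GitHub Copilot", "Harvey"}

sso_observed = {                 # app -> distinct users who granted access
    "ChatGPT Enterprise": 412,
    "Otter.ai": 37,
    "Midjourney": 9,
    "GitHub Copilot": 88,
}

# Tools in use but never approved: the shadow AI the underwriter asks about (Q8).
shadow = {app: users for app, users in sso_observed.items() if app not in approved}

for app, users in sorted(shadow.items(), key=lambda kv: -kv[1]):
    print(f"UNAPPROVED: {app} ({users} users) -> route to approval workflow (Q20)")
```

The output of a run like this, dated and archived, is exactly the "documented discovery methodology" the table in Category 2 describes: evidence that shadow AI is being looked for, not merely attested away.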
Week 3-4: Prepare the Broker
The broker is the translator between the governance program and the underwriter’s risk model. Before the renewal meeting, provide the broker with:
- The complete evidence package above
- A one-page AI governance summary (what tools, what oversight, what controls)
- Loss history with AI-specific context (or confirmation of no AI-related incidents)
- A list of AI tools added since last renewal
- Documentation of any AI-related regulatory developments affecting the business
Instruct the broker to market the risk to carriers writing affirmative AI coverage. The broker should know about Coalition, Embroker, and Armilla/Lloyd’s specifically. Silent coverage is a declining asset.
Week 5: The Renewal Meeting — What to Ask
Walk into the renewal meeting with these questions:
- Does the current policy contain any AI-specific exclusions or endorsements — including those added at last renewal in the endorsement schedule?
- Is AI-related liability affirmatively covered, silently covered, or excluded in each policy line?
- Does the “professional services” definition encompass AI-assisted work product?
- What specific governance documentation would move the company from silent to affirmative coverage?
- What is the premium differential between exclusion and affirmative coverage — across cyber, D&O, E&O, and professional liability?
- Does the policy contain “arising out of” language that could be triggered by any employee’s use of any AI tool?
- What AI Security Rider conditions, if any, are being imposed?
Week 6: Negotiate and Bind
Negotiation posture for mid-market companies:
- Do not accept blanket AI exclusions without challenge. Continuum Insurance (2026) recommends negotiating narrower exclusions tied to specific AI use cases rather than accepting sweeping “arising out of” language.
- Use governance documentation as leverage. A company that can produce the evidence package above is a demonstrably better risk than one that cannot.
- Request confirmation that AI-related regulatory defense costs are covered under the D&O policy.
- Confirm that the E&O definition of “professional services” explicitly includes AI-assisted deliverables.
- Review endorsement schedules line by line. Insurers frequently insert new exclusions at renewal inside endorsements rather than in the base policy. A CFO who reviews only the declarations page and premium notice may miss the endorsement that eliminates coverage.
- Compare quotes between carriers offering affirmative AI coverage and those offering only exclusions. The premium differential reveals the market price of governance.
Key Data Points
- 82% of cyber claims involved organizations without MFA (Coalition 2024 Cyber Threat Index)
- 9% lower breach likelihood with phishing-resistant MFA vs. standard MFA (Marsh McLennan CRIC, August 2025)
- 10% lower breach likelihood per each 25% increase in EDR deployment (Marsh McLennan CRIC, August 2025)
- 13% fewer material cyber events for organizations with tested incident response plans (Marsh McLennan CRIC, n=thousands, August 2025)
- 55% higher breach costs for organizations without tested IR plans (industry benchmark, 2025)
- 15-20% cyber premium increase projected for 2026 for accounts without strong controls (S&P Global Ratings; Forrester Research)
- Flat to -10% premiums for clean accounts with documented security controls (CRC Cyber REDY Index Q3 2025)
- 53 AI-related securities class actions through H1 2025; median settlement $11.5M (Stanford Law School/DLA Piper)
- Two-thirds of board directors report limited or no AI knowledge; fewer than 1 in 4 companies have board-approved AI governance (WTW, March 2026)
- ISO CG 40 47 01 26 — standardized AI exclusion available to every carrier since January 2026 (Verisk)
- $150,000 algorithm removal expense coverage included in Embroker’s AI endorsement (August 2025)
- Up to $25 million AI liability coverage from Armilla/Lloyd’s; first-year SME costs $15,000-$35,000 (January 2026)
What This Means for Your Organization
The insurance application has become the audit. Every question above is a question a carrier will ask at the next renewal — and the company’s answers determine whether it pays 10-20% less for broader coverage or 15-20% more for narrower coverage with AI exclusions. The gap between those two outcomes is not abstract. For a mid-market company carrying $100,000 in combined annual premiums across four policy lines, the governance program that costs $15,000-$45,000 to build produces a measurable return in the first renewal cycle.
The practical action is a 6-week pre-renewal sprint. Assign the CIO, GC, or whoever inherited AI governance to assemble the evidence package: tool inventory, acceptable use policy, training records, vendor assessments, review workflows, board minutes, and incident response protocol. Hand this package to the broker. Walk into the renewal meeting with the seven questions above. Negotiate from a position of documented risk management rather than hoping the underwriter does not ask about AI.
The companies in the 5% — the ones capturing value from AI while managing risk — treat governance and insurability as the same investment. The $15,000-$45,000 governance program is not a compliance cost. It is the price of the application answers that get the best rates. If the timing of your next renewal raises questions about where to start, I’d welcome the conversation — brandon@brandonsneider.com.
Sources
- Marsh McLennan Cyber Risk Intelligence Center. “Cybersecurity Signals: Connecting Controls and Incident Outcomes.” August 2025. IR plans reduce material events 13%; phishing-resistant MFA 9% lower breach likelihood; EDR 10% per 25% deployment. Based on thousands of CSA questionnaires and claims data. Credibility: Highest — major broker, proprietary claims data, largest dataset in the industry. https://www.marshmclennan.com/content/mmc-web/mmc-v2/en/news-events/2025/august/marsh-mclennan-cyber-risk-intelligence-center-report.html
- WTW. “Sarbanes-Oxley and the AI Governance Gap: D&O Insurance Considerations.” March 2026. Two-thirds of directors lack AI knowledge; underwriters ask “Do you use AI in making decisions? How do you prevent bias?” Credibility: High — major insurance broker, independent analysis. https://www.wtwco.com/en-us/insights/2026/03/sarbanes-oxley-and-the-ai-governance-gap-d-and-o-insurance-considerations
- WTW. “Cyber Risk: A Look Ahead to 2026.” February 2026. AI exclusions not yet deployed regularly on cyber policies; organizations should pursue broad AI coverage now. Credibility: High — major broker with market access. https://www.wtwco.com/en-us/insights/2026/02/cyber-risk-a-look-ahead-to-2026
- Founder Shield. “Looking Ahead: Cyber Insurance in 2026.” March 2026. MFA, EDR, immutable backups as price of admission; AI governance factors. Credibility: Moderate-high — technology insurance broker, sector expertise. https://foundershield.com/blog/cyber-insurance-in-2026/
- Founder Shield. “Technology Insurance Pricing Trends 2026.” March 2026. Primary layer flat to -10% for clean accounts; blended E&O/cyber policies becoming standard. Credibility: Moderate-high — broker with pricing data access. https://foundershield.com/blog/tech-insurance-pricing-trends-2026/
- Coalition. “Coalition Adds Affirmative AI Endorsement to Cyber Policies.” 2025. AI-related security events covered, deepfake fraud protection. Credibility: Moderate — carrier source, but confirmed product offering. https://www.coalitioninc.com/announcements/coalition-adds-new-affirmative-ai-endorsement-to-cyber-policies
- Embroker. “Embroker’s AI Coverage: Built for the Way Tech Companies Actually Use AI.” August 2025. Full-limit AI coverage, $150K algorithm removal expense, regulatory investigation defense. Credibility: Moderate — carrier source, specific coverage terms confirmed. https://www.embroker.com/blog/embroker-launches-ai-coverage/
- CRC Cyber REDY Index. Q3 2025. Premium trends by account quality: flat to -10% for clean accounts. Credibility: Moderate-high — wholesale broker market data.
- S&P Global Ratings. “Cyber Insurance Market Outlook 2026.” $23B projected premiums, 15-20% increase projections. Credibility: Highest — independent ratings agency. https://www.spglobal.com/ratings/en/regulatory/article/cyber-insurance-market-outlook-2026-resilient-earnings-tougher-competition-pockets-of-growth-s101658506
- Carrier Management. “How Artificial Intelligence Is Changing Cyber Risk in 2026.” March 2026. AI Security Riders, adversarial red-teaming requirements, tech stack disclosure. Credibility: High — independent trade publication. https://www.carriermanagement.com/features/2026/03/09/285417.htm
- ISACA. “Cyber Insurance in Crisis with AI Blind Spots.” 2025. Shadow AI as underwriting blind spot; AI risk registers as better-risk indicator. Credibility: High — independent professional association. https://www.isaca.org/resources/news-and-trends/isaca-now-blog/2025/cyber-insurance-in-crisis-with-ai-blind-spots
- Wiley. “7 Predictions for Cyber Risk and Insurance in 2026.” 2026. AI Security Riders, framework alignment requirements. Credibility: High — insurance law firm. https://www.wiley.law/article-7-Predictions-For-Cyber-Risk-And-Insurance-In-2026
- Delinea. “Cyber Insurance Coverage Requirements for 2026.” AI governance framework documentation requirements, identity-centric controls. Credibility: Moderate-high — cybersecurity vendor, but requirements confirmed across multiple sources. https://delinea.com/blog/cyber-insurance-coverage-requirements-for-2026
- Munich Re Specialty. “The New Frontier of Underwriting AI Risk.” Training data, governance board, proprietary model assessment. Credibility: Highest — major reinsurer with primary underwriting data. https://www.munichre.com/en/insights/cyber/the-new-frontier-of-underwriting-ai-risk.html
- Continuum Insurance. “The Hidden AI Exclusions in PI and Cyber Insurance.” 2026. Endorsement schedule warnings, negotiation guidance for narrower exclusions. Credibility: Moderate-high — insurance broker, practical focus. https://www.continuuminsure.com/articles/the-hidden-ai-exclusions-in-pi-and-cyber-insurance/
- Stanford Law School Securities Class Action Clearinghouse/DLA Piper. 53 AI-related SCAs through H1 2025. Credibility: Highest — academic institution and major law firm.
- Verisk/ISO. CG 40 47 01 26: Exclusion — Generative Artificial Intelligence. Effective January 2026. Credibility: Highest — standardized insurance form, primary source.
- Amwins. “State of the Market — 2026 Outlook.” AI exclusion trends, specialty class emergence. Credibility: High — major wholesale broker with market-wide visibility. https://www.amwins.com/resources-and-insights/market-insights/article/state-of-the-market-2026-outlook
- Insurance Thought Leadership. “Cyber Insurance Exclusions to Expect in 2026.” AI errors, omissions, and regulatory violations now excluded by name. Credibility: Moderate-high — independent trade publication. https://www.insurancethoughtleadership.com/cyber/cyber-insurance-exclusions-expect-2026
- McLane Middleton. “Insurance for Cyber, Privacy and AI: Are You Sure You Have It?” Three-coverage-type analysis, renewal preparation guidance. Credibility: Moderate-high — law firm insurance practice advisory. https://www.mclane.com/insights/insurance-for-cyber-privacy-and-ai-are-you-sure-you-have-it/
Brandon Sneider | brandon@brandonsneider.com | March 2026