AI and Professional Liability: The Malpractice Exposure Nobody Priced

Brandon Sneider | March 2026


Executive Summary

  • 1,093 documented cases of AI-generated hallucinations in legal filings globally, with 769 in the United States alone — and courts have issued 66 opinions sanctioning or reprimanding filers for the misuse (Charlotin AI Hallucination Database, March 2026). The legal profession’s errors are visible because courts publish sanctions. Every other profession using AI in client-facing work product has the same exposure with less visibility.
  • Professional liability insurers are bifurcating the market: Berkley’s “absolute” AI exclusion eliminates coverage for any claim “arising out of” AI use across D&O, E&O, and fiduciary liability products, while Hamilton, Philadelphia Indemnity, and Verisk’s standardized ISO forms (effective January 2026) are removing coverage that most policyholders assumed they had.
  • Only 17% of workers say AI output is reliable without human oversight, and 42% report editing or fixing AI output before use (Connext Global, n=1,000, January 2026). For professional services firms, every unreviewed AI output that reaches a client is an unpriced liability event.
  • The companies capturing value treat AI review workflows as professional infrastructure — the same rigor applied to partner review of associate work product. The 95% treating AI output as final draft rather than first draft are accumulating liability they may not be insured against.

The Liability No One Priced Into the Premium

Professional liability insurance was designed for a world where errors came from human judgment. A lawyer missed a filing deadline. An accountant transposed digits. An engineer miscalculated a load. Insurers understood these risks, had actuarial data spanning decades, and priced premiums accordingly.

AI breaks this model in three ways that matter to a 200-500 person professional services firm.

1. The Volume Problem

A single professional using AI can generate more work product in a day than the same person could produce in a week. Every document, memo, analysis, and recommendation that AI touches is a potential liability event. The hallucination rate for leading models — typically 3-10% depending on task complexity and domain — means errors are not occasional anomalies. They are statistical certainties at production volume.
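The jump from “occasional anomaly” to “statistical certainty” is just compounding probability. A back-of-the-envelope sketch (the 3% per-output rate and the monthly volume are illustrative assumptions, not figures from the sources cited here): if each AI-assisted output independently carries a small chance of containing a hallucination, the odds that at least one flawed output reaches a client grow quickly with volume.

```python
def p_at_least_one_error(p: float, n: int) -> float:
    """Probability that at least one of n outputs contains an error,
    assuming each output independently has error probability p."""
    return 1 - (1 - p) ** n

# Illustrative: a 3% per-output hallucination rate across 50
# AI-assisted deliverables in a month (both figures assumed).
print(round(p_at_least_one_error(0.03, 50), 3))  # 0.782
```

At these assumed numbers, there is roughly a 78% chance that at least one erroneous output ships in a given month — which is why the control has to be a review workflow, not per-output optimism.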

The legal profession has made this visible. Charlotin’s database documents 1,093 cases globally where AI-generated content appeared in court filings. Of those, 436 involved practicing lawyers (not just pro se litigants), and 900 involved fabricated content — invented case citations, fictitious statutes, and nonexistent legal authorities. In January 2026, New York’s Third Department imposed monetary sanctions on lawyers for citing six nonexistent AI-generated cases. In Johnson v. Dunn (N.D. Ala., July 2025), the court disqualified attorneys entirely and referred them to state bar regulators.

Every profession that produces client-facing work product — accounting, financial advisory, engineering, architecture, healthcare — faces the same underlying risk. The difference is that courtrooms have immediate verification mechanisms. An accounting error that flows from AI-generated analysis into a tax return or audit may not surface until an IRS examination, a shareholder dispute, or a regulatory investigation — months or years later.

2. The “Enabled Incompetence” Problem

Traditional malpractice assumes either negligent failure to exercise professional judgment or deliberate misconduct. AI introduces a third category that legal scholars are calling “enabled incompetence”: the professional believes they conducted legitimate analysis but were misled by tools that present fabricated output with the same confidence as accurate output.

This matters because the standard of care is shifting in both directions simultaneously. The ABA’s Formal Opinion 512 (July 2024) establishes that lawyers must understand AI’s capabilities and limitations and verify all AI-generated output — making failure to verify a competence violation. The Journal of Accountancy’s February 2026 guidance extends the same principle to CPAs: AI output requires independent human review before client delivery, regardless of the system’s claimed reliability.

At the same time, as AI adoption becomes widespread, not using AI may itself become a competence issue. Medical malpractice data shows a 14% increase in AI-related claims between 2022 and 2024, with the emerging standard shifting toward “a reasonable physician would have consulted the AI diagnostic tool.” The professional is caught in a narrowing corridor: liable for using AI without adequate review, and potentially liable for not using AI at all.

3. The Coverage Gap

This is where the financial exposure crystallizes. Professional liability insurers who wrote policies before 2023 did not price premiums for AI-generated advice errors. Those policies may silently cover such losses — creating what the industry calls “silent AI” exposure. Insurers are now closing this gap aggressively.

The exclusion landscape as of March 2026:

  • Berkley Insurance (PC 51380) — absolute AI exclusion: any claim “arising out of” AI use, deployment, or development, across D&O, E&O, and fiduciary liability
  • Hamilton Insurance Group — generative AI exclusion: claims involving use of “generative artificial intelligence,” naming specific platforms (ChatGPT, Gemini, Midjourney, DALL-E)
  • Philadelphia Indemnity — content exclusion: content “created using generative artificial intelligence in performance of services”
  • ISO/Verisk (CG 40 47 01 26) — standardized GL exclusion: bodily injury, property damage, and personal/advertising injury arising out of generative AI; available to all carriers since January 2026

The critical detail: Berkley’s exclusion is not limited to AI the insured chose to use. It covers claims “based upon, arising out of, or attributable to the actual or alleged use, deployment, or development of artificial intelligence” by any person or entity. If a third-party vendor’s AI system malfunctions and causes a client loss, the insured’s E&O policy may not cover the resulting claim.

Many E&O policies also limit covered services to those provided by “natural persons.” If AI contributed to the work product, the insurer may argue the service was not a “professional service” within the policy definition — even if a licensed professional supervised the output.

Where the Exposure Is Highest

Not every professional services function carries equal AI liability risk. The exposure concentrates where three factors converge: AI touches client-facing output, the output involves professional judgment, and errors carry financial or legal consequences.

Legal Services

The most documented exposure. ABA Formal Opinion 512 creates a clear duty to verify AI output. Forty-eight states and the District of Columbia have issued or are developing AI-specific guidance for lawyers. Courts are applying sanctions that range from monetary penalties to disqualification and bar referral. The malpractice insurance market is responding: some lawyers report that coverage for AI-related claims is not explicitly included in their policies, and use of AI tools “may not satisfy the definition of professional service” under standard malpractice forms.

Accounting and Financial Advisory

The AICPA’s Confidential Client Information Rule and Statements on Standards for Tax Services require CPAs to “make reasonable efforts to protect taxpayer information shared with others.” AI tools that process client financial data may violate these standards if vendor terms permit data use for model training. The Journal of Accountancy identifies hallucinations, deepfake fraud (scammers impersonating CFOs to authorize transfers), and the “black box problem” as the top CPA-specific risks. A financial advisor cannot defend a negligent recommendation by arguing “the algorithm suggested it” — the professional judgment standard applies regardless of the tool that informed the judgment.

Healthcare

Over 250 healthcare AI bills were introduced across 34+ states by mid-2025. Colorado’s AI Act (enforcement June 30, 2026) requires disclosure whenever AI is used in high-risk decisions. In healthcare, the liability sits in a “gray zone”: too technical for traditional malpractice, too clinical for tech E&O, and outside the bounds of administrative E&O. One documented case involved an AI system that incorrectly inserted a diagnosis into a patient’s medical record; the clinician, pressed for time, missed the error, triggering cascading administrative, billing, and clinical complications plus legal exposure. Risk managers are advised to “assume that AI-related incidents may not be covered under existing E&O terms, unless affirmatively endorsed” (EPIC Insurance Brokers, 2025).

Engineering and Architecture

Less documented but structurally identical. AI-assisted design calculations, structural analysis, and specification drafting carry the same professional judgment standard. An engineer who relies on AI-generated structural calculations without independent verification faces the same liability framework as a lawyer who cites AI-generated case law without checking the citations.

The Insurance Market Is Not Waiting

Between January 2025 and January 2026, the professional liability insurance market experienced what Harvard Law School Forum on Corporate Governance calls “a structural break.” The assumed gradual tightening became a sudden bifurcation: some organizations renewed with affirmative AI coverage, while others encountered sweeping exclusions.

The emerging market has three tiers:

Tier 1: Affirmative AI coverage. A small but growing number of specialized products explicitly cover AI-related professional liability. Armilla Insurance Services (launched April 2025, underwritten at Lloyd’s by Chaucer Group) covers AI hallucinations, degrading model performance, and algorithmic failures. Munich Re’s aiSure program (since 2018) provides performance guarantees for AI technologies. Google partnered with Beazley, Chubb, and Munich Re to offer tailored AI coverage for Google Cloud customers. These products require demonstrated governance — documented review workflows, approved tool lists, and employee training programs.

Tier 2: Silent coverage with tightening conditions. Legacy policies that do not explicitly exclude AI may still cover AI-related claims under existing E&O language. This is the “silent AI” coverage most mid-market firms currently rely on. It is disappearing at every renewal cycle. Insurers are adding AI-specific questions to renewal applications, and answers that reveal unmanaged AI use trigger exclusion endorsements.

Tier 3: Absolute exclusion. Berkley’s model is spreading. Organizations that cannot demonstrate AI governance programs face blanket exclusion from coverage for any AI-related claim — including claims where AI played a minor or incidental role. The “arising out of” language is deliberately broad.

The governance documentation that separates Tier 1 from Tier 3 is the same governance infrastructure this research corpus has documented: approved tool lists, data classification, human review workflows, incident response protocols, and employee training. The CFO who invested $15,000-$45,000 in AI governance is now the CFO who can obtain affirmative AI coverage. The CFO who did not may discover at the worst possible moment that the firm’s professional liability policy excludes the claim that matters most.

The Review Workflow That Separates Liability from Coverage

The Connext Global 2026 AI Oversight Survey (n=1,000 U.S. workers, January 2026) quantifies what professionals already know: AI output requires human review. Only 17% say AI is reliable without oversight. Thirty-five percent say reliability requires “AI plus light review,” and another 35% require “AI plus dedicated human oversight.” Forty-two percent of respondents report editing or fixing AI output, and 42% say AI “sometimes left out important details or context.”

For a professional services firm, this translates into a specific operational requirement: every AI-assisted work product that reaches a client must pass through a human review workflow calibrated to the liability risk of the output.

The three-tier review framework:

  • Internal analysis (not client-facing) — light review: spot-check for factual accuracy. Examples: market research summary, internal briefing.
  • Client-facing informational (no professional judgment) — standard review: full accuracy verification. Examples: client newsletter, industry update, presentation.
  • Client-facing professional work product — full professional review: same standard as reviewing associate work. Examples: tax return, audit opinion, legal brief, engineering specification, medical recommendation.
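The tiering above reduces to a simple routing rule. This sketch is illustrative shorthand for the framework — the function and category names are my own, not part of any cited standard:

```python
from enum import Enum

class ReviewStandard(Enum):
    LIGHT = "spot-check for factual accuracy"
    STANDARD = "full accuracy verification"
    FULL_PROFESSIONAL = "same rigor as reviewing associate work product"

def required_review(client_facing: bool, professional_judgment: bool) -> ReviewStandard:
    """Map an AI-assisted output to its minimum review tier.

    Mirrors the three-tier framework: anything client-facing gets at
    least standard review; anything carrying professional judgment
    gets full professional review, no exceptions.
    """
    if not client_facing:
        return ReviewStandard.LIGHT
    if not professional_judgment:
        return ReviewStandard.STANDARD
    return ReviewStandard.FULL_PROFESSIONAL

# A tax return is client-facing and carries professional judgment:
print(required_review(client_facing=True, professional_judgment=True).name)
# FULL_PROFESSIONAL
```

The point of encoding the rule is that the escalation path never depends on an individual’s judgment under deadline pressure: the category of the output, not the mood of the reviewer, sets the floor.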

The third tier is non-negotiable. A partner reviewing an AI-generated draft memo must apply the same professional judgment standard they would apply to an associate’s work — checking citations, verifying calculations, evaluating reasoning, and exercising independent professional judgment on conclusions. “The AI wrote it” is not a defense to a malpractice claim. “I reviewed it with the same rigor I apply to all work product” is.

The ABA’s standard is instructive across professions: an “appropriate degree of independent verification or review” is required, and what constitutes “appropriate” depends on the specific task, the AI tool’s known limitations, and the consequences of error. For client-facing professional work product, the appropriate degree is full professional review.

Key Data Points

  • 1,093 documented cases of AI hallucinations in court filings globally; 769 in the U.S.; 436 involving practicing lawyers; 66 court opinions sanctioning or reprimanding misuse (Charlotin Database, March 2026)
  • 14% increase in medical malpractice claims involving AI tools between 2022 and 2024
  • 72% of S&P 500 companies discussed AI risks in annual securities filings (Hunton Andrews Kurth, July 2025)
  • 17% of workers say AI is reliable without human oversight; 42% report editing or fixing AI output (Connext Global, n=1,000, January 2026)
  • 64% of workers expect the need for human review of AI to increase (Connext Global, January 2026)
  • ISO CG 40 47 01 26 — standardized generative AI exclusion available to all liability carriers since January 2026 (Verisk)
  • Berkley PC 51380 — absolute AI exclusion covering D&O, E&O, and fiduciary liability products; eliminates coverage for any claim “arising out of” AI use by any person or entity
  • ABA Formal Opinion 512 (July 2024) — establishes duty of competence includes understanding AI limitations and verifying all AI-generated output; boilerplate engagement letter consent is insufficient
  • AICPA standards require CPAs to verify AI output independently before client delivery; no universal disclosure mandate yet, but California and Utah have enacted AI disclosure requirements

What This Means for Your Organization

If your company provides professional services — legal, accounting, financial advisory, engineering, architecture, healthcare, consulting, or any function where you produce work product for clients — AI has changed the liability calculus in ways your current insurance may not cover.

The immediate action is a three-part audit. First, inventory every AI tool touching client-facing work product — not just the tools you approved, but the tools employees are actually using. Second, pull your current professional liability policy and read the AI-related language (or lack thereof). Identify whether your coverage is affirmative, silent, or excluded. Third, implement or formalize the review workflow: every AI-assisted deliverable that carries professional judgment must receive the same human review standard as work produced without AI.

The firms that get this right gain a measurable advantage. They qualify for affirmative AI coverage at renewal. They reduce malpractice exposure by documenting the review standard. And they use AI to produce more work at higher margins — because the review workflow ensures quality while AI handles volume. The firms that treat AI output as final draft are accumulating uninsured liability with every client deliverable.

This is a conversation worth having before your next policy renewal, not after your first claim. If this analysis raised questions specific to your organization, I’d welcome hearing from you — brandon@brandonsneider.com.

Sources

  1. Charlotin, D. “AI Hallucination Cases Database.” damiencharlotin.com. Accessed March 2026. 1,093 cases documented globally. Credibility: High — independent researcher, publicly verifiable database, cited in court decisions.

  2. American Bar Association. “Formal Opinion 512: Generative Artificial Intelligence Tools.” July 29, 2024. Credibility: Highest — authoritative professional standards body.

  3. Harvard Law School Forum on Corporate Governance. “The Hidden C-Suite Risk of AI Failures.” September 22, 2025. Credibility: High — academic forum, independent analysis.

  4. Connext Global. “2026 AI Oversight Report.” February 2026. n=1,000 U.S. workers via Pollfish, January 2026. Credibility: Moderate-high — industry survey via third-party polling platform; sample limited to current AI users.

  5. Hunton Andrews Kurth. “How Insurance Policies Are Adapting to AI Risk.” July 2, 2025. Credibility: High — major insurance law practice, specific policy analysis.

  6. Verisk/ISO. “CG 40 47 01 26: Exclusion — Generative Artificial Intelligence.” Effective January 2026. Credibility: Highest — standardized insurance form, primary source.

  7. Berkley Insurance Co. “PC 51380: Absolute AI Exclusion.” 2025-2026 filings. Credibility: Highest — primary insurer filing.

  8. Hamilton Insurance Group. “Generative Artificial Intelligence Exclusion for Professional Liability.” 2025. Credibility: Highest — primary insurer endorsement.

  9. EPIC Insurance Brokers. “DoNoHarm.exe: The Liability Reckoning for AI in U.S. Healthcare.” 2025. Credibility: Moderate-high — industry broker analysis.

  10. Journal of Accountancy. “AI Risks CPAs Should Know.” February 2026. Credibility: High — AICPA publication, authoritative for accounting profession.

  11. Journal of Accountancy. “Should I Disclose My Use of Gen AI to Clients?” April 2025. Credibility: High — AICPA publication.

  12. Jones Walker LLP. “From Enhancement to Dependency: What the Epidemic of AI Failures in Law Means for Professionals.” 2025. Credibility: Moderate-high — law firm analysis with substantive case law review.

  13. RAND Corporation. “Liability for Harms from AI Systems: The Application of U.S. Tort Law.” 2025. Credibility: High — independent research institution, peer-reviewed methodology.

  14. Setnor Byer. “New AI-Specific Insurance Exclusions Underscore Risks.” 2025. Credibility: Moderate-high — insurance law analysis.

  15. Armilla Insurance Services. AI Liability Insurance Policy announcement. April 2025. Underwritten at Lloyd’s by Chaucer Group. Credibility: Moderate — primary source, but vendor marketing context.


Brandon Sneider | brandon@brandonsneider.com | March 2026