AI Accountability at Mid-Market Scale: Who Is on the Hook When AI Goes Wrong

Brandon Sneider | March 2026


Executive Summary

  • “The AI did it” is not a defense — in any jurisdiction, in any profession, under any theory of liability. Courts, regulators, and tribunals have uniformly held that organizations bear full responsibility for AI-generated outputs, regardless of which department selected the tool or how sophisticated the vendor’s claims were. Air Canada learned this for $812 in a chatbot case. The next company may learn it for $200,000 per violation under Texas RAIGA or through a class action under Illinois AIPA.
  • The accountability gap at mid-market companies is structural, not philosophical. PwC’s 2025 Responsible AI survey finds 56% of organizations now place first-line teams (IT, engineering, data) in charge of responsible AI, but only 28% have reached the “strategic” stage where accountability is embedded in operations. The remaining 72% are governing AI with good intentions but without accountability embedded in how they operate.
  • Five state AI laws now in effect or taking effect in 2026 create deployer-specific accountability obligations — impact assessments, consumer notification, human oversight documentation, and incident response timelines — each requiring a named individual who can answer: “Who authorized this AI use, who reviewed the output, and who is responsible for the result?”
  • The practical accountability framework for a 200-500 person company maps five decision types to five existing roles — no new hires required, no governance committee necessary. The framework translates regulatory “reasonable care” into a RACI matrix that every CxO can implement in two weeks.
  • 88% of organizations deploy AI but only 25% have board-level policies governing that deployment (NACD/WTW, 2025). That 63-point gap is where enforcement exposure, D&O liability, and insurance coverage exclusions converge. Companies that close this gap by assigning accountability before an incident spend $5,000-$15,000 in staff time. Companies that close it after spend six figures in legal fees.

The question of who bears responsibility when AI produces an error is not theoretical. Courts and regulators have answered it, consistently and unambiguously: the organization that deployed the AI system, and within that organization, the individuals who authorized its use, supervised its operation, or failed to review its output.

The Precedent Chain

Moffatt v. Air Canada (BCCRT, February 2024): Air Canada’s chatbot provided incorrect bereavement fare information. The airline argued the chatbot was a “separate entity” from the company. The tribunal rejected this argument entirely, holding that “Air Canada is responsible for all information provided on its website, whether by a static page or a chatbot” (2024 BCCRT 149). Damages were modest ($812 CAD), but the principle is absolute: an organization cannot outsource accountability to an algorithm.

Mata v. Avianca (S.D.N.Y., June 2023) and 300+ subsequent cases: Attorney Steven Schwartz filed a brief containing fabricated case citations generated by ChatGPT. In a single two-week period in August 2025, three federal courts imposed sanctions on attorneys for AI hallucinations. The documented count exceeds 300 cases since mid-2023, more than 200 of them recorded in 2025, spanning Arizona, Louisiana, Florida, the UK, Australia, Canada, and Israel (Jones Walker, August 2025). Courts hold that “even if misuse of AI is unintentional,” the professional remains fully responsible for filing accuracy.

Cigna PxDx System (multiple courts, 2024-2025): Cigna’s automated claims system batch-denied health insurance claims with physician “reviews” averaging 1.2 seconds each — processing an estimated 300,000 denials over two months. Multiple lawsuits alleged that the speed of “review” demonstrated no human oversight, making the organization liable for automated decisions it treated as physician-approved.

SEC v. Presto Automation (January 2025): The SEC charged Presto for materially misleading AI capability claims, finding the company “had no established process for drafting, reviewing, or approving periodic or current reports” and “never implemented disclosure controls.” The absence of an accountability framework was itself the violation (SEC Administrative Proceeding 33-11352).

The pattern is consistent across industries: the absence of documented accountability amplifies liability, while the presence of documented accountability — even imperfect accountability — provides the basis for demonstrating reasonable care.

The Regulatory Accountability Map

Five state AI laws create specific deployer accountability obligations in 2026, each requiring named individuals responsible for defined functions.

  • Texas: RAIGA (HB 149), effective January 1, 2026. Key accountability requirement: deployers must produce AI system descriptions, training data details, performance metrics, and post-deployment monitoring evidence upon AG demand; 60-day cure period. Penalty range: $10,000-$200,000 per violation; $2,000-$40,000 per day for continuing violations.
  • Colorado: AI Act (SB 24-205), effective June 30, 2026. Key accountability requirement: deployers must implement a risk management policy, complete annual impact assessments, and notify the AG within 90 days of discovering algorithmic discrimination. Penalty range: civil penalties under unfair trade practices law.
  • Illinois: AIPA (HB 3773), effective January 1, 2026. Key accountability requirement: employers must notify employees and applicants whenever AI influences employment decisions; discriminatory use is a civil rights violation. Penalty range: civil rights enforcement; private action likely.
  • California: CCPA ADMT regulations, 2026 (pending final). Key accountability requirement: meaningful human oversight required for automated decision-making in employment; trained individuals must have the power to override AI. Penalty range: CCPA penalty framework.
  • New York City: Local Law 144, in effect. Key accountability requirement: annual independent bias audits for automated employment decision tools; public disclosure of audit results. Penalty range: $500-$1,500 per violation.
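
Because these obligations reduce to effective dates and lists of required documents, they are easy to track as data. The sketch below is a minimal, hypothetical Python representation of two of the rows above; the field names and the in_effect() helper are illustrative assumptions, not statutory terms.

```python
# Hypothetical sketch: state-law deployer obligations expressed as data so the
# accountability owner can query what applies as of a given date.
from dataclasses import dataclass
from datetime import date

@dataclass
class Obligation:
    jurisdiction: str
    law: str
    effective: date
    required_artifacts: list[str]   # documents the deployer must be able to produce
    penalty_note: str

OBLIGATIONS = [
    Obligation(
        jurisdiction="Texas",
        law="RAIGA (HB 149)",
        effective=date(2026, 1, 1),
        required_artifacts=[
            "AI system descriptions (producible on AG demand)",
            "training data details",
            "performance metrics",
            "post-deployment monitoring evidence",
        ],
        penalty_note="$10,000-$200,000 per violation; 60-day cure period",
    ),
    Obligation(
        jurisdiction="Colorado",
        law="AI Act (SB 24-205)",
        effective=date(2026, 6, 30),
        required_artifacts=[
            "risk management policy",
            "annual impact assessments",
            "AG notice within 90 days of discovering algorithmic discrimination",
        ],
        penalty_note="civil penalties under unfair trade practices law",
    ),
]

def in_effect(as_of: date) -> list[Obligation]:
    """Obligations whose effective date has passed as of the given date."""
    return [o for o in OBLIGATIONS if o.effective <= as_of]

if __name__ == "__main__":
    for o in in_effect(date(2026, 3, 1)):
        print(f"{o.jurisdiction} {o.law}: {len(o.required_artifacts)} artifacts to maintain")
```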

Beyond state-specific statutes, three federal agencies enforce AI accountability through existing law:

EEOC: Employers — not vendors — are liable under Title VII when AI hiring tools produce discriminatory outcomes. EEOC guidance (September 2025) treats algorithmic tools as “selection procedures” subject to disparate impact analysis. If a vendor’s tool discriminates, the employer pays. The tool vendor may face secondary liability, but the employer cannot delegate primary accountability.

FTC: Operation AI Comply (launched September 2024) has produced enforcement actions against DoNotPay ($193,000 settlement), Rytr, Click Profit, and others — each requiring ongoing compliance documentation and periodic reporting. The FTC does not require a new statute to enforce AI accountability; it applies Section 5 unfair and deceptive practices authority to AI-specific claims.

SEC: The Presto Automation enforcement action establishes that AI governance documentation failures in public companies trigger securities law liability. SEC 2026 examination priorities explicitly flag AI disclosure mismatches.

Why Mid-Market Companies Have the Biggest Gap

Large enterprises assign AI accountability to dedicated governance teams, chief AI officers, and cross-functional committees. Mid-market companies, where AI decisions are made by the same people who run everything else, face a structural accountability vacuum.

The data confirms this:

  • Only 23% of IT leaders are “very confident” their organizations can manage AI governance when deploying generative AI tools (Gartner, 2025). The other 77% are deploying anyway.
  • 55% of organizations have not implemented an AI governance framework, though 40% report they have “started developing one” (Gartner, 2025). Starting is not governing.
  • 65% of organizations with AI governance frameworks have defined accountability and decision rights for algorithms — which means 35% have frameworks without the most critical element: a named human who owns the outcome (Gartner, 2025).
  • Two-thirds of board directors report limited or no knowledge of AI, and fewer than one in four companies have board-approved AI governance policies (WTW Global D&O Survey, 2025). The result: 88% of organizations deploy AI while 75% of boards have not formally authorized or governed that deployment.

At a 200-500 person company, the accountability gap typically looks like this: the CEO says “use AI.” IT selects the tool. Marketing, sales, and operations start using it. Nobody documents who approved which use cases, who reviews AI-generated output before it reaches customers, or who is responsible when the output is wrong. The first time the company discovers the gap is when a customer complains, an employee files a charge, or a state AG sends a civil investigative demand.

The Five-Role Accountability Framework

The regulatory standard across all five state laws and three federal enforcement regimes reduces to three questions:

  1. Who authorized this AI use case? (Authorization accountability)
  2. Who reviewed this AI output before it affected someone? (Oversight accountability)
  3. Who is responsible for the outcome? (Result accountability)

At a 200-500 person company, these three questions map to five existing roles. No new hires are required. The framework assigns decision rights to the people who already own the functions AI touches.

The RACI Matrix for AI Accountability

For each decision type, the Responsible party does the work and the Accountable party signs off:

  • Tool approval (which AI tools may be used, for what purposes, with what data). Responsible: CIO / IT lead. Accountable: CEO or designated AI sponsor. Consulted: GC (legal risk), CISO (security). Informed: department heads, Board.
  • Use case authorization (whether a specific workflow may use AI, at what risk tier). Responsible: department head deploying AI. Accountable: CIO / AI sponsor. Consulted: GC (if client-facing or regulated), HR (if employment-related). Informed: CEO, IT.
  • Output review for client/customer-facing work (human review before AI-generated content reaches external parties). Responsible: individual practitioner using AI. Accountable: department head / practice lead. Consulted: GC (for regulated work product). Informed: CIO, quality/compliance.
  • Incident response (when AI produces a wrong, harmful, or discriminatory output). Responsible: CIO (technical containment), GC (legal exposure), department head (customer impact). Accountable: CEO. Consulted: CISO, HR (if employment-related), CFO (if financial exposure). Informed: Board.
  • Ongoing monitoring (periodic review of AI system performance, bias testing, impact assessment). Responsible: CIO / IT lead. Accountable: AI sponsor (CIO or designated executive). Consulted: GC (regulatory compliance), department heads (operational performance). Informed: CEO, Board (quarterly).
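
For teams that want the matrix in a form that tooling can check, here is a minimal sketch of the same assignments as a Python dictionary. The keys, role strings, and the single-accountable-owner check are assumptions for illustration; nothing here is mandated by the statutes discussed above.

```python
# Minimal sketch of the RACI matrix above as a machine-readable register.
RACI = {
    "tool_approval": {
        "responsible": ["CIO / IT lead"],
        "accountable": "CEO or designated AI sponsor",
        "consulted": ["GC (legal risk)", "CISO (security)"],
        "informed": ["Department heads", "Board"],
    },
    "use_case_authorization": {
        "responsible": ["Department head deploying AI"],
        "accountable": "CIO / AI sponsor",
        "consulted": ["GC (if client-facing or regulated)", "HR (if employment-related)"],
        "informed": ["CEO", "IT"],
    },
    "output_review": {
        "responsible": ["Individual practitioner using AI"],
        "accountable": "Department head / practice lead",
        "consulted": ["GC (for regulated work product)"],
        "informed": ["CIO", "Quality/compliance"],
    },
    "incident_response": {
        "responsible": ["CIO (technical containment)", "GC (legal exposure)",
                        "Department head (customer impact)"],
        "accountable": "CEO",
        "consulted": ["CISO", "HR (if employment-related)", "CFO (if financial exposure)"],
        "informed": ["Board"],
    },
    "ongoing_monitoring": {
        "responsible": ["CIO / IT lead"],
        "accountable": "AI sponsor (CIO or designated executive)",
        "consulted": ["GC (regulatory compliance)", "Department heads (operational performance)"],
        "informed": ["CEO", "Board (quarterly)"],
    },
}

# Every decision type names exactly one accountable owner; that single name is
# what answers a regulator's "who is responsible for the result?"
for decision, roles in RACI.items():
    assert isinstance(roles["accountable"], str) and roles["accountable"], decision
```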

What Each Role Must Document

The CEO / AI Sponsor signs the AI acceptable use policy, authorizes the tool registry, and receives quarterly AI performance reports. When a regulator asks “who authorized AI deployment in this company?”, the answer points here. Estimated time: 2-4 hours per quarter.

The CIO / IT Lead maintains the AI tool registry, manages vendor relationships, conducts or commissions annual impact assessments (required by Colorado), and leads technical incident response. When a regulator asks “what AI systems are deployed and how are they monitored?”, the answer points here. This is the person who must produce documentation within 60 days of a Texas AG civil investigative demand. Estimated time: 8-12 hours per month embedded in existing IT operations.
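
As a concrete illustration of what “maintains the AI tool registry” can mean in practice, the sketch below shows one possible registry entry in Python. The fields (approved use cases, prohibited data classes, assessment dates) are assumptions chosen to answer the documentation questions above; they are not a prescribed schema, and the example tool and vendor names are hypothetical.

```python
# Hypothetical sketch of a single AI tool registry entry maintained by the CIO / IT lead.
from dataclasses import dataclass, asdict
from datetime import date
import json

@dataclass
class ToolRegistryEntry:
    tool_name: str
    vendor: str
    approved_use_cases: list[str]     # what the tool may be used for
    prohibited_data: list[str]        # data classes that may not be sent to the tool
    business_owner: str               # accountable executive for this tool
    approved_on: date
    last_impact_assessment: date      # annual cadence supports Colorado-style reviews
    monitoring_notes: str

entry = ToolRegistryEntry(
    tool_name="Example drafting assistant",           # hypothetical tool
    vendor="Example Vendor, Inc.",                     # hypothetical vendor
    approved_use_cases=["internal first drafts", "meeting summaries"],
    prohibited_data=["customer PII", "PHI", "material nonpublic information"],
    business_owner="CIO",
    approved_on=date(2026, 1, 15),
    last_impact_assessment=date(2026, 1, 15),
    monitoring_notes="Quarterly sample review of outputs by department heads.",
)

# Emit the entry as JSON so it can live in a shared registry file.
print(json.dumps(asdict(entry), default=str, indent=2))
```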

The GC / Legal Counsel reviews AI use cases for regulatory exposure, approves AI use in regulated contexts (hiring, client-facing work product, financial reporting), and leads legal incident response. When the EEOC asks about AI in hiring decisions or a state AG investigates algorithmic discrimination, the answer starts here. Estimated time: 4-8 hours per month, higher during policy development and incident response.

Department Heads authorize AI use within their functions, establish output review standards for their teams, and document that review is occurring. They are the first line of accountability for quality — the people who must answer “did a human review this before it reached the customer?” Estimated time: 2-4 hours per month for review protocol and documentation.

Individual Practitioners follow the AI acceptable use policy, document their review of AI-generated outputs (approved as-is, modified, or rejected), and escalate AI errors through defined channels. They are not accountable for tool selection or policy — they are accountable for their own professional judgment when using AI tools. This mirrors every professional liability standard: the tool does not replace the professional’s duty of care.
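
A minimal sketch of the per-output review record described above, assuming a simple three-way disposition (approved as-is, modified, rejected). The field names are illustrative; any logging system that captures reviewer, disposition, and timestamp would serve the same purpose.

```python
# Illustrative sketch: the practitioner logs a timestamped record showing that
# human judgment was applied to an AI-generated draft before it went out.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Literal

@dataclass
class OutputReview:
    reviewer: str
    tool: str
    work_product: str
    disposition: Literal["approved_as_is", "modified", "rejected"]
    notes: str
    reviewed_at: datetime

review = OutputReview(
    reviewer="j.doe",                                  # hypothetical practitioner
    tool="Example drafting assistant",                 # hypothetical tool
    work_product="Client proposal, section 3",
    disposition="modified",
    notes="Corrected pricing figures; removed an unsupported claim.",
    reviewed_at=datetime.now(timezone.utc),
)
print(review)
```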

The Professional Liability Reality

The accountability framework above is not optional for companies where employees produce professional work product — legal, financial, medical, engineering, or advisory services.

Courts and professional licensing bodies have established an emerging standard: professionals must exercise independent judgment over AI-generated outputs, and the duty of care cannot be delegated to an algorithm.

The liability paradox is real and growing. As AI becomes more accurate and widely available, two forms of liability emerge simultaneously:

Overreliance liability: Using AI output without adequate review. Over 300 documented cases of AI-generated legal hallucinations since mid-2023 demonstrate the risk. The Johnson v. Dunn ruling (N.D. Ala., July 2025) shows that immediate withdrawal, candid disclosure, and systemic reform can mean the difference between a warning and disbarment, but only when the accountability trail demonstrates the failure was an exception to documented practice, not the practice itself.

Underutilization liability: Failing to use AI tools that would have improved the standard of care. Medical malpractice experts note that as AI diagnostic tools achieve 93%+ accuracy in clinical trials (Northwell Health, 2025), litigators may argue that physicians were negligent for not using available AI tools. The same logic extends to any profession where AI tools can demonstrably improve accuracy.

The accountability framework resolves both: document what AI tools are authorized, what review standards apply, and what the professional’s independent judgment contributed. This documentation — not the AI output itself — is what regulators, courts, and licensing boards evaluate.

Key Data Points

  • Organizations deploying AI without board-level governance policies: 63% (88% deploy, 25% have policies). Source: NACD/WTW D&O Survey, 2025.
  • IT leaders “very confident” in AI governance capability: 23%. Source: Gartner, 2025.
  • Organizations without AI governance frameworks: 55%. Source: Gartner, 2025.
  • AI governance frameworks lacking defined accountability/decision rights: 35% of those with frameworks. Source: Gartner, 2025.
  • Documented cases of AI legal hallucinations since mid-2023: 300+ (200+ in 2025 alone). Source: Jones Walker, August 2025.
  • First-line teams leading responsible AI (IT, engineering, data): 56%. Source: PwC Responsible AI Survey, 2025.
  • Organizations at “strategic” or “embedded” AI governance maturity: 61% (28% strategic, 33% embedded). Source: PwC Responsible AI Survey, 2025.
  • Board directors with limited or no AI knowledge: 67%. Source: WTW Global D&O Survey, 2025.
  • Texas RAIGA penalty per uncurable violation: $80,000-$200,000. Source: Texas Bus. & Comm. Code § 552.105.
  • Colorado AI Act deployer impact assessment frequency: annual, plus within 90 days of modification. Source: Colorado SB 24-205.
  • Impact of clear RACI frameworks: 40% faster AI deployment, 60% fewer compliance issues. Source: Elevate Consulting, 2025.
  • Cost to build an accountability framework (staff time): $5,000-$15,000. Source: estimated from governance sprint research.
  • Northwell Health AI diagnostic accuracy (clinical trial vs. real-world): 93% trial accuracy; “dramatically variable” across 23 facilities. Source: Wharton/Kyndryl, 2025.

What This Means for Your Organization

The accountability question is no longer abstract. If AI touches hiring decisions, customer interactions, financial reporting, or professional work product at your company, someone is already on the hook — whether or not anyone knows it. The question is whether that accountability is assigned intentionally, with documentation, or discovered reactively, during an investigation.

The five-role RACI framework above maps to roles that already exist in every 200-500 person company. Implementation does not require new hires, a governance committee, or a dedicated AI budget line. It requires a two-hour conversation among the CEO, CIO, GC, and department heads to answer three questions for each AI use case: Who authorized it? Who reviews the output? Who owns the result?

Companies that answer those questions before an incident spend $5,000-$15,000 in staff time and gain the documentation that satisfies Texas RAIGA’s cure requirements, Colorado’s impact assessment obligations, and the EEOC’s employer liability framework. Companies that answer those questions after an incident spend that amount per week in outside counsel fees — and still cannot produce the timestamped evidence that regulators demand.

The Northwell Health case illustrates why the framework matters beyond compliance. Their AI diagnostic tool achieved 93% accuracy in clinical trials. In real-world deployment across 23 facilities, performance varied “dramatically” — not because the AI changed, but because the human accountability layer varied by facility. Facilities where trained staff reviewed AI outputs with defined protocols captured the 93% accuracy. Facilities where AI operated with ad hoc oversight captured far less. The technology was identical. The accountability framework was the variable.

If your organization is deploying AI without named accountability at each of the five decision points — tool approval, use case authorization, output review, incident response, and ongoing monitoring — the framework above provides the Monday-morning starting point. If this raised questions specific to your situation, I’d welcome the conversation — brandon@brandonsneider.com.

Sources

  1. Moffatt v. Air Canada, 2024 BCCRT 149, February 14, 2024. British Columbia Civil Resolution Tribunal ruling holding Air Canada liable for chatbot misinformation. Primary source (tribunal decision). High credibility.

  2. Jones Walker LLP, “From Enhancement to Dependency: What the Epidemic of AI Failures in Law Means for Professionals,” August 2025. Analysis of 300+ documented AI legal hallucination cases. Law firm analysis of court records. High credibility.

  3. PwC, “2025 Responsible AI Survey: From Policy to Practice,” 2025. Survey of executives on AI governance maturity, accountability structures, and program effectiveness. Major consulting firm survey. High credibility, though methodology/sample size not publicly disclosed.

  4. Gartner, AI governance framework research, 2025-2026. Multiple reports on governance maturity, accountability gaps, and IT leader confidence. Independent analyst firm. High credibility.

  5. WTW, “Global Directors’ and Officers’ Survey Report 2024/2025 — Artificial Intelligence,” March 2025. Survey finding two-thirds of directors report limited AI knowledge and fewer than 25% of companies have board-approved AI governance policies. Independent survey of board directors. High credibility.

  6. Wiley Rein LLP, “2025 State AI Laws Expand Liability, Raise Insurance Risks,” 2025. Analysis of state AI legislation accountability requirements and penalty structures. Law firm regulatory analysis. High credibility.

  7. SEC Administrative Proceeding 33-11352, In the Matter of Presto Automation, January 2025. Enforcement action for materially misleading AI capability claims and absence of governance documentation. Primary source (enforcement action). High credibility.

  8. EEOC, AI and Algorithmic Fairness Initiative, ongoing. Guidance treating AI hiring tools as selection procedures under Title VII, establishing employer liability for vendor-provided AI discrimination. Federal regulatory body. High credibility.

  9. FTC, Operation AI Comply, September 2024-present. Enforcement actions against DoNotPay ($193,000), Rytr, and others for deceptive AI claims. Primary source (enforcement actions). High credibility.

  10. Wharton/Kyndryl, “Who’s Accountable When AI Fails?” and AI Readiness Report, 2025. Analysis of Northwell Health diagnostic AI deployment (93% clinical trial accuracy, variable real-world performance across 23 facilities); Kyndryl finds 71% of technology leaders lack confidence in managing future AI risks. Academic/independent research. High credibility.

  11. Dr. Cornelia C. Walther, M4-Matrix Framework for AI Accountability, 2025. Four-level accountability model (Micro/Meso/Macro/Meta) mapping individual, organizational, national, and global accountability. Referenced in Wharton analysis. Academic framework. High credibility for conceptual structure.

  12. Elevate Consulting, “Designing the AI Governance Operating Model & RACI,” 2025. Analysis finding companies with clear RACI frameworks deploy AI 40% faster with 60% fewer compliance issues. Consulting firm analysis. Moderate credibility; self-reported metric without disclosed methodology.

  13. Colorado SB 24-205 (AI Act), effective June 30, 2026. Texas HB 149 (RAIGA), effective January 1, 2026. Illinois HB 3773 (AIPA), effective January 1, 2026. Primary legislative sources. High credibility.


Brandon Sneider | brandon@brandonsneider.com | March 2026