Board Fiduciary Duty in the AI Era: When “Wait and See” Becomes Director Liability
Brandon Sneider | March 2026
Executive Summary
- Directors face non-exculpable personal liability under Delaware’s Caremark doctrine for failing to establish AI oversight systems — the same legal standard under which oversight claims survived dismissal against Boeing’s board for safety oversight failures and Blue Bell’s board for food safety lapses.
- AI-related securities class actions more than doubled from 2023 to 2024 (15 filings), with 12 filed in the first half of 2025 alone. The SEC’s January 2025 Presto Automation enforcement action, the first “AI-washing” case against a public company, established AI-washing as a distinct category of securities enforcement.
- Only 27% of boards have formally added AI governance to committee charters, despite 88% of their companies actively deploying AI — a 61-point gap that maps directly to Caremark exposure.
- Glass Lewis’s 2026 proxy guidelines now treat AI oversight like cybersecurity: insufficient oversight that causes material harm triggers vote-against recommendations for responsible directors.
- The fiduciary argument transforms AI governance from a discretionary strategy conversation into a legal obligation. The GC’s case to the reluctant board is no longer “should we?” — it is “we must, or face personal exposure.”
The Legal Foundation: Caremark Applied to AI
The duty of oversight under Delaware law traces to In re Caremark Int’l Inc. Derivative Litig., 698 A.2d 959 (Del. Ch. 1996). Directors face liability when they either (a) utterly fail to implement any reporting or information system, or (b) consciously fail to monitor an existing system — thereby disabling themselves from learning of risks requiring their attention.
Two subsequent decisions sharpened the standard. In Marchand v. Barnhill (2019), the Delaware Supreme Court held that “mission critical” regulatory risks demand rigorous board oversight — reinstating claims against Blue Bell’s directors after a listeria outbreak killed three people and sickened others, because the board had no committee, no protocols, and no reporting structure for food safety at an ice cream company. In In re Boeing Co. (2021), the Court of Chancery sustained Caremark claims against Boeing directors for failing to oversee aircraft safety — a “mission critical” risk for an airplane manufacturer — after two fatal crashes of the 737 MAX; those claims later settled for $237.5 million.
The pattern is clear: when a risk is central to the business, and the board has no system for monitoring it, directors face non-exculpable liability — meaning exculpation clauses and D&O insurance cannot fully protect them.
AI now fits this framework. As Harvard’s Edmond & Lily Safra Center for Ethics argues, Caremark claims succeed where fact patterns show “sustained or systematic failure of oversight by a board over AI-related harms” — specifically where AI is central to the company’s operations, supports high-risk business functions, or results in foreseeable harm. For a company using AI to generate client deliverables, make hiring decisions, or process customer data, AI governance is not a technology question. It is a mission-critical oversight obligation.
The Governance Gap: Where Boards Stand Today
The numbers paint an uncomfortable picture for directors.
| Metric | Percentage | Source |
|---|---|---|
| Companies deploying AI | 88% | Multiple industry surveys, 2025 |
| Boards with formal AI governance in committee charters | 27% | NACD 2025 Board Practices Survey |
| S&P 500 companies disclosing board-level AI oversight | 32% | ISS-Corporate, 2024 proxy season |
| S&P 100 companies disclosing board AI oversight | 54% | Glass Lewis/Harvard Law, March 2026 |
| Companies citing AI in board risk-oversight disclosures | 48% | EY Center for Board Matters, 2025 (triple the 16% in 2024) |
| Companies citing AI in director qualification disclosures | 44% | EY Center for Board Matters, 2025 (up from 26% in 2024) |
| Boards that have assessed AI’s impact on strategy | 23% | NACD 2025 Board Practices Survey |
The trajectory is rapid — AI risk disclosure tripled in a single proxy season — but the baseline remains dangerously low. A 61-point gap between AI deployment (88%) and formal board governance (27%) means most directors are personally exposed to the exact oversight failure that Caremark penalizes.
PwC’s 2025 Annual Corporate Directors Survey reinforces the problem: only 32% of executives believe their boards have the right skills mix for current challenges, and 55% of directors say at least one board colleague should be replaced. AI is the number one oversight area where directors say they need more time.
The Litigation Signal: What Is Already Happening
AI-Washing Enforcement
The SEC’s January 2025 action against Presto Automation — a formerly Nasdaq-listed restaurant technology company — marked the first AI-washing enforcement against a public company. Presto made materially misleading claims about its “Presto Voice” AI product for drive-throughs, failing to disclose that the speech recognition technology was owned and operated by a third party and required significant human intervention. The SEC found the company “never implemented disclosure controls” for AI claims.
The FTC followed a similar path with DoNotPay, imposing a $193,000 penalty and permanent advertising restrictions after finding the company’s “world’s first robot lawyer” claims were unsubstantiated — the company never tested whether its AI performed at the level of a human lawyer.
Securities Class Action Acceleration
AI-related securities class actions more than doubled from 2023 to 2024, with 15 suits filed. By mid-2025, 12 additional cases had been filed — putting the category on pace to exceed the prior year’s total. DLA Piper and NERA Economic Consulting both identify AI claims as the fastest-growing category of event-driven litigation.
The Super Micro Computer litigation illustrates the scale of exposure: multiple consolidated class actions allege misrepresentation of AI-related capabilities, with institutional plaintiffs and prominent securities firms driving the case through the Northern District of California.
The D&O Insurance Squeeze
These litigation trends directly affect director protection. Average settlement values for D&O claims rose 27% in recent periods. Boards operating without documented AI governance face both the liability itself and the prospect that their D&O carriers will question coverage — the same dynamic emerging in the cyber insurance market, where governance documentation determines whether policies include or exclude AI-related claims.
The Proxy Advisor Inflection: 2026 Season
Two developments make the 2026 proxy season a turning point for AI governance.
Glass Lewis added AI oversight to its 2026 guidelines, mirroring the cybersecurity oversight framework introduced in 2023. The standard: when evidence shows insufficient oversight of AI technologies has caused material harm, Glass Lewis will identify which directors or committees bear oversight responsibility and may recommend votes against those directors. Companies that develop or use AI should disclose the board’s role in AI oversight and how directors are educated on the topic.
The SEC’s Investor Advisory Committee voted on December 4, 2025 to recommend that the Commission require issuers to (1) define what they mean by “artificial intelligence,” (2) disclose board oversight mechanisms for AI deployment, and (3) report separately on how AI affects internal operations and consumer-facing matters. The IAC noted that only 40% of S&P 500 companies provide AI-related disclosures and just 15% disclose board oversight of AI — while 60% view AI as a material risk. The current SEC leadership has signaled skepticism about prescriptive rules, but the recommendation establishes a disclosure floor that plaintiffs’ lawyers and proxy advisors will treat as a governance expectation regardless.
The practical effect: boards that cannot document AI oversight in their 2026 proxy statements face vote-against recommendations, reputational damage, and — if AI-related losses materialize — derivative claims citing the board’s own disclosure gap as evidence of conscious disregard.
The Five Obligations: What Caremark Requires for AI
Synthesizing the case law, SEC signals, and proxy advisor expectations, boards face five specific obligations. These are not aspirational best practices — they are the minimum standard that Caremark and its progeny establish for mission-critical risk oversight.
1. Assign Committee Responsibility
Designate a specific board committee — audit, risk, technology, or a new standing committee — with explicit charter language covering AI governance. The Marchand court specifically cited the absence of a food safety committee as evidence of Caremark failure. The AI parallel is direct: a board that deploys AI without assigning oversight has no monitoring system.
2. Establish Reporting Protocols
Require regular management reporting to the designated committee on AI deployment, risks, incidents, and compliance. Boeing failed because the board lacked a system for receiving safety information. The AI equivalent: management must report on what AI systems are deployed, what data they process, what decisions they influence, and what incidents have occurred.
3. Document Director Education
Proxy advisors now expect evidence that directors understand AI. Glass Lewis’s 2026 guidelines specifically ask how companies ensure directors are “fully versed” on AI. WTW’s D&O survey finds AI is the area where fewest directors believe the board has adequate skills. Documented training — not a single briefing, but ongoing education — addresses both the governance gap and the disclosure expectation.
4. Monitor for Red Flags
Caremark’s second prong penalizes boards that consciously ignore red flags within existing systems. AI-specific red flags include: employee use of consumer AI tools for confidential data, algorithmic outputs that produce discriminatory results, vendor claims about AI capabilities that cannot be verified, and customer complaints about AI-driven decisions. The board must have a channel for learning about these issues and evidence that it acts on them.
5. Review Disclosure Adequacy
The SEC’s AI-washing enforcement against Presto Automation and the IAC’s December 2025 recommendations establish a clear expectation: companies must not overstate AI capabilities or understate AI risks in public disclosures. The board’s role is to ensure that management’s AI-related statements — in earnings calls, proxy statements, marketing materials, and SEC filings — are accurate and adequately hedged.
Key Data Points
| Data Point | Detail |
|---|---|
| AI securities class actions filed in 2024 | 15 (doubled from 2023) |
| AI securities class actions filed in H1 2025 | 12 (on pace to exceed 2024) |
| D&O claim settlement increase | 27% rise in average values |
| Boards with AI in committee charters | 27% (NACD 2025) |
| Companies deploying AI without board governance | 61% gap (88% deploy, 27% govern) |
| S&P 500 companies disclosing AI board oversight | 32% (ISS-Corporate, 2024) |
| Companies citing AI in board risk-oversight disclosures | 48% (tripled from 16% in one year) |
| SEC AI-washing enforcement | Presto Automation, January 2025 (first public company action) |
| FTC AI enforcement penalty | DoNotPay, $193K + permanent restrictions |
| Glass Lewis AI governance standard | Effective January 2026 (mirrors cybersecurity framework) |
| SEC IAC AI disclosure recommendation | December 4, 2025 (three-part disclosure framework) |
What This Means for Your Organization
The fiduciary duty argument changes the AI governance conversation from optional to obligatory. A director who votes to deploy AI tools across the enterprise but votes against establishing an AI oversight committee has created precisely the gap that Marchand and Boeing penalize — deployment without monitoring, risk without reporting, capability without controls.
For mid-market companies, the practical exposure is acute. Large-cap companies have governance infrastructure — risk committees, compliance officers, internal audit — that can absorb AI oversight. A 500-person company typically does not. The GC bears the dual burden of advising the board on its obligations and building the oversight system that satisfies them. The five obligations above — committee assignment, reporting protocols, director education, red flag monitoring, and disclosure review — are achievable at mid-market scale for $10K-$25K in outside counsel fees plus internal governance time. The cost of not building them is a Caremark claim where the board’s own proxy disclosures serve as plaintiff’s Exhibit A.
The 2026 proxy season is the first in which Glass Lewis will evaluate AI oversight alongside cybersecurity oversight. Companies that cannot describe their board’s AI governance role face the same trajectory that cybersecurity governance followed after 2023: first a proxy advisor expectation, then an institutional investor expectation, then a litigation standard. The boards that build oversight systems now — before an incident, before a regulatory inquiry, before a shareholder demand letter — position themselves among the small minority that treats governance as a competitive advantage rather than a compliance burden.
If this raised questions about your board’s specific obligations or governance readiness, I’d welcome the conversation — brandon@brandonsneider.com
Sources
- In re Caremark Int’l Inc. Derivative Litig., 698 A.2d 959 (Del. Ch. 1996) — foundational Delaware duty of oversight standard. Authoritative.
- Marchand v. Barnhill, 212 A.3d 805 (Del. 2019) — Delaware Supreme Court establishes “mission critical” risk standard for board oversight. Authoritative.
- In re Boeing Co. Derivative Litig., C.A. No. 2019-0907 (Del. Ch. 2021) — Court of Chancery applies Marchand mission-critical standard to aircraft safety. Authoritative.
- Akin Gump, “Does AI Care About Caremark? Applying the Core Principles of Corporate Governance to Artificial Intelligence Integration” (2025) — legal analysis of Caremark obligations for AI governance, SEC IAC recommendations, Glass Lewis 2026 guidelines. https://www.akingump.com/en/insights/articles/does-ai-care-about-caremark-applying-the-core-principles-of-corporate-governance-to-artificial-intelligence-integration High — major law firm analysis.
- Harvard Edmond & Lily Safra Center for Ethics, “Post #6: The Caremark Rule and Board Level AI Risk Management” (2025) — four-tier framework for AI-related Caremark exposure. https://www.ethics.harvard.edu/blog/post-6-caremark-rule-and-board-level-ai-risk-management High — independent academic analysis.
- SEC Investor Advisory Committee, “Artificial Intelligence Disclosure Recommendation” (December 4, 2025) — three-part AI disclosure framework. https://www.sec.gov/files/approved-artificial-intelligence-disclosure-recommendation-120425.pdf Authoritative — SEC advisory body.
- D&O Diary, “SEC Investor Advisory Committee Recommends AI-Related Disclosure Guidelines” (December 2025) — analysis of IAC vote and SEC response. https://www.dandodiary.com/2025/12/articles/securities-laws/sec-investor-advisory-committee-recommends-ai-related-disclosure-guidelines/ High — leading D&O commentary.
- Glass Lewis, “US AI Oversight Through Three Lenses: Investor Expectations, the S&P 100 and Company-Specific Analysis” (March 2026) — 2026 proxy guidelines and AI oversight framework. https://www.glasslewis.com/article/us-ai-oversight-through-three-lenses-investor-expectations-sp-100-company-specific-analysis Authoritative — proxy advisory firm.
- Norton Rose Fulbright, “ISS and Glass Lewis 2026 Proxy Voting Guidelines” (2026) — summary of proxy advisor policy updates. https://www.nortonrosefulbright.com/en/knowledge/publications/f7d36051/iss-and-glass-lewis-2026-proxy-voting-guidelines High — major law firm analysis.
- NACD 2025 Public Company Board Practices and Oversight Survey — 62% of boards hold AI discussions, 27% have AI in committee charters. https://www.nacdonline.org/all-governance/governance-resources/governance-surveys/surveys-benchmarking/2025-public-company-board-practices--oversight-survey/2025-board-practices-oversight-ai/ High — independent governance organization.
- ISS-Corporate, “Roughly One-Third of Large U.S. Companies Now Disclose Board Oversight of AI” (2025) — 31.6% S&P 500 disclosure rate. https://www.iss-corporate.com/press/roughly-one-third-of-large-u-s-companies-now-disclose-board-oversight-of-ai-iss-corporate-finds/ High — independent governance data.
- EY Center for Board Matters, “Cyber and AI Oversight Disclosures: What Companies Shared in 2025” — 48% cite AI risk (triple prior year), 44% mention AI in director qualifications. https://corpgov.law.harvard.edu/2025/10/28/cyber-and-ai-oversight-disclosures-what-companies-shared-in-2025/ High — Big Four audit firm analysis.
- PwC 2025 Annual Corporate Directors Survey — 32% of executives believe boards have right skills, 55% say a colleague should be replaced. https://www.pwc.com/us/en/services/governance-insights-center/library/annual-corporate-directors-survey.html High — Big Four audit firm, large sample.
- WTW Global Directors’ and Officers’ Survey 2024/2025 — AI ranked lowest area of board knowledge/skill confidence. https://www.wtwco.com/en-us/insights/2025/04/how-board-level-ai-governance-is-changing High — major insurance broker survey.
- SEC Enforcement, “SEC Charges Presto Automation for Misleading Statements About AI Product” (January 2025) — first AI-washing enforcement against a public company. https://www.sec.gov/enforcement-litigation/administrative-proceedings/33-11352-s Authoritative — SEC enforcement action.
- FTC, “FTC Finalizes Order with DoNotPay” (February 2025) — $193K penalty, permanent advertising restrictions for AI capability misrepresentation. https://www.ftc.gov/news-events/news/press-releases/2025/02/ftc-finalizes-order-donotpay-prohibits-deceptive-ai-lawyer-claims-imposes-monetary-relief-requires Authoritative — federal enforcement action.
- DLA Piper, “AI-Related Securities Class Action Filings Are on the Rise” (September 2025) — litigation trend analysis, case filing counts. https://www.dlapiper.com/en-us/insights/publications/2025/09/ai-related-securities-class-action-filings-are-on-the-rise-key-observations High — major law firm analysis.
- NERA Economic Consulting, “Recent Trends in Securities Class Action Litigation: 2025 Full-Year Review” (2026) — aggregate filing and settlement data. https://www.nera.com/insights/publications/2026/recent-trends-in-securities-class-action-litigation--2025-full-y.html High — independent economic consulting firm.
Brandon Sneider | brandon@brandonsneider.com | March 2026