The Board AI Briefing: What a $200M-$2B Company Should Actually Report Quarterly
Brandon Sneider | March 2026
Executive Summary
- Board AI oversight tripled in one year — 48% of Fortune 100 companies now cite AI risk in board oversight, up from 16% in 2024 (EY Center for Board Matters, 2025). Mid-market boards are behind, and the gap creates governance exposure
- Only 15% of boards currently receive AI-related metrics (McKinsey, 2025). Most boards get either nothing or a vendor pitch deck repackaged as a “strategy update.” Neither constitutes oversight
- The SEC Investor Advisory Committee recommended formal AI disclosure rules in December 2025 — covering board oversight mechanisms, material AI deployments, workforce impact, and cybersecurity risks. While not yet a rule, the direction is clear: boards that cannot demonstrate informed oversight face increasing regulatory and liability exposure
- The practical quarterly reporting package for a mid-market board fits on two pages — six metrics, three risk indicators, and one decision item. Anything more creates the illusion of oversight without the substance
- 54% of S&P 100 companies now disclose board-level AI oversight, but only 28% disclose both oversight and a formal AI policy (ISS Corporate, Harvard Law School Forum, March 2026). The gap between “saying you oversee AI” and “demonstrating governance” is where liability lives
Why Mid-Market Boards Need This Now
The regulatory trajectory is unambiguous. The SEC’s Division of Examinations lists AI as a top priority in its 2026 Examination Priorities, signaling it will “closely examine companies’ use of AI and other automated technologies, scrutinizing whether related disclosures, supervisory frameworks and controls align with actual practices” (Harvard Law School Forum on Corporate Governance, January 2026).
For public mid-market companies ($200M-$2B revenue), this means SEC examiners will compare disclosed AI governance against actual practice. For private companies in this range, the pressure comes from three directions: insurance carriers asking about AI governance during renewals, clients including AI due diligence in vendor assessments, and potential acquirers evaluating AI risk as part of deal diligence.
The board’s job is not to manage AI. It is to ensure management has a credible plan, adequate resources, and functioning risk controls. To do that job, the board needs a reporting instrument that is specific enough to reveal problems and concise enough to actually get read.
The Two-Page Quarterly Board AI Report
Page 1: Operations and Value
Section A — AI Deployment Status (3 metrics)
| Metric | What It Measures | Why the Board Needs It |
|---|---|---|
| Active AI tools and systems (count, with classification: approved/shadow/evaluation) | Scope of AI footprint | You cannot govern what you do not know exists. Shadow AI count is the single most diagnostic number |
| AI-enabled processes (% of total, by business function) | Penetration depth | Distinguishes between “bought licenses” and “changed how work gets done” |
| AI spend (quarterly, broken into licenses + integration + training + consumption overages) | Full cost visibility | License fees are only 10-17% of total AI cost. If the board sees only license spend, it is seeing roughly 15 cents of every dollar actually spent |
Section B — Value Capture (3 metrics)
| Metric | What It Measures | Why the Board Needs It |
|---|---|---|
| Productivity impact by function (hours saved/week, output change, quality delta) | Whether AI is working | Self-reported “time saved” is unreliable. Pair it with output metrics: PRs merged, tickets resolved, documents produced |
| ROI by initiative (cost invested vs. measured return, by project) | Whether the investment thesis holds | Forces management to connect AI spend to business outcomes, not activity metrics |
| Adoption rate (% of licensed users actively using tools weekly) | Whether the organization is capturing the investment | Industry average: 30-40% of licensed seats see regular use. Below 25% signals a deployment problem, not a tool problem |
Page 2: Risk and Governance
Section C — Risk Indicators (3 metrics)
| Metric | What It Measures | Why the Board Needs It |
|---|---|---|
| AI incidents (count and severity: data exposure, incorrect outputs used in decisions, policy violations) | Active risk materialization | Organizations trigger an average of 223 GenAI data policy violations per month (Kiteworks/IBM, 2025). The board should know the company’s number |
| Vendor concentration (% of AI capability dependent on single vendor, backup readiness) | Supply chain risk | If 80% of AI capability runs through one vendor, a pricing change, outage, or policy shift is a material business risk |
| Regulatory compliance status (EU AI Act milestones, SEC disclosure readiness, state-level AI legislation) | Legal exposure | Deadlines are fixed. The board needs a stoplight chart showing green/yellow/red against each applicable requirement |
Section D — Decision Item (1 per quarter)
Each quarterly report should include one specific decision or endorsement the board is being asked to make. Examples:
- Q1: Approve the annual AI governance policy refresh
- Q2: Review and endorse the AI risk assessment results
- Q3: Approve the AI budget for the next fiscal year
- Q4: Review the annual AI program audit findings
This forces management to treat the board as a governance body, not an audience. A board that only receives information without making decisions is not providing oversight.
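For organizations that want to operationalize the format, the two-page package can be represented as a simple data structure that enforces its own discipline. The sketch below is purely illustrative; the class and field names are hypothetical, not a standard, and the sample values are invented:

```python
from dataclasses import dataclass

# Hypothetical sketch of the two-page quarterly package as a data structure.
# Field names and sample values are illustrative only.

@dataclass
class Metric:
    name: str
    value: str
    status: str  # stoplight convention: "green" | "yellow" | "red"

@dataclass
class QuarterlyAIReport:
    quarter: str
    deployment: list        # Section A: exactly 3 metrics
    value_capture: list     # Section B: exactly 3 metrics
    risk_indicators: list   # Section C: exactly 3 metrics
    decision_item: str      # Section D: exactly one decision

    def is_complete(self) -> bool:
        """Enforce the two-page discipline: six metrics, three risk
        indicators, one decision item -- no more, no less."""
        return (
            len(self.deployment) == 3
            and len(self.value_capture) == 3
            and len(self.risk_indicators) == 3
            and bool(self.decision_item.strip())
        )

# Example quarter (all numbers invented for illustration)
report = QuarterlyAIReport(
    quarter="2026-Q2",
    deployment=[
        Metric("Active AI systems", "41 (6 shadow)", "yellow"),
        Metric("AI-enabled processes", "18% of workflows", "green"),
        Metric("Total AI spend", "$412k (licenses 14%)", "green"),
    ],
    value_capture=[
        Metric("Productivity impact", "+9 hrs/wk in engineering", "green"),
        Metric("ROI by initiative", "2 of 5 projects positive", "yellow"),
        Metric("Adoption rate", "31% weekly active", "yellow"),
    ],
    risk_indicators=[
        Metric("AI incidents", "4 this quarter (1 severity-2)", "yellow"),
        Metric("Vendor concentration", "72% on a single vendor", "red"),
        Metric("Compliance status", "EU AI Act milestones on track", "green"),
    ],
    decision_item="Approve annual AI governance policy refresh",
)
assert report.is_complete()
```

The point of the completeness check is the same as the point of the two-page limit: if management cannot fill exactly these slots, the board learns something important before the meeting starts.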
What Separates Good Board AI Reporting from Theater
McKinsey identifies four board “AI postures” — Passive, Performative, Informed, and Strategic — and finds that most boards operate in the Performative category: they receive presentations about AI but lack the data or structure to exercise real oversight (McKinsey, 2025).
The markers of performative oversight:
- Annual AI presentations from the CIO with no metrics
- Board members listed as having “AI experience” in skills matrices, but with no governance training
- AI mentioned in the risk section of the 10-K but no committee charter assigns ownership
- No incident reporting mechanism — the board learns about AI problems from the press
The markers of informed oversight:
- Quarterly structured reports with consistent metrics (the two-page format above)
- One board committee explicitly owns AI oversight (audit committee is most common at 21% of companies, but a dedicated technology/risk committee provides better focus)
- Board receives incident reports within defined escalation windows
- Annual independent review of AI systems and governance effectiveness
- At least one board member with genuine AI governance competence (not just “worked at a tech company”)
The Audit Committee Question
EY’s analysis of 2025 proxy disclosures finds that audit committees are the most common home for AI oversight at 21% of companies (EY Center for Board Matters, 2025). This makes sense: the audit committee already oversees risk management, internal controls, and compliance — and AI governance touches all three.
For mid-market companies without a dedicated technology committee, assigning AI oversight to the audit committee is the pragmatic choice. The charter amendment takes one meeting. The quarterly reporting cadence already exists. The relationship with internal and external auditors provides a natural verification mechanism.
The risk of not assigning ownership to any committee: AI governance becomes everyone’s responsibility and no one’s accountability. At mid-market scale, this ambiguity is where problems hide.
Key Data Points
| Metric | Value | Source |
|---|---|---|
| Fortune 100 companies citing AI in board risk oversight | 48% (up from 16% in 2024) | EY Center for Board Matters, 2025 |
| S&P 100 companies disclosing board AI oversight | 54% | ISS Corporate / Harvard Law Forum, March 2026 |
| S&P 100 disclosing both oversight AND AI policy | 28% | ISS Corporate / Harvard Law Forum, March 2026 |
| Boards receiving AI-related metrics | 15% | McKinsey, 2025 |
| AI expertise in director skills matrices | 44% (up from 26% in 2024) | EY, 2025 |
| Companies assigning AI oversight to audit committee | 21% | EY, 2025 |
| Average GenAI policy violations per month | 223 (top quartile: 2,100) | Kiteworks/IBM, 2025 |
What This Means for Your Organization
The board AI reporting question is simpler than it appears. Most mid-market companies have two problems: the board does not know what to ask for, and management does not know what to provide. The two-page quarterly format above solves both — it gives management a clear reporting obligation and gives the board a consistent instrument for exercising oversight.
The regulatory direction makes this urgent. The SEC Investor Advisory Committee’s December 2025 recommendation for formal AI disclosure rules — even if not yet codified — signals that “informed oversight” is becoming the standard against which boards will be measured. For mid-market companies preparing for potential IPO, acquisition, or simply good governance, building this reporting discipline now is cheaper and less disruptive than retrofitting it under regulatory pressure.
Start with the audit committee charter. Add three sentences assigning AI oversight responsibility. Build the first quarterly report using the six-metric, three-risk-indicator format. Iterate from there. If you want to calibrate this reporting package against what peer companies at your scale are actually providing their boards, that is a short conversation with significant downstream value — brandon@brandonsneider.com
Sources
- Corporate Compliance Insights — “Board Oversight of AI Triples Since '24” (October 2025). Credibility: HIGH — industry publication, citing primary proxy data
- Crowell & Moring LLP — “Investor Advisory Committee Recommends SEC Disclosure Guidelines for Artificial Intelligence” (December 2025). Credibility: HIGH — law firm analysis of primary regulatory proceedings
- EY Center for Board Matters — “Cyber and AI Oversight Disclosures: What Companies Shared in 2025” (2025). Credibility: HIGH — Big Four analysis of proxy filings, primary data
- Harvard Law School Forum on Corporate Governance — “Key Considerations for the 2025 Annual Reporting Season” (January 2026). Credibility: HIGH — academic institution, expert commentary
- Harvard Law School Forum on Corporate Governance — “US AI Oversight Through Three Lenses” (March 2026). Credibility: HIGH — ISS Corporate primary research
- Kiteworks/IBM — GenAI policy violation data (2025). Credibility: HIGH — primary incident data
- McKinsey — “The AI Reckoning: How Boards Can Evolve” (2025). Credibility: HIGH — major consulting firm, board governance specialty
- SEC Division of Examinations — 2026 Examination Priorities. Credibility: HIGH — primary regulatory source
- The D&O Diary — “SEC Investor Advisory Committee Recommends AI-Related Disclosure Guidelines” (December 2025). Credibility: HIGH — specialty legal publication
Brandon Sneider | brandon@brandonsneider.com | March 2026