The Board’s Quarterly AI Question Set: Seven Questions That Separate Oversight from Theater

Brandon Sneider | March 2026


Executive Summary

  • Only 26% of corporate boards discuss AI at every meeting — but those that do are 4.8x as likely to achieve high AI ROI. Protiviti and BoardProspects’ Global Board Governance Survey (n=772 board members and C-suite executives, Q4 2025) finds 63% of high-ROI organizations include AI discussion at every board meeting versus 13% of low-ROI organizations. The question is not whether boards should discuss AI quarterly. It is what, specifically, they should ask.
  • Board AI oversight tripled in a single year — from 16% to 48% of Fortune 100 companies citing AI in board risk oversight — yet only 12% disclosed that directors received any AI education. The EY Center for Board Matters (Fortune 100 proxy filings through July 2025) documents a governance apparatus growing faster than the competence to use it.
  • The fiduciary case is settled. Delaware’s Caremark doctrine, SEC enforcement against AI-washing (Presto Automation, January 2025), and proxy advisor withhold recommendations for AI oversight gaps mean AI governance is a legal obligation, not an optional agenda item.
  • Seven questions — covering value, risk, people, competitive position, spend discipline, data readiness, and governance maturity — give directors a quarterly instrument that fits in 30 minutes and produces the oversight evidence that satisfies fiduciary, regulatory, and investor scrutiny.

Why Quarterly — and Why These Questions

Most boards get AI wrong in one of two ways. They either avoid the topic — 45% of boards have not placed AI on their agenda at all (Deloitte, n=468, May-July 2024) — or they receive a CTO monologue about technology that produces no actionable oversight. Both fail the Caremark standard.

The Protiviti/BoardProspects data makes the ROI case cleanly. Among high-ROI organizations, 95% of directors express confidence in their company’s AI integration ability. Among low-ROI organizations, the number is 33%. Confidence in responsible AI strategy follows the same pattern: 93% versus 42%. These are not technology metrics. They are governance outcomes.

EY’s Fortune 100 analysis shows the structural gap. AI oversight disclosure tripled from 16% to 48% in one year. Committee assignments for AI jumped from 11% to 40%. AI expertise in director skills matrices climbed from 26% to 44%. But board AI education disclosure sits at 12%. Directors are accepting oversight responsibility for something most of them do not yet understand.

The quarterly question set solves this by creating a recurring structure that educates the board while simultaneously exercising oversight. The questions are designed so that management’s answers — and inability to answer — tell directors exactly where the organization stands.

The Seven-Question Framework

These questions draw on governance guidance from NACD, EY Center for Board Matters, Harvard Law School Forum on Corporate Governance, WilmerHale, and Akin Gump’s Caremark analysis. Each question maps to a specific oversight domain and produces a specific governance artifact.

Question 1: Where Is AI Operating Today — and What Changed Since Last Quarter?

Oversight domain: Visibility and control
What good looks like: Management presents a current AI inventory — every system, vendor, internal build, and shadow deployment — with additions, removals, and modifications since the prior quarter.

The inventory is the foundation. Without it, every subsequent question operates on partial information. Harvard Law School Forum’s 2026 analysis finds that 65% of U.S. investors expect companies to disclose board oversight of AI governance — but oversight requires knowing what is deployed. CIO.com’s 2026 analysis of board expectations identifies the core directorial demand: CIOs must articulate “the entire AI footprint in narrative terms: where intelligence exists, what purpose it serves, how it behaves, and where it intersects with key decisions.”

For a mid-market company, this should fit on one page. The red flag is when the list is the same as last quarter (no one is tracking) or when someone in the room identifies a deployment not on the list (shadow AI).
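The quarter-over-quarter delta this question asks for is, mechanically, a set comparison between two inventories. A minimal sketch, assuming inventories keyed by system name (every name, field, and value below is hypothetical and only illustrates the comparison a board would see, not a prescribed format):

```python
# Hypothetical quarterly AI inventory delta. Each inventory maps a
# system name to descriptive fields; the framework prescribes none of
# this structure -- it only requires that the delta be visible.

def inventory_delta(prior: dict, current: dict) -> dict:
    """Return additions, removals, and modifications since last quarter."""
    added = sorted(current.keys() - prior.keys())
    removed = sorted(prior.keys() - current.keys())
    modified = sorted(
        name for name in current.keys() & prior.keys()
        if current[name] != prior[name]
    )
    return {"added": added, "removed": removed, "modified": modified}

q1 = {
    "invoice-ocr": {"owner": "Finance", "purpose": "AP intake"},
    "support-bot": {"owner": "CX", "purpose": "tier-1 triage"},
}
q2 = {
    "invoice-ocr": {"owner": "Finance", "purpose": "AP intake"},
    "support-bot": {"owner": "CX", "purpose": "tier-1 and tier-2 triage"},
    "sales-copilot": {"owner": "Sales", "purpose": "email drafting"},
}

delta = inventory_delta(q1, q2)
# An empty delta two quarters running is itself the red flag named
# above: either nothing changed, or no one is tracking.
```

The point of the sketch is that the artifact is cheap to produce once the inventory exists; a missing delta signals a missing inventory, not a tooling problem.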

Question 2: What Measurable Value Has AI Produced — in Business Terms, Not Technology Metrics?

Oversight domain: Value realization
What good looks like: Dollar figures tied to specific deployments — cost avoidance, revenue influenced, cycle time reduced, error rates changed. Not model accuracy. Not API call volume. Business outcomes.

RGP’s CFO survey (n=200, October-November 2025) finds only 14% of CFOs see measurable AI ROI. Pertama Partners (n=2,400+ initiatives, 2025) finds that organizations with pre-defined success metrics achieve 54% initiative success rates versus 12% without — the single largest controllable variable in AI project outcomes.

The quarterly cadence matters. The first answer may be “too early to tell.” By Q2, that answer signals a measurement vacuum — one of the six root-cause failure patterns behind the 42% abandonment rate (S&P Global 451 Research, n=1,006, October-November 2024). By Q3, absence of measurable value is a kill signal.

Question 3: What Are the Top Three AI Risks Right Now — and What Changed?

Oversight domain: Risk management
What good looks like: Management ranks current risks by severity and likelihood, with changes from the prior quarter. Categories should span operational risk (system failures, data quality), legal and regulatory risk (state AI laws, SEC disclosure requirements), people risk (skill gaps, adoption resistance), and reputational risk (customer-facing AI errors, bias).

EY’s analysis finds 36% of Fortune 100 companies now list AI as a separate 10-K risk factor — up from 14% the prior year. The topics disclosed: regulatory uncertainty, cybersecurity threats from AI-enabled attacks, operational disruptions, and AI hallucination risk (22% of Fortune 100 companies flag this specifically).

The quarterly delta is the critical element. Static risk reports are compliance theater. A board that tracks how risks evolve — which new risks emerged, which were mitigated, which grew — exercises the kind of ongoing monitoring that Caremark requires. Akin Gump’s analysis emphasizes that patchwork operational-level oversight is insufficient: boards must take “an enterprise-level view of AI risk.”

Question 4: How Are Employees Actually Using AI — and How Do They Feel About It?

Oversight domain: Workforce and adoption
What good looks like: Adoption metrics (active users versus licensed seats, frequency of use, use cases by department) paired with sentiment data (engagement scores, anxiety indicators, training completion rates).

BCG’s AI at Work survey (n=10,635, June 2025) identifies a governance paradox: high-anxiety employees use AI more frequently but resist organizational AI strategy more actively. Writer/Workplace Intelligence (n=1,600, March 2025) documents 31% of employees actively sabotaging AI strategy through metric tampering, low-quality outputs, and tool refusal. ActivTrak (n=163,638 workers, 443 million hours, 2025) finds that no work category decreased after AI deployment — meaning AI added work without subtracting any.

Directors need this data because the CEO’s view from the top and the frontline reality diverge sharply. HBR (n=100+ executives, November 2025) documents companies achieving 30-40% individual productivity gains with flat organizational performance — the gains disappear into coordination overhead and cultural friction. The quarterly check prevents the “everything is going great” narrative from persisting unchallenged.

Question 5: What Is Our Competitive Position — Are We Falling Behind or Pulling Ahead?

Oversight domain: Strategic positioning
What good looks like: Management benchmarks the company’s AI maturity against industry peers using external data — not internal self-assessment. Specific comparisons: deployment breadth (number of functions using AI), depth (workflow redesign versus superficial use), and spend as a percentage of revenue.

BCG’s Build for the Future report (n=1,250, September 2025) finds only 5% of companies are “future-built” for AI — achieving 1.7x revenue growth, 3.6x total shareholder return, and 2.7x ROI versus the 60% generating minimal gains. McKinsey’s State of AI (n=1,993, July 2025) finds 88% use AI but only 6% achieve material EBIT impact. The gap between adoption and value is where competitive distance opens.

For mid-market companies, the competitive question has a structural advantage: RSM’s survey (n=966, 2025) finds 91% of mid-market firms use AI but only 34% have a strategy. The 200-2,000 employee company that builds structured governance creates distance from peers operating on instinct.

Question 6: Is Our AI Spending Disciplined — and What Are the Kill Criteria?

Oversight domain: Capital allocation and portfolio discipline
What good looks like: A clear answer to three sub-questions: How much are we spending on AI this quarter (tools, people, training)? What is the expected return on each active initiative? What thresholds trigger a kill, pivot, or scale decision?

MIT NANDA (August 2025) documents a 380% average cost overrun from pilot to production. Pertama Partners finds the median abandoned project sinks $4.2 million over 11 months before someone calls it. The quarterly board question forces explicit kill criteria before sunk-cost psychology takes hold.

McKinsey’s high-performer analysis (n=1,993, July 2025) shows the 6% capturing real EBIT impact commit more than 20% of digital budgets to AI (4.9x the rate of others). But they also concentrate spending — fewer, deeper bets with rigorous measurement — rather than spreading thin. The board’s job is to ensure the company invests enough to matter and cuts fast enough to avoid the pilot trap.
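Kill criteria only work if they are committed to before sunk-cost psychology takes hold. A minimal sketch of the threshold logic the question asks for, per initiative (the specific thresholds and the function itself are illustrative assumptions, not figures from the research cited above):

```python
# Hypothetical pre-committed kill/pivot/scale rule for one AI
# initiative. A board would set its own thresholds in advance; these
# numbers are placeholders, not recommendations.

def quarterly_decision(quarters_live: int, measured_roi: float,
                       spend_overrun_pct: float) -> str:
    """Apply pre-committed thresholds; returns 'kill', 'pivot', or 'scale'."""
    if quarters_live >= 3 and measured_roi <= 0:
        return "kill"    # no measurable value by the third quarter
    if spend_overrun_pct > 100 and measured_roi <= 0:
        return "kill"    # runaway spend with nothing to show for it
    if measured_roi > 0 and spend_overrun_pct <= 25:
        return "scale"   # value proven and spend under control
    return "pivot"       # mixed signals: rescope before more money
```

Writing the rule down in advance is the governance act; the board then only has to verify each quarter that the rule was applied, not relitigate it.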

Question 7: Is Our Governance Keeping Pace with Our Deployment?

Oversight domain: Governance maturity
What good looks like: Management demonstrates that governance structures — policies, roles, training, incident response, regulatory monitoring — are advancing at the same speed as AI deployment. If the company added three new AI use cases this quarter, governance expanded to cover them.

The gap between deployment and governance is the fastest-growing source of D&O liability exposure (Harvard Law/Akin Gump, 2025-2026). NACD’s 2025 survey finds 62% of directors now set aside full-board agenda time for AI discussions, but only 27% have formally added AI governance to committee charters. SEC’s December 2025 Investor Advisory Committee recommendation calls for companies to disclose how they define AI, what board oversight mechanisms exist, and how AI deployment affects operations.

For a mid-market company without a General Counsel on staff, this question also serves as the regulatory early-warning system. Nearly 700 AI-related bills were introduced in states during 2024; 113 became law. The quarterly governance check surfaces whether anyone is tracking these requirements.

Key Data Points

| Metric | Finding | Source |
| --- | --- | --- |
| Boards discussing AI at every meeting | 26% | Protiviti/BoardProspects (n=772, Q4 2025) |
| High-ROI orgs discussing AI at every meeting | 63% vs. 13% for low-ROI | Protiviti/BoardProspects (n=772, Q4 2025) |
| Fortune 100 citing AI in board risk oversight | 48% (up from 16%) | EY Center for Board Matters, 2025 |
| Fortune 100 with AI in committee assignments | 40% (up from 11%) | EY Center for Board Matters, 2025 |
| Fortune 100 with board AI education disclosure | 12% | EY Center for Board Matters, 2025 |
| S&P 100 disclosing board AI oversight | 54% | Harvard Law School Forum, 2025 proxy |
| S&P 100 disclosing both oversight AND policy | 28% | Harvard Law School Forum, 2025 proxy |
| Directors setting aside AI agenda time | 62% | NACD 2025 Board Practices Survey |
| Boards with AI in committee charters | 27% | NACD 2025 Board Practices Survey |
| Companies deploying AI | 88% | Multiple industry surveys, 2025 |
| CFOs seeing measurable AI ROI | 14% | RGP (n=200, Oct-Nov 2025) |
| Initiatives with pre-defined metrics: success rate | 54% vs. 12% without | Pertama Partners (n=2,400+, 2025) |
| Confidence in AI integration (high vs. low ROI) | 95% vs. 33% | Protiviti/BoardProspects (n=772, Q4 2025) |
| Fortune 100 listing AI as separate 10-K risk factor | 36% (up from 14%) | EY Center for Board Matters, 2025 |
| U.S. investors expecting AI oversight disclosure | 65% | Sustainalytics, 2025 |

What This Means for Your Organization

The quarterly question set is designed to take 30 minutes of board time and produce 90 days of management accountability. It works because it is the same seven questions every quarter — progress becomes visible through the delta in answers, not through slide deck sophistication.

Most mid-market companies will find that the first quarterly session produces discomfort. Management may not have an AI inventory (Question 1). Value metrics may not exist (Question 2). Risk rankings may not have been formalized (Question 3). This discomfort is the point. The Protiviti data shows that the act of asking — consistently, quarterly, with the expectation of specific answers — is what separates high-ROI from low-ROI organizations. The 63% versus 13% gap is not about smarter directors. It is about more disciplined governance.

The fiduciary imperative is now clear enough that avoidance carries legal risk. But the stronger argument is economic: companies that treat AI as a standing board topic build the organizational muscle to capture value from it. Those that treat it as a periodic update build nothing.

If adapting these questions to your board’s specific context — industry dynamics, existing committee structure, current AI maturity — would be useful, I welcome that conversation: brandon@brandonsneider.com.

Sources

  1. Protiviti and BoardProspects, “Global Board Governance Survey” (n=772 board members and C-suite executives, Q4 2025, published March 18, 2026). Independent survey; credible. https://www.prnewswire.com/news-releases/only-26-of-directors-discuss-ai-at-every-board-meeting-global-survey-finds-302714274.html

  2. EY Center for Board Matters, “Cyber and AI Oversight Disclosures: What Companies Shared in 2025” (Fortune 100 proxy filings through July 31, 2025). Independent professional services firm analysis of SEC filings; highly credible. https://www.ey.com/en_us/board-matters/cyber-disclosure-trends

  3. Harvard Law School Forum on Corporate Governance, “US AI Oversight Through Three Lenses” (March 11, 2026). Academic/practitioner forum; highly credible. https://corpgov.law.harvard.edu/2026/03/11/us-ai-oversight-through-three-lenses-investor-expectations-the-sp-100-and-company-specific-analysis/

  4. EY Center for Board Matters, “Board Oversight of AI” (December 2025). Independent analysis; credible. https://www.ey.com/en_us/board-matters/board-oversight-of-ai

  5. NACD, “2025 Public Company Board Practices & Oversight Survey — AI Analysis” (2025). Leading governance organization; highly credible. https://www.nacdonline.org/all-governance/governance-resources/governance-surveys/surveys-benchmarking/2025-public-company-board-practices--oversight-survey/2025-board-practices-oversight-ai/

  6. Harvard Law School Forum on Corporate Governance, “Oversight in the AI Era: Understanding the Audit Committee’s Role” (July 12, 2025). Academic/practitioner forum; highly credible. https://corpgov.law.harvard.edu/2025/07/12/oversight-in-the-ai-era-understanding-the-audit-committees-role/

  7. Harvard Law School Forum on Corporate Governance, “How Boards Can Lead in a World Remade by AI” (February 19, 2026). Academic/practitioner forum; highly credible. https://corpgov.law.harvard.edu/2026/02/19/how-boards-can-lead-in-a-world-remade-by-ai/

  8. WilmerHale, “Board Oversight and Artificial Intelligence: Key Governance Priorities for 2026” (January 22, 2026). Major law firm client alert; credible. https://www.wilmerhale.com/en/insights/client-alerts/20260122-board-oversight-and-artificial-intelligence-key-governance-priorities-for-2026

  9. Akin Gump, “Does AI Care About Caremark? Applying the Core Principles of Corporate Governance to Artificial Intelligence Integration” (2025). Major law firm analysis; credible. https://www.akingump.com/en/insights/articles/does-ai-care-about-caremark-applying-the-core-principles-of-corporate-governance-to-artificial-intelligence-integration

  10. CIO.com, “AI Hits the Boardroom: What Directors Will Demand from CIOs in 2026” (2026). Trade publication analysis; credible. https://www.cio.com/article/4113214/ai-hits-the-boardroom-what-directors-will-demand-from-cios-in-2026.html

  11. Deloitte, “Board Practices Quarterly” (n=468 board members and C-suite, 57 countries, May-July 2024). Big Four survey; credible, note 2024 vintage. Referenced in multiple governance analyses.

  12. Pertama Partners (n=2,400+ enterprise AI initiatives, 2025). Independent consultancy; credible. Referenced in failure pattern library research.

  13. S&P Global 451 Research (n=1,006 IT/business leaders, October-November 2024). Independent research division; highly credible. Referenced in failure pattern library research.

  14. RGP (n=200 CFOs, October-November 2025). Independent consulting firm survey; credible, small sample. Referenced in capital allocation research.

  15. BCG, “AI at Work: Friend and Foe” (n=10,635, June 2025). Major consulting firm; credible. Referenced in culture research.


Brandon Sneider | brandon@brandonsneider.com | March 2026