AI Decision Rights: Who Gets to Say “Yes” and “No” to AI at Your Company
Brandon Sneider | March 2026
Executive Summary
- 72% of CEOs now call themselves the main AI decision maker — double last year’s figure. But at the department level, 52% of AI initiatives operate without formal approval or oversight. The gap between CEO ownership and operational reality is where AI investments go to die. (BCG AI Radar 2026, n=2,360; EY AI Survey 2026, n=500)
- The decision rights vacuum is the hidden prerequisite. Every AI playbook — governance, coaching, cognitive load management, talent retention — assumes someone has authority to execute it. At most mid-market companies, that someone does not exist. The absence of clear decision authority explains more pilot failures than bad technology does.
- Three governance models exist. One works. Centralized, decentralized, and hub-and-spoke. McKinsey (n=1,993) and IBM (n=600+) converge on the same structure: hub-and-spoke, which IBM finds delivers 36% higher ROI than decentralized alternatives. But the model only works when decision rights are explicitly assigned to specific roles for specific decision types.
- The kill decision is the governance stress test. EY finds only 50% of AI governance leaders have independent authority to halt a high-priority project. At most of the rest, stopping a failing initiative requires board or CEO approval — which means politically championed projects survive long past their evidence-based expiration date.
- 42% of companies now abandon the majority of their AI initiatives before production. Up from 17% the prior year. The organizations with lower failure rates share one trait: they considered compliance, risk, and data availability criteria during project selection — not after launch. (S&P Global 451 Research 2025, n=1,000+)
The Six-Way Turf War for AI Ownership
Harvard Business Review published a case study in March 2026 that captures the problem precisely. Toby Stuart, Helzel Chair at UC Berkeley Haas, describes a Fortune 500 insurance company where the CEO convened senior leadership to discuss AI ownership. Every executive at the table had a legitimate claim:
| Role | Jurisdiction Claim |
|---|---|
| CIO | Infrastructure, technical stewardship, security |
| COO | Operational workflows — AI agents execute business processes |
| CFO | P&L impact, investment budgeting, ROI measurement |
| CRO / GC | Risk management, autonomous decision-making exposure |
| CHRO | Labor relations — AI agents function as a novel worker category |
| CDO | Data governance — foundational to all AI functionality |
The meeting ended without resolution. Stuart’s diagnosis: the question “who owns AI?” is unanswerable. The right question is “who owns which AI-related decisions?”
This is the core insight. AI is not a technology initiative with a single owner. It is an operating capability that cuts across every function. The organizations that get decision rights wrong end up in one of two failure modes: paralysis (everyone needs approval from everyone) or chaos (everyone buys whatever they want).
Source credibility: Stuart’s HBR article (March 2026) applies sociologist Andrew Abbott’s jurisdictional framework to AI governance — academic rigor applied to a practical problem. No survey data, but the case study matches patterns across every major consulting survey.
The Data on Who Decides — and Who Should
CEO Ownership Is Surging
BCG’s AI Radar 2026 (n=2,360 executives including 640 CEOs, 16 markets, 9 industries, February 2026) documents a dramatic centralization of AI decision-making at the CEO level:
- 72% of CEOs identify as the main AI decision maker — doubled from the prior year
- 50% of CEOs believe their job stability depends on getting AI right
- 94% will continue investing at current or higher levels even without near-term payoff
- Companies plan to double AI spending to 1.7% of revenues in 2026
McKinsey’s State of AI survey (n=1,993, June–July 2025, 105 nations) corroborates: nearly 30% of organizations report the CEO is directly responsible for AI governance, double the prior year’s figure. On average, two leaders share AI governance responsibility — confirming that single-owner models are the exception, not the rule.
Source credibility: BCG AI Radar is a large-sample, multi-market executive survey — strong methodology. McKinsey’s sample is even larger and more geographically diverse. Both are independent research, not vendor-funded.
But the Middle Is Ungoverned
CEO engagement at the top does not mean governance exists in the middle. EY’s AI survey (n=500 US technology leaders, director-level and above, January–February 2026) reveals the operational reality:
- 52% of department-level AI initiatives operate without formal approval or oversight
- 85% prioritize speed-to-market over pre-launch vetting
- 78% acknowledge AI adoption outpaces risk management capabilities
- 45% experienced confirmed or suspected data leaks from unauthorized third-party AI tools
UpGuard’s shadow AI research (November 2025) adds the employee perspective: more than 80% of workers use unapproved AI tools. Half use them regularly. Only 33% say company-approved tools fully meet their needs. Marketing and sales departments report the highest unauthorized usage.
The pattern: the CEO has declared AI a priority and taken personal ownership of the strategy. But between the CEO’s strategic commitment and the department-level reality sits a governance vacuum where purchasing decisions, workflow changes, and risk exposure happen without clear authority or accountability.
Source credibility: The EY survey is limited to technology companies with 5,000+ employees — biased toward larger, more tech-forward organizations. The UpGuard figures come from a vendor report (UpGuard sells cyber risk monitoring). Both are directionally reliable, but neither represents the mid-market specifically.
Three Governance Operating Models
McKinsey, IBM, and Gartner describe three operating models for AI decision-making. The choice determines whether AI governance is real or performative.
Centralized
A single team — typically under the CIO or a Chief AI Officer — owns all AI strategy, purchasing, governance, and execution. This prevents fragmentation and works well for organizations early in their AI journey. The downside: the central team becomes a bottleneck. Large enterprises that route every AI request through a central queue report a nine-month average time from pilot to scale.
EY finds 70% of technology companies currently use a centralized model for AI approvals and guardrails. But the same data shows the model is under pressure — 52% of department-level initiatives already bypass it.
Decentralized
Each business unit runs its own AI program with minimal coordination. This promotes speed and domain-specific innovation. The predictable problems: duplicated infrastructure, inconsistent governance, incompatible tools, and shadow AI proliferation. Organizations with decentralized models consistently report higher failure rates.
Hub-and-Spoke (The Model That Works)
A lean central hub sets enterprise standards for governance, evaluation frameworks, guardrails, and cost controls. Business units operate as spokes, building and scaling use cases on shared infrastructure with local domain expertise.
IBM’s CAIO survey (n=600+, 2025) finds hub-and-spoke delivers 36% higher ROI than decentralized structures. McKinsey’s data shows this hybrid model is the most common approach for tech talent and solution adoption, while risk and compliance remain fully centralized.
For a 200–500 person company, hub-and-spoke translates to: a small central governance function (1–3 people) that sets standards and approves high-risk use cases, with department heads empowered to approve low-risk tools within those standards. The central function does not execute AI projects — it sets the rules and reviews the decisions.
The Decision Rights Matrix: Who Decides What
Stuart’s HBR framework provides the template. Adapted for a mid-market company, decision rights should be assigned by decision type, not by technology:
| Decision Type | Primary Authority | Consulted | Informed |
|---|---|---|---|
| AI strategy and annual budget | CEO / executive committee | CIO, CFO, department heads | Board |
| New tool approval (high-risk: customer data, financial, legal) | CIO + GC or compliance | Department head, CISO | CEO |
| New tool approval (low-risk: internal productivity) | Department head | CIO (standards check) | Finance |
| Workflow automation decisions | COO / process owner | CIO (integration), HR (job impact) | Affected teams |
| Pilot launch criteria | Business sponsor + CIO | Finance (budget), Legal (risk) | Executive committee |
| Scale / kill decision | Executive committee | Business sponsor, CIO, Finance | Board |
| Data access and model governance | CIO / CDO (if role exists) | Legal, CISO | Department heads |
| Vendor contract terms | Procurement + CIO | Legal, Finance, CISO | Department head |
| Employee AI usage policy | CHRO + CIO | Legal, department heads | All employees |
The critical row is scale / kill. This is the decision most organizations get wrong. EY’s data shows the split: 50% of organizations give their AI governance leaders independent authority to halt projects, while 42% require board or CEO approval. At mid-market scale, the answer should be explicit before the first pilot launches — not discovered during a failing project’s political crisis.
The Kill Decision: The Governance Stress Test
S&P Global’s 451 Research (n=1,000+, 2025) documents the cost of indecision: the share of companies abandoning most of their AI initiatives jumped from 17% to 42% year over year. The average organization scraps 46% of its proofs of concept before production.
The organizations with lower failure rates share a common trait: they established kill criteria before launch. They evaluated projects against compliance, risk, and data availability standards during selection — not after the money was spent and the executive sponsor was politically invested.
The practical kill framework for mid-market companies:
- Pre-launch gate: Every AI pilot gets written success criteria, a budget ceiling, and a sunset date (typically 90 days). The business sponsor signs off on what “failure” looks like before the project begins.
- Monthly review cadence: A 30-minute executive check-in compares actuals to pre-defined criteria. The question is not “how’s the project going?” — it is “have the success thresholds been met?”
- Kill authority: The executive committee — not the business sponsor — holds authority to continue or stop. This prevents the sunk-cost dynamic where the person who championed the project is also the person who decides whether it lives.
- Post-mortem requirement: Every killed project gets a one-page post-mortem documenting what the organization learned. This converts failure from political embarrassment into institutional knowledge.
The Chief AI Officer Question
Forrester predicts 60% of Fortune 100 companies will appoint a head of AI governance in 2026. Sony, Bank of America, and UBS have already done so. IAPP’s AI Governance Profession Report (n=670+ professionals, 45 countries, 2025) finds over 60% of Fortune 500 companies have established dedicated AI governance committees or Chief AI Governance Officers.
For a mid-market company, a full-time CAIO is rarely warranted. Stuart’s framework offers the right alternative: the coordination layer. Someone — typically the CIO, a fractional CAIO, or a senior VP with cross-functional authority — owns the decision rights map itself. Not the decisions. The map. Their job is to ensure every AI-related decision has a clear owner, to identify gaps in assignments, and to convene leadership when novel use cases create jurisdictional ambiguity.
The RSM Middle Market AI Survey (n=966, February–March 2025) quantifies the gap: 34% of mid-market firms cite absence of clear AI strategy as a top barrier to readiness. Only 37% claim a well-formulated approach. The decision rights vacuum is not a theoretical risk — it is the reported reality of one-third of mid-market companies.
Key Data Points
| Finding | Source | Sample / Date | Credibility |
|---|---|---|---|
| 72% of CEOs are main AI decision maker (doubled YoY) | BCG AI Radar 2026 | n=2,360, Feb 2026 | High — independent, large sample |
| 52% of department AI initiatives lack formal oversight | EY AI Survey 2026 | n=500, Jan–Feb 2026 | Moderate — tech sector only, 5K+ employees |
| Hub-and-spoke delivers 36% higher ROI | IBM CAIO Survey 2025 | n=600+ | Moderate — vendor research but large sample |
| 42% of companies abandon majority of AI initiatives | S&P Global 451 Research | n=1,000+, 2025 | High — independent research firm |
| 80%+ of workers use unapproved AI tools | UpGuard 2025 | Nov 2025 | Moderate — vendor report |
| 34% of mid-market firms lack clear AI strategy | RSM 2025 | n=966, Feb–Mar 2025 | High — mid-market specific, independent |
| 60% of Fortune 100 to appoint AI governance head | Forrester 2026 | Prediction | Moderate — analyst forecast |
| Only 50% of AI governance leaders can independently halt projects | EY AI Survey 2026 | n=500, Jan–Feb 2026 | Moderate — tech sector bias |
| CEO responsible for AI governance doubled to ~30% | McKinsey State of AI | n=1,993, Jun–Jul 2025 | High — largest sample, most geographically diverse |
What This Means for Your Organization
The decision rights question is the prerequisite for everything else. Governance policies, AI coaching programs, cognitive load management, talent retention strategies — none of these execute themselves. Someone needs the authority to approve the tool, fund the training, kill the failing pilot, and enforce the usage policy. If that authority is unclear, the playbook sits in a shared drive while departments buy whatever they want and managers coach with no mandate.
For a company with 200–500 employees, the decision rights framework does not require a Chief AI Officer or an AI governance committee with a charter and quarterly reporting. It requires a two-hour leadership session that produces a one-page decision rights matrix: who approves tool purchases above and below a dollar threshold, who sets the kill criteria for pilots, who owns the employee usage policy, and who resolves disputes when department heads disagree. The BCG data shows 72% of CEOs have taken personal ownership of AI strategy — the question is whether that ownership has been translated into operational authority below them.
The organizations scrapping 46% of their AI proofs of concept are not failing because the technology does not work. They are failing because no one established what success looks like before launch, no one has authority to stop a project the CEO mentioned in a board meeting, and no one knows whether the VP of Marketing’s new AI tool was approved by IT or purchased on a corporate card. The decision rights matrix is the twenty-minute document that prevents the twenty-month governance crisis.
If this raised questions about how decision authority is structured — or unstructured — at your organization, I would welcome the conversation: brandon@brandonsneider.com.
Sources
- BCG AI Radar 2026. “As AI Investments Surge, CEOs Take the Lead.” n=2,360 executives (640 CEOs), 16 markets, 9 industries. February 2026. https://www.bcg.com/publications/2026/as-ai-investments-surge-ceos-take-the-lead — Independent consulting research. High credibility.
- McKinsey & Company. “The State of AI: Global Survey.” n=1,993 participants, 105 nations. June–July 2025. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai — Independent, largest multi-market AI survey. High credibility.
- EY. “Autonomous AI Adoption Surges at Tech Companies as Oversight Falls Behind.” n=500 US technology leaders, director-level+. January–February 2026. https://www.ey.com/en_us/newsroom/2026/03/ey-survey-autonomous-ai-adoption-surges-at-tech-companies-as-oversight-falls-behind — Independent consulting firm. Moderate credibility — tech sector, 5K+ employees only.
- Stuart, Toby E. “Who in the C-Suite Should Own AI?” Harvard Business Review. March 12, 2026. https://hbr.org/2026/03/who-in-the-c-suite-should-own-ai — Academic/practitioner analysis. High credibility for framework; no primary survey data.
- S&P Global 451 Research. “Generative AI Shows Rapid Growth but Yields Mixed Results.” n=1,000+ respondents, North America and Europe. 2025. https://www.spglobal.com/market-intelligence/en/news-insights/research/2025/10/generative-ai-shows-rapid-growth-but-yields-mixed-results — Independent research firm. High credibility.
- IBM Institute for Business Value. CAIO Survey. n=600+ Chief AI Officers. 2025. — Vendor research (IBM sells AI platforms). Moderate credibility — large sample but potential bias.
- RSM US. “Middle Market Firms Rapidly Embracing Generative AI.” n=966 (762 US, 204 Canada). February–March 2025. https://rsmus.com/newsroom/2025/middle-market-firms-rapidly-embracing-generative-ai-but-expertise-gaps-pose-risks-rsm-2025-ai-survey.html — Independent, mid-market-specific. High credibility for this audience.
- UpGuard. Shadow AI Research. November 2025. — Vendor report (cyber risk monitoring). Moderate credibility — potential bias toward overstating risk.
- Forrester. 2026 Predictions for AI and Tech Leadership. December 2025. https://itwire.com/it-industry-news/strategy/forrester-unveils-2026-predictions-for-ai-and-tech-leadership.html — Analyst firm prediction. Moderate credibility.
- IAPP and Credo AI. “AI Governance Profession Report.” n=670+ professionals, 45 countries. 2025. https://iapp.org/resources/article/ai-governance-profession-report — Professional association research. High credibility.
- HBR. “Most AI Initiatives Fail. This 5-Part Framework Can Help.” November 2025. https://hbr.org/2025/11/most-ai-initiatives-fail-this-5-part-framework-can-help — Practitioner framework. High credibility for methodology.
Brandon Sneider | brandon@brandonsneider.com | March 2026