AI-Augmented Executive Decision-Making: Where AI Sharpens Strategic Judgment — and Where It Quietly Degrades It
Brandon Sneider | March 2026
Executive Summary
- More than half of C-suite executives now use AI to support strategic decisions, but only 41% of CEOs and CFOs report above-average trust in the outputs — Capgemini Research Institute (n=500 CXOs, January 2026). The gap between usage and trust is the tell: executives are experimenting faster than they are learning to verify.
- When AI is wrong, decision-makers follow it 80% of the time. Wharton’s cognitive surrender study (n=1,372, ~10,000 trials, 2025) found that participants adopted faulty AI guidance at a large effect size (Cohen’s h = 0.81), and that their confidence increased even when half the answers were deliberately wrong.
- AI performs at parity with experienced strategists on structured analysis, and actively degrades decisions on novel, ambiguous problems. The Harvard/BCG jagged frontier experiment (n=758 BCG consultants, September 2023) found AI users completed 12.2% more tasks at more than 40% higher quality inside the frontier, but were 19 percentage points less likely to reach correct solutions outside it.
- 47% of enterprise AI users made at least one major business decision based on hallucinated AI content in 2024 — a number that rose as models grew more fluent and therefore more convincingly wrong.
- The executives who capture value from AI-assisted decisions do three things differently: they match AI to the right decision type, they verify outputs against independent data, and they preserve the organizational judgment capacity that AI is quietly eroding.
The Decision Landscape: How Executives Are Actually Using AI
The Capgemini Research Institute surveyed 500 C-suite executives (including 100 CEOs) at organizations with more than $10 billion in revenue across 16 countries in August-September 2025. The findings paint a picture of rapid, unstructured adoption:
- One in six CXOs actively uses AI in strategic decision-making, a figure expected to more than double within three years
- More than half use AI to support decisions either “actively” or “selectively,” with another third experimenting
- 41% of CEOs are testing AI for decisions — more than any other C-suite role
- Only 1% of CXOs believe AI could make strategic decisions autonomously within three years
The pattern: executives are using AI as an input to strategic decisions right now — not waiting for anyone’s permission or framework. They are querying ChatGPT about competitive positioning, asking Claude to synthesize board materials, running scenario analyses through AI tools. The Conference Board’s 2026 C-Suite Outlook confirms AI has moved “from the margins of corporate strategy to the center of executive decision-making.”
This is happening without guardrails. Only 11% of CXOs publicly disclose their use of AI in business decisions, driven by reputational risk concerns. Two-thirds say clearer governance and accountability frameworks would help them use AI more effectively for strategic choices.
Where AI Sharpens Strategic Judgment
The evidence identifies four categories where AI demonstrably improves executive decision quality:
1. Data Synthesis at Scale
AI excels at processing volumes of information that exceed human cognitive capacity. When a CEO needs to synthesize earnings call transcripts, market reports, competitive filings, and internal performance data into a strategic picture, AI reduces what was a 4-6 hour task to 15-20 minutes. Capgemini’s survey found more than half of CXOs report significant improvements in speed, foresight, and creativity when AI handles synthesis.
The MIT Sloan Management Review/TCS research (2025-2026) introduces the concept of “intelligent choice architectures” — AI systems that redesign the decision environment itself, surfacing options executives would not have considered. As MIT Sloan research fellow Michael Schrage puts it: “ICAs flip the script. That’s not analytics, that’s architecture.” When AI takes on the cognitive load of structuring choices, executives become more capable of exercising meaningful judgment — not less.
2. Structured Strategic Analysis
The Harvard/BCG jagged frontier experiment (n=758 BCG consultants, September 2023) provides the most rigorous evidence. For tasks inside the AI capability frontier (market sizing, financial modeling, segmentation analysis, SWOT construction), consultants using GPT-4 completed 12.2% more tasks, finished them 25.1% faster, and produced results of more than 40% higher quality.
A separate study published in Strategy Science (Csaszar, Ketkar, and Kim, 2024) tested AI against entrepreneurs and experienced venture capital investors. Current LLMs generate and evaluate strategies “at a level comparable to entrepreneurs and investors” for structured analytical tasks. AI matches human quality while operating at dramatically higher speed.
3. Scenario Modeling and Stress Testing
AI-driven scenario planning systems can test 10,000 scenarios in 24 hours, a capacity that transforms strategic planning from an annual exercise into continuous monitoring. Companies report forecast accuracy improvements of 30-35%, and organizations implementing AI-driven competitive intelligence report an 85-95% reduction in manual research time.
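To make the mechanics concrete, here is a minimal sketch of the kind of scenario sweep these systems automate. Every distribution and parameter below is an illustrative assumption, not a figure from any source cited in this article.

```python
# Monte Carlo scenario sweep: a toy version of "test 10,000 scenarios."
# All distributions and parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(seed=42)
N = 10_000  # number of scenarios

# Hypothetical drivers of a market-entry P&L ($M, fractions)
demand = rng.lognormal(mean=np.log(50), sigma=0.4, size=N)  # revenue
margin = rng.normal(loc=0.18, scale=0.06, size=N)           # operating margin
fixed_cost = 6.0                                            # entry cost

profit = demand * margin - fixed_cost

print(f"median profit:  ${np.median(profit):6.1f}M")
print(f"5th percentile: ${np.percentile(profit, 5):6.1f}M")
print(f"P(loss):        {np.mean(profit < 0):.1%}")
```

The point is not the toy P&L; it is that the downside tail (the 5th percentile, the probability of loss) gets computed rather than intuited.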
4. Bias Detection in Group Dynamics
AI can surface analytical blind spots in executive teams. When used as a structured “red team” tool — generating counterarguments, identifying unstated assumptions, stress-testing consensus positions — AI adds a form of cognitive diversity that is difficult to maintain in hierarchical organizations where dissent carries career risk.
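A sketch of how that red-team pass can be made structural rather than ad hoc. The prompts simply restate the three challenges named above; `call_llm` is a hypothetical placeholder for whatever model client an organization actually uses.

```python
# Structured red-team pass over a draft strategy memo.
# `call_llm` is a hypothetical stand-in, not a real library API.
RED_TEAM_PROMPTS = [
    "List the three strongest counterarguments to this recommendation.",
    "Identify every unstated assumption this analysis depends on.",
    "Describe the scenario in which this consensus position fails worst.",
]

def red_team(memo: str, call_llm) -> list[str]:
    """Run each challenge prompt against the memo; return the critiques."""
    return [call_llm(f"{prompt}\n\n---\n{memo}") for prompt in RED_TEAM_PROMPTS]
```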
Where AI Degrades Strategic Judgment
The same evidence reveals four categories where AI actively harms decision quality:
1. Novel, Ambiguous, and Stakeholder-Complex Decisions
The jagged frontier experiment found that consultants using AI for tasks outside the capability boundary — problems requiring creativity in genuinely novel domains, ethical judgment, or stakeholder intuition — were 19 percentage points less likely to produce correct solutions than those working without AI. The AI confidently produced plausible-sounding analyses that were wrong in ways that required domain expertise to detect.
This is precisely the category that defines executive work. Market entry decisions in unfamiliar geographies, crisis response, board-level stakeholder navigation, M&A judgment calls — these are the decisions where AI’s limitations are most dangerous and hardest to spot.
2. The Cognitive Surrender Problem
Wharton researchers Shaw and Nave (2025) conducted three preregistered experiments with 1,372 participants across approximately 10,000 trials. Their findings are sobering for any executive using AI as a decision input:
- Participants consulted AI on more than 50% of trials regardless of whether responses were correct (54.4%) or incorrect (52.8%)
- When AI was wrong, participants followed faulty guidance 80% of the time
- Correct AI answers boosted accuracy 25 percentage points above baseline; wrong answers dropped accuracy 15 points below, a 40-point swing (see the break-even sketch after this list)
- On flawed trials, 73% of responses represented cognitive surrender (accepting without scrutiny); only 20% involved appropriate override
- Participants’ confidence increased with AI access, even when half the answers were deliberately wrong
- High-trust individuals had 3.5x greater odds of following faulty AI advice
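A back-of-envelope reading of those numbers, referenced in the list above: treating the study's +25/-15 point deltas as fixed and varying only the AI's accuracy (my simplification, not the authors'), consulting the AI is net positive only when it is right more than 37.5% of the time.

```python
# Back-of-envelope from the Wharton deltas: +25 points when the AI is
# right, -15 points when it is wrong (the -15 already reflects the 80%
# follow rate). Holding the deltas fixed is a simplifying assumption.
def net_accuracy_shift(p_ai_correct: float) -> float:
    return p_ai_correct * 25 + (1 - p_ai_correct) * (-15)

for p in (0.30, 0.375, 0.50, 0.90):
    print(f"AI correct {p:.1%} of the time -> net shift {net_accuracy_shift(p):+.1f} pts")
# Break-even at p = 15/40 = 37.5%: below that, consulting the AI hurts.
```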
The study distinguishes “cognitive offloading” (strategic delegation with verification) from “cognitive surrender” (accepting AI outputs without scrutiny). Most AI-assisted decisions in the experiment fell into the second category.
For executives, this is particularly dangerous. The CEO who asks AI to analyze a competitive threat and receives a confident, well-structured response faces an asymmetric verification problem: the output reads like something a McKinsey team would produce, but there is no engagement manager to push back, no analyst who checked the underlying data, no partner whose reputation depends on accuracy.
3. The Hallucination Tax on Strategic Decisions
MIT research (2025) found AI models are 34% more likely to use confident language — “definitely,” “certainly,” “without doubt” — when generating incorrect information than when generating correct information. The more wrong the AI is, the more certain it sounds.
The consequences are measured: 47% of enterprise AI users admitted to making at least one major business decision based on hallucinated content in 2024. Industry surveys estimate AI hallucinations cost businesses $67.4 billion in losses that year. In response, 76% of enterprises now include human-in-the-loop processes — but the Wharton data suggests those loops fail when humans defer to AI confidence.
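One cheap, imperfect countermeasure targets exactly that failure mode: treat certainty language in AI output as a trigger for review rather than a signal of reliability. A minimal sketch follows; the marker list is illustrative and is not the instrument used in the MIT research.

```python
# Crude linguistic tripwire: count certainty markers in AI output and
# flag passages for extra scrutiny. Marker list is illustrative only.
import re

CERTAINTY_MARKERS = r"\b(definitely|certainly|without doubt|undoubtedly|clearly|unquestionably)\b"

def flag_overconfidence(text: str, threshold: int = 2) -> bool:
    """Return True if the text leans on certainty language enough to warrant review."""
    hits = re.findall(CERTAINTY_MARKERS, text, flags=re.IGNORECASE)
    return len(hits) >= threshold

print(flag_overconfidence(
    "This acquisition will definitely succeed; the synergies are certainly real."
))  # True
```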
The Krafton case (March 2026) offers a concrete cautionary tale at the executive level. When Krafton CEO Changhan Kim faced a $250 million earnout obligation, he bypassed his legal team and consulted ChatGPT for strategies to avoid payment. ChatGPT advised forming a task force to renegotiate and suggested “locking down” publishing rights. A Delaware judge found the resulting actions improper and ordered the ousted Unknown Worlds CEO reinstated; Krafton now faces expanded litigation. The CEO who deferred to AI over his own lawyers produced a textbook case of how AI-assisted executive judgment fails catastrophically in novel, high-stakes contexts.
4. Judgment Atrophy Across the Organization
HBR (February 2026) identifies a structural risk: AI simultaneously increases the need for judgment while eroding the experiences that develop it. Five forms of organizational judgment are at stake:
| Judgment Type | What It Requires | How AI Erodes It |
|---|---|---|
| Evaluative | Assessing quality and appropriateness | AI-generated output appears polished regardless of substance |
| Contextual | Knowing when rules require exceptions | AI applies rules uniformly, misses context |
| Tradeoff | Weighing competing objectives | AI optimizes for stated criteria, misses unstated ones |
| Anticipatory | Predicting second-order consequences | AI extrapolates from patterns, misses disruptions |
| Ownership | Taking personal responsibility under uncertainty | Diffused accountability when “the AI recommended it” |
Junior employees who once developed judgment through messy, hands-on work now receive AI-generated drafts that look polished but lack substance — what the article terms “workslop.” Mid-level managers oversee work they never learned to perform themselves. Leadership pipelines thin as fewer people develop decision-making capabilities under uncertainty.
For the CEO relying on AI-assisted strategic analysis, the downstream effect is corrosive: the people who should be pushing back on AI-generated strategy recommendations may no longer have the judgment to know when to push back.
Key Data Points
| Metric | Finding | Source |
|---|---|---|
| CXOs using AI for strategic decisions | >50% actively or selectively | Capgemini (n=500, Jan 2026) |
| CEO/CFO trust in AI for decisions | Only 41% report above-average trust | Capgemini (n=500, Jan 2026) |
| Follow rate for wrong AI guidance | 80% | Wharton (n=1,372, 2025) |
| Accuracy swing: correct vs. incorrect AI | 40-point swing | Wharton (n=1,372, 2025) |
| AI quality improvement (inside frontier) | +40% higher quality | HBS/BCG (n=758, Sep 2023) |
| AI accuracy degradation (outside frontier) | 19 points less likely to reach correct solutions | HBS/BCG (n=758, Sep 2023) |
| Decisions based on hallucinated content | 47% of enterprise users in 2024 | Industry surveys |
| AI confidence when wrong | 34% more confident language | MIT (2025) |
| CXOs wanting governance frameworks | 67% | Capgemini (n=500, Jan 2026) |
| CXOs who believe AI could decide strategically on its own within three years | 1% | Capgemini (n=500, Jan 2026) |
What This Means for Your Organization
The evidence points to a clear operating principle: AI is a powerful tool for decision support on structured analytical problems, and a dangerous one for decision delegation on novel strategic questions. The executives capturing value treat AI the way a good CEO treats a brilliant but inexperienced analyst — valuable for synthesis and pattern recognition, unreliable for judgment calls that require context, stakeholder awareness, and ethical reasoning.
Three practices separate executives who gain from AI-augmented decisions from those who suffer from them:
Match AI to decision type. Before querying AI on a strategic question, categorize the decision. Is this a data synthesis problem (AI excels), a structured analysis (AI matches human quality at higher speed), or a novel judgment call with ambiguous data and stakeholder complexity (AI degrades accuracy by 19 points)? The jagged frontier is not an abstraction: it is the operating boundary every executive needs to internalize.
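One way to make that categorization operational is to write it down as an explicit routing rule rather than a habit. The sketch below uses the four categories from this article; the policy wording is illustrative, not drawn from the studies.

```python
# Triage rule from the jagged-frontier evidence: route the decision by
# type before asking AI. The mapping is a schematic, not a validated
# instrument; unknown types default to the most conservative policy.
DECISION_POLICY = {
    "data_synthesis":      "delegate to AI, spot-check sources",
    "structured_analysis": "AI drafts, human verifies assumptions",
    "scenario_modeling":   "AI sweeps, human picks the scenarios that matter",
    "novel_judgment":      "human decides; AI limited to red-team duty",
}

def ai_role(decision_type: str) -> str:
    return DECISION_POLICY.get(decision_type, DECISION_POLICY["novel_judgment"])

print(ai_role("novel_judgment"))  # human decides; AI limited to red-team duty
```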
Build verification into the workflow. The Wharton data is unambiguous: humans follow wrong AI guidance 80% of the time because the output sounds authoritative. The antidote is structural, not motivational. Require that AI-generated strategic analyses include explicit assumption lists, identify what data the analysis does not have access to, and route through at least one person whose job is to challenge the conclusion — not just review the formatting.
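A minimal sketch of what “structural, not motivational” can mean in practice: make the decision record itself refuse to advance until the assumption list, the data-gap list, and a named challenger are all present. Field names are illustrative.

```python
# Structural verification gate: an AI analysis cannot be accepted until
# assumptions, data gaps, and a named challenger are all recorded.
from dataclasses import dataclass, field

@dataclass
class AIAnalysis:
    conclusion: str
    assumptions: list[str] = field(default_factory=list)
    data_gaps: list[str] = field(default_factory=list)  # what the AI could not see
    challenger: str = ""  # person whose job is to attack the conclusion

    def ready_for_decision(self) -> bool:
        return bool(self.assumptions and self.data_gaps and self.challenger)

memo = AIAnalysis(conclusion="Enter the Brazilian market in Q3.")
assert not memo.ready_for_decision()  # blocked until the gate is satisfied
```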
Protect organizational judgment capacity. If AI is handling the analytical work that once built judgment in rising leaders, the organization must deliberately replace that development pathway. This is not a training problem — it is a structural redesign of how people develop the capacity to make decisions under uncertainty.
If the intersection of AI capability and strategic decision quality raised questions specific to your organization’s context, I’d welcome that conversation — brandon@brandonsneider.com.
Sources
- Capgemini Research Institute, “Inside the C-Suite: How AI is Quietly Reshaping Executive Decisions” (n=500 CXOs including 100 CEOs, $10B+ revenue organizations, 16 countries, survey conducted August-September 2025, published January 15, 2026). HIGH — independent consulting research with rigorous methodology and large enterprise focus. https://www.capgemini.com/insights/research-library/ai-and-decision-making/
- Shaw & Nave, “Thinking — Fast, Slow, and Artificial: How AI is Reshaping Human Reasoning and the Rise of Cognitive Surrender,” Wharton School (n=1,372 participants, ~10,000 trials, three preregistered experiments, 2025). HIGH — peer-reviewed academic research with preregistered methodology and large trial count. https://www.thealgorithmicbridge.com/p/a-new-wharton-study-on-ai-warns-of
- Dell’Acqua et al., “Navigating the Jagged Technological Frontier,” Harvard Business School Working Paper (n=758 BCG consultants, 18 consulting tasks, September 2023). HIGH — field experiment with real consultants performing real tasks; the gold standard for AI productivity research. https://www.hbs.edu/faculty/Pages/item.aspx?num=64700
- Csaszar, Ketkar, and Kim, “Artificial Intelligence and Strategic Decision-Making: Evidence from Entrepreneurs and Investors,” Strategy Science (2024). HIGH — peer-reviewed academic journal with real-world experimental contexts. https://pubsonline.informs.org/doi/10.1287/stsc.2024.0190
- Conference Board, “AI and the C-Suite: Implications for CEO Strategy in 2026” (2026 C-Suite Outlook Survey). HIGH — independent, long-running survey with established methodology. https://www.conference-board.org/research/ced-policy-backgrounders/ai-and-the-c-suite-implications-for-ceo-strategy-in-2026
- MIT Sloan Management Review/TCS, “Winning With Intelligent Choice Architectures” (year-long research, six major industries, interviews with executives at Mayo Clinic, Sanofi, Walmart, Meta, Mastercard, 2025-2026). HIGH — independent academic-industry collaboration. https://sloanreview.mit.edu/projects/winning-with-intelligent-choice-architectures/
- Duncan, “How Do Workers Develop Good Judgment in the AI Era?,” Harvard Business Review (February 2026). MEDIUM-HIGH — practitioner-oriented synthesis without original quantitative data but grounded in organizational research.
- Krafton v. Unknown Worlds Entertainment, Delaware Court of Chancery (March 2026). HIGH — primary legal source; court ruling on record. https://fortune.com/2026/03/17/krafton-subnautica-chatgpt-delaware-court-ruling-ceo-reinstated/
- AllAboutAI/Industry surveys on AI hallucination costs ($67.4B estimated losses, 47% enterprise decision impact, 2024-2025). MEDIUM — aggregated industry data; methodology not independently verified.
- MIT AI confidence-language research (34% more confident language when incorrect, 2025). MEDIUM-HIGH — academic research; specific methodology details not fully verified in secondary sources.
Brandon Sneider | brandon@brandonsneider.com | March 2026