The AI Business Case That Prevents the $4.2M Write-Off: Seven Mandatory Fields Before the CEO Signs the Check
Brandon Sneider | March 2026
Executive Summary
- Projects with pre-defined success metrics at the approval stage achieve a 54% success rate versus 12% without them — a 4.5x difference created entirely by the discipline of documentation before deployment (Pertama Partners, n=2,400+ enterprise AI initiatives, 2025-2026).
- The median abandoned AI project consumes 11 months and $4.2 million before termination. Most of these failures were diagnosable at the approval stage: 73% lacked clear executive alignment on success metrics, 68% underinvested in data foundations, and 61% treated AI as an IT project rather than a business transformation.
- PwC’s 29th Global CEO Survey (n=4,454 CEOs, 95 countries, January 2026) finds 56% of CEOs report zero revenue or cost benefit from AI investments. The common thread: no pre-approval framework forced the hard questions before the money moved.
- A two-page business case with seven mandatory fields (problem economics, data readiness certification, workflow redesign commitment, an executive sponsor named by title, success metrics with kill criteria, total cost of ownership, and a 90-day checkpoint schedule) would have prevented most of the abandonments behind the 42% rate S&P Global documented in 2025.
The Approval-Stage Problem
The data on AI project failure is no longer ambiguous. RAND Corporation finds an 80% failure rate for AI projects — double the rate for non-AI IT projects. S&P Global’s Voice of the Enterprise survey (n=1,006 IT and business leaders, North America and Europe, October-November 2024) found that 42% of companies abandoned the majority of their AI initiatives before reaching production, up from 17% the prior year. Organizations reported scrapping an average of 46% of their proofs-of-concept.
The instinct is to blame technology, vendors, or talent. The data points elsewhere.
Pertama Partners’ analysis of 2,400+ enterprise AI initiatives tracked through 2025-2026 reveals that successful projects do not spend less — they spend differently. Successful projects invest 47% of their budget in foundations (data readiness, workflow redesign, change management) versus 18% in failed projects. The difference is visible at the approval stage, not at deployment.
The business case document is where this discipline either exists or does not. Most organizations approve AI projects with the same documentation they use for a software license: a vendor quote, a features list, and a vague promise of efficiency. That approval process is the root cause of the 11-month, $4.2M failure pattern.
What the Failure Data Tells Us About Approvals
Pertama Partners breaks down AI project outcomes into four categories. The financial profile of each reveals where the approval process failed:
| Outcome | % of Projects | Avg. Cost | Avg. Value | ROI |
|---|---|---|---|---|
| Abandoned before production | 34% | $4.2M | $0 | -100% |
| Completed but failed | 28% | $6.8M | $1.9M | -72% |
| Completed, cost-unjustified | 18% | $8.4M | $3.1M | -63% |
| Successful | 20% | $5.1M | $14.7M | +188% |
Source: Pertama Partners AI Project Failure Statistics 2026. Mid-market projects typically run at 15-30% of these dollar amounts, but the ratios hold.
The worst financial outcome is not killing a project; it is completing one that cannot justify its cost. Abandoned projects cost $4.2M. Completed failures cost $6.8M and return only $1.9M. The discipline to kill early saves $2.6M in spend per project. But the discipline to approve correctly saves the entire investment.
Cross-referencing the five root causes RAND Corporation identifies with the approval-stage data produces a clear pattern: every root cause is diagnosable before deployment begins.
| RAND Root Cause | Present in % of Failures | Approval-Stage Diagnostic |
|---|---|---|
| Leadership misalignment | 73% | No measurable success criteria defined |
| Data quality gaps | 71% | No formal data readiness assessment conducted |
| Technology over problem-solving | — | Business case leads with solution, not problem economics |
| Infrastructure underinvestment | — | Budget allocates <20% to non-license costs |
| Loss of executive sponsorship | 56% lose sponsor within 6 months | No named executive owner with explicit time commitment |
The implication is direct: a structured approval document that forces answers to these five questions before the first dollar moves would eliminate the majority of preventable failures.
The Seven Mandatory Fields
The following template distills the failure data into seven fields that must be completed — and reviewed — before an AI initiative receives funding. Each field maps to a specific failure mode. The business case should fit on two pages. If it requires more, the project scope is too broad.
Field 1: Problem Economics (Not Solution Features)
What it requires: A quantified description of the business problem being solved, expressed in dollars and hours — not in AI capabilities.
What it prevents: RAND’s “technology obsession over problem-solving” failure mode. MIT NANDA’s research (150 interviews, 350-person survey, 300 public deployments, July 2025) finds that internal AI builds succeed only 22% of the time versus 67% for purchased vendor solutions — largely because internal builds start with “what can AI do?” instead of “what is this costing us?”
Format:
| Element | Requirement |
|---|---|
| Process being improved | Named workflow, not department |
| Current cost per transaction (fully loaded) | Labor + tools + error correction + management overhead |
| Current volume | Transactions per week/month |
| Current error/rework rate | Percentage requiring correction |
| Annual cost of the status quo | Weekly volume × cost per transaction × 52 (or monthly volume × 12) |
If the team cannot fill this table, they do not understand the problem well enough to deploy AI against it.
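The table reduces to one calculation, and running it in a script or spreadsheet before the approval meeting removes any ambiguity about the baseline. A minimal sketch in Python; every figure below is a hypothetical placeholder, not a benchmark from the failure data:

```python
# Hypothetical problem-economics worksheet for Field 1.
# All figures are illustrative placeholders, not benchmarks.

labor_cost_per_txn = 12.50     # fully loaded labor, $ per transaction
tooling_cost_per_txn = 1.75    # software and infrastructure share
error_rate = 0.08              # 8% of transactions require rework
rework_cost_per_error = 22.00  # cost to correct one failed transaction
mgmt_overhead_per_txn = 0.90   # supervision and QA share

# Fully loaded cost per transaction, including expected rework.
cost_per_txn = (
    labor_cost_per_txn
    + tooling_cost_per_txn
    + error_rate * rework_cost_per_error
    + mgmt_overhead_per_txn
)

weekly_volume = 1_200  # transactions per week
annual_cost_of_status_quo = cost_per_txn * weekly_volume * 52

print(f"Fully loaded cost per transaction: ${cost_per_txn:,.2f}")
print(f"Annual cost of the status quo:     ${annual_cost_of_status_quo:,.0f}")
```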
Field 2: Data Readiness Certification
What it requires: A signed assessment — not a verbal assurance — that the data required for this use case exists, is accessible, and meets minimum quality thresholds.
What it prevents: Gartner predicts 60% of AI projects will be abandoned through 2026 due to lack of AI-ready data. A Q3 2024 Gartner survey of 248 data management leaders found 63% of organizations either do not have or are unsure whether they have the right data management practices for AI. RSM’s mid-market survey (n=966) found 41% of companies cite data quality as their top barrier.
Format:
| Data Element | Available? | Format Clean? | Access Granted? | Owner |
|---|---|---|---|---|
| [specific to use case] | Y/N | Y/N | Y/N | Named person |
If any row shows “N” in the first three columns, the project does not proceed until the gap is closed. The certification is signed by the data owner, not the project sponsor; this guards against the optimism bias behind the data quality gaps present in 71% of failures.
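Because the rule is mechanical (any “N” blocks approval), it can be checked programmatically rather than argued in a meeting. A minimal sketch, assuming a hypothetical row layout that mirrors the certification table:

```python
# Hypothetical data-readiness gate for Field 2. Each row mirrors the
# certification table: (element, available, format clean, access granted, owner).
rows = [
    ("invoice line items", True, True, True, "A. Chen"),
    ("vendor master records", True, False, True, "A. Chen"),  # format not clean
    ("payment history", True, True, True, "R. Patel"),
]

# Any N in the first three columns blocks the project until the gap closes.
gaps = [r for r in rows if not (r[1] and r[2] and r[3])]

if gaps:
    for element, *_ in gaps:
        print(f"BLOCKED: close readiness gap for '{element}' before approval")
else:
    print("Data readiness certified: all elements available, clean, accessible")
```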
Field 3: Workflow Redesign Commitment
What it requires: A documented plan for how the workflow will change — not just what tool will be added to the existing workflow.
What it prevents: McKinsey’s data shows workflow redesign is the #1 predictor of AI value, with redesigned workflows 3.6x more likely to appear in high-performing organizations. BCG’s Build for the Future report (September 2025) finds that the 5% of “future-built” companies capturing value at scale allocate 70% of their AI budget to people and processes, not technology. Organizations that bolt AI onto existing workflows see the productivity gains evaporate — the Faros AI finding (n=3,000+ engineering teams, 2025) that 98% more pull requests produced zero delivery improvement is the canonical example.
Format:
| Current State | AI-Augmented State | Who Redesigns | Timeline |
|---|---|---|---|
| Step-by-step current workflow | Step-by-step revised workflow | Named business + technology pair | Weeks, not months |
The “who redesigns” column is the critical field. If the answer is “IT” or “the vendor,” the project will fail. Workflow redesign requires the business process owner and a technology counterpart working as a pair — the “two-in-the-box” model.
Field 4: Named Executive Sponsor with Time Commitment
What it requires: A specific executive, identified by name and title, who commits to a defined cadence of involvement.
What it prevents: Projects with sustained executive sponsorship achieve a 68% success rate; those that lose it fall to 11% (Pertama Partners, 2025-2026). The median time to sponsorship loss is six months — the period when initial enthusiasm fades and hard organizational change begins. The 56% sponsorship dropout rate is the single most reliable predictor of AI project death.
Format:
| Field | Requirement |
|---|---|
| Executive sponsor | Name, title |
| Review cadence | Minimum: monthly 30-minute review with project lead |
| Escalation authority | Can this person reallocate budget, reassign staff, or kill the project? |
| Commitment period | Minimum: through the 6-month evaluation gate |
If the named sponsor cannot commit to monthly reviews, the project lacks the organizational authority to succeed. This is not optional oversight — it is the mechanism that prevents the 56% dropout rate from killing the initiative.
Field 5: Success Metrics with Pre-Defined Kill Criteria
What it requires: Three to five measurable outcomes, with specific thresholds that trigger continuation, pivot, or termination.
What it prevents: The 54% vs. 12% success rate gap. Projects that define success after deployment — Pertama Partners finds the average retroactive metric is added 8 months post-approval — are 4.5x more likely to fail. Without pre-defined kill criteria, the sunk-cost fallacy extends projects past the point of recovery.
Format:
| Metric | Baseline (Today) | 90-Day Target | 6-Month Target | Kill Threshold |
|---|---|---|---|---|
| [e.g., cost per invoice] | $18.50 | $14.00 | $9.00 | No improvement from baseline |
| [e.g., processing time] | 4.2 hours | 2.5 hours | 1.0 hour | <15% improvement at 90 days |
| [e.g., error rate] | 8% | 6% | 3% | Error rate increases |
| User adoption rate | 0% | 40% | 65% | <25% at 90 days |
The kill threshold column is the field most organizations resist completing — and the one that prevents the 11-month sunk-cost failure pattern. If the team cannot articulate what failure looks like before starting, they do not have a business case. They have a hope.
Field 6: Total Cost of Ownership (Not License Cost)
What it requires: A complete cost model using the 2.5x first-year multiplier, with all seven cost layers itemized.
What it prevents: The 40-60% cost underestimation that Mavvrik’s 2025 State of AI Cost Management research documents. License fees represent 40-60% of actual first-year costs. Integration, training, change management, security review, and the productivity dip during adoption push total cost of ownership to 2-3x the vendor quote.
Format:
| Cost Layer | Estimate | When It Hits |
|---|---|---|
| Software licensing (annual) | $ | Month 1 |
| Integration and configuration | $ | Months 1-3 |
| Security review and compliance | $ | Months 1-3 |
| Training and change management | $ | Months 1-6 |
| Productivity dip (4-8 weeks at 5-15% output reduction) | $ | Months 1-3 |
| Ongoing support and optimization (15-20% of license/year) | $ | Ongoing |
| Data preparation and governance | $ | Months 1-6 |
| Year 1 Total | $ | — |
Three-scenario requirement: The business case must present conservative (40% of projected benefits materialize), moderate (70%), and optimistic (100%) scenarios. If the project does not justify approval under the conservative scenario, it should not be approved. Pertama Partners’ data shows the conservative scenario is the one that materializes in 80% of cases.
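The scenario math is simple enough to encode directly, which keeps the conservative case from being quietly dropped. A minimal sketch, assuming the 2.5x first-year multiplier from the cost model above; the license quote and benefit projection are illustrative placeholders:

```python
# Hypothetical three-scenario check for Field 6.
# License quote and projected benefit are illustrative placeholders.

license_quote = 120_000  # vendor's annual quote, $
tco_multiplier = 2.5     # first-year multiplier from the cost model
year_one_cost = license_quote * tco_multiplier

projected_annual_benefit = 450_000  # team's full-benefit projection, $

# Benefit realization rates for each required scenario.
scenarios = {"conservative": 0.40, "moderate": 0.70, "optimistic": 1.00}

for name, realization in scenarios.items():
    benefit = projected_annual_benefit * realization
    roi = (benefit - year_one_cost) / year_one_cost
    print(f"{name:>12}: benefit ${benefit:,.0f}, ROI {roi:+.0%}")

# Approval rule: the project must clear the bar in the conservative case.
conservative_roi = (projected_annual_benefit * 0.40 - year_one_cost) / year_one_cost
print("APPROVE" if conservative_roi > 0 else "DO NOT APPROVE")
```

With these placeholder numbers the conservative scenario returns -40% ROI and the sketch prints DO NOT APPROVE, even though the optimistic case shows +50%. That asymmetry is the point of the rule.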
Field 7: The 90-Day Checkpoint Schedule
What it requires: A pre-committed calendar of evaluation points with defined decision outcomes at each gate.
What it prevents: The 38% of failures that occur in months 3-9 — the period when projects drift without formal evaluation. Build-measure-learn loops should occur every two weeks with a formal gate at 90 days and 6 months.
Format:
| Checkpoint | Date | Decision Options | Who Decides |
|---|---|---|---|
| Week 2: Pilot launch confirmation | [date] | Proceed / Delay with specific fix | Project lead |
| Week 4: Early adoption review | [date] | Proceed / Adjust training / Expand scope | Project lead + sponsor |
| Week 8: Mid-pilot assessment | [date] | Proceed / Narrow scope / Pause | Sponsor |
| Day 90: Formal gate review | [date] | Continue / Pivot / Kill | Sponsor + CFO |
| Month 6: Business impact review | [date] | Scale / Maintain / Wind down | Executive team |
The formal 90-day gate is the most important checkpoint. At this point, compare actual results against the kill thresholds in Field 5. If two or more metrics breach their kill thresholds, the default action is termination, not “give it more time.” The burden of proof shifts to continuation, not cancellation.
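Encoding the kill thresholds before launch turns the gate from a negotiation into a lookup. A minimal sketch of the Day-90 check, using hypothetical actuals against the example thresholds from the Field 5 table:

```python
# Hypothetical Day-90 gate for Field 7, using Field 5's kill thresholds.
# Each metric carries a predicate that returns True when its kill
# threshold is breached. All figures are illustrative.
metrics = [
    # (name, baseline, actual at day 90, kill predicate)
    ("cost per invoice ($)", 18.50, 18.20,
     lambda base, now: now >= base),                       # no improvement
    ("processing time (hrs)", 4.2, 3.9,
     lambda base, now: (base - now) / base < 0.15),        # <15% improvement
    ("error rate", 0.08, 0.09,
     lambda base, now: now > base),                        # error rate increases
    ("adoption rate", 0.0, 0.22,
     lambda base, now: now < 0.25),                        # <25% at 90 days
]

breached = [name for name, base, now, killed in metrics if killed(base, now)]

# Default action: two or more breaches terminate the project.
if len(breached) >= 2:
    print(f"KILL (default action): thresholds breached on {breached}")
else:
    print(f"CONTINUE to 6-month gate; breaches so far: {breached or 'none'}")
```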
Key Data Points
| Metric | Data Point | Source |
|---|---|---|
| Success rate with pre-defined metrics vs. without | 54% vs. 12% (4.5x) | Pertama Partners (n=2,400+, 2025-2026) |
| AI project failure rate | 80% (2x non-AI IT projects) | RAND Corporation, 2025 |
| Companies abandoning majority of AI initiatives | 42% (up from 17% YoY) | S&P Global (n=1,006, Oct-Nov 2024) |
| CEOs reporting zero AI revenue/cost benefit | 56% | PwC (n=4,454, Jan 2026) |
| Median cost of abandoned AI project | $4.2M (11-month median) | Pertama Partners, 2026 |
| Success with sustained executive sponsorship | 68% vs. 11% without | Pertama Partners, 2025-2026 |
| Success with formal data readiness assessment | 47% vs. 14% without | Pertama Partners, 2025-2026 |
| AI projects abandoned due to data issues | 60% predicted through 2026 | Gartner (n=248 data leaders, Q3 2024) |
| Organizations unsure of data readiness for AI | 63% | Gartner (n=248, Q3 2024) |
| Mid-market citing data quality as top barrier | 41% | RSM (n=966) |
| Successful projects’ foundation investment | 47% of budget vs. 18% in failures | Pertama Partners, 2026 |
| Internal AI builds vs. purchased solutions | 22% vs. 67% success | MIT NANDA (n=150 interviews, 350 survey, 300 deployments, July 2025) |
What This Means for Your Organization
The business case template is not a bureaucratic exercise. It is the single highest-leverage intervention available to a mid-market CEO approving AI investments. The 4.5x success rate difference between projects with pre-defined metrics and those without is the clearest signal in the entire AI adoption evidence base. No tool selection, no vendor negotiation, and no training program comes close to that impact.
For a 200-500 person company, the practical application is direct. Before approving any AI initiative — whether a $30,000 pilot or a $300,000 platform deployment — require the seven fields completed on two pages. The person presenting the business case should be the business process owner, not the IT team and not the vendor. If the problem economics cannot be quantified, if the data readiness cannot be certified, if no executive will commit to monthly reviews, the project is not ready. That is not a rejection. That is a diagnosis of what needs to happen first.
The organizations capturing value from AI — the 5% that BCG identifies as “future-built,” the 20% that Pertama Partners documents as successful — share one trait: they made the hard decisions before spending the money, not after. The business case is where those decisions live. If this raised questions about how to structure the approval process for AI investments at your organization, I’d welcome the conversation — brandon@brandonsneider.com.
Sources
- Pertama Partners, “AI Project Failure Statistics 2026” (February 2026). Analysis of 2,400+ enterprise AI initiatives. Primary source for success rate differentials, cost data, and timeline data. Credibility: independent advisory firm; large dataset but methodology details limited. Cross-validated against RAND and S&P Global findings.
- Pertama Partners, “AI Business Case Template” (2025-2026). Financial projection methodology developed across advisory engagements in banking, insurance, telecommunications, and professional services. Credibility: practitioner-derived framework; not peer-reviewed but grounded in engagement data.
- RAND Corporation, “The Root Causes of Failure for Artificial Intelligence Projects and How They Can Succeed” (RRA2680-1, 2024). Qualitative study interviewing experienced data scientists and engineers. Credibility: high; independent nonprofit research institution with peer-reviewed methodology. Limitation: interview-based, no large-n quantitative analysis.
- S&P Global Market Intelligence, “Voice of the Enterprise: AI & Machine Learning, Use Cases 2025” (March 2025). n=1,006 IT and business leaders, North America and Europe, October-November 2024. Margin of error +/- 3 pts at 95% confidence. Credibility: high; established market intelligence firm with rigorous survey methodology.
- PwC, 29th Annual Global CEO Survey (January 2026). n=4,454 CEOs across 95 countries, surveyed September-November 2025. Credibility: high; large sample, established methodology, annual longitudinal survey.
- MIT NANDA, “The GenAI Divide: State of AI in Business 2025” (July 2025). 150 interviews, 350-person survey, analysis of 300 public AI deployments. Credibility: high; MIT institutional research. The buy-versus-build finding (67% vs. ~22% success) is among the most actionable for mid-market companies.
- Gartner, “Lack of AI-Ready Data Puts AI Projects at Risk” (February 2025). n=248 data management leaders, Q3 2024. Credibility: high; established analyst firm. Limitation: self-reported readiness assessment.
- BCG, “Build for the Future: The Widening AI Value Gap” (September 2025). Analysis of “future-built” companies. Credibility: high for frameworks; consulting firm data. The 10-20-70 budget allocation finding (70% people/process, 20% technology, 10% algorithms) is well-documented across multiple BCG publications.
- Mavvrik, “2025 State of AI Cost Management” (2025). Industry survey on AI cost overruns. Credibility: moderate; vendor-published research, corroborated by DX Research findings on implementation costs exceeding licensing by 30-40%.
- RSM US Middle Market AI Survey (2025). n=966 mid-market executives. Credibility: high for mid-market-specific data; RSM specializes in this segment.
Brandon Sneider | brandon@brandonsneider.com | March 2026