The AI Kill Decision: How to Run a Post-Mortem and Know When to Stop, Pivot, or Push Through
Brandon Sneider | March 2026
Executive Summary
- 42% of companies abandoned the majority of their AI initiatives in 2025, up from 17% in 2024 — a rate that more than doubled in twelve months (S&P Global, n=1,006).
- The median abandoned AI project consumed 11 months and $4.2 million before termination, suggesting most companies wait far too long to make the kill decision (Pertama Partners, 2026).
- Five root causes explain the vast majority of AI project failures: leadership misalignment, data quality gaps, technology obsession over problem-solving, infrastructure underinvestment, and loss of executive sponsorship (RAND Corporation, 2025).
- Projects with pre-defined success metrics at approval achieve a 54% success rate versus 12% without them — the single largest controllable variable in AI project outcomes.
- The 5% of organizations capturing value at scale share one trait: they made fast, disciplined decisions to kill what was not working and double down on what was. The post-mortem is not a funeral — it is a diagnostic tool.
The Abandonment Surge
Something shifted in 2025. S&P Global’s Voice of the Enterprise survey (n=1,006 IT and business leaders across North America and Europe, March 2025) found that 42% of companies abandoned the majority of their AI initiatives — up from 17% just twelve months earlier. Organizations reported scrapping an average of 46% of their AI proofs-of-concept before they reached production. Despite 60% of surveyed companies investing in generative AI and 40% claiming full organizational integration, 46% of respondents reported that no single enterprise objective had seen a “strong positive impact” from those investments.
This is not a technology problem. RAND Corporation’s research identifies an 80% failure rate for AI projects — double the failure rate for non-AI IT projects. BCG’s Build for the Future report (September 2025) finds 60% of companies generate no material value from AI. McKinsey’s State of AI (n=1,993, 2025) narrows the field further: only 6% of organizations — 109 of 1,993 respondents — attribute more than 5% of EBIT to AI and report “significant value.”
The pattern is clear. Most AI projects are not failing because the technology is inadequate. They are failing because organizations lack a structured methodology for diagnosing what is wrong and deciding what to do about it.
Why AI Projects Fail: The Five Root Causes
RAND Corporation’s analysis of AI project failures across industries identifies five anti-patterns that recur with striking consistency. Understanding these is the prerequisite for any post-mortem — you cannot fix what you have not diagnosed.
1. Leadership Misalignment (present in 73% of failures)
Executives misunderstand AI capabilities, hold inflated expectations about timelines, and underestimate resource requirements. The MIT NANDA report (July 2025) confirms this: companies attempted to force AI into existing processes rather than reconsidering workflows, and executives believed the problem was insufficient AI capability when the actual issue was implementation strategy.
The diagnostic question: Did leadership define measurable success criteria before approving the project — and have those criteria been revisited since launch?
2. Data Quality Gaps (present in 71% of failures)
Gartner predicts 60% of AI projects will be abandoned through 2026 due to lack of AI-ready data. A Q3 2024 Gartner survey of 248 data management leaders found 63% of organizations either do not have or are unsure whether they have the right data management practices for AI. RSM’s mid-market survey (n=966) found 41% of companies cite data quality as their top barrier.
The diagnostic question: Was a formal data readiness assessment conducted before deployment — and did the project proceed despite data gaps?
3. Technology Obsession Over Problem-Solving
RAND identifies a persistent pattern: engineering teams pursue cutting-edge solutions unnecessarily, creating unmaintainable complexity rather than solving practical problems. MIT NANDA’s finding reinforces this — internal AI builds succeed only 22% of the time versus 67% for purchased vendor solutions. Companies building when they should buy are solving the wrong problem.
The diagnostic question: Is this project solving a business problem or showcasing a technology capability?
4. Infrastructure Underinvestment
Companies that invest in demos but not in data pipelines, automated testing, and monitoring systems find that prototypes cannot scale. Deloitte’s State of AI 2026 (n=3,235) reports only 25% of leaders have moved 40% or more of their AI pilots into production — the gap between demonstration and deployment is where most initiatives die.
The diagnostic question: Does the project have production-grade infrastructure, or is it still running on prototype architecture?
5. Loss of Executive Sponsorship
Executive sponsorship dropout is the single most reliable predictor of AI project death. Projects with sustained executive sponsorship achieve a 68% success rate; those that lose it fall to 11%. The median time to sponsorship loss is six months — precisely the period when initial enthusiasm fades and hard organizational change begins.
The diagnostic question: Does a named executive still own this project’s outcomes — and when did they last review progress against the original business case?
The Kill-Pivot-Persist Framework
Most organizations lack structured decision criteria for underperforming AI initiatives. The result is either premature abandonment (killing projects that need more time) or the sunk-cost trap (persisting with projects that will never deliver value). The financial data makes the case for structured decisions stark:
| Outcome Category | Avg. Cost | Avg. Value | ROI |
|---|---|---|---|
| Abandoned before production (34% of projects) | $4.2M | $0 | -100% |
| Completed but failed (28%) | $6.8M | $1.9M | -72% |
| Completed, cost-unjustified (18%) | $8.4M | $3.1M | -63% |
| Successful (20%) | $5.1M | $14.7M | +188% |
Source: Pertama Partners AI Project Failure Statistics 2026. Note: These figures reflect large enterprise averages; mid-market projects typically run at 15-30% of these dollar amounts, but the ratios hold.
The data reveals a critical insight: the worst financial outcome is not killing a project. It is carrying a failing project through to completion. Abandoned projects consume $4.2M on average; completed failures consume $6.8M and recover only $1.9M. Killing early saves $2.6M in spend per project, and a net $0.7M even after crediting the value a completed failure eventually returns.
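The table’s arithmetic can be checked directly. A short script with the dollar figures (in $M) copied from the table above; the category keys are mine:

```python
# Verify the ROI figures in the outcome table (dollar amounts in $M,
# taken from the Pertama Partners table above).
outcomes = {
    "abandoned":        {"cost": 4.2, "value": 0.0},
    "completed_failed": {"cost": 6.8, "value": 1.9},
    "cost_unjustified": {"cost": 8.4, "value": 3.1},
    "successful":       {"cost": 5.1, "value": 14.7},
}

def roi(cost: float, value: float) -> float:
    """Return on investment as a percentage of cost."""
    return (value - cost) / cost * 100

for name, o in outcomes.items():
    print(f"{name}: ROI {roi(o['cost'], o['value']):+.0f}%")

# Killing early vs. completing a failure:
cf = outcomes["completed_failed"]
gross_saving = cf["cost"] - outcomes["abandoned"]["cost"]          # spend avoided
net_saving = (cf["cost"] - cf["value"]) - outcomes["abandoned"]["cost"]  # net of recovered value
print(f"gross spend saved by killing early: ${gross_saving:.1f}M")   # → $2.6M
print(f"net loss avoided: ${net_saving:.1f}M")                       # → $0.7M
```

The gross figure is the $2.6M cited above; the net figure shows the kill decision still wins even after crediting the $1.9M a completed failure eventually returns.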
Decision Gate 1: The 90-Day Checkpoint
At 90 days post-deployment, evaluate against three criteria:
KILL if two or more are true:
- No measurable baseline improvement on the original success metric
- User adoption below 25% despite training and change management
- The business problem the project was designed to solve has changed or no longer exists
- Data quality gaps identified at launch remain unresolved
- Executive sponsor has disengaged or been reassigned
PIVOT if one is true:
- The technology works but is solving the wrong problem (redirect to a higher-value use case)
- Adoption is strong in one department but failing elsewhere (narrow scope and deepen)
- The original workflow was not redesigned before deployment (pause deployment, redesign workflow, restart)
PERSIST if all are true:
- Measurable progress toward the original success metric, even if short of target
- Active user adoption above 40% with qualitative evidence of workflow integration
- Executive sponsor remains engaged and has reviewed progress within the last 30 days
- Known obstacles have documented remediation plans with timelines
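The gate logic above can be encoded as a simple decision function. This is a sketch under the stated 25%/40% thresholds; the `Checkpoint90` record and its field names are illustrative, not a published scoring method, and the fallback to PIVOT when a project is neither clearly killable nor clearly healthy is my simplification:

```python
from dataclasses import dataclass

# Hypothetical checkpoint record for the 90-day gate; fields mirror the
# criteria listed above, simplified to booleans plus an adoption rate.
@dataclass
class Checkpoint90:
    baseline_improved: bool     # measurable improvement on the original metric
    adoption_rate: float        # fraction of intended users actively using it
    problem_still_exists: bool  # the original business problem is still real
    data_gaps_resolved: bool    # launch-time data gaps have been closed
    sponsor_engaged: bool       # a named executive still owns the outcome
    wrong_problem: bool         # tech works but is aimed at the wrong problem
    adoption_uneven: bool       # strong in one department, failing elsewhere
    workflow_redesigned: bool   # workflow was redesigned before deployment

def decide(c: Checkpoint90) -> str:
    """Return KILL, PIVOT, or PERSIST per the 90-day gate criteria."""
    kill_flags = [
        not c.baseline_improved,
        c.adoption_rate < 0.25,
        not c.problem_still_exists,
        not c.data_gaps_resolved,
        not c.sponsor_engaged,
    ]
    if sum(kill_flags) >= 2:
        return "KILL"
    if c.wrong_problem or c.adoption_uneven or not c.workflow_redesigned:
        return "PIVOT"
    if c.baseline_improved and c.adoption_rate > 0.40 and c.sponsor_engaged:
        return "PERSIST"
    return "PIVOT"  # one kill flag or weak adoption: narrow scope and retest
```

Encoding the gate this way forces the review to produce a documented verdict rather than an open-ended status update.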
Decision Gate 2: The 6-Month Review
This is the high-stakes decision point. Six months is long enough to separate “needs more time” from “will never work.” By this point:
- The J-curve should be complete. Initial productivity dips from adoption should have reversed. If performance is still declining, the project has structural problems, not timing problems.
- Cost-to-value ratio should be visible. The project does not need to be profitable at six months, but the trajectory should be clear. If cost is rising and value remains flat, the economics will not improve.
- Organizational resistance should be declining. If resistance is stable or increasing after six months of change management, the project has a people problem that technology cannot fix.
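The cost-to-value test lends itself to a simple trend check on cumulative monthly figures. A sketch with hypothetical numbers; the second-half growth comparison is one reasonable heuristic, not a prescribed formula:

```python
def trajectory_check(cum_cost: list[float], cum_value: list[float]) -> str:
    """Compare second-half growth of cumulative cost vs. cumulative value."""
    mid = len(cum_cost) // 2
    cost_growth = cum_cost[-1] - cum_cost[mid]
    value_growth = cum_value[-1] - cum_value[mid]
    if cost_growth > 0 and value_growth <= 0:
        return "structural: cost rising, value flat"
    return "trajectory acceptable"

# Hypothetical six months of cumulative figures ($K): cost keeps climbing
# while value has stalled since month 3.
cost = [120, 240, 360, 480, 600, 720]
value = [0, 20, 60, 60, 60, 60]
print(trajectory_check(cost, value))  # → structural: cost rising, value flat
```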
The Post-Mortem Template
When the kill decision is made, a structured post-mortem prevents the same $4.2M mistake from recurring. This template is designed for a 200-500 person company’s executive team — not for the engineering team that built the project.
Section 1: What Did This Project Set Out to Do?
State the original business case in one paragraph. Include the approved budget, timeline, expected ROI, and the executive sponsor’s name. This anchors the post-mortem in the original decision, not in what the project became.
Section 2: What Actually Happened?
Timeline of key milestones, decision points, and deviations from plan. Flag where scope changed, where timelines slipped, and where new requirements were added after approval. Most failed AI projects die from scope expansion, not from the original use case.
Section 3: Root Cause Classification
Classify the primary failure into one of five categories. Be honest about which category applies — the instinct will be to blame the technology, but the data shows the technology is rarely the problem.
| Root Cause | Frequency | Recoverable? |
|---|---|---|
| Leadership misalignment | 73% of failures | Yes — if metrics are reset and sponsor re-engages |
| Data quality gaps | 71% of failures | Yes — with a 90-day data readiness sprint ($75K-$175K) |
| Technology over problem-solving | ~40% of failures | Sometimes — if the problem is re-scoped to a simpler solution |
| Infrastructure underinvestment | ~35% of failures | Yes — but requires capital reallocation |
| Executive sponsorship loss | 56% within 6 months | Rarely — a project without a champion is politically dead |
Section 4: What Should the Organization Do Differently?
This is the only section that matters for the future. Not “what went wrong” — that is Section 3. This section answers: given what this failure revealed about our organization, what structural changes prevent the next project from failing the same way?
Common structural changes include:
- Mandatory data readiness assessment before any AI project is approved
- Pre-defined success metrics and kill criteria at the approval stage
- 90-day checkpoint reviews with documented kill/pivot/persist decisions
- Workflow redesign requirement before deployment (McKinsey’s data shows this is 3.6x more correlated with EBIT impact than any other factor)
Section 5: Salvage Assessment
Not every failed project is a total loss. Assess what is recoverable:
- Data assets: Did the project produce clean, structured data that other initiatives can use?
- Organizational learning: Did the team develop AI implementation skills transferable to the next project?
- Vendor relationships: Are contract terms or vendor capabilities reusable for a different use case?
- Process insights: Did the failed deployment reveal workflow inefficiencies worth fixing independent of AI?
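For organizations that run these reviews regularly, the five sections can be captured as a machine-readable record so post-mortems accumulate into a comparable dataset. A minimal sketch; every field name here is my assumption, not a standard schema:

```python
from dataclasses import dataclass, field

# The five root-cause categories referenced in Section 3.
ROOT_CAUSES = (
    "leadership_misalignment",
    "data_quality_gaps",
    "technology_over_problem_solving",
    "infrastructure_underinvestment",
    "executive_sponsorship_loss",
)

@dataclass
class PostMortem:
    # Section 1: the original business case
    objective: str
    approved_budget_usd: float
    sponsor: str
    # Section 2: what actually happened
    timeline_notes: list[str] = field(default_factory=list)
    # Section 3: primary root cause, one of the five categories
    root_cause: str = ""
    # Section 4: structural changes the organization will make
    structural_changes: list[str] = field(default_factory=list)
    # Section 5: salvageable assets (data, skills, vendor terms, insights)
    salvage: list[str] = field(default_factory=list)

    def __post_init__(self) -> None:
        # Force an honest classification: blame must land in one of the five.
        if self.root_cause and self.root_cause not in ROOT_CAUSES:
            raise ValueError(f"root_cause must be one of {ROOT_CAUSES}")
```

Constraining the root cause to the five categories is deliberate: it blocks the default move of blaming the technology, which the data shows is rarely the problem.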
Key Data Points
| Metric | Value | Source |
|---|---|---|
| Companies abandoning majority of AI initiatives (2025) | 42%, up from 17% in 2024 | S&P Global VotE (n=1,006) |
| AI POCs scrapped before production | 46% average | S&P Global VotE (n=1,006) |
| AI project failure rate vs. non-AI IT | 80% vs. ~40% | RAND Corporation, 2025 |
| Companies generating no material AI value | 60% | BCG Build for the Future (Sept 2025) |
| Organizations reporting >5% EBIT from AI | 6% (109 of 1,993) | McKinsey State of AI (n=1,993, 2025) |
| Success rate with pre-defined metrics | 54% vs. 12% without | Pertama Partners, 2026 |
| Success rate with sustained exec sponsorship | 68% vs. 11% without | Pertama Partners, 2026 |
| Success rate with formal data readiness assessment | 47% vs. 14% without | Pertama Partners, 2026 |
| Median sunk cost per abandoned project | $4.2M (enterprise avg) | Pertama Partners, 2026 |
| Median time to project abandonment | 11 months | Pertama Partners, 2026 |
| Gartner prediction: AI projects abandoned due to data | 60% through 2026 | Gartner (Feb 2025, n=248) |
| Vendor-purchased AI success rate | 67% | MIT NANDA (July 2025) |
| Internal AI build success rate | 22% | MIT NANDA (July 2025) |
What This Means for Your Organization
The 42% abandonment rate is not evidence that AI does not work. It is evidence that most organizations lack the management infrastructure to diagnose what is wrong and make disciplined decisions about what to do next. The 5% capturing value at scale are not using better technology — they are making better decisions faster.
Three changes reduce AI project failure rates by measurable margins. First, define kill criteria before approving any AI initiative. Projects approved with pre-defined success metrics succeed at 4.5x the rate of those without them. The time to set the standard is at budget approval, not six months into a failing deployment. Second, run formal 90-day checkpoints with documented kill/pivot/persist decisions. The median abandoned project runs 11 months before termination — 8 months longer than necessary. A 90-day checkpoint forces the conversation early, when the cost of killing is $100K-$200K rather than $2M-$4M. Third, conduct honest post-mortems that classify root causes and produce structural changes. The five root causes identified by RAND recur because organizations treat each failure as unique rather than as a symptom of a systemic gap.
The hardest discipline in AI deployment is not starting projects — it is stopping the ones that are not going to work. If your organization is 90 days into an AI deployment and the questions above are producing uncomfortable answers, that discomfort is information. The worst outcome is not a failed AI project; it is a failed AI project that runs for another nine months because no one had the framework to call it. If this raised questions about an initiative already underway in your organization, I’d welcome the conversation — brandon@brandonsneider.com
Sources
- S&P Global Market Intelligence, “Voice of the Enterprise: AI & Machine Learning, Use Cases 2025” (n=1,006 IT and business leaders, North America and Europe, March 2025). 42% abandonment rate, 46% POC failure rate. Independent analyst survey; high credibility. https://www.spglobal.com/market-intelligence/en/news-insights/research/ai-experiences-rapid-adoption-but-with-mixed-outcomes-highlights-from-vote-ai-machine-learning
- RAND Corporation, “The Root Causes of Failure for Artificial Intelligence Projects and How They Can Succeed,” RRA2680-1 (2025). Five anti-patterns, 80% AI project failure rate (2x non-AI IT). Independent nonprofit research; high credibility. https://www.rand.org/pubs/research_reports/RRA2680-1.html
- BCG, “Build for the Future 2025: The Widening AI Value Gap” (September 2025). 60% generate no material value; 5% achieve value at scale. Consulting survey; methodology not disclosed. Moderate-high credibility. https://www.bcg.com/publications/2025/are-you-generating-value-from-ai-the-widening-gap
- McKinsey, “The State of AI in 2025” (n=1,993, 105 nations, 2025). 6% high performers with >5% EBIT impact; workflow redesign 3.6x correlation with EBIT. Large consulting survey; self-reported data. Moderate-high credibility. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai
- Pertama Partners, “AI Project Failure Statistics 2026: The Complete Picture” (2026). Financial impact data by outcome category, root cause frequencies, success factor correlations. Advisory firm analysis; aggregates multiple sources. Moderate credibility — verify underlying data independently. https://www.pertamapartners.com/insights/ai-project-failure-statistics-2026
- MIT Sloan / NANDA, “State of AI in Business 2025” (July 2025). 95% GenAI pilot failure rate; buy vs. build success rates (67% vs. 22%). Academic research center; high credibility. https://fortune.com/2025/08/21/an-mit-report-that-95-of-ai-pilots-fail-spooked-investors-but-the-reason-why-those-pilots-failed-is-what-should-make-the-c-suite-anxious/
- Deloitte, “State of AI in the Enterprise 2026” (n=3,235 executives, 2026). 25% have moved 40%+ pilots to production; 37% surface-level adoption. Large consulting survey; high credibility. https://www.deloitte.com/us/en/what-we-do/capabilities/applied-artificial-intelligence/content/state-of-ai-in-the-enterprise.html
- Gartner, “Lack of AI-Ready Data Puts AI Projects at Risk” (February 2025, n=248 data management leaders). 60% abandonment prediction through 2026; 63% lack right data practices. Leading analyst firm; high credibility. https://www.gartner.com/en/newsroom/press-releases/2025-02-26-lack-of-ai-ready-data-puts-ai-projects-at-risk
- HBR, Israeli & Ascarza, “Most AI Initiatives Fail. This 5-Part Framework Can Help” (November 2025). 5Rs framework: Roles, Responsibilities, Rituals, Resources, Results. Harvard Business School faculty; case-study based. Moderate-high credibility. https://hbr.org/2025/11/most-ai-initiatives-fail-this-5-part-framework-can-help
Brandon Sneider | brandon@brandonsneider.com | March 2026