When to Say No to AI: A Decision Framework for the Use Cases That Should Wait
Brandon Sneider | March 2026
Executive Summary
- 42% of companies abandoned most AI initiatives in 2025 — up from 17% in 2024. The average organization scrapped 46% of proof-of-concept projects before production. The leading cause was not bad technology. It was too many projects competing for too few resources (S&P Global Voice of the Enterprise, n=1,006, October-November 2024).
- Mid-market companies reach full AI deployment 3x faster than enterprises — by running fewer pilots, not more. Companies with 200-500 employees that pick one or two high-impact use cases see value in 6-9 months. Those that spread across five or six simultaneous experiments end up in pilot purgatory (MIT NANDA, 300 deployments analyzed, August 2025).
- 73% of organizations are already at or beyond their change saturation point. Every AI pilot competes for the same finite pool of human attention, willingness to learn, and IT bandwidth. Adding a third pilot during an ERP migration is not ambitious — it is a recipe for failing at all three (Prosci, n=2,600+ change practitioners, 2007-2024).
- Data quality doubled as the top AI obstacle in a single year — from 19% to 44%. Gartner predicts 60% of AI projects will be abandoned through 2026 specifically because the underlying data is not ready. The cheapest AI decision a mid-market company can make is to defer the project until the data is clean (Gartner, n=248 data management leaders, Q3 2024; Informatica CDO Insights, n=600, January 2025).
- The executive who says “not yet” to three proposals and “yes” to one gets more value than the one who says “yes” to all four. The evidence is consistent: focus beats breadth at every company size, and the advantage compounds at mid-market scale where there is no bench.
The “AI Everywhere” Syndrome
Harvard Business Review diagnosed the pattern directly: organizations adopt AI the same way they adopted digital transformation a decade ago — funding dozens of disconnected pilots and hoping for breakthroughs. Nathan Furr and Alexander Shipilov call it the “AI experimentation trap” and note that 95% of generative AI investments produce zero returns because experiments lack connection to core business problems and operate without clear success metrics (HBR, August 2025).
The pattern has a specific anatomy at mid-market scale. A 300-person company with a 5-person IT team launches a customer service chatbot, a document summarization tool, and a sales forecasting experiment simultaneously. Each pilot gets approximately 10-15% of one person’s attention. None gets the data preparation, workflow redesign, or change management it needs. Sixty days later, all three are technically “live” and none is delivering measurable value.
BCG’s 2025 analysis of 1,250 executives across 9 industries confirms the outcome: 60% of companies generate no material value from AI. Only 5% qualify as “future-built” firms with substantial returns. The 5% share a decisive characteristic — they deployed sequentially, not simultaneously, scaling each use case to production before starting the next (BCG, “The Widening AI Value Gap,” September 2025).
McKinsey’s global survey reinforces the point from a different angle. While 88% of organizations use AI in at least one function, only 6% qualify as high performers reporting significant EBIT impact. Those 6% are 3x more likely to have redesigned workflows around AI (55% versus 20% of everyone else). Workflow redesign is time-intensive, context-specific work. It cannot happen across five projects simultaneously with a lean team (McKinsey State of AI, n=1,993 across 105 countries, Q4 2024).
Bain’s survey of 197 executives reveals the sequencing discipline directly: growth winners deploy an average of 4.5 use cases in production versus 3.3 for laggards — but they reached those numbers by scaling one at a time, not launching all five in Q1 (Bain & Company, “AI Moves from Pilots to Production,” Q3 2025).
The Three Blockers: Data, Capacity, and Change Saturation
Data Readiness
Data quality is the single most underestimated blocker in AI deployment. Gartner predicts 60% of AI projects will be abandoned through 2026 because the underlying data is not AI-ready — and 63% of organizations either lack or are unsure whether they have proper data management practices for AI (Gartner, n=248 data management leaders, Q3 2024).
The trend is accelerating. Informatica’s CDO Insights survey found that data quality and readiness tied as the top obstacle to AI success at 44% in 2025, more than doubling from 19% in 2024 (Informatica, n=600 CDOs across U.S., Europe, and Asia, January 2025). This is vendor-funded research, but the finding aligns precisely with Gartner’s independent assessment.
At the project level, the data tax is concrete. MIT NANDA’s deployment analysis found that data preparation consumes approximately 61% of AI project timelines. The arithmetic is unforgiving: a plan that ignores data preparation budgets for only the remaining 39% of the real work, so the true timeline is roughly 2.5x the estimate. A company that starts an AI pilot expecting to “fix the data as it goes” is not being agile — it is more than doubling its timeline and budget without acknowledging either.
IT Capacity
RSM’s 2025 Middle Market AI Survey (n=966 decision-makers, February-March 2025) found that 92% of mid-market companies experienced AI implementation challenges, 62% found it harder than expected, and 70% needed outside help. The underlying constraint: mid-sized companies ask existing IT staff to absorb AI oversight on top of current responsibilities. There is no AI team. There is no bench. The CIO’s unspoken fear — “I’m already underwater, who does this?” — is the correct assessment of the situation.
Each AI pilot requires vendor evaluation, security review, data integration, user training, and ongoing monitoring. Running three pilots with a 5-person IT team is not a resource allocation problem. It is a resource exhaustion problem.
Change Saturation
Prosci’s longitudinal research across 2,600+ change practitioners finds that 73% of organizations are near, at, or beyond their change saturation point — a figure consistent across multiple survey waves from 2007 to 2024. Accenture’s Pulse of Change Index confirms the environment: the rate of organizational change rose 183% since 2019, and 52% of C-suite leaders say they are not fully prepared for the change they face (Accenture, n=3,400 C-suite leaders, January 2024).
Gartner’s HR research quantifies the human cost: 73% of HR leaders report employees are fatigued from change, 74% say managers are not equipped to lead it, and among employees experiencing change fatigue, intent to stay declines by up to 42% and performance drops by up to 27% (Gartner HR, n=473, July 2024).
A mid-market company does not have a change management team. The people absorbing AI workflow changes are the same people who absorbed the last CRM migration and the ERP upgrade before that. Launching an AI pilot during another major technology change is not a scheduling conflict. It is a failure-rate multiplier.
The Decision Framework
Say No (Decline Entirely)
The data does not exist and cannot be created in under six months. If the use case requires consolidating data from multiple systems that have never been integrated, the AI project should wait for the data project to finish. Gartner’s 60% abandonment prediction is driven specifically by this gap.
The organization is mid-migration. If an ERP upgrade, CRM transition, or major system change is underway, AI pilots will compete for the same IT bandwidth, the same user training hours, and the same leadership attention. Adding a third initiative does not divide resources by three — it multiplies failure risk across all three.
The use case started with a tool, not a problem. This is HBR’s “AI experimentation trap” in practice: if the initiative began because a vendor gave a compelling demo or a board member read an article — rather than because a specific, measurable business problem demands a solution — the project will fail. RAND Corporation’s analysis of AI project failures (65 expert interviews, 2024) confirms that misalignment between the business problem and the technical approach is the most common root cause.
There is no measurable business outcome defined. Pertama Partners’ analysis of 2,400+ AI initiatives (2025-2026) found that companies defining success metrics before project approval achieve 54% success rates. Companies that skip this step achieve 12%. If the business case is “explore AI for customer service,” it is not ready. “Reduce ticket misrouting from 23% to under 10% within 60 days” is ready.
Say Not Yet (Defer)
The company already has two active AI pilots and fewer than 10 IT staff. MIT NANDA’s finding is specific: mid-market companies that succeed pick one or two high-impact use cases. Running more than two simultaneous pilots with a small IT team means none gets adequate attention. Defer until one pilot reaches production and moves to maintenance mode.
Data quality issues affect more than 20% of the relevant records. The data readiness investment is not optional — it is prerequisite. Fix the data quality problem first. Then deploy AI against clean data. The reverse sequence has an 80%+ failure rate.
The workforce has not been prepared. HBR’s behavioral science research (November 2025) found that AI projects fail when leaders treat adoption as a technology purchase instead of a behavioral change problem. People resist tools that disrupt routines and overreact to visible AI errors. If there is no plan for training, communication, and gradual rollout, the project should wait until one exists.
Leadership has not designated a business sponsor. McKinsey’s high performers are 3x more likely to have executive leadership demonstrating AI ownership. Without a named business leader — not IT — who owns the success metric, the project lacks organizational gravity and will stall when priorities compete.
Say Go (Pursue)
The use case solves a specific, measurable business problem. Clean data exists, a willing business sponsor owns the outcome, and a kill threshold is defined before the pilot begins.
The organization has change capacity. No major system migration is underway, the IT team has bandwidth, and employees are not already overwhelmed.
A purchased solution exists from a specialized vendor. The company can pilot with 10-30 users before committing to an annual contract.
The expected payback is under 12 months. The company can define a 90-day checkpoint: “if this has not achieved X by day 90, the pilot stops.” The sketch below expresses the full framework as a checklist.
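For teams that want to run proposals through this framework mechanically, here is a minimal sketch in Python. The AIProposal fields, the thresholds, and the function name are illustrative assumptions, not drawn from any cited study; the logic simply encodes the no / not-yet / go criteria above.

```python
from dataclasses import dataclass


@dataclass
class AIProposal:
    """Illustrative intake fields; adapt to your own proposal template."""
    data_ready_within_6_months: bool  # Say No: data exists or can be created soon
    major_migration_underway: bool    # Say No: ERP/CRM transition in flight
    started_with_problem: bool        # Say No: problem-first, not vendor-demo-first
    success_metric_defined: bool      # Say No: measurable outcome written down
    active_pilots: int                # Not Yet: pilots already running
    it_staff: int                     # Not Yet: size of the IT team
    dirty_record_pct: float           # Not Yet: % of relevant records with quality issues
    workforce_prepared: bool          # Not Yet: training and rollout plan exists
    business_sponsor_named: bool      # Not Yet: named business (not IT) owner
    expected_payback_months: int      # Go: payback horizon
    kill_threshold_defined: bool      # Go: 90-day stop condition agreed in advance


def triage(p: AIProposal) -> str:
    """Return 'no', 'not yet', or 'go' for a proposal, per the framework above."""
    # Decline entirely: any one of these makes the project unworkable as proposed.
    if (not p.data_ready_within_6_months
            or p.major_migration_underway
            or not p.started_with_problem
            or not p.success_metric_defined):
        return "no"
    # Defer: these blockers are fixable; sequence the project instead of killing it.
    if ((p.active_pilots >= 2 and p.it_staff < 10)
            or p.dirty_record_pct > 20
            or not p.workforce_prepared
            or not p.business_sponsor_named):
        return "not yet"
    # Pursue, but only with a payback horizon and a kill threshold already defined.
    if p.expected_payback_months <= 12 and p.kill_threshold_defined:
        return "go"
    return "not yet"
```

The point is not the code; it is that every branch maps to a cited failure mode, so a “go” decision becomes auditable rather than a matter of enthusiasm.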
The Optimal Number of Simultaneous AI Initiatives
No single study provides a definitive number, but the research converges on a clear range for a company with 200-500 employees and 3-10 IT staff:
| Source | Finding |
|---|---|
| MIT NANDA (August 2025) | Mid-market companies that succeed “pick one or two high-impact use cases” |
| Bain (Q3 2025, n=197) | Growth winners average 4.5 use cases in production — scaled sequentially, not launched simultaneously |
| McKinsey (Q4 2024, n=1,993) | The 6% of high performers prioritize fewer initiatives with deeper workflow redesign |
| RAND Corporation (2024) | All five root causes of AI failure compound when organizations run too many projects simultaneously |
The synthesis: one active AI implementation at a time, with a second in planning and evaluation. This is not conservative. It is what the 5% that succeed actually do.
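To make that cadence concrete, here is a minimal sketch of the “one active, one in planning” rule as a work-in-progress limit. The class and method names are illustrative, not a prescribed tool.

```python
from collections import deque


class AIPortfolio:
    """Sketch of the cadence the research converges on: one initiative
    in implementation, one in planning, everything else explicitly queued."""

    def __init__(self) -> None:
        self.active = None       # at most one initiative in implementation
        self.planning = None     # at most one in evaluation and planning
        self.backlog = deque()   # every other approved idea waits, visibly

    def propose(self, use_case: str) -> None:
        """New proposals join the queue; they do not start immediately."""
        self.backlog.append(use_case)

    def advance(self) -> None:
        """Call only when the active initiative reaches production and
        moves to maintenance mode; then the next initiative promotes."""
        self.active = self.planning
        self.planning = self.backlog.popleft() if self.backlog else None
```

The design choice worth copying is the explicit backlog: deferred proposals stay visible and sequenced instead of quietly restarting as unsanctioned pilots.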
Key Data Points
| Metric | Finding | Source |
|---|---|---|
| AI initiative abandonment rate | 42% of companies abandoned most AI projects in 2025 (up from 17% in 2024) | S&P Global VotE, n=1,006, October-November 2024 |
| Mid-market vs. enterprise deployment speed | Mid-market reaches production ~3x faster by focusing on 1-2 use cases | MIT NANDA, 300 deployments, August 2025 |
| Change saturation | 73% of organizations at or beyond saturation point | Prosci, n=2,600+, 2007-2024 |
| Data quality as top obstacle | 44% cite data quality (doubled from 19% in 2024) | Informatica CDO Insights, n=600, January 2025 |
| AI projects abandoned for data reasons | 60% through 2026 | Gartner prediction, n=248, Q3 2024 |
| AI project failure rate | 80%+ — 2x the rate of non-AI IT projects | RAND Corporation, 65 expert interviews, 2024 |
| Success with pre-defined metrics | 54% vs. 12% without | Pertama Partners, 2,400+ initiatives, 2025-2026 |
| Mid-market implementation challenges | 92% experienced challenges; 70% needed outside help | RSM, n=966, February-March 2025 |
| Employee change fatigue | 73% of HR leaders report employees fatigued from change | Gartner HR, n=473, July 2024 |
| Companies generating material AI value | Only 5% are “future-built” with substantial returns | BCG, n=1,250, September 2025 |
What This Means for Your Organization
The most valuable AI decision a mid-market company can make in 2026 is not which tool to buy. It is which proposals to decline — or defer — so the one initiative that gets approval also gets the data preparation, IT bandwidth, and organizational attention required to succeed.
The research is unambiguous on the math. A company running one well-resourced AI pilot with clean data, a business sponsor, and defined success metrics has a 54% chance of reaching production. The same company running four simultaneous pilots without those foundations operates at the 12% success rate per pilot. If those rates hold, four unfocused pilots yield 0.48 expected successes against 0.54 for the single focused one: the focused company comes out ahead before even counting the fourfold drain on data preparation, IT bandwidth, and attention, while the unfocused one is on track to join the 42% that abandon most of their initiatives before they deliver value.
The framework above — say no, say not yet, say go — translates directly into a 15-minute leadership conversation. Bring the four criteria for “say no” and the four criteria for “not yet” to the next meeting where an AI initiative is proposed. If the proposal trips any of them, the right answer is not rejection. It is sequencing: “This is a strong idea. It goes second, after we finish the first pilot and free up the capacity to do it right.”
If applying this framework to your current AI proposals raised questions about sequencing, data readiness, or where to focus first, I would welcome that conversation — brandon@brandonsneider.com.
Sources
- MIT Media Lab / Project NANDA — “The GenAI Divide: State of AI in Business 2025” (August 2025). 150 executive interviews, 350 employee surveys, 300 AI deployment analyses. Independent academic research. Credibility: Very High. https://fortune.com/2025/08/18/mit-report-95-percent-generative-ai-pilots-at-companies-failing-cfo/
- S&P Global Market Intelligence — “Voice of the Enterprise: AI & ML, Use Cases 2025” (October-November 2024). n=1,006 IT and business professionals. Independent market intelligence. Credibility: Very High. https://www.ciodive.com/news/AI-project-fail-data-SPGlobal/742590/
- McKinsey / QuantumBlack — “The State of AI: Global Survey 2025” (Q4 2024). n=1,993 respondents across 105 countries. Independent consulting. Credibility: High. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai
- BCG — “The Widening AI Value Gap” (September 2025). n=1,250 executives across 9 industries and 25+ sectors. Independent consulting. Credibility: High. https://www.bcg.com/publications/2025/are-you-generating-value-from-ai-the-widening-gap
- RAND Corporation — “The Root Causes of Failure for Artificial Intelligence Projects” (2024). 65 data scientists and engineers with 5+ years of experience. Independent federally funded research. Credibility: Very High. https://www.rand.org/pubs/research_reports/RRA2680-1.html
- Bain & Company — “AI Moves from Pilots to Production” (Q3 2025). n=197 executives. Independent consulting. Credibility: High (small sample). https://www.bain.com/insights/executive-survey-ai-moves-from-pilots-to-production/
- RSM — “Middle Market AI Survey 2025” (February-March 2025). n=966 decision-makers at mid-market companies. Accounting/consulting firm with mid-market specialization. Credibility: High for target audience. https://rsmus.com/insights/services/digital-transformation/rsm-middle-market-ai-survey-2025.html
- Prosci — “Best Practices in Change Management” (multiple waves, 2007-2024). n=2,600+ change practitioners. Independent change management research. Credibility: Very High — longitudinal, large sample. https://www.prosci.com/change-saturation
- Accenture — “Pulse of Change Index” (January 2024). n=3,400 C-suite leaders. Consulting firm (vendor interests). Credibility: Medium-High. https://newsroom.accenture.com/news/2024/businesses-anticipate-unprecedented-rate-of-change-in-2024-new-accenture-pulse-of-change-index-shows
- Gartner — AI project abandonment and data readiness predictions (July 2024, February 2025, June 2025). Various samples (248-3,412 respondents). Independent analyst. Credibility: Very High. https://www.gartner.com/en/newsroom/press-releases/2025-02-26-lack-of-ai-ready-data-puts-ai-projects-at-risk
- Gartner HR — Employee change fatigue survey (July 2024). n=473 HR leaders. Independent analyst. Credibility: High. Referenced in Gartner HR press materials.
- Informatica — “CDO Insights 2025” (January 2025). n=600 CDOs across U.S., Europe, and Asia. Vendor-funded with large sample. Credibility: Medium — vendor-funded, but findings align with independent Gartner data. https://www.informatica.com/blogs/cdo-insights-2025-global-data-leaders-racing-ahead-despite-headwinds-to-being-ai-ready-latest-survey-finds.html
- Pertama Partners — AI Project Success Analysis (2025-2026). 2,400+ AI initiatives analyzed. Practitioner dataset. Credibility: Moderate-High — large sample, methodology not independently verified. Referenced in industry analyses.
- HBR — “Beware the AI Experimentation Trap” by Nathan Furr and Alexander Shipilov (August 2025). Framework/analysis, not survey data. Credibility: High — academic authors, aligns with quantitative findings. https://hbr.org/2025/08/beware-the-ai-experimentation-trap
- HBR — “Most AI Initiatives Fail: This 5-Part Framework Can Help” (November 2025). Behavioral science analysis. Credibility: Medium-High — case-study based. https://hbr.org/2025/11/most-ai-initiatives-fail-this-5-part-framework-can-help
Brandon Sneider | brandon@brandonsneider.com | March 2026