Buy, Build, or Both: The Mid-Market AI Decision That Determines Whether You Join the 5%
Brandon Sneider | March 2026
Executive Summary
- MIT’s NANDA research (n=800, 300 public deployments, July 2025) finds purchased AI tools succeed 67% of the time; internal builds succeed roughly 22%. That 3:1 ratio holds across company sizes, but mid-market firms feel it hardest because a failed build consumes a disproportionate share of their IT budget.
- PwC’s 29th Global CEO Survey (n=4,454, January 2026) reports 56% of CEOs see neither revenue gains nor cost reduction from AI. The 12% reporting both outcomes are 2-3x more likely to have bought and embedded vendor solutions across functions rather than building from scratch.
- The real answer is neither pure buy nor pure build. Andreessen Horowitz’s enterprise CIO survey (n=100+, 2025) shows innovation budgets collapsed from 25% to 7% of AI spend as organizations shifted from custom experiments to purchased applications extended with domain-specific configuration.
- IBM’s Institute for Business Value (n=800 executives, November 2025) finds companies that account for technical debt in AI planning achieve 29% higher ROI. Mid-market firms that ignore existing infrastructure liabilities watch AI returns drop 18-29% — often the difference between a successful program and a write-off.
- The winning pattern for 200-2,000 person companies: buy the platform, configure it to your workflows, build only where you have proprietary data that creates competitive advantage. The organizations capturing value treat AI procurement as a strategic decision, not a technology experiment.
The Evidence Is Clear: Buying Outperforms Building
The single most useful data point for any CTO weighing this decision comes from MIT’s NANDA research group. Across 300 public AI deployments and 150 leadership interviews, purchased AI tools succeed at roughly triple the rate of custom-built solutions. The study attributes this gap to three factors: vendor tools arrive with pre-trained models and tested integrations; they carry no internal maintenance burden; and they let organizations focus their limited technical talent on configuration and adoption rather than infrastructure.
This finding aligns with what the broader failure data shows. RAND Corporation research (n=65 data scientists and engineers, 2025) puts the overall AI project failure rate at 80.3% — twice the failure rate of non-AI IT projects. Custom builds account for a disproportionate share of that failure. Abandoned projects average $4.2 million in sunk costs with a median time to abandonment of 11 months. For a mid-market firm with a $2-5 million IT budget, that single failed project can consume two years of discretionary spend.
The Pertama Partners analysis (aggregating RAND, MIT, and Deloitte data, 2026) breaks down the 80% failure rate into its components: 33.8% abandoned before production, 28.4% completed but delivering no value, and 18.1% unable to justify their costs. Only 19.7% achieve business objectives. The successful projects return 188% ROI with a 1.4-year payback. The failed ones average -72% ROI. There is no middle ground — AI projects either work well or fail badly.
Why Building Fails at Mid-Market Scale
Large enterprises can absorb a $7.2 million abandoned AI initiative (Deloitte’s average sunk cost per failed project). A 500-person company cannot. The specific failure modes that disproportionately affect mid-market builders:
The talent problem. ML engineer turnover runs at 34% annually (Pertama Partners, 2026). A mid-market company that hires two data scientists to build a custom solution faces better-than-even odds of losing at least one within 18 months — along with all the institutional knowledge embedded in the codebase. Organizations cycle through an average of 2.1 consulting teams per AI project. Each transition resets context and extends timelines.
The data preparation trap. Data preparation consumes 61% of the total project timeline (Pertama Partners, 2026). For mid-market firms, this is worse than it sounds. Gartner finds 65% of organizations lack AI-ready data or are uncertain about readiness. The companies that do achieve data readiness see a 26% improvement in business outcomes — but getting there requires foundational governance work that most 200-500 person companies have not done.
The integration multiplier. Integration complexity averages 2.4x the original estimate (Pertama Partners, 2026). Security and compliance reviews add an average of 4.3 months. A project scoped for six months routinely extends to fourteen. For a mid-market firm, that means the business case built around Year 1 savings now stretches into Year 2 before delivering anything.
The maintenance iceberg. IBM’s Institute for Business Value research (n=800 executives, November 2025) finds that 86% of executives say technical debt constrains AI success. The hidden costs are substantial: maintenance runs $5,000-$20,000 per month for enterprise AI systems, with compliance requirements adding $10,000-$100,000 annually. Budget 15-25% of initial development costs annually for maintenance, retraining, and scaling. For a $300,000 build, that is $45,000-$75,000 per year in perpetuity — before you hire the person to manage it.
The Buy-Side Calculus
Purchased AI tools eliminate most of these failure modes. Vendor solutions arrive with pre-trained models, tested integrations, customer support, and — critically — a maintenance roadmap funded by the vendor’s entire customer base. A mid-market firm buying Microsoft 365 Copilot at $30/user/month for 200 users spends $72,000/year with near-zero implementation risk. The same firm building a custom productivity assistant would spend $250,000-$500,000 in development alone, plus $60,000-$100,000 in annual maintenance.
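The comparison above can be made concrete with a back-of-the-envelope three-year cost sketch. The per-user price and the build-side cost ranges come from this section; the midpoint choices and the three-year horizon are illustrative assumptions, not vendor quotes:

```python
# Back-of-the-envelope 3-year cost comparison using the figures in this section.
# Midpoints and the 3-year horizon are illustrative assumptions.

USERS = 200
COPILOT_PER_USER_MONTH = 30  # Microsoft 365 Copilot list price cited above

buy_annual = USERS * COPILOT_PER_USER_MONTH * 12  # $72,000/year
buy_3yr = buy_annual * 3

build_dev = 375_000           # midpoint of the $250k-$500k development range
build_maint_annual = 80_000   # midpoint of the $60k-$100k annual maintenance range
build_3yr = build_dev + build_maint_annual * 3

print(f"Buy (3 years):   ${buy_3yr:,}")    # $216,000
print(f"Build (3 years): ${build_3yr:,}")  # $615,000
print(f"Build / Buy cost ratio: {build_3yr / buy_3yr:.1f}x")
```

Even before weighting by the success rates above, the build path costs roughly 2.8x more over three years — and that gap widens once the 22% build success rate enters the calculation.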
But buying introduces its own risks. Andreessen Horowitz’s CIO survey (2025) documents the lock-in problem: agentic workflows make model switching increasingly costly. Customized prompts require extensive configuration that does not transfer between vendors. One CIO noted that switching costs now include “lots of pages of instruction” that would need to be rebuilt from scratch.
The a16z data also reveals a pricing shift that favors buyers who negotiate well: 64% of enterprises prefer usage-based pricing, but vendors are pushing outcome-based models. The firms capturing value are those that lock in usage-based terms before vendor leverage increases.
The Third Option: Buy-Configure-Extend
The evidence points toward a hybrid approach, but not the vague “build and buy” advice that consultants default to. The specific pattern that works:
Layer 1 — Buy the platform. Start with the AI capabilities already embedded in your existing vendors. Deloitte’s State of AI survey (n=3,235, August-September 2025) finds 30% of organizations are redesigning key processes around AI, and another 34% are using AI to create new products or reinvent core processes. The successful ones started with vendor AI features — not separate tools. This is where 70%+ of mid-market AI value lives: the features already included in the software you pay for.
Layer 2 — Configure for your workflows. The a16z data shows enterprises shifting rapidly from standalone AI tools to embedded capabilities. Innovation budgets dropped from 25% to 7% of total AI spend. That is not companies abandoning AI — it is companies recognizing that configuring existing tools delivers more value than building new ones. The configuration layer is where mid-market firms should concentrate their technical talent: building prompts, training workflows, integrating data sources, and tuning outputs.
Layer 3 — Build only where you have proprietary advantage. The 22% success rate for custom builds improves dramatically when the project has three characteristics: proprietary training data that vendors cannot replicate, a workflow unique enough that no vendor product addresses it, and sustained executive sponsorship (projects with sustained CEO involvement succeed at 68% vs. 11% for those that lose sponsorship). For a 500-person manufacturing company, that might be a quality inspection model trained on decades of defect images. For a professional services firm, it might be a knowledge retrieval system built on proprietary case files. For most mid-market companies in most functions, it means building nothing.
Key Data Points
| Metric | Data | Source |
|---|---|---|
| Buy success rate | ~67% | MIT NANDA (n=800, 300 deployments, July 2025) |
| Build success rate | ~22% | MIT NANDA (n=800, 300 deployments, July 2025) |
| Overall AI project failure rate | 80.3% | RAND Corporation (n=65, 2025) |
| CEOs seeing no AI financial benefit | 56% | PwC CEO Survey (n=4,454, January 2026) |
| CEOs seeing both cost and revenue gains | 12% | PwC CEO Survey (n=4,454, January 2026) |
| Average sunk cost per abandoned AI project | $4.2M | Pertama Partners (aggregated data, 2026) |
| Successful project ROI | +188% | Pertama Partners (aggregated data, 2026) |
| Failed project ROI | -72% | Pertama Partners (aggregated data, 2026) |
| Successful project payback period | 1.4 years | Pertama Partners (aggregated data, 2026) |
| Technical debt ROI penalty if ignored | 18-29% reduction | IBM IBV (n=800, November 2025) |
| Data preparation share of project timeline | 61% | Pertama Partners (aggregated data, 2026) |
| ML engineer annual turnover | 34% | Pertama Partners (aggregated data, 2026) |
| Integration complexity vs. estimate | 2.4x | Pertama Partners (aggregated data, 2026) |
| Innovation budget as % of AI spend (2025) | 7% (down from 25%) | a16z CIO Survey (n=100+, 2025) |
What This Means for Your Organization
The buy-vs-build question is often framed as a technology decision. It is not. It is a resource allocation decision — and for mid-market companies, the math is unambiguous. A custom AI build with a 22% success rate and a $4.2 million average sunk cost when it fails is not a responsible bet for a company with a $3-5 million annual IT budget. A purchased solution with a 67% success rate and a $50,000-$150,000 annual cost is.
The practical framework: audit what AI capabilities your existing vendors already offer. Most mid-market companies are paying for AI features they have not activated. Start there. Then identify the two or three workflows where your organization has truly proprietary data — data no vendor can match. Those are the only candidates for custom development. Everything else should be bought and configured.
The 12% of CEOs reporting real financial returns from AI are not the ones with the most sophisticated technology. They are the ones who embedded AI extensively across products, services, and decision-making — which requires disciplined procurement, not ambitious engineering. If the right path forward for your specific situation would benefit from outside perspective, I am reachable at brandon@brandonsneider.com.
Sources
- MIT NANDA — “The GenAI Divide” (July 2025). Based on 150 interviews, a survey of 350 employees, and analysis of 300 public deployments. Finds purchased AI tools succeed ~67% vs. ~22% for internal builds. Independent academic research — high credibility. Fortune coverage | Gartner Peer Community discussion
- PwC 29th Global CEO Survey (January 2026). n=4,454 CEOs across 95 countries. 56% report no financial benefit from AI; 12% report both cost and revenue gains. Independent survey — high credibility. PwC press release | Fortune coverage
- Andreessen Horowitz — “How 100 Enterprise CIOs Are Building and Buying Gen AI” (2025). Survey of 100+ CIOs across 15 industries, $500M+ revenue. Documents the shift from build to buy and the innovation budget collapse from 25% to 7%. VC-affiliated research — moderate credibility but strong primary data. a16z report
- RAND Corporation — “The Root Causes of Failure for AI Projects” (2025). Based on interviews with 65 data scientists and engineers. Identifies an 80%+ failure rate, twice that of non-AI IT projects. Independent research institution — high credibility. RAND report
- IBM Institute for Business Value — “The Tech Debt Reckoning” (November 2025). n=800 C-suite executives across 20 countries. Companies addressing technical debt achieve 29% higher AI ROI. Vendor-affiliated research — moderate credibility, but large sample size and specific findings. IBM report
- Deloitte — “State of AI in the Enterprise” (March 2026). n=3,235 leaders surveyed August-September 2025. 37% using AI at surface level only; 25% have moved 40%+ of experiments to production. Consulting firm survey — moderate-high credibility with a strong sample size. Deloitte report
- Pertama Partners — “AI Project Failure Statistics 2026” (2026). Aggregation of RAND, MIT, and Deloitte data. Provides granular cost breakdowns: $4.2M average sunk cost for abandoned projects, -72% ROI for failures, +188% for successes. Aggregated analysis — moderate credibility, dependent on underlying sources. Pertama Partners analysis