The Second Workflow Decision: How to Expand AI After a Successful First Pilot

Brandon Sneider | March 2026


Executive Summary

  • The first AI pilot proves the technology works. The second workflow proves the organization can scale. Implementation cycles are 80% longer when deploying AI across three or more business units versus single-unit deployments (MIT Sloan Management Review, 2026). The expansion decision is where most mid-market AI programs either accelerate or stall.
  • 68% of enterprises achieve expected efficiency gains within their first 12 months, but only 31% report enterprise-wide financial impact after expanding beyond pilots (McKinsey Global AI Survey, 2026; Deloitte Global AI Leadership Study, 2026). The gap between “it worked once” and “it works at scale” is an expansion methodology problem, not a technology problem.
  • 73% of organizations are already at or near their change saturation point (Prosci, 2025). Employees experience an average of ten enterprise changes simultaneously (Gartner). Expanding too fast triggers change fatigue — 54% of fatigued employees look for a new role, and 48% report increased stress. Expanding too slowly loses momentum and executive attention.
  • The adjacency principle — expanding to workflows that share data, systems, or stakeholders with the first pilot — cuts implementation time by 30-40% and doubles the probability of second-workflow success. Most organizations already have reusable foundations: 71% have shared customer data platforms and 89% have supporting cloud infrastructure (Menlo Ventures, 2025). Those that build the second workflow on that foundation scale faster than those that start from scratch each time.
  • A structured 60-180 day expansion roadmap, built on internal pilot data rather than vendor promises, is the difference between the 5% that capture enterprise-wide value and the 95% that remain stuck in single-workflow success.

Why the Second Workflow Is Harder Than the First

The first AI pilot benefits from novelty, executive attention, and hand-picked champions. The second workflow has none of these advantages — and faces new obstacles the first one did not.

The ROI plateau is real. McKinsey’s 2026 Global AI Survey finds 68% of enterprises hit their efficiency targets in the first 12 months. But Deloitte’s parallel study (n=3,235, August-September 2025) shows only 25% of organizations have moved 40% or more of their AI experiments into production. The distance between one successful workflow and a second one is where most programs die.

Three forces explain the gap:

1. The infrastructure assumption. The first pilot often runs on prototype architecture — a dedicated sandbox, hand-curated data, and direct vendor support. Expanding to a second workflow exposes whether the data pipelines, governance frameworks, and integration points can actually support multiple concurrent use cases. Gartner’s AI Infrastructure Forecast (2026) finds 45% of enterprises will restructure AI deployments to optimize inference costs by year-end 2026 — a symptom of pilot infrastructure that cannot scale.

2. The attention deficit. Executive sponsorship dropout is the single most reliable predictor of AI project death. The median time to sponsorship loss is six months — precisely when the first pilot has demonstrated value and the organization assumes the hard part is done. Projects with sustained executive sponsorship achieve a 68% success rate; those that lose it fall to 11% (Pertama Partners, 2026). The second workflow needs renewed sponsorship commitment, not residual enthusiasm from the first.

3. The change capacity ceiling. Prosci’s research finds 73% of organizations report being near, at, or beyond their change saturation point. Gartner data shows employees navigate an average of ten enterprise changes simultaneously. Launching a second AI workflow into an already-saturated workforce does not produce adoption — it produces resistance. The 54% of employees who experience change fatigue and start looking for new roles are disproportionately the high performers an AI program depends on.

The Adjacency Principle: Why Your Second Workflow Should Be a Neighbor, Not a Stranger

The single most important decision in second-workflow selection is not which process has the highest theoretical ROI. It is which process shares the most infrastructure, data, and stakeholders with the workflow that already works.

BCG’s “Build for the Future” report (n=2,000+ organizations, September 2025) finds the 5% of “future-built” companies that generate substantial value from AI do not treat each use case as an independent project. They build shared platforms that enable rapid deployment across related workflows. The 60% generating no material value treat each deployment as a standalone initiative — new data pipelines, new integrations, new change management, every time.

The adjacency principle has three dimensions:

Data Adjacency

The second workflow should consume data that is already collected, cleaned, and governed by the first deployment. Menlo Ventures’ 2025 State of Generative AI in the Enterprise reports that 71% of organizations have shared customer data platforms and 89% have supporting cloud infrastructure. The question is whether the second workflow can leverage what already exists.

Practical test: If the first pilot automated invoice processing using ERP data, the adjacent workflow is expense categorization or vendor payment reconciliation — not sales forecasting, which requires entirely different data pipelines.

System Adjacency

The second workflow should operate within the same core system — or share direct integration points with the system the first pilot used. RSM’s 2025 AI survey (n=966) finds 92% of mid-market companies encounter implementation challenges, with 41% citing data quality and 39% citing integration difficulties. Each new system integration multiplies these challenges.

Practical test: If the first pilot runs in Salesforce, the adjacent workflow uses Salesforce data or a system with an existing Salesforce integration — not a standalone tool requiring new API development.

Stakeholder Adjacency

The second workflow should be visible to people who already experienced the first pilot’s success. McKinsey’s State of AI (n=1,993, 2025) finds AI high performers are 2.8x more likely to report fundamental workflow redesign. That redesign capacity exists in teams that have already done it once — not in teams hearing about AI for the first time.

Practical test: If the first pilot’s champions sit in finance, the adjacent workflow involves finance stakeholders — not a cold start in a department with no AI exposure.

The Expansion Decision Framework

Not every successful first pilot should lead to a second workflow. The expansion decision requires honest answers to five questions.

Question 1: Did the First Pilot Meet Its Pre-Defined Success Criteria?

Not “did people like it” or “did usage go up.” Did it hit the numeric KPIs established in the pilot charter? Projects with pre-defined success metrics achieve a 54% success rate versus 12% without them (Pertama Partners, 2026). If the first pilot did not have measurable criteria — or met them only loosely — the expansion decision has no foundation.

Go signal: Pilot met or exceeded 2 of 3 pre-defined KPIs, with data documented and independently verifiable.

Stop signal: Pilot generated positive anecdotes but no measurable baseline-to-outcome improvement.

Question 2: Is the Infrastructure Production-Grade or Still Prototype?

Deloitte’s State of AI 2026 finds only 25% of leaders have moved 40%+ of AI experiments into production. The most common reason: pilot infrastructure cannot support additional workflows without significant rearchitecting. The first pilot may have succeeded on a sandbox — the second workflow needs production infrastructure.

Go signal: Data pipelines are automated, governance policies are documented, and the technical team can articulate the integration architecture without vendor assistance.

Stop signal: The first pilot still depends on manual data feeds, vendor-managed configurations, or workarounds that would not survive a second concurrent user base.

Question 3: Does the Organization Have Remaining Change Capacity?

This is the question most organizations skip — and the one that determines whether the second workflow produces adoption or resistance.

Gartner’s March 2026 survey of 110 CHROs finds 78% agree that workflows and roles need to change to capture AI value. But organizations that continuously adapt change plans based on employee response are 4x more likely to achieve change success (Gartner, n=313, July 2025). The question is not whether change is needed — it is whether the workforce can absorb more change right now.

Go signal: Employee engagement scores in the first pilot cohort are stable or improved. The pilot champions are enthusiastic about expanding, not relieved it is over.

Stop signal: The first pilot cohort shows signs of fatigue — declining usage after initial adoption, complaints about “another new tool,” or champion burnout. Address these before adding a second workflow.

Question 4: Is the Executive Sponsor Still Actively Engaged?

Active means reviewing progress monthly, removing obstacles, and communicating the program’s strategic importance. Passive means having approved the budget and moved on.

Go signal: The executive sponsor can articulate the first pilot’s results from memory and has cleared calendar time for the second workflow’s planning phase.

Stop signal: The sponsor has not reviewed pilot results in the last 30 days or has delegated AI oversight to a direct report who was not involved in the original commitment.

Question 5: Can You Identify an Adjacent Workflow That Scores Above the Feasibility Threshold?

Apply the adjacency test (data, system, stakeholder) and the same Impact × Feasibility scoring used for the first pilot. The second workflow should score at least 50 on the PathOpt matrix (Impact 1-10 × Feasibility 1-10); a minimal scoring sketch follows the stop signal below.

Go signal: At least one candidate workflow shares two of three adjacency dimensions with the first pilot and scores above 50.

Stop signal: The most compelling second workflow requires new data infrastructure, a new system integration, and stakeholders with no AI exposure. That is not a second workflow — it is a first pilot in a different department.
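To make the gate concrete, here is a minimal sketch in Python. The function names and the boolean adjacency encoding are illustrative conveniences, not part of any published PathOpt specification; only the 50-point threshold and the two-of-three adjacency rule come from the signals above.

```python
# Sketch of the Question 5 gate: PathOpt score (Impact x Feasibility, each
# rated 1-10) combined with the two-of-three adjacency rule. All names here
# are illustrative; only the thresholds come from the text above.

def pathopt_score(impact: int, feasibility: int) -> int:
    """Impact (1-10) multiplied by Feasibility (1-10); 50+ clears the bar."""
    if not (1 <= impact <= 10 and 1 <= feasibility <= 10):
        raise ValueError("impact and feasibility must be rated 1-10")
    return impact * feasibility

def is_viable_second_workflow(impact: int, feasibility: int,
                              shares_data: bool, shares_system: bool,
                              shares_stakeholders: bool) -> bool:
    """Go signal: score of at least 50 AND at least two of the three
    adjacency dimensions shared with the first pilot."""
    adjacency = sum([shares_data, shares_system, shares_stakeholders])
    return pathopt_score(impact, feasibility) >= 50 and adjacency >= 2

# Example: expense categorization after an invoice-processing pilot shares
# ERP data, the same core system, and the same finance stakeholders.
print(is_viable_second_workflow(7, 8, shares_data=True, shares_system=True,
                                shares_stakeholders=True))  # True (score 56)
```

Note that the gate is conjunctive: a candidate with a high PathOpt score but only one shared adjacency dimension still fails, which is exactly the "first pilot in a different department" pattern the stop signal describes.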

The 60-180 Day Expansion Roadmap

For a 200-500 person company that passes all five expansion questions, the following timeline translates pilot success into scaled value.

Days 1-30: Assessment and Selection

Week 1-2: Harvest pilot intelligence. Before selecting the next workflow, extract everything the first pilot taught:

Pilot Learning | What to Document | Why It Matters for Expansion
Actual vs. projected time savings | Hours saved per week, by task type | Sets realistic targets for second workflow
Data quality surprises | Which data required cleaning, how long it took | Predicts data readiness for adjacent workflows
Adoption patterns | Who adopted fast, who resisted, what drove each | Informs champion selection and change approach
Integration friction | Where the tool struggled with existing systems | Identifies infrastructure gaps before they block the next deployment
Hidden costs | Training time, configuration hours, vendor support needs | Builds accurate budget for expansion

Week 2-3: Score candidate workflows. Using the adjacency principle, identify 3-5 candidate workflows and score each on seven criteria:

Criterion | Weight | Scoring (1-10)
Data adjacency (shares data with first pilot) | 20% | How much existing data infrastructure can be reused?
System adjacency (same or integrated platform) | 15% | Does it operate within the same core system?
Stakeholder adjacency (overlapping champions) | 15% | Are first-pilot champions involved or visible?
Business impact (cost reduction or revenue gain) | 20% | What is the quantifiable P&L impact?
Process maturity (documented, measured, owned) | 15% | Is the workflow already standardized and measured?
Change capacity (team readiness for more change) | 10% | Is the target team below its change saturation threshold?
Executive visibility (strategic relevance) | 5% | Does success here matter to the executive sponsor?

Workflows with a weighted score below 6.0 out of 10 (60% of the maximum) should be deferred. The highest-scoring workflow becomes the expansion candidate.
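As an illustration of the arithmetic, here is a minimal sketch using the weights from the table above; the candidate's ratings are invented for the example.

```python
# Weighted scoring sketch for the seven expansion criteria above. The weights
# mirror the table; the candidate's 1-10 ratings are hypothetical examples.

WEIGHTS = {
    "data_adjacency": 0.20,
    "system_adjacency": 0.15,
    "stakeholder_adjacency": 0.15,
    "business_impact": 0.20,
    "process_maturity": 0.15,
    "change_capacity": 0.10,
    "executive_visibility": 0.05,
}

def weighted_score(ratings: dict) -> float:
    """Weighted average of 1-10 ratings; maximum is 10.0, so 6.0 = 60%."""
    return sum(weight * ratings[criterion] for criterion, weight in WEIGHTS.items())

candidate = {
    "data_adjacency": 8, "system_adjacency": 7, "stakeholder_adjacency": 9,
    "business_impact": 6, "process_maturity": 5, "change_capacity": 7,
    "executive_visibility": 4,
}

score = weighted_score(candidate)  # 6.85
print(f"{score:.2f} / 10 -> {'advance' if score >= 6.0 else 'defer'}")
```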

Week 3-4: Establish baselines and draft the expansion charter. Apply the same baseline methodology from the first pilot — cost per transaction, hours per cycle, error rate — to the selected workflow. Draft a one-page expansion charter (one possible structured form is sketched after this list) with:

  • The specific workflow being augmented
  • Three KPIs with numeric targets (informed by first-pilot actuals, not vendor projections)
  • Named workflow owner and champion (from the first pilot’s cohort if possible)
  • 90-day checkpoint with explicit continue/kill criteria
  • Budget that includes training, integration, and the productivity dip period
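For teams that prefer the charter as a machine-checkable artifact rather than a document, a minimal sketch of one possible form follows. Every field name and value is an invented example; the required elements are the five bullets above.

```python
# Illustrative representation of the one-page expansion charter as structured
# data. Field names and example values are hypothetical, not a prescribed schema.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class KPI:
    name: str
    baseline: float   # measured before deployment, same methodology as pilot
    target: float     # informed by first-pilot actuals, not vendor projections
    unit: str

@dataclass
class ExpansionCharter:
    workflow: str
    owner: str
    champion: str            # ideally drawn from the first pilot's cohort
    checkpoint: date         # 90-day continue/kill decision date
    budget_usd: float        # includes training, integration, productivity dip
    kpis: list = field(default_factory=list)

charter = ExpansionCharter(
    workflow="Vendor payment reconciliation",
    owner="AP Manager",
    champion="Invoice-pilot lead",
    checkpoint=date(2026, 9, 1),
    budget_usd=85_000,
    kpis=[
        KPI("Hours per reconciliation cycle", baseline=22.0, target=14.0, unit="hours"),
        KPI("Error rate", baseline=4.1, target=2.0, unit="%"),
        KPI("Cost per transaction", baseline=3.80, target=2.60, unit="USD"),
    ],
)
```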

Days 31-90: Controlled Deployment

The deployment pace should match the organization’s demonstrated — not theoretical — change absorption rate. TechClass research (December 2025) finds only 23% of organizations took more than a year to go from pilot to scale; the large majority compress the jump into under twelve months, which, set against the change saturation data above, suggests most move too fast, not too slow. The mid-market speed advantage is real — top performers hit 90-day pilot-to-production (MIT NANDA, 2025) — but that timeline assumes a focused, single-workflow expansion, not a multi-front deployment.

Deploy using the same cohort-first model from the first pilot: 15-30 users, structured support, weekly feedback loops. The critical difference is leveraging first-pilot champions as peer coaches. Citi’s model of scaling through 4,000 accelerators across 182,000 employees works because existing practitioners train the next wave — not because a central team tries to support everyone simultaneously.

The 30-60-90 check cadence:

  • Day 30: Are baseline metrics trending in the right direction? If not, diagnose before proceeding.
  • Day 60: Has adoption stabilized above 60% of the pilot cohort? Are champions reporting genuine productivity gains (not just usage)?
  • Day 90: Has the workflow met or exceeded 2 of 3 KPIs? Formal continue/kill decision (a minimal decision sketch follows this list).
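The Day 90 gate lends itself to a mechanical check. A minimal sketch, assuming each KPI carries a numeric target and an observed value; all names and figures here are hypothetical:

```python
# Day 90 continue/kill sketch: continue only if the workflow met or exceeded
# 2 of 3 KPI targets. KPI names and numbers are illustrative.

def day90_decision(results, lower_is_better):
    """results maps KPI name -> (target, observed); returns a decision."""
    met = 0
    for name, (target, observed) in results.items():
        ok = observed <= target if name in lower_is_better else observed >= target
        if ok:
            met += 1
    return "continue" if met >= 2 else "kill"

decision = day90_decision(
    results={
        "hours_per_cycle": (14.0, 13.2),  # target met
        "error_rate_pct": (2.0, 2.4),     # target missed
        "cost_per_txn": (2.60, 2.45),     # target met
    },
    lower_is_better={"hours_per_cycle", "error_rate_pct", "cost_per_txn"},
)
print(decision)  # "continue": 2 of 3 targets met
```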

Days 91-180: Consolidation or Third Workflow

The 90-day checkpoint produces one of three outcomes:

Outcome 1: Success — both workflows producing measurable value. This is the foundation for a third workflow selection. Apply the same adjacency scoring, but now the “adjacent” calculation includes two reference points, not one. The third workflow should share infrastructure with either the first or second deployment. BCG’s data shows the 5% of future-built companies treat AI as a portfolio, not a collection of independent projects — the third workflow is where that portfolio discipline becomes critical.

Outcome 2: Partial success — the second workflow shows promise but has not hit targets. Diagnose before expanding. The five root causes from the post-mortem framework apply: leadership misalignment, data quality gaps, technology obsession, infrastructure underinvestment, or loss of executive sponsorship. Expanding to a third workflow while the second is underperforming is the classic mistake that triggers pilot fatigue.

Outcome 3: Failure — the second workflow did not produce measurable improvement. Run the post-mortem. The most common finding is a violation of the adjacency principle — the organization selected a high-theoretical-ROI workflow that required entirely new infrastructure, data, or stakeholder relationships. The corrective action is not to abandon expansion but to select a more adjacent workflow and try again.

The Cadence Trap: Too Fast vs. Too Slow

The expansion cadence is a strategic choice with consequences in both directions.

Too fast looks like: launching three AI workflows simultaneously across different departments, each with separate champions, separate data requirements, and separate change management needs. The result is the change saturation problem — 73% of organizations already at their ceiling, and each additional concurrent initiative reduces success probability for all of them. MIT Sloan’s finding that implementation cycles are 80% longer across three or more business units is the mathematical expression of this constraint.

Too slow looks like: a single successful pilot followed by six months of “strategic planning” for the next deployment. Executive attention drifts. Champions return to their day jobs. The institutional learning from the first pilot decays. As noted earlier, the median time to executive sponsorship dropout is six months — a slow expansion cadence hands the program’s most critical asset to entropy.

The right cadence for a 200-500 person company:

Phase | Timeline | Focus | Constraint
First pilot | Days 1-90 | One workflow, 15-30 users | Build infrastructure and institutional learning
Expansion assessment | Days 91-120 | Score adjacent workflows, extract pilot lessons | Do not skip this — rushed selection is the #1 expansion failure
Second workflow | Days 121-210 | One adjacent workflow, 15-30 new users | Champions from first pilot coach second cohort
Consolidation checkpoint | Day 210 | Both workflows producing value? | If not, diagnose before proceeding
Third workflow (if warranted) | Days 211-300 | Portfolio-adjacent workflow | Only if change capacity and infrastructure support it

This cadence produces 2-3 operational AI workflows in 10 months — consistent with the mid-market top-performer timeline (90 days per workflow with assessment gaps) and sustainable within the change capacity of a 200-500 person organization.

Key Data Points

Metric | Data | Source
Enterprises achieving first-pilot efficiency targets | 68% within 12 months | McKinsey Global AI Survey, 2026
Organizations achieving enterprise-wide AI financial impact | 31% | Deloitte Global AI Leadership Study, 2026
Implementation time increase across 3+ business units | 80% longer | MIT Sloan Management Review, 2026
Organizations at or near change saturation | 73% | Prosci, 2025
Average simultaneous enterprise changes per employee | 10 | Gartner
Fatigued employees seeking new roles | 54% | Prosci, 2025
Success rate with pre-defined metrics vs. without | 54% vs. 12% | Pertama Partners, 2026
Success rate with active executive sponsorship vs. without | 68% vs. 11% | Pertama Partners, 2026
High performers redesigning workflows (vs. others) | 55% vs. 20% (2.8x) | McKinsey State of AI, n=1,993, 2025
Mid-market companies encountering implementation challenges | 92% | RSM AI Survey, n=966, 2025
Mid-market top performers pilot-to-production | 90 days | MIT NANDA, 2025
Organizations with shared customer data platforms | 71% | Menlo Ventures, 2025
Organizations that adapt change plans | 4x more likely to succeed | Gartner, n=313, July 2025

What This Means for Your Organization

If the first AI pilot is working, the temptation is to expand everywhere simultaneously. The data says the opposite: disciplined, adjacent expansion produces compound returns; scattered deployment produces pilot fatigue and executive disillusionment.

The practical sequence is clear. Start by extracting every lesson from the first pilot — not just the KPIs, but the adoption patterns, integration friction, and data quality surprises that will determine whether the second workflow succeeds or stalls. Score candidate workflows on adjacency first, impact second. Expand to one additional workflow at a time, using first-pilot champions as coaches rather than attempting to build new capability from scratch in each department.

The companies that reach enterprise-wide AI value — the 5% in BCG’s data, the 6% in McKinsey’s — treat each workflow as a building block in a portfolio, not as an isolated experiment. The second workflow decision is where that portfolio discipline either begins or never materializes. If this transition point raises questions specific to your organization’s expansion path, I’d welcome the conversation — brandon@brandonsneider.com.

Sources

  1. McKinsey, “The State of AI,” March 2025 (n=1,993 organizations). Independent survey. High performers 2.8x more likely to redesign workflows. Only 6% of organizations attribute 5%+ EBIT to AI. Credibility: High — large sample, established methodology, annual repetition. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai

  2. BCG, “Build for the Future: The Widening AI Value Gap,” September 2025 (n=2,000+ organizations). 5% “future-built” achieve 1.7x revenue growth and 3.6x three-year TSR vs. laggards. 60% generate no material value. Credibility: High — large sample, specific financial metrics, third-party validated. https://www.bcg.com/publications/2025/are-you-generating-value-from-ai-the-widening-gap

  3. Deloitte, “State of AI in the Enterprise 2026,” October 2025 (n=3,235, 24 countries). 25% have moved 40%+ of experiments to production; 54% expect to reach that level in 3-6 months. 25% report transformative AI effects, double from prior year. Credibility: High — large sample, multi-country, established methodology. https://www.deloitte.com/us/en/what-we-do/capabilities/applied-artificial-intelligence/content/state-of-ai-in-the-enterprise.html

  4. Prosci, “Best Practices in Change Management,” 2025. 73% of organizations at or near change saturation point. 54% of fatigued employees seek new roles; 48% report increased stress. Credibility: High — Prosci is the recognized authority on organizational change management research. https://www.prosci.com/change-saturation

  5. Pertama Partners, AI Project Outcomes Database, 2026 (n=2,400+ projects). Median abandoned project consumed 11 months and $4.2M. Pre-defined success metrics produce 54% vs. 12% success rate. Credibility: Moderate-high — large dataset, specific to AI projects, methodology not fully published.

  6. RSM, “Middle Market AI Survey,” February-March 2025 (n=966, margin ±3.2%). 91% of mid-market firms use generative AI; 92% encounter implementation challenges; 41% cite data quality as top barrier. Credibility: High — focused on mid-market segment, robust methodology. https://rsmus.com/newsroom/2025/middle-market-firms-rapidly-embracing-generative-ai-but-expertise-gaps-pose-risks-rsm-2025-ai-survey.html

  7. MIT NANDA, “The GenAI Divide: State of AI in Business 2025,” July 2025 (n=300+, 52 interviews). 95% of AI pilots produce no measurable P&L impact. Mid-market top performers hit 90-day pilot-to-production. External partnerships 2x more likely to reach production. Credibility: High — academic rigor, MIT affiliation, primary interviews. https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Business_2025_Report.pdf

  8. McKinsey Global AI Survey, 2026. 68% of enterprises achieve expected efficiency gains in first 12 months. Credibility: High — continuation of established survey series.

  9. Gartner, “Top Change Management Trends for CHROs in the Age of AI,” March 2026 (n=110 CHROs; n=313 senior respondents). 78% agree workflows and roles must change for AI value. Organizations that adapt change plans based on employee response are 4x more likely to succeed. Credibility: High — Gartner proprietary research, specific to change management. https://www.gartner.com/en/newsroom/press-releases/2026-3-16-gartner-identifies-top-change-management-trends-for-chros-in-age-of-ai

  10. MIT Sloan Management Review, 2026. Implementation cycles 80% longer across 3+ business units vs. single-unit deployments. Credibility: High — MIT affiliation, peer-reviewed publication.

  11. TechClass, “From Pilot to Scale: How Mid-Sized Companies Can Successfully Expand AI Adoption,” December 2025. Only 23% of organizations took more than a year to go from pilot to scale. GenAI adoption rates by function: IT 75%, Marketing 64%, Customer Service 59%, Finance 58%. Credibility: Moderate — synthesis of multiple sources, clear citations. https://www.techclass.com/resources/learning-and-development-articles/from-pilot-to-scale-how-mid-sized-companies-can-successfully-expand-ai-adoption

  12. Menlo Ventures, “State of Generative AI in the Enterprise,” 2025. 71% of organizations have shared customer data platforms; 89% have supporting cloud infrastructure. Credibility: Moderate — VC firm with enterprise AI investment focus, methodology not fully disclosed. https://menlovc.com/perspective/2025-the-state-of-generative-ai-in-the-enterprise/

  13. Gartner, “AI Infrastructure Forecast,” 2026. 45% of enterprises will restructure AI deployments to optimize inference costs by year-end 2026. Credibility: High — Gartner forecast methodology.

  14. Dave Goyal, “The AI Plateau,” 2026. Synthesis of McKinsey, Deloitte, Gartner, and Accenture data on ROI flattening after initial pilot success. 64% struggle with fragmented data estates. Credibility: Moderate — secondary analysis, well-cited. https://davegoyal.com/the-ai-plateau-why-roi-flattens-after-initial-wins/


Brandon Sneider | brandon@brandonsneider.com | March 2026