The Second Attempt: How to Re-Engage a Workforce That Watched Your First AI Initiative Fail

Brandon Sneider | March 2026


Executive Summary

  • 42% of companies abandoned the majority of their AI initiatives in 2025, up from 17% in 2024 (S&P Global Voice of the Enterprise, n=1,006, March 2025). Most will try again. The strategic restart is documented. The employee re-engagement problem is not.
  • Trust in company-provided AI fell 31% between May and July 2025 alone (Deloitte TrustID Index, ~60,000 U.S. employees, ongoing). For companies whose first deployment was poorly executed, trust never climbed in the first place. Usage of employer-provided AI tools declined 15% in the same period, while 43% of employees with AI access admit to using unapproved tools instead.
  • The belief-anxiety paradox makes re-engagement harder than first engagement. HBR’s cross-national study (n=2,000+, Fall 2025) finds 80% of employees experience strong concern about at least one AI-related threat. High-anxiety employees use AI more frequently (65% of tasks vs. 42% for low-anxiety workers) but resist it more intensely (4.6/5 resistance vs. 2.1/5). Fear drives compliance, not commitment. In a failed-experimenter organization, this dynamic is amplified by evidence.
  • The workforce psychology of a second attempt is fundamentally different from a first. Employees have formed conclusions: “AI doesn’t work here,” “management overpromised,” “I wasted effort last time.” The communication approach, pilot structure, and success-sharing cadence that work for a first deployment will backfire on a restart because they trigger pattern recognition — “here they go again.”
  • Companies that pilot with skeptics rather than enthusiasts, acknowledge failure publicly rather than rebranding it, and measure business outcomes rather than usage dashboards convert the second attempt from a credibility liability into a credibility asset. Cisco’s 3P Organization piloted with five skeptic-heavy teams and achieved 30% workflow augmentation across 24 workflows within ten weeks.

The Trust Contamination Problem

A late starter faces ignorance. A failed experimenter faces something harder: institutional memory of failure that has been processed into organizational folklore.

HBR’s Fall 2025 cross-national study (n=2,000+) provides the clearest evidence of how this processing works. The research identifies four employee profiles that emerge during AI adoption:

Profile | Share | Characteristics | After a failed first attempt
Visionaries | ~40% | High belief, low risk perception | Shrinks to ~25%. Some converted to Disruptors by the evidence of failure.
Disruptors | ~30% | High belief, high anxiety | Grows to ~40%. Still believe AI works, but now believe this company cannot execute it.
Endangered | ~20% | Low belief, high anxiety | Grows to ~25%. First failure confirmed their fears. “Told you so” is their operating stance.
Complacent | ~10% | Low belief, low anxiety | Stable. Disengaged before; disengaged now.

The critical shift: after a failed first attempt, the Disruptor segment — employees who believe in AI but distrust their organization’s ability to deploy it — becomes the largest group. These are not AI skeptics. They are organizational skeptics. The communication challenge is not convincing them AI works. It is convincing them that this time will be different, with evidence that does not yet exist.

Deloitte’s TrustID data puts a number on the velocity of trust erosion. The 31% decline in generative AI trust across all companies occurred in just two months. For a company that publicly championed AI, enrolled employees in training, asked them to change workflows, and then quietly abandoned the initiative, the trust decline is steeper and stickier. The organizational memory of “we tried that” has a half-life measured in years, not months.

The deeper problem is what HBR’s research calls “performative compliance” — the pattern where employees use AI to check boxes without changing how they work. In a restart scenario, performative compliance is the default because it is the rational response to uncertainty. If management might abandon this initiative too, the safe strategy is visible participation with minimal personal investment. Usage dashboards cannot detect this pattern. They measure activity, not commitment.

Why the Standard Playbook Backfires on a Restart

The typical AI deployment communication follows a predictable arc: executive announcement, vision narrative, training rollout, adoption metrics, success stories. Each of these triggers a different form of resistance when the organization has already failed once.

The executive announcement triggers pattern recognition. “Last time the CEO said this was transformational and it went nowhere.” BCG’s AI at Work survey (n=10,635, June 2025) finds that employee positivity about AI rises from 15% to 55% with strong leadership support — but “support” that lacks credibility because of prior failure actually accelerates cynicism. The executive’s enthusiasm becomes evidence of poor judgment rather than a signal of organizational commitment.

The vision narrative triggers the “overpromise” response. Employees who were told AI would free them from repetitive tasks and instead watched a poorly scoped pilot produce no measurable value have formed a calibrated expectation: leadership overpromises on AI. Any new vision narrative that sounds similar — even if the substance is different — will be filtered through this expectation.

Training rollouts trigger the “wasted effort” response. Employees who invested time in learning tools that were subsequently abandoned are not eager to invest again. BCG finds only 36% of employees believe their AI training is “enough.” For employees who completed training during a failed first attempt, the issue is not adequacy — it is perceived futility. “Why learn something that won’t stick?”

Usage dashboards trigger performative compliance. If the organization tracks logins and prompts, employees produce logins and prompts. The work does not change. The measurement creates the appearance of adoption without the substance.

Success stories trigger skepticism. Cherry-picked examples of one team’s AI wins — the standard internal communications approach — land differently in a failed-experimenter organization. Employees who experienced failure have evidence that contradicts the story. The success story feels like propaganda rather than proof.

The Five Re-Engagement Principles

The evidence points to five principles that distinguish successful re-engagement from repeating the same mistakes with better marketing.

Principle 1: Acknowledge the Failure — Specifically, Not Vaguely

HBR’s systematic experimentation research (January 2026) finds that organizations framing AI deployment as structured experiments with explicit hypotheses and measurement protocols — rather than as tool rollouts — convert skeptics more reliably because the framing validates their caution.

The re-engagement equivalent: publicly acknowledge what went wrong, with numbers.

Not: “We learned a lot from our first AI journey and are excited to take the next step.” But: “Our first AI investment produced $X in licensing and training costs and $0 in measurable business value. The root cause was [specific gap]. Here is what changes.”

The specificity matters. The Institute for Public Relations’ trust repair research identifies six components of organizational trust restoration after failure: investigation, accountability, corrective action, transparent reporting, stakeholder engagement, and monitoring. Vague acknowledgment satisfies none of them. Specific acknowledgment — naming the spend, naming the gap, naming the correction — satisfies all six simultaneously.

The S&P Global data provides the language. Among companies that abandoned AI initiatives, 38% cite data quality issues, 29% cite business case failure, 21% cite lost executive sponsorship, and 12% cite technical infeasibility. Only 12% of failures are technology failures. The other 88% are organizational. Saying so out loud — in a company meeting, not an email — signals that leadership has done the honest diagnosis that was missing the first time.

Principle 2: Pilot with Skeptics, Not Enthusiasts

The instinct is to restart with volunteers — the eager adopters who still believe. The evidence says the opposite.

Cisco’s 3P Organization case study (HBR, March 2026) ran a four-week pilot of a working-with-AI program with five skeptic-heavy teams. These teams surfaced operational insights that enthusiast teams missed: edge cases, workflow complications, quality concerns that only emerge when the user is looking for reasons the tool will fail. The pilot scaled to the broader organization in six weeks and achieved 30% average workflow augmentation across 24 reviewed workflows.

The mechanism is credibility transfer. When a known skeptic endorses the second attempt, that endorsement carries more weight than any executive mandate. Colgate-Palmolive’s experience illustrates the scaled version: mandatory AI training for all non-plant workers (14,000 employees) combined with a self-service AI Hub that produced 3,000-5,000 employee-created AI assistants. But Colgate started from a position of credibility. A failed experimenter must manufacture credibility through the people least likely to grant it — and their conversion becomes the strongest possible signal.

The practical implication: select the first restart pilot team based on skepticism level, not enthusiasm. Identify 8-12 employees in a single department who have vocally expressed doubt about AI. Invite them — not assign them — to a 4-week structured experiment with a single workflow. Give them permission to conclude “this doesn’t work” if the evidence supports it. The psychological safety to reach a negative conclusion makes a positive conclusion credible.
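One way to make that permission concrete is to pre-register the pilot the way HBR’s experimentation framing suggests: hypothesis, metric, baseline, and decision threshold written down before week one, so a negative verdict is a legitimate outcome rather than a failure to be spun. A minimal sketch in Python; the structure, field names, and numbers are hypothetical, not taken from any cited framework:

```python
from dataclasses import dataclass

@dataclass
class PilotExperiment:
    """A pre-registered restart pilot: hypothesis, metric, and decision
    rule are fixed before the pilot starts, so "this doesn't work" is a
    legitimate conclusion rather than a result to be reframed."""
    workflow: str
    hypothesis: str
    metric: str
    baseline: float   # measured before the pilot begins
    target: float     # improvement threshold agreed with the team
    weeks: int = 4

    def verdict(self, observed: float) -> str:
        """Lower is better for time/error metrics in this sketch."""
        if observed <= self.target:
            return "supported: scale to a second workflow"
        if observed < self.baseline:
            return "partial: improvement, but below the agreed threshold"
        return "not supported: the team's 'this doesn't work' stands"

# Hypothetical example: an invoice-processing pilot
pilot = PilotExperiment(
    workflow="invoice processing",
    hypothesis="AI-assisted coding cuts handling time per invoice",
    metric="minutes per invoice",
    baseline=22.0,
    target=16.0,
)
print(pilot.verdict(observed=14.0))  # -> supported: scale to a second workflow
```

The point of writing the verdict rule down in advance is that the “no” branch exists before anyone has an incentive to delete it.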

Principle 3: Measure Business Outcomes from Day One — and Show the Numbers

McKinsey’s State of AI data (n=1,933, 2025) is specific: workflow redesign is the strongest predictor of EBIT impact, not technology deployment. The metric shift is not just an analytical improvement — it is a trust signal.

Failed-experimenter organizations measured the wrong things the first time. Tool adoption rates. User logins. Number of prompts. These metrics created the illusion of progress without the reality of value. Employees noticed. When the initiative was abandoned despite “strong adoption numbers,” the implicit lesson was: the metrics were theater.

The restart must measure what employees care about: time recovered in their specific workflow, error rates before and after, cycle time reduction, quality improvement. These metrics are harder to collect but impossible to dismiss as theater because they describe changes employees can verify from their own experience.

Deloitte’s TrustID research quantifies the mechanism: employees whose managers conduct weekly check-ins about AI’s real impact on their work report trust scores approximately 60% higher than those without check-ins. The measurement is not just a management tool — it is a trust-building ritual.

Critically, the numbers must be shared even when they are modest. A restart pilot that reduces invoice processing time by 8 minutes per transaction is not a headline number. But for the accounts payable team that processes 200 invoices per week, it is 26.7 hours recovered per week, and it is a number they can verify against their own experience. Sharing real, verified, modest improvements builds more credibility than projecting dramatic returns that echo the overblown promises of the first attempt.
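The arithmetic is deliberately simple, the kind a pilot team can recompute against their own week. A minimal sketch in Python, using the illustrative invoice figures above rather than data from any cited study:

```python
# Worked example: converting a modest per-transaction saving into a
# team-level number employees can verify against their own workload.
# The figures are the illustrative ones from the paragraph above.
minutes_saved_per_invoice = 8
invoices_per_week = 200

hours_recovered = minutes_saved_per_invoice * invoices_per_week / 60
print(f"{hours_recovered:.1f} hours recovered per week")  # -> 26.7
```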

Principle 4: Address the Four Resistance Patterns Directly

HBR’s research segments employee responses to AI restart across two dimensions — belief in AI’s value and personal risk perception — producing the four profiles described above. Each requires a distinct communication approach.

For Disruptors (~40% after failure): These employees believe AI works but distrust organizational execution. The response is not more vision — it is visible structural change. What governance mechanism exists now that did not exist before? Who is accountable for business outcomes, not just technology deployment? What stops this from being quietly abandoned in six months? The structural signals answer their question: “What is different this time?”

For the Endangered (~25% after failure): These employees fear both AI and organizational change. They need protected experimentation — what HBR calls “digital playgrounds” where exploration carries no performance risk. Employees who receive hands-on AI training in low-risk settings report 144% higher trust versus those without training (Deloitte TrustID, 2025). The investment is not in the training content. It is in the psychological safety of the environment.

For residual Visionaries (~25% after failure): Deploy them as peer coaches, not as evangelists. The distinction matters. An evangelist says “AI is great.” A coach says “I struggled with this specific problem and here is what I tried.” Gallup’s data (n=19,043, May 2025) shows how much visible, trusted support matters: employees whose managers actively support AI use are 2.1x more likely to use it weekly. Peer coaching converts enthusiasm into credibility. Evangelism in a failed-experimenter context sounds like denial.

For the Complacent (~10%): External disruption evidence is more effective than internal motivation. Competitor adoption data, industry benchmarks, and the S&P Global 42% abandonment statistic itself — reframed as “42% of companies are falling behind, and here is what the ones who recovered did differently” — creates urgency without accusation.

Principle 5: Build the Success-Sharing Cadence Before the Pilot Ends

The failed first attempt typically followed a pattern: big announcement, gradual silence, quiet abandonment. The communication void between “we’re doing AI” and “we’ve stopped doing AI” is where organizational cynicism incubates.

The restart must reverse this pattern with structured, recurring visibility into outcomes — not enthusiasm, outcomes.

The 30-60-90 Communication Cadence:

Milestone | What to share | Format | Who delivers
Day 30 | Pilot team’s honest assessment: what works, what doesn’t, specific numbers | Department meeting, not email | Pilot team members (not executives)
Day 60 | Business outcome data from the first workflow: time saved, errors reduced, quality changes | All-hands with Q&A | Pilot team + manager, with executive present but not presenting
Day 90 | Go/no-go decision on scaling, with the data that informed it | Written memo + manager cascades | Executive sponsor, with pilot team co-signing

The critical design choices: pilot team members deliver the early updates, not executives. The executive’s role shifts from champion to listener for the first 60 days. This inversion signals that the organization values ground-truth evidence over leadership enthusiasm — the precise signal that Disruptors need to re-engage.

The Q&A at Day 60 is non-negotiable. Employees in a failed-experimenter organization have accumulated questions they were never invited to ask the first time. The questions themselves are diagnostic: “What happens when the budget gets cut?” reveals fear of abandonment. “How does this affect my performance review?” reveals concern about accountability asymmetry. “What if it doesn’t work for my workflow?” reveals awareness of generic approaches that failed before. Answering these questions publicly — with honesty about what is uncertain — builds more trust than any success metric.
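For teams that want the cadence to survive contact with the quarter, one option is to put the checkpoints on the calendar before day one. A minimal sketch, assuming a hypothetical April 2026 pilot start; the milestone descriptions simply restate the cadence table above:

```python
from datetime import date, timedelta

# Illustrative only: encoding the 30-60-90 cadence as dated checkpoints
# so the communication commitments exist before the pilot starts.
CADENCE = [
    (30, "Pilot team shares honest assessment at department meeting"),
    (60, "Pilot team + manager present outcome data at all-hands with Q&A"),
    (90, "Executive sponsor issues go/no-go memo, pilot team co-signing"),
]

def schedule(pilot_start: date) -> list[tuple[date, str]]:
    """Convert day offsets into concrete calendar dates."""
    return [(pilot_start + timedelta(days=d), what) for d, what in CADENCE]

for due, what in schedule(date(2026, 4, 1)):
    print(due.isoformat(), "-", what)
```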

What the Companies That Recovered Did Differently

The evidence from organizations that successfully re-engaged employees after AI failure converges on three structural differences from their first attempt.

They separated the restart from the failure. IKEA’s AI reskilling — converting 8,500 call-center workers into design advisors after deploying the Billie chatbot — succeeded because the company framed the change as role elevation, not technology adoption. Voluntary turnover dropped 20% over two years. The lesson for failed experimenters: the second attempt should not be positioned as “AI Round 2.” It should be positioned as solving a specific business problem that happens to involve AI. The technology is the method, not the narrative.

They gave employees ownership of the tools, not just access. Colgate-Palmolive’s AI Hub model — 14,000 employees trained, 3,000-5,000 custom AI assistants built by employees themselves — worked because the tools served problems employees identified, not problems leadership imagined. At mid-market scale, the equivalent is inviting the pilot team to select their own workflow target within a defined scope, rather than assigning a workflow from the top.

They measured trust alongside adoption. The HBR research is specific: usage metrics alone hide anxiety-driven compliance. Organizations that measured psychological safety and openness to experimentation alongside activity data caught the performative compliance pattern early and corrected for it. At mid-market scale, this means adding three questions to existing 1:1 check-ins: “Is AI making your work better or harder this week?” “What would make you more confident using these tools?” “What should we stop doing?”
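At mid-market scale, catching that pattern can be as simple as joining activity data with the check-in answers and looking at the high-activity, low-sentiment quadrant. A minimal sketch; the fields, scales, and thresholds are hypothetical, not Deloitte’s or HBR’s instruments:

```python
from dataclasses import dataclass

@dataclass
class CheckinSignal:
    """One employee-week of adoption data. All fields and thresholds
    are illustrative, not from any cited survey instrument."""
    name: str
    ai_sessions: int   # activity: tool sessions this week
    work_better: int   # 1-5: "Is AI making your work better this week?"
    confidence: int    # 1-5: "How confident are you using these tools?"

def flag_performative(rows: list[CheckinSignal],
                      min_sessions: int = 10,
                      max_sentiment: float = 2.5) -> list[str]:
    """High visible activity plus low self-reported benefit/confidence
    is the pattern usage dashboards miss: activity without commitment."""
    return [
        r.name for r in rows
        if r.ai_sessions >= min_sessions
        and (r.work_better + r.confidence) / 2 <= max_sentiment
    ]

rows = [
    CheckinSignal("A. Rivera", ai_sessions=14, work_better=2, confidence=2),
    CheckinSignal("B. Chen",   ai_sessions=12, work_better=4, confidence=5),
    CheckinSignal("C. Okafor", ai_sessions=2,  work_better=3, confidence=2),
]
print(flag_performative(rows))  # -> ['A. Rivera']
```

A flag here is not a problem to be managed; it only says the dashboard number and the human signal disagree, which is exactly the conversation the three check-in questions are meant to open.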

Key Data Points

Finding | Source | Date | Sample
42% of companies abandoned majority of AI initiatives (up from 17% in 2024) | S&P Global Voice of the Enterprise | March 2025 | n=1,006
46% of POCs scrapped before reaching production | S&P Global Voice of the Enterprise | March 2025 | n=1,006
Trust in company-provided AI fell 31% in two months | Deloitte TrustID Index | May-July 2025 | ~60,000 U.S. employees
Trust in agentic AI dropped 89% in same period | Deloitte TrustID Index | May-July 2025 | ~60,000 U.S. employees
Usage of employer-provided AI tools declined 15% | Deloitte TrustID Index | February-July 2025 | ~60,000 U.S. employees
43% of employees with AI access use unapproved tools | Deloitte TrustID Index | 2025 | ~60,000 U.S. employees
80% of employees experience strong concern about AI threats | HBR cross-national study | Fall 2025 | n=2,000+
High-anxiety employees: 65% AI task use, 4.6/5 resistance | HBR cross-national study | Fall 2025 | n=2,000+
93% of AI spending goes to technology; 7% to people | Deloitte | December 2025 | Not disclosed
Employees with hands-on AI training: 144% higher trust | Deloitte TrustID | 2025 | ~60,000
Weekly manager check-ins increase trust scores ~60% | Deloitte TrustID | 2025 | ~60,000
Employee AI positivity rises from 15% to 55% with leadership support | BCG AI at Work | June 2025 | n=10,635
Manager support: 2.1x weekly AI usage, 8.8x best-work impact | Gallup | May 2025 | n=19,043
Cisco 3P: 30% workflow augmentation from skeptic-first pilots in 10 weeks | HBR | March 2026 | 24 workflows
IKEA: 8,500 reskilled, 20% voluntary turnover reduction | Ingka Group | 2023 | 8,500 employees
Colgate: 14,000 trained, 3,000-5,000 custom AI assistants built | Retool / Colgate-Palmolive | 2025 | 14,000 employees
Only 36% of employees rate AI training as adequate | BCG AI at Work | June 2025 | n=10,635

What This Means for Your Organization

If the first AI attempt produced organizational skepticism rather than organizational capability, the path forward is not a louder launch with better tools. It is a quieter, more disciplined restart that earns credibility through evidence rather than asserting it through enthusiasm.

The 42% of companies that abandoned AI initiatives in 2025 will largely try again. Many will make the same mistake: treating the second attempt as a technology re-deployment rather than a trust repair operation. The employee psychology documented in the research is clear. Belief in AI’s value does not predict adoption when personal anxiety and organizational skepticism are high. The 80% of employees who carry AI-related concerns will participate enthusiastically or performatively depending on a single variable: whether they believe this time will produce outcomes that matter to their work, or another round of leadership enthusiasm followed by quiet abandonment.

The practical difference between a successful restart and a repeat failure is structural, not motivational. Acknowledge the first failure with specific numbers. Pilot with skeptics who will stress-test the approach. Measure business outcomes that employees can verify from their own experience. Share results through the people who did the work, not the people who approved the budget. Build the communication cadence before the pilot starts, not after the first success.

If the question of how to re-engage a workforce that has already formed conclusions about AI is one your leadership team is navigating, I would welcome the conversation — brandon@brandonsneider.com.

Sources

  1. S&P Global, “Voice of the Enterprise: AI & Machine Learning, Use Cases 2025” (March 2025, n=1,006 midlevel and senior IT and LOB professionals, North America and Europe). Independent analyst firm. Documented the surge from 17% to 42% initiative abandonment rate and 46% POC-to-production scrap rate. High credibility. https://www.spglobal.com/market-intelligence/en/news-insights/research/ai-experiences-rapid-adoption-but-with-mixed-outcomes-highlights-from-vote-ai-machine-learning

  2. Deloitte TrustID Index (ongoing, ~60,000 U.S. employees annually). Daily pulse survey tracking customer and employee trust sentiment. Documented the 31% generative AI trust decline (May-July 2025), 89% agentic AI trust decline, 15% usage decline, and 43% non-compliance rate. Deloitte is a consulting vendor; TrustID methodology is rigorous and independently published. Moderate-high credibility. Referenced via https://hbr.org/2025/11/workers-dont-trust-ai-heres-how-companies-can-change-that

  3. HBR, “Why AI Adoption Stalls, According to Industry Data” (February 2026, cross-national study, n=2,000+ respondents plus U.S.-only n=1,000). Independent editorial review. Documented the belief-anxiety paradox, four employee profiles, and the performative compliance pattern. Academic researchers; no vendor affiliation. High credibility. https://hbr.org/2026/02/why-ai-adoption-stalls-according-to-industry-data

  4. HBR, “Workers Don’t Trust AI. Here’s How Companies Can Change That” (November 2025). Deloitte-authored, HBR-published synthesis of TrustID and organizational trust data. Five trust-building strategies with case studies (IKEA, Walmart, Colgate-Palmolive, Intuit). Moderate-high credibility (Deloitte authorship; HBR editorial review). https://hbr.org/2025/11/workers-dont-trust-ai-heres-how-companies-can-change-that

  5. HBR, “How to Foster Psychological Safety When AI Erodes Trust on Your Team” (February 2026). Academic research synthesis on trust erosion from AI integration. Documented the human-AI oversight paradox, attribution uncertainty, and 3M’s after-action review approach. High credibility. https://hbr.org/2026/02/how-to-foster-psychological-safety-when-ai-erodes-trust-on-your-team

  6. BCG, “AI at Work 2025: Momentum Builds, but Gaps Remain” (June 2025, n=10,635, 11 countries). Third annual survey. Documented the leadership support effect (15% to 55% positivity), 36% training adequacy rating, and frontline adoption stall at 51%. BCG is an AI consulting vendor; findings consistent with independent sources. Moderate-high credibility. https://www.bcg.com/publications/2025/ai-at-work-momentum-builds-but-gaps-remain

  7. Gallup, “Manager Support Drives Employee AI Adoption” (May 2025, n=19,043 U.S. employees, ±1.1pp margin of error). Documented the 2.1x/8.8x manager support multipliers and 28% support baseline. Independent research. High credibility. https://www.gallup.com/workplace/694682/manager-support-drives-employee-adoption.aspx

  8. Ingka Group / IKEA, “AI and Remote Selling” (2023). Corporate announcement documenting 8,500 call-center worker reskilling, Billie chatbot deployment, 20% voluntary turnover reduction, and EUR 1.3B remote design sales. Corporate source; specific numbers publicly reported. Moderate credibility (corporate disclosure). https://www.ingka.com/newsroom/ai-and-remote-selling-bring-ikea-design-expertise-to-the-many/

  9. Retool / Colgate-Palmolive case study (2025). Documented the AI Hub platform, 14,000 employee training, 3,000-5,000 custom AI assistants, and bottom-up adoption model. Corporate case study; Retool is a vendor. Moderate credibility (vendor case study with named company and specific numbers). https://retool.com/blog/colgate-palmolive-enterprise-ai-adoption

  10. Institute for Public Relations, “Six Components of Repairing Trust After an Organization-Level Failure” (based on Gillespie & Dietz trust repair framework). Academic research on organizational trust restoration: investigation, accountability, corrective action, transparent reporting, stakeholder engagement, monitoring. High credibility (academic, peer-reviewed framework). https://instituteforpr.org/trust-repair-after-an-organization-level-failure/


Brandon Sneider | brandon@brandonsneider.com | March 2026