The Performative AI Problem: How to Tell Whether Your Organization Is Actually Capturing Value — Or Just Going Through the Motions

Brandon Sneider | March 2026


Executive Summary

  • McKinsey’s 2025 State of AI survey (n=1,993 across 105 nations) finds 88% of companies report using AI in at least one function — but only 6% qualify as “AI high performers” attributing 5%+ of EBIT to AI. The gap between adoption and impact has never been wider.
  • HBR’s cross-national study (n=2,000+ respondents, Fall 2025) identifies the core paradox: high-anxiety employees use AI more than low-anxiety colleagues (65% of tasks vs. 42%) but score 4.6 on a 5-point resistance scale, compared to 2.1 for the low-anxiety group. Usage does not equal buy-in.
  • ManpowerGroup’s 2026 Global Talent Barometer (n=14,000 workers, 19 countries) quantifies the confidence collapse: regular AI usage jumped 13% in 2025 while confidence in the technology plummeted 18%. People are using AI because they have to, not because they believe it works.
  • Writer’s 2025 enterprise AI adoption report finds 31% of employees have actively sabotaged their company’s AI rollout — rising to 41% among millennial and Gen Z workers — and two-thirds of executives say AI adoption has created tension and division.
  • An NBER study (n=6,000 executives, February 2026) across the U.S., U.K., Germany, and Australia finds nearly 90% of firms report AI had zero impact on employment or productivity over three years, despite 66% of executives personally using AI tools.

The Usage Trap: Why Your Dashboard Is Lying to You

The most dangerous metric in enterprise AI is the adoption rate. A 97% adoption figure — the kind that makes vendor QBRs look excellent and internal dashboards glow green — tells leadership almost nothing about whether AI is producing business value.

Atlassian’s head of AI go-to-market, Ben Ostrowski, puts it directly: “The early AI ROI market was full of ‘saved 30 minutes’ stats. Most of the time, that was just reinvested back into admin tasks or correcting AI output.” Atlassian’s own 2025 developer experience study confirms the paradox: 68% of developers report saving 10+ hours per week with AI tools, yet 50% say they lose an equivalent amount to organizational inefficiencies that AI has not touched. The speed gains evaporate into the same bottleneck that existed before deployment.

Microsoft’s Katy George acknowledged the problem publicly, shifting the company’s internal measurement from adoption rates to performance outcomes. Zapier’s Brandon Sammut called high adoption rates “meaningless for business results.” These are not critics. They are the companies selling the tools.

The data confirms the pattern at scale. BCG’s September 2025 research finds 60% of companies generate “no material value” from AI investments, with only 5% creating substantial value at scale. Deloitte’s 2026 State of AI in the Enterprise (n=3,235 senior leaders) reports that 37% of organizations use AI “at a surface level, with little or no change to existing processes.” Only 34% are using AI to create new products, reinvent processes, or transform their business model.

The adoption dashboard is measuring activity. The P&L is measuring impact. For most organizations, those two numbers have nothing to do with each other.

The Anatomy of Performative Adoption

HBR’s Fall 2025 study offers the most granular view of what performative adoption looks like inside organizations. The researchers surveyed over 2,000 respondents across the U.S. and Europe — spanning healthcare, technology, finance, manufacturing, retail, education, and hospitality — and found four distinct employee profiles:

Profile | % of Workforce | AI Belief | Risk Perception | Behavior
Visionaries | ~40% | High | Low | Genuine engagement; drive adoption forward
Disruptors | ~30% | High | High | Use AI heavily but resist it; compliance-driven
Endangered | ~20% | Low | High | Avoid when possible; minimize interaction
Complacent | ~10% | Low | Low | Indifferent; psychologically distant

The “Disruptors” — 30% of the workforce — are the performative adoption engine. They believe AI has business value. They also fear it threatens their career, their identity, and their expertise. The result: they use AI more than almost anyone else, but their usage is self-protective compliance rather than authentic commitment. They are performing adoption to avoid being seen as resisters, not because the tool is making their work better.

The numbers are stark. High-anxiety employees report 65% of their tasks involve AI, compared to 42% for low-anxiety employees. But the high-anxiety group scores 4.6 on a 5-point resistance scale versus 2.1 for the low-anxiety group. More usage. More resistance. The dashboard shows green. The reality is red.

Industry context matters. Finance and technology workers show 48% higher anxiety than the baseline — the people most likely to be building and deploying AI tools are also the most threatened by them. Professional services workers (lawyers, consultants) face acute identity threat: their value proposition is expertise-based, and AI challenges the premise that experience is irreplaceable.

The Confidence Collapse

ManpowerGroup’s 2026 Global Talent Barometer, surveying nearly 14,000 workers across 19 countries, reveals a pattern that no usage dashboard will capture: the more people use AI, the less they trust it.

Regular AI usage jumped 13% in 2025. Confidence in the technology dropped 18% over the same period. The divergence is sharpest among the most experienced workers: baby boomers registered a 35% confidence decline, and Gen X workers saw a 25% drop. The very people whose domain expertise should make them the most effective AI users are the ones losing faith the fastest.

This is not a training problem. It is a trust problem. Fifty-six percent of workers globally report receiving no recent skills development despite being told to use AI daily. They are handed tools without context, support, or explanation of how AI fits into their evolving role. Sixty-four percent of surveyed workers are now “job hugging” — staying in roles despite burnout and dissatisfaction because they fear the alternative is automation of their position.

The NBER’s February 2026 study of 6,000 executives across four countries puts the macro picture in perspective: nearly 90% of firms report AI had zero measurable impact on employment or productivity over the past three years. Executives use AI an average of 1.5 hours per week, and 25% do not use it at all. The economist Robert Solow’s 1987 observation — “You can see the computer age everywhere but in the productivity statistics” — applies with uncomfortable precision to AI in 2026.

The Sabotage Signal

Writer’s 2025 enterprise AI adoption report identified a behavior that most executives would rather not discuss: 31% of employees report actively sabotaging their company’s AI rollout. The number rises to 41% among millennial and Gen Z workers.

Sabotage takes specific forms: refusing to use AI tools, intentionally generating low-quality outputs to discredit the technology, or avoiding training entirely. Writer’s chief strategy officer Kevin Chung attributes it not to technophobia but to frustration: “There’s so much pressure to get it right, and then when you’re handed something that doesn’t work, you get frustrated.”

The organizational impact is measurable. Two-thirds of executives report that AI adoption has created tension and division within their organization, with 42% saying it is “tearing their company apart.” This is not a technology failure. It is a deployment failure — the result of mandating adoption without redesigning work.

The cautionary cases are accumulating. IgniteTech’s CEO eliminated 80% of the workforce after employees resisted mandatory AI adoption, achieving 75% EBITDA margins — and destroying institutional knowledge that took decades to build. Coinbase mandated all engineers use AI coding tools within one week, with CEO oversight for non-compliance. These approaches produce compliance. They do not produce value.

The Performative Adoption Audit: Five Diagnostic Questions

The question for a CEO or COO 90 days into deployment is not “are people using AI?” but “is AI changing how work gets done?” The distinction requires looking beyond the usage dashboard to five diagnostic dimensions:

1. Usage Depth vs. Usage Breadth

What to measure: Not how many employees have accessed the tool, but how many have changed a workflow because of it. Worklytics and similar platforms distinguish between activation (logged in once), engagement (regular use), and integration (tool is embedded in daily work processes). Most organizations measure the first. Few measure the third.
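The activation/engagement/integration distinction can be made concrete with a small classifier over session logs. A minimal sketch, with the understanding that the tier names and day-count thresholds below are illustrative assumptions, not Worklytics definitions:

```python
from datetime import date, timedelta

def classify_user(session_dates, today):
    """Bucket one user by usage depth over the trailing 30 days.
    Illustrative thresholds: 20+ active days ~ embedded in daily work
    (integration), 8+ active days ~ regular use (engagement),
    anything else ~ activation only."""
    if not session_dates:
        return "never activated"
    active_days = {d for d in session_dates if 0 <= (today - d).days < 30}
    if len(active_days) >= 20:
        return "integration"
    if len(active_days) >= 8:
        return "engagement"
    return "activation"

today = date(2026, 3, 1)
daily_user = [date(2026, 2, 1) + timedelta(days=i) for i in range(25)]
print(classify_user(daily_user, today))          # integration
print(classify_user([date(2025, 6, 1)], today))  # activation
```

Rolled up across the organization, this yields the ratio leadership actually needs: integrated users as a share of activated users, rather than raw logins.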

The benchmark: Jellyfish’s 2025 data across engineering organizations shows 90% of teams now use AI tools, but only companies where AI generates 50%+ of code see measurable cycle time reduction (24%, from 16.7 to 12.7 hours). Adoption without depth is noise.
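The quoted reduction follows directly from the two cycle times, and it is worth verifying the arithmetic before a number like this goes into a board deck:

```python
before_hours = 16.7  # median cycle time before deep AI adoption (Jellyfish, 2025)
after_hours = 12.7   # median cycle time where AI generates 50%+ of code
reduction_pct = (before_hours - after_hours) / before_hours * 100
print(round(reduction_pct))  # 24
```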

The audit question: “How many workflows have been redesigned since AI deployment — not just accelerated, but structurally changed?”

2. The Time Reinvestment Test

What to measure: Where “saved time” actually goes. Atlassian’s paradox — developers save 10 hours per week with AI but lose 10 hours to organizational inefficiencies — reveals the most common failure mode: AI speeds up one step in a workflow, but the bottleneck simply moves downstream.

The audit question: “If employees are saving time with AI, can they point to a specific higher-value activity that now fills that time? Or did the meetings, approvals, and context-switching absorb it?”

3. The Confidence-Usage Gap

What to measure: Whether employees trust AI enough to rely on its output without extensive manual verification. ManpowerGroup’s data shows a 13% usage increase paired with an 18% confidence decline. If employees are spending as much time checking AI output as they would have spent doing the work manually, the net productivity gain is zero or negative.
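The verification burden can be folded into a single per-task number. A minimal sketch using hypothetical time inputs, purely to illustrate how the net gain goes to zero or negative:

```python
def net_minutes_saved(manual_min, ai_draft_min, verify_min):
    """Net time gained per task: the manual baseline minus the cost of
    producing an AI draft plus the human time spent checking and fixing it."""
    return manual_min - (ai_draft_min + verify_min)

# Trusted output: 60-minute task, 5-minute prompt, 15 minutes of review.
print(net_minutes_saved(60, 5, 15))  # 40
# Untrusted output: verification takes as long as doing the work manually.
print(net_minutes_saved(60, 5, 60))  # -5
```

If the second case dominates the task mix, the dashboard's "time saved" line is an artifact of never counting verification time.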

The audit question: “What percentage of AI-generated output goes directly into production vs. requires significant manual revision? Has that ratio improved since deployment?”

4. The Emotional Temperature

What to measure: HBR’s framework identifies industry-shaped anxiety as the leading predictor of performative adoption. Finance and tech workers (48% above baseline anxiety) will show high usage and high resistance simultaneously. Professional services workers will be skeptical of AI’s ability to replicate judgment-based work.

The audit question: “Do employees describe AI as ‘something I use’ or ‘something that changed how I work’? The difference between those two statements is the difference between compliance and integration.”

5. The P&L Connection

What to measure: Whether AI-driven activity connects to business outcomes. McKinsey’s data is definitive: 88% report AI adoption, but only 39% attribute any EBIT impact — and most of those estimate less than 5% of EBIT. The 6% who qualify as high performers share one trait: they redesigned workflows before deploying technology, not after.

The audit question: “Can you draw a direct line from AI usage to a revenue increase, cost reduction, or cycle time improvement that shows up in the financial statements — not just in a vendor report?”

Key Data Points

Metric | Finding | Source
Adoption vs. impact gap | 88% report AI use; only 6% see 5%+ EBIT impact | McKinsey State of AI 2025 (n=1,993)
Performative usage signal | High-anxiety employees: 65% AI task involvement, 4.6/5 resistance | HBR cross-national study (n=2,000+, Fall 2025)
Confidence collapse | Usage up 13%, confidence down 18% | ManpowerGroup GTB 2026 (n=14,000, 19 countries)
Active sabotage | 31% of employees (41% of millennials/Gen Z) | Writer enterprise AI report, 2025
No productivity impact | ~90% of firms report zero AI productivity gain over 3 years | NBER (n=6,000 executives, February 2026)
Surface-level adoption | 37% of organizations use AI with “little or no change” to processes | Deloitte State of AI 2026 (n=3,235)
Value creation | 60% generate no material value; only 5% create value at scale | BCG Build for the Future, September 2025
Time savings paradox | 68% of developers save 10+ hrs/week; 50% lose 10+ hrs to inefficiencies | Atlassian Developer Experience 2025
Training gap | 56% of workers received no AI skills development despite mandated use | ManpowerGroup GTB 2026
Executive perception gap | 76% of executives think employees are enthusiastic; only 31% are | HBR executive survey (n=1,400 U.S. employees)

What This Means for Your Organization

The performative adoption problem is not an edge case. It is the default outcome. Deloitte’s data shows 37% of organizations are using AI with no workflow change. HBR’s data shows 30% of the workforce is using AI heavily while actively resisting it. ManpowerGroup’s data shows the entire workforce is losing confidence as usage rises. When three independent studies with a combined sample of nearly 20,000 respondents converge on the same pattern, the pattern is real.

The executives who commissioned AI deployments are typically the last to learn that adoption has become theatrical. HBR’s 2025 study found a 45-percentage-point gap between executives who believe employees are enthusiastic about AI (76%) and individual contributors who actually are (31%). The dashboard confirms what leadership wants to believe. The five diagnostic questions above reveal what is actually happening.

Running a performative adoption audit does not require new technology or external consultants. It requires a CEO, COO, or CIO willing to spend 48 hours asking the right questions: interviewing a cross-section of 15-20 employees about what actually changed in their work, comparing workflow maps from before and after deployment, and checking whether “time saved” shows up in any financial metric that matters. The organizations in the 6% — McKinsey’s high performers — did this reflexively. They treated AI deployment as a business transformation, not a software rollout.

If this diagnostic surfaced questions specific to your organization’s AI deployment, I’d welcome the conversation — brandon@brandonsneider.com

Sources

  1. McKinsey, “The State of AI: Global Survey 2025” (n=1,993 participants, 105 nations, June-July 2025). Independent consulting firm survey. High credibility for breadth; self-reported data typical of executive surveys. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai

  2. HBR, “Why AI Adoption Stalls, According to Industry Data” (n=2,000+ respondents, cross-national, Fall 2025; additional U.S.-only survey n=1,000, Spring 2025). Academic-quality research published in peer-reviewed business journal. High credibility; the 4.6 resistance score and four-profile framework are the most granular data available on performative adoption. https://hbr.org/2026/02/why-ai-adoption-stalls-according-to-industry-data

  3. HBR, “Leaders Assume Employees Are Excited About AI. They’re Wrong.” (n=1,400 U.S.-based employees, 2025). Independent survey with executive, middle management, and individual contributor stratification. High credibility for the perception gap analysis. https://hbr.org/2025/11/leaders-assume-employees-are-excited-about-ai-theyre-wrong

  4. ManpowerGroup, “Global Talent Barometer 2026” (n=14,000 workers, 19 countries, January 2026). Large-sample workforce survey from major staffing firm. High credibility for employment and confidence trends; potential bias toward temporary/contract workforce. https://investor.manpowergroup.com/news-releases/news-release-details/global-talent-barometer-2026-ai-use-accelerates-worker

  5. Writer, “2025 Enterprise AI Adoption Report” (2025). Vendor-funded research from enterprise AI platform. The 31% sabotage figure is self-reported and may understate the problem. Moderate credibility; treat as directional. https://writer.com/blog/enterprise-ai-adoption-survey-press-release/

  6. NBER, “AI Productivity Paradox Study” (n=6,000 executives across U.S., U.K., Germany, Australia, February 2026). Independent academic research from the National Bureau of Economic Research. High credibility; large sample across four countries provides the most rigorous macro-level assessment of AI’s productivity impact. https://fortune.com/2026/02/17/ai-productivity-paradox-ceo-study-robert-solow-information-technology-age/

  7. Deloitte, “State of AI in the Enterprise 2026” (n=3,235 senior leaders, August-September 2025). Major consulting firm survey with IT and business leader split. High credibility for enterprise adoption patterns; potential upward bias from self-selection. https://www.deloitte.com/us/en/what-we-do/capabilities/applied-artificial-intelligence/content/state-of-ai-in-the-enterprise.html

  8. BCG, “The Widening AI Value Gap” (September 2025). Independent consulting firm analysis. High credibility; the 5% value-at-scale figure aligns with McKinsey’s 6% high performer finding. https://www.bcg.com/publications/2025/are-you-generating-value-from-ai-the-widening-gap

  9. Atlassian, “State of Developer Experience 2025” (2025). Vendor research from collaboration platform company. Moderate credibility; developer-focused but the time-savings paradox is corroborated by Jellyfish data. https://www.atlassian.com/blog/developer/developer-experience-report-2025

  10. Jellyfish, “2025 AI Metrics in Review” (aggregated from engineering organization data, 2025). Vendor analytics platform data. Moderate-to-high credibility; based on observed behavioral data rather than self-reported surveys. https://jellyfish.co/blog/2025-ai-metrics-in-review/

  11. Charter Works, “Why Four Tech Companies Say Adoption Is the Wrong AI Metric” (2025). Interviews with Microsoft, Atlassian, Zapier, and Udemy AI leaders. Journalistic source. High credibility for named executive quotes; directional for company strategy shifts. https://www.charterworks.com/why-four-tech-companies-say-adoption-is-the-wrong-ai-metric/

  12. EQ4C Tools analysis of forced AI adoption (2025). Aggregation of public layoff data and adoption mandates. Moderate credibility; the IgniteTech and Coinbase cases are documented in multiple primary sources. https://tools.eq4c.com/the-corporate-ai-mandate-why-forcing-workers-to-adopt-ai-or-face-termination-is-backfiring/


Brandon Sneider | brandon@brandonsneider.com | March 2026