The Performative Adoption Diagnostic: How to Tell Whether Your AI Program Is Changing Work or Just Checking Boxes

Brandon Sneider | March 2026


Executive Summary

  • 56% of CEOs report getting “nothing” — neither increased revenue nor decreased costs — from AI investments (PwC 29th Global CEO Survey, n=4,701, September-November 2025). Yet 88% of organizations say they use AI regularly. The gap between reported adoption and measured business impact is where performative compliance lives.
  • High-anxiety employees use AI more frequently but with maximum resistance — 65% of their tasks involve AI (vs. 42% for low-anxiety workers), yet they score 4.6 out of 5 on resistance measures (HBR/BCG, n=2,000+, Fall 2025). Fear drives compliance. Compliance without commitment produces activity without outcomes.
  • 43% of employees with AI access bypass employer-provided tools entirely, using personal accounts or unapproved alternatives that the organization cannot see (Deloitte TrustID Workforce AI Report, Q3 2025). Corporate usage dashboards capture only what employees do with approved tools; for nearly half the workforce, the real AI activity happens elsewhere.
  • Only 13% of employees report AI deeply integrated into their daily workflows (BCG AI at Work, n=13,000+, June 2025). The other 87% sit on a spectrum between occasional use and deliberate avoidance — and most usage dashboards cannot distinguish between them.
  • A four-layer diagnostic — spanning dashboard analysis, workflow audits, structured interviews, and output quality assessment — takes 3-4 weeks, costs $15,000-$30,000, and produces the ground truth that separates genuine adoption from theater.

The Gap Between the Dashboard and the Floor

Every AI deployment generates usage data. License utilization, prompt volume, session duration, feature adoption rates — the metrics arrive in tidy dashboards that suggest progress. The problem is what they do not measure.

McKinsey’s 2025 State of AI survey (n=1,993, 105 countries) identifies the core disconnect: out of 25 organizational attributes tested, workflow redesign has the largest effect on whether AI produces EBIT impact. Yet only 6% of organizations qualify as AI high performers (5%+ EBIT attributable to AI). The remaining 94% report AI usage without proportional business results.

This is the performative adoption problem. Employees open the tool, complete the task the training showed them, and return to their actual workflow unchanged. The dashboard records a session. The P&L records nothing.

Three forces drive this pattern:

Fear-driven compliance. HBR and BCG’s cross-national study (n=2,000+, Fall 2025) identifies four employee archetypes: Visionaries (40%, high belief, low anxiety), Disruptors (30%, high belief, high anxiety), Endangered (20%, low belief, high anxiety), and Complacent (10%, low belief, low anxiety). The Disruptors — nearly a third of the workforce — use AI at higher rates than any other group, but their usage is defensive. They are performing compliance to protect their jobs, not redesigning their work. Their AI activity shows up on every dashboard. Their workflow change shows up on none.

The measurement-adoption paradox. ActivTrak’s analysis of 443 million hours of workforce data (n=163,638 employees, January 2023-December 2025) finds that after AI adoption, time in email increased 104%, chat and messaging 145%, and business management tools 94%. AI users’ average daily focused time declined by 23 minutes. The tools are generating more activity, not more productivity — and activity metrics are what most organizations measure.

Shadow adoption masking real behavior. Deloitte’s TrustID report (Q3 2025) documents a 15% decline in approved AI tool usage alongside rising noncompliance — 43% of workers bypass employer-provided tools. They find personal ChatGPT or Claude accounts “easier to access” and “better and more accurate.” The employee’s actual AI workflow happens outside corporate visibility. The approved tool collects dust while the dashboard shows declining engagement that managers interpret as resistance rather than preference.

The Four-Layer Diagnostic

Detecting performative adoption requires looking beyond the dashboard. The methodology below moves from quantitative signals to qualitative evidence, with each layer either confirming or contradicting the layer above.

Layer 1: Dashboard Forensics (Days 1-5)

Usage dashboards are not useless — they just answer the wrong question. They answer “who opened the tool?” when the real question is “who changed their work?” The forensic approach extracts diagnostic signal from existing data by looking for three patterns:

Pattern | What It Signals | How to Detect
--- | --- | ---
High frequency, low duration | Checkbox behavior: opening the tool briefly to register activity | Average session length under 3 minutes with daily logins
Uniform usage across roles | Training-script repetition: everyone doing the same thing | Identical feature adoption across departments with different workflows
Declining trajectory after training | Initial compliance followed by reversion | Usage peaks in weeks 2-3 post-training, declines 40%+ by week 8

BCG’s finding that only 36% of employees feel adequately trained (n=13,000+, June 2025) maps directly to the declining trajectory pattern. Employees who received fewer than five hours of training disproportionately show the peak-and-decline curve.
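
To make the forensic pass concrete, here is a minimal sketch that flags all three patterns from an exported session log, assuming a CSV with user_id, role, feature, session_start, and session_minutes columns. The file name, schema, and thresholds are illustrative assumptions, not any vendor's export format.

```python
import pandas as pd

# Assumed export format: one row per session, with illustrative columns
# user_id, role, feature, session_start, session_minutes.
log = pd.read_csv("usage_log.csv", parse_dates=["session_start"])

per_user = log.groupby("user_id").agg(
    avg_minutes=("session_minutes", "mean"),
    active_days=("session_start", lambda s: s.dt.date.nunique()),
)

# Pattern 1: checkbox behavior -- near-daily logins, sessions under 3 minutes.
span_days = max((log["session_start"].max() - log["session_start"].min()).days, 1)
per_user["checkbox"] = (per_user["avg_minutes"] < 3) & (
    per_user["active_days"] / span_days > 0.6
)

# Pattern 2: uniform usage -- the feature mix barely varies across roles.
mix = log.groupby(["role", "feature"]).size().unstack(fill_value=0)
mix = mix.div(mix.sum(axis=1), axis=0)          # feature share per role
uniform_usage = mix.std(axis=0).mean() < 0.05   # heuristic threshold

# Pattern 3: peak-and-decline -- weekly minutes peak in weeks 2-3, then fall
# 40%+ by week 8. (Weeks are counted from the start of the export here; in
# practice, anchor them to each cohort's training date.)
weekly = (
    log.set_index("session_start")
       .groupby("user_id")["session_minutes"]
       .resample("W").sum()
       .unstack(fill_value=0)
)
if weekly.shape[1] >= 8:
    per_user["peak_decline"] = weekly.iloc[:, 7] < 0.6 * weekly.iloc[:, 1:3].max(axis=1)
```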

Compare usage data against business outcome data at the team level, not the individual level. Faros AI’s analysis (n=10,000+ developers, 1,255 teams, 2025) demonstrates this principle: developers using AI assistants touched 47% more pull requests per day, but PR review time ballooned 91%. At the team level, there was no significant correlation between AI adoption and delivery improvement. The coding speed showed up in individual metrics. The review bottleneck absorbed the gains at the team level. This is the template for every function: individual activity can increase while organizational throughput stays flat.
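
The same principle as a sketch, with hypothetical file and column names: aggregate adoption to the team level before correlating it with delivery, because the individual-level numbers are the ones that flatter the program.

```python
import pandas as pd

# Hypothetical inputs: per-user AI adoption and per-team delivery outcomes.
users = pd.read_csv("user_adoption.csv")    # columns: user_id, team_id, ai_task_share
teams = pd.read_csv("team_delivery.csv")    # columns: team_id, cycle_time_change_pct

# Aggregate adoption to the team level before correlating with outcomes.
team_adoption = users.groupby("team_id")["ai_task_share"].mean()
merged = teams.set_index("team_id").join(team_adoption)

# Faros AI's finding, restated as a test: does this correlation differ from zero?
print(merged["ai_task_share"].corr(merged["cycle_time_change_pct"]))
```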

Layer 2: Workflow Delta Audit (Days 5-15)

This is the diagnostic’s center of gravity. Select 8-12 workflows across 3-4 departments. For each, document the pre-AI process and the current process side by side.

The audit answers three questions:

Did the workflow actually change? Not “does the employee use AI during the workflow” but “does the workflow have different steps, different handoffs, or different outputs than it did six months ago?” Cisco’s 3P organization reviewed 24 workflows after deployment and found 30% of activities augmented by AI (Cisco Newsroom, December 2025). That 30% represented genuine integration — the other 70% of activities continued unchanged. Most organizations do not measure this ratio.

Did the output quality change? An employee who uses AI to draft a client email but then rewrites the draft entirely has not adopted AI — they have added a step. An employee who uses AI to draft a client email and sends a version recognizably derived from the AI output has adopted AI in that workflow. The distinction requires reviewing actual outputs, not usage logs.

Did time allocation shift? HBR’s eight-month ethnographic study of a 200-person technology company (April-December 2025) found that AI users “felt more productive but did not feel less busy, and in some cases felt busier than before.” Task expansion — product managers writing code, researchers taking on engineering tasks — consumed the time AI freed. If calendar analysis and time studies show no reallocation of hours toward higher-value work, the “productivity gains” are evaporating into task creep.
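
One way to make the side-by-side documentation computable is to record each workflow as an ordered list of steps and diff the two versions. A minimal sketch, with hypothetical step names:

```python
# Before/after step lists for one audited workflow (hypothetical steps).
before = ["gather inputs", "draft proposal", "internal review",
          "partner review", "format", "send"]
after = ["gather inputs", "AI first draft", "draft proposal",
         "internal review", "partner review", "format", "send"]

eliminated = [s for s in before if s not in after]
added = [s for s in after if s not in before]

# The performative signature: AI steps added, nothing eliminated.
if any("AI" in s for s in added) and not eliminated:
    print(f"AI layered onto the old process; added {added}, removed nothing")
```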

Layer 3: Structured Diagnostic Interviews (Days 10-20)

Surveys produce social desirability bias. Employees report what they think managers want to hear about AI adoption. Structured interviews with behavioral anchoring cut through this.

Five diagnostic questions, each designed to distinguish genuine adoption from performance:

1. “Walk me through the last time you used [AI tool] for actual work. What were you doing before you opened it, and what did you do after?” Genuine adopters describe a specific, recent workflow with concrete details. Performative adopters describe the training exercise or give vague answers (“I use it for emails sometimes”).

2. “What did you stop doing because AI handles it now?” The single most diagnostic question. Real adoption eliminates or reduces previous tasks. Performative adoption adds AI on top of existing tasks without removing anything. If no employee in a team can name something they stopped doing, the workflow has not changed.

3. “When the AI output is wrong, what do you do?” Genuine adopters have developed correction patterns — they know the tool’s failure modes, have workarounds, and can describe specific instances. Performative adopters either say “it’s usually fine” (they are not reviewing output critically) or “I just do it myself” (they have abandoned the tool for consequential work).

4. “If [AI tool] disappeared tomorrow, how would your day change?” Genuine adopters describe specific disruptions to specific workflows. Performative adopters say “I’d figure it out” or “not much would change.” The strength of the response correlates with integration depth.

5. “Has [AI tool] changed how your team works together, or mostly how you work individually?” Organizational transformation requires team-level workflow change. BCG’s data shows employee-centric companies are 7x more likely to achieve AI maturity (BCG/HBR, n=1,400, 2025). If AI remains an individual productivity tool without changing team interactions, handoffs, or collaboration patterns, it has not moved from personal hack to organizational capability.

Conduct these interviews with 15-20 employees across levels and functions. Record the ratio of specific, behavioral answers to vague, generic ones. That ratio is the adoption reality score.
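
Scoring can stay simple: code each answer as specific (concrete, recent, behavioral) or vague, then take the ratio across all answers. A sketch with hypothetical codes:

```python
# One code per diagnostic question, per interviewee (hypothetical data).
interviews = {
    "emp_01": ["specific", "specific", "vague", "specific", "vague"],
    "emp_02": ["vague", "vague", "vague", "vague", "vague"],
    "emp_03": ["specific", "specific", "specific", "specific", "specific"],
}

codes = [c for answers in interviews.values() for c in answers]
score = codes.count("specific") / len(codes)
print(f"adoption reality score: {score:.0%}")   # 53% for this sample
```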

Layer 4: Output Quality Comparison (Days 15-25)

Pull 20-30 work outputs from each audited team — client deliverables, internal reports, proposals, analyses, code commits, whatever the team produces. Compare outputs from six months pre-deployment to current outputs on three dimensions:

Throughput change. Did the volume of completed work increase? If AI is genuinely integrated, the team should produce more finished outputs per unit of time. If volume is unchanged, AI is not contributing to capacity.

Quality change. Did the depth, accuracy, or sophistication of outputs improve? Or did outputs become more generic and less tailored — the “workslop” phenomenon HBR documented in September 2025, where AI-generated content floods organizations with volume that degrades collective output quality?

Cycle time change. Did the time from initiation to completion decrease? This is the cleanest operational metric. If a proposal that took five days still takes five days, AI has not changed the workflow regardless of what the dashboard shows.

The combination of these three measures against baseline produces a team-level adoption score that no usage dashboard can replicate.
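
A sketch of the baseline comparison with illustrative numbers: express each of the three measures as percent change against the six-month pre-deployment baseline.

```python
# Illustrative team-level numbers; quality_rating is a reviewer-assigned
# 1-5 score, cycle_days is initiation-to-completion time (lower is better).
baseline = {"outputs_per_month": 12, "quality_rating": 3.4, "cycle_days": 5.0}
current = {"outputs_per_month": 12, "quality_rating": 3.1, "cycle_days": 5.1}

for metric, base in baseline.items():
    change = (current[metric] - base) / base
    print(f"{metric}: {change:+.0%}")
# Flat or negative movement on all three means AI is not changing the work,
# whatever the usage dashboard shows.
```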

The Performative Adoption Scorecard

Synthesize the four layers into a single diagnostic framework for each team or department:

Dimension | Genuine Adoption (3) | Partial Adoption (2) | Performative Adoption (1)
--- | --- | --- | ---
Dashboard pattern | Sustained, varied usage with role-specific feature adoption | Moderate usage with some role differentiation | Uniform, declining, or checkbox-pattern usage
Workflow delta | Documented process changes; steps eliminated or restructured | AI added to existing process without eliminating steps | No observable workflow change despite tool access
Interview responses | Specific behavioral examples; can name eliminated tasks | Some concrete examples mixed with vague descriptions | Training-script answers; cannot name workflow changes
Output quality | Measurable improvement in throughput, quality, or cycle time | Improvement in one dimension, flat in others | No measurable change from pre-deployment baseline

A team scoring 8-12 has genuinely adopted AI. A team scoring 5-7 is partially adopted: AI is contributing, but the workflow has not been redesigned. A team scoring 4, the minimum, is performing compliance. The interventions differ dramatically: genuine adopters need expansion resources, partial adopters need workflow redesign support, and performative adopters need the training, psychological safety, and workflow redesign that should have preceded deployment.
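
A sketch of that synthesis, matching the bands above (the function name and messages are illustrative):

```python
# Four dimensions scored 1-3 each, summed to a 4-12 team score
# and mapped to the matching intervention.
def classify(dashboard: int, workflow: int, interviews: int, output: int) -> str:
    total = dashboard + workflow + interviews + output
    if total >= 8:
        return f"{total}: genuine adoption -- fund expansion"
    if total >= 5:
        return f"{total}: partial adoption -- workflow redesign support"
    return f"{total}: performative adoption -- restart with co-design"

print(classify(3, 2, 2, 3))   # "10: genuine adoption -- fund expansion"
print(classify(1, 1, 1, 1))   # "4: performative adoption -- restart with co-design"
```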

Key Data Points

Finding | Source | Date | Sample
--- | --- | --- | ---
56% of CEOs report zero revenue or cost improvement from AI | PwC 29th Global CEO Survey | Jan 2026 | n=4,701
Only 6% of organizations are AI high performers (5%+ EBIT) | McKinsey State of AI | Nov 2025 | n=1,993
Workflow redesign is the #1 predictor of AI EBIT impact (of 25 attributes) | McKinsey State of AI | Nov 2025 | n=1,993
13% of employees report AI deeply integrated into daily workflows | BCG AI at Work | Jun 2025 | n=13,000+
65% of high-anxiety employee tasks involve AI (vs. 42% low-anxiety) | HBR/BCG cross-national study | Fall 2025 | n=2,000+
43% of employees bypass employer-provided AI tools | Deloitte TrustID | Q3 2025 | Enterprise
AI tool usage declined 15% despite increased access | Deloitte TrustID | Q3 2025 | Enterprise
Employee trust in gen AI declined 38% (May-July 2025) | Deloitte TrustID | Q3 2025 | Enterprise
Email time +104%, chat +145%, focused time -23 min after AI adoption | ActivTrak Productivity Lab | Mar 2026 | n=163,638
Developers touched 47% more PRs but review time +91%; zero delivery improvement | Faros AI Paradox Report | 2025 | n=10,000+ devs
AI users felt more productive but not less busy; some felt busier | HBR ethnographic study | 2025 | ~200 employees
30% of Cisco 3P activities genuinely augmented by AI (24 workflows reviewed) | Cisco 3P Organization | Dec 2025 | Internal
Employee-centric companies 7x more likely to achieve AI maturity | BCG/HBR | 2025 | n=1,400

What This Means for Your Organization

The performative adoption problem is not an employee problem. It is a measurement problem. Most organizations deploy AI tools, measure license utilization, and report adoption percentages that bear no relationship to whether work has actually changed. The CEO sees a dashboard showing 70% adoption. The CFO sees no P&L impact. Neither understands why because neither is measuring the right thing.

The diagnostic methodology above takes 3-4 weeks and costs roughly $15,000-$30,000 for a 200-500 person company — a fraction of what most organizations spend on AI licenses in a single quarter. The output is the ground truth that every subsequent decision requires: which teams have genuinely integrated AI into redesigned workflows, which are partially there and need specific support, and which are performing compliance while their actual work remains unchanged.

The critical finding from BCG’s research is that co-created AI rollouts are twice as likely to produce genuine usage. The teams that designed their own AI workflows — rather than receiving top-down deployment — became real adopters. The teams that received tools without involvement in workflow redesign became performers. If the diagnostic reveals widespread performative adoption, the prescription is not more training or stricter mandates. It is going back to the teams, involving them in redesigning their own workflows, and measuring workflow change rather than tool usage.

If the diagnostic raises questions about what it would reveal in your specific organization — or how to act on what it finds — that is a conversation worth having: brandon@brandonsneider.com.

Sources

  1. PwC 29th Global CEO Survey (n=4,701, September-November 2025, presented January 2026). 56% of CEOs report zero revenue or cost improvement from AI. Independent global survey. Credibility: HIGH. https://www.pwc.com/gx/en/ceo-survey/2026/pwc-ceo-survey-2026.pdf

  2. McKinsey State of AI 2025 (n=1,993, 105 countries, June-July 2025). 6% of organizations are AI high performers; workflow redesign is the #1 predictor of EBIT impact. Credibility: MODERATE-HIGH (large survey, consulting firm with AI services). https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai

  3. BCG AI at Work 2025: Momentum Builds, but Gaps Remain (n=13,000+, June 2025). 13% deep integration; 36% feel adequately trained; 51% frontline usage stagnation. Credibility: MODERATE-HIGH (large global sample, consulting firm). https://www.bcg.com/publications/2025/ai-at-work-momentum-builds-but-gaps-remain

  4. HBR/BCG “Leaders Assume Employees Are Excited About AI. They’re Wrong” (n=1,400 U.S. employees, November 2025). Executive-employee perception gap; employee-centric companies 7x more AI mature. Credibility: HIGH (independent research, peer-reviewed publication). https://hbr.org/2025/11/leaders-assume-employees-are-excited-about-ai-theyre-wrong

  5. HBR/BCG “Why AI Adoption Stalls, According to Industry Data” (n=2,000+ cross-national, Fall 2025). Four employee archetypes; fear-driven compliance; 65% task frequency among high-anxiety workers. Credibility: HIGH (cross-national design, behavioral measures). https://hbr.org/2026/02/why-ai-adoption-stalls-according-to-industry-data

  6. HBR “AI Doesn’t Reduce Work — It Intensifies It” (eight-month ethnographic study, ~200 employees, April-December 2025). Task expansion, blurred boundaries, cognitive overload despite perceived productivity gains. Credibility: HIGH (longitudinal ethnographic design, direct observation). https://hbr.org/2026/02/ai-doesnt-reduce-work-it-intensifies-it

  7. Deloitte TrustID Workforce AI Report (Q3 2025). 43% noncompliance rate; 15% usage decline; 38% trust decline. Credibility: MODERATE-HIGH (established trust measurement framework, methodology not fully disclosed). https://d1lzrgdbvkolkd.cloudfront.net/4749_Deloitte_Trust_ID_Workforce_AI_Report_Q3_2025_3aa42f916c.pdf

  8. ActivTrak 2026 State of the Workplace (n=163,638 employees, 443M hours of behavioral data, January 2023-December 2025). Activity increases across all categories; focused time decline. Credibility: HIGH (behavioral telemetry, not surveys; massive longitudinal dataset). https://www.activtrak.com/resources/state-of-the-workplace/

  9. Faros AI Productivity Paradox Report (n=10,000+ developers, 1,255 teams, 2025). 47% more PRs, 91% longer reviews, zero delivery improvement at company level. Credibility: HIGH (engineering telemetry data, not self-report). https://www.faros.ai/ai-productivity-paradox

  10. Cisco 3P Organization AI Deployment (24 workflows reviewed, December 2025). 30% of activities genuinely augmented. Credibility: MODERATE (single company case study, first-party report). https://newsroom.cisco.com/c/r/newsroom/en/us/a/y2025/m12/how-ai-will-transform-the-workplace-in-2026.html


Brandon Sneider | brandon@brandonsneider.com | March 2026