The Operations Leader’s AI Playbook: What the COO Owns That Nobody Else Will
Brandon Sneider | March 2026
Executive Summary
- The COO is the missing link in most AI transformations. Every AI framework assigns roles to the CIO (technology), CFO (budget), and CHRO (people). Almost none address the person who owns the processes AI is supposed to improve. PwC’s 2026 Global CEO Survey (n=4,454) finds 56% of companies see zero revenue or cost benefit from AI. The most common root cause: bolting AI onto broken processes that no one owns.
- 80% of organizations experimenting with AI see no bottom-line impact. McKinsey’s 2025 survey of 118 US C-suite executives finds only 19% report revenue increases above 5% from gen AI. Only 1% of executives report reaching gen AI maturity. The gap is not technology selection — it is operational execution. That falls on the COO’s desk.
- Bain identifies “workflow debt” as the silent killer of AI value. Accumulated unnecessary meetings, approvals, handoffs, exceptions, and one-off policies make simple tasks difficult. AI amplifies whatever system it enters. Automate a clean process and you get speed. Automate workflow debt and you multiply complexity at scale.
- The process ownership vacuum is the structural problem. At a 200-500 person company, most cross-functional processes — order-to-cash, hire-to-retire, procure-to-pay — are owned by no one or by committees. Without a named owner with authority over the end-to-end workflow, AI deployment fragments into departmental experiments that produce dashboards but not P&L impact.
- Klarna’s $700M automation reversal is the cautionary tale. The fintech eliminated ~700 customer service roles, claimed AI handled two-thirds of all interactions, then rehired human agents in 2025 after customer satisfaction dropped and complex-issue resolution collapsed. The lesson: the COO must define the human-AI boundary before deployment, not discover it through customer complaints.
The COO’s Problem: Everyone Else Has a Framework. You Don’t.
The CIO has Gartner’s Magic Quadrant and a procurement playbook. The CFO has the 10-20-70 budget model. The CHRO has change management methodologies. The General Counsel has a compliance checklist. The CISO has a risk framework.
The COO — the person who owns the operating machine that AI is supposed to improve — has been handed nothing.
This is not an oversight. It reflects a persistent confusion in the AI adoption literature between “technology deployment” (what the CIO manages) and “process transformation” (what the COO must lead). McKinsey’s 2025 operations research makes the distinction explicit: while 75% of companies have drafted a gen AI strategy, only 12% have found revenue-generating use cases (McKinsey, April 2025, n=118 US C-suite executives). The strategy exists. The operating model to execute it does not.
PwC’s COO research for 2026 quantifies the gap from the other side: 57% of operations and supply chain leaders have integrated AI into at least some functions, but 41% identify limited cross-functional collaboration as a top-three barrier to executing their operations strategy (PwC, 2026). The technology is inside the building. The organizational wiring to capture its value is not.
What the COO Actually Owns in an AI Transformation
The operations leader’s AI responsibilities fall into five domains that no other executive can own:
1. Process Inventory and Ownership Assignment
Before any AI tool selection, the COO must answer a question most 200-500 person companies cannot: who owns each cross-functional process?
The data on this is sobering. Cognitive World’s analysis of AI project failures identifies fragmented process ownership as the primary structural reason AI initiatives die between pilot and production (Cognitive World, March 2025). Leadership assigns AI projects targeting “small problems inside departmental boundaries” rather than cross-functional value-creating workflows. At the enterprise level, only 1% of surveyed companies consider themselves “mature” in AI integration with established workflows; only 22% have advanced beyond proof-of-concept to measurable value.
At a 200-500 person company, the typical state is worse than fragmented — it is invisible. Order-to-cash spans sales, finance, operations, and customer success. No single person owns it end-to-end. Procure-to-pay spans purchasing, accounts payable, and the department requesting the purchase. Hire-to-retire spans HR, IT, the hiring manager, and payroll.
The COO’s first deliverable: a process ownership matrix covering the company’s 8-12 core cross-functional processes. Each process gets a named owner with authority to approve workflow changes, not a committee and not a shared Slack channel.
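The matrix itself can be as lightweight as a shared table. As a minimal sketch (process names, owners, and fields are all illustrative, not a prescribed schema), the core of it is: every process row names one person, and an empty owner field is itself the finding.

```python
# Hypothetical process ownership matrix. Owners would be named people in
# practice; titles here are placeholders. An owner of None marks the
# "unowned" gap the COO must close before any automation work starts.
OWNERSHIP_MATRIX = [
    {"process": "order-to-cash",
     "owner": "VP Finance",
     "authority": "approve end-to-end workflow changes",
     "time_allocation": 0.25},   # fraction of role dedicated to ownership
    {"process": "procure-to-pay",
     "owner": None,              # unowned -- the structural gap
     "authority": None,
     "time_allocation": 0.0},
]

def unowned(matrix):
    """List processes with no named owner -- the first deliverable's output."""
    return [row["process"] for row in matrix if row["owner"] is None]

print(unowned(OWNERSHIP_MATRIX))  # ['procure-to-pay']
```

The point of keeping it this simple is auditability: if a row cannot name a single person with change authority, the process is a committee-owned or unowned process by definition.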
2. Workflow Debt Audit
Bain’s February 2026 research introduces “workflow debt” as a parallel to technical debt — accumulated unnecessary work that has grown around meetings, approvals, handoffs, exceptions, and one-off policies created for problems that were solved years ago (Bain, “Want More Out of Your AI Investments? Think People First,” February 2026).
The finding is direct: AI amplifies whatever system it enters. If workflow debt is not addressed, AI and automation multiply complexity instead of productivity. The research recommends a specific sequence: simplify and standardize before automating.
Most companies reverse this sequence. They buy an AI tool, assign it to an existing workflow, and discover three months later that they have automated a process containing 14 approval steps, 6 of which exist because of a compliance requirement that was revised in 2022.
The COO’s second deliverable: a workflow debt assessment for each automation candidate. Before any process enters an AI pilot, it must pass three questions:
- How many handoffs does this process contain? (Bain’s UK banking case study: 10+ handoffs in a 60-100 day process, compressed to zero handoffs in one day after redesign)
- Which approval steps still serve their original purpose?
- What workarounds have become permanent?
3. Human-AI Boundary Definition
Klarna’s reversal is the highest-profile example of a problem that will repeat across industries: deploying AI without defining where it stops and human judgment begins.
The timeline is instructive. Between 2022 and 2024, Klarna eliminated approximately 700 customer service positions and replaced them with an OpenAI-powered assistant. At its peak, the AI handled two-thirds to three-quarters of all customer interactions. CEO Sebastian Siemiatkowski publicly celebrated the cost savings. By mid-2025, internal reviews revealed what customers had been reporting: AI responses were generic, repetitive, and incapable of handling nuanced problem-solving. Customer satisfaction dropped. Siemiatkowski acknowledged the company “overestimated AI’s capabilities and underappreciated the human aspects of service delivery.” Klarna began rehiring human agents under a hybrid model (LaSoft, 2025; FinTech Weekly, 2025; Entrepreneur, 2025).
The Klarna pattern — automate aggressively, discover the boundary through failure, then rebuild at higher cost — is avoidable. The COO must define three categories before deployment:
| Category | Description | Examples |
|---|---|---|
| AI-autonomous | High-volume, rule-based, low-stakes decisions | Invoice matching, appointment scheduling, expense categorization, standard order routing |
| AI-assisted, human-decided | AI prepares the analysis, a person makes the call | Customer escalation triage, vendor negotiation preparation, exception processing, quality control review |
| Human-only | Judgment, empathy, relationship, or regulatory requirements demand a person | Key account relationship management, complex complaint resolution, regulatory response, employee performance conversations |
Intercom’s Fin AI agent demonstrates the economics when this boundary is set correctly: 66% average autonomous resolution rate, 80-90% cost reduction on automated resolutions, and agents using the AI-assisted model close 31% more conversations daily while maintaining satisfaction scores (Intercom, 2025-2026). The boundary is the strategy.
4. Throughput Measurement Redesign
The COO’s existing KPIs were designed for manual work. When AI enters a process, those metrics change meaning.
The operations leader’s measurement challenge is specific: traditional metrics — cycle time, error rate, cost per transaction, throughput per FTE — measure human effort. AI does not have “effort.” It has speed and accuracy within its defined scope, and unpredictable behavior at its edges.
The metrics that matter in an AI-augmented operation shift to:
| Traditional Metric | AI-Augmented Replacement |
|---|---|
| Cost per transaction | Cost per correctly completed transaction (include rework) |
| Throughput per FTE | Throughput per process (decouple from headcount) |
| Error rate | Error rate by source (human error vs. AI error vs. handoff error) |
| Cycle time | Cycle time by segment (AI time vs. wait time vs. human time) |
| Headcount per function | Capacity per function (humans + AI combined) |
This is not an academic distinction. Faros AI’s data shows AI tools can increase pull request volume by 98% while delivering zero improvement in delivery throughput — because the bottleneck moves from creation to review (Faros AI, 2025). If the COO measures only the automated segment, the dashboard looks excellent while delivery stalls.
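To make the first two replacement metrics concrete, here is a minimal sketch of how "cost per correctly completed transaction" and "error rate by source" could be computed. The transaction record and the sample numbers are hypothetical, not drawn from any of the cited studies.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Transaction:
    cost: float                   # direct processing cost
    rework_cost: float            # cost of any human correction afterward
    completed_ok: bool            # correct on the first pass
    error_source: Optional[str]   # "human", "ai", "handoff", or None

def cost_per_correct_transaction(txns):
    """Traditional 'cost per transaction' hides rework; this version
    divides total cost (including rework) by correct completions only."""
    total_cost = sum(t.cost + t.rework_cost for t in txns)
    correct = sum(1 for t in txns if t.completed_ok)
    return total_cost / correct if correct else float("inf")

def error_rate_by_source(txns):
    """Split the headline error rate into human / AI / handoff components."""
    n = len(txns)
    return {src: sum(1 for t in txns if t.error_source == src) / n
            for src in ("human", "ai", "handoff")}

# Hypothetical sample: three clean AI transactions, one AI error with rework
txns = [
    Transaction(2.0, 0.0, True, None),
    Transaction(2.0, 0.0, True, None),
    Transaction(2.0, 0.0, True, None),
    Transaction(2.0, 5.0, False, "ai"),
]
print(cost_per_correct_transaction(txns))  # (8 + 5) / 3, about 4.33
print(error_rate_by_source(txns))          # ai rate 0.25, others 0.0
```

Note how the failed transaction raises the per-correct-transaction cost well above the naive $2.00 average: that gap is exactly what the traditional metric conceals.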
5. The Over-Automation Circuit Breaker
Gartner predicts over 40% of agentic AI projects will be canceled by the end of 2027 due to escalating costs, unclear business value, or inadequate risk controls (Gartner, June 2025). The analyst firm recommends deploying agentic AI only where it delivers clear value or ROI, noting that most current projects are “early-stage experiments driven by hype and often misapplied.”
The COO needs kill criteria — defined in advance, not discovered through customer complaints:
- Satisfaction threshold: Customer satisfaction on AI-handled interactions drops below 85% of human-handled baseline for two consecutive measurement periods
- Escalation rate: More than 30% of AI-initiated interactions require human intervention (indicating the process was misclassified as AI-autonomous)
- Rework rate: AI-completed work requires human correction more than 15% of the time
- Cost trajectory: Total cost (AI tool + human oversight + rework + escalation handling) exceeds the pre-automation cost per transaction
These thresholds are not standard — they must be calibrated to the specific process and company. The point is that they exist before deployment, not after.
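As one way to make the circuit breaker operational, the four criteria above can be encoded as a single check run at each measurement period. The threshold values mirror the bullets but remain illustrative, and all field names and sample readings are hypothetical.

```python
# Hypothetical kill-criteria circuit breaker for an AI pilot. Thresholds
# follow the examples in the text and must be calibrated per process.
def should_kill_pilot(metrics, baseline):
    """Return (kill, reasons) from plain-dict pilot metrics vs. baseline."""
    reasons = []
    # Satisfaction below 85% of human baseline for two consecutive periods
    recent = metrics["csat_by_period"][-2:]
    if len(recent) == 2 and all(s < 0.85 * baseline["csat"] for s in recent):
        reasons.append("satisfaction below 85% of baseline for 2 periods")
    # Escalation above 30%: process misclassified as AI-autonomous
    if metrics["escalation_rate"] > 0.30:
        reasons.append("escalation rate exceeds 30%")
    # AI-completed work corrected by humans more than 15% of the time
    if metrics["rework_rate"] > 0.15:
        reasons.append("rework rate exceeds 15%")
    # Total cost per transaction exceeds the pre-automation baseline
    total = (metrics["tool_cost"] + metrics["oversight_cost"]
             + metrics["rework_cost"] + metrics["escalation_cost"])
    if total / metrics["transactions"] > baseline["cost_per_txn"]:
        reasons.append("total cost per transaction exceeds pre-automation cost")
    return bool(reasons), reasons

# Illustrative pilot readings (invented numbers, not from any cited source)
pilot = {"csat_by_period": [0.70, 0.68], "escalation_rate": 0.35,
         "rework_rate": 0.10, "tool_cost": 100.0, "oversight_cost": 50.0,
         "rework_cost": 20.0, "escalation_cost": 30.0, "transactions": 100}
human_baseline = {"csat": 0.90, "cost_per_txn": 2.50}
kill, why = should_kill_pilot(pilot, human_baseline)
print(kill, why)  # satisfaction and escalation criteria both tripped
```

The design choice worth noting: the function returns every tripped criterion, not just the first, because the pattern of failures (for example, escalation plus satisfaction) tells the COO whether the process was misclassified or the tool is underperforming.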
The Process Owner Role at 200-500 Employees
At a Fortune 500 company, process owners are dedicated roles with teams. At a 200-500 person company, the process owner is someone who also has a day job. This is the central organizational design problem the COO must solve.
The research consensus on what predicts success in this role:
Seniority matters. Cognitive World’s analysis is unambiguous: the process owner must be a senior manager with institutional credibility. Middle managers lack the authority to enforce cross-functional workflow changes. A director-level person who can resolve conflicts between departments is the minimum viable authority level.
Background predicts approach. Operations-background owners tend to focus on throughput and efficiency. Finance-background owners tend to focus on cost reduction and measurement. IT-background owners tend to focus on tool selection and integration. The best fit depends on which problem is most acute, but operations or finance backgrounds tend to produce faster P&L impact than IT backgrounds because they start from the business problem rather than the technology.
Time commitment is 20-30% minimum. The fractional CAIO research in this repository confirms that external AI leadership fails without a named internal counterpart dedicating significant time. The same applies to process ownership: a person who “also handles AI” with 5% of their time produces strategy documents, not results.
The reporting line matters. The process owner reports to the COO, not the CIO. This is counterintuitive — the CIO controls the technology budget. But the process owner’s authority comes from operational accountability, not technology expertise. The COO is the only executive who can resolve a conflict between, say, the head of sales and the head of finance over how the order-to-cash process should change.
The 90-Day COO AI Action Plan
| Timeframe | Action | Deliverable |
|---|---|---|
| Week 1-2 | Inventory all cross-functional processes (8-12 typical for this company size) | Process map with current owners (or “unowned” designation) |
| Week 3-4 | Assign named process owners for top 3-4 automation candidates | Ownership matrix with authority levels and time allocation |
| Week 3-4 | Conduct workflow debt assessment on top candidates | Debt inventory: unnecessary handoffs, obsolete approvals, workarounds |
| Week 5-6 | Define human-AI boundary for first pilot process | Three-category classification (autonomous / assisted / human-only) |
| Week 5-6 | Establish pre-automation baseline metrics | Current cost per transaction, cycle time, error rate, satisfaction |
| Week 7-8 | Design AI-augmented metrics dashboard | Replacement KPIs aligned to throughput, not headcount |
| Week 7-8 | Set kill criteria for pilot | Defined thresholds for satisfaction, escalation, rework, and cost |
| Week 9-12 | Launch pilot with measurement infrastructure in place | Weekly metric reviews against baseline and kill criteria |
This plan assumes the tool has already been selected (a CIO responsibility) and the budget has been approved (a CFO responsibility). The COO’s 90 days focus on what those two executives cannot do: prepare the operating machine to capture value from the technology investment.
Key Data Points
| Metric | Value | Source |
|---|---|---|
| Companies seeing zero AI benefit | 56% | PwC 29th Global CEO Survey, January 2026, n=4,454 |
| Organizations experimenting with AI but seeing no bottom-line impact | ~80% | McKinsey, 2025 |
| C-suite executives reporting >5% revenue from gen AI | 19% | McKinsey, April 2025, n=118 |
| COOs citing limited cross-functional collaboration as top-3 barrier | 41% | PwC COO Survey, 2026 |
| Operations leaders who have integrated AI into functions | 57% | PwC COO Survey, 2026 |
| Agentic AI projects predicted to be canceled by end of 2027 | >40% | Gartner, June 2025 |
| Companies achieving gen AI maturity | 1% | McKinsey, April 2025 |
| Klarna customer service roles eliminated then rehired | ~700 | Multiple sources, 2022-2025 |
| Intercom Fin autonomous resolution rate | 66% | Intercom, 2025-2026 |
| Cost savings per AI-resolved customer interaction | 80-90% | Intercom, 2025-2026 |
| Agent productivity increase with AI-assisted model | 31% | Intercom Copilot, 2025-2026 |
| UK bank process compression (before) | 60-100 days, 10+ handoffs | Bain, February 2026 |
| UK bank process compression (after) | 1 day, zero handoffs | Bain, February 2026 |
| Workforce engagement TSR multiplier | 2.3x | Bain/Glassdoor, 2020-2024, Fortune 1000 |
What This Means for Your Organization
The operations leader is the person most likely to determine whether AI investments produce P&L impact or expensive dashboards. The CIO can deploy the technology. The CFO can approve the budget. The CHRO can manage the people transition. But only the COO can answer the question that precedes all of these: which processes should change, how should they change, and who is accountable for the result?
The 56% of companies seeing zero AI benefit are not failing because they bought the wrong tool. They are failing because they automated processes that nobody owned, that contained years of accumulated workflow debt, and that lacked defined boundaries between what AI should handle and what humans must. These are operational failures, not technology failures. They belong to the COO.
The practical starting point is not an AI strategy document — it is a process ownership matrix. Most 200-500 person companies discover during this exercise that their 8-12 core processes are owned by committees, by tradition, or by no one at all. Fixing this structural gap costs nothing except the COO’s political capital and produces benefits that extend far beyond AI deployment.
If this raised questions specific to your operations — particularly around process ownership, workflow debt assessment, or the human-AI boundary for your specific industry — I would welcome the conversation at brandon@brandonsneider.com.
Sources
- PwC, “29th Global CEO Survey: Leading Through Uncertainty in the Age of AI,” January 2026, n=4,454 CEOs across 95 countries. Independent large-scale survey; high credibility. PwC CEO Survey
- PwC, “2026 Operations Strategy for COOs,” 2026. Independent; operations-specific survey data. PwC COO Hub
- McKinsey, “How COOs Maximize Operational Impact from Gen AI and Agentic AI,” April 2025, n=118 US C-suite executives. Independent consulting research; moderate sample size. McKinsey COO AI Guide
- Bain & Company, “Want More Out of Your AI Investments? Think People First,” February 2026, Fortune 1000 analysis with 5-year annualized data (2020-2024). Independent; high credibility for workforce engagement claims; proprietary analysis. Bain People First
- Gartner, “Predicts Over 40% of Agentic AI Projects Will Be Canceled by End of 2027,” June 2025. Tier-1 analyst firm; prediction-grade (not empirical); vendor-neutral. Gartner Press Release
- Cognitive World, “Process Ownership: The Overlooked Driver of AI Success,” March 2025. Industry publication; frameworks sourced from academic and practitioner research; moderate credibility. Cognitive World Process Ownership
- Klarna AI customer service reversal, multiple sources (LaSoft, FinTech Weekly, Entrepreneur, Reworked), 2025. Primary reporting from CEO statements and company disclosures; high credibility for the reversal narrative. Entrepreneur
- Intercom Fin AI Agent performance data, 2025-2026. Vendor-reported metrics; credibility moderate (vendor has incentive to report favorable data); resolution rates independently verified by customer deployments. Intercom Fin
- Protiviti, “Top Risks for Chief Operating Officers 2026,” n=1,540 global C-suite executives. Independent survey; cross-industry; high credibility for risk ranking data. Protiviti COO Risks
- Fortune 1000 AI & Data Leadership Executive Benchmark Survey, 15th annual iteration, 2026, n=100+ C-level executives, 96% C-level or equivalent. Long-running independent benchmark; high credibility for enterprise AI adoption trends. HBR Executive AI Survey
- Faros AI, developer productivity data, 2025. Independent engineering analytics firm; empirical data from customer deployments; high credibility for code throughput vs. delivery metrics. Referenced in existing repository research.