The 2026 Talent Review: How to Add AI Fluency to the Process Every CHRO Already Runs

Brandon Sneider | March 2026


Executive Summary

  • AI fluency is now a leadership competency, not a training topic. Korn Ferry, Egon Zehnder, and DDI all added AI readiness dimensions to their executive assessment frameworks in 2025-2026. Meta began formally grading every employee on “AI-driven impact” in January 2026. IBM ties base pay and equity to skills development, explicitly including AI — employees can receive low performance ratings for missing skills targets even while hitting business results.
  • Only 30% of the global workforce demonstrates full AI readiness (SHL, n=~1,000,000 assessments, 2025), and only 5% use AI in ways that meaningfully change how they work. The 9-box grid remains the CHRO’s primary calibration tool — but without an AI dimension, it is identifying yesterday’s high performers, not tomorrow’s.
  • The readiness gap at the top is worse. Only 5% of executives say they manage AI use well, despite 60% using AI in decision-making (Deloitte, n=9,000+, 2026). C-suite confidence in human-machine era preparedness dropped from 65% to 51% in one year (Mercer, n=~12,000, September-October 2025).
  • The talent review is the right venue because it already sits in every CHRO’s Q3-Q4 calendar, feeds succession planning, drives development budgets, and has executive attention. Adding an AI fluency dimension to an existing process costs a fraction of building a new assessment program.
  • Three modifications to the standard talent review — adding an AI fluency axis to the 9-box, updating the succession profile, and tagging development plans with AI capability targets — produce the diagnostic a CEO needs to determine who leads the organization through AI transformation and who needs support to keep pace.

The Talent Review Is Already on the Calendar. The AI Dimension Is Not.

Every mid-market CHRO with 200-2,000 employees runs an annual talent review. The process varies in sophistication — some use formal 9-box grids calibrated across business units, others run spreadsheet-based discussions in a conference room — but the function is universal: identify high performers, flag flight risks, map succession pipelines, allocate development budgets.

In 2026, this process has a blind spot.

Gartner’s survey of 426 CHROs across 23 industries (October 2025) identifies “shaping work in the human-machine era” as a top-four priority, alongside harnessing AI to redesign HR itself. Deloitte’s 2026 Global Human Capital Trends report (n=9,000+ leaders, 89 countries) finds 66% of C-suite leaders say traditional functions must change — but only 7% report making progress. The gap between recognizing AI as a talent variable and actually measuring it in the talent review process is where most organizations stall.

The data is not ambiguous. IDC projects skills shortages will cost the global economy $5.5 trillion by 2026, and 94% of CEOs and CHROs identify AI as their top in-demand skill (IDC-Workera, 2025). McKinsey reports demand for AI fluency has grown nearly sevenfold in two years, faster than any other skill category. The U.S. Department of Labor published its AI Literacy Framework on February 13, 2026, defining five foundational content areas — understanding AI principles, exploring potential uses, directing AI effectively, evaluating outputs, and using AI responsibly — as baseline competencies for American workers.

The talent review does not need to become an AI assessment. It needs three specific updates that take the existing process from a rear-view mirror to a windshield.

Update #1: Add an AI Fluency Axis to the 9-Box Grid

The traditional 9-box maps performance against potential. Both axes capture what leaders have done and what they might do — neither captures whether they can operate in the environment the organization is moving toward.

The modification is not complex. Overlay a third dimension — AI fluency — onto the existing grid as a color code, icon, or separate calibration discussion. The goal is not to create a 27-box matrix. It is to surface a question the current 9-box cannot answer: Is this high-potential leader ready to manage AI-augmented teams, or are they a high performer in a world that is changing underneath them?

What AI Fluency Means in a Talent Review Context

Egon Zehnder’s 2025 AI Leadership Assessment Framework provides the most rigorous structure, rating leaders across four maturity levels — Inactive, Reactive, Proactive, and Transformational — on competencies including leading change through AI disruption, using data for strategic decisions, understanding customers through AI-enhanced insights, and collaborating across functions to embed AI in processes. For business leaders (what Egon Zehnder calls “AI Transformers”), the assessment distinguishes between awareness and action: a Proactive leader redesigns workflows; a Transformational leader reshapes the business model.

For mid-market companies without the budget for a formal Egon Zehnder engagement, the SHL AI Readiness Model offers a four-capability framework distilled from nearly one million assessments: AI literacy (understanding what AI can and cannot do), analytical ability (evaluating AI outputs critically), continuous learning orientation, and willingness to champion AI adoption. SHL’s research shows that only 30% of the workforce scores high across all four — meaning the 9-box high-potential pool almost certainly contains leaders who are excellent today but lack the capabilities for tomorrow.

The simplest version for a 200-500 person company: during the 9-box calibration session, rate each leader on a three-level scale — Using AI to change outcomes (top), Using AI tools but not changing work (middle), Not engaging with AI (bottom). Pair this with SHL’s finding that 89% of employees use AI in some form but only 5% use it to meaningfully transform work, and the middle category becomes the one worth interrogating. The high-performing VP who runs AI-generated reports through the same decision process as before is not the same as the director who restructured a workflow because AI made a different approach possible.
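To make the overlay concrete, here is a minimal sketch of how the three-level scale could sit alongside the standard 9-box axes in a calibration roster. It assumes a simple in-house script or spreadsheet export; the field names, example leaders, and numeric scales are illustrative, not a prescribed tool.

```python
# Illustrative sketch: a three-level AI fluency overlay on a 9-box roster.
# Field names and example entries are hypothetical, not a prescribed schema.
from dataclasses import dataclass

FLUENCY_LEVELS = (
    "Not engaging with AI",                   # bottom
    "Using AI tools but not changing work",   # middle: the category worth interrogating
    "Using AI to change outcomes",            # top
)

@dataclass
class LeaderRating:
    name: str
    performance: int   # 1-3, standard 9-box axis
    potential: int     # 1-3, standard 9-box axis
    ai_fluency: int    # 0-2, index into FLUENCY_LEVELS

roster = [
    LeaderRating("VP, Finance", performance=3, potential=3, ai_fluency=1),
    LeaderRating("Director, Ops", performance=3, potential=2, ai_fluency=2),
    LeaderRating("VP, Sales", performance=3, potential=3, ai_fluency=0),
]

# Surface the middle category: strong on the traditional grid, but running
# AI outputs through the same decision process as before.
for leader in roster:
    if leader.performance >= 2 and leader.potential >= 2 and leader.ai_fluency == 1:
        print(f"Interrogate in calibration: {leader.name} ({FLUENCY_LEVELS[1]})")
```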

What Companies Are Already Doing

Meta is the most aggressive public example. In November 2025, Meta’s Head of People communicated that “AI-driven impact” would become a formal performance review criterion starting in 2026, applying to all roles from engineering to marketing. Employees are assessed on how they use AI to deliver results and build tools that improve productivity. Engineering managers evaluate workers partly on their ability to use AI to accelerate development cycles and improve code quality. During 2025, employees were encouraged to highlight AI-powered achievements in self-reviews; in 2026, the criterion becomes mandatory.

IBM takes a structural approach. CHRO Nickle LaMoreaux restructured performance evaluation to weight three dimensions equally: business results, behaviors, and skills development — explicitly including AI skills. Base pay and equity grants are tied to skills development, not just business results. An employee can be rated a low performer for failing to build new skills even while hitting revenue targets. IBM’s logic: “AI was going to shrink the half-life of skills,” so continuous learning is no longer aspirational — it is a performance requirement.

Both examples come from technology companies with $100B+ market capitalization. The principle is transferable; the intensity is not. A 400-person manufacturing company does not need Meta’s mandatory AI-impact scoring. It needs to know which of its 12 directors can lead an AI-augmented function and which three need development before the next planning cycle.

Update #2: Rewrite the Succession Profile

Korn Ferry’s 2026 research identifies a critical gap in succession planning: most organizations optimize successor profiles for today’s operating model, not tomorrow’s. The specific question Korn Ferry raises: Can your successor candidates redesign business operations to incorporate AI systems, lead integrated human-AI teams, judge when AI adds value versus when human expertise is essential, and navigate workforce anxiety during technological transformation?

DDI’s Global Leadership Forecast 2025 (n=10,796 leaders, 2,014 organizations, 50+ countries) adds a diagnostic dimension: frontline leaders are 3x more likely than executives to express concern about AI, creating a readiness divide at precisely the level where transformation executes. Leaders who trust senior management are 2.2x more likely to feel excited about AI — which means the succession pipeline’s AI readiness is partly a function of how well current executives communicate the AI strategy.

The practical update for a 200-500 person company: add three questions to every succession profile.

1. Can this candidate direct AI-augmented work? Not “do they use ChatGPT” — can they evaluate an AI vendor proposal, judge whether an AI-generated analysis is reliable, and decide where AI augments versus replaces human effort in their function? This is the “non-technical AI owner” capability that determines whether a department’s AI investment produces returns.

2. Can this candidate lead through AI-driven anxiety? Mercer (n=~12,000, September-October 2025) finds employee concern about AI-driven job loss surged from 28% in 2024 to 40% in 2026, while 62% of employees say leaders underestimate AI’s emotional impact. Only 19% of HR leaders consider emotional impacts in digital implementation strategy. The successor who ignores workforce anxiety will trigger the performative compliance that kills adoption — usage dashboards that look healthy while business outcomes do not move.

3. Can this candidate learn and adapt at the pace AI requires? IBM’s framework makes this explicit: the half-life of skills is shrinking, and the ability to continuously acquire new capabilities is a distinct competency, separate from current expertise. The candidate who mastered the current tech stack over five years and has not learned a new tool since is a different succession risk than the one who learned three new platforms in two years.

Mercer’s 2025/2026 Skills Snapshot Survey documents the infrastructure gap: only 38% of organizations maintain an enterprise-wide skills library, and only 55% map skills directly to jobs. For most mid-market companies, answering these three succession questions requires manual assessment during the talent review — the automated skills taxonomy is not yet in place.
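Until that taxonomy exists, the three questions can live as a manual rubric captured during the review itself. A minimal sketch, assuming a three-point evidence scale; the question keys, scale, and risk rule are illustrative assumptions, not Korn Ferry's or Mercer's instruments.

```python
# Hypothetical succession-profile rubric for the three questions above,
# scored manually during the talent review (no skills taxonomy required).
# Scores: 0 = no evidence, 1 = emerging, 2 = demonstrated (illustrative scale).
SUCCESSION_QUESTIONS = {
    "direct_ai_work": "Can this candidate direct AI-augmented work?",
    "lead_through_anxiety": "Can this candidate lead through AI-driven anxiety?",
    "learn_at_pace": "Can this candidate learn and adapt at the pace AI requires?",
}

def succession_risk(scores: dict[str, int]) -> str:
    """Flag a candidate whose profile is optimized for today's operating model."""
    gaps = [SUCCESSION_QUESTIONS[key] for key, score in scores.items() if score == 0]
    if gaps:
        return "Succession risk, no evidence on: " + "; ".join(gaps)
    return "No AI-readiness gaps flagged"

# Example: a candidate who runs today's operation well but shows no evidence
# of directing AI-augmented work.
print(succession_risk({"direct_ai_work": 0, "lead_through_anxiety": 2, "learn_at_pace": 1}))
```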

Update #3: Tag Development Plans with AI Capability Targets

The talent review produces development plans. In 2026, those plans need AI-specific capability targets — not “attend an AI workshop” but “demonstrate the ability to evaluate AI vendor claims and make informed build-vs-buy decisions by Q2.”

The DOL’s AI Literacy Framework (February 13, 2026) provides a credible external anchor. Its five content areas — understanding AI principles, exploring potential uses, directing AI effectively, evaluating AI outputs, and using AI responsibly — translate cleanly into development milestones for business leaders. The framework’s seven delivery principles emphasize experiential learning embedded in context, building complementary human skills, and designing for agility — which means the development plan should place leaders in AI project roles, not in classroom seats.
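A sketch of how the five DOL content areas could be tracked as development-plan targets follows; the milestone wording and deadlines are illustrative (adapted from the vendor-evaluation example above), not part of the DOL framework itself.

```python
# Sketch: translating the DOL AI Literacy Framework's five content areas into
# development-plan milestones. Milestone text and dates are illustrative.
DOL_CONTENT_AREAS = [
    "Understanding AI principles",
    "Exploring potential uses",
    "Directing AI effectively",
    "Evaluating AI outputs",
    "Using AI responsibly",
]

# Example plan for one leader: each target names an observable behavior and a
# deadline, in line with the framework's experiential, in-context delivery principle.
development_plan = {
    "Directing AI effectively":
        "Evaluate an AI vendor proposal and present a build-vs-buy recommendation by Q2",
    "Evaluating AI outputs":
        "Audit one AI-generated analysis per month and document reliability judgments",
}

for area in DOL_CONTENT_AREAS:
    target = development_plan.get(area, "No target set; flag in next review")
    print(f"{area}: {target}")
```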

The data on training investment and attrition creates a tension every CHRO must navigate. Mercer finds 63% of employees would trade a 10% pay increase for AI upskilling opportunities — and 77% of investors favor companies that invest in AI education. The demand signal is unambiguous. The risk: development investment in AI skills can accelerate departure if not paired with internal mobility. The development plan must connect AI capability targets to expanded scope, not just credentials.

Tiered Development by 9-Box Position

| 9-Box Position | AI Fluency Level | Development Action |
| --- | --- | --- |
| High performer, high potential | Already using AI to change outcomes | Assign to lead the next AI initiative; add AI governance accountability |
| High performer, high potential | Using tools but not changing work | 90-day AI project rotation with measurable business outcome target |
| High performer, moderate potential | Not engaging with AI | Peer-evidence exposure (Gen X responds to colleague results, not vendor demos); private coaching |
| Rising star, high potential | Any level | Pair with AI-fluent senior leader for reverse mentoring; assign to AI pilot team |
| Solid contributor, low potential | Not engaging with AI | Focus on role-specific AI tool proficiency; protect from scope expansion that triggers anxiety |
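For organizations that want the tiering to stay consistent across calibration sessions, the table can be encoded as a simple lookup. A minimal sketch; the keys and fallback rule are assumptions, not part of any cited framework.

```python
# Sketch: the tier table above encoded as a lookup, so the same (position,
# fluency) pair always maps to the same development action across sessions.
DEVELOPMENT_ACTIONS = {
    ("high performer, high potential", "changing outcomes"):
        "Lead the next AI initiative; add AI governance accountability",
    ("high performer, high potential", "tools only"):
        "90-day AI project rotation with a measurable business outcome target",
    ("high performer, moderate potential", "not engaging"):
        "Peer-evidence exposure plus private coaching",
    ("rising star, high potential", "any"):
        "Reverse mentoring with an AI-fluent senior leader; AI pilot team",
    ("solid contributor, low potential", "not engaging"):
        "Role-specific AI tool proficiency; protect from anxiety-triggering scope expansion",
}

def development_action(position: str, fluency: str) -> str:
    """Return the tiered action, falling back to the 'any level' row if one exists."""
    return DEVELOPMENT_ACTIONS.get(
        (position, fluency),
        DEVELOPMENT_ACTIONS.get((position, "any"), "No tier defined; decide in calibration"),
    )

print(development_action("rising star, high potential", "tools only"))
```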

The Gallup data (n=19,043, May 2025) is the governing principle: employees whose managers actively support AI are 2.1x more likely to use it weekly and 8.8x more likely to say it helps them do their best work. Development plans that bypass the manager layer and send individuals to training independently miss the single highest-leverage variable.

The AI Fluency Calibration Session: What Changes in the Room

The traditional talent review calibration session is a structured argument. Business unit leaders present their assessments, HR facilitates, and the group debates whether someone rated “high potential” in Division A would earn the same rating in Division B.

Adding the AI fluency dimension changes the conversation in two specific ways.

First, it surfaces disagreements about what AI readiness means. The CFO who automated three reporting workflows will define AI fluency differently than the VP of Sales who thinks using AI means asking ChatGPT to draft emails. The calibration session forces a shared definition — and that definition becomes the standard against which all 12 directors, 40 managers, and 200+ individual contributors are assessed. Without this forced alignment, the organization has 12 different definitions of “AI-ready leader.”

Second, it reveals the leadership pipeline’s exposure to the AI transition. If the top three succession candidates for the COO role are all rated “not engaging with AI,” the CEO now knows that the operations function’s AI strategy depends on developing current candidates or hiring externally. This is a succession risk that the traditional 9-box — which would rate all three as high performers — cannot detect.

Heidrick & Struggles’ survey of Chief Data and AI Officers (2024-2025) documents the gap from the other direction: only 38% of boards have sufficient AI knowledge to respond effectively to AI presentations. The talent review calibration session is the mechanism that prevents this board-level gap from replicating itself one level down.

Key Data Points

| Metric | Value | Source |
| --- | --- | --- |
| Global workforce demonstrating full AI readiness | 30% | SHL (n=~1,000,000 assessments, 2025) |
| Workers using AI to meaningfully transform work | 5% | SHL (n=~1,000,000 assessments, 2025) |
| Executives using AI in decisions vs. managing it well | 60% vs. 5% | Deloitte (n=9,000+, 89 countries, 2026) |
| C-suite confidence in human-machine era readiness (YoY) | 65% → 51% | Mercer (n=~12,000, Sep-Oct 2025) |
| Employee AI job loss concern (2024 → 2026) | 28% → 40% | Mercer (n=~12,000, Sep-Oct 2025) |
| Manager support → weekly AI use multiplier | 2.1x | Gallup (n=19,043, May 2025) |
| Manager support → “AI helps me do my best work” multiplier | 8.8x | Gallup (n=19,043, May 2025) |
| AI fluency demand growth (2-year) | ~7x | McKinsey (2026) |
| CEOs/CHROs identifying AI as top skill need | 94% | IDC-Workera (2025) |
| Organizations maintaining enterprise-wide skills library | 38% | Mercer Skills Snapshot (2025/2026) |
| Employees willing to trade 10% raise for AI upskilling | 63% | Mercer (n=~12,000, Sep-Oct 2025) |
| Projected cost of global skills shortages by 2026 | $5.5T | IDC (2025) |
| Frontline leaders’ AI concern relative to executives | 3x | DDI (n=10,796 leaders, 2025) |
| Boards with sufficient AI knowledge | 38% | Heidrick & Struggles (2024-2025) |

What This Means for Your Organization

The talent review happening in Q3-Q4 2026 is the single most natural moment to answer the question every CEO is asking but few HR processes can answer: Who in this organization can lead through AI transformation, and who cannot?

The answer does not require a new assessment platform, a consulting engagement, or an enterprise skills taxonomy. It requires three additions to the process already on the calendar: an AI fluency overlay on the 9-box, three questions added to the succession profile, and AI-specific capability targets in development plans. The total cost is preparation time, not budget.

The alternative — running the same talent review with the same criteria while the operating environment changes — produces a succession pipeline optimized for a world that no longer exists. The 65-to-51% confidence decline Mercer documented is not abstract sentiment. It is the C-suite signaling that the gap between what they need their organizations to do and what they believe their organizations can do is widening. The talent review is where that gap either closes or becomes permanent.

The organizations in the 5% capture value because they ask the uncomfortable questions while they are still questions, not crises. If your Q3 talent review does not include an AI dimension, you are identifying yesterday’s leaders for tomorrow’s challenges. If mapping this to your specific organizational structure and leadership pipeline would be useful, I’d welcome the conversation — brandon@brandonsneider.com.

Sources

  1. Gartner CHRO Priorities Survey (October 2025). n=426 CHROs, 23 industries, 4 regions. Identifies “shaping work in the human-machine era” as top-four CHRO priority for 2026. Source: Independent advisory firm; annual survey with consistent methodology. https://www.gartner.com/en/newsroom/press-releases/2025-10-02-gartner-says-chros-top-priorities-for-2026-center-around-realizing-ai-value-and-driving-performance-amid-uncertainty

  2. Deloitte 2026 Global Human Capital Trends (2026). n=9,000+ leaders, 89 countries. Conducted with Oxford Economics. 60% of executives use AI in decisions; only 5% manage it well; 66% say functions must change; only 7% making progress. Source: Independent consulting firm; large global sample with academic partner. https://www.deloitte.com/us/en/about/press-room/deloitte-report-winning-organizations-will-build-the-human-advantage.html

  3. Mercer Global Talent Trends 2026 (September-October 2025). n=~12,000 C-suite, HR leaders, investors, employees. C-suite preparedness confidence fell from 65% to 51%; AI job loss concern rose from 28% to 40%; 63% of employees would trade 10% raise for AI upskilling. Source: Independent HR consulting firm; 11th annual report, large global sample. https://www.mercer.com/about/newsroom/mercer-s-global-talent-trends-2026-report/

  4. Mercer 2025/2026 Skills Snapshot Survey (2025-2026). 38% of organizations maintain enterprise-wide skills library (up from 30% in 2023); 55% map skills to jobs (up from 47% in 2023). Source: Independent HR consulting firm; multi-year tracking. https://www.mercer.com/en-us/insights/talent-and-transformation/skill-based-talent-management/rebuilding-reward-and-career-frameworks-based-on-skills/

  5. SHL AI Readiness Research (2025). n=~1,000,000 assessments globally. Only 30% of workforce demonstrates full AI readiness; only 5% use AI to meaningfully transform work; 89% use AI in some form. Four-capability model: AI literacy, analytical ability, continuous learning, willingness to champion. Source: Independent assessment firm; exceptionally large sample from actual assessments, not self-report. https://www.shl.com/resources/by-type/blog/2026/is-your-workforce-really-ai-ready-or-just-using-the-tools/

  6. DDI Global Leadership Forecast 2025 (2025). n=10,796 leaders, 2,014 organizations, 50+ countries. Frontline leaders 3x more AI-concerned than executives; leaders who trust senior management 2.2x more likely to feel excited about AI. Source: Independent leadership consulting firm; established longitudinal study. https://www.ddi.com/research/global-leadership-forecast-2025

  7. Gallup Workplace AI and Manager Support (May 2025). n=19,043. Manager AI support → 2.1x weekly use, 8.8x “best work” impact; only 28% of employees report receiving manager support. Source: Independent research firm; large sample with rigorous methodology. https://www.gallup.com/699797/indicator-artificial-intelligence.aspx

  8. Korn Ferry AI Readiness in Succession Planning (2026). Recommends expanding success profiles to evaluate AI integration capability, human-AI team leadership, AI judgment, and workforce anxiety navigation. Source: Independent executive search/advisory firm; prescriptive framework based on client practice. https://www.kornferry.com/insights/featured-topics/gen-ai-in-the-workplace/why-ai-readiness-is-vital-to-your-succession-plan

  9. Egon Zehnder AI Leadership Assessment Framework (2025). Three-part framework assessing AI Transformers and AI Builders on four competencies across four maturity levels (Inactive → Reactive → Proactive → Transformational). Source: Independent executive search firm; proprietary assessment framework. https://www.egonzehnder.com/industries/technology-communications/artificial-intelligence/insights/assessing-ai-skills-in-leadership-why-it-has-become-critical-for-business-leaders

  10. IDC-Workera AI Skills Gap Research (2025). 94% of CEOs/CHROs identify AI as top skill need; only 35% feel they have prepared employees effectively; skills shortages projected to cost $5.5T globally by 2026. Source: Independent analyst firm sponsoring Workera research; widely cited economic projection. https://www.workera.ai/blog/the-5-5-trillion-skills-gap-what-idcs-new-report-reveals-about-ai-workforce-readiness

  11. McKinsey Human Skills in the Age of AI (2026). AI fluency demand grew ~7x in two years; identifies eight high-prevalence skills; positions judgment, creativity, and aspiration as “only human” leadership traits. Source: Independent consulting firm; multi-year labor market analysis. https://www.mckinsey.com/mgi/media-center/human-skills-will-matter-more-than-ever-in-the-age-of-ai

  12. IBM CHRO Nickle LaMoreaux on Skills Strategy (February 2026). Performance evaluation equally weights business results, behaviors, and skills development; base pay tied to skills; employees rated low for missing skills targets even while hitting business results. Source: Primary executive interview; single company, but $150B investment and 300K workforce provide scale context. https://www.hr-brew.com/stories/2026/02/12/ibm-chro-nickle-lamoreaux-skills-strategy

  13. Meta AI-Driven Impact Performance Review (November 2025, effective January 2026). AI-driven impact becomes formal review criterion for all roles; engineering managers evaluate AI-accelerated development; AI Performance Assistant deployed for review writing. Source: Primary company announcement via internal memo reported by multiple outlets. https://winbuzzer.com/2026/02/04/meta-ties-employee-performance-reviews-ai-usage-2026-xcxwbn/

  14. U.S. Department of Labor AI Literacy Framework (February 13, 2026). Five content areas (understanding AI, exploring uses, directing AI, evaluating outputs, responsible use) and seven delivery principles. Voluntary guidance for workforce and education systems. Source: U.S. federal agency; authoritative government framework. https://www.dol.gov/newsroom/releases/eta/eta20260213

  15. Heidrick & Struggles Chief Data and AI Officers Survey (2024-2025). Only 38% of boards have sufficient AI knowledge to respond to AI presentations; only 5% of AI officers named HR as the function they spend most time with. Source: Independent executive search firm; survey of senior data/AI executives. https://www.heidrick.com/en/insights/digital-leadership/ai-and-leadership_how-finance-hr-technology-leaders-collaborate


Brandon Sneider | brandon@brandonsneider.com | March 2026