AI and the External Audit: What Your Auditors Will Ask in 2026 — and What Documentation Satisfies Them
Brandon Sneider | March 2026
Executive Summary
- 90% of S&P 500 companies now disclose AI-related information in their 10-K filings, up from 72% in 2023. Financial auditors, SOC 2 assessors, and compliance reviewers are adding AI-specific inquiries to every engagement. A 200-500 person company deploying AI in any process that touches financial reporting, customer data, or regulated decisions faces audit questions it did not face 18 months ago.
- Audit committee AI oversight has tripled in one year. EY Center for Board Matters finds 48% of Fortune 100 companies now cite AI risk as part of board oversight responsibilities, up from 16% in 2024. Auditors asking “who oversees AI at the board level?” expect a named committee, not a blank stare.
- The practical risk is not failing an audit — it is producing unauditable numbers. Warren Averett’s 2026 analysis identifies four audit vulnerabilities specific to AI-using companies: AI-generated estimates without documentation, revenue recognition based on undocumented user behavior, API-fed data without audit trails, and model versioning gaps. Any of these can make reported figures “unauditable” in the auditor’s judgment.
- The audit-readiness checklist for a mid-market company deploying AI is 15-20 controls, not 200. SOC 2 has not formally adopted AI-specific criteria. PCAOB standards remain technology-neutral. The practical requirement is demonstrating that AI systems touching financial data have the same governance rigor as any other ICFR component: documented controls, audit trails, and human oversight at decision points.
The Regulatory Landscape: Less Than You Fear, More Than You Expect
Mid-market CFOs bracing for a new AI audit standard can exhale — slightly. No regulator has published mandatory AI-specific audit requirements as of March 2026. But three parallel developments are creating audit pressure without formal mandates.
PCAOB: Technology-neutral standards, AI-aware inspectors. The PCAOB postponed QC 1000, its new quality control standard, to December 2026. That standard requires audit firms to “identify specific risks that would inhibit audit quality — like the use of technology-based auditing tools” and design quality control systems that guard against them. The standard is technology-neutral by design, but PCAOB inspectors are flagging AI in their 2026 work. Christina Ho, PCAOB board member, has called for structured AI guidance and an Innovation Lab for the profession, though these remain proposals, not mandates. In 2024, the PCAOB inspected 578 engagements across Big Four and non-Big Four firms — the inspection infrastructure is real even if AI-specific rules are not.
AICPA: Nonauthoritative guidance, not new criteria. The AICPA’s Trust Services Criteria — the foundation of SOC 2 reports — have not been revised to include AI-specific requirements. However, Moss Adams (December 2025) documents how existing trust service criteria already apply to AI systems: CC9.2 (system operations) maps to model integrity monitoring, processing integrity criteria require documentation of AI model inputs, processing, and outputs, and confidentiality criteria cover training data governance. The practical reality: auditors are evaluating AI controls under existing criteria, not waiting for new ones.
SEC: Disclosure expectations, not audit procedures. The CAQ’s analysis of S&P 500 10-K filings (as of June 30, 2025) shows AI disclosures surging across every section: 424 companies disclosed AI risks in Item 1A (up from 312), 120 in MD&A (up from 69), and 57 in financial statements (up from 36). AI mentions in Item 1C (Cybersecurity) jumped from 11 to 47 companies. The SEC has not issued AI-specific audit rules, but the disclosure trend creates an implicit expectation: if the company discloses AI use, auditors need to understand the controls around it.
What Financial Auditors Are Actually Asking
Warren Averett’s 2026 audit red flags analysis identifies the four questions that produce audit findings for AI-using companies.
1. “How was this AI-generated estimate derived?”
Companies using AI models for reserves, impairments, revenue forecasts, or financial close automation face a documentation burden their manual processes never required. The auditor needs to trace the number from data source to financial statement. If the AI model is a black box — inputs go in, numbers come out — the estimate may be deemed “unauditable.” The fix: model cards documenting inputs, assumptions, logic, and validation procedures for every AI model producing financial data.
2. “Where is the audit trail for this AI-processed data?”
API-fed data flowing from product platforms to general ledgers through multiple systems creates gaps auditors cannot bridge. When data passes through AI enrichment, classification, or transformation without logging, the auditor loses the chain of evidence. The fix: automated audit trails with source metadata for every data transfer involving AI processing, including timestamps, transformation rules applied, and version identifiers.
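In code terms, the minimum viable audit trail is one structured record per AI-mediated transfer. A minimal sketch in Python; the field names and the `log_transform` helper are illustrative assumptions, not a prescribed schema:

```python
# Sketch of an audit-trail record for AI-processed financial data.
# Field names are illustrative assumptions, not a required schema.
import hashlib
import json
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class AITransformRecord:
    source_system: str    # where the data originated
    target_system: str    # where it landed (e.g., the general ledger)
    transformation: str   # AI rule or model applied in transit
    model_version: str    # version identifier of the AI component
    timestamp: str        # UTC, ISO 8601
    payload_hash: str     # hash of the data as transferred

def log_transform(source: str, target: str, rule: str,
                  version: str, payload: dict) -> AITransformRecord:
    """Create an immutable log entry for one AI-processed transfer."""
    digest = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()
    return AITransformRecord(
        source_system=source,
        target_system=target,
        transformation=rule,
        model_version=version,
        timestamp=datetime.now(timezone.utc).isoformat(),
        payload_hash=digest,
    )

record = log_transform("billing-api", "general-ledger",
                       "ai-revenue-classifier", "v2.3.1",
                       {"invoice": "INV-1001", "amount": 4200.00})
```

The payload hash lets the auditor confirm that what left the source system is what arrived at the ledger, which is the chain-of-evidence gap described above.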
3. “Can you show me what changed between model versions?”
Machine learning models that continuously update without logging create a specific audit risk: the auditor cannot determine whether changes in financial outcomes reflect market conditions or model drift. This is the version control problem, and it applies to any AI system that learns or is retrained. The fix: formal change management for AI models, including versioning, testing documentation before deployment, and impact assessment of model changes on financial outputs.
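What "formal change management" looks like in miniature: a change record that blocks deployment until testing and approval are both documented. A sketch under assumed names (`ModelChange` and `can_deploy` are illustrative, not from any cited framework):

```python
# Minimal sketch of a deployment gate for model changes; the class and
# field names are assumptions for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModelChange:
    model_name: str
    old_version: str
    new_version: str
    reason: str
    tested: bool = False          # testing documented before deployment
    approver: Optional[str] = None  # impact-assessment sign-off

    def approve(self, approver: str) -> None:
        self.approver = approver

    def can_deploy(self) -> bool:
        # Block deployment unless testing and approval are both on record.
        return self.tested and self.approver is not None

change = ModelChange("reserve-estimator", "v1.4", "v1.5",
                     reason="retrained on Q4 loss data")
change.tested = True
change.approve("jane.doe")
```

The point of the gate is the audit artifact it leaves behind: version pair, reason, test status, and a named approver for every retraining event.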
4. “Who reviewed the AI output before it entered the financial statements?”
Human oversight documentation is the control auditors most frequently seek and most frequently find missing. If an AI system generates a journal entry, classifies a transaction, or produces an estimate, the auditor expects evidence that a qualified person reviewed the output. The fix: documented review protocols with reviewer identity, review date, and disposition for every AI-generated financial data point.
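The review protocol reduces to a record an auditor can sample. A minimal sketch with assumed field names; the rule that an empty notes field fails the check encodes the "a checkbox is insufficient" expectation:

```python
# Sketch of a review-evidence record; field names are assumptions.
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class ReviewRecord:
    item_id: str        # journal entry, classification, or estimate reviewed
    reviewer: str       # named, qualified individual
    review_date: date
    disposition: str    # "approved", "adjusted", or "rejected"
    notes: str          # substance of the review, not just a checkbox

def is_audit_ready(r: ReviewRecord) -> bool:
    # A bare approval with no notes would not evidence substantive review.
    return bool(r.reviewer) and bool(r.notes) and r.disposition in (
        "approved", "adjusted", "rejected")

rec = ReviewRecord("JE-2026-0142", "a.controller", date(2026, 1, 31),
                   "adjusted", "Reclassified 3 transactions flagged by model")
```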
What SOC 2 Assessors Are Evaluating
SOC 2 has not formally adopted AI-specific criteria. But companies that use AI in service delivery — and sell to enterprise clients requiring SOC 2 reports — face assessor scrutiny under existing trust service criteria.
Moss Adams and Baker Tilly (both December 2025) document how AI controls map to the existing SOC 2 framework:
| Trust Service Criterion | AI Control Requirement | Documentation Needed |
|---|---|---|
| CC9.2 — System Operations | Model integrity monitoring, drift detection | Model performance logs, drift thresholds, alert procedures |
| Processing Integrity | AI outputs are complete, valid, accurate, authorized | Input validation records, output testing, authorization controls |
| Confidentiality | Training data governance, data leakage prevention | Data classification for AI training sets, DLP configurations |
| Privacy | AI processing of personal data, consent alignment | Privacy impact assessments for AI deployments, consent records |
| CC3.2 — Risk Identification | AI-specific risk assessment | Risk register entries for AI systems, threat modeling |
The 2026 trend in SOC 2 reports: nearly 90% now include subservice providers (up from 82%), and reports with 150+ controls increased from 16% to 23% of all examinations. Confidentiality inclusion rose from 34% to 64.4%. Assessors are documenting more, not less — and AI systems that process customer data are squarely in scope.
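To make the CC9.2 row concrete: drift detection in practice is a metric compared against a documented threshold, with an alert when the threshold is exceeded. A sketch using a PSI-style statistic; the 0.2 threshold is a common industry rule of thumb used here as an illustrative assumption, not an AICPA value:

```python
# Sketch of a drift check against a documented threshold (CC9.2 mapping).
# The PSI-style metric and the 0.2 threshold are illustrative assumptions.
import math

def population_stability_index(expected, actual):
    """PSI over matched distribution buckets; higher means more drift."""
    return sum((a - e) * math.log(a / e)
               for e, a in zip(expected, actual) if e > 0 and a > 0)

DRIFT_ALERT_THRESHOLD = 0.2  # documented in the control, reviewed by assessor

def drift_alert(expected, actual) -> bool:
    return population_stability_index(expected, actual) > DRIFT_ALERT_THRESHOLD

baseline = [0.25, 0.25, 0.25, 0.25]  # score distribution at validation
current = [0.10, 0.20, 0.30, 0.40]   # score distribution this period
```

The documentation the assessor wants is exactly the three artifacts in the table row: the logged metric values, the threshold, and the procedure triggered when `drift_alert` fires.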
ISO 42001 as emerging benchmark. While not required for SOC 2, ISO 42001 (AI Management Systems) is gaining traction as the reference framework for AI governance audits. SAP, Microsoft, and Cornerstone OnDemand have achieved certification. For a mid-market company, ISO 42001 alignment is not a 2026 requirement — but it signals where assessor expectations are heading.
What Audit Committees Need to Know
The audit committee’s role has shifted faster than any other board function in the AI era.
Board AI oversight has tripled. EY Center for Board Matters finds 48% of Fortune 100 companies now cite AI risk in board oversight disclosures — up from 16% in 2024. Approximately 40% have charged at least one board committee (usually the audit committee) with AI oversight, up from 11% the prior year. Only 25% have incorporated AI oversight into committee charters, creating a gap between disclosed responsibility and formal authority.
PwC’s 2024 Annual Corporate Directors Survey (September 2024) finds 57% of directors place AI oversight with the full board, while 17% assign it to the audit committee specifically. The practical implication: auditors will ask who owns AI oversight, and the answer needs to be documented in a charter, not just a conversation.
The Harvard Law School Forum on Corporate Governance (July 2025) identifies four questions audit committees should pose to management:
- How is AI being used across each function, and what are the transformative opportunities?
- How is responsible AI governance enforced — not just planned?
- Which AI models are classified as higher risk, and why?
- What is AI’s impact on the talent strategy across the finance function?
BDO’s 2026 Audit Committee Priorities add operational specifics: audit committees should update cyber incident response plans and AI governance — including policy, model risk controls, change management, and monitoring — while establishing reporting metrics such as time-to-detect anomalies and model drift indicators.
The Mid-Market Audit-Readiness Checklist
A 200-500 person company does not need an enterprise AI governance platform to satisfy auditors. It needs evidence that AI systems are governed with the same discipline applied to any other ICFR component.
Before the audit:
- AI inventory. Maintain a living registry of every AI system, model, and embedded tool — including third-party AI features in existing SaaS platforms. PwC’s responsible AI guidance emphasizes that “if AI is part of your process, it becomes part of the audit.” The shadow AI tools employees adopted without IT approval are in scope.
- Financial data mapping. For every AI system that touches financial data — directly or indirectly — document the data flow from source to financial statement. Auditors need to trace the chain. API integrations, AI-driven classifications, automated journal entries: map them.
- Model documentation. For each AI model producing or processing financial data, maintain a model card: purpose, inputs, training data sources, validation methodology, known limitations, approval authority, and version history.
- Human oversight evidence. Document who reviews AI outputs before they enter financial statements, when they review, and what their review entailed. A checkbox is insufficient — auditors want evidence of substantive review.
- Change management logs. Every model update, retraining event, or parameter change should be logged with date, author, reason, testing results, and approval.
- Vendor AI assessment. For third-party AI tools, document the vendor’s AI governance posture, data usage terms, and any SOC 2 or ISO 42001 certifications. Third-party breaches average $5.08M in costs — auditors want to see vendor risk assessment.
- Incident response for AI failures. Documented procedures for when an AI system produces incorrect outputs that affect financial data, customer decisions, or regulatory compliance.
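A model card need not be elaborate; a structured record with the fields above is enough to answer the auditor's first question. A minimal sketch in Python, where the structure is an assumption for illustration rather than a formal standard:

```python
# Minimal model-card sketch covering the fields auditors ask about;
# the structure is an illustrative assumption, not a formal standard.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    purpose: str
    inputs: list
    training_data_sources: list
    validation_methodology: str
    known_limitations: list
    approval_authority: str
    version_history: list = field(default_factory=list)

    def record_version(self, version: str, change_note: str) -> None:
        self.version_history.append({"version": version, "note": change_note})

card = ModelCard(
    purpose="Estimate bad-debt reserve for monthly close",
    inputs=["aging buckets", "payment history", "macro index"],
    training_data_sources=["AR ledger 2021-2025"],
    validation_methodology="Back-test against actual write-offs, quarterly",
    known_limitations=["Untested on customers with <6 months history"],
    approval_authority="Controller",
)
card.record_version("v1.0", "Initial deployment, approved at Q1 close")
```

Kept in version control alongside the change management logs, one card per model gives the auditor a single document to trace each AI-generated estimate back to its inputs and approval.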
During the audit:
- Prepare to explain AI-generated estimates with the same rigor as management estimates: inputs, assumptions, methodology, sensitivity analysis.
- Provide audit trail documentation for every AI-processed data point entering the financial statements.
- Demonstrate that AI governance is operational — not just documented. Auditors review meeting minutes, escalation records, and monitoring dashboards, not just policies.
Key Data Points
| Metric | Finding | Source |
|---|---|---|
| S&P 500 AI disclosure in 10-K | 90% (448/500) in 2024, up from 72% (359/500) in 2023 | CAQ Analysis, June 2025 |
| AI risk disclosures (Item 1A) | 424 companies (2024), up from 312 (2023) | CAQ Analysis, June 2025 |
| Board AI oversight disclosure (Fortune 100) | 48% in 2025, up from 16% in 2024 (3x increase) | EY Center for Board Matters, 2025 |
| Committee-level AI oversight | 40% of Fortune 100, up from 11% in 2024 | EY Center for Board Matters, 2025 |
| AI in committee charters | Only 25% of S&P 500 boards | Harvard/EY, 2025 |
| SOC 2 reports with subservice providers | Nearly 90%, up from 82% | Konfirmity SOC 2 Analysis, 2026 |
| SOC 2 reports with 150+ controls | 23%, up from 16% | Konfirmity SOC 2 Analysis, 2026 |
| Confidentiality inclusion in SOC 2 | 64.4%, up from 34% | Konfirmity SOC 2 Analysis, 2026 |
| PCAOB QC 1000 effective date | December 15, 2026 (postponed one year) | PCAOB, 2025 |
| Big Four combined AI investment | $9.5 billion | AI News/OpenTools, 2025 |
| Average cost of third-party breach | $5.08 million | Konfirmity/IBM, 2026 |
What This Means for Your Organization
The audit pressure on AI-using companies is real but manageable — if addressed before the auditor arrives. The companies that struggle are not the ones deploying sophisticated AI. They are the ones that deployed AI without treating it as part of their control environment.
For a 200-500 person company, the practical work is a 4-6 week project: build the AI inventory, document the data flows for financially material AI processes, create model cards for the systems that touch financial data, and ensure human review is documented at every output point. Most of this work produces governance documentation that also satisfies enterprise client due diligence, cyber insurance applications, and the 90-day governance sprint — the same investment pays across four compliance audiences.
The timing matters. Auditors are asking these questions now, not next year. PCAOB’s QC 1000 takes effect in December 2026, requiring audit firms to assess technology risks in their own quality control systems. That standard will cascade to client expectations: audit firms documenting their own AI risks will expect clients to do the same.
If the specific intersection of AI deployment and audit readiness at your organization raised questions this document did not answer, I would welcome that conversation — brandon@brandonsneider.com.
Sources
- CAQ — Analysis of AI-Related Information in S&P 500 Companies’ 10-Ks (June 2025). Independent analysis of all S&P 500 annual filings. High credibility — primary source, comprehensive dataset. https://www.thecaq.org/sp-500-and-ai-reporting
- EY Center for Board Matters — Cyber and AI Oversight Disclosures: What Companies Shared in 2025 (October 2025). Analysis of Fortune 100 proxy and 10-K disclosures. High credibility — empirical review of public filings. https://corpgov.law.harvard.edu/2025/10/28/cyber-and-ai-oversight-disclosures-what-companies-shared-in-2025/
- Warren Averett — Audit Red Flags in the Age of AI: What Tech CFOs Should Watch for in 2026 (2026). Practitioner guidance from a Top 50 accounting firm. Moderate-high credibility — reflects real audit practice but single-firm perspective. https://warrenaverett.com/insights/tech-cfos-audit/
- Moss Adams — Representing AI Controls in Your SOC 2 Report (December 2025). Detailed mapping of AI controls to trust service criteria. High credibility — authored by SOC examination directors with specific framework references. https://www.mossadams.com/articles/2025/12/ai-controls-for-soc-2-reports
- PCAOB — QC 1000, A Firm’s System of Quality Control (adopted May 2024, effective December 15, 2026). Primary regulatory standard. Highest credibility — binding regulatory requirement for all registered firms. https://pcaobus.org/oversight/standards/qc-standards/details/qc-1000--a-firms-system-of-quality-control
- Harvard Law School Forum on Corporate Governance — Oversight in the AI Era: Understanding the Audit Committee’s Role (July 2025). Synthesis of PwC survey data and regulatory expectations. High credibility — academic forum with practitioner data. https://corpgov.law.harvard.edu/2025/07/12/oversight-in-the-ai-era-understanding-the-audit-committees-role/
- PwC — 2024 Annual Corporate Directors Survey (September 2024). Survey of corporate directors on governance practices. High credibility — large sample, established annual survey. https://www.pwc.com/us/en/services/governance-insights-center/library/annual-corporate-directors-survey.html
- BDO — Audit Committee Priorities for 2026 (2026). Practitioner guidance including AI governance recommendations. Moderate-high credibility — Top 10 firm perspective. https://www.bdo.com/insights/assurance/audit-committee-priorities-for-2026
- Konfirmity — What Changed in SOC 2 for 2026 (2026). Analysis of SOC 2 reporting trends and control statistics. Moderate credibility — industry analyst, data sources not fully cited. https://www.konfirmity.com/blog/soc-2-what-changed-in-2026
- PwC — Responsible AI and Internal Audit (2025). Framework for internal audit evaluation of AI governance. High credibility — Big Four methodology document. https://www.pwc.com/us/en/tech-effect/ai-analytics/responsible-ai-internal-audit.html
- ISACA — Five Questions That Audit Professionals Will Need to Answer in 2026 (January 2026). Practitioner framework for AI-era audit focus. High credibility — professional standards organization. https://www.isaca.org/resources/news-and-trends/isaca-now-blog/2026/five-questions-that-audit-professionals-will-need-to-answer-in-2026
- PCAOB — AI and the Pursuit of Audit Quality: A Regulatory Perspective (2025). Board member speech on AI in auditing. Moderate credibility — personal views, not Board policy. https://pcaobus.org/news-events/speeches/speech-detail/ai-and-the-pursuit-of-audit-quality--a-regulatory-perspective