The Governance Evidence Package: What Your Company Must Produce When the Inquiry Arrives
Brandon Sneider | March 2026
Executive Summary
- The 90-day governance sprint produces documentation that satisfies insurers, enterprise buyers, and boards proactively. It does not produce the documentation a state attorney general, the FTC, or the EEOC demands reactively. Proactive governance artifacts describe what the company intends to do. Reactive evidence must prove what the company actually did, when it did it, and who authorized it. The gap between these two is where enforcement exposure lives.
- Five state AI laws now in effect or taking effect in 2026 — the Texas Responsible AI Governance Act (TRAIGA, January 2026), the Colorado AI Act (June 2026), Illinois AIPA, and California’s two AI statutes — each grant attorney general offices the power to issue civil investigative demands requiring specific documentation within defined timelines. Texas requires companies to produce AI system descriptions, training data details, performance metrics, and post-deployment monitoring evidence. Colorado requires annual impact assessments and 90-day notification to the AG upon discovering algorithmic discrimination.
- The most common governance failure is not missing policies — it is missing timestamps. Regulators do not ask “do you have a policy?” They ask “when was this policy in effect, who acknowledged it, and can you prove it was enforced on the date of the incident?” The evidence gap is temporal, not substantive.
- Companies that close the seven documentation gaps identified in this analysis can respond to a civil investigative demand within 10 business days instead of 60. The cost of closing these gaps is approximately $5,000-$15,000 in staff time layered onto the existing governance cadence — a small fraction of the penalty exposure from a single uncurable violation under TRAIGA ($80,000-$200,000).
The Enforcement Landscape Has Shifted From Guidance to Demands
Before 2025, AI governance was a best-practice recommendation. In 2026, it is a documentation obligation with penalties attached.
The shift happened across three vectors simultaneously. State legislatures passed enforceable AI statutes with attorney general enforcement authority. Federal agencies — the FTC, SEC, and EEOC — applied existing consumer protection, securities, and employment law to AI-specific conduct. And cyber insurers began conditioning coverage on governance documentation, creating a private enforcement layer that functions independently of regulators.
The Texas Responsible AI Governance Act (TRAIGA, effective January 1, 2026) grants the attorney general the power to issue a civil investigative demand to any company deploying a high-risk AI system, triggered by a single complaint (Tex. Bus. & Comm. Code § 552.103). That demand requires production of a high-level AI system description, training data details, input/output data descriptions, performance metrics and known limitations, and post-deployment monitoring and safeguard measures. The company has 60 days to cure a violation and must “explain how the violation was cured and identify any changes made to internal policies to prevent further violations” (§ 552.104). Failure to cure exposes the company to $80,000-$200,000 per uncurable violation plus $2,000-$40,000 per day of continued violation (§ 552.105).
Colorado’s AI Act (SB 24-205, effective June 30, 2026) requires deployers of high-risk AI systems to maintain updated impact assessments — reviewed annually and within 90 days of any substantial system modification — that document the system’s purpose, algorithmic discrimination risks and mitigations, data categories processed, and performance metrics (Skadden, June 2024; Brownstein, March 2026). When a deployer discovers algorithmic discrimination — or learns of it from a credible source — notification to the Colorado Attorney General is required within 90 days. The rebuttable presumption of reasonable care, the closest thing to a safe harbor, requires three simultaneous conditions: a current risk management policy aligned with NIST AI RMF or ISO 42001, completed impact assessments, and an annual review confirming no algorithmic discrimination.
At the federal level, the FTC’s Operation AI Comply (launched September 2024, continuing under the current administration) produced enforcement actions against DoNotPay ($193,000 settlement, January 2025), Rytr, Click Profit, Workado, and others — each resulting in consent orders that require ongoing compliance documentation, advertising substantiation records, and periodic compliance reporting (FTC, September 2024; Benesch, 2025). The SEC charged Presto Automation (January 2025) for materially misleading AI capability claims, finding the company “had no established process for drafting, reviewing, or approving periodic or current reports” and “never implemented disclosure controls” — the absence of documentation was itself the violation (SEC Administrative Proceeding 33-11352).
The EEOC’s AI enforcement posture treats algorithmic hiring tools as “selection procedures” subject to disparate impact analysis under Title VII. Employers must validate, monitor, document, and preserve prior versions of AI models used in employment decisions — including thresholds, escalation triggers, and configuration changes over time (EEOC Guidance, September 2025; Ogletree, 2025).
The Seven Documentation Gaps
Companies that completed the 90-day governance sprint have policies, registries, and training records. What they typically lack is the evidentiary layer that transforms governance artifacts from descriptions of intent into proof of execution. These seven gaps appear consistently when a proactive governance program meets a reactive regulatory demand.
Gap 1: Timestamped Policy Acknowledgment Records
What the sprint produces: An AI acceptable use policy.
What the investigator demands: Proof that the policy was in effect on the date of the incident, proof that the specific employee involved had acknowledged it, and the version of the policy they acknowledged.
The gap: Most mid-market companies distribute policies via email or SharePoint without collecting timestamped digital acknowledgments. When the AG asks “was Employee X bound by this policy on March 3?”, the company cannot prove it.
How to close it: Implement version-controlled policy distribution with electronic acknowledgment tracking. Every policy revision generates a new acknowledgment cycle. Retain acknowledgment records for the duration of the applicable statute of limitations — minimum three years under most state employment laws, longer for federal claims. HR platforms (BambooHR, Paylocity, Rippling) include policy acknowledgment modules at no additional cost. Total effort: 4-8 hours to configure, ongoing effort embedded in existing HR onboarding.
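For teams that want a record structure independent of any particular HR platform, here is a minimal sketch of what a version-controlled acknowledgment record and the point-in-time lookup behind “was Employee X bound on March 3?” could look like. Field names are illustrative assumptions, not drawn from any specific product:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class PolicyAcknowledgment:
    employee_id: str
    policy_name: str              # e.g. "AI Acceptable Use Policy"
    policy_version: str           # e.g. "2.1"; a new version triggers a new acknowledgment cycle
    policy_effective_date: date
    acknowledged_on: date

def acknowledgment_in_force(acks: list[PolicyAcknowledgment],
                            employee_id: str,
                            policy_name: str,
                            as_of: date) -> PolicyAcknowledgment | None:
    """Return the most recent acknowledgment by this employee, for this policy,
    made on or before the date in question; None is itself the answer."""
    matches = [a for a in acks
               if a.employee_id == employee_id
               and a.policy_name == policy_name
               and a.acknowledged_on <= as_of]
    return max(matches, key=lambda a: a.acknowledged_on, default=None)
```

Whether this lives in an HR platform or a spreadsheet export matters less than the fact that employee, policy version, and timestamp are bound together in one retained record.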
Gap 2: Decision-Level Audit Trails for AI-Assisted Outputs
What the sprint produces: A tool registry and risk tier assignments.
What the investigator demands: Evidence of what the AI system produced, what a human reviewed, what was changed, and what reached the client, customer, or applicant — for the specific transaction at issue.
The gap: The governance program governs at the system level (which tools are approved, what data they may process). The investigator interrogates at the transaction level (what happened in this specific case). Most companies have no logging of individual AI interactions, review decisions, or human overrides.
How to close it: For high-risk AI use cases (hiring decisions, client-facing work product, financial analysis, compliance determinations), implement input/output logging with human review documentation. This does not require enterprise-grade AI observability tooling. It requires a structured workflow: AI generates output, human reviews and documents the review (approved as-is, modified, rejected), final version is stored alongside the AI-generated version. Cloud-based document management systems (Google Workspace version history, SharePoint versioning) provide basic audit trail capability. For hiring tools, preserve AI-generated scores, human override records, and final disposition decisions per candidate. California requires four-year retention of automated decision data in employment contexts.
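One way to structure that transaction-level record is sketched below. The disposition categories mirror the approved-as-is / modified / rejected workflow described above; the field names and storage references are assumptions for illustration:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class ReviewDisposition(Enum):
    APPROVED_AS_IS = "approved_as_is"
    MODIFIED = "modified"
    REJECTED = "rejected"

@dataclass
class AIOutputReview:
    transaction_id: str        # the specific hiring decision, deliverable, or determination
    tool_name: str             # which registered AI system produced the output
    ai_output_ref: str         # pointer to the stored AI-generated version
    final_output_ref: str      # pointer to what actually reached the client or applicant
    reviewer: str
    disposition: ReviewDisposition
    review_notes: str = ""
    reviewed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
```

The two document references are the audit trail: the AI-generated version and the human-approved version, stored side by side, answer the “what changed” question without reconstruction.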
Gap 3: Incident Response Timeline Evidence
What the sprint produces: An incident response addendum for AI-specific scenarios.
What the investigator demands: A contemporaneous timeline proving the company discovered the incident on Date X, escalated on Date Y, notified affected parties on Date Z, and remediated by Date W.
The gap: The IR addendum describes the escalation process. It does not generate the timestamped artifacts that prove the process was followed. During actual incidents, companies rely on Slack messages, email chains, and meeting notes — none of which are structured for regulatory production.
How to close it: Create an AI incident log template (a simple structured document or ticketing system entry) with mandatory fields: discovery date/time, discovering party, initial classification, escalation date/time, escalation recipient, investigation actions with dates, root cause determination, remediation actions with completion dates, notification decisions with dates and recipients, and post-incident review date. Use the same ticketing system (Jira, ServiceNow, even a dedicated Slack channel with a bot that timestamps entries) the company already uses for IT incidents. Colorado requires 90-day notification to the AG after discovering algorithmic discrimination. Texas allows a 60-day cure period. Neither timeline is generous. A company that discovers an issue and spends three weeks determining whether it qualifies as an “incident” has already consumed half of its response window.
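A minimal sketch of the incident entry and the deadline arithmetic, assuming the 90-day Colorado notification window and the 60-day Texas cure period discussed above; the field set follows the mandatory fields listed, with names chosen for illustration:

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class AIIncidentEntry:
    incident_id: str
    discovered_on: date
    discovered_by: str
    initial_classification: str              # e.g. "suspected algorithmic discrimination"
    escalated_on: date | None = None
    escalated_to: str | None = None
    root_cause: str | None = None
    remediated_on: date | None = None
    notifications: list[tuple[str, date]] = field(default_factory=list)  # (recipient, date sent)
    post_incident_review_on: date | None = None

def days_remaining(discovered_on: date, window_days: int, today: date) -> int:
    """Days left in a statutory window counted from discovery (negative means blown)."""
    return (discovered_on + timedelta(days=window_days) - today).days

# Example: an issue discovered March 3 that sat in triage for three weeks
# has 69 of Colorado's 90 notification days left by March 24.
assert days_remaining(date(2026, 3, 3), 90, date(2026, 3, 24)) == 69
```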
Gap 4: Impact Assessment Currency and Version Control
What the sprint produces: An initial impact assessment for deployed AI systems.
What the investigator demands: The impact assessment that was current on the date of the incident — which may not be the most recent version — plus evidence that the assessment was updated within the statutorily required timeline after any substantial modification.
The gap: Colorado requires impact assessment updates within 90 days of substantial modifications and annual reviews. Most companies treat the initial assessment as a one-time exercise. When a vendor pushes a model update, the company rarely documents whether the update constitutes a “substantial modification” or triggers a reassessment obligation.
How to close it: Implement a change-trigger protocol: when an AI vendor issues a model update, the governance lead evaluates whether the change meets the “substantial modification” threshold and documents the determination either way. Store impact assessments with version numbers, effective dates, and expiration dates (12 months from completion). Maintain a version history log that records every assessment, revision, and trigger evaluation. This is a 30-minute monthly task during the existing governance pulse — not a separate workstream.
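The version history log and the trigger determination can be as simple as the two records sketched here, a hypothetical structure assuming the 90-day substantial-modification window and the 12-month annual review cadence noted above:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class ChangeTriggerEvaluation:
    system_name: str
    vendor_change_description: str      # what the vendor shipped or announced
    evaluated_on: date
    evaluated_by: str
    substantial_modification: bool      # the documented determination, either way
    reassessment_due_by: date | None    # evaluated_on + 90 days if substantial, else None

@dataclass
class ImpactAssessmentVersion:
    system_name: str
    version: str                        # e.g. "1.2"
    completed_on: date
    trigger: str                        # "annual review" or a ChangeTriggerEvaluation reference

    @property
    def expires_on(self) -> date:
        # Assumes the 12-month currency window from the annual review requirement
        return self.completed_on + timedelta(days=365)
```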
Gap 5: Training Completion Evidence With Content Versioning
What the sprint produces: Role-specific AI training sessions with completion tracking.
What the investigator demands: Proof that the specific employee received training on the specific topic relevant to the incident, the content of that training, and when they received it.
The gap: Most companies track completion (who attended or clicked through) but do not preserve the training content itself. When the AG asks “what were employees trained to do regarding customer data in AI tools?”, the company can prove someone attended training but cannot produce the slide deck, quiz, or scenario that defined the expected behavior.
How to close it: Archive every version of training materials alongside completion records. When training content changes, create a new version and link subsequent completions to the new version. This allows precise answers: “Employee X completed Version 2.1 of the AI data handling training on February 15, 2026. Here is Version 2.1. Version 2.2 was released March 1 with updated guidance on [specific topic].” Store in the same location as policy acknowledgments. Total storage cost: negligible. Total effort: 1-2 hours per training revision.
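Linking completions to archived content versions takes one extra field on the completion record and an archive keyed by version. A sketch under assumed names and paths:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class TrainingCompletion:
    employee_id: str
    course: str
    content_version: str      # ties the completion to the exact material delivered
    completed_on: date

# Archive keyed by (course, version) -> location of the preserved deck, quiz, or scenario.
content_archive: dict[tuple[str, str], str] = {
    ("AI Data Handling", "2.1"): "training-archive/ai-data-handling/v2.1/",
    ("AI Data Handling", "2.2"): "training-archive/ai-data-handling/v2.2/",
}

def training_evidence(c: TrainingCompletion) -> str:
    """Produce the precise answer the investigator is asking for."""
    archived = content_archive[(c.course, c.content_version)]
    return (f"{c.employee_id} completed version {c.content_version} of '{c.course}' "
            f"on {c.completed_on.isoformat()}; archived content at {archived}")
```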
Gap 6: Vendor Oversight Documentation
What the sprint produces: A vendor assessment checklist for AI tools.
What the investigator demands: Evidence that the company conducted due diligence on the specific AI vendor involved in the incident, monitored the vendor’s compliance with agreed terms, and responded to known vendor risks.
The gap: The sprint evaluates vendors at onboarding. It does not generate ongoing oversight evidence. When a vendor’s data practices change, the company rarely documents its awareness of the change, its assessment of the impact, or its response. The EEOC has specifically noted that using a third-party AI tool does not absolve the employer of discrimination liability — the employer must validate and monitor the vendor’s tool independently (EEOC Guidance, 2023-2025).
How to close it: Add a vendor review cadence to the quarterly governance cycle: for each Tier 1 (high-risk) AI vendor, document any changes to the vendor’s terms of service, data processing agreements, or model architecture during the quarter. Record the governance lead’s assessment of whether changes affect risk tier, require contract amendments, or trigger an impact assessment update. This is 2-4 hours of quarterly work that produces the evidence chain regulators expect: “The vendor changed its training data policy on April 1. The governance team reviewed on April 12. The assessment concluded no material risk change. Here is the documented review.”
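The quarterly review itself can be captured in a single record per Tier 1 vendor; the fields below track the determinations the paragraph describes, with illustrative names:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class QuarterlyVendorReview:
    vendor: str
    quarter: str                              # e.g. "2026-Q2"
    reviewed_on: date
    reviewed_by: str
    changes_observed: list[str] = field(default_factory=list)  # ToS, DPA, or model changes noted
    risk_tier_changed: bool = False
    contract_amendment_needed: bool = False
    impact_assessment_update_triggered: bool = False
    assessment_notes: str = ""
```

Four of these records per vendor per year constitute the entire evidence chain in the April 1 / April 12 example.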
Gap 7: Enforcement Readiness — The Response Package Itself
What the sprint produces: Governance documentation organized by function (policy, registry, training, controls).
What the investigator demands: Governance documentation organized by the investigator’s questions — not the company’s filing system.
The gap: When a civil investigative demand arrives, the GC must produce a coherent response package within a regulatory deadline. Companies with strong governance programs still spend 40-80 hours assembling, cross-referencing, and formatting their response because their documentation is distributed across HR systems, IT ticketing platforms, SharePoint sites, and individual email folders.
How to close it: Maintain a pre-assembled “evidence binder” — a single digital location (a dedicated SharePoint folder, Google Drive directory, or compliance platform workspace) organized by regulatory demand category, not by internal function. The binder maps to the common investigative demand structure: (1) AI systems deployed and their purposes, (2) data processed and safeguards, (3) risk assessments and impact analyses, (4) policies and employee acknowledgments, (5) training records, (6) incident history and response evidence, (7) vendor oversight documentation. Update the binder quarterly during the existing governance review. A company that maintains this index can respond to a civil investigative demand in 10 business days. A company that does not will spend the entire 60-day cure period assembling evidence instead of actually curing violations.
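A lightweight way to keep the binder honest is a quarterly completeness check against the seven demand categories. This is a sketch assuming the binder lives in a synced folder; the directory names are illustrative:

```python
from pathlib import Path

# The seven demand categories from the response-package structure above,
# mirrored as folder names inside the binder location.
BINDER_CATEGORIES = [
    "01-ai-systems-and-purposes",
    "02-data-and-safeguards",
    "03-risk-and-impact-assessments",
    "04-policies-and-acknowledgments",
    "05-training-records",
    "06-incident-history",
    "07-vendor-oversight",
]

def binder_gaps(binder_root: str) -> list[str]:
    """Return the categories with no documents filed yet, i.e. the quarterly
    'is anyone actually maintaining this?' check."""
    root = Path(binder_root)
    return [cat for cat in BINDER_CATEGORIES if not any((root / cat).glob("*"))]

if __name__ == "__main__":
    missing = binder_gaps("./evidence-binder")
    print("Empty or missing categories:", ", ".join(missing) or "none")
```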
Key Data Points
| Metric | Value | Source |
|---|---|---|
| TRAIGA penalty — uncurable violation | $80,000-$200,000 per violation | Tex. Bus. & Comm. Code § 552.105 |
| TRAIGA penalty — ongoing violation | $2,000-$40,000 per day | Tex. Bus. & Comm. Code § 552.105 |
| Colorado AI Act penalty | Up to $20,000 per violation | SB 24-205 (2024) |
| Colorado AG notification window — algorithmic discrimination | 90 days from discovery | SB 24-205 (2024) |
| Texas cure period after AG violation notice | 60 days | Tex. Bus. & Comm. Code § 552.104 |
| FTC DoNotPay settlement | $193,000 | FTC, January 2025 |
| SEC Presto Automation — documentation finding | “Never implemented disclosure controls” | SEC Administrative Proceeding 33-11352, January 2025 |
| California automated decision data retention | 4 years minimum | CCPA/CPRA ADMT regulations |
| State AI laws effective in 2026 | 5+ (TX, CO, IL, CA x2) | Gunderson Dettmer, January 2026 |
| Cost to close seven documentation gaps | $5,000-$15,000 staff time | Analysis of HR platform configuration + quarterly governance effort |
| 42 state AGs — bipartisan AI safety coalition letter | December 9, 2025 | NAAG, December 2025 |
| Cyber insurance — governance documentation as underwriting prerequisite | Required for AI coverage | ISACA, 2025; Continuum Insurance, 2026 |
What This Means for Your Organization
The governance sprint builds the house. The evidence package proves it was built to code. Without the evidence layer, a company with a strong governance program is in roughly the same position during a regulatory inquiry as a company with no program at all — both must reconstruct their compliance posture from scattered records under time pressure, with legal counsel billing hourly.
The practical reality for a 200-500 person company: the seven gaps identified here can be closed within the existing governance cadence for an incremental investment of $5,000-$15,000 in staff time over 30-60 days. The largest effort is Gap 2 (decision-level audit trails), which requires workflow changes in high-risk use cases. The others are configuration changes to existing systems — HR platform acknowledgment tracking, versioned training archives, a structured incident log, and a quarterly evidence binder assembly.
The companies that will navigate their first regulatory inquiry without crisis are the ones that treat governance documentation as ongoing evidence production, not annual compliance filing. The difference between a 10-day response and a 60-day scramble is not better lawyers. It is better filing.
If this raised questions about how your governance program translates into enforcement readiness, I’d welcome the conversation — brandon@brandonsneider.com
Sources
- Texas Responsible AI Governance Act (TRAIGA), HB 149 — Tex. Bus. & Comm. Code §§ 552.101-552.105. Effective January 1, 2026. Civil investigative demand authority, 60-day cure period, penalty structure. Source credibility: primary legislation. https://www.nortonrosefulbright.com/en/knowledge/publications/c6c60e0c/the-texas-responsible-ai-governance-act
- Colorado AI Act, SB 24-205 — Effective June 30, 2026. Impact assessment requirements, AG notification obligations, rebuttable presumption of reasonable care. Source credibility: primary statute, analyzed by Skadden (June 2024) and Brownstein (March 2026). https://www.skadden.com/insights/publications/2024/06/colorados-landmark-ai-act
- FTC Operation AI Comply — Enforcement sweep launched September 2024, continuing 2025-2026. DoNotPay ($193,000), Rytr, Click Profit, Workado settlements. Source credibility: federal agency enforcement actions — primary source. https://www.ftc.gov/news-events/news/press-releases/2024/09/ftc-announces-crackdown-deceptive-ai-claims-schemes
- SEC v. Presto Automation, Administrative Proceeding 33-11352 — January 2025. Materially misleading AI capability claims, absence of disclosure controls as violation. Source credibility: federal agency enforcement action — primary source. https://www.sec.gov/enforcement-litigation/administrative-proceedings/33-11352-s
- EEOC AI Hiring Guidance — Select Issues: Assessing Adverse Impact in Software, Algorithms, and AI (2023-2025). Employer documentation obligations for algorithmic selection procedures. Source credibility: federal agency guidance — authoritative, not binding. https://www.eeoc.gov/laws/guidance/select-issues-assessing-adverse-impact-software-algorithms-and-artificial
- Gunderson Dettmer, “2026 AI Laws Update: Key Regulations and Practical Guidance” — January 2026. Multi-state compliance summary including TX, CO, CA, IL, NY. Source credibility: law firm analysis — secondary source interpreting primary legislation. https://www.gunder.com/en/news-insights/insights/2026-ai-laws-update-key-regulations-and-practical-guidance
- 42 State Attorneys General Bipartisan AI Safety Coalition Letter — December 9, 2025. 16 safeguards demanded of 13 AI companies. Source credibility: primary source — official multi-state enforcement communication. https://www.naag.org/press-releases/bipartisan-coalition-of-state-attorneys-general-issues-letter-to-ai-industry-leaders-on-child-safety/
- ISACA, “Cyber Insurance in Crisis with AI Blind Spots” — 2025. Governance documentation as insurance underwriting prerequisite. Source credibility: independent professional association — high. https://www.isaca.org/resources/news-and-trends/isaca-now-blog/2025/cyber-insurance-in-crisis-with-ai-blind-spots
- Norton Rose Fulbright, “The Texas Responsible AI Governance Act” — 2025. CID requirements, cure period, penalty structure analysis. Source credibility: law firm analysis — secondary source. https://www.nortonrosefulbright.com/en/knowledge/publications/c6c60e0c/the-texas-responsible-ai-governance-act
- Internetwork Defense, “AI Governance Controls Briefing: Evidence Gap” — March 6, 2026. Policy-to-operations evidence disconnect, ISO 42001 compliance gap analysis. Source credibility: industry analysis — moderate. https://internetworkdefense.com/ai-governance-controls-briefing-2026-03-06-evidence-gap/
Brandon Sneider | brandon@brandonsneider.com | March 2026