The AI Regulatory Preparation Roadmap: A 2026-2027 Compliance Calendar for Multi-State Companies
Brandon Sneider | March 2026
Executive Summary
- Seven distinct AI regulatory regimes are now live or taking effect between January 2026 and January 2027 — Illinois (January 1, 2026), Texas (January 1, 2026), California risk assessments (January 1, 2026), Connecticut (July 1, 2026), Colorado (June 30, 2026, under rework), California ADMT opt-out (January 1, 2027), and New York State RAISE Act (January 1, 2027). The multi-state compliance matrix maps requirements; this document sequences them into a quarter-by-quarter preparation calendar.
- Companies that started preparation in Q1 2026 are already behind on two deadlines (Illinois, Texas) and have approximately 90 days before the next wave (Colorado, Connecticut) arrives. The good news: a 90-day governance sprint built to the strictest standard covers 80-90% of requirements across all jurisdictions simultaneously.
- Federal preemption is uncertain and should not delay action. President Trump’s December 2025 executive order established a DOJ task force to challenge state AI laws, but no state law has been struck down, no preemptive legislation has passed Congress, and companies that paused compliance to wait for federal action face liability under existing state law with no governance documentation to defend against it (King & Spalding, January 2026).
- The EU AI Act high-risk enforcement begins August 2, 2026, with penalties up to 35 million euros or 7% of global turnover. Any mid-market company whose AI outputs affect EU residents — through customers, vendors, or data processing — faces extraterritorial obligations that require separate analysis.
- The 5% of mid-market companies navigating this well treat the regulatory calendar as a project plan, not a legal research assignment. Each deadline triggers a specific deliverable. This document identifies what to deliver, by when, and who owns it.
The 2026-2027 Regulatory Calendar
The calendar below sequences every actionable compliance deadline for a mid-market company operating across five or more U.S. states. Deadlines are organized by quarter with specific preparation milestones backfilled to allow realistic implementation time.
Q1 2026 (January - March): The Laws Already in Effect
Three major regimes went live on January 1, 2026. If compliance programs are not in place, remediation should be treated as urgent.
| Deadline | Law | What It Requires | Penalty Exposure |
|---|---|---|---|
| Jan 1, 2026 | Illinois HB 3773 (IHRA amendment) | Notice to employees/applicants when AI is used in employment decisions. Prohibits AI that produces discriminatory outcomes (disparate impact standard). Employers liable even for third-party vendor tools. | Private right of action; uncapped compensatory damages, back pay, emotional damages, attorneys’ fees |
| Jan 1, 2026 | Texas TRAIGA (HB 149) | Prohibits AI deployed with intent to discriminate, encourage self-harm, or violate constitutional rights. Requires healthcare AI disclosure. 36-month regulatory sandbox available. | $10K-$200K per violation; $2K-$40K/day ongoing; AG enforcement only, 60-day cure period |
| Jan 1, 2026 | California CCPA/CPRA risk assessments | Risk assessments required for processing personal data using AI, including profiling, targeted advertising, and selling personal data. Annual summary reporting to CPPA begins April 1, 2028 for pre-2026 processing. | $2,500/violation; $7,500/intentional violation; AG and CPPA enforcement |
| Jan 1, 2026 | California TFAIA (SB 53) | Frontier model developers (>10^26 operations) must publish risk frameworks and report safety incidents within 15 days. | Up to $1M per violation |
| Jan 1, 2026 | California AB 2013 (Training Data Transparency) | Public-use generative AI developers must publish high-level training data information. | Penalties for noncompliance (unspecified) |
What a mid-market company should deliver by end of Q1 2026:
- AI system inventory. Complete catalog of every AI tool in use — purchased, platform-embedded, and shadow AI. Categorize by risk level and jurisdiction of impact.
- Illinois employment AI notice. Written notification to all Illinois employees and applicants that AI is used in employment decisions, specifying which decisions and what tools. This is already required — absence of notice is itself a civil rights violation under the IHRA.
- Texas TRAIGA compliance statement. Document that all deployed AI systems have been reviewed for prohibited uses. Maintain NIST AI RMF compliance documentation — TRAIGA allows entities to demonstrate good faith by following the NIST AI RMF.
- California risk assessment initiation. Begin risk assessments for any AI processing of California residents’ personal data. Assessments must cover purpose, data categories, discrimination risk analysis, performance metrics, and mitigation measures.
- Employment AI disparate impact analysis. For any company using AI in hiring, promotion, or performance management: run baseline demographic disparity analysis. Illinois requires impact-based compliance; NYC Local Law 144 requires annual independent bias audits. The same analysis serves both.
Owner: GC and CHRO jointly, with CIO providing the AI system inventory. Budget: $10K-$20K if handled internally with outside counsel review; $25K-$50K if outsourced to a specialized compliance firm.
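The baseline demographic disparity analysis in the deliverables above is usually a selection-rate comparison against the four-fifths rule. A minimal Python sketch — the group labels are placeholders, and the 0.8 threshold is the conventional screening heuristic, not the legal standard under any single statute:

```python
from collections import Counter

def selection_rates(outcomes):
    """outcomes: list of (group, selected_bool) tuples from hiring/promotion data."""
    totals, selected = Counter(), Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def impact_ratios(rates):
    """Each group's selection rate divided by the highest group's rate.
    A ratio below 0.8 is the conventional four-fifths-rule flag."""
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Illustrative data: group A selected 40/100, group B selected 25/100.
outcomes = [("A", True)] * 40 + [("A", False)] * 60 \
         + [("B", True)] * 25 + [("B", False)] * 75
rates = selection_rates(outcomes)   # {"A": 0.40, "B": 0.25}
ratios = impact_ratios(rates)       # {"A": 1.0, "B": 0.625} -> flag group B
```

The same rate table feeds both the Illinois disparate-impact review and the evidence file for a NYC Local Law 144 audit, which is why one analysis serves both regimes.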
Q2 2026 (April - June): The Colorado and Connecticut Wave
Two additional regimes take effect at the end of Q2, and preparation must begin now.
| Deadline | Law | What It Requires | Penalty Exposure |
|---|---|---|---|
| June 30, 2026 | Colorado SB 24-205 (AI Act) | Developers and deployers of high-risk AI systems must use “reasonable care” to protect consumers from algorithmic discrimination. Requires risk management policy, annual impact assessments, and consumer disclosures when AI makes adverse decisions. | $20,000/violation (each consumer counted separately); AG enforcement only; 90-day cure period under proposed rework |
| July 1, 2026 | Connecticut Public Act 25-113 (CTDPA amendment) | Privacy notice must disclose whether personal data is collected, used, or sold for training large language models. Applies broadly to any LLM training data use. | AG enforcement under CTDPA |
Critical context on Colorado: on March 17, 2026, Governor Polis’s working group reached unanimous consensus on a replacement framework that shifts from mandatory bias audits to transparency-and-notice, extends the cure period to 90 days, and allocates fault between developers and deployers based on relative responsibility (Colorado Governor’s Office, March 2026). The replacement bill must pass the legislature before June 30 to supersede the original law. Companies should prepare for the original law’s requirements while monitoring the legislative process — the original law remains enforceable until replaced.
What a mid-market company should deliver by end of Q2 2026:
- Impact assessments for high-risk AI systems. Colorado requires annual assessments for AI used in consequential decisions (employment, credit, housing, insurance, healthcare, education). Build these using the same template initiated for California, adding Colorado-specific fields: principal reason documentation for adverse decisions and human review availability.
- Risk management policy. A documented program governing how the organization identifies, evaluates, and mitigates algorithmic discrimination risk. This is Colorado’s core compliance mechanism. Build it on NIST AI RMF — Colorado law explicitly references NIST compliance as a factor in enforcement decisions.
- Consumer/employee disclosure framework. Pre-decision notice that AI is being used, with jurisdiction-specific supplements. Colorado requires disclosure when AI makes a decision adverse to consumer interests. California requires pre-use notice plus opt-out. Illinois requires employment-specific notice. One framework, three overlays.
- Privacy notice update for Connecticut. Add LLM training data disclosure to the consumer-facing privacy notice. This is a straightforward notice obligation, but the definition of “LLM” is undefined in the statute — take a conservative interpretation that covers any AI system trained on personal data.
- Vendor AI documentation file. Collect developer representations, data processing terms, and intended-use documentation from every AI vendor. Texas TRAIGA allocates fault to deployers who use AI outside the developer’s intended scope. Colorado’s proposed framework allocates fault between developers and deployers based on relative responsibility. Documentation protects the company in both regimes.
Owner: GC (impact assessments, risk management policy), CIO (vendor documentation, privacy notice), CHRO (employment-specific disclosures). Budget: $15K-$30K incremental to Q1 investment.
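The “one framework, three overlays” disclosure approach can be sketched as a base notice plus jurisdiction-specific supplements. The strings below paraphrase the obligations for illustration only — they are not statutory language or compliant notice text:

```python
# Base disclosure plus jurisdiction-specific supplements ("one framework,
# three overlays"). All text here is illustrative paraphrase, not legal copy.
BASE_NOTICE = "This decision was made or assisted by an automated system."

OVERLAYS = {
    "CO": "You may request the principal reason for an adverse decision "
          "and human review of it.",
    "CA": "You may opt out of automated decision-making before it is used.",
    "IL": "AI is used in employment decisions affecting you; the specific "
          "decisions and tools are listed in the employee notice.",
}

def build_notice(jurisdictions):
    """Compose the consumer/employee disclosure for the states in scope."""
    parts = [BASE_NOTICE]
    parts += [OVERLAYS[j] for j in jurisdictions if j in OVERLAYS]
    return " ".join(parts)
```

The design choice is that adding a new state means adding one overlay entry, not drafting a new program — the base notice and the delivery mechanism stay constant.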
Q3 2026 (July - September): EU AI Act and Ongoing Maintenance
| Deadline | Law | What It Requires | Penalty Exposure |
|---|---|---|---|
| Aug 2, 2026 | EU AI Act (high-risk enforcement) | Full compliance for high-risk AI systems: technical documentation, quality management, conformity assessments, human oversight, accuracy/robustness standards. Applies extraterritorially to any AI whose output is “used in the Union.” | Up to EUR 35M or 7% global turnover (prohibited systems); EUR 15M or 3% (high-risk noncompliance); EUR 7.5M or 1% (false information) |
| Aug 2, 2026 | California SB 942 (AI Transparency Act, as amended) | Covered providers (>1M monthly users) must provide free AI detection tools, user-facing manifest disclosures (watermarks/labels), and latent disclosures in AI-generated image, video, and audio content. | $5,000/violation per day |
EU AI Act applicability test for mid-market companies: Most 200-500 person American companies are not direct providers of high-risk AI systems under the EU AI Act. But the extraterritorial trigger is broader than many companies expect: if AI system outputs are “used in the Union” — meaning a customer in Europe receives an AI-generated document, an AI-scored credit decision, or an AI-assisted professional opinion — the deployer obligations may apply. Companies with European clients, offshore teams processing EU data, or SaaS products accessible to EU users should conduct a targeted applicability assessment during Q3.
What a mid-market company should deliver by end of Q3 2026:
- EU AI Act applicability assessment. Determine whether any deployed AI systems produce outputs that reach EU residents. If yes, begin the conformity assessment process for high-risk systems. If no, document the assessment and revisit annually.
- Quarterly governance review. The 90-day governance sprint is complete. Q3 is the first quarterly review: update the AI system inventory for new tools deployed in Q2, refresh risk assessments for any systems that changed, review bias testing results, and document vendor contract changes.
- Colorado compliance monitoring. By September 2026, the Colorado legislature will have acted (or not) on the replacement framework. Adjust the compliance program based on whatever law is actually in effect.
- Federal preemption status check. The Commerce Department’s review of “burdensome” state AI laws was due March 11, 2026. The DOJ AI Litigation Task Force was activated January 10, 2026. By Q3, the practical impact of federal action (if any) on state law enforcement will be clearer. No state law has been struck down as of March 2026.
Owner: GC (EU assessment, compliance monitoring), CIO (quarterly inventory update), outside counsel (federal preemption analysis). Budget: $10K-$20K for EU applicability assessment; $5K-$10K for quarterly review.
Q4 2026 (October - December): Year-End Preparation and 2027 Readiness
| Upcoming Deadline | Law | What It Requires |
|---|---|---|
| Jan 1, 2027 | California ADMT opt-out (CCPA regulations) | Businesses using automated decision-making technology for “significant decisions” must provide pre-use notice, at least two opt-out methods, access to decision logic, and human review on request. Covers employment, financial services, housing, education, healthcare decisions. |
| Jan 1, 2027 | New York RAISE Act (S6953-B) | Frontier model developers (>10^26 operations, >$500M revenue) must publish safety protocols, report incidents to the State within 72 hours. |
| Dec 31, 2027 | California risk assessment completion | Risk assessments for pre-January 2026 AI processing must be completed. |
| April 1, 2028 | California CPPA reporting | Annual summary reporting of risk assessments to the California Privacy Protection Agency begins. |
What a mid-market company should deliver by end of Q4 2026:
- ADMT opt-out architecture. Build the consumer/employee opt-out mechanism required by California’s January 2027 ADMT regulations before the deadline. Requirements: (a) pre-use notice explaining what ADMT is used for, (b) at least two methods for opting out, (c) access to information about ADMT decision logic on request, (d) right to human review of adverse decisions. Design this as the unified opt-out that also satisfies Colorado’s human review rights and Virginia’s profiling opt-out.
- Annual bias audit completion. NYC Local Law 144 requires annual independent bias audits for automated employment decision tools. Complete the audit before year-end to maintain compliance and generate evidence usable for Illinois disparate-impact defense and Colorado good-faith documentation.
- 2027 compliance budget. Present the annual AI governance maintenance budget: ongoing monitoring, policy refresh, vendor re-assessment, bias audit, training updates. The 90-day governance sprint research documents this at $15K-$45K/year for a 200-500 person company.
- Board/leadership regulatory briefing. Update the board on the compliance posture: which deadlines were met, what documentation is in place, what gaps remain, and what 2027 obligations are approaching. The board fiduciary duty research documents director liability for failing to oversee AI risk — this briefing protects the directors as much as the company.
Owner: CIO (opt-out architecture), GC (bias audit, board briefing), CFO (compliance budget). Budget: $15K-$30K for opt-out system build; $10K-$15K for independent bias audit.
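The four ADMT requirements lend themselves to a per-system gap check. A hypothetical sketch — field names are illustrative, not regulatory terms, and a real implementation would track one record per AI system:

```python
from dataclasses import dataclass, field

@dataclass
class ADMTProgram:
    """Gap checklist for the four California ADMT requirements
    (field names are illustrative, not regulatory terms)."""
    pre_use_notice: bool = False
    opt_out_methods: list = field(default_factory=list)  # e.g. ["web form", "toll-free number"]
    decision_logic_access: bool = False
    human_review_on_request: bool = False

    def gaps(self):
        """Return the requirements not yet satisfied."""
        missing = []
        if not self.pre_use_notice:
            missing.append("at least two opt-out methods" if False else "pre-use notice")
        if len(self.opt_out_methods) < 2:
            missing.append("at least two opt-out methods")
        if not self.decision_logic_access:
            missing.append("access to decision logic")
        if not self.human_review_on_request:
            missing.append("human review of adverse decisions")
        return missing

program = ADMTProgram(pre_use_notice=True, opt_out_methods=["web form"])
# program.gaps() -> ["at least two opt-out methods", "access to decision logic",
#                    "human review of adverse decisions"]
```

Running the check quarterly turns the January 2027 deadline into a shrinking gap list rather than a year-end scramble.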
Key Data Points
- 7 distinct AI regulatory regimes now active or taking effect in 2026-2027 for a multi-state U.S. company, with no two sharing identical requirements (King & Spalding, January 2026; Baker Botts, January 2026).
- Illinois HB 3773 carries the steepest employment risk: private right of action, uncapped compensatory damages, and liability for third-party vendor tools — the only major state AI law with private litigation exposure (Hinshaw & Culbertson, 2026; Manatt Phelps & Phillips, 2026).
- Colorado’s AI Act is being replaced, but the original law remains enforceable until the legislature acts. The March 17, 2026 working group consensus extends the cure period to 90 days and shifts from mandatory audits to transparency-and-notice (Colorado Governor’s Office, March 2026).
- Texas TRAIGA provides a 36-month regulatory sandbox and allows NIST AI RMF compliance as an affirmative defense — the most company-friendly enforcement posture among major state laws (Norton Rose Fulbright, 2025; Latham & Watkins, 2025).
- Federal preemption has not displaced any state law. The December 2025 executive order lacks preemptive legal force without congressional action. The DOJ task force has been operational since January 10, 2026, with no enforcement actions against state laws as of March 2026 (Ropes & Gray, March 2026; Paul Hastings, 2025).
- EU AI Act penalties dwarf U.S. state penalties: up to EUR 35M or 7% of global turnover vs. $20K-$200K per violation under U.S. state laws. The extraterritorial trigger captures any AI system whose outputs are “used in the Union” (EU AI Act Article 2; Latham & Watkins, 2025).
- California ADMT regulations (January 1, 2027) create the most operationally complex obligation: two opt-out methods, access to decision logic, and human review on request for any “significant decision” made using automated decision-making technology (CPPA, September 2025; Skadden, October 2025).
- Over 1,100 AI-related bills were introduced across U.S. state legislatures in 2025 alone, with 145 enacted into law (IAPP, 2026). The compliance landscape is expanding, not stabilizing.
The Wildcard: Federal Preemption
The regulatory calendar above assumes state laws remain enforceable. That assumption deserves scrutiny.
President Trump’s December 11, 2025 executive order, “Ensuring a National Policy Framework for Artificial Intelligence,” established three mechanisms to challenge state AI regulation: a DOJ AI Litigation Task Force (operational since January 10, 2026), a Commerce Department review of “burdensome” state laws (due March 11, 2026), and conditional federal grant funding (states that enact “onerous” AI laws may lose broadband equity funding) (White House, December 2025).
The executive order explicitly carves out child safety, data center infrastructure, and state government procurement from preemption. But it targets the core of the compliance calendar: bias mitigation requirements, disclosure obligations, and transparency mandates.
The practical impact as of March 2026: zero. No state law has been challenged in court. No preemptive federal legislation has passed Congress. Executive orders lack independent preemptive legal force — only a statute enacted by Congress or a regulation issued under congressional authorization can preempt state law (Ropes & Gray, March 2026). States with enacted legislation — California, Colorado, Texas, Illinois — have shown no indication of voluntary repeal.
The prudent approach: build the compliance program on the assumption that state laws remain in effect. If federal preemption materializes, the documentation produced for state compliance becomes the foundation for whatever federal standard replaces it. If preemption does not materialize — the more likely scenario through 2027 — the company is protected. The cost of building a governance program that becomes partially unnecessary is $20K-$50K. The cost of not building one and facing enforcement is multiples of that.
What This Means for Your Organization
The regulatory calendar is a project plan. Each quarter through 2027 carries specific deliverables with identifiable owners and budgets. The total cost of sequential compliance — building from the Q1 foundation through Q4 2026 and into 2027 — runs $50K-$130K for a 200-500 person company. That is less than half the cost of a single enforcement action under Colorado or Texas law, less than the premium increase from a single AI-related cyber insurance exclusion, and a fraction of the litigation exposure under Illinois’s private right of action.
The companies that do this efficiently share three characteristics. First, they build once and overlay — a single governance architecture with jurisdiction-specific supplements, not separate programs per state. Second, they assign ownership explicitly — the GC, CIO, and CHRO each own specific deliverables, not a shared “AI committee” that diffuses accountability. Third, they treat the calendar as a rolling obligation, not a one-time project — quarterly reviews, annual bias audits, and triggered assessments maintain the program after the initial sprint.
If this raised questions about how the calendar applies to your specific jurisdictional exposure, I’d welcome the conversation — brandon@brandonsneider.com.
Sources
- King & Spalding, “New State AI Laws are Effective on January 1, 2026, But a New Executive Order Signals Disruption,” January 2026. Independent law firm analysis. High credibility. https://www.kslaw.com/news-and-insights/new-state-ai-laws-are-effective-on-january-1-2026-but-a-new-executive-order-signals-disruption
- Baker Botts, “U.S. Artificial Intelligence Law Update: Navigating the Evolving State and Federal Regulatory Landscape,” January 2026. Independent law firm analysis. High credibility. https://www.bakerbotts.com/thought-leadership/publications/2026/january/us-ai-law-update
- Colorado Governor’s Office, “Colorado Artificial Intelligence Policy Workgroup Delivers Unanimous Support for Revised Policy Framework,” March 17, 2026. Primary source — government press release. Highest credibility. https://governorsoffice.colorado.gov/governor/news/colorado-artificial-intelligence-policy-workgroup-delivers-unanimous-support-revised-policy
- Clark Hill, “Colorado’s AI Law Delayed Until June 2026: What the Latest Setback Means for Businesses,” September 2025. Independent law firm analysis. High credibility. https://www.clarkhill.com/news-events/news/colorados-ai-law-delayed-until-june-2026-what-the-latest-setback-means-for-businesses/
- Norton Rose Fulbright, “The Texas Responsible AI Governance Act: What Your Company Needs to Know Before January 1,” 2025. Independent law firm analysis. High credibility. https://www.nortonrosefulbright.com/en/knowledge/publications/c6c60e0c/the-texas-responsible-ai-governance-act
- Latham & Watkins, “Texas Signs Responsible AI Governance Act Into Law,” 2025. Independent law firm analysis. High credibility. https://www.lw.com/en/insights/texas-signs-responsible-ai-governance-act-into-law
- Hinshaw & Culbertson, “Illinois Adopts New AI-in-Employment Regulations: What Employers Need to Know for 2026,” 2026. Independent law firm analysis. High credibility. https://www.hinshawlaw.com/en/insights/blogs/employment-law-observer/illinois-adopts-new-ai-in-employment-regulations-what-employers-need-to-know-for-2026
- Manatt Phelps & Phillips, “AI-Assisted Hiring Faces a New Compliance Landscape in 2026,” 2026. Independent law firm analysis. High credibility. https://www.manatt.com/insights/newsletters/employment-law/ai-assisted-hiring-faces-a-new-compliance-landscape-in-2026-california-and-illinois-put-discriminatory-impact-and-transparency-front-and-center
- CPPA (California Privacy Protection Agency), “California Finalizes Regulations to Strengthen Consumers’ Privacy,” September 2025. Primary source — government agency. Highest credibility. https://cppa.ca.gov/announcements/2025/20250923.html
- Skadden, “California Finalizes CCPA Regulations for Automated Decision-Making Technology, Risk Assessments and Cybersecurity Audits,” October 2025. Independent law firm analysis. High credibility. https://www.skadden.com/insights/publications/2025/10/california-finalizes-cppa-regulations
- Greenberg Traurig, “Revised and New CCPA Regulations Set to Take Effect on Jan. 1, 2026 — Summary of Near-Term Action Items,” September 2025. Independent law firm analysis. High credibility. https://www.gtlaw.com/en/insights/2025/9/revised-and-new-ccpa-regulations-set-to-take-effect-on-jan-1-2026-summary-of-near-term-action-items
- Bryan Cave Leighton Paisner, “Connecticut Quietly Adds AI Disclosure Mandate to Consumer Privacy Law,” 2025. Independent law firm analysis. High credibility. https://www.bclplaw.com/en-US/events-insights-news/connecticut-quietly-adds-ai-disclosure-mandate-to-consumer-privacy-law.html
- White House, “Ensuring a National Policy Framework for Artificial Intelligence,” Executive Order, December 11, 2025. Primary source — executive order. Highest credibility. https://www.whitehouse.gov/presidential-actions/2025/12/eliminating-state-law-obstruction-of-national-artificial-intelligence-policy/
- Ropes & Gray, “Examining the Landscape and Limitations of the Federal Push to Override State AI Regulation,” March 2026. Independent law firm analysis. High credibility. https://www.ropesgray.com/en/insights/alerts/2026/03/examining-the-landscape-and-limitations-of-the-federal-push-to-override-state-ai-regulation
- Paul Hastings, “President Trump Signs Executive Order Challenging State AI Laws,” 2025. Independent law firm analysis. High credibility. https://www.paulhastings.com/insights/client-alerts/president-trump-signs-executive-order-challenging-state-ai-laws
- IAPP, “US State AI Governance Legislation Tracker,” 2026. Independent nonprofit. High credibility. https://iapp.org/resources/article/us-state-ai-governance-legislation-tracker
- Akin Gump, “New California Regulations Regarding Employer Use of Automated Decision-Making Technology: Compliance Required by January 1, 2027,” 2025. Independent law firm analysis. High credibility. https://www.akingump.com/en/insights/alerts/new-california-regulations-regarding-employer-use-of-automated-decision-making-technology-compliance-required-by-january-1-2027
- A&O Shearman, “New York Enacts Responsible AI Safety and Education Act,” 2025. Independent law firm analysis. High credibility. https://www.aoshearman.com/en/insights/ao-shearman-on-tech/new-york-enacts-responsible-ai-safety-and-education-act
- EU Artificial Intelligence Act, Implementation Timeline. Primary source — legislative text. Highest credibility. https://artificialintelligenceact.eu/implementation-timeline/