The General Counsel’s AI Checklist: 12 Legal Risk Categories for a 200-500 Person Company

Executive Summary

  • AI adoption in corporate legal departments doubled in one year — 87% of GCs now report AI use within their teams, up from 44% (FTI Consulting General Counsel Report, n=224, Summer 2025). Yet only 36% of boards have a formal AI governance framework, and just 6% have AI-related management reporting metrics (NACD, 2025). The GC sits at the center of this gap.
  • Liability falls on the deployer, not the vendor. Every major AI tool — Copilot, Cursor, Claude, Amazon Q — disclaims warranty and accuracy. Courts consistently reject “the AI did it” as a defense. Eighty-eight percent of AI vendors cap their liability at monthly subscription fees. The organization that uses the output owns the consequences.
  • State AI regulation is no longer theoretical. Colorado’s AI Act (effective June 2026), Illinois AIPA (January 2026), California’s ADMT regulations (January 2026), and Texas RAIGA (January 2026) create overlapping compliance obligations. Over 1,000 state-level AI bills were introduced in 2025 alone. A 200-500 person company operating across three states faces real enforcement risk today.
  • Insurance coverage is fracturing. Verisk released general liability exclusion endorsements for generative AI effective January 2026. WR Berkley’s absolute AI exclusion eliminates D&O, E&O, and Fiduciary Liability coverage for any claim arising from AI use. Documented AI governance is becoming a prerequisite for coverage, not a differentiator.
  • The GC who builds a 12-category AI legal checklist now — before the first incident, audit, or lawsuit — positions the company to adopt AI faster, not slower. Organizations with governance programs show a 46% agentic AI early-adoption rate, versus 12% for those still developing policies (CSA/Google Cloud, 2025).

The GC’s Expanding AI Mandate

General counsel at mid-market companies face a unique structural problem. They lack the dedicated AI counsel, regulatory affairs teams, and compliance infrastructure that Fortune 500 legal departments deploy. But they face the same legal exposure — in some cases greater, because a single AI incident at a 300-person company carries proportionally larger financial and reputational impact.

FTI Consulting’s 2026 General Counsel Report (n=224, organizations over $100M revenue) found that half of legal operations teams identified “evaluating and implementing generative AI use cases” as their greatest current challenge. Fifty-three percent now have formalized technology roadmaps — more than double the prior year — but the legal risk framework rarely keeps pace.

The practical question is not whether to govern AI. It is how to build a legal risk program that covers the twelve categories where exposure is real and growing — without creating a compliance bureaucracy that stalls the adoption your CEO is demanding.

The 12-Category Checklist

1. AI Vendor Contract Review

The standard SaaS master service agreement does not cover AI-specific risks. Three categories of contract risk are unique to AI tools and require specific attention.

Hallucination risk. AI tools generate confidently wrong output. No vendor warrants accuracy. WilmerHale’s 2026 analysis identifies hallucination, drift (performance degradation post-audit), and silent adoption (mid-contract AI capability additions) as the three contract risks that traditional templates miss entirely.

Data training rights. Most AI vendors default to using customer data for model improvement unless the contract says otherwise. OpenAI, Anthropic, Google, and Microsoft now all represent that business/API data is not used for training by default — but the consumer-tier terms differ, and employees on personal accounts bypass enterprise protections entirely. The contract must define “data” to include raw data, metadata, embeddings, synthetic data, and derivative datasets, and impose explicit restrictions on training use.

Model change notification. Unlike traditional SaaS, AI products change fundamentally through model updates. The vendor can ship a materially different product without triggering a contract amendment. Require written notification of material model changes, with the right to terminate if the update degrades performance or compliance posture.

Redline priorities for a 200-500 person company:

  • Data usage clause: prohibit training on customer data; require deletion of all data (including caches and embeddings) on termination
  • IP indemnification: confirm scope and conditions (Microsoft and Anthropic Enterprise offer IP indemnity; most others do not)
  • Liability caps: negotiate above the industry-standard monthly-fee cap for data breach and IP infringement scenarios
  • Audit rights: secure contractual access to audit the vendor’s data handling, security practices, and model governance
  • Regulatory compliance warranty: require vendor representation that the tool complies with applicable laws — only 17% of AI vendors currently offer this

2. IP Ownership of AI Output

The U.S. Copyright Office confirmed in January 2025 that purely AI-generated material is not copyrightable. The Supreme Court declined to hear an appeal on March 2, 2026. The legal principle is settled: prompt engineering alone does not establish authorship.

Three tiers of IP protection emerge:

Copyrightable: Output where a human substantially modifies, selects, arranges, or transforms AI-generated material. The human’s creative decisions receive protection.

Not copyrightable: Output generated entirely from prompts, even detailed ones. The Copyright Office found that “the gaps between prompts and resulting outputs demonstrate that the user lacks sufficient control over the conversion of their ideas into fixed expression.”

Gray zone: Output generated by AI and partially edited by humans. The Copyright Office acknowledges this requires “case-by-case analysis” with no bright-line rule. This is where most enterprise AI-assisted work falls.

What to do: Establish an internal classification system. Any AI-generated material used in client deliverables, products, or competitive assets requires documented human authorship contribution. The GC should define the minimum human modification threshold that the company treats as sufficient for IP protection — knowing that no court has yet drawn that line.
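
A lightweight way to operationalize this classification is a provenance record attached to each AI-assisted asset. The sketch below is illustrative only: the tier names track the three tiers above, but the numeric modification threshold is a company-defined assumption, since no court has drawn that line.

```python
from dataclasses import dataclass, field
from enum import Enum

class IPTier(Enum):
    """Tiers tracking the Copyright Office's three-part framework."""
    PROTECTED = "human-authored or substantially modified"  # copyrightable
    GRAY_ZONE = "AI-generated, partially human-edited"      # case-by-case
    UNPROTECTED = "AI output from prompts alone"            # not copyrightable

@dataclass
class WorkProvenance:
    """Provenance record attached to an AI-assisted deliverable."""
    asset_id: str
    ai_tool: str  # e.g., an approved enterprise tool
    human_contributions: list[str] = field(default_factory=list)

    def classify(self, min_contributions: int = 3) -> IPTier:
        # The threshold is a company policy choice, not a legal rule.
        if not self.human_contributions:
            return IPTier.UNPROTECTED
        if len(self.human_contributions) >= min_contributions:
            return IPTier.PROTECTED
        return IPTier.GRAY_ZONE
```

The tier label matters less than the documented human_contributions list, which is the record a later authorship claim would rest on.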

Trade secret protection, not copyright, is becoming the primary IP strategy for AI-assisted work. But AI tools themselves create trade secret risks by potentially exposing proprietary information to model training pipelines. The acceptable use policy (see Category 4) must address this directly.

3. Employment Law Exposure

AI in hiring, performance management, promotion, and termination decisions creates a regulatory minefield with four active enforcement regimes and a landmark class action.

NYC Local Law 144 (effective since July 2023): Requires annual independent bias audits on automated employment decision tools (AEDTs), public disclosure of audit results, and advance candidate notification. Penalties: $500-$1,500 per violation.

Illinois AIPA (effective January 1, 2026): Makes it unlawful for employers to use AI that discriminates — intentionally or through disparate impact — in recruitment, hiring, promotion, discipline, or termination. Prohibits using ZIP codes as a proxy for protected classes. Requires employee notification when AI is used in employment decisions.

Colorado AI Act (effective June 30, 2026): Requires “reasonable care” to prevent algorithmic discrimination in high-risk AI systems. Employment decisions are explicitly designated high-risk. Employers with 50+ employees must establish risk management policies, conduct annual impact assessments, and complete new assessments within 90 days of AI system modifications. Penalties: up to $20,000 per violation.

California ADMT regulations (effective January 1, 2026): Restricts discriminatory use of automated decision-making technology in employment. Requires advance notice to candidates and employees when ADMT is used, meaningful human oversight, proactive bias testing, and four-year record retention.

Mobley v. Workday (N.D. Cal., conditionally certified May 2025): The first federal class action holding an AI vendor potentially liable as an “agent” of employers for discriminatory hiring decisions. The conditional class covers all applicants over 40 rejected by Workday’s AI screening. Workday represented that 1.1 billion applications were rejected using its tools during the relevant period. The opt-in deadline is March 7, 2026.

What to do: Inventory every AI tool that touches employment decisions — including resume screening, performance scoring, scheduling optimization, and workforce planning. For each tool, determine whether it triggers obligations under the states where you operate. Commission bias audits for any tool used in hiring. The cost of a proactive audit is trivial compared to the Mobley-scale class action exposure.
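
The inventory can be as simple as one structured record per tool. The sketch below uses hypothetical tool names and, for simplicity, treats NYC's Local Law 144 as a New York trigger alongside the Illinois, Colorado, and California regimes; the point is to surface any tool operating in a regulated state without a current bias audit.

```python
from dataclasses import dataclass

# States with active employment-AI regimes per the four statutes above.
EMPLOYMENT_AI_STATES = {"NY", "IL", "CO", "CA"}

@dataclass
class EmploymentAITool:
    name: str                    # hypothetical tool names below
    decision_types: set[str]     # "screening", "performance", "scheduling", ...
    operating_states: set[str]
    last_bias_audit: str | None  # ISO date of most recent audit, or None

    def regulatory_triggers(self) -> set[str]:
        """States where this tool's use triggers employment-AI obligations."""
        return self.operating_states & EMPLOYMENT_AI_STATES

inventory = [
    EmploymentAITool("ResumeRanker", {"screening"}, {"NY", "TX"}, None),
    EmploymentAITool("PerfScore", {"performance"}, {"CO"}, "2025-11-01"),
]

for tool in inventory:
    if tool.regulatory_triggers() and tool.last_bias_audit is None:
        print(f"AUDIT NEEDED: {tool.name} in {sorted(tool.regulatory_triggers())}")
```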

4. AI Acceptable Use Policy

Every employee is already using AI. Gartner (2025) found 68% of employees use AI tools without IT approval. Forty-seven percent access tools through personal accounts. The GC who has not published an acceptable use policy has employees writing their own rules.

Ten clauses every 200-500 person company needs:

  1. Approved tool list. Specify which AI tools are sanctioned, at which tier (enterprise vs. consumer), and for which use cases. Update quarterly.
  2. Data classification rules. Define which data categories may never enter AI tools (client PII, trade secrets, attorney-client privileged material, HIPAA/financial data) and which are permitted. (A screening sketch follows this list.)
  3. Personal vs. company accounts. Prohibit the use of personal AI accounts for company work. Consumer-tier terms of service typically grant broader data usage rights than enterprise agreements.
  4. Client-facing output review. Require human review and documented sign-off before any AI-generated content reaches clients, courts, regulators, or the public.
  5. Prohibited uses. Enumerate specific prohibited applications: employment decisions without human review, legal advice generation without attorney supervision, financial projections without analyst validation, client communication drafts sent without review.
  6. IP ownership acknowledgment. Employees acknowledge that AI-generated output may not be copyrightable and that trade secret protections require specific handling procedures.
  7. Incident reporting. Define what constitutes an AI incident (data leakage, hallucination in a deliverable, bias complaint, regulatory inquiry) and the escalation path.
  8. Vendor terms awareness. Require employees to confirm they have read the relevant terms of service before using any AI tool. Flag key provisions — particularly data usage and output ownership clauses.
  9. Record retention. Specify retention requirements for AI prompts, outputs, and modification logs in regulated contexts (four-year retention under California’s ADMT regulations; annual impact assessments under the Colorado AI Act).
  10. Consequences. State that violations are subject to the same disciplinary process as other policy violations. Ambiguity breeds non-compliance.
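
Clause 2 lends itself to light automation. The sketch below shows a minimal pre-submission screen using illustrative regex patterns; a production control would rely on a dedicated DLP product rather than hand-rolled patterns, and the categories shown are assumptions drawn from the clause.

```python
import re

# Illustrative patterns only: a production control would use a DLP product,
# not hand-rolled regexes. Categories mirror the clause's prohibited data.
PROHIBITED_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "privilege_marker": re.compile(r"attorney[- ]client privileged", re.IGNORECASE),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the prohibited-data categories detected in a prompt."""
    return [name for name, pat in PROHIBITED_PATTERNS.items() if pat.search(prompt)]

hits = screen_prompt("Summarize this attorney-client privileged memo; SSN 123-45-6789.")
if hits:
    print(f"Blocked. Prompt contains: {hits}")  # ['SSN', 'privilege_marker']
```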

5. Regulatory Exposure by State

A 200-500 person company with employees, customers, or operations in multiple states faces overlapping AI obligations with no federal floor. The GC must map the company’s geographic footprint against active and pending regulations.

Currently enforceable:

  • NYC Local Law 144 (AEDTs in hiring — since July 2023)
  • Illinois AIPA (AI in employment — January 2026)
  • Texas RAIGA (intent-based AI discrimination — January 2026)
  • California ADMT (automated decision-making — January 2026)
  • California AI Transparency Act (SB 942/AB 853 — content labeling)

Effective mid-2026:

  • Colorado AI Act (high-risk AI systems — June 30, 2026)

What to do: Build a state-by-state compliance matrix. For each state where the company operates, map AI tool usage against regulatory triggers. Most mid-market companies discover they need two compliance tiers: one for states with comprehensive AI laws (Colorado, California, Illinois) and a baseline for the rest. Review quarterly — more than 1,000 state AI bills were introduced in 2025 alone.
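
The matrix itself can live in a spreadsheet, but encoding it makes the quarterly review mechanical. A minimal sketch, with obligation summaries abbreviated from the lists above and Florida included as a hypothetical no-statute state:

```python
# Minimal compliance-matrix sketch. Statute entries mirror the lists above;
# obligation summaries are abbreviations for illustration, not legal advice.
AI_STATUTES = {
    "NY": [("NYC Local Law 144", "2023-07", "bias audits for hiring AEDTs")],
    "IL": [("AIPA", "2026-01", "no discriminatory AI in employment; notice")],
    "TX": [("RAIGA", "2026-01", "intent-based AI discrimination ban")],
    "CA": [("ADMT regulations", "2026-01", "notice, oversight, 4-yr records"),
           ("AI Transparency Act", "2026-01", "content labeling")],
    "CO": [("Colorado AI Act", "2026-06", "risk program, impact assessments")],
}

def obligations(footprint: set[str], as_of: str) -> dict[str, list[str]]:
    """Map a state footprint to statutes in force as of a YYYY-MM date."""
    return {
        state: [name for name, eff, _ in AI_STATUTES.get(state, []) if eff <= as_of]
        for state in sorted(footprint)
    }

print(obligations({"CA", "CO", "FL"}, as_of="2026-03"))
# {'CA': ['ADMT regulations', 'AI Transparency Act'], 'CO': [], 'FL': []}
```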

6. Liability Allocation and Insurance

The liability architecture for AI is shifting beneath companies that have not reviewed their coverage.

Vendor liability reality. Eighty-eight percent of AI vendors cap liability at monthly subscription fees. Accuracy warranties are nonexistent. The deploying organization bears all downstream liability for AI-generated errors, omissions, and discriminatory outcomes. Courts have not accepted vendor reliance as a defense in any reported AI-related decision.

Insurance fracturing. Verisk released new general liability exclusion endorsements for generative AI effective January 1, 2026. WR Berkley’s absolute AI exclusion — drafted for D&O, E&O, and Fiduciary Liability policies — eliminates coverage for any claim “based upon, arising out of, or attributable to” AI use, deployment, or development. This includes AI-generated content, failure to detect AI-created materials, inadequate AI governance, chatbot communications, and regulatory actions related to AI oversight.

Cyber insurance is currently the exception — most cyber carriers have signaled continued coverage for AI-related data breaches. But documented governance programs are becoming underwriting requirements.

What to do: Review all liability, D&O, E&O, and cyber policies for AI-specific language — exclusions, endorsements, and sublimits. Contact your broker to confirm coverage for AI-related claims. Organizations with documented AI governance programs can negotiate narrower exclusions. Organizations without governance face broader exclusions, higher premiums, or coverage gaps.

7. Client and Enterprise Buyer Due Diligence

If your company sells to Fortune 500 buyers or operates in regulated industries, enterprise procurement teams are already asking about your AI governance.

Microsoft’s Supplier Security & Privacy Assurance (SSPA) program v10 now includes AI requirements. Due diligence questionnaires from large buyers increasingly include: “Describe your AI governance program.” “What AI tools process customer data?” “How do you audit AI outputs for accuracy and bias?”

Having nothing to describe is a competitive disadvantage in B2B sales. Having a documented governance program — even a minimum viable one — is a sales enabler.

What to do: Prepare a one-page AI governance summary suitable for inclusion in RFP responses and due diligence questionnaires. Document your approved tool list, data classification policies, bias audit schedule (if applicable), and incident response procedures. This document should exist before the first enterprise buyer asks for it.

8. ABA Ethics and Professional Responsibility

ABA Formal Opinion 512 (July 2024) established the ethical framework for lawyers using generative AI. The obligations flow through to in-house counsel and the GC personally.

Duty of competence (Rule 1.1): Lawyers must understand the benefits and risks of AI tools used to deliver legal services. Technological competence is not optional.

Supervision (Rules 5.1, 5.3): Managerial lawyers must establish policies for permissible AI use. Supervisory lawyers must ensure lawyers and non-lawyers are trained in ethical AI use and comply with professional obligations when using AI tools.

Confidentiality (Rule 1.6): AI tools that process client information create confidentiality obligations. Consumer-tier tools with broad data usage rights may violate client confidentiality regardless of convenience.

Candor (Rule 3.3): Courts are sanctioning lawyers who submit AI-generated filings with fabricated citations. The GC must ensure any AI-assisted legal work is verified for accuracy before submission.

What to do: Issue internal guidance applying ABA Opinion 512 to in-house practice. The duty of competence means the GC cannot delegate AI tool selection to IT without legal department involvement. Establish a review protocol for AI-assisted legal work products.

9. Data Privacy and Cross-Border Considerations

AI tools process data in ways that traditional privacy programs do not anticipate. Prompts become training data under consumer terms. Embeddings create derivative data products. Outputs may contain fragments of other users’ inputs.

GDPR exposure: If the company processes EU personal data through AI tools, Article 22 (automated individual decision-making) and the data transfer framework apply. The EU AI Act’s Article 4 AI literacy mandate has applied since February 2025.

State privacy laws: California’s CCPA/CPRA, Virginia’s CDPA, Colorado’s CPA, Connecticut’s CTDPA, and similar laws create AI-specific obligations around automated decision-making, profiling, and opt-out rights.

What to do: Map AI data flows. For each tool, document what data enters, where it is processed, what the vendor retains, and under what terms. Ensure enterprise agreements (not consumer terms) govern all AI tools processing personal data.
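
One record per tool is enough to make the map auditable. The sketch below uses hypothetical tool names; the flagged condition is the one this section warns about, personal data flowing through consumer-tier terms.

```python
from dataclasses import dataclass

@dataclass
class AIDataFlow:
    """One record per AI tool, following this category's mapping exercise."""
    tool: str               # hypothetical names below
    data_in: set[str]       # e.g., {"customer PII", "source code"}
    processing_region: str  # where the vendor processes the data
    vendor_retains: str     # retention term, taken from the contract
    terms_tier: str         # "enterprise" or "consumer"

flows = [
    AIDataFlow("ChatAssist", {"customer PII"}, "US", "30 days", "consumer"),
    AIDataFlow("CodeHelper", {"source code"}, "EU", "none", "enterprise"),
]

# Flag the combination this section warns about: personal data under consumer terms.
for f in flows:
    if "customer PII" in f.data_in and f.terms_tier != "enterprise":
        print(f"REMEDIATE: {f.tool} processes personal data under consumer-tier terms")
```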

10. Board Reporting and Fiduciary Duty

Directors face increasing scrutiny under the Caremark doctrine for AI oversight failures. The NACD’s 2025 survey found only 36% of boards have implemented a formal AI governance framework, and just 6% have established AI-related management reporting metrics.

The SEC’s 2026 examination priorities identify AI as a top focus. Examiners are scrutinizing whether AI-related disclosures match actual practices — meaning companies that overstate or understate AI usage face enforcement risk.

What to do: Ensure the board receives quarterly AI updates covering: tools in use, governance program status, regulatory developments in operating states, incident reports, and insurance coverage status. The GC should own or co-own this reporting obligation. (See the companion board briefing research for template structure.)

11. AI-Specific Litigation Preparedness

The AI litigation landscape is accelerating. Anthropic’s $1.5 billion copyright settlement (August 2025) — the largest in U.S. history — signals that training-data IP claims carry real financial weight. Doe v. GitHub remains in discovery. Mobley v. Workday achieved conditional class certification. New Hampshire’s H.B. 143 (effective 2026) creates a private right of action for individuals harmed by AI chatbot interactions with minors.

What to do: Conduct a litigation exposure assessment specific to AI use cases. Identify which AI tools create the highest liability exposure (client-facing chatbots, employment screening, financial analysis) and ensure those have the strongest governance controls. Establish a litigation hold protocol for AI-related data — including prompts, outputs, and model version logs.
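
The litigation hold protocol implies logging that most AI tools do not provide natively. Below is a minimal sketch of an append-only interaction log; the field names are assumptions, and real preservation tooling would add access controls and write-once storage.

```python
import hashlib
import json
import time

def log_ai_interaction(path: str, tool: str, model_version: str,
                       prompt: str, output: str) -> None:
    """Append one record per AI interaction to a JSON Lines audit log."""
    record = {
        "ts": time.time(),
        "tool": tool,
        "model_version": model_version,  # needed to reconstruct behavior later
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "prompt": prompt,
        "output": output,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_ai_interaction("ai_audit.jsonl", "ChatAssist", "model-2026-01",
                   "Draft a demand letter for ...", "Dear Counsel, ...")
```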

12. Incident Response for AI-Specific Events

Traditional incident response plans do not cover AI-specific failure modes: hallucination in a compliance document, prompt injection exposing confidential data, biased output in a client deliverable, or an AI chatbot providing harmful guidance.

Gartner predicts 50% of incident response efforts will involve AI applications by 2028, up from near-zero today. Most security teams lack processes for AI output recall, model rollback, or hallucination-driven compliance incidents.

What to do: Add an AI-specific annex to the existing incident response plan. Define five AI incident types: data leakage through prompts, hallucination in a deliverable, bias-related complaint, regulatory inquiry about AI use, and vendor breach affecting AI tool data. Assign ownership, define containment procedures, and run a tabletop exercise within 90 days.
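
Encoding the taxonomy keeps the annex from drifting into prose. The sketch below maps the five incident types to an owner and a first containment step; the role assignments and steps are illustrative assumptions for a company this size, not prescriptions.

```python
from enum import Enum

class AIIncident(Enum):
    """The five AI incident types defined above."""
    DATA_LEAKAGE = "data leakage through prompts"
    HALLUCINATION = "hallucination in a deliverable"
    BIAS_COMPLAINT = "bias-related complaint"
    REGULATORY_INQUIRY = "regulatory inquiry about AI use"
    VENDOR_BREACH = "vendor breach affecting AI tool data"

# Owner and first containment step per type. Role names and steps are
# illustrative assumptions for a 200-500 person company, not prescriptions.
RESPONSE_PLAYBOOK = {
    AIIncident.DATA_LEAKAGE:       ("CISO", "revoke tool access; preserve prompt logs"),
    AIIncident.HALLUCINATION:      ("GC", "recall the deliverable; notify the recipient"),
    AIIncident.BIAS_COMPLAINT:     ("GC + HR", "suspend the tool for affected decisions"),
    AIIncident.REGULATORY_INQUIRY: ("GC", "issue litigation hold; engage outside counsel"),
    AIIncident.VENDOR_BREACH:      ("CISO", "invoke the contract's breach-notice clause"),
}

owner, first_step = RESPONSE_PLAYBOOK[AIIncident.HALLUCINATION]
print(f"Escalate to {owner}: {first_step}")
```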

Key Data Points

Metric | Value | Source
GC teams using AI | 87% (up from 44%) | FTI Consulting, n=224, Summer 2025
Boards with formal AI governance | 36% | NACD, 2025
Boards with AI reporting metrics | 6% | NACD, 2025
AI vendors that cap liability at subscription fees | 88% | Jones Walker analysis, 2025
AI vendors offering a regulatory compliance warranty | 17% | Jones Walker analysis, 2025
Employees using AI without IT approval | 68% | Gartner, 2025
Employees accessing AI via personal accounts | 47% | Gartner, 2025
State AI bills introduced in 2025 | 1,000+ | Baker Botts, January 2026
Colorado AI Act penalty per violation | $20,000 | SB 24-205
Applications rejected by Workday tools (Mobley) | 1.1 billion | Court filings, N.D. Cal.
Shadow AI breach cost premium | $670,000 | IBM Cost of a Data Breach, 2024
Agentic AI adoption with vs. without governance | 46% vs. 12% | CSA/Google Cloud, 2025

What This Means for Your Organization

The GC at a 200-500 person company is now the de facto AI risk officer — whether or not the title exists. The twelve categories above are not theoretical. Colorado’s penalties are real. Verisk’s exclusions are filed. Mobley v. Workday’s conditional class includes potentially hundreds of millions of applicants. The question is not whether these risks apply to your company. It is whether you discover them through a proactive checklist or through a demand letter.

The counterintuitive finding across every data set: governance accelerates adoption. Organizations with AI governance programs adopt faster, not slower. The CSA/Google Cloud data (46% agentic AI adoption with governance vs. 12% without) is the number to put in front of the CEO who worries that legal is slowing things down. The GC who builds this checklist is not the brake. The GC who ignores it is the liability.

The practical path for a mid-market legal department: start with the acceptable use policy (Category 4), the vendor contract review (Category 1), and the state regulatory map (Category 5). These three produce the highest risk reduction per hour invested. Insurance review (Category 6) is the fourth priority — before renewal season, not after. The remaining eight categories build on these four foundations.

Total cost to stand up the initial program: negligible for categories that require policy drafting (2-4 weeks of GC time plus outside counsel review), $15,000-$40,000 for a bias audit if AI tools touch employment decisions, and the cost of an insurance broker conversation that should be happening anyway.

The alternative — waiting for the first subpoena, the first insurance denial, or the first enterprise buyer who rejects your RFP — costs more.

Sources

  1. FTI Consulting, “The General Counsel Report 2026: AI Adoption in Corporate Legal Departments Doubles” (n=224 GCs, organizations >$100M revenue, Summer 2025). Independent industry survey. High credibility. https://www.fticonsulting.com/about/newsroom/press-releases/ai-adoption-in-corporate-legal-departments-doubles-according-to-the-general-counsel-report

  2. NACD, “2025 Board Governance Survey: AI Oversight” (2025). Industry association survey. High credibility. — Referenced in WilmerHale analysis, https://www.wilmerhale.com/en/insights/blogs/keeping-current-disclosure-and-governance-developments/20260217-managing-legal-risk-in-the-age-of-artificial-intelligence-what-key-stakeholders-need-to-know-today

  3. Jones Walker LLP, “AI Vendor Liability Squeeze: Courts Expand Accountability While Contracts Shift Risk” (2025-2026). Law firm analysis. High credibility for contract term analysis. https://www.joneswalker.com/en/insights/blogs/ai-law-blog/ai-vendor-liability-squeeze-courts-expand-accountability-while-contracts-shift-r.html

  4. Gartner, “Employee AI Tool Usage” (2025). Independent analyst firm. High credibility. — Referenced in multiple governance analyses.

  5. CSA/Google Cloud, “AI Governance and Adoption Survey” (2025). Joint industry/vendor survey. Moderate-high credibility; note Google Cloud co-sponsorship. — Referenced in the companion governance research.

  6. ABA Standing Committee on Ethics and Professional Responsibility, “Formal Opinion 512: Generative Artificial Intelligence Tools” (July 29, 2024). Primary authority. Highest credibility. https://www.americanbar.org/news/abanews/aba-news-archives/2024/07/aba-issues-first-ethics-guidance-ai-tools/

  7. U.S. Copyright Office, “Part 2 Report: Copyright and Artificial Intelligence” (January 29, 2025). Federal agency primary source. Highest credibility. — Referenced in IP analysis.

  8. Mobley v. Workday, No. 3:23-cv-00770 (N.D. Cal., conditional certification May 2025). Federal court proceedings. Primary authority. https://www.fisherphillips.com/en/insights/insights/discrimination-lawsuit-over-workdays-ai-hiring-tools-can-proceed-as-class-action-6-things

  9. Verisk/ISO, “General Liability Exclusion Endorsements for Generative AI” (effective January 1, 2026). Insurance industry standard-setter. Highest credibility for coverage analysis. https://www.independentagent.com/vu_resource/verisk-to-roll-out-new-general-liability-exclusions-for-generative-ai-exposures/

  10. Baker Botts, “U.S. Artificial Intelligence Law Update: Navigating the Evolving State and Federal Regulatory Landscape” (January 2026). Law firm regulatory analysis. High credibility. https://www.bakerbotts.com/thought-leadership/publications/2026/january/us-ai-law-update

  11. WilmerHale, “Managing Legal Risk in the Age of Artificial Intelligence: What Key Stakeholders Need to Know Today” (February 2026). Law firm analysis. High credibility. https://www.wilmerhale.com/en/insights/blogs/keeping-current-disclosure-and-governance-developments/20260217-managing-legal-risk-in-the-age-of-artificial-intelligence-what-key-stakeholders-need-to-know-today

  12. IBM, “Cost of a Data Breach Report 2024” (n=600+ organizations, 2024). Independent annual study. High credibility. — Referenced in governance analyses.

  13. Holon Law Partners, “The Rise of AI Vendor Agreements: 7 Clauses Every Business Needs” (2025). Law firm practical guide. High credibility for contract drafting. https://holonlaw.com/ai/the-rise-of-ai-vendor-agreements/

  14. Fisher Phillips, “Comprehensive Review of AI Workplace Law and Litigation” (2025). Employment law firm analysis. High credibility. https://www.fisherphillips.com/en/news-insights/comprehensive-review-of-ai-workplace-law-and-litigation-as-we-enter-2025.html

  15. Corporate Compliance Insights, “AI Risk in 2026: 3 Critical Changes for the General Counsel” (2026). Industry publication. Moderate-high credibility. https://www.corporatecomplianceinsights.com/ai-risk-2026-critical-changes-general-counsel/


Created by Brandon Sneider | brandon@brandonsneider.com | March 2026