AI and Your Existing Contracts: The Pre-Deployment Audit Every GC Must Run Before Day One
Brandon Sneider | March 2026
Executive Summary
- Every company deploying AI tools already has contracts that may prohibit it. NDAs restrict what data enters AI systems. Client agreements promise IP ownership the company may not be able to deliver. Employment agreements do not address AI-generated work product. Vendor terms allow data reuse the company never authorized. Debevoise & Plimpton identifies contractual use limitations on data as AI’s single biggest enterprise challenge in 2026, noting that firms may face “hundreds or even thousands” of applicable contracts requiring review (Debevoise Data Blog, November 2025).
- The breach happens on Day One, not Day Thirty. The moment an employee pastes client information into an AI tool, any NDA restricting disclosure to third parties may be violated. If the AI vendor’s terms grant a license to use inputs for model improvement — and the client agreement promises confidential treatment — the company is in breach of two contracts simultaneously. This exposure exists before the AI produces a single output.
- The ABA says boilerplate consent is not enough. Formal Opinion 512 (July 2024) holds that engagement letters with generic AI consent language fail the informed consent standard. Professional services firms must obtain matter-specific, risk-specific consent before using client data in generative AI tools. The same logic extends to any company processing third-party confidential information through AI.
- A structured five-category contract audit, scoped to four weeks, costs $15K-$35K and prevents exposure that could invalidate entire client relationships. The categories — client/NDA agreements, vendor/SaaS terms, employment agreements, data processing agreements, and professional engagement letters — cover the full contract surface area that AI deployment touches.
The Contract Exposure Most Companies Miss
Most mid-market AI governance programs start in the right place: acceptable use policies, vendor evaluations, security controls. The GC checklist, the vendor contract negotiation guide, the CISO briefing — all of these address the contracts a company signs after deciding to deploy AI.
None of them address the contracts already signed.
This is the gap. A 200-500 person company typically operates under hundreds of active agreements — client MSAs, NDAs, vendor SaaS terms, employment agreements, data processing addenda — many executed years before anyone contemplated AI use. These contracts contain clauses that were drafted for a human-only operating model: confidentiality provisions that prohibit disclosure to “third parties” (which may include AI vendors), IP assignment language that assumes human-authored work product, data use restrictions that predate generative AI entirely.
Debevoise & Plimpton — one of the few firms to publish specific research on this problem — identifies contractual use limitations as “AI’s biggest enterprise challenge in 2026.” Their analysis notes that many restrictive clauses “were written in 2023 and 2024 when firms did not have access to enterprise AI models and were concerned that AI models would train on client data.” Those restrictions often prohibit exactly what organizations now need to do (Debevoise Data Blog, November 2025).
The FTC has staked a clear position on the reverse side of this problem: companies that retroactively change their own privacy policies or terms of service to permit AI data usage face enforcement action. The FTC’s February 2024 guidance states that “surreptitious, retroactive amendment” to privacy terms is potentially unfair or deceptive, and the Commission has brought enforcement actions on this basis — including a 2023 action against a genetics testing company that altered its privacy policy without proper consent (FTC Tech Blog, February 2024).
The contract audit is the step between “we decided to deploy AI” and “we deployed it responsibly.”
The Five Contract Categories That Need Review
Category 1: Client and NDA Agreements (Highest Priority)
This is where Day One breach risk lives. Three clause types create immediate exposure:
Confidentiality and non-disclosure provisions. Most NDAs define “confidential information” broadly and restrict its disclosure to anyone other than authorized personnel. When an employee enters client data into a cloud-based AI tool, that data is transmitted to the AI vendor’s infrastructure. If the NDA does not specifically authorize this — and pre-2024 NDAs almost never do — the disclosure may constitute a breach.
Roth Jackson’s December 2025 analysis finds that “AI tools often process and store data in ways that are difficult to fully erase,” creating “persistent disclosure vulnerabilities unlike human information handling.” Traditional NDAs addressed human conduct. AI data flows are categorically different: the data may be retained in logs, used for model evaluation, or processed by sub-processors the disclosing party never contemplated.
IP ownership and work-for-hire provisions. Many client agreements guarantee that all deliverables constitute the company’s original work product with full IP ownership transferred to the client. If deliverables are generated with AI assistance, two problems emerge. First, the U.S. Copyright Office has confirmed that purely AI-generated content is not copyrightable — meaning the IP “ownership” the company is transferring may be legally meaningless. Second, the AI vendor’s terms of service may grant the vendor a license to use inputs and outputs, creating a conflict with the client’s exclusivity expectations.
As one legal analysis puts it: “Your client agreement guarantees full IP ownership transfer, but your AI vendor contract says you only get a license to use outputs. You’re in breach of your client agreement the moment you use the AI tool” (Galkin Law, 2025).
Data use restrictions. Contracts with financial services, healthcare, and government clients often impose specific restrictions on how data may be processed, stored, and transmitted. These restrictions may prohibit cloud processing, require data residency within specific jurisdictions, or mandate that processing occurs only on the company’s own infrastructure — all conditions that most AI tools violate by default.
| Clause Type | Pre-2024 Default | AI Deployment Reality | Gap |
|---|---|---|---|
| Confidentiality | Disclosure to authorized personnel only | Data transmitted to AI vendor infrastructure | Vendor not authorized |
| IP ownership | All work product is company’s original creation | AI-assisted output may not be copyrightable | Ownership promise may be undeliverable |
| Data processing | On-premises or approved cloud only | AI vendor cloud, sub-processors, potential data retention | Processing location unauthorized |
| Data training | Not addressed | Vendor may use inputs for model improvement | Client data may train competitor’s models |
Remediation priority: Review all active client agreements and NDAs that govern relationships where AI tools will touch client data or produce client deliverables. Flag any agreement with (a) broad confidentiality restrictions without AI carve-outs, (b) IP assignment clauses that conflict with AI vendor licensing terms, or (c) specific data handling restrictions that AI cloud processing would violate.
Category 2: Vendor and SaaS Terms (High Priority)
The contracts a company already has with software vendors are creating AI exposure through a mechanism most legal teams have not reviewed: silent AI integration.
Most major SaaS platforms — Microsoft 365, Salesforce, Google Workspace, ServiceNow, SAP — have added AI capabilities to existing products under existing terms. Microsoft 365 Copilot, Salesforce Agentforce, Google Gemini for Workspace, and similar features are rolling into enterprise licenses without new contract negotiation. The existing MSA and terms of service govern how these AI features use company data.
Three issues require immediate review:
Data training rights. Most SaaS vendors default to retaining the right to use customer data for model improvement. While major vendors now represent that enterprise-tier data is excluded from training, the contractual language varies, and the terms that govern consumer-tier and personal accounts differ materially. Gouchev Law (2025) identifies data training and usage as the single highest-risk clause in AI vendor contracts.
Sub-processor chains. AI features often rely on third-party model providers. Microsoft uses OpenAI. Salesforce uses multiple model providers. ServiceNow uses both internal and external models. The company’s existing data processing agreement may not authorize these additional sub-processors.
Automatic feature deployment. AI capabilities added through software updates may process company data in ways the original contract did not contemplate. A company that signed a Salesforce MSA in 2022 did not agree to Agentforce processing customer records through a language model in 2026.
Remediation priority: Audit existing SaaS terms for data training rights, sub-processor disclosures, and AI feature opt-out mechanisms. Where AI features have been auto-enabled, verify that the existing DPA authorizes the new processing activities.
Category 3: Employment Agreements (Medium-High Priority)
Employment agreements drafted before 2024 contain three gaps that AI deployment exposes.
IP assignment clauses. Standard employment agreements assign “all inventions, improvements, and work product” to the employer. But if the employee uses AI tools to generate that work product, the assignment may transfer something the employer cannot own (uncopyrightable AI output) or something the AI vendor claims rights to (under the vendor’s terms of service). State laws add complexity: California Labor Code Section 2870 protects employee inventions developed on personal time without company resources, but does not address AI-assisted inventions developed during work hours using employer-provided AI tools.
Confidentiality obligations. Employment confidentiality provisions typically prohibit disclosure of company trade secrets and proprietary information. They do not address the scenario where an employee enters that information into a third-party AI tool. The distinction matters: a non-disclosure agreement between the employee and the company does not bind the AI vendor.
AI use disclosure requirements. Most employment agreements contain no obligation for employees to disclose their use of AI tools in producing work product. Without this requirement, the company cannot verify whether deliverables comply with client IP warranties or regulatory disclosure obligations.
Remediation priority: Update employment agreements to (a) explicitly address AI-assisted work product in IP assignment clauses, (b) define AI tools as “third parties” for confidentiality purposes, and (c) require disclosure of material AI assistance in work product. For existing employees, these changes require either new agreements or formal policy amendments acknowledged in writing.
Category 4: Data Processing Agreements (Medium Priority)
Companies that process personal data — customer records, employee information, user data — through AI tools face a privacy compliance gap in their existing data processing agreements.
Existing DPAs define permissible processing purposes, authorized sub-processors, data transfer mechanisms, and retention periods. AI deployment changes all four:
- Processing purposes expand from “providing the contracted service” to include inference, summarization, classification, and other AI operations not contemplated in the original DPA.
- Sub-processors multiply as AI features depend on model providers not listed in existing sub-processor disclosures.
- Data transfers change as AI processing may occur in different jurisdictions than the original service.
- Retention periods become ambiguous when data persists in model weights, embeddings, or evaluation datasets beyond the contractual deletion timeline.
Nineteen states will have comprehensive privacy laws in effect by end of 2026. CCPA amendments specifically address automated decision-making technology. Companies with European customers face GDPR implications for any new processing activity, including AI processing of personal data. Existing DPAs that do not authorize AI-specific processing expose the company to regulatory penalties under multiple overlapping frameworks.
Remediation priority: Identify all DPAs governing data that will flow through AI tools. Amend to add AI-specific processing purposes, update sub-processor lists, and verify that data transfer mechanisms cover AI processing locations.
Category 5: Professional Engagement Letters (Priority Varies by Industry)
Companies that provide professional services — legal, accounting, consulting, engineering, marketing — face an additional contract category: the engagement letter or statement of work that governs each client relationship.
ABA Formal Opinion 512 (July 2024) establishes that lawyers must obtain “informed consent” before using client data in generative AI tools, and that “merely adding general, boiler-plate provisions to engagement letters purporting to authorize the lawyer to use generative AI is not sufficient.” The consent must explain how AI tools will be used in the specific matter, what data will enter the tool, and what risks that creates.
The Journal of Accountancy (April 2025) identifies a parallel exposure for CPAs: while no specific AICPA standard mandates AI disclosure, the Confidential Client Information Rule and Section 7216 tax return regulations create disclosure obligations when client data enters third-party AI systems. The article notes that clients discovering undisclosed AI use may interpret it as “intentional deception,” triggering negligence or fraud claims.
The principle extends beyond licensed professionals. Any company that produces deliverables for clients using AI — consulting reports, engineering analyses, marketing materials, financial models — faces the question of whether existing engagement terms authorize AI use and adequately allocate the associated risks.
Remediation priority: Review active engagement letters and SOWs for AI authorization language. For ongoing relationships, issue addenda addressing AI use, data handling, and IP allocation. For new engagements, build AI-specific terms into the engagement letter template.
The Audit Methodology: Four Weeks to Contract Compliance
Week 1: Inventory and Categorization
Identify the contract universe. Pull all active agreements across the five categories. For most 200-500 person companies, this is 50-200 contracts that matter — the rest are low-volume vendor agreements with no AI-relevant data flows.
Prioritize by data exposure. Rank contracts by the sensitivity and volume of data that will flow through AI tools. Client agreements governing regulated data (financial, health, legal) go to the top. Employment agreements are universal. Low-data-volume vendor agreements go to the bottom.
Build the clause matrix. For each priority contract, extract: confidentiality restrictions, IP assignment/ownership terms, data processing limitations, sub-processor consent requirements, and data training/retention restrictions.
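For teams that prefer structured capture over a spreadsheet, the sketch below shows one way to represent the clause matrix as a record per contract. It is illustrative only; the field names and category labels are assumptions to be adapted to the clauses your agreements actually contain.

```python
# Illustrative sketch only: one possible structure for the clause matrix.
# Field names and category labels are assumptions, not a prescribed schema.

from dataclasses import dataclass
from typing import Optional


@dataclass
class ContractClauseRecord:
    contract_id: str
    counterparty: str
    category: str  # client/NDA, vendor/SaaS, employment, DPA, engagement letter
    confidentiality_restrictions: Optional[str] = None   # e.g. "authorized personnel only"
    ip_assignment_terms: Optional[str] = None             # e.g. "full ownership transfer to client"
    data_processing_limitations: Optional[str] = None     # e.g. "on-premises processing only"
    subprocessor_consent_required: bool = False
    training_or_retention_restrictions: Optional[str] = None
    ai_carve_out_present: bool = False


# Example entry for a hypothetical pre-2024 client MSA with no AI language
example = ContractClauseRecord(
    contract_id="MSA-017",
    counterparty="Acme Financial",
    category="client/NDA",
    confidentiality_restrictions="disclosure limited to authorized personnel",
    ip_assignment_terms="full ownership transfer, original work warranty",
    data_processing_limitations="approved cloud vendors only",
    subprocessor_consent_required=True,
)
```

Captured this way, the matrix can be filtered by category or by flagged clause type when the gap analysis begins in the following weeks.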
Weeks 2-3: Gap Analysis and Risk Scoring
Map each contract against AI data flows. For every AI tool the company plans to deploy, trace the data path: what data enters the tool, where it is processed, who has access, what the vendor’s terms say about retention and reuse. Compare this data flow against each contract’s restrictions.
Score gaps by consequence. A confidentiality breach in an NDA governing a $5M client relationship carries a different risk profile than a DPA sub-processor gap for a low-volume data stream. Score by: (a) likelihood of violation, (b) severity of contractual remedy (termination rights, liquidated damages), and (c) relationship value at risk.
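A minimal sketch of one way to combine those three factors into a sortable priority score follows. The 1-5 scales, the value cap, and the weighting are assumptions for illustration, not a standard methodology; calibrate them to your own portfolio.

```python
# Minimal sketch: turn the three scoring factors into a sortable number.
# The 1-5 scales and the weighting below are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class GapScore:
    contract_id: str
    likelihood: int       # 1 (unlikely) to 5 (near-certain violation)
    severity: int         # 1 (cure period) to 5 (termination / liquidated damages)
    value_at_risk: float  # annual relationship value in dollars


def priority(gap: GapScore, value_cap: float = 5_000_000) -> float:
    """Composite score: likelihood x severity, scaled by relationship value.

    Value is normalized against a cap so one large client does not drown
    out every other gap in the portfolio.
    """
    value_factor = min(gap.value_at_risk / value_cap, 1.0)
    return gap.likelihood * gap.severity * (0.5 + 0.5 * value_factor)


gaps = [
    GapScore("MSA-017", likelihood=4, severity=5, value_at_risk=5_000_000),
    GapScore("DPA-102", likelihood=2, severity=2, value_at_risk=150_000),
]
for g in sorted(gaps, key=priority, reverse=True):
    print(f"{g.contract_id}: {priority(g):.1f}")
```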
Flag deal-breakers. Some contracts will contain restrictions that AI deployment cannot satisfy without renegotiation. Identify these early.
Week 4: Remediation Plan and Execution
Three remediation paths exist for each gap:
- Amend the existing contract. Draft and propose AI-specific addenda covering data processing authorization, AI tool disclosure, IP allocation, and training data restrictions. This is the cleanest path for high-value client relationships.
- Exclude the relationship from AI processing. For contracts where renegotiation is impractical or the client will not consent, establish technical controls that prevent that client’s data from entering AI tools. This requires coordination with IT.
- Accept residual risk with documentation. For low-value contracts where the gap is minor and the consequences are manageable, document the risk assessment, the decision rationale, and the risk owner. This is not “ignore it” — it is a deliberate, documented business decision.
Deliverables from the audit:
- Complete clause matrix across all five contract categories
- Gap analysis with risk scores
- Remediation priority list with recommended action per contract
- Template AI addenda for client agreements, vendor agreements, and employment agreements
- Updated engagement letter language for professional services
Key Data Points
| Metric | Finding | Source |
|---|---|---|
| GC AI adoption rate | 87% of GCs report AI use, up from 44% one year earlier | FTI Consulting GC Report (n=224, Summer 2025) |
| Contracts requiring review | “Hundreds or even thousands” per enterprise | Debevoise & Plimpton (November 2025) |
| AI vendor liability caps | 88% cap liability at monthly subscription fees | WilmerHale AI Vendor Survey (2026) |
| Informed consent standard | Boilerplate engagement letter language insufficient | ABA Formal Opinion 512 (July 2024) |
| State privacy laws | 19 states with comprehensive privacy laws by end of 2026 | National Conference of State Legislatures (2026) |
| FTC enforcement position | Retroactive privacy policy changes for AI are “potentially unfair or deceptive” | FTC Tech Blog (February 2024) |
| GC priorities | >33% of GCs focused on AI adoption, risk management, or AI skills | Gartner GC Survey (October 2025) |
| AI vendor training defaults | Most SaaS vendors retain data training rights unless contract restricts | Gouchev Law, Galkin Law analysis (2025) |
| Copyright protection | Purely AI-generated material not copyrightable | U.S. Copyright Office, confirmed by Supreme Court cert denial (March 2026) |
| Audit cost estimate | $15K-$35K for 50-200 contract review | Based on mid-market outside counsel rates for contract review scope |
What This Means for Your Organization
The contract audit is the unglamorous prerequisite that determines whether AI deployment creates value or liability. Every other governance investment — the acceptable use policy, the vendor evaluation, the training program, the insurance application — assumes the company’s existing contractual obligations permit AI use. If they do not, the company is building on a foundation that a single client inquiry or regulatory notice can crack.
The practical sequence is clear: inventory, categorize, gap-analyze, remediate. A GC at a 200-500 person company can scope this work to four weeks and $15K-$35K — a fraction of the cost of a single client relationship damaged by an undisclosed AI-related contract breach. The companies that run this audit before deploying AI tools position themselves to move faster, not slower. They can deploy with confidence that their existing obligations have been addressed, their clients have been properly notified, and their employment agreements reflect the reality of how work is now produced.
If this raised questions specific to your contract portfolio, I’d welcome the conversation — brandon@brandonsneider.com
Sources
- Debevoise & Plimpton, “AI’s Biggest Enterprise Challenge in 2026: Contractual Use Limitations on Data,” Debevoise Data Blog, November 17, 2025. https://www.debevoisedatablog.com/2025/11/17/ais-biggest-enterprise-problem-in-2026-contractual-use-limitations-on-data/ — High credibility. Top-tier law firm analysis based on client advisory work across financial services, insurance, and asset management sectors.
- Federal Trade Commission, “AI (and other) Companies: Quietly Changing Your Terms of Service Could Be Unfair or Deceptive,” FTC Tech Blog, February 2024. https://www.ftc.gov/policy/advocacy-research/tech-at-ftc/2024/02/ai-other-companies-quietly-changing-your-terms-service-could-be-unfair-or-deceptive — Highest credibility. Direct regulatory guidance from the enforcement authority.
- American Bar Association, Formal Opinion 512, “Generative Artificial Intelligence Tools,” July 29, 2024. https://www.americanbar.org/news/abanews/aba-news-archives/2024/07/aba-issues-first-ethics-guidance-ai-tools/ — Highest credibility. Binding ethics guidance for the legal profession; persuasive authority for informed consent standards across professional services.
- Roth Jackson, “Non-Disclosure Agreements 2.0: Why It’s Crucial to Include AI Provisions in Your Non-Disclosure Agreements,” December 2025. https://www.rothjackson.com/blog/2025/12/non-disclosure-agreements-2-0-why-its-crucial-to-include-ai-provisions-in-your-non-disclosure-agreements/ — Moderate credibility. Regional law firm practice guidance; useful for clause-level analysis.
- Journal of Accountancy, “Should I Disclose My Use of Gen AI to Clients?,” April 2025. https://www.journalofaccountancy.com/issues/2025/apr/should-i-disclose-my-use-of-gen-ai-to-clients/ — High credibility. AICPA publication; authoritative for accounting profession standards.
- Galkin Law, “AI-Specific Issues in SaaS Agreements,” 2025. https://galkinlaw.com/ai-issues-in-saas-agreements/ — Moderate credibility. Practitioner analysis of SaaS agreement conflicts; useful for identifying IP licensing conflicts between vendor and client terms.
- Gouchev Law, “10 Critical Clauses for AI Vendor Contracts,” 2025. https://gouchevlaw.com/10-critical-clauses-for-ai-vendor-contracts/ — Moderate credibility. Practitioner guidance on AI-specific vendor contract provisions.
- Tascon Legal, “AI Clauses in Contracts: The Practical Guide for 2025,” 2025. https://tasconlegal.com/ai-clauses-in-contracts-the-practical-guide-for-2025/ — Moderate credibility. Practitioner framework for AI clause categorization and audit methodology.
- DarrowEverett, “Key IP Licensing Considerations in AI Technology Agreements,” 2025. https://darroweverett.com/ai-technology-agreements-licensing-legal-analysis/ — Moderate credibility. Detailed IP licensing analysis with recommended contract language.
- Gartner, “Survey Shows AI and Contract Analytics Are Urgent Priorities for General Counsel,” Press Release, October 1, 2025. https://www.gartner.com/en/newsroom/press-releases/2025-10-01-gartner-survey-shows-ai-and-contract-analytics-ar-urgent-priorities-for-general-counsel — High credibility. Gartner survey data on GC priorities; sample size not specified in press release.
- FTI Consulting, General Counsel Report (n=224, organizations >$100M revenue), Summer 2025. Referenced in prior research. — High credibility. Independent consulting firm survey of senior legal leaders.
- Corporate Compliance Insights, “AI Risk in 2026: 3 Critical Changes for the General Counsel,” 2026. https://www.corporatecomplianceinsights.com/ai-risk-2026-critical-changes-general-counsel/ — Moderate credibility. Industry publication synthesizing GC risk trends.
Brandon Sneider | brandon@brandonsneider.com | March 2026