The Mid-Market AI Acceptable Use Policy: The General Counsel’s Day 1 Document
Brandon Sneider | March 2026
Executive Summary
- Only 31% of organizations have formal, comprehensive AI policies — yet 83% of employees are already using AI at work (ISACA, n=3,200+, April 2025). At mid-market companies without dedicated compliance teams, the gap is wider. The AI acceptable use policy is not a governance aspiration. It is the single document that stands between your organization and an unforced data breach, IP exposure, or regulatory violation.
- Formalized AI policies shifted from best practice to compliance obligation in 2026. Colorado’s AI Act (effective June 30, 2026), Illinois AIPA (January 2026), and the EU AI Act’s high-risk regime (August 2026) create overlapping mandates. Gartner projects AI governance platform spending at $492M in 2026. Organizations without documented policies face enforcement exposure today.
- The Samsung precedent made the cost of inaction concrete. Three employees entered proprietary source code and meeting notes into ChatGPT within 20 days in 2023 — forcing a company-wide ban. The incident is now the default case study in every board AI governance discussion. A mid-market company cannot afford to learn this lesson firsthand.
- This document identifies the 10 provisions that cover 90% of mid-market AI risk. Not governance philosophy. Not a framework. A draftable policy instrument the General Counsel can adapt in two weeks and deploy before the next AI tool goes live. Organizations that govern AI proactively show 46% agentic AI early adoption rates versus 12% for those still developing policies (CSA/Google Cloud, 2025).
Why the General Counsel Owns This Document
The acceptable use policy sits at the intersection of legal risk, data security, and operational efficiency — and at a 200-2,000 person company, the GC is the only officer who spans all three.
FTI Consulting’s 2026 General Counsel Report (n=224 quantitative respondents plus 30 personal interviews, organizations over $100M revenue, Summer 2025) found that 87% of general counsel now report AI use within their teams — nearly double the 44% in 2025. Yet half of legal operations teams identified “evaluating and implementing generative AI use cases” as their greatest current challenge. The tools are deployed. The guardrails are not.
Courts hold legal counsel personally responsible for AI failures regardless of which department selected the technology. The GC who delegates the acceptable use policy to IT or HR — or worse, leaves it unwritten — creates personal professional liability alongside organizational risk.
PwC’s 2025 Responsible AI survey (n=310 U.S. business leaders, director-level and above, October 2025) found that only 5% of organizations assign primary responsible AI accountability to Legal/Compliance, while 32% assign it to IT/Engineering. This creates a structural problem: the function with the deepest understanding of regulatory risk and contractual liability is the least likely to own the policy.
The GC does not need to become a technologist. The GC needs to own the policy instrument that governs how every other function uses AI.
The 10 Provisions
Provision 1: Scope and Applicability
Define exactly who and what the policy covers. Every employee, contractor, and temporary worker. Every AI tool — commercial SaaS (ChatGPT, Claude, Copilot, Gemini), embedded AI features within existing platforms (Microsoft 365 Copilot, Salesforce Einstein, ServiceNow Now Assist), open-source models run locally, and any tool an employee accesses through a personal account.
The scope clause is where most policies fail. Traliant’s 2024 survey (n=500 U.S. HR professionals, organizations 100-1,000+ employees, September 2024) found that 31% of HR professionals had not shared any communications or guidelines about AI use. If employees do not know the policy exists, it does not exist.
Draft language: “This policy applies to all employees, contractors, temporary workers, and agents who use, access, or interact with any artificial intelligence tool in connection with company business, whether on company-owned or personal devices, and whether through company-licensed or personally obtained accounts.”
Provision 2: Tool Classification — Approved, Conditional, Prohibited
Maintain a living registry of AI tools in three tiers:
| Tier | Definition | Examples | Governance |
|---|---|---|---|
| Approved | Enterprise-licensed, security-vetted, data processing agreements in place | Company-licensed Copilot, Claude Enterprise, approved Salesforce AI features | Use within data classification rules |
| Conditional | Permitted for specific use cases with restrictions | Free-tier AI tools for non-confidential brainstorming, personal learning | No company data input; no client-facing output |
| Prohibited | Blocked by policy; violates security, regulatory, or ethical standards | Unvetted open-source models processing company data, AI tools from sanctioned entities, deepfake generators | Disciplinary action for use |
Gartner found 68% of employees use AI tools without IT approval (2025). The three-tier system channels that behavior rather than pretending it does not happen. A blanket ban does not work — Samsung tried it and reversed course within a year.
Update cadence: Review and update the tool registry quarterly. Assign a named owner (typically IT Security or the CIO) with authority to reclassify tools based on vendor changes, security assessments, or new regulatory requirements.
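For organizations that want the registry to be more than an intranet page, the three tiers translate naturally into a small machine-readable structure that IT can publish and scripts can query. A minimal sketch in Python follows; the tool entries, owner value, and review dates are illustrative assumptions, not vetted classifications.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class Tier(Enum):
    APPROVED = "approved"        # enterprise-licensed, security-vetted, DPA in place
    CONDITIONAL = "conditional"  # narrow use cases only; no company data input
    PROHIBITED = "prohibited"    # blocked; use triggers disciplinary action

@dataclass
class AiTool:
    name: str
    tier: Tier
    owner: str           # named owner with authority to reclassify
    last_reviewed: date  # quarterly review cadence

# Illustrative entries only; real classifications come from security review.
REGISTRY = [
    AiTool("Claude Enterprise", Tier.APPROVED, "IT Security", date(2026, 3, 1)),
    AiTool("Free-tier chatbot", Tier.CONDITIONAL, "IT Security", date(2026, 3, 1)),
    AiTool("Unvetted local model", Tier.PROHIBITED, "IT Security", date(2026, 3, 1)),
]

def tier_of(tool_name: str) -> Tier:
    """Look up a tool's tier; unlisted tools fail closed to Prohibited."""
    for tool in REGISTRY:
        if tool.name.lower() == tool_name.lower():
            return tool.tier
    return Tier.PROHIBITED
```

Defaulting unknown tools to Prohibited matches the policy’s logic: a tool is not approved until someone with authority has vetted it.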
Provision 3: Data Classification and Input Restrictions
This is the highest-risk provision. Samsung’s three incidents — source code, optimization algorithms, and meeting transcripts pasted into ChatGPT — all involved data that should never have reached an external model.
Define four data tiers mapped to AI tool permissions:
| Data Classification | AI Tool Permission | Examples |
|---|---|---|
| Public | Any approved or conditional tool | Published marketing materials, public financial filings, general industry research |
| Internal | Approved tools only | Internal memos, non-sensitive project plans, general business communications |
| Confidential | Approved tools with enterprise data protections only | Customer data, financial projections, employee records, source code, strategic plans |
| Restricted | No external AI tools under any circumstances | Trade secrets, privileged legal communications, M&A materials, PII/PHI datasets, board materials |
ISACA’s 2025 research found 63% of IT professionals are extremely or very concerned about generative AI misuse — and data leakage is the primary driver of that concern. The policy must be explicit: entering Confidential or Restricted data into any non-enterprise AI tool is a terminable offense, not a coaching opportunity.
Practical guidance: Include 3-5 concrete examples of what employees may and may not enter into AI tools. Abstract rules fail. “Do not enter confidential data” is insufficient. “Do not paste customer names, contract values, employee performance reviews, source code, or litigation strategy into any AI tool not classified as Approved” is actionable.
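The classification table reduces to a lookup that can sit behind a data-loss-prevention rule or a simple pre-submission check. A minimal sketch follows, reusing the Tier enum from the Provision 2 registry sketch; the mapping mirrors the table above, and everything else is an assumption for illustration.

```python
from enum import Enum

class Tier(Enum):  # as in the Provision 2 registry sketch
    APPROVED = "approved"
    CONDITIONAL = "conditional"
    PROHIBITED = "prohibited"

class DataClass(Enum):
    PUBLIC = "public"
    INTERNAL = "internal"
    CONFIDENTIAL = "confidential"
    RESTRICTED = "restricted"

# Which tool tiers may receive each data classification, per the table above.
ALLOWED_TIERS = {
    DataClass.PUBLIC: {Tier.APPROVED, Tier.CONDITIONAL},
    DataClass.INTERNAL: {Tier.APPROVED},
    DataClass.CONFIDENTIAL: {Tier.APPROVED},  # enterprise data protections required
    DataClass.RESTRICTED: set(),              # no external AI tools, ever
}

def may_submit(data: DataClass, tool_tier: Tier) -> bool:
    """True only if the policy permits this data classification in this tool tier."""
    return tool_tier in ALLOWED_TIERS[data]
```

Note that `may_submit(DataClass.RESTRICTED, Tier.APPROVED)` returns False: the empty set for Restricted encodes “no external AI tools under any circumstances” directly.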
Provision 4: Output Review and Human Accountability
Deloitte’s 2025 survey found that 47% of enterprise AI users made at least one major business decision based on hallucinated content. The policy must establish that AI-generated output is a draft — never a final product.
Three mandatory review requirements:
- Accuracy verification. Every AI-generated output used in a business decision, client deliverable, or external communication must be reviewed by a qualified human before use. The reviewer must verify factual claims against primary sources. “The AI said so” is not a citation.
- Attribution and disclosure. AI-generated content used in client-facing materials, regulatory filings, or legal documents must be disclosed as AI-assisted. This is not optional courtesy. It is an emerging regulatory requirement (EU AI Act Article 50 transparency obligations, multiple state bar ethics opinions).
- Accountability assignment. The person who submits AI-generated output assumes full professional responsibility for its accuracy, completeness, and appropriateness. The AI tool is not an author. It is not a co-worker. It is not a defense.
Draft language: “You are personally responsible for the accuracy, completeness, and appropriateness of any AI-generated content you use in company business. AI output must be reviewed and verified before inclusion in any deliverable, decision, communication, or filing. The use of an AI tool does not diminish or transfer your professional accountability.”
Provision 5: Prohibited Use Cases
Certain AI applications create disproportionate legal, ethical, or reputational risk. Enumerate them explicitly:
- Employment decisions. AI may not make or materially influence hiring, termination, promotion, or compensation decisions without documented human review by a qualified decision-maker. NYC Local Law 144, Illinois AIPA (effective January 2026), and Colorado’s AI Act (effective June 2026) impose audit, disclosure, and anti-discrimination requirements on automated employment decision tools.
- Legal conclusions. AI may not draft legal opinions, provide legal advice to clients, or generate content represented as legal analysis without review and approval by a licensed attorney.
- Financial commitments. AI may not authorize expenditures, approve contracts, or generate financial projections presented to investors, lenders, or regulators without human verification.
- Customer-facing deception. AI-generated communications to customers may not be represented as human-authored. Content personalization and automated responses must be clearly identified as AI-assisted where regulations require.
- Surveillance and monitoring. AI tools may not be used for employee surveillance, behavioral scoring, or keystroke monitoring beyond what existing company policies and applicable law expressly permit.
Provision 6: Intellectual Property Protections
The U.S. Copyright Office confirmed in January 2025 — and the Supreme Court declined to disturb on March 2, 2026 — that purely AI-generated material is not copyrightable. The policy must protect the company’s IP going in and manage expectations about IP coming out.
Input protections: Employees may not enter trade secrets, proprietary algorithms, unpublished product designs, or other competitively sensitive intellectual property into any AI tool unless the tool is Approved-tier with contractual protections against training on customer data.
Output expectations: AI-generated output used in products, deliverables, or competitive assets must include a documented human authorship contribution. The GC, in consultation with business unit leaders, should define the minimum threshold of human modification. Content that is purely AI-generated should not be relied upon for IP protection.
Third-party IP: Employees may not use AI tools to reproduce, summarize, or circumvent access controls on copyrighted, licensed, or paywalled third-party content.
Provision 7: Vendor and Procurement Requirements
Every new AI tool — whether a standalone product or an AI feature activated within an existing platform — must pass through a defined procurement gate before deployment.
Minimum vetting criteria:
- Data processing agreement specifying how the vendor handles, stores, and retains input data
- Training exclusion — written confirmation the vendor does not use customer data for model training
- SOC 2 Type II or equivalent security certification
- Data residency — where data is processed and stored, mapped to applicable regulatory requirements
- Incident response — vendor’s contractual obligations for data breach notification
- Model change notification — vendor’s obligation to disclose material changes to the underlying model
Only 17% of AI vendors currently offer regulatory compliance warranties (WilmerHale, 2026). The procurement gate ensures the GC reviews terms before the tool is live — not after an incident.
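Teams that track procurement in a ticketing system can encode the gate so a tool cannot be marked deployable until every criterion is documented. A minimal sketch under the assumption that each criterion is recorded as a boolean; the field names are hypothetical.

```python
from dataclasses import dataclass, fields

@dataclass
class VendorVetting:
    # The six minimum criteria from Provision 7; all must hold before deployment.
    dpa_signed: bool                 # data processing agreement in place
    training_exclusion: bool         # written no-training-on-customer-data confirmation
    soc2_type2_or_equivalent: bool   # security certification verified
    data_residency_mapped: bool      # storage/processing locations mapped to regulations
    breach_notification_terms: bool  # contractual incident response obligations
    model_change_notice: bool        # vendor must disclose material model changes

def gate_passes(vetting: VendorVetting) -> bool:
    """Deployment is blocked unless every vetting criterion is satisfied."""
    return all(getattr(vetting, f.name) for f in fields(vetting))
```

A vendor missing any single item, say the training exclusion, fails the gate. There is no partial credit, which is the point of a gate rather than a scorecard.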
Provision 8: Incident Reporting
A third of organizations dealt with an AI-related security incident or near-miss in the past year. Ninety percent of CISOs identify shadow AI as a significant concern, yet fewer than 30% have implemented technical controls beyond policy statements.
Define a clear, low-friction reporting mechanism:
- What to report: Any suspected data leakage through an AI tool, hallucinated content that reached a customer or was used in a decision, use of a Prohibited-tier tool, AI-generated output that produced discriminatory or harmful results, or any vendor notification of a security incident
- How to report: Single channel (email alias, internal form, or Slack channel) — not a 12-step process that ensures no one reports anything
- Timeline: Within 24 hours of discovery. Immediate escalation for incidents involving Restricted data, client data, or regulatory exposure (see the intake sketch after this list)
- No retaliation: Employees who report in good faith are protected from disciplinary action for the act of reporting, even if the report reveals a policy violation they committed
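A structured intake record keeps the channel low-friction while still capturing what escalation needs. The sketch below shows one possible shape; the category strings, field names, and escalation rule are illustrative assumptions, not prescribed values.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

# Categories that trigger immediate escalation per the timeline rule above.
ESCALATE = {"restricted_data", "client_data", "regulatory_exposure"}

@dataclass
class IncidentReport:
    reporter: str
    category: str        # e.g. "data_leakage", "hallucination_reached_customer"
    tool_name: str
    description: str
    discovered_at: datetime
    filed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def within_window(self) -> bool:
        """Policy requires filing within 24 hours of discovery."""
        return self.filed_at - self.discovered_at <= timedelta(hours=24)

    def escalate_now(self) -> bool:
        """Restricted data, client data, or regulatory exposure escalates immediately."""
        return self.category in ESCALATE
```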
Provision 9: Training and Acknowledgment
Traliant found 21% of employees have received no AI training at all (n=500, September 2024). A policy without training is wallpaper.
Initial deployment: Every employee and contractor must complete a 30-minute training module and sign a written acknowledgment within 30 days of policy adoption. The training should include 3-5 concrete scenarios — not abstract principles.
Annual refresh: Update training annually to reflect new tools, regulatory changes, and lessons from internal incidents. The 2025-2026 regulatory acceleration (Colorado, Illinois, EU AI Act) means the policy and training will evolve faster than most HR content cycles.
Role-specific modules: Employees who use AI tools daily (developers, marketing, analysts) need deeper training than employees who use embedded AI features passively. One-size-fits-all training produces one-size-fits-none compliance.
Provision 10: Enforcement and Consequences
A policy without enforcement is a suggestion. Define a graduated response:
| Violation Severity | Example | Consequence |
|---|---|---|
| Minor | Using a Conditional tool without following restrictions; failing to verify an AI output before internal use | Written warning; mandatory refresher training |
| Moderate | Entering Internal-classification data into a Conditional tool; repeated minor violations | Formal disciplinary action; temporary suspension of AI tool access |
| Severe | Entering Confidential or Restricted data into any non-Approved tool; using AI for prohibited employment decisions; falsifying AI output as human-authored work | Termination; potential referral for legal action |
Make the consequences real and known. The Samsung incident went viral not because three employees used ChatGPT, but because no policy told them not to.
Implementation Timeline
A mid-market GC can move from zero to deployed in 30-45 days:
| Phase | Timeline | Activities |
|---|---|---|
| Draft | Days 1-10 | GC drafts policy with input from CIO/CISO, HR, and one business unit leader. Inventory existing AI tools. |
| Review | Days 11-20 | Cross-functional review. Legal review of regulatory alignment (Colorado, Illinois, applicable state laws). Board or executive team approval. |
| Deploy | Days 21-30 | All-hands communication. Training module deployment. Written acknowledgment collection. Tool registry published on intranet. |
| Sustain | Ongoing | Quarterly tool registry review. Annual policy refresh. Incident response after-action reviews. |
Cost for a mid-market company: $15,000-$50,000 including outside counsel review, training platform license, and internal staff time. This is a fraction of the cost of one data breach (IBM’s 2024 Cost of a Data Breach Report puts the average at $4.88M globally; organizations with AI and automation in security save $2.22M per incident).
Key Data Points
| Metric | Finding | Source |
|---|---|---|
| Organizations with formal AI policies | 31% | ISACA (n=3,200+, April 2025) |
| Employees using AI at work | 83% | ISACA (n=3,200+, April 2025) |
| Employees using AI without IT approval | 68% | Gartner (2025) |
| Companies with no AI policy and no plan for one | 25%+ | Security Magazine (2025) |
| GC departments using AI | 87% (up from 44% in 2025) | FTI Consulting (n=224, Summer 2025) |
| Enterprise AI users who acted on hallucinated content | 47% | Deloitte (2025) |
| HR professionals who shared no AI guidelines to employees | 31% | Traliant (n=500, September 2024) |
| CISOs citing shadow AI as significant concern | 90% | Industry surveys compiled (2025) |
| CISOs with technical controls beyond policy | <30% | Industry surveys compiled (2025) |
| AI governance platform spend (2026) | $492M | Gartner (February 2026) |
| Average cost of data breach | $4.88M | IBM Cost of a Data Breach (2024) |
| Savings from AI/automation in security incident response | $2.22M per incident | IBM Cost of a Data Breach (2024) |
What This Means for Your Organization
The acceptable use policy is the fastest legal risk reduction available to a mid-market General Counsel. It requires no technology purchase and no organizational restructuring. It requires the GC to spend ten days drafting a document and twenty days deploying it.
The regulatory window is closing. Colorado’s AI Act takes effect June 30, 2026. Illinois AIPA is already live. The EU AI Act’s high-risk provisions arrive August 2, 2026. Organizations without documented AI governance face not only operational risk from shadow AI (68% of employees already use AI tools without IT approval) but also regulatory exposure from increasingly specific state and federal requirements. The difference between organizations that capture AI’s value and those that create AI’s liability often comes down to whether a clear acceptable use policy existed before the first incident.
The ten provisions above are not exhaustive. Industry-specific requirements — HIPAA for healthcare, FINRA/SEC for financial services, FERPA for education — layer additional obligations. But the 10-provision framework covers the 90% of risk that is common across mid-market organizations regardless of sector. If this raised questions specific to your industry or organization, I’d welcome the conversation — brandon@brandonsneider.com
Sources
- ISACA, “AI Use Is Outpacing Policy and Governance, ISACA Finds,” April 2025 (n=3,200+ global, 561 European IT/business professionals). Independent professional association survey. High credibility. https://www.isaca.org/about-us/newsroom/press-releases/2025/ai-use-is-outpacing-policy-and-governance-isaca-finds
- FTI Consulting, “The General Counsel Report 2026,” March 2026 (n=224 quantitative, 30 personal interviews, organizations $100M+ revenue). Independent consulting firm research. High credibility. https://www.globenewswire.com/news-release/2026/03/11/3253654/33891/en/AI-Adoption-in-Corporate-Legal-Departments-Doubles-According-to-The-General-Counsel-Report.html
- Traliant, “HR Report on AI: Insights on HR’s Readiness and Risk Management,” November 2024 (n=500 U.S. HR professionals, conducted by Researchscape, September 2024). Vendor-commissioned but independently conducted survey. Moderate-high credibility. https://www.traliant.com/resources/hr-report-on-ai-insights/
- PwC, “2025 Responsible AI Survey,” October 2025 (n=310 U.S. business leaders, director-level and above). Big Four consulting survey. High credibility. https://www.pwc.com/us/en/tech-effect/ai-analytics/responsible-ai-survey.html
- Gartner, “Global AI Regulations Fuel Billion-Dollar Market for AI Governance Platforms,” February 2026. Independent analyst firm. High credibility. https://www.gartner.com/en/newsroom/press-releases/2026-02-17-gartner-global-ai-regulations-fuel-billion-dollar-market-for-ai-governance-platforms
- Samsung ChatGPT data leak incident, multiple sources, April-May 2023. Widely reported and confirmed by Samsung. High credibility as event documentation. https://techcrunch.com/2023/05/02/samsung-bans-use-of-generative-ai-tools-like-chatgpt-after-april-internal-data-leak/
- IBM, “Cost of a Data Breach Report 2024,” July 2024 (n=604 organizations globally). Annual study conducted by Ponemon Institute. High credibility. Referenced for breach cost and AI/automation savings figures.
- Deloitte, “State of AI in the Enterprise 2026,” 2025 (n=3,235, August-September 2025). Big Four consulting survey. High credibility. https://www.deloitte.com/us/en/what-we-do/capabilities/applied-artificial-intelligence/content/state-of-ai-in-the-enterprise.html
- Tenable, “A Complete Guide to Creating Your Company’s AI Acceptable Use Policy,” 2025. Vendor guide but substantive framework. Moderate credibility. https://www.tenable.com/cybersecurity-guide/learn/ai-acceptable-use-policy-aup
- CSA/Google Cloud, AI governance and adoption survey, 2025. Industry consortium with vendor sponsorship. Moderate-high credibility. Referenced for governance-adoption correlation data.
- NIST AI Risk Management Framework (AI RMF 1.0), January 2023; AI 600-1 Cybersecurity Profile, December 2025. Federal standards body. Highest credibility. https://www.nist.gov/itl/ai-risk-management-framework
- Colorado AI Act (SB 24-205), effective June 30, 2026. State legislation. Primary source. https://leg.colorado.gov/bills/sb24-205
Brandon Sneider | brandon@brandonsneider.com | March 2026