AI Acceptable Use Policies at Mid-Market Companies: Data, Frameworks, and Regulatory Requirements

Brandon Sneider | March 2026


Executive Summary

  • Only 28-31% of organizations have a formal, comprehensive AI acceptable use policy (ISACA, n=3,200 globally, April 2025). Sixty percent report having “some form” of AI policy, but only 10% describe it as comprehensive. The gap between adoption and governance is the single largest controllable risk factor in enterprise AI.
  • Shadow AI breaches cost $670,000 more per incident than standard breaches — $4.63M vs. $3.96M average (IBM Cost of a Data Breach Report 2025, n=604 organizations). One in five organizations experienced a breach due to shadow AI. Only 37% have policies to manage or detect it.
  • Organizations with governance programs extract 3x more business value from AI. Gartner (n=360 organizations, May-June 2025) finds regular AI system assessments triple the likelihood of high GenAI value. Organizations with AI-specific usage policies are 2x more likely to report higher value. Governance is the accelerator, not the brake.
  • A credible Day 1 AI acceptable use policy can be deployed in 2-3 weeks. A comprehensive governance program takes 90 days. The minimum viable policy fits on one page; the full program requires five documents.
  • Regulatory deadlines are converging: Colorado AI Act (June 2026), Illinois HB-3773 (January 2026), EU AI Act high-risk provisions (August 2026). There is no federal preemption. Mid-market companies operating in multiple states face overlapping obligations now.

1. Policy Adoption Rates: How Many Companies Have an AI AUP

The Numbers

| Finding | Source | Date | Sample | Credibility |
|---|---|---|---|---|
| 28% of organizations have a formal AI policy | ISACA AI Governance Survey | April 2025 | n=3,200 IT/business professionals globally | High — ISACA is a respected governance body; large global sample |
| 31% of organizations have a formal, comprehensive AI policy | ISACA (European subset) | April 2025 | n=561 European IT/business professionals | High — same survey, regional cut |
| 60% of companies have “some form” of AI AUP, but only 10% describe it as comprehensive | Industry compilation (multiple surveys) | 2025 | Varies | Medium — aggregated stat, directional |
| 43% of businesses have an AI governance policy | ISACA / AiDataAnalytics compilation | 2025 | Referenced from ISACA data | Medium-High — derived from ISACA primary data |
| 73% of marketing teams say their company has no AI usage policy | Brafton Research Lab | 2025 | Survey of marketing professionals using AI | Medium — industry-specific, marketing-skewed sample |
| 76% of organizations have “established governance structures and policies” | Knostic compilation of governance surveys | 2025 | Multiple surveys aggregated | Medium — compilation, not primary research |
| 70% of organizations lack optimized AI governance; ~40% have no AI-specific governance at all | Acuvity State of AI Security Report | October 2025 | n=275 executives at enterprises 500-10,000+ employees | High — enterprise-focused, senior respondents |
| 55% of organizations have an AI board or dedicated oversight committee | Gartner poll of executive leaders | 2025 | n=1,800+ executive leaders | High — Gartner primary data, large sample |

What This Means

The data converges on a clear picture: roughly one-third of organizations have formal AI policies, and fewer than one in ten have anything rigorous. The remaining two-thirds have either informal guidance, partial policies, or nothing. For mid-market companies (200-2,000 employees), the number is almost certainly lower than these enterprise-weighted averages suggest — Acuvity’s data showing 70% lack optimized governance at the 500+ employee level is probably optimistic for companies at the 200-employee mark.

The ISACA finding is the most reliable single data point: 28% formal policy adoption, based on 3,200 respondents with a clear methodology (fieldwork March-April 2025).



2. Key Components of Effective AI AUPs

What the Frameworks Recommend

The convergence across law firms, consulting firms, governance bodies, and template providers is striking. Effective AI AUPs share seven core components:

Component 1: Scope and Definitions

  • Define what counts as an “AI tool” (generative AI, predictive analytics, embedded AI features in existing SaaS)
  • Specify who the policy covers (all employees, contractors, vendors, board members)
  • Clarify that the policy covers both company-provided and personal AI tools used for work purposes

Component 2: Data Classification Rules

A tiered classification system appears in virtually every credible template:

| Data Tier | Description | AI Tool Restriction |
|---|---|---|
| Level 0 — Public | Marketing copy, press releases, public documentation | Any approved AI tool |
| Level 1 — Internal | Team emails, draft plans, internal memos | Tier 1 (sanctioned) tools only |
| Level 2 — Confidential | Customer contracts, financial projections, roadmap details | Tier 1 tools with manager sign-off and documented business justification |
| Level 3 — Restricted | Source code, API keys, encryption keys, unreleased IP, PII, PHI | No AI tool input permitted |

Source: PDQ, Tenable, PurpleSec, CentreXIT templates (2025). This classification mirrors ISO 27001 data handling tiers and maps cleanly to the NIST AI RMF’s data governance requirements.
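As an illustration, the tiered rules above can be expressed as a small policy check. This is a minimal sketch, not code from any cited template — the tier numbering, the `needs_signoff` flag, and the function name are all hypothetical:

```python
# Hypothetical encoding of the data-tier table above.
# Level 0 (Public) may go to any approved tool (Tier 1 or 2);
# Level 1 (Internal) is Tier 1 only; Level 2 (Confidential) is Tier 1
# with sign-off; Level 3 (Restricted) permits no AI input at all.
DATA_TIER_RULES = {
    0: {"allowed_tool_tiers": {1, 2}, "needs_signoff": False},  # Public
    1: {"allowed_tool_tiers": {1}, "needs_signoff": False},     # Internal
    2: {"allowed_tool_tiers": {1}, "needs_signoff": True},      # Confidential
    3: {"allowed_tool_tiers": set(), "needs_signoff": False},   # Restricted
}

def may_use_ai(data_level: int, tool_tier: int, has_signoff: bool = False) -> bool:
    """Return True if data at `data_level` may be entered into a tool of `tool_tier`."""
    rule = DATA_TIER_RULES[data_level]
    if tool_tier not in rule["allowed_tool_tiers"]:
        return False  # tool is not sanctioned for this data class at all
    if rule["needs_signoff"] and not has_signoff:
        return False  # Confidential data requires documented manager sign-off
    return True
```

The point of the sketch is that the policy table is mechanically enforceable: a DLP gateway or browser extension could evaluate exactly this lookup before a prompt leaves the network.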

Component 3: Approved Tool List (Tiered)

  • Tier 1 — Sanctioned: Enterprise-licensed tools with completed security reviews (e.g., ChatGPT Enterprise, Claude for Business, Microsoft 365 Copilot). Full usage permitted within data classification rules.
  • Tier 2 — Tolerated with restrictions: Free-tier or personal-account tools allowed for non-sensitive work only. No client data, PII, or proprietary information.
  • Tier 3 — Prohibited: Tools that failed security review, operate in jurisdictions with inadequate data protection, or have been flagged by government agencies (e.g., DeepSeek, banned by multiple U.S. states and agencies).

The approved list must be maintained by IT/Security, reviewed quarterly, and accessible to all employees. Source: PDQ, AIHR, Tenable, Certified NETS templates.

Component 4: Human Oversight Requirements

  • All AI-generated outputs must be reviewed by a qualified human before external use, client delivery, or consequential decisions
  • Escalation thresholds: effective HITL systems target 10-15% escalation rates (85-90% of decisions execute autonomously; critical cases get human review)
  • Financial services typically use 90-95% confidence thresholds; customer service may accept 80-85% for routine inquiries
  • No AI tool may autonomously make hiring, firing, lending, pricing, or legal decisions without human review and sign-off

Source: Illumination Works (December 2025), Galileo AI, Presidio governance frameworks, IAPP analysis.
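The confidence thresholds above translate into a simple routing rule. The threshold values come from the text (90-95% for financial services, 80-85% for routine customer service); the domain labels, the always-escalate list, and the function itself are illustrative assumptions:

```python
# Hypothetical HITL router based on the thresholds cited above.
ESCALATION_THRESHOLDS = {
    "financial_services": 0.95,  # 90-95% range; strict end used here
    "customer_service": 0.85,    # 80-85% acceptable for routine inquiries
}

# Consequential decision types that always require human sign-off,
# regardless of model confidence (per the policy component above).
ALWAYS_HUMAN = {"hiring", "firing", "lending", "pricing", "legal"}

def route_decision(domain: str, decision_type: str, model_confidence: float) -> str:
    """Return 'auto' or 'human_review' for an AI-generated decision."""
    if decision_type in ALWAYS_HUMAN:
        return "human_review"
    # Unknown domains default to the strictest threshold.
    threshold = ESCALATION_THRESHOLDS.get(domain, 0.95)
    return "auto" if model_confidence >= threshold else "human_review"
```

Tuning the thresholds so that roughly 10-15% of decisions land in `human_review` matches the escalation-rate target cited above.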

Component 5: Prohibited Uses

  • Inputting confidential/restricted data into any AI tool
  • Using AI for fact-finding without human verification
  • Automated decision-making for consequential outcomes without human review
  • Using AI outputs in legal filings, regulatory submissions, or financial statements without verification
  • Generating content that impersonates real people
  • Using AI to circumvent security controls or access restrictions

Component 6: Accountability and Consequences

  • Named owner for AI policy (typically CISO, CIO, or Chief Data Officer)
  • Employees must acknowledge policy annually (physical or digital signature)
  • Clear consequences for violations, aligned to existing disciplinary framework
  • Incident reporting procedures

Component 7: Training and Review Cadence

  • Quarterly staff training on policy changes and new AI risks
  • Annual employee acknowledgment requirement
  • Quarterly policy reviews by governance committee
  • Audit trails: log prompts, sources, and outputs per retention policy
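The audit-trail requirement above amounts to an append-only log of each AI interaction. A minimal sketch, assuming JSON-lines storage — the field names and helper are hypothetical, and real retention rules come from the company's own policy:

```python
# Hypothetical audit-trail record for one AI interaction, serialized as a
# JSON line suitable for an append-only log. Field names are illustrative.
import datetime
import json

def audit_record(user: str, tool: str, prompt: str, output: str, sources=None) -> str:
    """Serialize one prompt/output exchange for the audit log."""
    return json.dumps({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,        # who issued the prompt
        "tool": tool,        # which approved tool was used
        "prompt": prompt,    # what was sent
        "sources": sources or [],  # any source documents referenced
        "output": output,    # what came back
    })
```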



3. Impact of Having vs. Not Having an AI Policy

Security Incident Data

| Finding | Source | Date | Sample | Credibility |
|---|---|---|---|---|
| Shadow AI breaches cost $4.63M average — $670K more than standard breaches | IBM Cost of a Data Breach Report 2025 | July 2025 | n=604 organizations, 17 countries | Very High — gold-standard annual study |
| 1 in 5 organizations experienced a breach due to shadow AI | IBM 2025 | July 2025 | Same | Very High |
| 63% of breached organizations lacked AI governance policies | IBM 2025 | July 2025 | Same | Very High |
| 97% of organizations reporting AI model breaches lacked proper AI access controls | IBM 2025 | July 2025 | 13% of sample reporting AI breaches | Very High |
| 20% of organizations experienced security incidents linked to shadow AI in 2025 | Second Talent / industry compilation | 2025 | Multiple surveys | Medium — compiled stat |
| Shadow AI costs companies an average of $412K per year (non-breach operational costs) | Programs.com compilation | 2025 | Multiple surveys | Medium — aggregated |
| AI incidents jumped 56.4% in a single year (233 reported cases in 2024) | Stanford AI Index / Kiteworks | 2025 | Incident database | High — Stanford’s annual count |
| 86% of organizations have no visibility into AI data flows | Kiteworks / IBM | 2025 | Enterprise survey | High |

Adoption and Productivity Data

| Finding | Source | Date | Sample | Credibility |
|---|---|---|---|---|
| Organizations with AI governance are 3x more likely to achieve high GenAI business value | Gartner | May-June 2025 | n=360 organizations, 250+ employees | Very High — Gartner primary survey |
| Organizations with AI-specific usage policies are 2x more likely to report higher value | Gartner | May-June 2025 | Same | Very High |
| Organizations offering role-based AI guidance are 2x more likely to report higher value | Gartner | May-June 2025 | Same | Very High |
| Organizations providing GenAI ethics training are 1.7x more likely to report higher value | Gartner | May-June 2025 | Same | Very High |
| Organizations investing in AI governance platforms are 1.9x more likely to report higher value | Gartner | May-June 2025 | Same | Very High |
| Companies with governance policies have 46% agentic AI early adoption rate vs. 12% for those still developing policies | CSA/Google Cloud | 2025 | Referenced in governance research | High |
| Leading organizations achieve 15-25% higher AI usage rates than industry averages | Worklytics | 2025 | Enterprise telemetry | Medium-High — observational, not causal |
| Only 27% of organizations using gen AI say employees review all content before use | McKinsey State of AI | March 2025 | n=1,400+ respondents | Very High — McKinsey’s annual global survey |
| 47% of organizations report having experienced at least one negative consequence from generative AI | McKinsey State of AI | March 2025 | Same | Very High |
| Employees with high AI exposure experience 4x productivity growth vs. non-AI counterparts | PwC AI Jobs Barometer | 2025 | Labor market analysis | High — PwC methodology is solid |

Shadow AI Behavior (The Case for Policy)

| Finding | Source | Date | Sample | Credibility |
|---|---|---|---|---|
| 81% of employees use unapproved AI tools at work | UpGuard State of Shadow AI | November 2025 | n=1,020 employees (US/UK), n=542 security leaders | High — rigorous survey methodology (Dynata + Prolific) |
| 93% of executives and senior managers use shadow AI tools | UpGuard | November 2025 | Same | High |
| 90% of security leaders use unapproved AI tools; 69% of CISOs use them daily | UpGuard | November 2025 | Same | High |
| 38% of employees share confidential data with AI platforms without approval | ISC2 / industry surveys | 2025 | Multiple | Medium-High |
| Shadow AI tool usage increased 156% from 2023 to 2025 | Industry compilation | 2025 | Trend data | Medium |
| 50% expect data loss through generative AI tools in the next year | Acuvity | October 2025 | n=275 executives | High |
| 60% of employees would accept security risks to meet deadlines using unsanctioned AI | BlackFog Research | January 2026 | Survey data | Medium-High |

The UpGuard Paradox

The most counterintuitive finding: employees who received AI safety training and report understanding AI security requirements are more likely to use unapproved tools, not less. UpGuard (November 2025) found a positive correlation between self-reported understanding of AI risks and regular use of unapproved tools. Training increases confidence in personal risk judgment — even when that judgment contradicts policy. This means training alone is insufficient. Policy must be paired with technical controls and monitoring.



4. Specific Policy Examples: Data Classification, Approved Tools, Human Oversight

Real-World Data Classification Examples

New York State ITS Policy (ITS-P24-002): The New York State Office of Information Technology Services publishes one of the most detailed government AI acceptable use policies. It classifies data into four tiers aligned with the state’s existing information classification framework, prohibits inputting data classified as “Confidential” or higher into any AI tool, and requires encryption and access controls for AI systems handling “Internal Use” data.

Source: NY State ITS: Acceptable Use of AI Technologies

Salesforce AI AUP: Published publicly. Prohibits using AI services to generate outputs that could directly or indirectly identify individuals, process protected health information, or produce content for regulated financial decisions without human review.

Source: Salesforce AI Acceptable Use Policy (PDF)

Box AI AUP: Explicitly defines acceptable and prohibited uses. Prohibits processing of restricted data categories without enterprise licensing and appropriate data processing agreements.

Source: Box AI Acceptable Use Policy

Approved Tool List Best Practices

The practical consensus from Tenable, PDQ, AIHR, and PurpleSec templates:

  1. Maintain a living document — not buried in the appendix of a 50-page policy, but accessible via the company intranet or a shared link.
  2. Three-tier classification: Sanctioned / Tolerated / Prohibited.
  3. Quarterly review cadence — AI tool landscape changes too fast for annual reviews.
  4. Require employees to check the current list before using any AI tool for work purposes.
  5. If you require enterprise AI tools, ensure employees actually have access to them. Nothing undermines policy faster than approving tools nobody can use. (Source: PDQ, 2025)
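The quarterly review cadence in item 3 can be enforced mechanically. A minimal sketch, assuming a 90-day window as the interpretation of "quarterly" — the function and field names are hypothetical:

```python
# Hypothetical freshness check for the "living document" approved-tool list.
# The 90-day window is an assumed interpretation of quarterly review.
import datetime

def list_is_current(last_reviewed: datetime.date,
                    today=None,
                    max_age_days: int = 90) -> bool:
    """True if the approved-tool list was reviewed within the review window."""
    today = today or datetime.date.today()
    return (today - last_reviewed).days <= max_age_days
```

A CI job or intranet banner could run this check and flag the list for the governance committee when it goes stale.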

Human Oversight Requirements in Practice

The tiered risk-based framework emerging across enterprise policies:

| Risk Level | AI Autonomy | Human Role | Example |
|---|---|---|---|
| Low | AI executes autonomously | Periodic audit only | Email subject line suggestions, calendar scheduling |
| Medium | AI recommends; human decides | Review before action | Draft customer communications, internal report summaries |
| High | AI assists; human owns | Mandatory review, sign-off, and documentation | Hiring recommendations, contract analysis, financial forecasting |
| Critical | AI prohibited or advisory-only | Human performs the work; AI provides data/context only | Legal filings, regulatory submissions, patient care decisions |

Source: Illumination Works (December 2025), Presidio governance framework, Galileo AI. This framework aligns with the EU AI Act’s risk-based classification system and maps to NIST AI RMF MANAGE function requirements.


5. Regulatory Requirements: 2025-2026

EU AI Act

Status: In force since August 1, 2024. Phased implementation through August 2027.

| Deadline | Requirement | AUP Relevance |
|---|---|---|
| February 2, 2025 (in effect) | Prohibited AI practices banned; AI literacy obligation (Article 4) applies | Every organization using AI in the EU must ensure staff have adequate AI literacy. This is a training mandate, not optional. |
| August 2, 2025 (in effect) | GPAI model obligations; governance rules apply | Providers of general-purpose AI must comply with transparency and copyright obligations |
| August 2, 2026 | High-risk AI system obligations apply | Full risk classification, impact assessment, human oversight, transparency, and documentation requirements for high-risk systems |
| August 2, 2027 | Remaining provisions for certain embedded AI systems | Complete enforcement |

Article 4 (AI Literacy) — already in force: “Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf.” This applies to all AI systems regardless of risk level. Enforcement begins August 2026, but the obligation is current. Any company with EU operations or employees must demonstrate AI literacy measures.

Source: EU AI Act Article 4, EU Commission AI Literacy Q&A

U.S. State Laws

| State | Law | Effective Date | Key Requirements | AUP Impact |
|---|---|---|---|---|
| Colorado | Colorado AI Act (SB 24-205) | June 30, 2026 (delayed from Feb 1) | Risk management policy, annual impact assessments, public disclosure of high-risk AI systems, consumer notification, reasonable care to prevent algorithmic discrimination | Deployers of any size must maintain risk management policies. No revenue or employee threshold. Small business exemption (< 50 FTEs) from most onerous requirements but still must notify AG of discovered discrimination within 90 days. |
| Illinois | HB-3773 | January 1, 2026 | Civil rights violation to use AI resulting in employment discrimination. Affirmative notice requirements: AI product name, employment decisions affected, purpose, data collected, targeted positions, contact information | Employers using AI in hiring, promotion, or termination must provide specific, detailed notice. Compliance requires knowing which AI tools are in use (circular dependency on shadow AI audit). |
| California | FEHA amendments (employment); multiple transparency bills | October 1, 2025 (FEHA, in effect) | Unlawful to use automated decision systems that discriminate in employment on protected characteristics. Separate bills require disclosure of training data sources for generative AI. | Employers must audit AI hiring tools for discriminatory impact. Training data transparency obligations for AI providers. |
| Texas | TRAIGA (HB 149) | January 1, 2026 | Prohibits developing/using AI “with the intent to unlawfully discriminate.” Explicitly excludes disparate impact as theory of liability. Government transparency requirements. | Less burdensome than Illinois/Colorado — requires intent, not just impact. But still creates documentation and transparency obligations. |
| New York City | Local Law 144 | In effect (2023) | Bias audits for automated employment decision tools (AEDTs). Public disclosure of audit results. Candidate notification. | Any employer using AI in NYC hiring must conduct and publish annual bias audits. |

Key convergence across states: California and Illinois both require (1) discriminatory impact analysis, (2) testing and evaluation of AI systems, and (3) documentation and transparency through disclosure, recordkeeping, and retention of AI HR system data. Source: Manatt: AI-Assisted Hiring Compliance Landscape 2026

SEC Guidance

Current status: The SEC withdrew its proposed rules on conflicts of interest from predictive data analytics (June 17, 2025). There are no formal AI-specific SEC rules.

However:

  • The SEC Investor Advisory Committee voted (December 2025) to recommend guidance requiring issuers to disclose AI’s impact on their companies, define AI, disclose board oversight, and report material AI deployments. These are recommendations, not rules. The SEC has responded “tepidly.”
  • SEC examination priorities for 2025-2026 explicitly include AI: examiners will evaluate whether firms’ actual AI usage matches their client representations, check AI-related compliance policies, and review disclosures.
  • Enforcement actions in 2024-2025: The SEC charged two investment advisory firms for misrepresenting AI’s role in their investment processes. The signal is clear — AI-washing (overstating AI capabilities to clients/investors) is an enforcement priority even without new rules.

Source: SEC Investor Advisory Committee Recommendations, Goodwin: 2026 SEC Exam Priorities, Norton Rose Fulbright: SEC Heightens AI Enforcement

Federal Executive Action

The December 11, 2025 Executive Order signals an “innovation-first” federal posture, building on the America’s AI Action Plan (July 2025). This is a directional signal, not enforceable law, but influences agency procurement expectations and enforcement priorities. State laws remain the binding regulatory reality for most mid-market companies. Source: Sidley Austin: December 2025 EO Analysis


6. Day 1 AUP vs. Comprehensive Policy: Deployment Timeline

The Two-Speed Approach

The consensus across governance practitioners (Veilsun, TechTarget, FRSecure, PDQ, Certified NETS) is a two-speed deployment:

Day 1 AUP (2-3 weeks):

  • One page, front and back
  • Covers: scope, data classification rules, approved/prohibited tool list, human review mandate, reporting requirements
  • Designed so employees can scan it in under 2 minutes
  • “Minimum viable governance — practical policies that employees will actually follow — and expand from there with quarterly reviews” (Veilsun, 2025)
  • The one-page constraint is deliberate: “not because the issues are simple, but because employees won’t reference a document they can’t quickly scan” (multiple template sources)

Comprehensive Program (90 days, three phases):

| Phase | Weeks | Activities |
|---|---|---|
| Phase 1: Assess | Weeks 1-4 | Shadow AI audit, current usage inventory, risk exposure assessment, data classification review |
| Phase 2: Draft | Weeks 5-8 | Acceptable use guidelines, vendor evaluation standards, incident response plan, legal/security review, executive sign-off |
| Phase 3: Deploy | Weeks 9-12 | Policy rollout, employee training, vendor review process activation, monitoring setup |

Source: Veilsun (2025 — 90-day phased approach), TechTarget, FRSecure templates.

Update cadence: Quarterly reviews are mandatory given the rate of AI tool landscape changes. Annual reviews are insufficient.

What Makes the Difference

Gartner’s data (May-June 2025, n=360) provides the clearest evidence that governance quality matters more than speed:

  • Regular assessments: 3x more likely to achieve high GenAI value
  • AI-specific policies: 2x more likely to report higher value
  • Role-based guidance: 2x more likely to report higher value
  • Ethics training: 1.7x more likely to report higher value
  • Safe rollout expansion: 3.3x more likely to report higher value

The implication: a Day 1 policy gets you from “zero governance” to “basic protection” quickly. But the value multiplier comes from building the full program within 90 days and maintaining quarterly review discipline.


7. NIST AI RMF and ISO 42001 Requirements Relevant to AUP

NIST AI Risk Management Framework (AI RMF 1.0)

Status: Voluntary U.S. framework. Not legally required but increasingly referenced in contracts, procurement requirements, and as a compliance baseline.

The framework has four core functions, each with subcategories relevant to AI acceptable use policy:

GOVERN (the policy function):

  • Govern 1.1: Legal and regulatory requirements are understood and documented
  • Govern 1.2: Policies incorporate trustworthy AI characteristics (valid, reliable, safe, secure, resilient, accountable, transparent, explainable, fair with harmful bias managed, privacy-enhanced)
  • Govern 1.3: Risk-based decision-making procedures are established
  • Govern 1.4: Risk management is transparent and documented
  • Govern 1.7: Processes for decommissioning AI systems are defined

MAP (the risk identification function):

  • Map 1.1: Intended purpose, context, and limitations are documented
  • Map 5.1: Likelihood and magnitude of potential impacts are documented

MEASURE (the monitoring function):

  • Measure 2.11: Fairness and bias evaluation required
  • Measure 4.1: Measurement approaches for deployed AI systems defined

MANAGE (the response function):

  • Manage 1.1: Processes for post-deployment monitoring, appeal and override mechanisms, decommissioning, and change management
  • Manage 4.1: Risk treatments (accept, transfer, mitigate, avoid) are documented

AUP-specific takeaway: An AI AUP that addresses data classification, approved tools, human oversight, and prohibited uses covers GOVERN 1.1-1.4 and provides the operational backbone for MAP and MANAGE requirements. A mid-market company does not need to implement every subcategory and suggested action in the NIST Playbook — the risk classification decision tree and approval workflow are the minimum viable NIST alignment.

Source: NIST AI RMF Playbook, NIST AI RMF 1.0 (PDF), ISPartners: Core Functions Explained
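The GOVERN 1.1-1.4 mapping above can be tracked as a simple coverage check during drafting. The AUP section names and their pairing with subcategories below are hypothetical, chosen only to mirror the mapping described in the text:

```python
# Illustrative coverage check: which GOVERN subcategories named above are
# not yet addressed by the drafted AUP. Section names are assumptions.
GOVERN_COVERAGE = {
    "Govern 1.1": "scope_and_definitions",  # legal/regulatory requirements documented
    "Govern 1.2": "prohibited_uses",        # trustworthy AI characteristics in policy
    "Govern 1.3": "human_oversight",        # risk-based decision procedures
    "Govern 1.4": "accountability",         # transparent, documented risk management
}

def missing_govern_items(aup_sections: set) -> list:
    """Return GOVERN subcategories whose mapped AUP section is missing."""
    return sorted(k for k, v in GOVERN_COVERAGE.items() if v not in aup_sections)
```

A gap report like this makes "minimum viable NIST alignment" auditable rather than aspirational.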

ISO/IEC 42001:2023

Status: Certifiable international standard. Not legally required in any jurisdiction but increasingly demanded by enterprise clients and as a proof point in B2B procurement.

Structure: 10 clauses, of which Clauses 4-10 are auditable requirements:

| Clause | Requirement | AUP Relevance |
|---|---|---|
| 4: Context | Identify internal/external factors, define AIMS scope, identify AI-related risks, understand stakeholder expectations | The AUP scope section addresses this directly |
| 5: Leadership | Top management commitment, AI policy, organizational roles and responsibilities | The AUP must be endorsed by leadership and assign named owners |
| 6: Planning | AI governance objectives, risk assessments, bias mitigation strategies | The AUP’s data classification and human oversight sections feed this |
| 7: Support | Resources, competence, awareness, communication, documented information | Training requirements and employee acknowledgment |
| 8: Operation | Operational planning, AI risk assessment, AI system impact assessment | The approved tool list and risk-tiered approach operationalize this |
| 9: Performance | Monitoring, measurement, analysis, evaluation, internal audit, management review | Quarterly review cadence and audit trail requirements |
| 10: Improvement | Nonconformity, corrective action, continual improvement | Incident response and policy update procedures |

AUP-specific takeaway: An AI AUP is the operational expression of ISO 42001 Clauses 5 (policy), 7 (awareness), and 8 (operational controls). Organizations pursuing ISO 42001 certification need an AUP as a foundational document — it is not a standalone compliance artifact but a required component of the broader AI Management System (AIMS).

Integration point: ISO 42001 aligns naturally with ISO 27001 (information security) and SOC 2 controls. Organizations already certified in ISO 27001 can extend their existing management system to incorporate AI-specific controls rather than building from scratch.

Source: ISO 42001 Implementation Guide (ISMS.online), EY: ISO 42001 — Paving the Way for Ethical AI, Advisera: ISO 42001 Clauses and Requirements, CSA: What Are the ISO 42001 Requirements?, RSI Security: The 10 Clauses of ISO 42001

NIST-ISO Crosswalk

NIST publishes an official crosswalk mapping AI RMF functions to ISO 42001 clauses. The practical implication: a mid-market company that builds an AUP aligned to NIST AI RMF’s GOVERN function is simultaneously addressing ISO 42001 Clauses 5, 7, and 8. The two frameworks are complementary, not competing.

Source: NIST AI RMF to ISO 42001 Crosswalk (PDF), CSA: How ISO 42001 and NIST AI RMF Help with EU AI Act Compliance


Source Credibility Summary

| Source | Type | Credibility | Notes |
|---|---|---|---|
| IBM Cost of a Data Breach 2025 | Independent annual study | Very High | n=604 organizations, 17 countries. Gold standard. |
| Gartner (May-June 2025 fieldwork) | Analyst firm primary research | Very High | n=360, 250+ employee orgs. Rigorous methodology. |
| McKinsey State of AI (March 2025) | Consulting firm annual survey | Very High | n=1,400+. Longest-running enterprise AI survey. |
| ISACA (April 2025) | Professional governance body | Very High | n=3,200 globally. ISACA’s core competency. |
| UpGuard (Nov 2025) | Security vendor primary research | High | n=1,562 (employees + security leaders). Dynata/Prolific methodology. Vendor has shadow AI product interest but methodology is sound. |
| Acuvity (Oct 2025) | Security vendor primary research | High | n=275 executives. Smaller sample but senior respondent quality. Vendor interest noted. |
| PwC AI Jobs Barometer | Consulting firm labor analysis | High | Macro labor data, not survey-based. Methodology transparent. |
| NIST AI RMF | U.S. government framework | Very High | Authoritative, non-commercial, peer-reviewed. |
| ISO 42001 | International standards body | Very High | Certifiable standard. Multi-stakeholder development. |
| Template providers (AIHR, PurpleSec, etc.) | Practitioner tools | Medium | Useful for operational detail. Not primary research. May promote their own services. |
| State law summaries (Manatt, King & Spalding, etc.) | Law firm analysis | High | Am Law firms with regulatory practices. Reliable legal analysis. |

Research Metadata

  • Research date: March 25, 2026
  • Sources searched: 45+ sources across web search, analyst reports, government publications, law firm analyses, template providers
  • Date range of primary sources: March 2025 — March 2026
  • Primary data preference: Surveys with disclosed sample sizes and methodologies prioritized over compilations and vendor marketing
  • Known gaps: Mid-market-specific policy adoption data (most surveys weight toward enterprise 1,000+ employees). No primary survey exclusively targeting 200-2,000 employee companies was found. Gartner’s threshold of 250+ employees is the closest proxy.