The 90-Day AI Governance Sprint: From Zero to Insurable, Auditable, and Enterprise-Client-Ready

Brandon Sneider | March 2026


Executive Summary

  • The components of a governance program are well-understood. The sequencing is not. Minimum viable governance, security controls, insurance documentation, due diligence readiness, and shadow AI discovery have each been defined independently. No single document sequences them into a project plan that a CIO or GC can hand to their team on Monday morning. This is that document.
  • The sprint costs $15,000-$50,000 in direct spend and $32,000-$54,000 in imputed staff time across 90 days. The all-in total of $47,000-$104,000 tops out at roughly 16% of the average shadow AI breach premium ($670,000, IBM 2025, n=600 organizations) and produces artifacts that satisfy cyber insurers, enterprise buyers, and state regulators simultaneously.
  • Sequencing determines success. Organizations that begin with policy drafting before completing a shadow AI audit write policies for tools they do not know about. Organizations that deploy DLP before classifying data block the wrong things. The sprint sequences discovery before policy, policy before controls, and controls before training — the order that prevents rework.
  • Over 50% of small-to-mid-size businesses applying for cyber insurance were denied in the past year due to inadequate security controls (Grab The Axe/industry aggregation, 2026). The 90-day sprint produces exactly the documentation that moves a renewal application from denial to approval: tool inventory, data classification, incident response protocol, training records, and framework alignment evidence.
  • The output is four deliverables that serve every external audience at once. The governance package answers enterprise procurement questionnaires (Shared Assessments SIG 2026, FS-ISAC), satisfies cyber insurance underwriters (who now require AI-specific documentation), demonstrates Colorado AI Act compliance posture, and gives the board a quarterly reporting structure it can oversee.

Why Sequencing Matters More Than Completeness

Most governance guides describe what to build. Few describe the order in which to build it, and the order is where mid-market companies fail.

The failure pattern is consistent. A GC drafts an AI acceptable use policy before anyone has cataloged what tools employees are using. IT deploys DLP rules before the organization has classified which data categories apply to AI tools. The CISO writes an incident response addendum for scenarios the steering committee has not reviewed. Each artifact is correct in isolation. Together, they form a governance program that describes the company the organization wishes it were, not the company it is.

OneTrust’s survey of 1,250 IT decision-makers (North America and Europe, 2025) finds 70% of technology leaders admit their governance efforts cannot keep pace with AI initiatives. The gap is not effort — it is architecture. Ninety percent of advanced AI adopters say AI exposed the limits of siloed or manual governance processes. The sprint addresses this by sequencing deliverables so each one uses the output of the previous one as its input.

NIST’s own implementation guidance estimates 3-6 months for foundational AI RMF adoption (IS Partners, 2025). The sprint targets the lower bound by concentrating effort on the five governance documents and ten security controls that produce 80% of the external-facing value — not the full 72 subcategories across 19 categories that NIST’s complete framework contains.

The Sprint Architecture: Four Phases, Twelve Weeks

The sprint divides into four phases. Each phase produces a deliverable that the next phase depends on. Skipping ahead creates governance artifacts that do not match organizational reality.

Phase 1: Discovery (Weeks 1-3)

Purpose: Understand what exists before writing a single policy.

Week 1: Shadow AI Audit Launch

Activity | Owner | Hours | Output
Deploy CASB scans for OAuth grants to AI platforms | IT/Security | 4-8 | List of AI tool connections
Review SSO/identity provider logs for AI platform authentication | IT | 2-4 | Unauthorized access patterns
Scan expense reports and corporate cards for AI subscriptions | Finance | 4-8 | Subscription inventory
Draft anonymous employee AI usage survey | HR + Governance Lead | 4-6 | Survey instrument

The employee survey is the highest-value discovery mechanism. Frame it as enablement, not enforcement: “We want to provide better AI tools. Help us understand what you’re using.” Elvex (2025) reports employees willingly disclose tools when the purpose is governance rather than punishment. Gartner (2025) finds 68% of employees use AI tools without IT approval — the survey surfaces the 68%.

Week 2: Inventory Build and Risk Triage

Activity | Owner | Hours | Output
Close employee survey, compile results | HR | 2-4 | Usage data by department
Consolidate all discovery sources into AI Use Case Registry | Governance Lead | 8-12 | Master inventory spreadsheet
Assign preliminary risk tiers to every tool (Low/Medium/High) | Governance Lead + Legal | 4-8 | Tiered registry
Review web gateway logs for AI platform traffic | IT | 2-4 | Traffic patterns

The registry is the foundation everything else builds on. For each tool: vendor name, business owner, data types processed, user count, enterprise vs. personal account, training data policy, and preliminary risk tier. A spreadsheet works. ModelOp (n=100, March 2026) finds 55% of organizations still manage AI governance in spreadsheets — the tool does not matter; the inventory does.
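A spreadsheet row and a code record carry the same fields. This sketch captures one registry entry per the field list above; the class and field names are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass

# Illustrative registry record. Field names follow the inventory
# described in the text; they are not a mandated schema.
@dataclass
class RegistryEntry:
    vendor: str                # vendor name
    business_owner: str        # accountable business owner
    data_types: list[str]      # data categories processed
    user_count: int            # number of users
    enterprise_account: bool   # enterprise vs. personal account
    trains_on_data: bool       # vendor's training-data policy
    risk_tier: str             # preliminary tier: "Low" | "Medium" | "High"

# Hypothetical entry, the kind a shadow AI audit surfaces:
entry = RegistryEntry(
    vendor="ExampleAI",        # hypothetical tool name
    business_owner="Marketing",
    data_types=["Internal"],
    user_count=40,
    enterprise_account=False,  # personal accounts raise the tier
    trains_on_data=True,
    risk_tier="High",
)
```

Whether the registry lives in a spreadsheet or a script, what matters is that every tool carries these same fields so risk tiers can be assigned consistently.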

Week 3: First Steering Committee Meeting and Data Classification

Activity | Owner | Hours | Output
Convene AI Steering Committee (first meeting) | Governance Lead | 2 | Meeting minutes, charter
Present audit findings and registry to committee | Governance Lead | 1 | Reviewed inventory
Draft four-category data classification for AI use | Legal + IT | 4-8 | Data classification policy
Identify Tier 3 (prohibited) tools for immediate blocking | IT + Legal | 2-4 | Block list

The steering committee composition: Governance Lead (chair), IT Security, Legal/Compliance, HR, Finance, and one rotating business unit leader. Five to seven people. Two hours per month ongoing. The first meeting reviews audit findings, approves the committee charter, and assigns policy drafting responsibilities.

The data classification scheme has four categories: Public (any approved AI tool), Internal (Tier 1 enterprise tools only), Confidential (no external AI tool), Restricted (no AI tool including on-premise). This classification becomes the input to the DLP rules deployed in Phase 3.

Phase 1 Cost: $3,000-$8,000 (staff time for discovery + any CASB licensing gap). If the company lacks a CASB, standalone shadow IT discovery tools run $2-4/user/month.

Phase 1 Deliverables:

  1. AI Use Case Registry (complete inventory with risk tiers)
  2. Data classification for AI use (four categories)
  3. Steering committee charter and first meeting minutes

Phase 2: Policy (Weeks 4-6)

Purpose: Convert discovery findings into the five governance documents that satisfy regulators, insurers, and enterprise clients.

Week 4: AI Acceptable Use Policy

Activity | Owner | Hours | Output
Draft AI Acceptable Use Policy (2-4 pages) | GC/Legal + Governance Lead | 8-12 | Draft policy
Legal review for state regulatory alignment | Legal counsel | 4-8 | Reviewed draft
Executive approval and publishing | CEO/COO | 1-2 | Published policy
Launch 30-day employee acknowledgment period | HR | 2-4 | Acknowledgment tracking

The policy defines three tool tiers (Sanctioned, Tolerated with Restrictions, Prohibited), maps them to the data classification from Phase 1, and specifies consequences for violations. Adapt an existing template — AIHR, Tenable, and Lattice publish free ones — to the company’s specific data classification and regulatory exposure. Sixty-three percent of companies still lack a generative AI usage policy (industry surveys, 2025). Publishing this document moves the company ahead of the majority overnight.

Ship this policy first. Employees need guardrails now while the rest of the program develops.

Week 5: Risk Assessment Framework and Vendor Checklist

Activity | Owner | Hours | Output
Draft AI Risk Assessment Framework (3-5 pages) | Governance Lead + Legal | 8-12 | Risk decision tree
Draft AI Vendor Evaluation Checklist (2-3 pages) | Governance Lead + IT | 6-8 | Vendor checklist
Apply vendor checklist retroactively to all Tier 1 tools | IT + Procurement | 8-16 | Completed vendor assessments
Second steering committee meeting: review policies | All | 2 | Meeting minutes

The risk framework is a decision tree, not a manual. Every proposed AI use case gets classified as Low (approve with training acknowledgment), Medium (require security review and quarterly monitoring), or High (formal impact assessment, human-in-the-loop mandate, monthly audits). Map categories to NIST AI RMF’s Govern/Map/Measure/Manage functions for framework alignment without implementing all 72 subcategories.
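The decision tree is compact enough to state in a few lines. This sketch is an illustrative encoding of the Low/Medium/High triage described above; the input criteria (data classification, client exposure, automated decision-making) are plausible assumptions, not the framework's mandated inputs:

```python
# Illustrative Low/Medium/High triage. Inputs and thresholds are
# assumptions for the sketch, not the framework's actual criteria.
def classify_use_case(data_class: str,
                      client_facing: bool,
                      automated_decision: bool) -> tuple[str, str]:
    """Return (risk tier, required controls) for a proposed AI use case."""
    # Confidential/Restricted data or automated decisions force High.
    if data_class in ("Confidential", "Restricted") or automated_decision:
        return ("High",
                "formal impact assessment, human-in-the-loop, monthly audits")
    # Internal data or client exposure warrants Medium.
    if data_class == "Internal" or client_facing:
        return ("Medium", "security review and quarterly monitoring")
    return ("Low", "approve with training acknowledgment")
```

The point of encoding it this way is repeatability: two different reviewers classifying the same use case should reach the same tier, which is what makes the quarterly monitoring and audit cadence enforceable.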

The vendor checklist covers four domains: data handling (storage, access, training data usage), security certifications (SOC 2, ISO 27001), contractual terms (liability, audit rights, data deletion), and regulatory alignment (NIST AI RMF, EU AI Act if applicable). Apply it retroactively to every tool already in the registry — this produces the vendor documentation that enterprise buyers and insurers require.

Week 6: Incident Response Addendum

Activity | Owner | Hours | Output
Draft AI Incident Response Plan addendum (2-3 pages) | IT/Security + Legal | 6-8 | IR addendum
Integrate with existing IR plan | IT/Security | 2-4 | Updated IR plan
Define three AI-specific incident scenarios | IT/Security + Legal | 2-4 | Scenario documentation

Three scenarios that most IR plans lack: (1) Data leakage through AI prompts — an employee submitted confidential data to an unauthorized tool. (2) AI output error in production — an AI-generated deliverable contained material errors that reached a client. (3) Compromised AI agent — an AI tool with API access behaves unexpectedly. For each: notification chain, containment procedure, disclosure obligation, and fallback to manual process.

Gartner (March 2026) predicts 50% of cybersecurity incident response efforts will involve AI applications by 2028, up from near-zero in 2024. Building the playbook before the incident is the difference between contained and catastrophic.

Phase 2 Cost: $5,000-$15,000 (external legal review if no in-house AI expertise; staff time for drafting). Companies with in-house counsel land at the lower end.

Phase 2 Deliverables:

  4. AI Acceptable Use Policy (published, acknowledgment tracking active)
  5. AI Risk Assessment Framework (decision tree with NIST alignment)
  6. AI Vendor Evaluation Checklist (applied retroactively to all Tier 1 tools)
  7. AI Incident Response Plan addendum (integrated with existing IR plan)


Phase 3: Controls (Weeks 7-9)

Purpose: Implement the technical controls that enforce the policies from Phase 2 and produce the evidence that insurers and auditors require.

Week 7: Network and Access Controls

Activity | Owner | Hours | Output
Block Tier 3 (prohibited) tools at network/web gateway level | IT | 2-4 | Updated block lists
Enforce SSO + MFA for all Tier 1 AI tools | IT | 4-8 | SSO configuration evidence
Provision enterprise accounts for most-used shadow AI tools | IT + Procurement | 4-8 | License agreements
Publish approved tool list to all employees | IT + Comms | 1-2 | Published list

Replace the risk with sanctioned alternatives. If the shadow AI audit found 40 employees using personal ChatGPT, provision ChatGPT Enterprise or Claude for Business through SSO. The goal is not to ban AI — it is to move usage into governed channels. Seventy-five percent of organizations discovered unsanctioned AI tools with active credentials during security reviews (Saviynt/Cybersecurity Insiders, n=235, 2026). SSO enforcement creates a centralized kill switch: one identity provider action terminates access to every AI tool simultaneously.

Week 8: Data Loss Prevention

Activity | Owner | Hours | Output
Deploy AI-aware DLP monitoring browser-based AI interactions | IT/Security | 8-16 | DLP deployment
Configure DLP rules mapped to data classification from Phase 1 | IT/Security | 4-8 | Policy rules
Enable activity logging for Tier 2 (tolerated) tools | IT | 2-4 | Logging configuration
Test DLP rules against real usage patterns from audit data | IT/Security | 4-8 | Test results

Traditional DLP monitors email and file transfers. AI data leakage happens through browser prompts — a channel most legacy DLP tools cannot see. AI-aware DLP solutions inspect prompts before they reach AI platforms, redact sensitive data in real time, and log interactions for audit. Cost: $3-8/user/month. For a 300-person company: $10,800-$28,800/year. This is the largest direct cost in the sprint and the most consequential control — 69% of organizations cite AI-powered data leaks as their top security concern, yet 47% have no AI-specific security controls (industry surveys, 2025).

Map DLP rules to the four-category data classification from Phase 1. Confidential and Restricted categories trigger block-and-alert. Internal category triggers log-and-monitor. Public category flows freely. Because the classification was completed before DLP deployment, the rules match actual data patterns rather than theoretical categories.
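The classification-to-action mapping is small enough to state exactly. This sketch encodes the rules described above; the action names are illustrative, and the fail-closed default for unrecognized categories is an added assumption rather than a stated requirement:

```python
# Mapping from the Phase 1 data classification to DLP behavior,
# as described in the text. Action names are illustrative.
DLP_ACTIONS = {
    "Public": "allow",                  # flows freely to approved tools
    "Internal": "log-and-monitor",      # permitted, but logged
    "Confidential": "block-and-alert",  # no external AI tool
    "Restricted": "block-and-alert",    # no AI tool at all
}

def dlp_action(data_class: str) -> str:
    # Assumption: unknown classifications fail closed (block by default),
    # so a mislabeled document is never the permissive case.
    return DLP_ACTIONS.get(data_class, "block-and-alert")
```

A fail-closed default is a deliberate design choice: a false positive costs a help-desk ticket, while a false negative is a data leak.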

Week 9: Human Review Workflow and Documentation

Activity | Owner | Hours | Output
Document human review workflow for client-facing AI output | Operations + Legal | 4-8 | Review workflow
Define reviewer sign-off requirements by output type | Operations | 2-4 | Sign-off matrix
Complete AI tool inventory documentation for vendor risk | IT + Governance Lead | 4-8 | Documented inventory
Third steering committee meeting: review controls deployment | All | 2 | Meeting minutes

The human review workflow is the control that professional liability insurers care most about. Every document, analysis, or deliverable containing AI-generated content and reaching an external audience requires a named reviewer who confirms factual accuracy, checks for hallucinated content, and verifies no confidential data leaked into the output. This is not a suggestion — it is a documented workflow step with sign-off records that insurers treat as a coverage precondition.

Phase 3 Cost: $7,000-$22,000 (DLP licensing dominates; SSO likely already in place; staff time for configuration and testing).

Phase 3 Deliverables:

  8. Network-level blocking of prohibited AI tools
  9. SSO + MFA enforcement for all approved AI tools
  10. AI-aware DLP deployed and configured to data classification
  11. Human review workflow documented with sign-off requirements
  12. Complete AI tool inventory with vendor risk documentation


Phase 4: Training, Testing, and Operationalization (Weeks 10-12)

Purpose: Convert paper policies into employee behavior, test the incident response plan, and establish the quarterly cadence that keeps the program alive.

Week 10: Role-Specific Training

Activity | Owner | Hours | Output
Develop training content (three role-specific modules) | Governance Lead + HR | 8-12 | Training materials
Deliver all-employee training (90 minutes) | Governance Lead/External | 1.5 per session | Completion records
Deliver manager training (2 hours) | Governance Lead | 2 per session | Completion records
Deliver IT/security training (4 hours) | Governance Lead + IT | 4 | Completion records

Three audiences, three sessions. All employees (90 minutes): approved tool list, data classification rules, what to do when something goes wrong. Managers (2 hours): risk classification, team AI use monitoring, escalation procedures, answering “what does this mean for my job?” Only 44% of U.S. employees have received any AI training (Cornerstone OnDemand, November 2025). Completion records are the evidence that insurers and regulators require: the training itself changes behavior; the records change premiums.

Week 11: Incident Response Tabletop and Insurance Preparation

Activity | Owner | Hours | Output
Run IR tabletop exercise with steering committee | IT/Security + Legal | 2-4 | Exercise report
Compile governance package for insurance renewal | Governance Lead + CFO | 4-8 | Insurance submission package
Review all policies for Colorado AI Act alignment | Legal | 4-8 | Compliance assessment
Prepare enterprise due diligence response template | Governance Lead | 4-8 | Pre-populated questionnaire

The tabletop exercise tests the three AI-specific scenarios from the IR addendum. Run it before the first real incident. Organizations with tested IR plans reduce breach costs by 55% compared to those without (industry data, 2025-2026).

The insurance package assembles five items for the broker: (1) complete AI tool inventory, (2) AI acceptable use policy with employee acknowledgment records, (3) training completion evidence, (4) human review workflow documentation, and (5) steering committee meeting minutes showing board-level oversight. This package serves every underwriter across every policy line — cyber, D&O, E&O, and professional liability. It converts the governance investment from a compliance cost into a risk transfer asset.

The Colorado AI Act (effective June 30, 2026, after SB25B-004 delay) requires deployers to adopt risk management policies, perform annual impact assessments, and issue consumer notices for high-risk AI systems. The sprint’s risk assessment framework and registry satisfy the “reasonable risk management program” requirement. The 60-day cure period before enforcement provides additional runway, but the sprint completes before the effective date for companies starting by April 2026.

Week 12: Quarterly Cadence and Steady State

Activity | Owner | Hours | Output
Establish quarterly governance review cadence | Governance Lead | 2-4 | Review calendar and agenda template
Schedule first quarterly shadow AI scan | IT | 1-2 | Scan calendar
Document governance program summary for board reporting | Governance Lead | 4-8 | Board-ready one-pager
Fourth steering committee meeting: program launch review | All | 2 | Meeting minutes

The sprint ends, but the program does not. Quarterly cadence: full registry refresh (new tools discovered, retired tools removed), policy updates (regulatory changes, new state laws), DLP rule tuning (false positive reduction, new AI platform detection), training refresh for new hires, and metrics review (incidents, tool adoption, policy compliance rates). The steering committee continues to meet monthly (2 hours). The governance lead continues at 15-20% of their role.

Phase 4 Cost: $0-$5,000 (internal delivery of training is free; external facilitator runs $1,500-$2,000; legal review of Colorado alignment may require outside counsel).

Phase 4 Deliverables:

  13. Role-specific training delivered with completion records
  14. Tabletop exercise completed with documented results
  15. Insurance renewal governance package assembled
  16. Enterprise due diligence response template pre-populated
  17. Quarterly governance cadence established with calendar and templates

Total Sprint Budget

Phase | Direct Cost | Staff Time (Imputed) | Calendar Time
Phase 1: Discovery | $3,000-$8,000 | $8,000-$12,000 | Weeks 1-3
Phase 2: Policy | $5,000-$15,000 | $10,000-$18,000 | Weeks 4-6
Phase 3: Controls | $7,000-$22,000 | $8,000-$14,000 | Weeks 7-9
Phase 4: Training & Operations | $0-$5,000 | $6,000-$10,000 | Weeks 10-12
Total | $15,000-$50,000 | $32,000-$54,000 | 12 weeks
All-in (direct + imputed) | $47,000-$104,000

The range reflects company size within the 200-500 employee band, existing security infrastructure (companies with CASBs and SSO pay less), and whether outside counsel is needed for policy review. The median mid-market company lands at approximately $65,000-$75,000 all-in.
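The all-in range is just the sum of the four phase ranges. A quick check, with figures in thousands of dollars taken from the budget table:

```python
# Phase budget ranges from the table above, in thousands of dollars:
# (low, high) for direct spend and imputed staff time.
phases = {
    "Discovery":             {"direct": (3, 8),  "staff": (8, 12)},
    "Policy":                {"direct": (5, 15), "staff": (10, 18)},
    "Controls":              {"direct": (7, 22), "staff": (8, 14)},
    "Training & Operations": {"direct": (0, 5),  "staff": (6, 10)},
}

direct = (sum(p["direct"][0] for p in phases.values()),
          sum(p["direct"][1] for p in phases.values()))   # (15, 50)
staff = (sum(p["staff"][0] for p in phases.values()),
         sum(p["staff"][1] for p in phases.values()))     # (32, 54)
all_in = (direct[0] + staff[0], direct[1] + staff[1])     # (47, 104)
```

The sums reproduce the table's totals: $15,000-$50,000 direct, $32,000-$54,000 imputed, $47,000-$104,000 all-in.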

For context: a single negligent insider incident costs $747,107 on average (Ponemon/DTEX 2026, n=354). A shadow AI breach costs $4.63 million (IBM 2025). Colorado AI Act violations reach $20,000 each. One lost enterprise deal from failing due diligence pays for the program many times over.

What the Sprint Produces: The Governance Package

At the end of 90 days, the company possesses a governance package that satisfies four external audiences simultaneously:

Audience | What They Ask For | Sprint Deliverable
Cyber insurer | AI tool inventory, DLP evidence, IR plan, training records, framework alignment | Registry, DLP logs, IR addendum + tabletop report, completion certificates, NIST mapping
Enterprise buyer | Governance program description, vendor assessments, data handling practices, risk framework | 20-question due diligence response template, vendor checklists, data classification, risk decision tree
State regulator (Colorado, Texas) | Risk management program, impact assessments, consumer notices | Risk framework, registry with tier assignments, incident scenarios, steering committee minutes
Board of directors | Oversight evidence, program status, risk posture | Quarterly one-pager, steering committee minutes, training metrics, incident log

The same $65,000-$75,000 investment answers every question. This is the CFO’s leverage: one governance investment, four risk reduction surfaces.

Key Data Points

Metric | Value | Source
SMBs denied cyber insurance for inadequate controls | >50% | Grab The Axe/industry, 2026
Shadow AI breach cost premium | +$670,000 | IBM Cost of Data Breach 2025 (n=600)
Technology leaders whose governance can’t keep pace with AI | 70% | OneTrust (n=1,250), 2025
Companies lacking generative AI usage policy | 63% | Industry surveys, 2025
Employees using AI without IT approval | 68% | Gartner, 2025
Organizations discovered unsanctioned AI tools with credentials | 75% | Saviynt/Cybersecurity Insiders (n=235), 2026
IR cost reduction with tested plans | 55% lower | Industry aggregation, 2025-2026
Colorado AI Act penalties | $20,000/violation | SB 24-205, effective June 30, 2026
NIST AI RMF foundational adoption timeline | 3-6 months | IS Partners/NIST, 2025
Average negligent insider incident cost | $747,107 | Ponemon/DTEX 2026 (n=354)
Governance program all-in cost (200-500 person company) | $47,000-$104,000 | Aggregated from sprint phases
10-control security minimum cost | $15,000-$45,000/year | Aggregated (SSO + DLP + staff time)
Employees who received AI training | 44% | Cornerstone OnDemand, November 2025
AI governance spending projected 2026 | $492 million | Gartner, February 2026
Governed companies: agentic AI adoption rate | 46% vs. 12% | CSA/Google Cloud, 2025

What This Means for Your Organization

The 90-day governance sprint is designed for a specific company: 200-500 employees, no dedicated AI team, selling to enterprise clients, operating across multiple states, facing a cyber insurance renewal in the next 12 months. If that is you, the question is not whether to build a governance program — the insurance market, the regulatory calendar, and the procurement teams of every enterprise client are answering that question for you. The question is whether to build it in a sequenced 90-day sprint or discover the gaps one at a time, at the worst possible moments.

The sequencing matters as much as the content. Discovery before policy. Policy before controls. Controls before training. Each phase uses the output of the previous phase as its input, which means the DLP rules match actual data flows, the training covers tools employees actually use, and the insurance package documents controls that actually exist. Organizations that skip ahead — drafting policies before auditing, training before classifying — produce governance programs that describe a fictional company. The sprint builds from reality upward.

The economics compress further when viewed across all four audiences. The same $65,000-$75,000 investment satisfies the cyber insurer at renewal, pre-populates the enterprise due diligence template, demonstrates Colorado AI Act compliance posture, and gives the board quarterly reporting structure. If the sprint raised questions about sequencing, staffing, or scope specific to your organization, I would welcome that conversation — brandon@brandonsneider.com.

Sources

  1. IBM Cost of a Data Breach Report 2025 (n=600 organizations, Ponemon Institute). Independent research, high credibility. Shadow AI $670K premium, 97% lacking AI access controls, 63% no AI governance. https://www.ibm.com/reports/data-breach

  2. Ponemon Institute / DTEX 2026 Cost of Insider Risks Global Report (n=354 organizations, February 2026). Independent research, high credibility. $747K per negligent insider incident, 67-day containment. https://ponemon.dtex.ai/

  3. OneTrust AI-Ready Governance Report (n=1,250 IT decision-makers, North America and Europe, 2025). Vendor-published, moderate-high credibility. 70% governance pace gap, 90% exposed limits, 24% budget increase. https://www.onetrust.com/resources/2025-ai-ready-governance-report/

  4. Gartner, “Global AI Regulations Fuel Billion-Dollar Market for AI Governance Platforms” (February 2026, n=360 Q2 2025 survey). Independent analyst, moderate credibility. $492M governance spending, 68% unsanctioned AI use. https://www.gartner.com/en/newsroom/press-releases/2026-02-17-gartner-global-ai-regulations-fuel-billion-dollar-market-for-ai-governance-platforms

  5. Gartner, March 2026. Independent analyst, high credibility. 50% of IR efforts will involve AI applications by 2028. https://www.gartner.com/en/newsroom/press-releases/2026-03-17-gartner-predicts-ai-applications-will-drive-50-percent-of-cybersecurity-incident-response-efforts-by-2028

  6. Saviynt / Cybersecurity Insiders CISO AI Risk Report 2026 (n=235 CISOs). Industry survey, moderate-high credibility. 75% discovered unsanctioned AI tools with credentials. https://saviynt.com/

  7. ModelOp, “2026 AI Governance Benchmark Report” (n=100, March 2026). Vendor-funded, low-moderate credibility. 55% manage governance in spreadsheets. https://www.globenewswire.com/news-release/2026/03/11/3253668/0/en/ModelOp-s-2026-AI-Governance-Benchmark-Report

  8. Cornerstone OnDemand (November 2025). Vendor survey. Only 44% of U.S. employees have received AI training. https://www.hrdive.com/news/ai-use-secrecy-amid-lack-of-training/806312/

  9. CSA & Google Cloud, “The State of AI Security and Governance” (2025). Moderate-high credibility. 46% agentic AI adoption with governance vs. 12% without. https://cloudsecurityalliance.org/blog/2025/12/18/ai-security-governance-your-maturity-multiplier

  10. Colorado AI Act (SB 24-205) and SB25B-004 amendments. Primary legislation, highest credibility. $20,000/violation, effective June 30, 2026 (delayed from February 1). https://leg.colorado.gov/bills/sb24-205

  11. IS Partners, “NIST AI RMF: Process, Timeline, and Cost” (2025). Implementation advisory, moderate credibility. 3-6 month foundational adoption timeline. https://www.ispartnersllc.com/hubs/nist-ai-rmf/process-timeline-cost/

  12. Grab The Axe, “Cyber Insurance Underwriting Requirements 2026” (2026). Security advisory, moderate credibility. >50% SMB denial rate, IR plan reduces costs 55%. https://grabtheaxe.com/cyber-insurance-underwriting-requirements-2026/

  13. Elvex, “How to Conduct a Shadow AI Audit” (2025). Vendor blog, moderate credibility. Employee disclosure methodology. https://www.elvex.com/blog/how-to-conduct-shadow-ai-audit-organization

  14. Tenable, “AI Acceptable Use Policy Enforcement Guide” (2025). Vendor-published, moderate credibility. Three-tier tool classification framework. https://www.tenable.com/blog/security-for-ai-a-practical-guide-to-enforcing-your-ai-acceptable-use-policy

  15. NIST AI Risk Management Framework (AI RMF 1.0) (January 2023, updated through 2026). Federal standard, highest credibility. Four-function model (Govern, Map, Measure, Manage), 72 subcategories. https://www.nist.gov/itl/ai-risk-management-framework

  16. Shared Assessments SIG Workbook, 2026 Edition. Industry standard, high credibility. Maps to ISO 42001 for AI vendor assessments.

  17. Ethisphere/SpeakUp/Davis Wright Tremaine (n=136 organizations, 32,000 data points, September 2025). Independent research, high credibility. 85% lack adequate AI safeguards for vendor governance; only 15% include AI clauses in vendor codes.


Brandon Sneider | brandon@brandonsneider.com | March 2026