Enterprise AI Due Diligence: What Buyers Ask, What Answers Pass

Executive Summary

  • Eighty-five percent of ethics and compliance teams lack adequate AI safeguards for third-party vendor governance — despite 84% owning third-party risk management directly (Ethisphere/SpeakUp/Davis Wright Tremaine, n=136 organizations, 32,000 data points, September 2025). Only 15% include AI clauses in vendor codes of conduct, and just 14% have audited even half their vendors. For a mid-market company selling to enterprise buyers, this gap is closing fast — the 15% who ask today will be the 70% who require evidence by 2027.
  • The questionnaire has standardized. The Shared Assessments SIG Workbook (2026 edition) now maps to ISO 42001 for AI management systems. FS-ISAC published a three-tier vendor evaluation framework for financial services. OneTrust published an AI-specific supplement for existing vendor assessments. The questions are converging around 15-20 core areas, and a mid-market company can prepare for all of them with a single governance package.
  • Insurance is becoming the shadow regulator. Verisk’s generative AI liability exclusion endorsements took effect January 2026. WR Berkley’s absolute AI exclusion eliminates D&O, E&O, and Fiduciary Liability coverage for any AI-related claim. Insurers are beginning to condition coverage on documented governance — audit trails, risk assessments, model inventories. Companies without governance face both the liability and the coverage gap simultaneously.
  • The minimum program that passes enterprise due diligence costs $30K-$50K and takes 90 days. It does not require ISO 42001 certification ($30K-$80K additional). It requires five documents, a quarterly review cadence, and the ability to populate a 20-question assessment with evidence — not promises.

The Buyer’s Perspective: Why Due Diligence Is Accelerating

Three forces are compressing the timeline for enterprise AI vendor scrutiny.

Force 1: Board pressure. Seventy percent of Fortune 500 executives report having AI risk committees, but only 14% say their organizations are “fully ready for AI deployment” (Sedgwick 2026 Report). Boards are demanding that procurement teams close this gap — not by slowing AI adoption, but by requiring that vendors demonstrate the governance their own organizations are still building.

More than 62% of board directors now dedicate agenda time to AI discussions, a significant increase from prior periods (NACD 2025 Public Company Board Practices & Oversight Survey, n=201). But 45% of boards still have not put AI on the agenda at all. The boards that are engaged are asking specific questions: Who are our AI vendors? What data are they processing? What happens if the model fails?

Force 2: Insurance market pressure. The insurance industry is becoming the most effective AI regulator for mid-market companies. Silent AI coverage — where AI risks were implicitly covered by existing policies — is ending. Carriers are filing explicit exclusion endorsements.

WR Berkley’s endorsement eliminates coverage for any claim “based upon, arising out of, or attributable to” AI use, deployment, or development. It specifically lists excluded applications: AI-generated content, failure to detect AI-created materials, inadequate AI governance, chatbot communications, and regulatory actions related to AI oversight. Verisk’s general liability AI exclusion endorsements took effect January 2026 and are available for any insurer to adopt.

When insurers do offer affirmative AI coverage, they require documented governance as a precondition: AI usage policies, tool inventories, risk assessments, and oversight records. Building these retroactively is expensive and often impossible. The companies that prepare governance documentation before their 2026-2027 renewal cycle will negotiate from strength. Those that don’t may find coverage unavailable at any price.

Force 3: Regulatory cascade. Seventy-two percent of S&P 500 companies now disclose AI as a “material risk” in their 10-K filings (Fortune, October 2025). That disclosure creates a procurement obligation — if AI is a material risk, then AI vendors are a material risk input. Procurement teams must demonstrate they evaluated that risk. The questionnaire is the evidence.

The 20 Questions Enterprise Buyers Actually Ask

The specific questions vary by industry, but they cluster into five domains that appear in every standardized framework — SIG, FS-ISAC, OneTrust’s AI supplement, and the growing number of custom enterprise questionnaires. A mid-market company that can answer these 20 questions with evidence — not just assertions — passes the vast majority of enterprise AI due diligence reviews.

Domain 1: AI Usage and Scope

  1. Do you use AI in any products, services, or internal processes that touch our data or deliverables? The baseline question. Many vendors still answer “no” when the answer is “yes, in six places you haven’t inventoried.” Enterprise buyers now expect a complete AI inventory — every tool, model, and API that processes their data (a minimal inventory sketch follows this list).

  2. What specific AI models or services do you use, and who provides them? Buyers want to know if your AI runs on OpenAI, Anthropic, Google, or a custom model. Each carries different data handling, retention, and subprocessor risk. A vendor using GPT-4 through Azure has a different risk profile than one using it through the OpenAI consumer API.

  3. For each AI system, what is the intended use, and is it fit for purpose? Enterprise buyers assess whether the AI was purpose-built for its task or repurposed from a general model. Repurposing introduces performance and reliability risks that purpose-built systems avoid (Trustible AI, 2025).
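
A structured inventory makes questions 1 through 3 answerable on demand rather than reconstructed under deadline. A minimal sketch in Python, using hypothetical field names and an invented example system; the fields, not the code, are the point:

```python
from dataclasses import dataclass, field

@dataclass
class AIInventoryEntry:
    """One row in the AI inventory: a tool, model, or API that touches data."""
    name: str                  # internal name for the system
    provider: str              # e.g., "OpenAI via Azure", "Anthropic", "self-hosted"
    model: str                 # specific model identifier
    intended_use: str          # the purpose the system was approved for
    data_access: list[str] = field(default_factory=list)  # data classes it can see
    processes_customer_data: bool = False
    provider_trains_on_inputs: bool = False  # the contractual and technical answer
    risk_tier: str = "unclassified"          # low / medium / high
    owner: str = ""                          # named person accountable

# Hypothetical example covering questions 1, 2, and 3 in one record.
summarizer = AIInventoryEntry(
    name="support-ticket summarizer",
    provider="OpenAI via Azure",
    model="gpt-4o",
    intended_use="summarize inbound support tickets for triage",
    data_access=["customer support text"],
    processes_customer_data=True,
    provider_trains_on_inputs=False,
    risk_tier="medium",
    owner="Head of Support Operations",
)
```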

Domain 2: Data Handling and Privacy

  4. Will our data be used to train, fine-tune, or improve your AI models? The most consequential question in the entire assessment. Enterprise buyers require a contractual “no” backed by architecture — not just a terms-of-service clause. They want to know whether training data isolation is enforced technically (separate instances, no shared training pipelines) or only contractually.

  5. Where is our data processed and stored? What jurisdiction applies? Data residency matters for GDPR, state privacy laws, and industry-specific regulations (HIPAA, GLBA). Buyers need to know if data crosses borders, passes through third-party APIs, or is cached in intermediate systems.

  6. What data retention and deletion policies apply to AI-processed data? Enterprise buyers want specific timelines: How long is prompt data retained? Can inference logs be deleted on request? What happens to embeddings, cached outputs, and intermediate processing artifacts on contract termination? (A sketch of enforceable retention windows follows this list.)

  7. Do you share data with third-party AI providers, subprocessors, or model hosts? The FS-ISAC framework specifically flags this as a critical question for financial services. Each subprocessor in the chain creates additional data exposure. Buyers expect a named list of all AI subprocessors with their own security posture described (FS-ISAC Generative AI Vendor Risk Assessment Guide, February 2024).
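
Question 6 gets more credible when retention windows are encoded where they can be enforced, not only stated in policy. A minimal sketch, assuming hypothetical artifact types and retention periods:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention windows per AI data artifact, in days.
RETENTION_DAYS = {
    "prompt_logs": 30,       # raw prompts and completions
    "inference_logs": 90,    # request metadata kept for audit and debugging
    "cached_outputs": 7,     # intermediate processing artifacts
    "embeddings": 0,         # not retained once processing completes
}

def is_expired(artifact_type: str, created_at: datetime) -> bool:
    """Return True once an artifact has outlived its retention window."""
    window = timedelta(days=RETENTION_DAYS[artifact_type])
    return datetime.now(timezone.utc) - created_at > window
```

A scheduled job that deletes everything for which is_expired returns True turns the written policy into an operational control.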

Domain 3: Model Governance and Explainability

  8. Can you provide documentation on how your AI models make decisions? Model cards — documentation covering training data, intended use, performance benchmarks, and known limitations — are becoming the baseline expectation. Enterprise buyers check whether these exist, not whether the model is perfectly explainable.

  9. How do you test AI models for bias, fairness, and accuracy before deployment? Regulated industries (financial services, healthcare, employment) face specific bias testing requirements. Colorado’s AI Act (effective June 2026) requires impact assessments for high-risk AI systems. Enterprise buyers want evidence of testing methodology — not a claim that “we test for bias.” Red-team results, evaluation artifacts, and change-control documentation are the evidence that satisfies.

  10. What are the known limitations of the AI system, and how are they documented? All models have failure modes. Buyers respect vendors who document limitations honestly. They distrust vendors who claim their AI “just works.” Known hallucination rates, accuracy benchmarks on representative data, and documented edge cases build trust.

  11. How do you detect and respond to model drift or performance degradation? Models degrade over time as the data they were trained on becomes stale or as usage patterns shift. Buyers want to know: Is performance monitored continuously? What triggers a model update? How are updates tested before deployment? (A minimal monitoring sketch follows this list.)
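
Question 11 can be demonstrated with something as small as a rolling comparison against the accuracy recorded at deployment. A minimal sketch, assuming a hypothetical feed of graded outputs from production:

```python
from collections import deque

class DriftMonitor:
    """Flags degradation when rolling accuracy drops below baseline tolerance."""

    def __init__(self, baseline_accuracy: float, tolerance: float = 0.05,
                 window_size: int = 500):
        self.baseline = baseline_accuracy        # accuracy measured at deployment
        self.tolerance = tolerance               # allowed drop before alerting
        self.window = deque(maxlen=window_size)  # most recent graded outputs

    def record(self, correct: bool) -> None:
        self.window.append(correct)

    def is_drifting(self) -> bool:
        if len(self.window) < self.window.maxlen:
            return False                         # not enough samples yet
        rolling = sum(self.window) / len(self.window)
        return rolling < self.baseline - self.tolerance
```

An alert from is_drifting should trigger the documented update path: retest against the evaluation suite, then redeploy under change control.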

Domain 4: Security and Access Controls

  12. What security measures protect your AI infrastructure from unauthorized access? Standard SOC 2 Type II and ISO 27001 controls remain the baseline. AI-specific additions include: inference endpoint access controls, prompt injection defenses, and training pipeline security. Ninety-seven percent of organizations that experienced AI-related breaches lacked proper AI access controls (IBM Cost of a Data Breach Report, 2025, n=600+).

  13. How do you prevent prompt injection, data poisoning, or adversarial attacks? Prompt injection was found in 73% of production AI deployments assessed during security audits (OWASP, 2025). Buyers in regulated industries specifically ask about this. The answer should reference specific defenses — input validation, output filtering, sandboxed execution — not generic “we follow best practices.” (A sketch of these defenses follows this list.)

  14. What certifications or independent audits cover your AI systems? SOC 2 Type II remains the most commonly required certification. ISO 27001 is the second. ISO 42001 (AI Management Systems) is emerging but not yet widely required — only an estimated 300-500 organizations worldwide held the certification by early 2026. Buyers accept a SOC 2 report that explicitly covers AI systems within its scope. They do not accept a SOC 2 report for your main application that excludes the AI components.
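
For question 13, specific defenses can be shown rather than asserted. A minimal input-validation and output-filtering sketch with a hypothetical deny-list; a production system would layer this with classifier-based detection, provider-side guardrails, and sandboxed execution:

```python
import re

# Hypothetical patterns for common injection phrasings; pattern matching alone
# is not sufficient, but it demonstrates the control concretely.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"reveal (your )?system prompt", re.IGNORECASE),
]

def validate_input(user_text: str) -> str:
    """Reject user input that matches known injection patterns."""
    for pattern in INJECTION_PATTERNS:
        if pattern.search(user_text):
            raise ValueError("input rejected: possible prompt injection")
    return user_text

def filter_output(model_text: str, secrets: list[str]) -> str:
    """Redact configured secrets if they leak into model output."""
    for secret in secrets:
        model_text = model_text.replace(secret, "[REDACTED]")
    return model_text
```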

Domain 5: Compliance, Governance, and Incident Response

  15. Which AI-specific regulations and frameworks do you comply with? The frameworks buyers reference most frequently: NIST AI Risk Management Framework, ISO/IEC 42001, EU AI Act (for companies with European exposure), and industry-specific frameworks (FS-ISAC for financial services, HITRUST for healthcare). Alignment is sufficient — full certification is rarely required for vendors outside highly regulated industries.

  16. Do you maintain records of AI system decisions for audit and regulatory purposes? Audit trail requirements are expanding. The EU AI Act mandates logging for high-risk systems. Colorado’s AI Act requires “records sufficient to demonstrate compliance.” Enterprise buyers want to know that decision logs exist, how long they are retained, and whether they can be produced for regulatory examination. (A minimal logging sketch follows this list.)

  17. What is your AI incident response plan? Distinct from a cybersecurity IR plan. AI incidents include: hallucination in customer-facing output, bias detected post-deployment, model producing results inconsistent with documented behavior, data exposure through prompts. Buyers want to see a documented plan with escalation paths, notification timelines, and rollback procedures.

  18. What human oversight exists for AI-driven decisions, especially high-stakes ones? Human-in-the-loop requirements are hardening from best practice to legal mandate. Ethisphere’s five-question RFP litmus test specifically includes “human-in-the-loop triggers for high-impact decisions” (Ethisphere, 2025). Colorado’s AI Act requires human oversight for high-risk systems. Enterprise buyers in regulated industries treat this as a threshold requirement — not a nice-to-have.

  19. What is your organization’s AI governance structure? The question behind the question: Does someone own this? Buyers want to know who is responsible for AI risk — by name and title, not “our AI committee.” Only 28% of organizations have formally defined oversight roles for AI governance (IAPP Governance Survey, 2024). Those that have are in a stronger position with enterprise buyers.

  20. What happens if your AI system fails, produces incorrect output, or must be rolled back? Operational resilience. Buyers want rollback plans, data purging procedures, version pinning capabilities, and SLA commitments for AI-specific performance. This question gained urgency after multiple high-profile AI failures in customer-facing deployments, including Klarna’s chatbot quality problems and a tribunal holding Air Canada liable for its chatbot’s incorrect refund promises.
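
Question 16’s decision records need not wait for heavyweight tooling. A minimal sketch of an append-only decision log, with hypothetical fields, that also supports the version pinning and rollback asked about in question 20:

```python
import json
from datetime import datetime, timezone

def log_ai_decision(log_path: str, *, system: str, model_version: str,
                    input_ref: str, output_ref: str,
                    human_reviewer: str | None) -> None:
    """Append one AI decision record; retain per the documented schedule."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,                  # which inventory entry produced this
        "model_version": model_version,    # enables version pinning and rollback
        "input_ref": input_ref,            # pointer to stored input, not raw data
        "output_ref": output_ref,
        "human_reviewer": human_reviewer,  # None means no human in the loop
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
```

Storing references rather than raw prompts keeps the log itself out of scope for most data retention questions.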

What Evidence Satisfies (And What Doesn’t)

Enterprise buyers have seen enough boilerplate to spot it instantly. The gap between what fails and what passes is the gap between assertions and artifacts.

What fails:

  • “We take data security seriously.” (Every vendor says this.)
  • “We follow industry best practices.” (Name them.)
  • “We are committed to responsible AI.” (Show the commitment.)
  • Pointing to a privacy policy page as evidence of AI governance
  • Claiming NIST AI RMF “alignment” without any mapping document

What passes:

  • An AI inventory listing every model, its provider, its use case, and its data access scope
  • A documented acceptable use policy with data classification tiers
  • A completed AI risk assessment for each use case, with risk ratings and mitigation measures
  • SOC 2 Type II report that explicitly includes AI systems in scope
  • Named AI governance owner with authority to approve, reject, or suspend AI deployments
  • AI incident response plan with defined escalation paths and notification timelines
  • Evidence of bias testing (methodology, results, remediation actions)
  • Model cards or equivalent documentation for each AI system
  • Subprocessor list with named third-party AI providers and their security posture
  • Data flow diagram showing where customer data enters, is processed by, and exits AI systems (a starting sketch follows this list)
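
The data flow diagram in the last item can start as a plain edge list before anyone draws boxes and arrows. A minimal sketch with invented system names:

```python
# Hypothetical data flow edges: (source, destination, data classification).
# Each edge marks a place where customer data enters, moves through, or
# exits an AI system; the list is the raw material for the diagram.
DATA_FLOWS = [
    ("web app", "ticket summarizer (gpt-4o via Azure)", "customer support text"),
    ("ticket summarizer (gpt-4o via Azure)", "CRM", "summaries only"),
    ("CRM", "analytics warehouse", "aggregated metrics"),
]

def systems_touching(classification: str) -> set[str]:
    """Every system that sends or receives data of a given classification."""
    return {node
            for src, dst, cls in DATA_FLOWS if cls == classification
            for node in (src, dst)}
```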

The Ethisphere framework projects that by FY27, enterprise buyers will routinely request four categories of evidence: coverage metrics (percentage of AI use cases inventoried and risk-reviewed), controls metrics (percentage with human-in-the-loop, incident rates), third-party assurance (vendor AI clause completion rates, audit results), and capability metrics (employee training completion rates).

The Minimum Program That Passes

A 200-500 person company selling to enterprise buyers does not need ISO 42001 certification ($30,000-$80,000 and 6-12 months). It does not need a dedicated Chief AI Officer. It needs a governance program that produces the evidence described above.

The five-document package:

  1. AI Acceptable Use Policy (2-4 pages): Data classification tiers, approved tools, prohibited uses, employee obligations. Draft time: 2-3 weeks.

  2. AI Risk Assessment Framework (3-5 pages): Decision tree for evaluating AI use cases by risk tier. Low/medium/high classification with corresponding review requirements (a classification sketch follows this list). Draft time: 3-4 weeks.

  3. AI Vendor Evaluation Checklist (2-3 pages): The 20 questions above, completed for each AI tool you use. You need to answer these about your own AI vendors before enterprise buyers ask you to answer them about yourself. Draft time: 2 weeks.

  4. AI Incident Response Plan (2-3 pages): Escalation paths, notification timelines, rollback procedures specific to AI failures. Draft time: 2 weeks.

  5. AI Inventory and Data Flow Map (living document): Every AI tool in use, its provider, its data access, its risk classification. This is the single most valuable artifact in enterprise due diligence. It proves you know where AI exists in your organization.
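
The decision tree in document 2 stays consistent across reviewers when it is encoded rather than described. A minimal sketch with hypothetical tiering criteria; the real criteria belong in the written framework:

```python
def classify_use_case(processes_customer_data: bool, customer_facing: bool,
                      automated_decision: bool) -> str:
    """Assign a hypothetical risk tier that drives the review requirement."""
    if automated_decision and processes_customer_data:
        return "high"    # full risk assessment plus human-in-the-loop review
    if processes_customer_data or customer_facing:
        return "medium"  # documented review before deployment
    return "low"         # record in the inventory; no pre-deployment review

# An internal code assistant with no customer data lands in the low tier;
# a system that drafts customer refund decisions lands in the high tier.
assert classify_use_case(False, False, False) == "low"
assert classify_use_case(True, True, True) == "high"
```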

Total cost: $30K-$50K in year one (part-time governance lead allocation, legal review, external gap assessment). More detailed cost modeling is available in the minimum viable governance framework.

Total timeline: 90 days from decision to operational program.

What this buys you: The ability to respond to any enterprise AI due diligence questionnaire within 48 hours with evidence, not promises. That response speed itself signals maturity. Companies that take three weeks to respond — or worse, have nothing to produce — signal risk.

The Insurance Dimension: Governance as Coverage Prerequisite

The insurance market is creating a parallel due diligence requirement that reinforces the enterprise buyer requirement.

Cyber insurers are moving from generalized underwriting to AI-specific scrutiny. The 2026 renewal cycle increasingly requires:

  • Documented AI usage policy and tool inventory
  • Evidence of human oversight for AI-assisted decisions
  • Restrictions on public LLM use with sensitive data
  • Stronger authentication protocols to counter deepfake fraud
  • Logging and auditing of AI outputs

Carriers that offer affirmative AI coverage condition it on documented governance maturity. Companies presenting AI governance artifacts during renewal negotiations — the same five documents that satisfy enterprise buyers — position themselves for narrower exclusions, lower premiums, and higher coverage limits.

The convergence is the key insight: the same governance program that passes enterprise due diligence also satisfies insurance underwriters, regulatory requirements, and board oversight obligations. One investment, four returns.

Key Data Points

| Metric | Value | Source |
| --- | --- | --- |
| E&C teams owning TPRM but lacking AI clauses | 84% own TPRM; only 15% include AI clauses | Ethisphere/SpeakUp (n=136, 32K data points, Sep 2025) |
| Vendors audited for AI governance | Only 14% have audited even half their vendors | Ethisphere/SpeakUp (Sep 2025) |
| Organizations with fully embedded AI governance | 7% (despite 93% using AI) | Trustmarque AI Governance Report (2025) |
| Fortune 500 execs with AI risk committees | 70% | Sedgwick Report (2026) |
| Fortune 500 execs “fully ready” for AI deployment | 14% | Sedgwick Report (2026) |
| Boards dedicating agenda time to AI | 62% | NACD Survey (n=201, 2025) |
| AI-breached orgs lacking access controls | 97% | IBM Cost of a Data Breach (n=600+, 2025) |
| Organizations with fully implemented AI governance | 25% | AuditBoard (2025) |
| S&P 500 disclosing AI as material risk | 72% | Fortune (Oct 2025) |
| Organizations expecting AI governance budget increases | 98% | OneTrust AI-Ready Governance Report (2025) |
| Organizations with AI usage policies | 75% | Pacific AI Governance Survey (2025) |
| Organizations with formal governance frameworks | 36% | Pacific AI Governance Survey (2025) |
| Minimum viable governance program cost | $30K-$50K in year one | Author estimate from multiple sources |
| ISO 42001 certification cost (mid-market) | $30K-$80K | Multiple certification consultancies (2025-2026) |
| ISO 42001 certification timeline | 6-12 months | Multiple certification consultancies (2025-2026) |

What This Means for Your Organization

If you sell to enterprise buyers — any Fortune 500 company, any regulated industry, any government contractor — the AI due diligence questionnaire is coming. For 15% of companies, it already arrived. The Ethisphere data suggests this will reach majority adoption by the FY27 procurement cycle.

The mid-market company that builds its five-document governance package now gains three advantages. First, competitive differentiation: when your competitor takes three weeks to respond to the questionnaire and you respond in 48 hours with artifacts, you win the deal on trust before you win it on price. Second, insurance positioning: the same documents satisfy underwriter requirements during your next renewal cycle. Third, regulatory preparedness: Colorado’s AI Act, Illinois AIPA, and Texas RAIGA all require governance documentation that overlaps substantially with what enterprise buyers request.

The companies that treat this as a compliance exercise will produce minimum-viable documents and move on. The companies that treat it as a trust-building investment will use the governance process to actually understand where AI exists in their organization, what data it touches, and where the real risks are. The second group will be faster to deploy new AI tools, not slower — because governance creates the approval infrastructure that replaces the ad-hoc decision-making that stalls most mid-market AI adoption.

Sources

  1. Ethisphere/SpeakUp/Davis Wright Tremaine, “AI in Ethics & Compliance: Risk to Manage, Tool to Leverage” (September 2025, n=136 organizations, 32,000 program data points, 1.2M employee survey responses across 58 countries) — Independent industry survey; high credibility. https://ethisphere.com/magazine/ai-governance-risk-ethics-compliance-report-2025/

  2. Ethisphere, “Third-Party AI Governance: The Data, Risks, and Board Reporting” (2025) — Independent industry analysis; high credibility. https://ethisphere.com/third-party-ai-governance-board-reporting/

  3. NACD, “2025 Public Company Board Practices & Oversight Survey” (n=201, 2025) — Independent board survey; high credibility. https://www.nacdonline.org/all-governance/governance-resources/governance-surveys/surveys-benchmarking/2025-public-company-board-practices--oversight-survey/2025-board-practices-oversight-ai/

  4. Sedgwick, 2026 Report on Fortune 500 AI Governance (2026) — Industry report; moderate-high credibility. https://fortune.com/2025/12/18/ai-governance-becomes-board-mandate-operational-reality-lags/

  5. IBM, “Cost of a Data Breach Report” (2025, n=600+ organizations) — Independent research; high credibility. Referenced via https://www.atlassystems.com/blog/ai-vendor-risk-questionnaire

  6. Fortune, “72% of S&P 500 companies disclosed AI as a material risk” (October 2025) — Verified SEC filing analysis; high credibility. https://fortune.com/2025/10/08/sp-500-companies-disclosed-ai-risk-10-k-forms-reputation-risk/

  7. Trustmarque, “AI Governance Report” (2025) — Independent research; moderate credibility. Referenced via https://www.knostic.ai/blog/ai-governance-statistics

  8. AuditBoard, “From Blueprint to Reality” Research Study (2025) — Vendor-funded but broad sample; moderate credibility. Referenced via https://www.knostic.ai/blog/ai-governance-statistics

  9. Pacific AI, “2025 AI Governance Survey” (2025) — Independent survey; moderate credibility. Referenced via https://www.knostic.ai/blog/ai-governance-statistics

  10. OneTrust, “2025 AI-Ready Governance Report” (2025) — Vendor-funded; moderate credibility (flag: OneTrust sells governance tools). https://www.onetrust.com/resources/questions-to-add-to-existing-vendor-assessments-for-ai-checklist/

  11. IAPP, “AI Governance Profession Report” (2025) — Independent professional association; high credibility. Referenced via https://www.knostic.ai/blog/ai-governance-statistics

  12. FS-ISAC, “Generative AI Vendor Risk Assessment Guide” (February 2024) — Industry consortium; high credibility for financial services. https://www.fsisac.com/hubfs/Knowledge/AI/FSISAC_GenerativeAI-VendorEvaluation&QualitativeRiskAssessmentGuide.pdf

  13. Shared Assessments, “2026 SIG Workbook Updates” (2025) — Industry standard; high credibility. https://sharedassessments.org/blog/2026-sig-workbook-updates/

  14. Trustible AI, “10 Questions for Vendor Due Diligence” (2025) — Vendor-published but substantive framework; moderate credibility. https://trustible.ai/post/navigating-ai-vendor-risk-10-questions-for-your-vendor-due-diligence-process/

  15. Atlas Systems, “AI Vendor Risk Assessment Questionnaire for Compliance” (2026) — Vendor-published; moderate credibility. https://www.atlassystems.com/blog/ai-vendor-risk-questionnaire

  16. Knostic, “The 20 Biggest AI Governance Statistics and Trends of 2025” (2025) — Aggregator with primary source citations; useful as index. https://www.knostic.ai/blog/ai-governance-statistics


Created by Brandon Sneider | brandon@brandonsneider.com | March 2026