Your Vendors Are Adopting AI on Your Behalf: The Third-Party Risk You Are Not Managing

Brandon Sneider | March 2026


Executive Summary

  • The AI you did not buy is already processing your data. Microsoft enabled Anthropic models by default in Microsoft 365 Copilot for most commercial tenants as of January 7, 2026 — adding a new sub-processor to every organization’s data flow without requiring explicit consent. Google embedded Gemini AI features into Workspace subscriptions by default between January and March 2026. Zoom’s AI Companion analyzes meeting content with no individual attendee opt-out. These are not optional add-ons. They are changes to the tools your employees already use, made by vendors acting unilaterally.
  • 89% of enterprise AI usage is invisible to the organizations it affects. Accorian’s 2026 analysis finds that most AI interactions happen without central oversight, and more than half of all AI failures originate from third-party tools. 64% of organizations lack full visibility into their AI risk exposure. The traditional Third-Party Risk Management (TPRM) framework was built before generative AI became routine — it does not cover the AI your vendors embed in products you already purchased.
  • The regulatory and financial exposure is real and growing. Australia’s ACCC sued Microsoft in October 2025 for misleading 2.7 million customers by bundling Copilot into Microsoft 365 without adequately disclosing a Copilot-free alternative — resulting in price increases of 29-45%. The EU AI Act reaches full high-risk enforcement on August 2, 2026, with penalties up to €35 million or 7% of global turnover. 88% of AI vendors cap liability at one month’s subscription fee, leaving the customer holding the risk.
  • A structured vendor AI audit — mapping where AI touches your data across existing tools, renegotiating data processing terms, and establishing ongoing monitoring — costs $10K-$25K and prevents exposure that no cyber insurance policy currently covers.

The Invisible AI in Your Existing Stack

Every mid-market company runs its business on 3-5 core platforms: Microsoft 365 or Google Workspace for productivity, Salesforce or HubSpot for CRM, NetSuite or QuickBooks for finance, ServiceNow or Freshservice for IT. In the past 18 months, every major platform vendor has embedded AI features into these products — often enabled by default, frequently without updating data processing agreements, and almost always without requiring customer re-consent.

This is categorically different from shadow AI, where employees adopt unauthorized tools. This is the vendor itself changing what the approved tool does with your data.

The Microsoft Case Study: A Sub-Processor You Did Not Choose

The most documented example is Microsoft’s Anthropic integration. On December 8, 2025, a toggle appeared in the Microsoft 365 admin center enabling Anthropic’s Claude models as a sub-processor for Copilot experiences — on by default for most commercial tenants outside the EU/EFTA and UK. By January 7, 2026, Anthropic was processing data for every organization that did not affirmatively opt out (2toLead, December 2025).

The implications are significant:

  • New sub-processor without meaningful notification. Under GDPR Article 28, processors must inform controllers of intended sub-processor changes and give them a genuine opportunity to object; standard DPAs typically grant at least 30 days. Microsoft transitioned Anthropic from opt-in (under Anthropic’s own terms) to default-on (under Microsoft’s DPA) in under a month. Organizations with GDPR obligations that did not catch this admin center change may be non-compliant.
  • Data residency gaps. Anthropic models remain excluded from Microsoft’s EU Data Boundary. Organizations that chose Microsoft partly for EU data residency commitments now have a tool that routes prompts and organizational data outside that boundary — unless an admin disabled it.
  • Feature dependencies. Disabling Anthropic also disables Agent Mode in Word, Excel, and PowerPoint, the Researcher agent, and Copilot Studio agents. Microsoft has tied the sub-processor to functionality, creating pressure to accept the new data flow.

Google’s Default-On Approach

Google deployed Gemini AI features to existing Workspace subscribers between January and March 2026, enabled by default across Business Standard, Business Plus, and Enterprise tiers. Enterprise-tier subscribers received admin controls to disable features. Business-tier subscribers had to request access to admin controls — meaning the AI was activated before the governance mechanism was available (Google Workspace Updates, April 2025; Google Knowledge Center, 2026).

Google commits that enterprise prompts and data will not be used to train its core models. But the processing itself — routing organizational content through Gemini models — changes the data processing reality under existing contracts that predated AI features.

Zoom: AI Without Attendee Consent

Zoom’s AI Companion presents a distinct problem. Meeting hosts can enable AI features that process every participant’s speech, but individual attendees cannot opt out. The notification displayed during meetings is a declaration, not a request for permission. A 2025 TechCrunch investigation found that some AI features retained data longer than disclosed (TechCrunch, 2025). Privacy lawyer Sarah Chen warned in a Bloomberg analysis that “many organizations using Zoom AI Companion may be unknowingly violating European privacy law on a daily basis.”

For a mid-market company with 200-500 employees conducting dozens of meetings daily, every client call, partner discussion, and internal strategy session may be processed by AI systems the company never explicitly authorized.

The Pricing Pressure Play

Microsoft announced in December 2025 that commercial Microsoft 365 pricing will increase $3/user/month effective July 1, 2026 — attributing the increase to “AI capabilities such as Copilot Chat” now bundled into core suites. Enterprise E3 moves from $36 to $39/user; E5 from $57 to $60 (Microsoft 365 Blog, December 2025).

Australia’s ACCC brought the bundling strategy into sharp relief. The competition authority sued Microsoft in October 2025, alleging the company misled 2.7 million Australian subscribers by forcing Copilot adoption through price increases of 29-45% without adequately disclosing that a “Classic” plan without Copilot was available at the original price (ACCC Media Release, October 2025).

The pattern: embed AI features, raise prices to cover them, remove the non-AI option, and frame the price increase as an upgrade rather than a tax. For mid-market CFOs budgeting $500K-$2M annually in SaaS spend, these forced increases compound across every vendor doing the same thing.

Why Your TPRM Program Does Not Cover This

Traditional third-party risk management was built for a world where vendors provided defined services under stable terms. AI breaks three foundational assumptions:

Assumption 1: You know what the vendor does with your data. When Salesforce embeds Data Cloud by default in most editions (as of 2025), the data processing profile of a CRM you evaluated three years ago has fundamentally changed. When ServiceNow integrates Now Assist with generative AI capabilities across ITSM, CSM, and HRSD, the ticketing system your IT team approved now routes employee and customer data through large language models.

Assumption 2: Vendor changes require re-assessment. Most TPRM programs review vendors annually or at contract renewal. Platform vendors are shipping AI features quarterly. The assessment gap — 12 months between reviews versus 3 months between AI feature releases — means your risk profile is perpetually stale.

Assumption 3: Your contracts control the relationship. Practitioner analysis in CIO finds that 88% of AI technology providers cap their liability at one month’s subscription fee (CIO, October 2025). If a vendor’s AI feature causes a data breach, misprocesses client information, or produces discriminatory outputs, the vendor’s maximum exposure is often $3,000-$5,000 (one month’s fees on a typical mid-market contract), while the customer faces regulatory penalties, client claims, and reputational damage with no cap.

Gartner projects that through 2026, at least 80% of unauthorized AI transactions will result from internal policy violations related to AI features embedded in approved tools — not malicious external attacks (Gartner, 2025). The threat is inside the approved vendor stack.

The Regulatory Exposure

Three regulatory regimes create immediate exposure for companies whose vendors silently process data through AI:

GDPR sub-processor obligations. Article 28 requires controller authorization for sub-processors and advance notice of intended changes, with an opportunity to object. When Microsoft adds Anthropic, or Salesforce routes data through new AI models, the vendor is adding sub-processors. If the vendor’s notification mechanism is a buried admin center toggle rather than a formal communication — and the customer’s Data Processing Agreement does not authorize the specific change — the customer may be non-compliant as a controller.

The EDPB’s April 2025 report clarified that large language models rarely achieve anonymization standards. Controllers deploying third-party LLMs must conduct comprehensive legitimate interests assessments. A vendor that routes your data through an LLM triggers this obligation for you, whether you asked for the feature or not.

CCPA/CPRA automated decision-making. California’s new ADMT regulations, effective January 1, 2026, grant consumers the right to opt out of automated decision-making technology for significant decisions. If a vendor’s embedded AI makes automated recommendations that affect customers — pricing, eligibility, risk scoring — and the customer company has not implemented opt-out mechanisms, the company faces enforcement risk even though the AI was the vendor’s feature.

EU AI Act high-risk enforcement. On August 2, 2026, the full weight of high-risk AI system requirements under Annex III takes effect. Penalties reach €35 million or 7% of global annual turnover. A mid-market company that uses a vendor’s AI-powered HR tool for hiring decisions may be classified as a “deployer” of a high-risk AI system — with documentation, transparency, and human oversight obligations — even if the AI was embedded by the vendor without the company’s awareness.

The Five Vendors to Audit First

For a 200-500 person company running a standard mid-market technology stack, five vendor categories represent the highest immediate exposure:

| Vendor Category | AI Changes | Risk Level | Audit Priority |
| --- | --- | --- | --- |
| Productivity suite (Microsoft 365, Google Workspace) | Copilot/Gemini processing document content, email, calendar data; new sub-processors added | Critical — touches all organizational data | Week 1-2 |
| CRM (Salesforce, HubSpot) | Einstein/AI features embedded in customer interaction data; Data Cloud default provisioning | High — client and prospect data exposure | Week 2-3 |
| Communication (Zoom, Teams, Slack) | AI Companion/Copilot processing meeting transcripts, chat content, voice data | High — confidential client communications | Week 2-3 |
| Finance (NetSuite, QuickBooks, Sage) | AI-powered categorization, anomaly detection, forecasting on financial data | Medium — regulated financial data | Week 3-4 |
| HR/Payroll (ADP, Workday, BambooHR) | AI features in compensation, performance, hiring — potential high-risk under EU AI Act | High — employment decisions with regulatory exposure | Week 3-4 |

The Vendor AI Audit: A 30-Day Protocol

Week 1: Discovery and Inventory

Map where AI is active. For each platform in your stack, answer three questions:

  1. Has the vendor added AI features since your last vendor assessment?
  2. Are those features enabled by default, and can they be disabled?
  3. Has the vendor’s Data Processing Agreement been updated to reflect AI processing?

The practical approach: review vendor release notes from the past 18 months, check admin control panels for AI toggles you did not configure, and compare your current DPA against the vendor’s posted version for AI-related amendments.
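
One way to keep this inventory honest is to capture the three questions as structured data rather than prose, so gaps surface mechanically. A minimal sketch in Python — the record schema, field names, and vendor entries below are illustrative assumptions, not a standard:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical inventory schema: one record per platform in the stack.
# Field names mirror the three discovery questions above.
@dataclass
class VendorAIRecord:
    platform: str
    ai_features_since_last_review: bool   # Question 1
    enabled_by_default: bool              # Question 2a
    admin_can_disable: bool               # Question 2b
    dpa_reflects_ai_processing: bool      # Question 3
    last_reviewed: date

def flag_gaps(records: list[VendorAIRecord]) -> list[str]:
    """Return human-readable findings for records that need action."""
    findings = []
    for r in records:
        if r.ai_features_since_last_review and not r.dpa_reflects_ai_processing:
            findings.append(f"{r.platform}: AI features shipped but DPA not updated")
        if r.enabled_by_default and not r.admin_can_disable:
            findings.append(f"{r.platform}: default-on AI with no admin off-switch")
    return findings

# Illustrative entries only; populate from your own release-note review.
inventory = [
    VendorAIRecord("Microsoft 365", True, True, True, False, date(2025, 3, 1)),
    VendorAIRecord("Zoom AI Companion", True, True, False, False, date(2025, 6, 15)),
]
print("\n".join(flag_gaps(inventory)))
```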

Identify sub-processor changes. Under most enterprise DPAs, vendors maintain a sub-processor list. Check whether AI model providers (OpenAI, Anthropic, Google, AWS Bedrock) have been added since your last review. Microsoft’s addition of Anthropic as a sub-processor is the clearest example — and most organizations discovered it from blog posts, not from Microsoft’s formal notification process.
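
Most vendors publish their sub-processor list at a stable URL, and many DPAs commit to updating it there. One way to catch additions between annual reviews is to snapshot the published list and diff it on a schedule. A rough sketch, assuming a plain-text rendering of the page — the URL is a placeholder, and real pages will need per-vendor parsing or the vendor's notification feed where one is offered:

```python
import difflib
import pathlib
import urllib.request

# Placeholder URL -- substitute each vendor's published sub-processor page.
SUBPROCESSOR_URL = "https://example.com/vendor/subprocessors"
SNAPSHOT = pathlib.Path("snapshots/vendor_subprocessors.txt")

def fetch_page(url: str) -> str:
    with urllib.request.urlopen(url, timeout=30) as resp:
        return resp.read().decode("utf-8", errors="replace")

def diff_against_snapshot(current: str) -> list[str]:
    """Return unified-diff lines versus the stored snapshot (empty if unchanged)."""
    previous = SNAPSHOT.read_text() if SNAPSHOT.exists() else ""
    return list(difflib.unified_diff(
        previous.splitlines(), current.splitlines(),
        fromfile="last_review", tofile="current", lineterm=""))

current = fetch_page(SUBPROCESSOR_URL)
changes = diff_against_snapshot(current)
if changes:
    print("\n".join(changes))      # route to the vendor-management owner
SNAPSHOT.parent.mkdir(exist_ok=True)
SNAPSHOT.write_text(current)       # update the snapshot after review
```

A raw page diff is noisy but cheap; the point is to turn "discovered it from blog posts" into a routine check that runs more often than the annual assessment.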

Week 2-3: Risk Assessment and Classification

Classify each AI feature by data sensitivity. A short triage sketch follows this list.

  • Critical: AI processing client confidential information, personal data, or regulated data (financial, health, employment records)
  • High: AI processing internal strategic data (board materials, M&A analysis, competitive intelligence)
  • Medium: AI processing operational data (IT tickets, meeting notes, project documentation)
  • Low: AI providing general assistance without access to organizational data
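
The classification is most useful when it drives the audit queue directly. A minimal sketch with the four tiers above encoded as an ordering — the feature names are hypothetical examples, not a recommended catalog:

```python
from enum import IntEnum

class Risk(IntEnum):
    LOW = 0
    MEDIUM = 1
    HIGH = 2
    CRITICAL = 3

# Map each discovered AI feature to the most sensitive data it can reach.
# Entries are illustrative; source them from the Week 1 inventory.
FEATURES = {
    "Copilot in Word/Excel":      Risk.CRITICAL,  # client and personal data
    "Gemini email summarization": Risk.CRITICAL,  # personal data in email
    "Meeting AI summaries":       Risk.HIGH,      # strategy discussions
    "Now Assist ticket drafting": Risk.MEDIUM,    # operational IT data
}

# Audit queue: highest-sensitivity features first.
for feature, tier in sorted(FEATURES.items(), key=lambda kv: -kv[1]):
    print(f"{tier.name:<9} {feature}")
```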

Check contract coverage. For each critical and high-risk AI feature, verify the following; a simple authorization gate combining these checks appears after the list:

  • Does the current DPA authorize AI processing of this data category?
  • Does the NDA with affected clients permit data processing by the vendor’s AI sub-processors?
  • Do industry-specific regulations (HIPAA, SOX, PCI) permit the specific AI processing the vendor has enabled?
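
These three checks reduce to a gate: a feature stays enabled only if all of them pass. A toy sketch of that decision, assuming you have recorded the answers per feature — the field names and entries are illustrative:

```python
from dataclasses import dataclass

@dataclass
class ContractCoverage:
    feature: str
    dpa_authorizes: bool        # DPA covers AI processing of this data category
    client_ndas_permit: bool    # client NDAs allow the vendor's AI sub-processors
    regulations_permit: bool    # HIPAA/SOX/PCI etc. permit this processing

    def authorized(self) -> bool:
        """All three must hold; any gap means disable pending remediation."""
        return (self.dpa_authorizes
                and self.client_ndas_permit
                and self.regulations_permit)

# Illustrative assessments only.
checks = [
    ContractCoverage("Copilot in Word/Excel", True, False, True),
    ContractCoverage("Gemini email summarization", True, True, True),
]
for c in checks:
    action = "keep enabled" if c.authorized() else "DISABLE pending remediation"
    print(f"{c.feature}: {action}")
```

The output maps directly onto the Week 3-4 remediation step: anything that fails the gate is a candidate for the admin off-switch until the DPA or client agreements catch up.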

Week 3-4: Remediation and Controls

Disable what you cannot govern. For AI features processing data that your contracts or regulations do not authorize — disable them. Microsoft, Google, and most enterprise vendors provide admin controls. Use them. The feature can be re-enabled once governance is in place.

Renegotiate DPAs. Four clauses that CIO recommends every organization add to vendor agreements (CIO, October 2025):

  1. AI disclosure requirement: Vendor must formally disclose where and how AI operates in service delivery, including sub-processors, data routing, and model training practices
  2. Data usage prohibition: Vendor shall not use customer data to train, improve, or modify AI models without prior written consent
  3. Human oversight specification: Define which AI-driven outputs require human review before action
  4. AI-specific liability: Create remedies scaled to AI failure impact, not capped at one month’s subscription

Establish monitoring cadence. Vendor AI features change quarterly. Annual reviews are insufficient. Set a quarterly review of vendor release notes, DPA amendments, and sub-processor lists. Assign ownership — this belongs to whoever owns vendor management, not to the AI governance committee.
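
A quarterly cadence is easy to state and easy to miss. One way to make it operational is a small overdue-check that runs from the vendor inventory — the interval, vendor names, and dates below are illustrative:

```python
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=90)  # quarterly, matching vendor release cycles

# Last completed review per vendor -- illustrative data; source from the inventory.
last_reviewed = {
    "Microsoft 365": date(2026, 1, 10),
    "Salesforce":    date(2025, 9, 2),
    "Zoom":          date(2025, 11, 20),
}

today = date.today()
overdue = {vendor: today - reviewed - REVIEW_INTERVAL
           for vendor, reviewed in last_reviewed.items()
           if today - reviewed > REVIEW_INTERVAL}

# Most overdue first; route to the vendor-management owner, not a committee.
for vendor, late_by in sorted(overdue.items(), key=lambda kv: -kv[1].days):
    print(f"{vendor}: release notes / DPA / sub-processor review "
          f"overdue by {late_by.days} days")
```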

Key Data Points

| Metric | Value | Source |
| --- | --- | --- |
| Enterprise AI usage invisible to organizations | 89% | Accorian, January 2026 |
| Organizations lacking full AI risk visibility | 64% | Accorian, January 2026 |
| AI failures originating from third-party tools | >50% | Accorian, January 2026 |
| AI vendors capping liability at one month’s subscription fee | 88% | CIO / Deshpande et al., October 2025 |
| Microsoft 365 E3 price increase (Copilot bundling) | $36 → $39/user/month (July 2026) | Microsoft, December 2025 |
| Australian customers allegedly misled by Copilot bundling | 2.7 million | ACCC, October 2025 |
| Microsoft 365 price increase for Australian consumers | 29-45% | ACCC, October 2025 |
| EU AI Act high-risk penalty ceiling | €35M or 7% of global turnover | EU AI Act, August 2026 enforcement |
| Sub-processor objection period (typical DPA term) | 30 days minimum | GDPR Article 28 |
| Enterprises that will deploy GenAI by 2026 | 80%+ | Gartner, 2025 |
| Unauthorized AI transactions from internal policy violations | 80%+ (through 2026) | Gartner, 2025 |
| Governance effectiveness with deployed AI governance platforms (vs. without) | 3.4x higher | Gartner (n=360), Q2 2025 |

What This Means for Your Organization

The risk here is not theoretical. It is operational and immediate. If your company runs Microsoft 365, Google Workspace, Salesforce, Zoom, or ServiceNow — and nearly every mid-market company runs at least three of these — your vendors have changed what these tools do with your data in the past 12 months. Most organizations have not updated their vendor assessments, DPAs, or client agreements to reflect these changes.

The 30-day audit protocol above costs $10K-$25K depending on stack complexity and number of vendor agreements. The alternative is discovering the exposure during a client’s due diligence questionnaire, a regulatory inquiry, or an incident investigation — at which point the remediation cost includes legal fees, client trust erosion, and potential regulatory penalties that dwarf the audit investment.

The companies that handle this well treat vendor AI changes the way they treat vendor security incidents: with a defined response protocol, clear ownership, and a monitoring cadence that matches the pace of change. The companies that do not handle it are accumulating compliance debt with every vendor release cycle.

If mapping your vendor AI exposure raised questions about where your organization stands — or about the contract renegotiations required — I would welcome that conversation: brandon@brandonsneider.com.

Sources

  1. 2toLead — “Anthropic Models On by Default in Copilot: Admin Action Plan and Risks,” December 8, 2025. Source: IT advisory firm; detailed technical analysis of admin controls and timeline. https://www.2tolead.com/insights/anthropic-models-on-default-copilot-admin-action-plan-and-risks

  2. Microsoft — “Advancing Microsoft 365: New capabilities and pricing update,” December 4, 2025. Source: vendor announcement; primary source for pricing changes. https://www.microsoft.com/en-us/microsoft-365/blog/2025/12/04/advancing-microsoft-365-new-capabilities-and-pricing-update/

  3. Microsoft Learn — “Anthropic as a subprocessor for Microsoft Online Services,” 2025-2026. Source: vendor documentation; primary source for sub-processor terms. https://learn.microsoft.com/en-us/copilot/microsoft-365/connect-to-ai-subprocessor

  4. Google Workspace — “Control Workspace Business and Enterprise users’ access to new Google Workspace with Gemini features before general availability,” April 2025. Source: vendor documentation; primary source for default-on deployment. https://workspaceupdates.googleblog.com/2025/04/control-access-to-gemini-alpha-features-workspace-business-enterprise.html

  5. Google Knowledge Center — “Generative AI in Google Workspace Privacy Hub,” 2026. Source: vendor privacy documentation. https://knowledge.workspace.google.com/admin/gemini/generative-ai-in-google-workspace-privacy-hub

  6. ACCC — “Microsoft in court for allegedly misleading millions of Australians over Microsoft 365 subscriptions,” October 2025. Source: government regulator; primary enforcement action. https://www.accc.gov.au/media-release/microsoft-in-court-for-allegedly-misleading-millions-of-australians-over-microsoft-365-subscriptions

  7. CIO — Deshpande, Stines, Vaughan. “Your vendor’s AI is your risk: 4 clauses that could save you from hidden liability,” October 30, 2025. Source: independent IT publication; expert analysis. Credibility: high (practitioner-authored, specific recommendations). https://www.cio.com/article/4081326/your-vendors-ai-is-your-risk-4-clauses-that-could-save-you-from-hidden-liability.html

  8. Accorian — “AI Risk in Third-Party Vendor Tools,” January 21, 2026 (modified March 2, 2026). Source: cybersecurity advisory firm; industry analysis. Credibility: moderate (no disclosed methodology for statistics, but consistent with Gartner findings). https://www.accorian.com/ai-risk-in-third-party-vendor-tools/

  9. GRC Report — Levine, Norman J. “The Invisible Third-Party: AI as a Vendor Risk You’re Probably Not Managing,” March 19, 2026. Source: governance, risk, and compliance publication. Credibility: moderate (framework-oriented, no quantitative data). https://www.grcreport.com/post/the-invisible-third-party-ai-as-a-vendor-risk-youre-probably-not-managing

  10. Gartner — “Global AI Regulations Fuel Billion-Dollar Market for AI Governance Platforms,” February 17, 2026. Survey of 360 organizations, Q2 2025. Source: independent analyst firm. Credibility: high (defined methodology, large sample). https://www.gartner.com/en/newsroom/press-releases/2026-02-17-gartner-global-ai-regulations-fuel-billion-dollar-market-for-ai-governance-platforms

  11. Debevoise & Plimpton — Chandrasekhar et al. “AI’s Biggest Enterprise Challenge in 2026: Contractual Use Limitations on Data,” November 17, 2025. Source: Am Law 50 firm; expert legal analysis. Credibility: high. https://www.debevoisedatablog.com/2025/11/17/ais-biggest-enterprise-problem-in-2026-contractual-use-limitations-on-data/

  12. PwC — “Responsible AI and third-party risk management: what you need to know,” June 19, 2025. Source: Big Four professional services firm. Credibility: high (practitioner guidance). https://www.pwc.com/us/en/tech-effect/ai-analytics/responsible-ai-tprm.html

  13. Parloa — “AI Privacy Rules: GDPR, EU AI Act, and U.S. Law,” 2026. Source: industry analysis. https://www.parloa.com/blog/AI-privacy-2026/

  14. Zoom — “AI Companion Security and Privacy,” 2025-2026. Source: vendor documentation. https://www.zoom.com/en/products/ai-assistant/resources/privacy-security/


Brandon Sneider | brandon@brandonsneider.com | March 2026