The AI Disclosure Decision: When and How to Tell Customers, Partners, and Regulators

Brandon Sneider | March 2026


Executive Summary

  • 69% of consumers say companies should always disclose AI use, but only 22% say companies actually do (Gravity CX, 2026). The gap between expectation and practice is where reputational risk accumulates
  • The transparency paradox is real: thirteen experiments show that actors who disclose AI usage are trusted less than those who do not (Schilke & Reimann, Organizational Behavior and Human Decision Processes, 2025). Disclosure is necessary, but how you disclose determines whether it builds trust or erodes it
  • 82% of consumers see loss of control over their data as a serious threat, and 81% suspect companies are training AI on their data without telling them — whether or not that is actually occurring (Relyance AI, n=1,000+ U.S. consumers, December 2025)
  • The first company in an industry to demonstrate verified AI transparency captures 76% of the addressable market willing to switch providers (Relyance AI, 2025). Transparency is a competitive weapon, not just a compliance obligation
  • 39% of professional services firms admit they do not proactively disclose AI use to clients (Journal of Accountancy, 2025). For mid-market companies whose client relationships are their primary asset, this is a bet that disclosure will never be forced — a bet that is getting riskier every quarter

The Disclosure Paradox: Why Getting This Right Is Harder Than It Appears

The intuitive assumption is that transparency always builds trust. The research says otherwise — and understanding the paradox is the starting point for getting the decision right.

Schilke and Reimann’s thirteen-experiment study (Organizational Behavior and Human Decision Processes, 2025) demonstrates a consistent pattern: disclosing AI involvement in a decision, recommendation, or output reduces trust in the actor making the disclosure. The mechanism is attribution — when people learn AI was involved, they attribute the output to the machine rather than the person or company, and they trust machines less than humans for judgment-intensive tasks.

This does not mean “don’t disclose.” It means the framing of the disclosure matters as much as the disclosure itself. The companies that navigate this well follow a specific pattern:

They disclose the human role, not just the AI role. “Our team uses AI-assisted research tools, with every recommendation reviewed and validated by a senior analyst” builds trust. “We use AI to generate recommendations” erodes it. The difference is whether the disclosure emphasizes human judgment or machine output.

They disclose before they are asked. Reactive disclosure — after a client discovers AI involvement independently — is uniformly worse than proactive disclosure. The Relyance AI survey finds that 81% of consumers already assume companies are using their data for AI training. Being caught in an omission is worse than any trust cost from proactive transparency.

They disclose specifically, not generically. “We use AI” is too vague to build trust and too vague to satisfy regulatory requirements. “We use AI-assisted tools for document review, with human verification on every output and no client data used for model training” is specific enough to demonstrate governance.

The Decision Framework: When Disclosure Is Required, Recommended, or Optional

Required Disclosure

  • Consumer-facing AI that affects decisions about individuals
    Trigger: EU AI Act (high-risk systems); state-level AI laws (Colorado, Connecticut, others)
    Framework: Must disclose that AI is involved, what data it uses, and how humans oversee it
  • AI in regulated industries (financial services, healthcare, legal)
    Trigger: Industry-specific regulations; professional ethics rules
    Framework: Must disclose to clients and regulators; engagement letters should address AI use
  • AI that materially changes the nature of a service
    Trigger: FTC Act (deceptive practices); common law fraud
    Framework: If the client is paying for human judgment and receiving AI output, non-disclosure is a legal risk
  • AI processing personal data
    Trigger: State privacy laws (CCPA, CPRA, others); GDPR for EU-facing operations
    Framework: Must disclose data processing purposes, including AI training
Recommended Disclosure (Client-Facing, Not Yet Legally Mandated)

  • AI in client deliverables (reports, analysis, code, designs)
    Rationale: Clients increasingly ask; proactive disclosure builds trust before they ask
  • AI in customer support (chatbots, automated routing, response drafting)
    Rationale: 69% of consumers expect this disclosure; not disclosing is a growing reputational bet
  • AI in hiring and HR processes
    Rationale: Even where not legally required, employee and candidate trust depends on transparency
  • AI in partner-facing operations
    Rationale: Partners assessing your company’s risk profile will ask; having a disclosure ready signals maturity

Optional Disclosure (Internal AI That Does Not Touch Clients)

  • AI tools for internal productivity (Copilot, internal search, scheduling)
    Guidance: No obligation to disclose to customers, but an AI acceptable use policy should govern internally
  • AI in back-office operations (accounting automation, supply chain optimization)
    Guidance: Disclose to auditors and the board; no customer disclosure needed unless it affects service quality
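The three tiers above reduce to a simple triage rule: any legal trigger means disclosure is required, anything client-facing means it is recommended, and everything else is optional. As a minimal sketch (not legal advice), that rule can be expressed as a function; the attribute names here are illustrative, not drawn from any statute or the framework itself:

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    """Attributes of an AI deployment relevant to the disclosure decision."""
    affects_individual_decisions: bool  # credit, hiring, eligibility outcomes
    regulated_industry: bool            # financial services, healthcare, legal
    changes_service_nature: bool        # client pays for human judgment, receives AI output
    processes_personal_data: bool       # CCPA/CPRA/GDPR-style obligations apply
    client_facing_output: bool          # AI visible in deliverables or support

def disclosure_tier(u: AIUseCase) -> str:
    """Triage a use case into the Required / Recommended / Optional tiers."""
    # Any legal trigger dominates: disclosure is required.
    if (u.affects_individual_decisions or u.regulated_industry
            or u.changes_service_nature or u.processes_personal_data):
        return "Required"
    # No legal trigger, but clients see the output: disclose proactively.
    if u.client_facing_output:
        return "Recommended"
    # Internal productivity and back-office tools.
    return "Optional"
```

The ordering encodes the framework's logic: legal exposure is checked first because a "Recommended" label on a legally mandated disclosure understates the obligation.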

The IAB Framework: A Practical Standard

The Interactive Advertising Bureau released its AI Transparency and Disclosure Framework in January 2026, providing the clearest industry standard to date. The core principle: disclosure is required only when AI materially affects authenticity, identity, or representation in ways that could mislead consumers. Routine production tasks and background AI tools can proceed without disclosure. Use cases that risk misleading consumers require clear, consumer-facing labels.

This is a useful heuristic for mid-market companies across industries: if a reasonable customer would want to know AI was involved because it changes their assessment of the output, disclose. If AI is a background efficiency tool that does not change what the customer receives, disclosure is optional.

The Competitive Angle: Transparency as Market Position

The Relyance AI survey reveals that transparency is not just a compliance play — it is a market differentiator:

  • 76% of consumers willing to switch providers would move to the first company in their industry that demonstrates verified AI data practices
  • More than 75% of consumers will pay more for services from companies with verified AI transparency
  • Half of consumers choose transparency over the lowest price when given the option

For a mid-market company in a competitive market, being the first to publish a clear, specific AI transparency statement is a brand move, not just a legal move. The statement does not need to be long. It needs to be specific:

“We use AI-assisted tools in [specific functions]. Every AI-generated output is reviewed by a qualified [role]. Your data is not used for AI model training. Our AI governance policy is reviewed quarterly by [committee/role]. Questions? [Contact].”
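One way to keep such statements specific and consistent across channels is to generate them from a few required fields, so no placeholder ships unfilled. A minimal sketch, with field names that are assumptions of this example rather than any standard:

```python
# Template mirroring the disclosure statement above; braces are required fields.
TEMPLATE = (
    "We use AI-assisted tools in {functions}. "
    "Every AI-generated output is reviewed by a qualified {role}. "
    "Your data is not used for AI model training. "
    "Our AI governance policy is reviewed quarterly by {owner}. "
    "Questions? {contact}."
)

def transparency_statement(functions: str, role: str, owner: str, contact: str) -> str:
    """Fill the template; str.format raises KeyError if a field is missing."""
    return TEMPLATE.format(functions=functions, role=role,
                           owner=owner, contact=contact)
```

The point of generating rather than hand-editing the statement is auditability: the fields become a record of exactly what was claimed, to whom, and when.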

Key Data Points

  • Consumers who say companies should always disclose AI use: 69% (Gravity CX, 2026)
  • Consumers who say companies actually disclose: 22% (Gravity CX, 2026)
  • Consumers who suspect undisclosed AI data training: 81% (Relyance AI, n=1,000+, December 2025)
  • Consumers who see data loss-of-control as a serious threat: 82% (Relyance AI, December 2025)
  • Market willing to switch for verified AI transparency: 76% (Relyance AI, 2025)
  • Consumers willing to pay more for AI transparency: 75%+ (Relyance AI, 2025)
  • Professional services firms not proactively disclosing AI: 39% (Journal of Accountancy, 2025)
  • Experiments showing disclosure reduces trust: 13, consistent effect (Schilke & Reimann, OBHDP, 2025)

What This Means for Your Organization

The disclosure decision is not binary. It is a spectrum that depends on what AI touches, who is affected, and what regulatory environment applies to your industry. The companies getting this right do three things: they disclose proactively to clients whose work involves AI (because reactive disclosure is always worse), they frame disclosure around human oversight rather than AI capability (because the research shows this preserves trust), and they treat transparency as a competitive position rather than a compliance burden (because the market data shows it is).

For a mid-market company, the practical next step is a one-page AI disclosure framework: what gets disclosed to whom, in what format, at what point in the client relationship. This is a 2-3 day exercise with legal, sales, and operations in the room — and it preempts the harder conversation that happens when a client asks and no one has an answer ready.

If you are navigating the disclosure decision and want to benchmark your approach against what peer companies in your industry are doing, that is a conversation worth having before the first client asks — brandon@brandonsneider.com


Sources

  • Gravity CX — “CX Trends 2026: AI Transparency in Customer Experience Explained” (2026). Credibility: MEDIUM — industry analysis, citing survey data
  • Harvard Business Review — “How to Get Your Customers to Trust AI” (January 2026). Credibility: HIGH — premier business publication
  • IAB (Interactive Advertising Bureau) — “AI Transparency and Disclosure Framework” (January 2026). Credibility: HIGH — industry standards body, cross-industry applicability
  • Journal of Accountancy — “Should I Disclose My Use of Gen AI to Clients?” (April 2025). Credibility: HIGH — professional standards publication
  • MIT Sloan Management Review — “Artificial Intelligence Disclosures Are Key to Customer Trust” (2025). Credibility: HIGH — academic institution
  • Relyance AI — “Consumer AI Trust Survey: 82% See Data Loss Threat” (n=1,000+ U.S. consumers, December 2025). Credibility: MEDIUM — vendor-funded but large sample, disclosed methodology
  • Schilke, O. & Reimann, M. — “The Transparency Dilemma: How AI Disclosure Erodes Trust” in Organizational Behavior and Human Decision Processes (2025). Credibility: HIGH — peer-reviewed academic journal, 13 experiments

Brandon Sneider | brandon@brandonsneider.com | March 2026