AI and Customer-Facing Disclosure: When and How to Tell Customers That AI Is Involved
Brandon Sneider | March 2026
Executive Summary
- The disclosure obligation is real and expanding. Five state laws now require some form of AI disclosure in customer interactions, the FTC treats undisclosed AI as potentially deceptive under Section 5, and industry-specific regulators in healthcare and financial services impose additional requirements. A 200-500 person company operating across multiple states faces overlapping obligations that will only increase.
- Consumers demand transparency — and punish its absence. Salesforce finds 73% of consumers want to know when they interact with AI. Relyance AI (n=1,000+, December 2025) finds 76% would switch brands for transparency, with 57% willing to stop using a product entirely when companies cannot explain AI data usage. The trust penalty for concealment exceeds the friction cost of disclosure.
- Disclosure design determines whether transparency builds trust or destroys conversion. A Marketing Science field experiment (n=6,200, 2019) found pre-conversation chatbot disclosure reduced purchase rates by 79.7%. But the penalty shrinks sharply when disclosure is well designed: late timing, clear escalation paths, and competence signals all mitigate it. The companies capturing value from AI treat disclosure as a design problem, not a compliance checkbox.
- The practical framework has four layers. Consumer-facing chatbots, AI-assisted professional work product, AI-generated content, and automated decision-making each carry different disclosure triggers, regulatory requirements, and communication design challenges. A single disclosure policy does not cover all four.
The Regulatory Landscape: Five Layers of Disclosure Obligation
Layer 1: Federal — FTC Section 5 and Operation AI Comply
The FTC has no AI-specific disclosure statute, but Section 5’s prohibition on unfair or deceptive practices applies with full force. The materiality test is straightforward: if knowing that AI was involved would affect a consumer’s decision to purchase or use a product, the omission is deceptive.
The FTC’s “Operation AI Comply,” launched in September 2024 and sustained through the change of administration, demonstrates enforcement appetite. Since launch, the FTC has brought actions against DoNotPay (a $193,000 settlement over claims to be “the world’s first robot lawyer”), Workado/Content at Scale (false claims of 98% AI detection accuracy when actual performance was 53%), Air AI Technologies (overstating chatbot capabilities in customer service), and several e-commerce schemes using AI-based deception. The operators of Ascend Ecom and FBA Machine received permanent bans, with assets redirected toward $35 million in consumer restitution.
The enforcement pattern is clear: the FTC does not require companies to disclose that AI exists in their product. It requires that companies not mislead consumers about what AI does. The distinction matters. A customer service chatbot does not necessarily require disclosure under federal law. A chatbot that a reasonable consumer would believe is human — and where that belief affects purchasing behavior — does.
Layer 2: State Consumer Protection — The Patchwork Accelerates
Five states have enacted laws with AI disclosure implications for customer-facing interactions, and more are pending:
| State | Law | Effective Date | Key Requirement | Penalty |
|---|---|---|---|---|
| California | AI Transparency Act (SB 942) | August 2, 2026 | Covered providers (1M+ monthly users) must offer AI content detection tools and embed provenance markers | $5,000/violation/day |
| California | Companion Chatbot Law (SB 243) | 2026 | Disclosure that bot is not human; exempts customer service bots | Private right of action, min. $1,000 + attorney’s fees |
| Colorado | AI Act (SB 24-205) | June 30, 2026 | Disclosure when consumers interact with high-risk AI systems affecting consequential decisions | AG enforcement |
| New York | AI Companion Law (S-3008C) | 2025 | Disclosure at start of each interaction and every 3 hours | AG enforcement |
| Utah | SB 452 | 2025 | Disclosure before accessing chatbot features or when users ask about AI involvement | AG enforcement |
Two nuances are critical for mid-market companies. First, California’s SB 942 applies only to providers with 1 million+ monthly users, a threshold most mid-market companies fall below. But SB 243’s companion chatbot law has no user threshold, though it exempts standard customer service bots. Second, Colorado’s AI Act applies to any company making “consequential decisions” about Colorado consumers using AI, covering employment, financial services, insurance, healthcare, housing, and legal services. A 300-person company in Ohio that serves Colorado customers and uses AI in any of these categories must comply.
Layer 3: Industry-Specific Requirements
Healthcare and financial services face the most immediate obligations:
Healthcare: Texas TRAIGA (effective January 1, 2026) requires practitioners to provide “conspicuous written disclosure” of AI use in diagnosis or treatment before or at the time of interaction. California’s AB 3030 adds requirements for generative AI communications involving patient clinical information, including a disclaimer and instructions for reaching a human provider. The FDA is developing guidance on AI-enabled medical devices that will carry labeling requirements.
Financial Services: Colorado’s AI Act explicitly covers AI affecting “financial or lending services.” The U.S. Treasury released a Financial Services AI Risk Management Framework in February 2026, adapting NIST AI RMF for financial institutions with specific guidance on consumer-facing AI disclosure. Regulators have signaled that limiting consumers to chatbot-only customer service — without access to human representatives — may constitute unfair practice.
Professional Services: The ABA’s Formal Opinion 512 (2024) establishes that lawyers must consider whether to disclose AI use to clients, particularly when AI involvement is material to the client’s decision-making. Boilerplate engagement letter language is insufficient; the disclosure must be specific enough that clients understand how AI is actually used in their matter. Similar professional obligations are developing in accounting and engineering.
Layer 4: The Liability Backdrop
The Air Canada case (British Columbia Civil Resolution Tribunal, February 2024) established that companies are liable for information their AI chatbots provide to customers. The chatbot gave incorrect bereavement fare information; the tribunal held Air Canada responsible. The damages were modest ($812 CAD), but the precedent is significant: deploying a customer-facing chatbot creates the same liability as deploying a human agent who gives wrong answers.
For professional services firms, the liability exposure is steeper. Industry data shows 91% of professional liability insurers exclude AI errors unless the policy carries a specific endorsement. The average AI-related malpractice settlement is approaching $127,000. The professional’s duty of care does not diminish because a machine drafted the answer.
The Consumer Reality: What the Data Shows About Disclosure and Trust
The research on disclosure impact reveals a paradox that determines whether transparency helps or hurts.
Consumers Want Transparency
The demand signal is unambiguous:
- 73% of consumers want to know when they interact with AI (Salesforce, State of the AI Connected Customer, 2025)
- 76% would switch brands for transparency, even at higher cost (Relyance AI, n=1,000+, December 2025)
- 84% of AI experts support mandatory AI disclosures (MIT Sloan Management Review/BCG, 32-expert panel, 2025)
- 89% believe companies should always offer the option to speak with a human (SurveyMonkey/CX Dive, 2026)
- Only 42% trust companies to use AI ethically, down from 58% in 2023 (Salesforce, 2025)
Trust is falling year over year, and each year that AI proliferates without transparent deployment practices accelerates the decline. Companies that build disclosure into their customer experience now are positioning themselves ahead of a market-wide trust crisis that will reward early transparency.
But Clumsy Disclosure Destroys Conversion
The Marketing Science field experiment (Luo et al., n=6,200 customers, 2019) remains the most rigorous study of disclosure impact on purchasing behavior. When chatbot identity was disclosed before the conversation, purchase rates dropped 79.7%. Customers perceived disclosed bots as less knowledgeable and less empathetic, cut conversations short, and purchased less.
The mechanism is not anti-AI sentiment — it is anti-incompetence inference. When customers learn they are talking to a bot, they assume it cannot help with their specific problem. The 79.7% drop is not a reason to hide AI involvement. It is a reason to design disclosure properly.
The same study found two mitigating factors: late disclosure (after the bot demonstrates competence) and customers’ prior experience with AI both significantly reduced the negative impact. Companies that let the bot prove itself before revealing its identity retained most of the conversion benefit while satisfying transparency obligations.
The Organizational Behavior Paradox
Research published in Organizational Behavior and Human Decision Processes (2025) identifies a “transparency dilemma”: disclosing AI use can erode trust even when the disclosure is meant to build it. The effect is strongest when disclosure triggers pre-existing negative associations with AI, which are widespread in the current environment.
A 2025 study of professional services (published on ScienceDirect) found that disclosing that employees were AI-assisted led consumers to infer lower professional competence, reducing satisfaction. The inference is not rational, since the AI-assisted work was identical in quality, but the perception is real.
The implication: disclosure without competence signals is worse than no disclosure. Companies that say “this was generated by AI” without framing what that means for quality, accuracy, and human oversight invite the negative inference.
The Practical Framework: Four Disclosure Categories
A mid-market company deploying AI across operations faces four distinct disclosure challenges. Each requires different language, timing, and regulatory compliance.
Category 1: Consumer-Facing Chatbots and Virtual Agents
When disclosure is required: When a reasonable consumer might believe they are communicating with a human and that belief would affect their behavior. When operating in Colorado and making consequential decisions about consumers. When operating in healthcare in Texas.
Disclosure design that works (a minimal code sketch follows below):
- Open the interaction with a brief, confident identification: “I’m [Name], [Company]'s AI assistant. I can help with [specific capabilities]. For anything else, I’ll connect you to a team member.”
- Name the bot. Unnamed bots trigger more negative reactions than named ones.
- State capabilities, not limitations. “I can check order status, process returns, and answer product questions” is better than “I’m just a bot and can’t handle complex issues.”
- Provide a visible, persistent escalation option. Salesforce data shows 45% of consumers are more likely to use an AI agent when a clear escalation path exists.
- After demonstrating competence on the first exchange, reinforce with: “I’ve found your order. Would you like me to process this, or would you prefer to speak with a team member?”
What to avoid: Generic “this is an AI” disclaimers with no capability framing. Burying disclosure in terms of service. Requiring customers to navigate multiple screens to reach a human.
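To make the contrast concrete, here is a minimal sketch of an opening-message builder that applies the rules above: a named bot, capability framing, and an always-offered escalation path. It is an illustration under those assumptions, not any vendor’s API; the function and field names are invented.

```python
from dataclasses import dataclass, field

@dataclass
class BotDisclosureConfig:
    """Illustrative configuration for a disclosed customer-service bot."""
    bot_name: str                  # named bots draw fewer negative reactions
    company: str
    capabilities: list = field(default_factory=list)  # state what the bot CAN do
    escalation_channel: str = "a team member"          # persistent human path

def opening_message(cfg: BotDisclosureConfig) -> str:
    """Builds a brief, confident, capability-framed disclosure for the first turn."""
    caps = ", ".join(cfg.capabilities)
    return (
        f"Hi, I'm {cfg.bot_name}, {cfg.company}'s AI assistant. "
        f"I can help with {caps}. "
        f"For anything else, I'll connect you to {cfg.escalation_channel}."
    )

def post_competence_prompt(result: str) -> str:
    """Reinforces customer choice after the bot has demonstrated competence."""
    return (f"{result} Would you like me to proceed, "
            "or would you prefer to speak with a team member?")

cfg = BotDisclosureConfig(
    bot_name="Mia", company="Acme",
    capabilities=["order status", "returns", "product questions"],
)
print(opening_message(cfg))
print(post_competence_prompt("I've found your order."))
```

Note the ordering: capabilities come before the escalation offer, so the first thing the customer reads is what the bot can do, not what it cannot.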
Category 2: AI-Assisted Professional Work Product
When disclosure is required: When AI materially contributes to work product delivered to clients — legal analysis, financial models, engineering calculations, medical assessments. ABA Opinion 512 applies to legal work. Professional duty of care applies across all licensed professions.
Disclosure design that works:
- Address disclosure in the engagement letter or SOW: “This firm uses AI tools in [specific functions] to enhance efficiency and accuracy. All AI-assisted work product receives [describe review process] before delivery.”
- Do not list every AI tool. Describe the category of use and the human oversight layer.
- Frame AI use as a quality and efficiency enhancement, not a cost-cutting measure. Clients who hear “we use AI to reduce our costs” infer they are getting cheaper work. Clients who hear “we use AI to review 100% of contracts rather than sampling 20%” infer they are getting better work.
- Maintain a documented review workflow that demonstrates human judgment in AI-assisted output (a sketch of such a record follows this list). This is both the professional obligation and the litigation defense.
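One way to make the review workflow auditable is a structured record per deliverable, appended to a log that can be produced on demand. A minimal sketch under that assumption; the field names are illustrative, not drawn from any practice-management system.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AIReviewRecord:
    """One auditable entry per AI-assisted deliverable."""
    matter_id: str
    deliverable: str
    ai_use_category: str   # e.g., "contract review", "first-draft research"
    reviewer: str          # the licensed professional exercising judgment
    review_steps: list     # what the human actually checked
    approved: bool
    reviewed_at: str

def log_review(record: AIReviewRecord, path: str = "ai_review_log.jsonl") -> None:
    """Appends the record as one JSON line for later audit."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_review(AIReviewRecord(
    matter_id="2026-0142",
    deliverable="Supply agreement redline",
    ai_use_category="contract review",
    reviewer="J. Alvarez",
    review_steps=["verified cited clauses", "checked jurisdiction-specific terms"],
    approved=True,
    reviewed_at=datetime.now(timezone.utc).isoformat(),
))
```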
Category 3: AI-Generated Marketing and Content
When disclosure is required: California SB 942 (August 2026) requires embedded provenance markers for AI-generated image, video, and audio content — but only for providers with 1M+ monthly users. The FTC treats materially misleading AI-generated content as deceptive. Consumer sentiment (Getty Images, 2025) shows 90% want transparency on AI-generated images.
Disclosure design that works:
- Adopt a consistent internal standard regardless of whether SB 942 applies to the company. The regulatory trajectory is toward universal disclosure; building the workflow now avoids a scramble later.
- For AI-generated content: a brief footer (“Created with AI assistance”) or metadata tag satisfies most current and anticipated requirements (see the sketch after this list).
- For AI-personalized marketing: “This recommendation is based on your purchase history and preferences” is sufficient. Customers expect personalization; they resent surveillance. Frame the disclosure around the value to the customer, not the technology behind it.
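A lightweight way to implement this is to attach the disclosure at publish time, both as the visible footer and as a machine-readable tag. The sketch below assumes a simple HTML pipeline and uses a made-up meta name; formal provenance standards such as C2PA go further, but the workflow shape is the same.

```python
AI_DISCLOSURE = "Created with AI assistance"

def label_content(body_html: str, ai_generated: bool) -> str:
    """Adds a visible footer and a machine-readable tag when AI was used."""
    if not ai_generated:
        return body_html
    # "generator-disclosure" is an illustrative, non-standard meta name
    meta = '<meta name="generator-disclosure" content="ai-assisted">\n'
    footer = f'\n<footer class="ai-disclosure">{AI_DISCLOSURE}</footer>'
    return meta + body_html + footer

print(label_content("<article>Spring lookbook copy...</article>", ai_generated=True))
```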
Category 4: Automated Decision-Making
When disclosure is required: Colorado’s AI Act requires notification when AI is a “substantial factor” in consequential decisions: employment, lending, insurance, healthcare, housing, and legal services. The notification must tell the consumer that AI was involved, describe the purpose of the AI system, and explain how to contest the decision.
Disclosure design that works:
- Build disclosure into decision communication templates (a template sketch follows this list). An adverse credit decision letter, for example, adds: “This decision was made with the assistance of an automated system that evaluates [factors]. You have the right to request human review of this decision by contacting [specific channel].”
- Document the human-in-the-loop for every consequential decision category. The disclosure is only credible if the review pathway actually functions.
- Train the team that handles review requests. The disclosure creates a promise; the organization must deliver on it.
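A minimal template sketch covering the three required elements (AI involvement, system purpose, contest path). The wording and names are illustrative, not model legal language; counsel should review any production version.

```python
def adverse_decision_notice(decision: str, factors: list, contest_channel: str) -> str:
    """Renders the AI-involvement disclosure block of an adverse decision letter."""
    return (
        f"Decision: {decision}\n"
        "This decision was made with the assistance of an automated system "
        f"that evaluates {', '.join(factors)}. "
        "You have the right to request human review of this decision "
        f"by contacting {contest_channel}."
    )

print(adverse_decision_notice(
    decision="Credit application declined",
    factors=["payment history", "credit utilization", "income verification"],
    contest_channel="reviews@example.com or 1-800-555-0100",
))
```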
Key Data Points
| Metric | Finding | Source |
|---|---|---|
| Consumers wanting AI disclosure | 73% | Salesforce, 2025 |
| Would switch brands for transparency | 76% | Relyance AI (n=1,000+), December 2025 |
| Would stop using product over AI opacity | 57% | Relyance AI (n=1,000+), December 2025 |
| Trust companies to use AI ethically | 42% (down from 58% in 2023) | Salesforce, 2025 |
| Pre-disclosure purchase rate drop | 79.7% | Marketing Science (n=6,200), 2019 |
| Want option to speak with human | 89% | SurveyMonkey/CX Dive, 2026 |
| More likely to use AI with escalation path | 45% | Salesforce, 2025 |
| Professional liability insurers excluding AI errors | 91% | Industry data, 2025 |
| AI experts supporting mandatory disclosure | 84% | MIT SMR/BCG, 2025 |
| California SB 942 penalty | $5,000/violation/day | California Legislature, 2024 |
| FTC DoNotPay fine | $193,000 | FTC, January 2025 |
What This Means for Your Organization
The disclosure question is not whether to tell customers about AI. It is how to tell them in a way that builds trust instead of eroding conversion. The 79.7% purchase drop in the Marketing Science study represents what happens when disclosure is designed as a legal disclaimer. The 76% brand-switching willingness in the Relyance AI data represents what happens when transparency is absent. The narrow path between these two penalties is a design challenge, not a compliance checkbox.
For a 200-500 person company, the practical starting point is an inventory: which customer touchpoints involve AI, which fall under state-specific disclosure obligations, and which professional services carry duty-of-care requirements? The inventory produces four buckets — chatbots, professional work product, content, and automated decisions — each with different disclosure triggers, language, and timing.
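The inventory can start as a structured list that maps each touchpoint to its bucket and its trigger. A minimal sketch; the buckets mirror the four categories above, and every field value is an illustrative assumption.

```python
from dataclasses import dataclass

BUCKETS = {"chatbot", "professional_work_product", "content", "automated_decision"}

@dataclass
class AITouchpoint:
    """One row of the disclosure inventory."""
    name: str
    bucket: str               # one of BUCKETS
    states_in_scope: list     # where this touchpoint's customers live
    disclosure_trigger: str   # which law or duty applies, if any

inventory = [
    AITouchpoint("Support chatbot", "chatbot", ["CA", "NY", "UT"],
                 "reasonable-consumer confusion; Utah SB 452 on request"),
    AITouchpoint("Loan pre-screening model", "automated_decision", ["CO"],
                 "Colorado AI Act consequential-decision notice"),
]

for tp in inventory:
    assert tp.bucket in BUCKETS
    print(f"{tp.name}: {tp.bucket} -> {tp.disclosure_trigger}")
```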
Companies that treat disclosure as a brand differentiator rather than a regulatory burden are already gaining measurable advantage. Salesforce’s trust data shows the competitive gap widening: 42% of consumers trust companies to use AI ethically, down from 58% two years ago. Every month that a company deploys customer-facing AI without a clear disclosure framework pushes it further behind the small number building trust through transparency.
If your organization is navigating specific disclosure obligations across multiple states or industries, I’d welcome the conversation — brandon@brandonsneider.com.
Sources
- FTC, “Operation AI Comply” enforcement actions (September 2024-present). Federal enforcement sweep targeting deceptive AI claims; actions against DoNotPay, Workado, Air AI Technologies, Ascend Ecom, FBA Machine. Credibility: Primary federal enforcement record. https://www.ftc.gov/news-events/news/press-releases/2024/09/ftc-announces-crackdown-deceptive-ai-claims-schemes
- DLA Piper, “AI disclosure laws on commercial chatbot interactions are on the rise” (January 2026). Analysis of emerging state chatbot disclosure legislation. Credibility: Major law firm regulatory analysis. https://www.dlapiper.com/en-us/insights/publications/2026/01/ai-disclosure-laws-on-chatbots-are-on-the-rise-key-takeaways-for-companies
- Baker McKenzie, “United States: Navigating the Laws of Chatbots and AI Assistants” (February 2026). Comprehensive state-by-state law analysis including California SB 243, New York S-3008C, Utah SB 452, Maine LD 1727. Credibility: Major law firm regulatory analysis. https://www.bakermckenzie.com/en/insight/publications/2026/02/united-states-navigating-the-laws-of-chatbots-and-ai-assistants
- Future of Privacy Forum, “Understanding the New Wave of Chatbot Legislation: California SB 243 and Beyond” (2025). Detailed analysis of companion chatbot vs. customer service bot exemptions. Credibility: Independent privacy research organization. https://fpf.org/blog/understanding-the-new-wave-of-chatbot-legislation-california-sb-243-and-beyond/
- Luo, Tong, Fang, Qu, “Machines vs. Humans: The Impact of Artificial Intelligence Chatbot Disclosure on Customer Purchases,” Marketing Science, Vol. 38, No. 6 (2019). Field experiment, n=6,200 customers. 79.7% purchase rate drop with pre-conversation disclosure. Credibility: Peer-reviewed academic journal; gold-standard field experiment methodology. https://pubsonline.informs.org/doi/10.1287/mksc.2019.1192
- Relyance AI/Truedot.ai, “Consumer AI Trust Survey” (December 2025). n=1,000+ U.S. consumers, ±3.2% margin of error. 76% would switch brands for transparency. Credibility: Vendor-funded but methodologically sound with nationally representative sample. https://www.relyance.ai/consumer-ai-trust-survey-2025
- Salesforce, “State of the AI Connected Customer” (2025). 73% want to know when interacting with AI; only 42% trust companies to use AI ethically. Credibility: Vendor-funded survey; Salesforce has commercial interest in AI agent adoption, but large sample and consistent methodology across six annual editions. https://www.salesforce.com/news/stories/ai-customer-research/
- KPMG/University of Melbourne, “Trust, Attitudes and Use of AI: A Global Study” (2025). n=48,000+ across 47 countries. Only 46% willing to trust AI systems globally. Credibility: Independent academic partnership with Big Four firm; largest AI trust study conducted. https://kpmg.com/xx/en/our-insights/ai-and-technology/trust-attitudes-and-use-of-ai.html
- Moffatt v. Air Canada (BCCRT, February 2024). Company held liable for chatbot’s incorrect bereavement fare information. Credibility: Tribunal decision; binding precedent in jurisdiction. https://www.cbc.ca/news/canada/british-columbia/air-canada-chatbot-lawsuit-1.7116416
- ABA Formal Opinion 512 (2024). Lawyers must consider disclosure of AI use to clients; boilerplate insufficient. Credibility: Authoritative professional ethics guidance. https://www.fmglaw.com/professional-liability/aba-issues-formal-guidance-for-lawyers-use-of-generative-ai/
- Jones Day, “California Enacts AI Transparency Law (SB 942)” (October 2024). $5,000/violation/day penalty; 1M+ user threshold; text-only systems exempted. Credibility: Major law firm analysis of primary legislation. https://www.jonesday.com/en/insights/2024/10/california-enacts-ai-transparency-law-requiring-disclosures-for-ai-content
- MIT Sloan Management Review/BCG, “AI Disclosures Are Key to Customer Trust” (2025). 84% of 32-expert panel support mandatory AI disclosures. Credibility: Independent academic publication; small expert panel, not representative survey. https://sloanreview.mit.edu/article/artificial-intelligence-disclosures-are-key-to-customer-trust/
Brandon Sneider | brandon@brandonsneider.com | March 2026