ANALYSIS

New study maps the privacy gap in consumer AI — and proposes a fix

A new academic study offers a comprehensive attempt to map the gap between the confidentiality consumer chatbot users expect and the confidentiality they actually receive.


Contributors:

Théodore Christakis

Chair, Legal & Regulatory Implications of AI, Multidisciplinary Institute in AI

University of Grenoble Alpes

Consumer chatbots have become the world's most trusted strangers. Every day, hundreds of millions of people confide health symptoms, legal strategies, financial anxieties, relationship crises and moments of acute emotional distress to systems that feel private but are not governed by anything resembling professional secrecy. 

The interface invites intimacy; the fine print reserves broad rights most users will never read.

A new academic study, "You Trust Your Chatbot With Everything. Should You? Part 1: How the Controller Uses Your Chat Data," offers the first comprehensive attempt to map the gap between the confidentiality users expect and the confidentiality they actually receive. Through a comparative policy-and-interface analysis of five major consumer chatbots — ChatGPT, Gemini, Claude, Grok and DeepSeek — the study examines the internal boundary: how providers may reuse conversations for training, review them through human annotators, monetize them through advertising and share them across operational and ecosystem channels.

The focus is deliberately on everyday consumer use, not enterprise or business offerings, which typically include stronger contractual and technical protections. The findings do not reveal a landscape of abuse, but they do reveal a landscape of structural opacity. And they point toward a concrete proposal that the privacy community should take seriously: sealed mode.

Five findings privacy professionals need to know

The study examines decision points that together define the privacy risk profile of everyday chatbot use. The combined picture produces five principal findings.

1. Every major provider now trains on consumer chat data by default. 

Since Anthropic reversed its prior usage policy in September 2025, the last holdout among major providers has fallen. A Stanford Human-Centered Artificial Intelligence study confirmed this across a broader six-provider sample. 
