New study maps the privacy gap in consumer AI — and proposes a fix

A new academic study offers a comprehensive attempt to map the gap between the confidentiality consumer chatbot users expect and the confidentiality they actually receive.

Contributors:
Théodore Christakis
Chair, Legal & Regulatory Implications of AI, Multidisciplinary Institute in AI
University of Grenoble Alpes

Consumer chatbots have become the world's most trusted strangers. Every day, hundreds of millions of people confide health symptoms, legal strategies, financial anxieties, relationship crises and moments of acute emotional distress to systems that feel private but are not governed by anything resembling professional secrecy.
The interface invites intimacy; the fine print reserves broad rights most users will never read.
A new academic study, "You Trust Your Chatbot With Everything. Should You? Part 1: How the Controller Uses Your Chat Data," offers the first comprehensive attempt to map the gap between the confidentiality users expect and the confidentiality they actually receive. Through a comparative policy-and-interface analysis of five major consumer chatbots — ChatGPT, Gemini, Claude, Grok and DeepSeek — the study examines the internal boundary: how providers may reuse conversations for training, review them through human annotators, monetize them through advertising and share them across operational and ecosystem channels.
The focus is deliberately on everyday consumer use, not enterprise or business offerings, which typically include stronger contractual and technical protections. The findings do not reveal a landscape of abuse, but they do reveal a landscape of structural opacity. And they point toward a concrete proposal that the privacy community should take seriously: sealed mode.
Five findings privacy professionals need to know
The study examines the decision points that together define the privacy risk profile of everyday chatbot use. The combined picture yields five principal findings.
1. Every major provider now trains on consumer chat data by default.
Anthropic, the last holdout among major providers, reversed its prior usage policy in September 2025. A Stanford Human-Centered Artificial Intelligence study confirmed the same default across a broader six-provider sample.