The health AI agent rush: Five products in three months, and the privacy questions that got left behind

Key findings from a new study analyzing five health-specific AI products that connect electronic health records, wearables and wellness apps to their chatbots.

Contributors:
Théodore Christakis
Chair, Legal & Regulatory Implications of AI, Multidisciplinary Institute in AI
University of Grenoble Alpes
According to OpenAI, 230 million people ask ChatGPT health questions every week. Over 40 million do so every day. They describe symptoms, share test results, ask about drug interactions, confide fears about diagnoses they have not yet received. These conversations happen on every continent, in every language, under the same default privacy settings that govern a request to draft a birthday card.
That is already remarkable. But something far more consequential is now underway. In less than three months, five major technology companies have moved beyond chatbot conversations and into the business of building personal health data hubs powered by artificial intelligence.
The convergence nobody predicted
Between January and March 2026, five companies launched health-specific AI products in rapid succession: OpenAI introduced ChatGPT Health 7 Jan., Anthropic launched Claude for Healthcare 11 Jan., Amazon's Health AI became available 22 Jan. for One Medical members and was expanded in March, Microsoft launched Copilot Health 12 March, and Perplexity unveiled Perplexity Health 19 March. Consumer health AI has been described, with justification, as the year's fastest-moving product category.
Each product follows a remarkably similar model. Each connects to electronic health records through third-party intermediaries — b.well, HealthEx or state health information exchanges. Each integrates with wearables and wellness apps such as Apple Health, Fitbit and Oura. Each promises not to train on health data. Each stores health conversations separately from ordinary chat. And each is available, for now, only in the United States.
In a recent study, I examined all five products in a detailed comparative table covering 16 dimensions and assessed them against the six components of the sealed mode framework I proposed in my March 2026 study on chatbot privacy, "You Trust Your Chatbot With Everything. Should You?," and its companion IAPP article.
Why these products exist
Before turning to the governance gaps, it is worth understanding why these products exist and why the clinical case for data integration is real. A recent research paper on personal health agent architectures argues that integrating a user's medical history is not merely a convenience feature but a clinical safety requirement: generic health guidance that ignores medications, allergies or pre-existing conditions may be not just unhelpful but actively harmful.
The American Medical Association's 2026 Physician Survey reinforces this: 81% of U.S. physicians now use AI in their practice, and majorities are comfortable with patients using AI for medication questions (68%) and general health queries (64%). But nearly half oppose patients using AI to interpret pathology (49%) or radiology (46%).
The physician community, in other words, welcomes the category but draws a line within it. The more data these systems ingest, the more consequential their outputs become and the more essential it is that the governance frameworks match.
The sealed mode connection
My earlier study documented how five major chatbot providers handle consumer conversations across training, human review, advertising and data sharing. It proposed sealed mode: a clearly labeled consumer pathway for sensitive conversations where the default architecture materially constrains reuse and insider access, combining no training, no advertising, siloed personalization, strict retention, minimized human review and cryptographic hardening.
The health AI products that emerged in early 2026 overlap significantly with several sealed mode components. But the overlap is partial, and the orientation is fundamentally different. Sealed mode starts from the conversation: the moment a user types "I am terrified it might be something serious." The health AI products start from data integration: connecting medical records, wearables and wellness apps to deliver personalized health intelligence. The privacy protections are instrumental to the integration objective, not the objective itself.
This distinction matters because it explains both the promise and the problem.
Three key findings
First, the market has validated the core intuition behind sealed mode. Five companies have independently concluded that health conversations cannot be governed by the same defaults as ordinary chat. That recognition, unanimous and rapid, is significant. But every product has converged on one specific form of differentiation: the health data integration hub. None addresses the question sealed mode was designed to answer: what protections apply to the hundreds of millions of health conversations that already happen every day, without any medical record being connected or any app being downloaded?
Second, none of the five products meets the full sealed mode standard. All five offer no-training commitments and data isolation, and these are meaningful steps. But the comparative analysis reveals significant gaps. While all five offer encryption protections, for instance, none offers cryptographic hardening that would constrain the provider's own ability to access plaintext health conversations, a protection that would matter considerably if health data were sought through civil discovery or government demands. This is a question I examine in the forthcoming Part 2 of my study, and one that the recent court order requiring OpenAI to produce millions of ChatGPT conversation logs in the consolidated copyright litigation makes anything but hypothetical.
Human access constraints are vaguely described or not documented at all in four of the five products. And the governance frameworks remain under each provider's unilateral control, without independent verification.
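To make concrete what such hardening could mean, consider a minimal, hypothetical sketch of client-side encryption, written here in Python using the widely available cryptography library. It illustrates the principle only: the encryption key is generated and held on the user's device, so the provider stores ciphertext it cannot read. None of the five providers has disclosed such an architecture, and a real deployment would involve key management, recovery and hardware-backed key storage far beyond this illustration.

```python
# Hypothetical sketch: client-side encryption of a health conversation.
# The key never leaves the user's device; the provider stores only
# ciphertext and cannot produce plaintext, even under legal compulsion,
# without the user's cooperation.
from cryptography.fernet import Fernet

# Generated and stored on the user's device (e.g., in an OS keystore).
user_key = Fernet.generate_key()
cipher = Fernet(user_key)

# This is all the provider would ever receive and retain.
stored_ciphertext = cipher.encrypt(
    b"I am terrified it might be something serious."
)

# Decryption is possible only where the key lives: on the device.
plaintext = cipher.decrypt(stored_ciphertext)
```

The trade-off such a design imposes is part of why no provider has adopted it: once the provider cannot read the content, server-side features such as cross-device sync, safety review and abuse detection become considerably harder to deliver.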
Third, the European exclusion reveals a structural problem, not merely a market timing decision. Every product bundles two distinct things into a single offering: privacy protections — no training, isolated memory, no advertising in the health space — and data integration features — connecting medical records, wearables and wellness apps through dedicated application programming interfaces. It is the data integration dimension, not the privacy protections, that triggers the regulatory hurdles in Europe: the special-category requirements for systematic health data processing under the General Data Protection Regulation's Article 9, the Medical Device Regulation if the software is deemed to have a medical purpose, and the AI Act's probable high-risk classification for such systems.
The privacy-protective dimension, by contrast, creates no new regulatory obstacle. Adding stronger privacy defaults to health conversations that already take place every day on these platforms would, if anything, bring the service closer to GDPR compliance, not further from it.
Providers could have proceeded in two steps. First, offer a protected health conversation environment globally, for the conversations that are already happening, with no data integration required. This step would have been available everywhere, including Europe, since it only adds protections to existing functionality. Second, layer the medical record and app connectivity on top, in jurisdictions where regulatory compliance permits.
Instead, by bundling both into a single product, they excluded Europeans from both the privacy protections and the data integrations. The paradox: the users most protected by data protection law are the ones denied access to the most privacy-protective chatbot feature currently available.
The governance questions the privacy profession needs to confront
The health agent rush raises a set of questions that extend well beyond sealed mode. Privacy professionals, whether they work in-house at the companies building these products, advise regulators or set policy, will need to engage with each of them.
The Health Insurance Portability and Accountability Act gap. None of the five consumer products operates as a HIPAA-covered entity. When a user voluntarily connects medical records to ChatGPT Health or Copilot Health, that data falls outside HIPAA's protective framework.
These are consumer products, not health care providers, health plans or health care clearinghouses. The governance of what may become the largest aggregation of health data in history rests, in the U.S., on each company's privacy policy and applicable state consumer protection laws.
This is a fundamental structural difference from the EU framework, where the GDPR applies to any controller processing personal data regardless of whether the controller operates within the health care system. The privacy profession should not assume that "health AI" means "HIPAA-covered."
The "no training" commitment deserves closer examination. Every provider states that health data is "not used to train our foundation models." But this commitment is narrower than it appears. It does not necessarily preclude product analytics, quality improvement, aggregate statistical research or what Amazon explicitly describes as training on "abstracted patterns without directly identifying information." The gap between "not training foundation models" and "not using this data for any form of learning or improvement" is significant and largely undisclosed. And privacy policies can be modified at any time.
Private health hubs and the European Health Data Space. The EU has spent years building the European Health Data Space, a public infrastructure for secondary use of health data with institutional gatekeeping: access through designated health data access bodies, purpose-specific permits and structured oversight. A private AI platform that aggregates consumer health data under consent-based processing could derive comparable epidemiological and pharmaceutical insights without passing through that institutional governance.
Even under the GDPR, this structural asymmetry would persist. It risks creating a two-track system where the most demanding governance requirements apply to public institutions and academic researchers, while the largest health data aggregations sit in private hands under less structured, consent-based frameworks. European policymakers should address this asymmetry proactively before these products seek European market access.
Cybersecurity concentration risk. Five companies are simultaneously centralizing health data linked to personal identities, medical records and years of behavioral patterns from wearables. A single breach at any of these providers could expose health data at a scale without precedent. Unlike hospital breaches, which are geographically contained, a breach of a global consumer health AI platform could simultaneously affect users across dozens of countries.
The security protections described in public materials are vague. All five mention encryption at rest and in transit. None has published a detailed security architecture or submitted to independent verification. Each product also relies on third-party data intermediaries, such as b.well, HealthEx or Terra API, and each intermediary adds a point of vulnerability to the data supply chain.
Structural tensions with GDPR principles. Even setting aside the specific health data questions under Article 9, the architectural choices behind these products sit in tension with several foundational principles of data protection law.
Data minimization: these products are designed to aggregate as much health data as possible from as many sources as possible.
Storage limitation: no product, with the partial exception of Perplexity, publicly specifies retention durations for uploaded health data and user interactions.
Purpose limitation: the boundaries between primary use and secondary purposes are poorly defined.
Controller and processor complexity: the intermediaries involved create layered arrangements that users cannot meaningfully assess.
Data portability: nothing in current product architectures supports moving health data between providers. Once a user connects years of medical records and wearable data to a single platform, switching costs become prohibitive.
Children's data: most products require users to be 18 years old, but public materials do not document a uniform, strict age-verification model across these products. That gap is significant in a health context, because such systems may collect or infer sensitive data about minors.
There is no free lunch. Several products are free or bundled with existing subscriptions. But the strategic logic is clear. Amazon channels users toward One Medical consultations and Amazon Pharmacy prescriptions. OpenAI deepens engagement on a platform it is simultaneously monetizing through advertising, with personalization enabled by default. Perplexity and Anthropic restrict health features to paying tiers.
The question for privacy professionals advising these companies, and for regulators overseeing them, is whether privacy commitments made during the trust-building phase of product launch will survive the monetization phase that follows.
The path forward: governance, not exclusion
None of these questions is a reason to block health AI products whose potential to help people better understand and manage their health is real. They are reasons to get the governance right.
One avenue worth exploring is the use of regulatory sandboxes, already provided for under the AI Act, to allow health AI products to operate in Europe under supervised conditions with strong, verifiable privacy safeguards. The current outcome, where the regulatory complexity of deploying a health data integration product in Europe results in Europeans being excluded entirely, serves neither innovation nor protection. A supervised pathway that conditions market access on demonstrated compliance with robust privacy standards, including the kind of architectural safeguards that sealed mode envisions, would be preferable to the status quo.
The real question is no longer whether differentiated privacy for sensitive chatbot conversations is conceivable. Five companies answered that in less than three months. The question is whether the privacy-protective core can be extracted from the integration products it is currently bundled with and offered as a standalone standard, available to every user, everywhere.
The first provider to do so will set a standard the others will have to follow. The privacy profession should be helping to shape what that standard looks like.
