The state of Washington has led the way in creating the first set of guardrails (a geofence, some would say? pun intended) around consumer health data — which has now been followed by copycat laws such as the recently passed Connecticut Senate Bill 3 — to fill the void left by the landmark Dobbs decision overturning Roe v. Wade.
Recent legislative and regulatory actions, including the Federal Trade Commission's cases against Premom, GoodRx and Flo, have been particularly challenging because companies and consumers alike are required to read between the lines. For example, what do lawmakers and regulators mean when they suggest adopting robust protections for consumer health information?
Washington's My Health My Data Act treats consumer health information as a blanket category, encompassing a wide array of both health-specific data and health-related data, running the gamut from genetic, biometric and pharmaceutical data to social determinants of health, geolocation, inference and web search data. This places industry in a challenging place, having to prioritize which aspects of this definition will require the most attention and where the lines need to be drawn regarding consumer health information.
Industry should not stand idly by, permitting the host of legislative and regulatory activity to set off alarm bells, or jump to hasty conclusions that could prompt companies to bar data processing in certain states, create distinct consumer experiences within individual states, or halt marketing, research, development and product deployment in the consumer health space.
Instead, companies can prioritize their review of consumer health data and optimize their compliance process by incorporating a risk-based approach.
A risk-based approach to processing consumer health data
To date, a risk-based approach to consumer health data does not, to our knowledge, exist. The 10 state consumer privacy laws use their own baseline definitions of sensitive data, which, in some cases, parse health data separately or include it in the definitions of "sensitive data" and/or "sensitive data inferences."
In the case of so many differing definitions and proposed expanded applicability and enforcement of the FTC's Health Breach Notification Rule, one uniform risk-based framework would help with interpretation.
Some models may provide useful benchmarks and more reasonable indices of risk in the artificial intelligence space. For example, the National Institute of Standards and Technology is building the AI Risk Management Framework to support more trustworthy practices in AI development, design and deployment. It could be an interesting model for the consumer health space to use to further parse the sensitivities across different types of consumer health data.
Other jurisdictions have adopted a blanket opt-in approach to regulating data, most notably the EU General Data Protection Regulation, but research demonstrates some clear shortcomings with that model.
In the U.S., when we look to the Health Insurance Portability and Accountability Act, data is uniformly treated as part of the same confidential category of "protected" health information, with all communications strictly reserved for the provider-patient relationship and any respective business associates.
Academics have looked at the general consequences of regulating data and have shared their perspectives. In his recent piece, "Data Is What Data Does: Regulating Use, Harm, and Risk, Instead of Sensitive Data," George Washington University Professor of Intellectual Property and Technology Law Dan Solove argues that we should consider the impact of the risk and use of the data rather than trying to regulate the data itself. This would ensure we are future-proofing our approach, mitigating the harms raised by the data in real time and appropriately scoping the risks so they are manageable, rather than creating an unreasonable expectation to regulate all risk — particularly as it concerns the burden on regulators.
Data protection can be made more robust by identifying the data's risk profile, coupled with actual harms imposed, rather than hypothetical expectations of what could potentially cause or predict harm. This signals that industry should obtain the appropriate levels of consent, have a duty of care toward all consumer health data processed, and take reasonable steps — including incorporating impact assessments and risk mitigation techniques — to ensure the highest standard of protection for all consumer health data.
Categorizing consumer health data through risk-based sensitivity
A risk-based approach to consumer health data is appropriate on its face, but depending on the context, the risk associated with the use of the data, and whether it is combined with other data sources and elements or made available in the public domain, that data could warrant differing levels of regulation and enforcement activity.
For a risk-based approach, we could think about the data in tiers descending from the most sensitive data that requires the most immediate level of action and care.
- The first tier of data, the "most sensitive" data elements, includes data that is clearly linkable to health care in its most traditional status. This tier would include data that identifies a consumer’s past, present or future physical or mental health status, such as reproductive care, drug or pharmaceutical care, or biometric and genetic data that identifies a specific individual. This would also include linkable consumer data shared with third parties for advertising purposes without obtaining consent.
- The second tier of midlevel sensitivity data includes data that could "reasonably indicate" an individual's physical or mental health status; data allowing for freemium, loyalty or other differential pricing programs; inferred data; and biometric data that is collected but does not identify a specific individual.
- The third tier of sensitivity focuses on commonplace activities, such as general web searches about health status or diagnosis run by a consumer, data relating to search or purchase of health and wellness apparel or related products, data for first-party marketing and analytics, and data combined with other data already in the public record.
- The fourth tier includes "prohibited" use cases of sensitive consumer health data, based on the implications from the recent FTC enforcement actions. Prohibited use cases would, at minimum, include sensitive precise location data that is collected, used or transferred for advertising purposes, and/or retained longer than is necessary or appropriate; and data generated by a geofence to collect health data, track, advertise to or message a consumer.
These prohibited use cases would appropriately convey the guidelines developed by industry thought leaders coalescing around digital advertising principles for self-regulation and their perspective on the intersection between consumer health and advertising.
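As a thought experiment, the four tiers above could be expressed as a simple classification table in code. This is only an illustrative sketch: the category names and tier assignments below are hypothetical examples drawn loosely from the descriptions above, not terms from any statute or regulation, and a real compliance program would involve far more categories and legal review.

```python
from enum import IntEnum

class Tier(IntEnum):
    """Risk tiers for consumer health data, per the framework sketched above."""
    MOST_SENSITIVE = 1   # clearly linkable to traditional health care
    MID_SENSITIVITY = 2  # could "reasonably indicate" health status
    COMMONPLACE = 3      # everyday activity with some health dimension
    PROHIBITED = 4       # use cases implicated by recent FTC actions

# Hypothetical category labels for illustration only.
TIER_MAP = {
    "reproductive_care_record": Tier.MOST_SENSITIVE,
    "identifying_biometric": Tier.MOST_SENSITIVE,
    "inferred_health_status": Tier.MID_SENSITIVITY,
    "non_identifying_biometric": Tier.MID_SENSITIVITY,
    "general_health_web_search": Tier.COMMONPLACE,
    "wellness_product_purchase": Tier.COMMONPLACE,
    "precise_location_for_ads": Tier.PROHIBITED,
    "geofence_health_tracking": Tier.PROHIBITED,
}

def classify(category: str) -> Tier:
    """Return the risk tier for a data category.

    Unknown categories default to the most sensitive tier, so the
    sketch fails closed rather than under-protecting new data types.
    """
    return TIER_MAP.get(category, Tier.MOST_SENSITIVE)
```

A fail-closed default is one plausible design choice here: until a new data type has been assessed, treating it as most sensitive mirrors the duty-of-care posture the article describes.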
In the age of wearables, wellness apps, generative AI, the metaverse and the quickly changing "moving targets" of the emerging tech world and its enthusiastic consumers, we must seek to build reasonable frameworks around data use.
Enforcement agencies have limited capacity to enforce the wide range of laws and regulations now manifesting in the patchwork of state consumer privacy laws. Industry accountability programs that review, reinforce and verify these best practices can provide an early review and a potential backstop for enforcement.
Otherwise, we might inadvertently limit the potential to meet admirable goals, whether it is to crowdsource a cure for cancer, make consumer health applications and services accessible to special needs populations, or other innovations steeped in the 247-year-old pursuit of the American dream.