In an excellent IAPP podcast interview by Angelique Carson, CIPP/US, Woody Hartzog makes the compelling point that informed consent was originally developed for rare, high-risk and potentially life-threatening situations, like surgery and medical research. Hartzog argues that the process of informed consent is not designed for, nor can it offer fair choices for, the "micro-permissions" that occur in our daily interaction with technology. What Hartzog points out derives from a fundamental difference between health care (digital or analog) and our interactions with social media and the internet of things.
In health care, one cannot get assistance without disclosing intimate physical and behavioral facts to others. If one doesn’t disclose this otherwise private information, death or serious permanent harm might result. In contrast, we would not die or be permanently disabled if we did not use the internet (people who use digital assistive devices due to disabilities are the exception here).
This fundamental difference is important to remember as we renew policy discussions of the appropriate balance between the conveniences of a technologically interconnected world and our personal privacy and dignity. Understanding this difference is crucial at a time when Congress is also holding hearings on whether health information privacy laws are too strict to adequately address the opioid crisis.
Our nationwide rules for collection of data in the traditional health care system recognize that the physician learns the patient’s health information solely to help the patient improve her health or alleviate suffering. Rooted in the Hippocratic Oath, this ethical rule frames a fiduciary role that exists between the physician and her patient: The physician receives health information in trust to be used only for legitimate health care purposes. Using that health information for other purposes would be a breach of that trust, contrary to the Hippocratic Oath’s first principle, “do no harm.”
For example, an excerpt from the Oath administered by Tufts Medical School says, “I will respect the privacy of my patients, for their problems are not disclosed to me that the world may know.” Surveys and research show that patients trust their doctors to make appropriate health information-disclosure decisions, including sharing information with other health professionals who are responsible for the patient's care.
As a corollary, the original two iterations of the Health Insurance Portability and Accountability Act Privacy Rule (in December 2000 and again in August 2002) contain significant discussion about what health information needs to go where in order for an individual to obtain care, get it paid for, and identify errors and mistakes in delivering that care. In particular, the preamble to the 2002 modifications to the Privacy Rule (67 Fed. Reg. 53181 Aug. 14, 2002) discusses, and then rejects, a requirement that sick or dying patients consent to every disclosure. And this makes sense. We don’t really want people rushed to a hospital in an ambulance to check the privacy settings of their electronic health record portal app, or for doctors and nurses to withhold care from an unconscious person because they don’t know the person’s privacy choices.
In contrast, we have to admit that we can actually get through our lives quite successfully without the ad-powered social media we use today or without digital assistance in everything we do.
For example, it was not too long ago that we paid for email accounts rather than having them for free so that the email sponsor could mine the content for web behavior and advertising insights. Grocery lists can be written by hand. Social media or online digital purchases are also not necessary. They are embedded in the fabric of our modern lives because we choose to use them, not, like health care, because we might die if we don’t.
And, our use of this technology, while convenient, does not come with a fiduciary relationship to the supplier. Quite the opposite when you think about ad-supported services: They provide the service for free in order to harvest data that they will use for many purposes besides supplying the service.
Nevertheless, despite the differences between health care and convenient technology, we keep trying to transplant the informed consent part of health care data collection into an ad tech context. There are some thoughtful academics, Hartzog among them, writing about fiduciary-like duties of digital data collectors and about how the context of collection should shape the expectations of both data subject and data collector. But, on the surface, the comparison to the medical discussion with a doctor is tenuous. It is pretty hard, at a common-sense level, to argue that participation in social media has the same life impact as seeking health care.
So, where does that leave us in discussing the future of privacy regulation in the U.S.?
The life-or-death impact of health care cannot be rationally compared to the benefits of choosing social media participation or a free email account. That being said, I think the context of collection, as a way to ensure a fair deal about data, is an underused concept that could be applied to many types of data collection (health, advertising and email, for example) and that can be better explained to consumers.
A good example is an emerging discussion about social media and health. Take, for example, the rise of health-condition affinity groups on social media: Participation in such groups can significantly improve people’s emotional or physical health or lead to new health research efforts. But the data itself falls outside traditional HIPAA rules. There are only a few choices among effective, usable social media platforms. Does the fact that a ubiquitous social media platform is the only way for people to access these affinity groups tell us we need different privacy rules because the context of collection is different? After all, real harm can result from privacy lapses here, just as in traditional health care.
So where do we go from here?
It is important for consumers to have clear but manageable choices. Too many choices, or asking for a choice too often, is not manageable. Transparency in plain language is key, perhaps redesigned around what we know from brain science about how consumers read webpages (compared to paper). There may be underutilized principles of fair information practices that we, as the experts, can help policymakers understand as they confront these important choices and a rapidly developing technical landscape. We can help data collectors better understand the actual harms that come from bad privacy practices. I am not talking about cellphone robocalls, but about health status discrimination, criminal assault and online harassment, among others.
Before we get too far down a path where consent is treated as the cure for all privacy issues, let us remember where informed consent comes from, and why, and then decide if that fits our needs in domains that are about convenience, not life or death.