
Privacy Perspectives | Chatbots and privacy rights after death: Once again, life imitates art

In January 2021, Microsoft secured a technology patent that went mostly unnoticed. In a story pulled straight from Netflix's "Black Mirror," Microsoft's patent detailed a method for "creating a conversational chatbot modeled after a specific person" by scouring the internet for the "social data" of dead people — images, posts, messages, voice data — that could then be used to train its chatbots.

Setting aside the creepy factor, there are some legitimate privacy questions to be answered here.

First, do dead people have privacy rights? Second, what are we looking to protect for the deceased — the content posted, their personal information or both? And finally, who should have a say regarding what happens with this information — the decedent's heirs? The tech company in possession of the personal information?  

Addressing the first question, a variety of laws applicable to the U.S. government are premised on the idea that privacy protections cease at death. For example, the Privacy Act of 1974 extends its "statutory rights to living as opposed to deceased individuals." Likewise, the authority that governs Intelligence Community civil liberty and privacy protections, Executive Order 12333, extends protections only to "United States persons," which, as a DHS Intelligence & Analysis manual notes, is limited to living persons.

In the governmental context, applying privacy protections only to the living makes sense, considering these protections often come about in response to historical incidents where the government overstepped its authority and impinged upon privacy. For example, during the Vietnam War era, the government engaged in unlawful surveillance and created secret dossiers of people engaged in First Amendment–protected activities ranging from anti-war protests to civil rights marches, leading to the enactment of the Privacy Act. And none of these concerns is relevant to dead people.

Privacy laws in the commercial sector, however, are less consistent on this point. Take, for example, the U.S. Health Insurance Portability and Accountability Act, under which protected health information excludes individually identifiable health information of "a person who has been deceased for more than 50 years," implying that individually identifiable health information of individuals dead for less than 50 years is, in fact, protected.

However, we see a different outcome when dealing with the U.S. Family Educational Rights and Privacy Act. In particular, the Department of Education has opined FERPA rights of students "lapse or expire upon the death of the student." And to the extent FERPA protections continue to apply after the death of an elementary/secondary student, this is because the privacy right exists with the parents until the student is 18.

Interestingly, most privacy laws use some variation of the term "personally identifiable information," often presuming the covered "individual" to be living. For example, the EU General Data Protection Regulation applies only to living natural persons, protecting neither corporations nor the deceased.

This raises the second question: What are we protecting? The content posted, the personal information associated with the decedent or both? Notably, content appears to be the very data Microsoft is looking to scrape for its chatbots.

While many of the major platforms allow a person to choose how to handle their account after death, the options are quite limited and at present lie solely within the platform's discretion.  

  • Facebook: Once informed that a person has died, "it's our policy to memorialize the account ... the content the person shared (example: photos, posts) remains on Facebook and is visible to the audience with whom it was shared. If the account holder chose a legacy contact, the legacy contact can control who can post tributes ...." But only "[i]n rare cases" will Facebook consider requests for account information or content.
  • Google: "In certain circumstances" Google may provide content from a deceased user's account, but the ultimate decision rests solely with Google.
  • Yahoo: "Unfortunately, Yahoo cannot provide passwords or allow access to the deceased's account, including account content such as email."

Consider that it took the family of a 15-year-old girl crushed to death by a Berlin metro train years of litigation to gain access to her Facebook account, which they obtained only after Germany's highest court ruled in their favor. Or consider the U.K.'s Molly Russell, who died by suicide in 2017 at the age of 14, after viewing self-harm and suicide material on social media. After Molly's death, her family spent years attempting to secure access to her Instagram account, calling on the U.K. government to "give grieving parents clear legal rights to their children's devices and online accounts" and thus potential closure.

One may ask, why do tech companies want this data? What value is there in the information? 

Obviously, you can no longer sell anything to the dead, although they've been known to vote on occasion. The value lies in the potential to target the decedents' survivors — for example, by using the data to design, train and test the artificial intelligence and machine learning behind Microsoft's chatbots, which would then likely be marketed to those very survivors.

But testing this tech requires large amounts of personal information, including content. And why give this valuable data to the heirs when you can use it for free with few, if any, privacy limitations?

Consider this. 

Another tech company that struggled with the need for large amounts of real-world data to test its AI/ML algorithms was Clearview AI. For those who may not recall, Clearview AI got in trouble for scraping approximately 3 billion images without people's knowledge or consent, then using that data to train, test and sell its massive facial recognition database to private companies, federal agencies and wealthy individuals, resulting in lawsuits by the American Civil Liberties Union, state attorneys general and others. But what if the data had belonged solely to dead people? Would it have even become an issue for Clearview?

Of course, one could argue this should be handled at the state level, where estates are traditionally probated. To that point, in 2017, the Florida Supreme Court — in a case brought by Emma Weaver on behalf of her husband — ruled the State Constitution's right to privacy extends beyond death in "any circumstance." But this may be an outlier. Just as importantly, handling this at the state level risks a plethora of different state laws, similar to what we see today in the absence of a federal privacy law.

At the end of the day, legacy is something people carefully craft throughout their lives by what they disclose and what they choose to keep private. Therefore, it would seem only right to allow those closest to the decedent to decide how to protect and preserve that legacy after death.

Tech companies, on the other hand, have no incentive to safeguard a person's legacy. Just the opposite, there's great value in large pools of personal data with no corresponding privacy protections.

Nowhere else in the privacy world is happenstance a justification for assigning privacy rights, yet that's precisely how tech companies today acquire access to this data: They just happen to be left holding the proverbial hot potato when time finally runs out for the customer.

And why should we allow such an arbitrary event to dictate how we deal with what might arguably be the most important aspect of personal privacy in one's lifetime, namely, their legacy after death?
