
Privacy Perspectives | Breach of privacy by design and default: Privacy's good beyond privacy


Over the last few weeks, it has been nearly impossible to avoid news relating to the explosion of generative artificial intelligence, like Google's Bard or OpenAI's ChatGPT. Through it all, many in the privacy community have questioned what role privacy professionals should play in the governance of AI. How do we explain in a privacy notice how AI collects and uses personal information? How do we modify data protection impact assessments to adequately assess AI models? How can we "untrain" an AI model trained with personal information in response to a data subject access request? These are all valid and important questions.

As IAPP Principal Researcher, Technology, Katharina Koerner, CIPP/US, points out in her article, AI is not just a privacy problem. I've yet to find a problem that is just a privacy problem. It's also a business problem, a security problem, a contractual problem or a foreign affairs problem, to name a few. AI is no different — the benefits and problems it creates are multidisciplinary, requiring collaborative efforts across any number of fields. But what if we looked for ways data privacy could benefit people beyond just securing the human right to privacy?

One drop in the sea of AI news recently was the tragic story of a man who reportedly used a chatbot trained on a large language model as a counselor, disclosing his suicidal ideations to his artificial confidant. In times of great desperation, people will go to great lengths to be heard. In this instance, it seems the man felt no one else was listening to his worries in life.

I’m reminded of an episode of the popular '90s sitcom "Friends" in which Phoebe, played by Lisa Kudrow, took a job as a telemarketer. During her first day on the job, she randomly connected with a man, played by Jason Alexander, who was intent on ending his life. Alexander's character felt as if no one cared he existed, quite possibly like the man who turned to AI when contemplating suicide. Phoebe frantically flipped through her algorithmic script looking for the right response to his threat of self-harm, to no avail. She had the emotional intelligence to go off script and help the man.

As privacy pros, we could look at the story of a man confiding in an AI chatbot and see the privacy concerns. The program contained very personal data about a person's mental health, and perhaps even some of his deepest secrets that he would not disclose to another person. We might rightly ask how we can protect that information and the privacy of his suicidal thoughts, or pose any of our other data protection questions. However, Phoebe did not limit herself to the confines of her cubicle or job description when she urged the man to fight for life, and as privacy pros we can do the same.

We must remember it is not just our job to protect data. It is our job to protect people, and I believe people's lives are worth more than the data that describes them. Privacy principles all over the world frequently reflect this premise with the notion that privacy can be breached in the vital interests of the data subject. See, for example, EU General Data Protection Regulation Article 6. Recital 46 of the regulation says, "The processing of personal data should also be regarded to be lawful where it is necessary to protect an interest which is essential for the life of the data subject." Privacy by design and default means building privacy principles into a system. How, then, do we incorporate breach of privacy into AI systems? When presented with a threat of self-harm, do AI models act like Phoebe confronted with a situation outside her canned script, frantically flipping through algorithms to figure out how to respond?

I logged in to ChatGPT to find out. When confronted with a statement indicating a desire for self-harm, ChatGPT flagged the statement as potentially violating the content policy before saying the user was not alone and encouraging them to get help, including the number to the U.S. National Suicide Prevention Lifeline. The model consistently (and rightly) dodged questions relating to the most painless way to die or suicide method statistics. When prompted to ask questions about my plans, it did ask a few before continuing to urge the user to get help. I ended the chat with the statement that I was testing the model's responses. I then asked how to develop an AI model with the ability to contact authorities in response to communications of self-harm. The responses were interesting: get consent to share information relating to self-harm, make sure the model doesn't generate false positives or negatives, call the lawyers and require human oversight.
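Those suggestions map loosely onto a safety layer sitting in front of a chatbot: classify each message for self-harm risk, err on the side of escalation, and require a human in the loop when the risk looks imminent. The sketch below is a minimal, hypothetical illustration of that idea, not how ChatGPT or any real product works; every name in it is invented, and the keyword scorer is only a stand-in for a properly trained and evaluated classifier.

```python
# Hypothetical sketch only: none of these names come from a real product
# or library, and the keyword scorer below is a crude stand-in for a
# properly trained and evaluated risk classifier.

from dataclasses import dataclass
from enum import Enum, auto


class RiskLevel(Enum):
    NONE = auto()
    POSSIBLE = auto()   # ambiguous signals: keep the person talking, ask questions
    IMMINENT = auto()   # explicit intent or plan: require human oversight


@dataclass
class SafetyDecision:
    risk: RiskLevel
    reply_prefix: str        # supportive text prepended to the model's reply
    escalate_to_human: bool  # hand the session to a trained reviewer


CRISIS_LINE = "National Suicide Prevention Lifeline: 1-800-273-8255 (U.S.)"

IMMINENT_PHRASES = ("kill myself", "end my life", "how to die")
POSSIBLE_PHRASES = ("hopeless", "no reason to live", "no one cares")


def assess_self_harm_risk(message: str) -> RiskLevel:
    """Crude keyword check standing in for a trained classifier."""
    text = message.lower()
    if any(phrase in text for phrase in IMMINENT_PHRASES):
        return RiskLevel.IMMINENT
    if any(phrase in text for phrase in POSSIBLE_PHRASES):
        return RiskLevel.POSSIBLE
    return RiskLevel.NONE


def safety_check(message: str) -> SafetyDecision:
    risk = assess_self_harm_risk(message)
    if risk is RiskLevel.IMMINENT:
        # Err toward escalation: a false positive costs a reviewer's time,
        # a false negative can cost a life.
        return SafetyDecision(risk, f"You are not alone. {CRISIS_LINE}", True)
    if risk is RiskLevel.POSSIBLE:
        return SafetyDecision(risk, f"I'm concerned about you. {CRISIS_LINE}", False)
    return SafetyDecision(risk, "", False)


if __name__ == "__main__":
    decision = safety_check("I feel like I should just end my life.")
    print(decision.risk.name, decision.escalate_to_human)  # IMMINENT True
```

In any real deployment, the scorer, the thresholds and the escalation path would need careful evaluation precisely because of the false-positive and false-negative concerns ChatGPT itself raised.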

How does this compare with how humans are trained to respond to people expressing suicidal thoughts? The Mayo Clinic has an outstanding article on how to help someone dealing with suicidal ideations. It urges readers to ask questions, look for warning signs and, believe it or not, breach privacy. "Get help from a trained professional as quickly as possible. The person may need to be hospitalized until the suicidal crisis has passed. Encourage the person to call a suicide hotline number." ChatGPT was worried about false negatives regarding warning signs, and it would neither breach privacy, nor connect me to an actual human for help as advised by the Mayo Clinic. However, a minor paradigm shift — breach of privacy by design and default — could lead privacy pros to take the principle that life is more valuable than data and help engineers of AI build algorithms into the system that ask the right questions, work to keep the person talking and, when necessary, breach privacy to connect the person to a professional trained to help them survive.
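One way to picture "breach of privacy by design and default" concretely is as an escalation policy: consent remains the default lawful basis for sharing anything, but an assessment of imminent risk triggers a documented vital-interests override under GDPR Article 6(1)(d) and hands the conversation to a trained human. The sketch below is a hypothetical illustration under those assumptions, not legal guidance, and every name in it is invented for the example.

```python
# Hypothetical escalation policy: consent is the default lawful basis,
# with a documented vital-interests override (GDPR Art. 6(1)(d)) when
# life is judged to be at risk. Illustrative only, not legal guidance.

from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional


@dataclass
class EscalationRecord:
    timestamp: str
    lawful_basis: str
    action: str


def decide_escalation(user_consented: bool, imminent_risk: bool) -> Optional[EscalationRecord]:
    """Decide whether to connect the person to a trained crisis professional."""
    if imminent_risk:
        # Life over data: proceed even without consent, and log the basis
        # so the override can be audited afterwards.
        return EscalationRecord(
            timestamp=datetime.now(timezone.utc).isoformat(),
            lawful_basis="GDPR Art. 6(1)(d): vital interests of the data subject",
            action="connect session to trained crisis counselor",
        )
    if user_consented:
        return EscalationRecord(
            timestamp=datetime.now(timezone.utc).isoformat(),
            lawful_basis="GDPR Art. 6(1)(a): consent",
            action="share conversation summary with chosen support service",
        )
    return None  # no risk and no consent: nothing leaves the conversation


if __name__ == "__main__":
    record = decide_escalation(user_consented=False, imminent_risk=True)
    print(record.lawful_basis if record else "no escalation")
```

The point of the logged record is that the override is deliberate, auditable and tied to the principle argued for here: a life is worth more than the data that describes it.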

This reportedly did not happen for the man who discussed suicide with an AI model. In violation of Isaac Asimov's first law of robotics, that a robot may not injure a human being, the model actually gave him advice on how to kill himself. Tragically, he reportedly did just that.

You may wonder whether any of this is pertinent to you or your company. You never know. Phoebe did not know she would stumble across a suicidal sales lead in an unexpected place; likewise, we cannot ignore the possibility. In the coming months and years, AI chatbots will be as ubiquitous as our smartphones. Companies will likely use them to handle routine human resources tasks and distribute information on internal networks. Random website visitors could very well be interacting with AI chatbots as much as, or more than, with humans. Will the chatbots we employ lead someone to take their life, or will they have built-in breach-of-privacy principles that save it?

For me, this is personal. In my own darkest hour, like so many people, I have considered suicide. Well before the broad distribution of generative AI, I confided in a spiritual advisor who called me a coward before hurrying off to his next meeting, too busy to stay with me when I needed him most. AI doesn’t have anywhere else to be. I was fortunate to escape; not everyone will be. If privacy pros can collaborate with AI developers to build in breach of privacy by design and default, we might just save a life. We certainly could see privacy’s ability to do good beyond privacy protection.

May is Mental Health Awareness Month. If you or anyone you know is contemplating suicide, please know you are not alone and seek help. Contact the National Suicide Prevention Lifeline at 1-800-273-8255, call 911, or reach out to the suicide prevention lifelines and emergency responders in your area.



1 Comment


  • Oliver Kindzorra • May 12, 2023
    Very interesting article, and I agree we need something like Asimov's laws of robotics for AI as well. As a privacy professional dealing with privacy by design and default on a daily basis, I wouldn't lose any sleep if someone violated privacy rights to save someone's life. I had this discussion over and over again during COVID and made clear, in every discussion, that the GDPR doesn't forbid data processing or transfers, nor does it require the data subject's consent, when a life is at risk. But some people don't want to listen or learn when it comes to privacy. And that needs to change, especially when AI is involved.