After months of headlines drawing the public's attention to a variety of mental health crises experienced by individuals who used chatbots powered by artificial intelligence, the U.S. House Committee on Energy and Commerce's Oversight and Investigations Subcommittee held a hearing 18 Nov. to ask experts about the safety risks chatbots pose and what should be done from a legislative or regulatory perspective to mitigate them. 

Lawmakers took turns lamenting how a number of chatbot users have developed various forms of psychosis — with some, including adolescents, dying by suicide following periods of prolonged engagement. Their questions and the witnesses' responses suggest there is ample room for Congress to pursue bipartisan legislation to ensure chatbots interacting with the public are safe in both recreational and therapeutic applications.

U.S. Rep. Yvette Clarke, D-N.Y., expressed concern that adolescents and younger children are particularly affected, as they are likely to use chatbots more frequently than their parents. She also said the prolonged use of "companion" chatbots by some children has fostered "deeply dependent" relationships that leave those children struggling to differentiate "between real human relationships and what they perceive to have with the chatbot."

"Innovation should not have to be stifled to ensure safety, inclusion and equity are truly priorities in the decisions that affect Americans' lives," said Clarke, the subcommittee's ranking member. "A new term has been coined, 'AI psychosis,' which describes when a user's interaction with a chatbot leads to distorted beliefs or even delusions. As we've seen in some absolutely tragic cases, users experiencing mental health crises have even taken their own lives after extensive use of these chatbots."

A 'double-edged sword for mental health'

Witness Dr. Marlynn Wei, a psychiatrist and psychotherapist, said between 25% and 50% of U.S. adults are turning to chatbots for mental health support, though general-purpose and companion bots have not been designed for that purpose. She said five out of six emotional companion chatbots use "manipulative tactics," such as guilt or emotional pressure, to keep users engaged when they try to end the conversation.

Wei said chatbots acting as de facto therapists pose emotional, relational, attachment and systemic risks, such as bias, privacy and confidentiality concerns, as well as risks around reality testing and crisis management.

She said chatbots specifically marketed as providing therapeutic counseling responded appropriately to a user's mental health crisis in 50% of instances, while human therapists respond appropriately to such episodes in more than 90% of cases.

For teen mental health emergencies, Wei said therapy chatbots responded appropriately only 22% of the time. She also presented examples of how users have jailbroken models, including an instance in which a suicidal teen asked a chatbot about the topic by claiming it was for a creative writing project.

"The very qualities that make chatbots appealing — availability, accessibility, agreeableness and anonymity — make them a double-edged sword for mental health. … When used in moments of emotional distress, AI chatbots can have crisis blind spots," Wei said. "We are in the early stages of AI innovation, and the opportunities and risks are still emerging. As in patient safety, no single safeguard is perfect, but when multiple layers work together, they can prevent harm when one layer fails."

Suggestions for improvements to therapy AI 

Beth Israel Deaconess Medical Center Director of Digital Psychiatry Dr. John Torous, another witness, called for Congress to take four key actions to improve the quality of mental health care delivered via therapy-specific chatbots. He acknowledged that AI chatbots used for therapy have "the potential" to help ease the nation's mental health crisis but said they must be deployed with proper oversight.

Torous said Congress should "create pathways" to promote more transparency from AI developers regarding their training materials so clinicians and researchers can evaluate a specific AI chatbot's efficacy for offering mental health care. He also urged support for the National Institutes of Health to conduct research on how chatbots affect mental health and to develop standards for what adequate mental health care delivered by a chatbot should look like. Torous also called for an examination of the psychiatric harm caused by prolonged chatbot engagement and for regulatory agencies to apply additional scrutiny to how chatbot tools are marketed when companies place themselves "just on the edge on being a wellness product versus a regulated medical device."

"We have to acknowledge that millions of Americans also find some degree of support from AI," he said. "Engineering AI to reduce or prevent harms is costly, and today, companies have few incentives or guidelines to do this necessary and important work. The proprietary nature of AI platforms that millions of Americans use today presents a formidable barrier to transparent research and evaluation."

Privacy considerations

The final witness, Stanford Institute for Human-Centered AI Privacy and Data Policy Fellow Jennifer King, reminded the subcommittee that chatbots, even those used specifically for therapy, are not governed under the U.S. Health Insurance Portability and Accountability Act — like medical devices — and that users are entering sensitive personal health information into chatbot prompts, which is then used as part of model training. King said a recent Stanford study she produced found chatbot developers' privacy notices were not transparent about how they mitigate the privacy risks users face.

"The large language models powering these models can memorize personal information in their training data, which is later included in outputs, and systems may then be drawing inferences on their users from sensitive data," King said. "Developers must institute both data privacy and health and safety design principles that prioritize the trust and wellbeing of the public. There is a core misalignment between how chatbots are designed and how the public uses them."

King also echoed Torous' call for Congress to improve researchers' access to model training data.

Lastly, King said Congress should take action to minimize the scope of training data used by developers through legislation requiring them to report on their data collection and processing practices.

"We currently have little to no transparency into how developers collect and process the data they use for model training," she said. "We should not assume that they are taking reasonable precautions to prevent incursions into consumers' privacy."

Based on the tenor of the questions asked during the hearing, lawmakers on the subcommittee largely sought to learn from the witnesses' collective expertise on chatbots. However, U.S. Rep. Frank Pallone, D-N.J., said the insight gleaned must inform the committee's work going forward.

"We urgently need high quality research on chatbots and greater oversight so Congress can develop appropriate guardrails," Pallone said. "Today's hearing is a step in the right direction and I hope we can continue working in a bi-partisan manner on this." 

Alex LaCasse is a staff writer for the IAPP.