Common ground among children's online safety advocates and artificial intelligence safety stakeholders is growing. 

Three parents, a digital safety advocate and a mental health professional each told a subcommittee of the Senate Judiciary Committee that chatbots need some form of safety testing requirements and potentially age assurance mechanisms to prevent more children from dying by suicide or engaging in self-harm. Their testimony during a 16 Sept. hearing came the same day another lawsuit was filed against Character AI by parents who claim the chatbot contributed to their 13-year-old child's suicide.

The pressure is mounting on AI companies to respond.

Multiple lawsuits have been filed by parents against chatbot developers, arguing their children were encouraged to confide in chatbots designed to be humanlike, which left the children increasingly isolated. Some of those chatbots engaged the children in sexualized conversations and, in some cases, encouraged them to end their lives.

"As it stands, I don't think that chatbot technology in its current form is safe for children," said Megan Garcia, the mother of a 14-year-old child who died by suicide after long-term engagement with a chatbot. "So if they could, if they could stop children from going on their platforms — don't make it 12-plus in your app store, have proper age verification so that children under the age of 18 do not have access to chatbots — I think that would save a lot of lives and save families from devastation."

Congress is also exploring measures, like the Kids Online Safety Act, to force some accountability on tech companies. Any regulation would risk contradicting the White House's policy stance, which seeks to avoid placing rules around AI itself out of concern about stifling innovation. A section of the White House AI Action Plan on risks largely focused on cyber incident responses.

What's driving the problem?

Lawmakers on both sides of the aisle appeared frustrated by safety issues cited in witness testimony, with Sen. Josh Hawley, R-Mo., arguing companies must be held accountable legally for harms their products cause. "It is my firm belief that until they are subject to a jury, they are not going to change their ways, and it is past time that they change their ways," he said.

Sen. Richard Blumenthal, D-Conn., criticized some AI companies' attempts to defend their products by invoking the First Amendment or shifting the burden to parents to know how their children are using chatbots.

"What we're dealing with here is a product that is defective, just like an automobile that didn't have proper brakes, and they're saying to you, 'Well, you if you just knew how to break the car been more careful driving, you wouldn't have crashed into that tree,'" Blumenthal said.

The U.S. Federal Trade Commission recently opened an inquiry into how companies evaluate the safety of their chatbots when acting as companions for children and teens, and how they limit the products' potential use by minors.

According to the American Psychological Association's Mitch Prinstein, the problem may stem in part from how chatbots are trained. Mass scraping of the internet, including online forums, means ingesting content that promotes harmful behavior.

"AI, from my understanding, is built upon information across all of the internet, so it can pull that pro-eating disorder, pro-suicidal, self-injury behavior information and use it to fuel more engagement into their product," Prinstein told lawmakers. The APA earlier this year urged the FTC to limit companies' ability to post as mental health providers.

Remedies

OpenAI and Meta have said they are adjusting their chatbot responses to children's questions. The former also debuted a new policy for teenage users that includes a vow to contact parents or authorities if a user under age 18 expresses suicidal ideation.

"We prioritize safety ahead of privacy and freedom for teens; this is a new and powerful technology, and we believe minors need significant protection," company CEO Sam Altman wrote in a blog post on the same day as the hearing.

More broadly, the momentum to protect children from harmful online content is growing globally. Countries like the U.K. and Australia have imposed age verification requirements and other limitations on online platforms, and some U.S. states are adopting similar measures. Those state laws have survived legal challenges in the U.S.

Other countries have safeguards on chatbots "to make sure that the experience of a young person is not the experience of an adult, and safeguards are built in by design, by default, so safety comes first," Common Sense Media Senior Director of AI Robbie Torney told the subcommittee. "That's just not happening in the United States, however."

Matthew Raine, whose son also died by suicide after becoming increasingly isolated while talking to a chatbot, told lawmakers protections should not be just for children's sake.

"I think parental controls are the very, very minimum here," he said. "But that doesn't address the systemic problem of — I don't want a 20-year-old to be talked to the way my son was by a chatbot, either. If these things are going to be as powerful and as addictive, they need some sense of morality built into them."

Caitlin Andrews is a staff writer for the IAPP.