In the absence of firm guardrails around artificial intelligence development and use, instances of individual harm are becoming clearer and more frequent. Deepfakes, voice clones and other AI-generated manipulations are being added to hackers' and fraudsters' toolbelts.
The U.S. Senate Committee on Commerce, Science and Transportation Subcommittee on Consumer Protection, Product Safety, and Data Security used a 19 Nov. hearing to preview how lawmakers might assess liability when AI is used to commit fraud. Subcommittee Chair John Hickenlooper, D-Colo., stressed AI-powered extortion will only become more dangerous as the technology behind the deception improves.
"These AI-enabled scams don't just cost consumers financially, but they damage reputations and relationships, and equally important, they cast doubt about what we see and what we hear online as AI-generated content gets more elaborate, more realistic," he said during his opening statement.
Sen. Marsha Blackburn, R-Tenn., said tackling those problems would likely require a holistic approach, including improving consumers' digital hygiene and literacy. She argued a federal online privacy law, which she has championed unsuccessfully, could help protect people from the data theft that lends AI-enabled scams more realism, citing the Federal Trade Commission's finding that consumers lost USD10 billion to fraud in 2023.
"We know it is catching a lot of people who are really quite savvy when it comes to conducting their transactional life online," Blackburn said.
Prior efforts to put guardrails on tech companies have largely fallen flat. The most recent attempt, the American Privacy Rights Act, ran into major headwinds when Republicans objected to its civil rights protections and algorithmic accountability provisions. The bill has not been revived since a June House committee markup was canceled.
But Dorota Mani, whose child was one of several girls targeted by New Jersey high school classmates who created fake sexually explicit images of them, said protection from such harms may not come without legal direction. She argued Section 230 of the Communications Act, which shields online platforms from liability for third-party content, needs to be adapted to the challenges of today's digital landscape.
Consumers otherwise have little recourse when they are harmed, Mani said. She told Sen. Dan Sullivan, R-Alaska, that one Texas woman waited eight months for Snapchat to take down a fake image of her daughter, and the platform acted only after Sen. Ted Cruz, R-Texas, contacted the company.
"It shouldn't take senator or congressman, and it shouldn't take a law," Mani said. "It should be an ethical responsibility of every platform."
To ensure those protections, Consumer Reports Director of Technology Policy Justin Brookman argued the U.S. needs stronger enforcers, saying the FTC lacks the capacity to tackle widespread fraud and can pursue only more targeted cases, as it did in September. He also said platforms and AI developers need to foresee where AI might cause problems, and to consider how accessible such technology should be.
"These companies should have heightened obligations to try to forestall harmful uses. If they can't do that, maybe they shouldn't be publicly or so easily available," he said.
A day after the hearing, President Joe Biden convened allied nations in San Francisco to discuss these AI-generated fraud issues. Technologists and scientists from Canada, the EU, Kenya, Singapore and the U.K. reportedly joined members of the Biden administration, marking the first global meeting on AI since May's AI Seoul Summit.
Uncertainty swirls
The subcommittee hearing shed light on another area where federal AI rules could improve safety, but the discussion came amid the shakeup in Washington, D.C., following the 2024 election.
The Senate will change hands in 2025, with Republicans set to lead the Senate Commerce Committee and potentially usher in a new policy agenda for at least the next two years.
President-elect Donald Trump has added some prominent tech critics, including Vice President-elect J.D. Vance, to his incoming administration, and momentum has been growing on Capitol Hill to hold social media companies responsible for harms done to children online. Additionally, a bipartisan group of state attorneys general signed a letter asking Congress to pass the Kids Online Safety Act, which would require stronger safety settings and parental involvement.
It is unknown where the AI policy debate will land. Trump reportedly plans to roll back the Biden administration's AI executive order aimed at preventing safety risks and harms, having called the order "dangerous" and claimed it "hinders AI Innovation."
Only some of the companies that signed voluntary safety commitments with the current White House are pledging to uphold them, FedScoop reports.
Caitlin Andrews is a staff writer for the IAPP.