A proposed 10-year ban on U.S. states enacting and enforcing artificial intelligence regulations survived in a revised version of Congress' reconciliation bill, which passed out of the House by a narrow 215-214 vote on 22 May. The bill advanced to the Senate, where the moratorium as currently constituted is already receiving notable bipartisan pushback.

Moratorium provisions were notably tweaked before moving out of the House. One update was made to the moratorium's application, which now covers state laws "limiting, restricting or otherwise regulating" AI systems or automated decision systems "entered into interstate commerce." Exemptions were also added for state laws imposing criminal penalties.

A House Committee on Energy and Commerce spokesperson told the IAPP the updates came after consultation with consumer protection groups. They added the changes are important to preserve enforcement against child sexual abuse material and to ensure criminal penalties for AI abuses are not diminished.

Rep. Luz Rivas, D-Calif., mounted an effort to strike the moratorium from the bill before passage, but the challenge was defeated in the House Committee on Rules.

The reconciliation measure's effects on taxes and spending garnered far more attention than the moratorium, but its survival in the final House text is one of the stronger indicators of the White House approach to AI. U.S. President Donald Trump's administration has been critical of EU efforts to put guardrails around the technology, arguing such regulation threatens to stifle the industry's ability to innovate.

U.S. tech companies are echoing the White House's sentiments on innovation burdens, urging passage of a federal law that would preempt the dozens of AI laws states are pursuing — mirroring a request they have made for federal privacy legislation — while taking a light touch in doing so.

Potential Senate hurdles

As the bill moves to the Senate, Republicans are expected to offer some changes. Some members of that caucus are already expressing doubt about the moratorium, citing states' ability to protect consumers from potential harms until Congress acts.

"We certainly know that in Tennessee, we need those protections," Sen. Marsha Blackburn, R-Tenn., said during a Senate Committee on the Judiciary subcommittee hearing on deepfakes. "And until we pass something that is federally preemptive, we can't call for a moratorium on these things."

Blackburn was referring to her state's passage of a law designed to protect artists' voices and images from being used in unauthorized works created by AI. Deepfakes are among the issues drawing interest from state and federal lawmakers alike, with Trump signing the TAKE IT DOWN Act into law earlier this week. The law requires internet platforms to remove sexually explicit images and videos of individuals shared without their consent, including AI-generated content.

Senate hearing participants were largely supportive of the NO FAKES Act, which would give victims the ability to bring legal action against those who knowingly create deepfakes and profit from them. It also protects platforms from liability if they remove the content, a provision that has garnered support from companies such as YouTube and Google.

House subcommittee debate

While the moratorium could face bipartisan scrutiny in the Senate, the House Republican majority showed unity through its approval of the measure and during a House Energy and Commerce subcommittee hearing the day before the floor vote on the reconciliation bill.

With the hearing focused on AI regulation and U.S. competitiveness, some witnesses supported Energy and Commerce Republicans' argument that such a moratorium is critical to defending U.S. dominance in the AI sector and preventing uneven regulation for companies to navigate.

"Europe would never allow its member states to go out and regulate AI by themselves," said Sean Heather, the senior vice president of the U.S. Chamber of Commerce.

"We should stop international patchworks and AI regulation. We should not be in a rush to regulate. We need to get it right, and therefore taking a time out to discuss it at a federal level is important," he continued.

Rep. Jay Obernolte, R-Calif., a member of a bipartisan House AI task force, said the moratorium should not be seen as benefiting Big Tech, but as a means to ensure small companies can succeed. He criticized state legislatures for getting out ahead of Congress, saying their resistance to passing a federal privacy law last year would translate into a similar hindrance on Congress' AI efforts.

"They feel a creative ownership over their frameworks, and they're the ones who are preventing us from doing this now, which is an object lesson to use here of why we need a moratorium to prevent that from occurring," he said.

Obernolte also pushed back against arguments that the bill would prevent states from regulating AI under consumer protection laws covering fraudulent and deceptive practices. He said that as long as those laws do not target AI specifically, "the states will be free to do that."

Democrats pushed back, saying states have only acted when Congress has failed to do so and that consumers expect lawmakers to take action when they are at risk.

"They expect real action from us to rein in the abuses of tech companies, not to give them blanket immunity to abuse our most sensitive data even more," said Rep. Lori Trahan, D-Mass.

A frequent topic was the potential effect chatbots can have on children, with attention paid to the case of a 14-year-old Florida boy who died by suicide after interacting with a Character.AI chatbot. A federal judge has allowed a lawsuit blaming the company for his death to go forward.

AI Now Institute co-Executive Director Amba Kak argued the case shows why such a moratorium would be dangerous in the long run.

"Prevention is the cure when it comes to a range of AI harms, and what we're seeing instead is a proliferation of very similar kinds of applications to the ones that caused this tragedy in the first place," she said.

Caitlin Andrews is a staff writer for the IAPP.