As the U.S. government prepares to enforce limits on state artificial intelligence laws via President Donald Trump's recent executive order, White House Office of Science and Technology Policy Director Michael Kratsios told U.S. lawmakers he sees sector-specific AI regulation as viable and necessary.

Kratsios did not offer many details when pressed on specific elements of the new order — including the definition of "onerous" state AI provisions — during a 14 Jan. appearance before the U.S. House Committee on Science, Space and Technology's Subcommittee on Research and Technology. Instead, he deferred to his Trump administration colleagues or to previous guidance for answers while encouraging Congress to work with the administration on AI, though he omitted details on the specific role lawmakers should play in crafting the federal legislation mandated by the executive order.

"We want to create a regulatory environment that provides a level of clarity and a level of understanding for all of our innovators, and the most important part of that is promulgating and working towards a use case sector-specific approach to AI regulation," he said.

"Creating a one-size-fits-all regulation around AI is not the way that we can best deal with all these new AI technologies," Kratsios continued. "Folks that are developing AI-powered medical diagnostics should continue to be regulated by the FDA, for example. Anyone who's developing a drone should continue to be regulated by the FAA."

There is a robust role for the U.S. National Institute of Standards and Technology to play in setting standards for trustworthy AI, according to Kratsios, who added there are areas where lawmakers and the administration can provide clarity.

His appearance, his first before Congress since the executive order was signed, offered some insight into the administration's views on AI issues heading into 2026.

The scope of the executive order

There has been little movement on the federal side since Trump signed the order mandating agency actions to limit the impact of state AI laws. The U.S. Department of Justice's AI Litigation Task Force, charged with suing states over their laws, was established within the order's allotted 30-day period. Other deliverables from the order are due 90 days after its signing.

Meanwhile, New York's Responsible AI Safety and Education Act was signed into law in December 2025, and California's rules for automated decision-making technology and risk assessments took effect 1 Jan.

At the House subcommittee hearing, lawmakers on both sides of the aisle tried to figure out next steps following the order. U.S. Rep. Jay Obernolte, R-Calif., said he believed both states and the federal government could regulate AI, but that the federal government should go first and establish its role so states know theirs.

"I think what everyone believes is that there should be a federal lane, and that there should be a state lane, and that the federal government needs to go first in defining what is under Article One of the Constitution, interstate commerce, and where those preemptive guardrails are," he said. 

Kratsios said he still opposes state-level regulation because it could hurt smaller developers unable to keep up with varying compliance requirements. He reiterated the order's assurance that "lawful" state actions related to child safety protections, AI computing and data infrastructure, and state government procurement will not be affected.

But Kratsios deferred when U.S. Rep. Don Beyer, D-Va., asked what authority his office has to define a state's ability to govern AI, or how it would be determined that a state law is burdensome. Most of that work, he said, would be a U.S. Department of Commerce undertaking.

"It's a process to be determined," he said, referring to defining onerous laws.

Additionally, Kratsios restated the White House's desire to create a national framework and encouraged lawmakers to reach out to groups like the AI Education Task Force.

The role of NIST, AI standards

Kratsios expressed support for the mission of NIST and its Center for AI Standards and Innovation, formerly the AI Safety Institute, noting the creation of reliable standards is "absolutely important." But he stopped short of saying the latter should be codified under a forthcoming bill from Obernolte.

NIST's role has been uncertain as Congress debates how much funding the agency should receive after it lost staff early last year. The administration proposed cutting NIST's funding in the most recent round of spending bills, but appropriators voted in early January to increase it.

U.S. Rep. Suhas Subramanyam, D-Va., said NIST lost 400 staffers last year and asked how Kratsios could reconcile those cuts with the agency's importance. He also asked what role the government should play in mitigating AI risks.

Kratsios said he was unfamiliar with those cuts but maintained the agency has a "very important role" in setting advanced metrics for model evaluation, which could be used across all industries.

"You want to have trust in them so that when everyday Americans are using — whether it be medical models or anything else — they are comfortable with the fact that it has been tested and evaluated," he said.

Kratsios also said NIST should be "depoliticized," a goal the Trump administration laid out in its AI Action Plan, which called for removing references to bias and discrimination from the agency's internationally referenced AI Risk Management Framework.

"Inserting political rhetoric into their work is something that devalues and corrupts the broader efforts that NIST is trying to do across so many important scientific domains," Kratsios said.

How AI misuse should be handled

Lawmakers also sought insight into how the administration views AI misuse, with the U.S. military's recently announced partnership with Grok, X's AI chatbot, a frequent focus.

The chatbot has been under fire for generating nonconsensual explicit deepfakes, a capability X said it would disable after regulators around the world launched investigations. The military's partnership with Grok comes as it looks to expand its AI usage.

Kratsios deferred questions about that contract to the U.S. General Services Administration and to an April 2025 guidance document on procurement within the government.

He said the Trump administration is committed to protecting children's safety and privacy online, but "the misuse of AI tools requires accountability for harmful or inappropriate use, not necessarily blanket restrictions on the use and development of that technology." Any federal employee found to be misusing an AI product would be held accountable, he said.

Caitlin Andrews is a staff writer for the IAPP.