The White House released its vision for artificial intelligence policy 23 July with a heavy focus on breaking down barriers to the technology’s innovation and adoption, including another attempt to stop states from enacting their own AI regulation.
The 28-page "America’s AI Action Plan" stems from President Donald Trump's January executive order on AI and is part of a marked tone shift toward policy aimed at fostering U.S. AI dominance in the face of fierce competition from China. Trump directed agencies to come up with a plan after extensive public comment from academia, civil society and industry. Additional executive orders putting some of the plan's points into action are expected, Reuters reports.
Key features of the plan include leveraging federal agencies to develop new standards and reimagine some existing ones, such as the National Institute of Standards and Technology's AI Risk Management Framework. It includes direction to revisit current regulations to see if any pose a hindrance to AI development and a focus on protecting free speech and fairness in large language models.
"An industrial revolution, an information revolution, and a renaissance — all at once. This is the potential that AI presents," reads the introduction. "The opportunity that stands before us is both inspiring and humbling. And it is ours to seize, or to lose."
Removing regulatory barriers
The first pillar of the plan aims to foster AI innovation by speeding up adoption, investing in worker training and removing red tape while protecting free speech.
It directs the Office of Management and Budget to work with federal agencies that have AI-related discretionary funding to consider a state's regulatory landscape when deciding whether to award money. It also recommends the Federal Communications Commission evaluate "whether state AI regulations interfere with the agency's ability to carry out its obligations and authorities under the Communications Act of 1934."
The plan also calls on the Federal Trade Commission to review investigations from previous administrations "to ensure that they do not advance theories of liability that unduly burden AI innovation," as well as to review all FTC final orders, consent decrees and injunctions and, "where appropriate, seek to modify or set-aside" any that unduly burden AI innovation.
Both provisions are aimed at limiting states' willingness to regulate AI, an argument promoted by technology companies that do not want to have to comply with a patchwork of differing laws.
The Trump administration tried to restrict states' ability to do so during the reconciliation bill fight this summer through a 10-year moratorium on state AI legislation; the provision was ultimately defeated in the U.S. Senate. The moratorium would have tied states' broadband funding to compliance, but it was removed after advocates for states' rights, AI safety and children's online safety argued consumers would be disproportionately harmed by the provision.
"AI is far too important to smother in bureaucracy at this early stage, whether at the state or Federal level," the plan reads. "The Federal government should not allow AI-related Federal funding to be directed toward states with burdensome AI regulations that waste these funds, but should also not interfere with states' rights to pass prudent laws that are not unduly restrictive to innovation."
The plan also calls for a revision of federal procurement standards, arguing any AI the government buys should be one that "objectively reflects truth rather than social engineering agendas." It directs NIST to revise its AI Risk Management Framework to remove references to misinformation, diversity, equity and inclusion, as well as climate change.
New sandboxes, evaluations
But the plan also recommends several ways to evaluate AI and develop standards.
It calls for establishing regulatory sandboxes and "AI Centers of Excellence" around the country to help businesses and researchers test AI tools and share data with the government before they go to market. Development of national standards for AI systems and their effect on certain sectors, such as health care, energy and agriculture, would be led by NIST.
To make better-quality data available, the plan recommends incentivizing researchers to release datasets by tying their cooperation to reviews of future funding proposals. It would require researchers with federal funding to disclose "non-proprietary, non-sensitive datasets" used by AI models during experimentation.
A section of the plan is dedicated to building an "evaluations ecosystem" in the U.S., calling rigorous evaluations a "critical tool in defining and measuring AI reliability and performance in regulated industries." It recommends creating guidelines for federal agencies to evaluate their own AI use and convening the NIST AI Consortium to establish new measurement metrics to promote AI's development.
Preventing risks
The plan does touch on the need for better cyber incident response protocols, highlighting potential national security risks and vulnerabilities such as data poisoning and privacy attacks, which can affect an AI system's outputs. This would include working with frontier AI developers to evaluate any potential national security risks their models might pose.
"Because America currently leads on AI capabilities, the risks present in American frontier models are likely to be a preview for what foreign adversaries will possess in the near future," the plan reads. "Understanding the nature of these risks as they emerge is vital for national defense and homeland security."
It recommends the U.S. Department of Defense work with NIST to continue developing the agency's responsible AI and generative AI frameworks. It charges the U.S. Office of the Director of National Intelligence with publishing an IC Standard on AI assurance under the auspices of Intelligence Community Directive 505 on Artificial Intelligence.
The U.S. government should also promote the creation of AI incident response plans and incorporate them into best-practice standards for both the private and public sectors, the roadmap recommends. It calls for modifying the Cybersecurity and Infrastructure Security Agency's Cybersecurity Incident & Vulnerability Response Playbooks to account for AI systems and for requiring chief information security officers to work with AI-related agency officials in developing those updates.
Initial reactions
As news of the plan and orders began trickling out in the media, civil society groups began to rally. Dozens of privacy and AI safety groups banded together ahead of the plan's release to sign the People's AI Action Plan, a joint statement urging the White House to focus on the environmental and social needs of Americans over the technology industry's desires.
"We can't let Big Tech and Big Oil lobbyists write the rules for AI and our economy at the expense of our freedom and equality, workers and families' well-being, even the air we breathe and the water we drink — all of which are affected by the unrestrained and unaccountable roll-out of AI," the statement reads.
After the plan was released, the Center for Democracy & Technology's vice president of policy, Samir Jain, characterized the plan as "unbalanced," saying its strengths, including promoting open-source and open-weight systems, supporting evaluations and focusing on security, did not outweigh its attempts to prevent state-level regulation and to regulate AI's truthfulness.
"The government should not be acting as a Ministry of AI Truth or insisting that AI models hew to its preferred interpretation of reality," he said in a statement. "There is no reason to weaken the AI Risk Management Framework by eliminating references to some of the real risks that AI poses."
Caitlin Andrews is a staff writer for the IAPP.