A smattering of public responses to the White House's call for input on its impending AI Action Plan gives some insight into the priorities of AI developers and advocacy groups in the U.S.
President Donald Trump's second administration quickly rolled back former President Joe Biden's AI executive order and issued a new one committing the U.S. to promoting AI leadership. The order included a mandate for an AI action plan to be drafted by the summer based on stakeholder feedback.
The Federal Register said more than 8,700 comments were submitted as of 18 March. None were visible, as the website said they must be reviewed before being made public.
However, several companies and industry groups have published their full comments online. The topics that repeatedly emerged show where the battles over AI policy will play out during the Trump administration.
Federal legislation before state-level patchwork
Several private entities and industry groups made their preference for Congress to pass a unifying AI bill loud and clear.
They frequently cited consulting firm Multistate's online tracker, which counts nearly 800 AI-related bills introduced this year, up from 2024. Not all of the bills impose requirements on the private sector, but companies are clearly nervous as laws such as Colorado's are slated to take effect next year and bellwether states like California move to finalize rulemaking around automated decision-making technology.
The American Bankers Association pitched federal legislation as a way to establish trust in AI companies, saying the absence of standards would leave companies scrambling like "crabs in a bucket" to gain an edge over their competitors, and that unchecked screen scraping of data could degrade internet performance.
"Banks are a model for how other industries can explore AI-enabled use cases in a fruitful and sustainable manner," ABA Vice President and Senior Counsel Ryan Miller, CIPP/US, wrote. "The compliance requirements, model risk management expectations, and supervision by specialized regulators has resulted in an environment of trust and responsible innovation, a prerequisite for prosperity."
But commenters differed on how far such legislation should go. OpenAI proposed "a tightly-scoped framework for voluntary partnership between the federal government and the private sector to protect and strengthen American national security," a plan it said would allow the government to learn from industry while shielding companies from a patchwork of laws.
"This patchwork of regulations risks bogging down innovation and, in the case of AI, undermining America’s leadership position," the company stated.
The Center for AI Policy, an advocacy research group focused on catastrophic risks from AI, countered that legislation should target large models capable of posing a threat to national security.
"Most AI systems — especially the smaller systems that are more likely to be developed by startups, academics, and small businesses — are relatively benign and do not pose major national security risks," the group said.
Risks deserve attention
The White House has signaled it will be taking a different approach to AI than its predecessor. The U.S. did not sign onto an international pledge to promote safe and secure AI during the Paris AI Action Summit, as Vice President JD Vance decried European efforts to put guardrails around the technology.
Still, the White House should look at how to handle risks associated with AI, commenters argued.
IBM indicated any legislation, for instance, should include transparency and documentation requirements and tackle gaps in existing laws to handle high-risk uses of AI "when appropriate." Those measures could include voluntary impact assessments and bias testing for high-risk AI, as well as definitions of high-risk and prohibited use cases. Disclosure requirements should cover when AI is being used and how it was trained, the company said.
When determining risk, policymakers should look at those situations in which AI is used to make a "consequential decision," IBM said, such as those which have significant legal or material effects on a person.
The government could use existing frameworks to do this. The Consumer Technology Association pointed to several International Organization for Standardization standards, including ISO 42001 on AI management systems, as prescriptive models already understood by the industry.
The Center for Democracy & Technology, meanwhile, pointed to the U.S. National Institute of Standards and Technology's AI Risk Management Framework. The nonprofit urged the White House to consider standards not only for catastrophic risks, but "also the current, ongoing risks of AI such as privacy harms, ineffectiveness of the system, lack of fitness for purpose, and discrimination."
Support through financing, adoption and advocacy
The breakout Chinese AI startup DeepSeek did not raise concerns only for international regulators. Wall Street also took notice of the company's purportedly lower cost of training its chatbot, raising questions about the investment spree in U.S. tech companies and their future dominance of the market.
The answer to these challenges is government support of the industry, some commenters said.
"Today, hundreds of billions of dollars in global funds are waiting to be invested in AI infrastructure," wrote OpenAI, which focused its letter heavily on competition and security risks posed by China's AI market. "If the US doesn't move fast to channel these resources into projects that support democratic AI ecosystems around the world, the funds will flow to projects backed and shaped by the (Chinese Communist Party)."
Many proposals said the federal government should lead by quickly adopting AI into its own operations and streamlining the process for doing so. It should also back efforts to expand AI infrastructure, such as data centers and energy sources.
Anthropic, for example, said the government should supercharge energy production by committing to build 50 gigawatts of power dedicated to AI by 2027. It could get there by tasking agencies with reforming the permitting process, working with states on local-level reforms and expediting transmission line approvals.
The U.S. should also take a more aggressive role in protecting American companies from foreign regulation, some said. Google called on the government to promote pro-innovation policies and embed its digital trade stance in future trade agreements.
"Foreign regulatory regimes should foster the development of AI technology rather than stifle it. Governments should generally not impose regulatory checkpoints on the development of underlying AI models or AI innovation," Google wrote.
Copyright law hinders innovation — or protects it
The practice of AI companies scraping the internet for data to train their models has been a hot-button issue, with several newspapers and authors taking companies to court over whether that use violates copyright law. Voice actors and artists have pointed to instances in which their work was allegedly used to train image generators without permission.
Both Google and OpenAI argued they should be allowed to train on copyrighted works under the fair use doctrine, saying such data is needed for the industry's competitiveness and growth.
Those arguments drew opposition from Alden Global Capital, which owns newspaper chains, and from members of Hollywood. An open letter sent to the White House said those companies' positions risk damaging the creative industries, and that the companies have the resources to enter into use agreements if they wish.
"There is no reason to weaken or eliminate the copyright protections that have helped America flourish," the letter said.
Caitlin Andrews is a staff writer for the IAPP.