More jurisdictions are expected to try to settle on artificial intelligence policies and rules in 2025, but how those conversations will include AI safety is not yet clear.
The world has seen several major attempts at putting guardrails on the AI industry as the technology continues to change and accelerate. A common theme in those discussions — whether it was the EU's AI Act, California's ill-fated Senate Bill 1047, South Korea's AI Framework Act or the yet-to-be-passed AI Bill in Brazil — was the balance between protecting AI safety and promoting innovation.
But AI stakeholders say these debates sometimes miss the complex nature of AI and what safety can mean, noting that any policy or legislation needs to spell out precise details from the start to generate more productive conversations.
Precision could be crucial as the AI policy outlook in the U.S. looks uncertain. President Donald Trump rescinded the Biden administration's AI executive order and replaced it with his own, calling for a new action plan within the first 180 days. Remarks from Vice President JD Vance at France's AI Action Summit this week signaled the administration views little to no regulation as crucial to fully harnessing AI's potential, while also warning that other global jurisdictions "tightening screws" on U.S. AI developers will not be tolerated.
"I think there's real uncertainty still about what direction we'll see on AI policy from the new administration. One of the ways to think about that is to recognize that you simultaneously have a lot of different voices that are engaging in increasingly volatile conversation about AI safety and AI regulation over the last three or four years," said Gillian Hatfield, a computer science research professor with John Hopkins University.
"I do think it's very clear, no matter the conversation about regulation, the development of AI regulation, the question of AI regulation, is not going away," she continued.
Safety through a broad scope
AI safety broadly refers to the practices and principles for developing and using AI in ways that prevent harmful outcomes to humans, according to entities including IBM, Securiti, the U.S. National Institute of Standards and Technology and the Center for Security and Emerging Technology. The Organisation for Economic Co-operation and Development, an intergovernmental group promoting sustainable growth and development, counts security and safety as part of its core principles for ensuring AI is trustworthy.
"AI systems should be robust, secure and safe throughout their entire lifecycle so that, in conditions of normal use, foreseeable use or misuse, or other adverse conditions, they function appropriately and do not pose unreasonable safety and/or security risks," the OECD principle reads.
Part of the challenge around understanding AI safety is how universal the technology has become, according to Duncan O'Daniel Eddy, a Stanford University Center for AI Safety research fellow who was speaking on his own behalf. AI has a variety of use cases across sectors, which makes it more challenging to talk about safety without specifying what application of the technology is being addressed.
"It needs to be an informed conversation of what are outcomes that you are looking to prevent and what are the benefits you’re looking to gain," he said. "So, on the innovation side, what are the exact problems we are looking to solve with AI?"
"As you start pushing the boundaries of new technologies, the safety conversation should be had simultaneously," Eddy added. "What are the potential harms we see? How do we guard against them?"
Metaverse Law founder Lily Li said a common challenge with AI safety bills is they attempt to address many harms at once. She said legislation that tries to manage automated decision-making technology while simultaneously addressing bias can create confusion for businesses, ultimately generating more debate around a given bill.
"A lot of the regulatory language around AI safety is trying to package those disparate concepts together, and so that's why it's often confusing for businesses that haven't been as regulated before to suddenly have something that sounds more like a quality control management system or industrial style control system for a software product or another product that's never had to think about these things," she said.
Littler Mendelson Shareholder Zoe Argento, CIPP/US, said she has heard from businesses that have considered dropping their use of AI altogether based on how a state might regulate it. But that does not mean those companies are unconcerned about the safety of their products, she said.
"Companies want to ensure that their AI achieves their goals, which include accuracy, equity, and effectiveness," Argento said. "There's definitely alignment."
But some common features of AI safety bills can make using AI not worth the effort for companies, she added.
"If they're required to provide an opt out, in many cases, that undermines the benefits of using these technologies. Effectively, the employer has to create two systems — the new AI system and the old labor-intensive system," she said. "If you have to create two systems, it's probably just easier to have one system, even if it's the older more labor-intensive system."
What's been signaled so far
On federal policy, Trump made good on his promise to reverse his predecessor's executive order on AI upon entering office. He replaced measures aimed at ensuring the federal government uses AI safely and requiring big producers to report their safety results with a directive to "enhance America's global AI dominance in order to promote human flourishing, economic competitiveness, and national security."
Trump's new order "revokes certain existing AI policies and directives that act as barriers to American AI innovation, clearing a path for the United States to act decisively to retain global leadership in artificial intelligence."
The order also asks the White House's technology leaders to come up with a plan in 180 days to accomplish this task. A request for information has been filed seeking public comment toward shaping an AI action plan.
While the order does say AI systems must be developed to be "free from ideological bias or engineered social agendas," there was little detail on what that might mean under the Trump administration, compared to former President Joe Biden's order.
It is still too early to know where Trump will go on AI, but a focus on the U.S.'s competitiveness in the industry is clear. He signed an executive order in his first term boosting research investments and setting technical standards. Trump kept Biden's executive order making it easier to build data centers on federal land.
In Congress, some Republicans have been skeptical AI needs safety regulations. Sen. Ted Cruz, R-Texas, now the chair of the Senate Committee on Commerce, Science & Transportation, sent a letter to then-U.S. Attorney General Merrick Garland in November asking him to investigate the U.K.-based Centre for the Governance of Artificial Intelligence as a foreign agent trying to influence U.S. policy.
"AI has vast potential to improve human welfare and society. But some technocrats and academics believe AI poses severe 'risks' to 'safety,' which they define as 'disinformation,' 'bias and underrepresentation,' and 'risks to the environment,'" Cruz wrote.
The letter later reads: "People who peddle such hyperbole are doing so not just to gain control over AI's development, but to also gain control over information flow and a citizen's ability to communicate with others free from government intrusion."
But Caitriona Fitzgerald, deputy director of the Electronic Privacy Information Center, said it is wrong to assume those who care about safety do not also care about innovation. She said those arguments are used to block regulations that could ultimately help create products the public trusts down the road.
Fitzgerald likened it to how Congress has yet to pass laws regulating social media, despite loud concerns about the harm those platforms cause, especially to children's mental health. Now, 10 states have limited children's use of social media and the public's view of social media is decidedly mixed.
"Those systems were built in such a way where safety and privacy were not kept in mind," she said, "and now you're seeing how people feel about those systems, and it's not good."
Cruz's statements highlight an ongoing congressional standstill on federal AI rules, which leaves state legislatures to continue laying the groundwork for legislation to balance safety and innovation.
State-level proposals covering AI safety in a broader context have seen mixed success.
Colorado became the first state to put requirements on high-risk systems, meaning those involved in decision-making in areas like employment, health care, education and housing. How the law will work, however, will be hashed out by a task force over the course of this year. That process has also seen pushback from businesses.
California and Connecticut's bills ultimately floundered due to a governor's veto or the threat of one. In both cases, governors cited concerns the bills could stymie AI development as the reason for their hesitancy.
Laws with more targeted approaches saw more success. Utah, for example, passed a law clarifying how AI is treated under its consumer protection law. The National Conference of State Legislatures reports over half the states have passed laws addressing the use of AI to create false images, audio or video, known as deepfakes.
Caitlin Andrews is a staff writer for the IAPP.