The White House marked the one-year anniversary of its landmark executive order on artificial intelligence by reporting that federal agencies have completed all of the tasks laid out in the ambitious plan to address aspects of AI governance, safety and security.

The White House touted some of the achievements stemming from agencies' respective implementation of the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, noting departments collectively took on "more than one hundred" tasks. The initiatives included efforts aimed at "standing up for workers, consumers, privacy, and civil rights" as well as "managing risks to safety and security" brought on by the rise of AI deployments.

"President (Joe) Biden instructed me and leaders across the Administration to pull every lever to keep pace with rapid advancements in AI to mitigate the risks so we can harness the benefits," U.S. Secretary of Commerce Gina Raimondo said in a statement. "Over the last year, that is precisely what we’ve done at Commerce, building a national AI safety institute, collaborating with leaders in industry, academia, and civil society, and working with partners and allies around the world to write the rules of the road on AI.”

The order was introduced at a time when AI was reshaping how people view technology and what guardrails should be placed around it. A year after OpenAI's ChatGPT showed the capabilities of generative AI, and as the EU entered the home stretch of its major AI regulation, the order aimed to make clear the White House was taking seriously the safety and bias risks associated with AI while also supporting the research and investment needed to make the country competitive.

Now a year removed from the order's introduction, the U.S. regulatory AI landscape has not changed much at the federal level. Comprehensive privacy legislation to govern data collection practices, a key element of the order, failed to gain enough congressional support to get off the ground. More than 120 AI-related bills were introduced in Congress this year, the MIT Technology Review notes, but few have made it out of House or Senate committees so far.

In the absence of legislation, the Biden administration recognizes it must continue to chart a path toward safety, security and trust, with the expectation that AI will only become more embedded in everyday life.

"We've made tremendous progress over the last year, but we're clear-eyed on the work that remains," Raimondo said. "We’re going to continue charging ahead to fulfill the goals of this historic EO to spur the safe development and deployment of AI in our societies."

Much of the order focused on developing guidance for federal agencies, which has little direct effect on the private sector. Still, the order gives companies and governance leaders a window into how the administration may look to handle AI, according to members of the legal, academic and technical corners of the AI industry.

The standards and guidance produced by federal agencies put the government in a better position to manage itself and bring the U.S. more in line with international practices.

"When the government sets its own standards, that starts to define the field, even if it's not directly applicable to private companies," WilmerHale Partner Kirk Nahra said.

"I say to my clients, you need to anticipate that you're going to be judged in a couple of years on standards that don't exist now, but they will exist in a couple of years,” he continued. "And you need to be thinking about how once the government starts to set out its approach, even if it's for the government, you're going to see that carry over to the regulatory side."

What the order did

The order came after Biden secured voluntary safety commitments from several leading AI companies and after the administration released its Blueprint for an AI Bill of Rights. It vowed to tackle AI safety risks related to biological materials, software vulnerabilities and national security, and it included more than 100 directives for various agencies to complete within a year, according to a fact sheet released by the White House.

The initiatives run the gamut: requiring developers to report red-teaming and safety testing results to the U.S. government under Defense Production Act powers, a national security memorandum on AI, best practices for AI in the workplace, and the designation of a person in charge of AI at every federal agency.

Civil rights groups praised it as a good first step but said it did not go far enough to address some harms.

The Center for Democracy and Technology, for instance, argued in a release that the national security memo and its framework do not require enough transparency and independent oversight on significant human rights issues. The group still praised the order for guiding agencies on other issues, like employment and housing.

"This one-year anniversary of the AI Executive Order isn’t the end of the road on AI governance — we’re still at the beginning," Alexandra Reeve Givens, CDT president and CEO, said in a statement. "The work ahead will involve moving beyond foundational guidance to implementing durable, effective AI governance, in both the private and public sector."

Aaron Cooper, senior vice president of global policy for BSA | The Software Alliance, said perhaps the most significant development out of the order was getting the entire federal government to look, at the same time, at AI and how it might affect each agency's area. Such coordination can help set the U.S. up for more substantive policy going forward, he said.

"Often AI gets siloed to different substantive areas of the government, in which one agency or another might just look at AI and copyright, or discrimination, or AI and national security," Cooper said. "There are lots of substantive issues that touch on AI issues."

Many of the risk and safety tasks were delegated to the Department of Commerce, which houses the National Institute of Standards and Technology and the National Telecommunications and Information Administration. The department stood up the U.S. AI Safety Institute and built on prior frameworks to release guides focused on generative AI and foundation model risks, as well as the red-teaming reporting requirements. A proposed rule would make the latter a quarterly requirement.

Georgetown University Center for Security and Emerging Technology Research Analyst Mina Narayanan said some of the order's actions can set standards even if they are largely focused on the government, like the Office of Management and Budget's memo to federal agencies on procuring AI responsibly. That action lets the government leverage its role as a buyer to set standards for what kinds of AI products can be developed for federal contracts, she said.

"The EO is really kind of the center of gravity when it comes to AI governance in in the U.S.," Narayanan said.

Perhaps the order's most important impact was to start conversations around what AI regulation could look like. Hugging Face Machine Learning and Society Lead Yacine Jernite said part of AI's complexity is that it is used in a variety of ways across society. Having the government look at specific applications and weigh in, as the NTIA did when it supported open-weight models in a report, can lay the groundwork for any domain-specific regulation down the line, he said.

"All of those conversations are going to be much better informed from having tried to grapple with AI in those specific settings," Jernite said.

What's next

The order's directives stretch out nearly two years past its signing. But what kind of action happens next will largely depend on who wins the U.S. presidential election 5 Nov. Former President Donald Trump has said he wants to "cancel" the order, whereas Vice President Kamala Harris used it to demonstrate why AI can present existential risks, the Associated Press reported.

There has been some movement at the congressional level. The Senate released a working group report on the technology, while various House of Representatives committees have released individual reports.

But most AI policy action in the U.S. has come from the states, ranging from Tennessee's ban on the unauthorized use of another person's likeness to Alabama's law criminalizing the creation of child sexual abuse material through AI. California, Utah and Colorado passed bills putting various requirements on private-sector development and use of AI.

Cooper stressed that regulations around impact assessments and safety guardrails should come from Congress to create consistency.

"I think a lot of the work done by Congress does create the building blocks for action," he said. "It's not that any one piece of legislation is going to be the basis for action next year, but the work that different committees have done sets a foundation for action."

Caitlin Andrews is a staff writer for the IAPP.