After months of speculation, U.S. President Joe Biden released the federal government's first comprehensive action on artificial intelligence.

Among the top priorities in the executive order released 30 Oct. are standards around privacy, security and safety, according to a fact sheet released prior to the text of the order. The sweeping order also seeks to prevent discrimination in AI systems while trying to protect workers' rights when the technology is used in the workplace.

"Biden is rolling out the strongest set of actions any government in the world has ever taken on AI safety, security and trust," White House Deputy Chief of Staff Bruce Reed said. "It's the next step in an aggressive strategy to do everything on all fronts to harness the benefits of AI and mitigate the risks."

Biden promised action on AI in August as other governments, such as China and the European Union, pushed to put their own regulations in place. In the months leading up to the order, Biden secured voluntary commitments from several companies to abide by a set of safety metrics while more formal regulations are crafted by the U.S. Congress. The White House also previously released a Blueprint for an AI Bill of Rights.

Speaking before signing the order, Biden appeared solemn as he outlined the enormous task that faces the federal government in trying to set guardrails around a technology that can take many forms and continues to evolve at a rapid pace. He said the nation is at an "inflection point in history" where the decisions of today will have long-lasting consequences. Those consequences could be dangerous, Biden said — but they could also help cure cancer and fight climate change.

"One thing is clear, to realize the promise of AI and avoid the risk we need to govern this technology," he said. "And there's no other way around it — in my view, it must be governed."

While there have been hearings and bills introduced by U.S. lawmakers to rein in AI, progress on Capitol Hill has been difficult as lawmakers grappled with spending bills as well as a three-week standstill in the U.S. House while it elected a new speaker.

Privacy requirements take priority

Chief among the points made in the order is that privacy and AI must go hand in hand.

"Without safeguards, AI can put Americans' privacy further at risk," the White House wrote in its fact sheet. "AI not only makes it easier to extract, identify, and exploit personal data, but it also heightens incentives to do so because companies use data to train AI systems."

The order goes on to make an explicit call for Congress to push forward with comprehensive privacy legislation. Those sentiments are on federal lawmakers' radar, as evidenced by a recent U.S. House subcommittee hearing delving into why privacy legislation should ultimately set the stage for AI regulation.

That need was reiterated by Senate Majority Leader Chuck Schumer, D-N.Y., who also called for Congress to move on legislation during a Washington Post summit last week.

"I don't know if the AI (executive order) will be the most comprehensive in history, but it's the most comprehensive in my experience," said Cam Kerry, of the Brookings Institution, who has focused on getting the U.S. to pass federal privacy legislation in recent years. 

"The call for legislation in the White House Fact Sheet reiterates the call for legislation in State of the Union addresses, but makes it clear the president is talking about comprehensive legislation," Kerry said in comments provided to the IAPP. "The (executive order) is aimed at directing federal agencies. The support for privacy-enhancing technologies should help energize that field, especially since I understand there will be resources available for this." 

Kerry also said the "scale of federal government data use" will "make a real difference," and that "the review of government use of commercially available information is long overdue."

Under the order, federal agencies are called on to develop techniques to protect people's privacy and to study how effective current privacy protections are. The idea is to strike a balance between making data accessible for training purposes and ensuring that data is secure and anonymized.

The order also directs the National Science Foundation to work with researchers on developing stronger cryptographic protections, something seen as vital to protecting privacy and national security as quantum computing advances. It requires federal agencies to study how they collect and use information, including what is purchased from data brokers, to understand how to safeguard that data.

The Future of Privacy Forum backed the call for Congress to pass bipartisan privacy legislation and said the "AI plan is incredibly comprehensive." Though the executive order "focuses on the government's use of AI," the group said, "the influence on the private sector will be profound due to the extensive requirements for government vendors, worker surveillance, education and housing priorities, the development of standards to conduct risk assessments and mitigate bias, the investments in privacy-enhancing technologies, and more."

The Center for Democracy and Technology also applauded the order. "The (White House's) forthcoming (AI executive order) represents a remarkable, whole-of-government effort to support the responsible development and governance of AI," said Alexandra Reeve Givens, the president and CEO of the CDT.

Developers will have to share safety test data

The order relies on the Defense Production Act to require developers to share their safety test results with the government before their systems are released to the public, so potential risks can be identified. It specifically asks for red-team safety test data, the results of simulated attacks run to probe a system's vulnerabilities.

But it also sets standards for those tests and requires agencies such as the Departments of Energy and Homeland Security to examine AI's risks to critical infrastructure. It orders the National Security Council to develop a national security memorandum on how the military and intelligence community use AI and how to counteract adversaries' actions.

The requirement is significant for companies as more scrutiny is applied to how generative AI algorithms create content. Researchers who have had a chance to examine openly available software have found the protections companies build in are not always foolproof, The New York Times reported. And even the creators of some AI systems do not understand how their algorithms make decisions, which makes the technology difficult to regulate.

"At the end of the day, the companies can't grade their own homework here," White House Chief of Staff Jeff Zients told NPR. "So we've set the new standards on how we work with the private sector on AI, and those are standards that we're going to make sure the private companies live up to."

Protecting workers, equity while encouraging competition

The order builds on Biden's Blueprint for an AI Bill of Rights by requiring agencies to issue guidance to landlords, federal benefits programs and contractors on how they can use AI in an attempt to reduce discrimination. That comes after studies, such as a recent one from the Stanford University Institute for Human-Centered Artificial Intelligence, showed banking algorithms can hurt low-income borrowers.

The order also requires the Department of Justice to train civil rights offices on best practices for prosecuting civil rights violations related to AI, as well as to set standards for using AI in the criminal justice system.

Suresh Venkatasubramanian, a professor at Brown University who helped write the White House's AI blueprint and who sits on the IAPP AI Governance Center Advisory Board, said the order suggested a strong approach to protecting civil rights around AI.

"I think that there's a lot ... in the fact sheet that is very encouraging and suggests a strong approach to addressing civil rights and concerns about algorithmic discrimination," he said in comments provided to the IAPP. "I welcome the broad scope of the work indicated in the fact sheet and hope that the (executive order) will also include strong protections for people subject to the technologies used within the criminal justice system."

"In general I think this is a powerful message to tech companies that the administration wants to place people before profit," he added.

Also included are provisions meant to protect workers' ability to bargain collectively and requirements to address how AI could affect job displacement and labor standards. It asks that companies put watermarks on AI-generated content to distinguish it from human-created works.

But the order also focuses heavily on promoting U.S. competition. It calls for the creation of a National AI Research Resource to make research tools more readily available and give small businesses technical assistance. It seeks to make it easier for AI-skilled individuals to study and work in the U.S. by streamlining the visa process.

"This is perhaps the most significant action that will supercharge American competitiveness," Divyansh Kaushik, associate director for emerging technologies and national security at the Federation of American Scientists, told The Washington Post.

As the world moves, pressure mounts on Congress

While the scope of Biden's order is ambitious, it makes clear there are limits to what the White House can do alone. And though Congress has made some strides and shown motivation, the rest of the world isn't waiting.

On Monday, the G7 countries agreed to a code of conduct for companies looking to develop AI tech. That voluntary code applies to Canada, France, Germany, Italy, Japan, the United Kingdom and the U.S.

Members of the Global Privacy Assembly released a resolution that covered similar ground to the White House's order. It aims to balance the rapid development of AI technologies by setting legal standards around data collection, minimizing the data needed to train systems and establishing standards for how AI should be governed to ensure accuracy.

The U.K. Department for Science, Innovation and Technology also released a paper exploring the capabilities — but also the dangers of misuse — that can come from using AI tech. The paper's release will likely set the tone for the upcoming U.K. AI Safety Summit later this week.

Editor's note: IAPP Editorial Director Jedidiah Bracy contributed to this reporting.