The U.S. Office of Management and Budget completed one of the more ambitious tasks the White House set before its agencies last year, finalizing policies governing federal agencies' use of artificial intelligence.
The final memorandum, released 28 March, requires agencies to take specific actions to test, monitor and address the risks their AI use poses to the public, ensuring it does not infringe on people's rights and safety. It lays out uses presumed to be safety- or rights-impacting, touching on everything from election infrastructure and autonomous vehicles to medical applications and biometric identification, to name a few.
The guidance also offers advice on managing risk when procuring AI systems, and requires agencies to increase the transparency of their AI use and strengthen AI governance, including by designating chief AI officers to coordinate AI use across each agency and by establishing AI governance boards to oversee that use.
The requirements come 150 days after U.S. President Joe Biden first directed agencies to assess their AI use and risks under a draft memorandum, a directive issued as part of the White House's sweeping executive order on AI.
"Federal agencies have a distinct responsibility to identify and manage AI risks because of the role they play in our society, and the public must have confidence that the agencies will protect their rights and safety," the fact sheet read.
So far, agencies have been meeting the order's benchmarks, including those around companies' "red team" test reporting and the launch of a research lab, according to the government.
Chloe Autio, an independent consultant on AI governance policy, said the policies are striking because they set hard rules for AI use in government, unlike the voluntary risk mitigation frameworks major tech companies agreed to in 2024. She also pointed to the requirement that each agency designate an AI officer and to the language detailing that role's specific responsibilities.
"Anyone who knows or has worked on corporate AI governance issues knows that just putting someone in charge is not enough," she said. "You need to really tell that person and give them the resources and a roadmap for how to do their job effectively."
Increased transparency is a theme throughout the memorandum. Agencies will have to report how they use AI in an annual inventory. If an agency cannot implement the required safeguards, it must stop using the technology unless leadership can explain why ceasing use would itself create a safety or rights risk, or impede critical agency operations. Defense and intelligence agencies are exempt. Agencies have until 1 Dec. to put the safeguards in place.
"These are the actions that will ensure that when we use AI for all the reasons we need to use it in government ... we are protecting individual rights and we are protecting the liberties that are the core values of this country," Arati Prabhakar, director of the White House Office of Science and Technology, said during a press conference commemorating the order's release.
Safeguards could include letting travelers opt out of facial recognition at airports without delay, as the Transportation Security Administration plans to expand its use of the technology. Another would be having a human oversee AI used to detect fraud in government services, according to the fact sheet.
The policies also require the government to consult the public, researchers and the federal workforce before those guardrails are put in place. Those requirements show the government is taking the need for human review and oversight of the technology seriously, according to Ridhi Shetty, policy counsel for privacy and data at the Center for Democracy and Technology.
"It emphasized the importance of making sure the public understands the systems that are affecting them," she said.
But Shetty added it is unclear what checks will apply when agencies request a waiver to avoid disclosing their AI use, and how they will report metrics for that use. She said the policies should be considered a floor on which agencies build their AI governance, and she hopes future revisions will provide greater clarity.
To ease AI adoption, the memorandum also directs agencies to ensure their AI projects have adequate access to IT infrastructure and data, with data assessed for possible quality or bias issues before use. Agencies are told in particular to assess generative AI and make sure its use does not pose undue risk.
The memorandum also aims to boost the U.S. competitive edge on AI. The administration plans to hire an additional 100 AI professionals by this summer, and the president's proposed fiscal year 2025 budget dedicates an additional USD5 million to upskilling that talent. Each agency's AI governance board, to be stood up by 27 May, will be led by the deputy secretary or their equivalent.
Federal procurement practices will also get attention under the policies. The OMB plans to review agency contracts with AI providers later this year to ensure they align with the memorandum's principles. It will also issue a request for information on how to responsibly procure AI in government to inform future purchases.