A pair of recent U.S. Office of Management and Budget memos on how the federal government should use and purchase artificial intelligence offered a first look at how the White House may handle the technology.
The guides are presented as a new approach compared to prior recommendations under the Biden administration. The White House fact sheet on the memos indicated the policies "fundamentally shift perspectives and direction from the prior Administration, focusing now on utilizing emerging technologies to modernize the Federal Government."
In a statement, White House Office of Science and Technology Policy Principal Deputy Director Lynne Parker said the memos "offer much needed guidance on AI adoption and procurement that will remove unnecessary bureaucratic restrictions, allow agencies to be more efficient and cost-effective, and support a competitive American AI marketplace."
Despite the aim for a fresh perspective, the guidance on chief AI officers, managing "high-impact" AI within the government, and tracking AI performance and risk in procurement shares the same DNA as previous actions taken under Biden's now-rescinded AI executive order.
The similarities show there is a growing consensus around AI best practices in the federal workforce, according to Quinn Anex-Ries, a senior policy analyst with the Center for Democracy and Technology.
"Because this is a refinement and iteration of what was already in place, agencies shouldn't be in a position to throw everything out and use it as an opportunity to refine what they’ve already been working on," he told the IAPP.
Agencies have a year from the issuance of the memos to show they are following the administration’s risk management practices, including pre-deployment testing, impact assessments and human oversight training.
Anex-Ries added that the federal government's continued position on AI governance issues serves as a guidepost for companies, which will have to develop products adhering to the government's rules if they want to participate in the procurement process.
The guides could also serve as blueprints for the public sector at the state level. State legislatures have shown more willingness to regulate how their governments use AI while requiring transparency around its use, with California, Kentucky, Texas, Vermont, Virginia and others doing so through either legislation or executive action.
Some old, some new
The memos' lineage can be traced to President Donald Trump's first term, when a 2019 executive order charged agencies with ensuring U.S. competitiveness in the field while building trust in the government's deployment of AI, years before the rise of chatbots brought AI governance to the forefront of the conversation.
In the White House statement, OMB Chief Information Officer Greg Barbaccia said the memos capture those prior competition goals and help address "a widening gap in adopting AI and modernizing government technology."
"OMB's new policies demonstrate that the government is committed to spending American taxpayer dollars efficiently and responsibly, while increasing public trust through the Federal use of AI," Barbaccia added.
High-risk AI has been redefined as "high-impact" use, applying when AI output "serves as a principal basis for decisions or actions that have legal, material, binding, or significant effect on rights or safety." There are several examples, including safety functions for critical infrastructure, transporting chemical agents, enforcement of trade policies, certain law enforcement activities and the removal of protected speech.
Emphasis on issues like environmental impacts and algorithmic bias does not make an appearance, and agencies are not charged with prioritizing enforcement against biased AI.
Ohio State University Moritz College of Law Program on Data and Governance Director Dennis Hirsch, AIGP, pointed out that agencies adopting AI are still charged with delivering services while "maintaining strong safeguards for civil rights, civil liberties, and privacy." Those policies are perhaps a signal for the private sector not to move away from AI governance either, he said.
"This begins to tell us that responsible AI governance is not completely out the window," Hirsch said. "There are clearly some differences and important things left out, but there is more overlap than I expected."
Anex-Ries noted the Biden guidance to federal agencies created a tiered system for handling high-risk AI depending on whether it was designated as safety- or rights-impacting, whereas the Trump administration creates a single system for all high-impact cases.
"This will likely be significantly easier for an agency to figure out, because you’re only having to make one determination," he said.
Chief AI officers are also carried over as the heads of the technology's use within their departments, with the memo asking agencies to retain or designate a person to promote innovation, adoption and governance. They still have to coordinate with each other and ensure use cases and risk processes are maintained.
According to Anex-Ries, there is "clear tension" between the overarching goals of the memos and the new administration's reported use of AI. He pointed to reports of the Department of Government Efficiency allegedly using AI to determine whether employees should keep their jobs and feeding it sensitive data in its efforts to reduce waste and fraud.
"Really the significant, critical open question is, how is the rubber going to hit the road here?" Anex-Reis said.
Caitlin Andrews is a staff writer for the IAPP.