U.S. Department of Justice officials said a process to overhaul how the federal government acquires artificial intelligence has made the government more collaborative and flexible in how it pursues contracts.

Government agencies received revised memos on how to handle AI procurement and AI use in April after U.S. President Donald Trump issued the first AI-related executive order of his second term. The procurement memo gave the Office of Management and Budget and the General Services Administration 100 days to develop a guide to assist in procurement and gave each agency 180 days to comply with the memo; all future contracts would be governed by those rules thereafter.

That memo's deadline is next month. Christina Baptista, senior counsel for the DOJ, speaking at the IAPP AI Governance Global North America 2025 conference, said her agency has developed cross-functional working groups that include privacy subject matter experts and are actively reviewing contract clauses ahead of the memo's enforcement date. She said a key goal for agencies is to make sure vendors — who are often juggling multiple compliance requirements across the globe — are not overly burdened.

"We're very cognizant of all of the other laws and regulations and requirements that the vendors we work with have to deal with and comply with," she said.

Baptista noted the orders also required the government to be agile — a word she was not sure had ever been used to describe the government's acquisition process — and more coordinated than ever before. But the changes would be worth it, she said, because AI changes so quickly that the government needs to adapt to keep up.

"We are happy to see this push, because we cannot keep up with the technology if we continue the way that we have continued in our acquisition processes today," she said.

Michelle Ramsden, CIPP/E, CIPP/G, CIPP/US, CIPM, senior counsel for the DOJ and a member of the Emerging Technology Board, said the revision process has also given more decision-making power to the agencies and their internal structures. This has in turn allowed for more flexibility in working with operators to achieve compliance.

Governance, too, remains a high priority for her agency, which Ramsden said will further the White House's goal of promoting innovative AI.

"We would tell you that any responsible governance empowers innovation because it creates a stable foundation and protects the organization from the harms that can undermine the whole thing," she said.

More collaboration with vendors also helps with risk management and accountability, Ramsden said, especially when handling people's data.

"Because we do have the information of pretty much everyone in our hand, but we have to make sure that we're taking care of it, but we definitely don't want to hamstring the innovation or prevent us from being able to achieve valuable services for the public," she said.

Baptista and Ramsden's comments come as another Trump executive order, "Preventing Woke AI In The Federal Government," also approaches implementation. That order — part of a package of orders released alongside the AI Action Plan — gave the OMB 120 days to develop guidance on how agencies could account for technical limitations of the order and how vendors can reach compliance, and to provide specific factors for agencies to consider when looking for unbiased models.

The order requires models used by the government to be "truth seeking" and to "be neutral, nonpartisan tools that do not manipulate responses in favor of ideological dogmas such as DEI." It requires large language models to prioritize historical accuracy and to "acknowledge uncertainty where reliable information is incomplete or contradictory."

The order has been criticized by groups like the Center for Democracy & Technology, which said the order runs counter to best practices on procurement and the administration's prior acquisition directives.

"What distinguishes these responsible procurement practices from the Trump administration's current effort to prevent federal agencies from procuring 'woke AI' tools is that they are defined, measurable, and actionable," wrote the CDT's Becca Branum and Quinn Anex-Ries. "By contrast, rooting out 'woke AI' isn't something that agencies can objectively implement."

Senior White House Advisor on AI Sriram Krishnan, AIGP, speaking at an Axios event earlier in the week, defended the order, saying the government does not want "any thumbs on the scale" for large language models the government uses.

As for how ideological bias will be measured, Krishnan said OMB and other agencies are still developing those evaluations.

"This is taxpayer money being spent on these models. And we are saying, listen, like you are free to do whatever you want, but if you want the American taxpayer to buy your model through the USG, we don't want any ideology or bias," he said. "If you have biases, just tell us what they are."

Caitlin Andrews is a staff writer for the IAPP.