As the months have flown by since the release of U.S. Executive Order 14110, there have been dozens of notable deadlines and milestones. We have witnessed new regulations and implementation guidelines across sectors, sometimes covering large portions of the economy and sometimes going deep on high-risk use cases.

One lasting impact of the Biden-Harris administration's focus on trustworthy AI will be the maturation of structures for AI governance across the federal government.

At times, this will have a direct impact on the commercial sector. This is true of one of the newest milestones, the Office of Management and Budget's release of guidance for government acquisition of AI in a new memo, M-24-18.

Acquiring minds want to know

The long-awaited procurement guidance directly affects the private sector because it requires federal agencies to negotiate "appropriate contractual requirements and evaluation processes to ensure vendors provide sufficient information for agencies to evaluate vendor claims, identify and manage risk, conduct impact assessments, and fulfill requirements to notify impacted individuals and implement appeals."

This echoes the policy debate currently raging in Washington, D.C., and around the world over how developers and deployers should share responsibility for the outcomes of AI systems. Contractual safeguards, downstream documentation and impact assessments have all become clear best practices, and they will now be required of vendors doing business with the U.S. government.

Another ongoing policy challenge is determining which AI systems should be subject to the highest levels of oversight and guardrails. The executive order and OMB guidance embraced the concept of rights-impacting and safety-impacting AI systems. Agencies face a special 1 Dec. deadline to provide documentation and adopt "minimum practices" for systems that meet these criteria.

The new memo adds to this requirement by specifying how agencies must document all contracts that relate to vendor-provided rights-impacting or safety-impacting systems by the same deadline.

Another notable feature of the new memo is its focus on privacy. The OMB specified that "agency privacy officials and programs have early, ongoing involvement in AI acquisition processes so that they are able to identify and manage privacy risks and ensure compliance with law and policy."

This high level of privacy office involvement dovetails with the governance structures IAPP has observed in the commercial sector, as we recently described in our Organizational Digital Governance Report 2024.

A plan for everything and everything in its plan

The newest memo builds on the government-wide binding guidance the OMB released in March, M-24-10, requiring agencies to strengthen governance, innovation and risk management for the use of AI systems.

Another deadline was set in motion by the publication of the OMB M-24-10 memo, which required agencies to publicly post a compliance plan within 180 days. As it happens, that deadline passed last Tuesday.

Federal agencies of all stripes have dutifully complied with the requirement, though a few reportedly missed the deadline.

The posted compliance plans display a wide range of structural maturity and depth of explanation. Some, like the Department of Veterans Affairs' plan, are posted to a simple web page and designed for brevity. Other agencies have issued lengthy formatted documents with charts and detailed explanations of governance processes. To compare them yourself, check out the list collected by FedScoop.

Trucking right along

One agency compliance plan that caught my eye was published by the Department of Transportation, where the acting chief AI officer role has been given to Deputy Chief Data Officer Mike Horton. I met Horton at an AI governance event in Washington right before the plan was due, and he told me a bit about the effort and creativity his agency was bringing to implementing the OMB guidance in an effective and lasting way.

This approach shines through in the DOT plan, which runs to 15 acronym-packed pages and details a layered, ongoing governance structure that builds on the minimum expectations of the OMB guidance.

The DOT has even deployed adorable nerdy acronyms to label the governance structures within its "AI Accelerator," in ways only a federal bureaucrat could have dreamed up.

For example, the platform where the DOT tracks "AI use case development, maturity, assessments, clearances, risk evaluations and mitigations, and authorities to operate from conception through retirement" is called TrUCKR, the Transportation Use Case Knowledge Repository.

Only after multiple steps of governance, risk management and budget clearance does the developer of a use case tracked in the TrUCKR gain access to the central resources of TrAIN, the Transportation AI-Enabled Network. Get it? Trains because transportation, but also training AI models? Brilliant.

Anyway, the TrAIN is simply the umbrella structure under which all operational environments for approved AI use cases will be housed for continuous compliance and risk-management monitoring by the office of the chief AI officer.

Another notable feature of the DOT structure is the use of an oversight panel. Boards and committees are common features of AI governance in the commercial sector, so it is interesting to see how similar structures can work in a federal agency context.

In the DOT's case, the agency has added the function of AI Governance Board to an existing, well-established panel. The Non-Traditional and Emerging Transportation Technology Council was established "as an internal DOT vetting body for new and emerging transportation technologies that are not yet established enough to fit into obvious modal categories or require new policy approaches."

As part of implementing the OMB guidance, the DOT updated the NETT Council's membership and charter and added two subcommittees, including an AI Safety, Rights and Security Review Advisory Committee, dubbed the SR2 Committee.

The SR2 Committee, chaired by the chief AI officer, will conduct the security reviews required by Executive Order 14110 before AI models are released to the public. It also serves a vital function, in collaboration with the CAIO, as the agency finalizes its "public use case inventory" of high-risk systems. The committee will make the final determinations about which systems to exclude from the public disclosures required by the end of this year.

The DOT describes an ongoing multilayered approach to responsibility for AI governance.

At the staff level, "operating administration representatives" are the first line of defense for evaluating systems. Specifically, they are responsible for "adequately identifying, evaluating, and continually monitoring each AI use case for its potential and realized impact on safety and rights and sufficiently documenting those assessments and reassessments in TrUCKR for CAIO initial determination."

Each assessment then makes its way to the SR2 Committee, which advises the CAIO on the final determination for each use case prior to deployment. Operators can appeal the determination to the full NETT Council.

The DOT even explains how systems will be shut down if a safety- or rights-impacting use case falls out of compliance with minimum risk-management standards, as documented and implemented by team-level operators and overseen by the CAIO. "Use cases out of compliance with minimum risk standards, as determined by the CAIO through advisement with the SR2 Committee, will suspend operations and revert to the non-AI process until compliance is reinstituted and the use case is cleared to resume operations by the CAIO or terminated if minimum risk standards cannot be met."

Standardize and harmonize

Initial agency documentation exercises like this are critical for establishing lasting governance structures at the U.S. federal level.

In the next couple of months, we will see far more detail publicly released, as agencies make determinations about which use cases to publish in their inventories.

In the meantime, agencies will no doubt be watching each other closely, under the watchful eye of the OMB, as they continue to benchmark their governance structures.

In its compliance plan, the DOT also highlights its harmonization efforts, describing itself as an "active collaborator with the Office of Science and Technology Policy, the National Institute of Standards and Technology, the Chief Artificial Intelligence Officer Council, and other federal entities that seek to interpret and implement AI management requirements consistently across federal agencies and create efficiencies and opportunities for sharing resources and best practices."

Such work is vital in both the public and private sectors as we collectively work to right-size AI governance.

Please send feedback, updates and bureaucratic acronyms to cobun@iapp.org.

Cobun Zweifel-Keegan, CIPP/US, CIPM, is the managing director in Washington, D.C., for the IAPP.