Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.

Following a public consultation, U.S. President Donald Trump's administration released its AI Action Plan late last month to accelerate the development and use of AI. The key objective is included in the title: "Winning the Race: America's AI Action Plan."

My IAPP colleagues and other insightful voices in the AI ecosystem have already provided significant analysis. Below, I have linked to many of these resources in case you are looking to understand the plan and its various implications, and I would encourage you to read or listen to at least one of them. My intent here is not to summarize this analysis but to identify the components that I think are particularly important for AI governance professionals. While not a law, the Action Plan provides important direction and reveals the current administration's priorities for AI policy.

AI governance professionals are essential

It won't come as a surprise to many who have been following the Trump administration's actions closely that one of the key themes in the plan is avoiding regulation at both the federal and state levels. A moratorium is not explicitly discussed in the plan, but it proposes limiting funding to states that seek to regulate AI. This aligns with some of the concepts we heard during the proposed AI moratorium discussions.

While it appears that federal AI legislation under this administration will be unlikely, the AI Action Plan provides direction to several governance functions to advance the development of AI and undergird its infrastructure. This includes the development of regulatory sandboxes, support for and leadership of AI standards, and a federal Chief AI Officer Council, to name a few.

Additionally, as we have seen in other global AI strategies and pieces of legislation, there is a recognition that people are an essential part of the plan. These priorities are outlined in a six-point plan to "Empower American Workers in the Age of AI," including improving AI literacy through training and upskilling.

AI adoption challenge

For me, the most interesting part of the plan was the recognition that the U.S. faces significant barriers to AI adoption. The bulk of the 28-page document is focused on how the U.S. is going to support the development of AI. The report states that the bottleneck to realizing "AI's full potential is not necessarily the availability of models, tools, or applications. Rather, it is the limited and slow adoption of AI."

Adoption is the crux of the challenge. The section "Enable AI Adoption" recognizes that AI systems must be trustworthy. However, as I shared last month, the U.S., along with other advanced economies, has a trust problem when it comes to adopting AI in a significant way. For example, the Board of Governors of the Federal Reserve System identified that AI adoption in the U.S. ranges between 20% and 40%. And while adoption is increasing, those figures do not yet reflect adoption significant enough to transform the workplace or the services that shape society.

The adoption dependency is important to note. As the Global Trust Report identifies, emerging economies are adopting AI at faster rates than the U.S. and its allies. Furthermore, it ties healthy adoption to high rates of AI literacy. In the U.S., rates of AI literacy are measurably lower than in other nations surveyed. While an explicit connection is not made in the AI Action Plan, I believe that AI governance professionals are a critical dependency for improving the adoption and use of AI in the U.S.

State AI agendas continue — for now

While the full impact of the AI Action Plan will emerge in the months to come, it will be interesting to see which states move forward with proposed AI legislation. Combined, there are hundreds of AI-related bills in flight across the states. At the IAPP, we track comprehensive AI legislation, and at present, only four such bills have passed and ten remain active. Time will tell whether some states choose to forgo federal funding in order to create their own rules, or whether the direction in the AI Action Plan will create a chilling effect.

The U.S. vs. the rest of the world

Many of the resources below provide great analysis on the implications of the U.S. direction in relation to global AI politics. It is important to note that in the days following the release of the AI Action Plan, China delivered its own Action Plan on Global Governance of Artificial Intelligence. China's version emphasizes global collaboration to advance AI for the benefit of all, a stark contrast to the recent AI sovereignty agendas we have seen around the world. It harkens back to early AI governance conversations, in which there was a recognition that global collaboration on common principles and policies would benefit all nations. Again, in the coming months, it will be interesting to see which components of each of these plans come to fruition.

The same week the U.S. AI Action Plan was released, the EU approved the General-Purpose AI Code of Practice, with clear guidance on how GPAI model providers can comply with the comprehensive EU AI Act. The approval of this code also resolved the debate about whether the EU would choose to pause the AI Act: European Commission President Ursula von der Leyen said in no uncertain terms that the AI Act will not be paused.

Where do we go from here?

While there is now more clarity on the U.S. AI agenda, many questions on how the action plan will be prioritized and implemented remain. It appears that while there is significant divergence on formal legislation for AI across the world, many of the underpinning governance mechanisms remain important across all global AI strategies and plans.  

Key governance mechanisms like standards, training, sandboxes, and councils will become necessary functions for increasing the adoption of AI, creating the demand for the AI capacity the Trump administration wants to build. This simply cannot be done without competent and trained AI governance professionals working to build AI systems that are trusted and responsible.

More on this topic:

Ashley Casovan is the managing director for the IAPP AI Governance Center.

This monthly column originally appeared in the AI Governance Dashboard, a free weekly IAPP newsletter. Subscriptions to this and other IAPP newsletters can be found here.