Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.

On his first day back in office, U.S. President Donald Trump signed an executive order promising to remove "barriers to American leadership in Artificial Intelligence." The order revoked the Biden-Harris administration's Executive Order 14110 and began a process of reviewing the cross-governmental efforts achieved under the auspices of that order.

To buy time to develop AI policy objectives of its own, the Trump administration tasked the White House Office of Science and Technology Policy with undertaking this review in coordination with other agencies, consulting with stakeholders, and releasing an "AI action plan" within 180 days.

On 23 July, right on schedule, the White House released this long-awaited plan, finally clarifying the one-sentence policy statement in the executive order, which promised "to sustain and enhance America's global AI dominance in order to promote human flourishing, economic competitiveness, and national security."

Like any major OSTP report, the action plan is an advisory document without force of law and without any direct authority over federal government actions. Nevertheless, the report provides the most comprehensive view yet into the second Trump administration's priorities for AI policy.

Previously, these priorities have only been visible in the legislative efforts supported by the White House — such as the successful passage of the TAKE IT DOWN Act and the unsuccessful inclusion of the AI moratorium in the budget reconciliation bill — and in other administration actions. A recent USD90 billion investment in Pennsylvania to support data centers and energy infrastructure, for example, provided a glimpse into the White House's view that AI dominance is inextricably linked to energy production capabilities.

Winning the race

Some of the main AI risks the administration highlights include national security, foreign competition, domestic economics and a continued concern over digital replicas and deepfakes. The AI Action Plan appears to include an array of values and action items inspired by various corners of the Trump administration but divides its recommended actions into three pillars: accelerating innovation, building American AI infrastructure and leading in international AI diplomacy and security.

Alongside the action plan, and after a major AI speech from the president, the White House released three executive orders implementing aspects of the plan:

  • "Preventing Woke AI in the Federal Government"
  • "Accelerating Federal Permitting of Data Center Infrastructure"
  • "Promoting the Export of the American AI Technology Stack"

In contrast with the action plan, these executive orders do have binding authority on federal agencies, subject to the limits of presidential powers.

Comparing public sentiment with White House priorities

To help contextualize the significance of the AI action plan, the IAPP undertook an analysis of the comments OSTP received in response to its request for information. The vast majority of these 10,000 comments came from concerned individuals around the country, while some 755 were filed by private-sector organizations, nonprofits, nonfederal government entities, professional or scientific associations and academics.

Focusing on the subset of comments from public and private-sector organizations, we have divided the recommended actions in the final OSTP report into two categories: those that appear to align with significant trends in the comments, and those that appear to derive primarily from the administration's own internal guideposts.

Wishes granted

A few major trends among the 755 comments were clearly reflected in the White House's 28-page AI Action Plan. Organizations were generally split about their attitude toward AI governance, with some aligning fully with the administration's focus on driving AI innovation to maintain American AI leadership, and others pushing for a greater emphasis on guardrails at the federal level.

AI literacy and workforce empowerment

There were countless calls for investments in AI literacy and education among the comments submitted to OSTP. Many organizations, such as Carnegie Mellon University, advocated for "integrating AI education across disciplines rather than treating it as a standalone field" in K-12 and higher education curriculums.

Others such as Amazon and Google focused more on reskilling and upskilling. Several organizations took calls for general AI education a step further and advocated for AI education that encourages human-centric principles.

In its first pillar, focused on innovation, the plan cites the White House's previous executive orders 14277 and 14278, which ordered further investment in AI education for youth and in preparing the American workforce for increased demand in trade jobs resulting from technological innovation, respectively.

In its second pillar regarding infrastructure development, the White House shares its plan to invest in an American workforce. The plan tasks agencies like the Department of Education, the Department of Labor and the National Science Foundation with these developments, yet it is unclear how these goals will comport with the recent focus on agency restructuring.

Despite the stark divide between the two administrations' general ideologies, both the Biden administration's executive order on AI and the new AI Action Plan recognized the importance of building AI literacy among those who will join the workforce in the future and of upskilling the current workforce to address job displacement.

Investments in data center infrastructure and energy

Commenters also frequently mentioned the need to expand existing infrastructure and energy grid capabilities to meet the demands AI development imposes, including support for investments in a resilient and stable energy infrastructure. This focus reflects a Department of Energy report finding that AI would contribute to a tripling of energy needs from 2023 to 2028.

In fact, the plan promises investment in a revamped energy grid that could manage the increased pressures of AI training and compute. It also goes further than the suggestions offered by commenters, vowing to support streamlined permitting for data centers and semiconductor manufacturing facilities, partially via reductions in environmental regulations.

Public-private collaboration

In a time of deregulation, most commenters also highlighted the inevitable need for collaboration between the public and private sectors.

Comments highlighted the need for federal agencies "to foster collaborative partnerships with academia, industry, and civil society" to establish national AI standards, share data and generally serve as "catalysts for innovation."

The AI Action Plan recognizes this imperative and tasks the federal government with collaborating with both the private tech industry and agencies like the U.S. National Institute of Standards and Technology and the National Science Foundation to connect researchers to resources and facilitate an open-source and open-weight research scheme. The White House championed this type of collaboration for its positive impact on the ability of smaller businesses and startups to contribute to AI development and America's AI strategy.

Interestingly, NIST and its existing AI Risk Management Framework are a frequent presence across the comments. A significant number of organizations recognized the AI RMF as a workable U.S. standard for AI and supported continuing the framework or even expanding NIST into an independent regulatory body.

Even as NIST has suffered from staffing and budget cuts, the Trump administration heavily relies on the agency in its AI Action Plan to help enact many of its policy goals, including a revision to the RMF that would "eliminate references to misinformation, Diversity, Equity, and Inclusion, and climate change," as well as call-outs to the agency's efforts toward sector-specific standards, AI evaluations and other ongoing efforts.

New ideas in the action plan

The White House addressed a few notable themes in its plan that deviated from the themes seen in OSTP comments.

First, the administration's rhetoric against "woke AI" and the accompanying executive order were only prioritized by a few of the organizations that submitted comments to the OSTP. But the plan makes this a top priority.

Similarly, the plan goes further than the public sentiment expressed on semiconductor manufacturing, export controls and the Department of Defense's involvement in AI development, highlighting these issues as top priorities.

Continuing the recent policy debate about some form of federal moratorium on state AI regulations, the action plan incorporates the idea as far as possible without legislative action. The plan directs the Office of Management and Budget to work with any federal agencies that have "AI-related discretionary funding programs" to "consider a state's AI regulatory climate when making funding decisions and limit funding if the state's AI regulatory regimes may hinder the effectiveness of that funding or award." It is unclear whether this would apply to any existing federal funding programs, but it could still serve as a warning shot for state AI legislative efforts.

Independent agencies make a surprising appearance in the plan, which includes a direction for the U.S. Federal Communications Commission to "evaluate whether state AI regulations interfere with the agency's ability to carry out its obligations and authorities under the Communications Act of 1934."

It also mentions a review of all U.S. Federal Trade Commission investigations commenced under the prior administration to "ensure that they do not advance theories of liability that unduly burden AI innovation." Even more surprising, the plan appears to open the door to modifying prior FTC "final orders, consent decrees, and injunctions," requesting that, "where appropriate," the government "seek to modify or set-aside any that unduly burden AI innovation." The process to review and adjust consent orders is complex and requires judicial involvement, but this part of the plan could signal a new focus on such revisions at the FTC.

Not part of the plan

Among the most common priorities identified by organizations that submitted comments to OSTP, three notable issues were missing from the AI Action Plan: guidance on copyright issues raised by AI, federal privacy legislation and guardrails on high-risk AI use cases.

Numerous organizations and nearly all of the more than 9,000 individuals who submitted comments voiced their concern about the risks of copyright infringement posed by AI model training, while recognizing its entanglement with innovation. Many fear the repercussions on creatives and the disincentive to create new content that underenforced infringement could create.

However, some shared the view President Trump voiced in his address announcing the plan: that copyright enforcement should not go so far as to inhibit innovation, and that the "fair use" doctrine of existing U.S. copyright law is meant to be flexible enough to promote such innovation. The president's discussion of copyright stood in stark contrast to the absence of any mention of copyright in the action plan.

New privacy protections do not make an appearance in the action plan, though commenters had frequently highlighted the need for a comprehensive national privacy law to preempt state laws and serve as a foundation for the data use restrictions that would undergird AI governance practices. For some organizations, the need for privacy legislation was linked to the desire for safeguards on AI technology.

However, the plan departs from many prior OSTP reports by avoiding the question of data privacy altogether — though it mentions the principle of privacy with respect to model security, scientific research and the deployment of AI across the federal government.

Legislative priorities, including for any privacy law, are not a part of the action plan.

Finally, many commenters identified the need for a risk-based regulatory framework and some type of guardrail on high-risk use cases, somewhat akin to the EU AI Act and building off NIST's AI RMF.

Yet the AI Action Plan contains no guardrails or clear guidance on any risk-based regulation, apart from calling on some new and ongoing work from NIST. While many commenters supported some regulation generally, even those more aligned with the administration's focus on streamlined development recognized some need for guardrails.

The plan includes only slight nods in that direction, for example by encouraging frontier model developers to control for "ideological bias," pushing for investments in understanding how AI systems work so they can be better controlled, and supporting the evaluation and assessment of AI performance and reliability in all sectors.

Looking ahead

Consistent with prior comments by the administration, the AI Action Plan is pro-innovation and pro-tech — reflecting many of the comments from industry stakeholders while avoiding legislative fights and guidance on controversial topics like copyright.

In the coming months, it is likely the Trump administration will actively pursue the tasks laid out in the action plan. There are still some areas of uncertainty surrounding the impact that changes to the structure and funding of federal agencies will have on their ability to implement the administration’s priorities, including NIST, the NSF and the Department of Education.

The next steps are not all clear, but now there is at least an action plan.

Stephanie Forbes is an IAPP Summer Privacy Fellow. Cobun Zweifel-Keegan, CIPP/US, CIPM, is the managing director, Washington, D.C., for the IAPP.