Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.

The AI Action Plan released this week by the White House sets out over 97 proposed actions. Though not every proposal amounts to a single concrete task, since the overall report is called an "action plan," it makes sense to refer to its components as "actions." For a broad overview of the plan, with comparisons to the trove of public comments that preceded it, see IAPP's prior analysis.

Some of the bulleted actions are relatively clear and achievable, while others tend toward loftier or more amorphous language. A handful have already been pursued via the three executive actions signed by the president on the same day the plan was released. The rest simply signal the administration's goals on AI policy, serving as direction for government agencies to carry out this vision moving forward.

Unlike an executive order, which carries the force of the president's power and often includes specific deadlines to achieve goals, most of the actions in the plan remain in the idea phase. Some may move forward within executive agencies without further direction, while others may remain just a twinkle in the eyes of the report's authors, Michael Kratsios, David Sacks and Marco Rubio, until the president chooses to breathe life into them.

One highly unusual feature of the action plan is the inclusion of proposals for actions to be taken by independent agencies. Historically, agencies like the Federal Communications Commission and the Federal Trade Commission have not been subject to the direct control of the president. But part of President Trump's agenda has been to challenge this state of affairs, asserting high levels of control over independent agencies, including the power to fire commissioners at will. Accordingly, both the FCC and FTC make appearances in the action plan.

No undue burdens

The FTC bullet point below deserves particular attention. Depending on the extent to which it is pursued, the policy directive could have profound implications for the practice of privacy and AI governance in the U.S. In full, it reads:

  • Review all Federal Trade Commission (FTC) investigations commenced under the previous administration to ensure that they do not advance theories of liability that unduly burden AI innovation. Furthermore, review all FTC final orders, consent decrees, and injunctions, and, where appropriate, seek to modify or set-aside any that unduly burden AI innovation.

Notably, the grammar of these actions deviates from that of most others listed in the plan. Unlike the majority of the bullet points, which begin by stating which agency will lead the effort, this bullet is written as a bare imperative, with no stated actor. It is unclear from its structure whether the intention is for the FTC itself to carry out the review — though that is the most likely result, via the office of Chairman Andrew Ferguson — or if another part of the government, such as a White House department, would take this on.

It is also worth comparing this language directly against the Biden-Harris administration's approach to AI at the FTC. President Biden's Executive Order 14110 mentions the importance of existing consumer protections for AI. The language clings closely to the custom of avoiding the appearance of directing the FTC's actions. While presidents' chosen chairs of the FTC generally do work in harmony with the priorities of the administration, the appearance of directly ordering the agency to do something was, until now, taboo.

Instead, Biden's executive order merely provided general direction for how to prioritize consumer protection enforcement in the area of AI systems:

"Use of new technologies, such as AI, does not excuse organizations from their legal obligations, and hard-won consumer protections are more important than ever in moments of technological change. The Federal Government will enforce existing consumer protection laws and principles and enact appropriate safeguards against fraud, unintended bias, discrimination, infringements on privacy, and other harms from AI."

Khan's AI legacy

At least to some extent, the FTC under Chair Lina Khan pursued this objective. The most notable AI-related order during the Khan years was Rite Aid, which resulted in a detailed set of governance prescriptions — as well as some major injunctive remedies — reflecting industry-standard safeguards to test and mitigate bias and privacy harms from biometric AI systems.

Most other AI-related cases during the Khan period, including those that were part of the recent roundup known as "Operation AI Comply," were concerned more with deceptive claims in the marketplace about AI capabilities than with improperly governed AI systems. These cases received bipartisan support, although then-Commissioner Andrew Ferguson made clear in separate statements that his support should not be taken as "the regulation of AI qua AI," but merely "holding generative-AI companies to the same standards for honest-business conduct that apply to every industry."

A notable exception to this bipartisan line of cases was Rytr, a settlement that could be ripe for re-examination under the new White House policy. Both Republican commissioners dissented from the matter. Much of their concern centered on the decision to enforce against a developer over the potential fraudulent use of its technology by others. As Ferguson wrote in a dissent joined by Commissioner Melissa Holyoak, "Treating as categorically illegal a generative AI tool merely because of the possibility that someone might use it for fraud is inconsistent with our precedents and common sense."

But there is no reason to believe the review of consent decrees that "unduly burden AI innovation" will be limited to those matters focused on AI companies. Every big tech company is subject to the ongoing oversight of the FTC via a consent decree. Will they begin to argue that these orders are hampering their ability to innovate?

The public interest so requires

Under Section 5(b) of the FTC Act, the agency may modify a prior order whenever it believes that conditions of fact or of law have changed sufficiently to require such action or if the public interest so requires.

The best recent example of a contested modification proceeding is the FTC's ongoing dispute with Meta.

Defendants may also petition to have orders modified, as Scott Zuckerman recently did, kicking off a process whereby the FTC has sought public comment on whether to modify or vacate the consent order regarding his former "stalkerware" app.

But what if the modification is not contested? What if both parties agree that a lesser set of injunctive terms would be appropriate in the case?

Under the FTC's rules of procedure, after the commission delivers an "order to show cause" to the company explaining why it believes a modification is appropriate, the company has 30 days to respond. If the order is unopposed, the commission — that is, the sitting commissioners — has the power to approve the modification to the terms of the consent order.

In short, much depends on FTC commissioners' views on whether existing consent decrees unduly burden AI innovation. From Big Tech on down, companies may soon see the opportunity to request a review under the new regime.

Please send feedback, updates and undue burdens to cobun@iapp.org.

Cobun Zweifel-Keegan, CIPP/US, CIPM, is the managing director, Washington, D.C., for the IAPP.

This article originally appeared in The Daily Dashboard and U.S. Privacy Digest, free weekly IAPP newsletters. Subscriptions to this and other IAPP newsletters can be found here.