Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains. 

Last week, U.S. President Donald Trump signed a long-threatened executive order instructing his administration to take steps toward "ensuring a national policy framework for artificial intelligence." IAPP News Editor Joe Duball reported on initial reactions. 

While the order's impacts on the policy conversation in Washington are immediate, any effect it may have on the AI governance compliance landscape will take a long time to materialize. As David Stauss, CIPP/E, CIPP/US, CIPT, FIP, explained in a detailed post, the existence of the executive order is unlikely to increase compliance certainty any time soon, resting as it does on legal ground that is contestable from nearly every angle. Indeed, if the administration takes actions in line with the executive order's instructions, they are almost certain to result in protracted legal battles.

On the other hand, at some level the executive order is self-effectuating. If a goal of the administration is to tamp down on the spread of AI-specific laws and otherwise dampen the effect of already passed laws, the existence of the executive order is itself a small victory. State lawmakers, lobbyists, product teams and executives can now all point to the order as a signal that legal risk around AI is waning.


As far as federal policy goes, it is too early to tell whether the order represents an implicit acknowledgement that parallel legislative efforts to pass a moratorium on state enforcement of AI laws are doomed, as Alan Butler argued, or simply means what it says when it asks Congress to work toward enacting a single preemptive standard for AI governance.

Here I'll focus on the executive order's mention of the U.S. Federal Trade Commission. But it also includes instructions to the Department of Justice — the much-discussed "AI Litigation Task Force" — alongside other tasks to be carried out by the Federal Communications Commission, the National Telecommunications and Information Administration, and the Office of Science and Technology Policy.

With the administration operating under the unitary executive theory, it should no longer be a shock when the White House directly orders specific policy outcomes from the FTC, a practice that was historically avoided (unless you count strong hints), as I previously explained in analyzing the administration's instructions to the FTC under the AI Action Plan.

This time, under the heading "Preemption of State Laws Mandating Deceptive Conduct in AI Models," the FTC is tasked with issuing a policy statement on the application of Section 5 of the FTC Act to AI, in consultation with David Sacks, the President's Special Advisor for AI and Crypto. Specifically, "That policy statement must explain the circumstances under which state laws that require alterations to the truthful outputs of AI models are preempted by the Federal Trade Commission Act's prohibition on engaging in deceptive acts or practices affecting commerce." Sacks also explained the reasoning behind the order in a post on X.

For purposes of explaining the scope of the FTC's authority to carry out this order, I will set aside any questions of which state laws might trigger this test. I wrote previously about the White House's focus on "truthful" outputs from generative AI models.

The more fundamental question the order raises is whether the FTC Act preempts state laws. In general, no. The FTC Act was specifically designed to allow states to maintain their own, often more specific, "Little FTC Acts" or UDAP statutes. And, in fact, these have flourished, existing in overall harmony with the federal law.

But wait, can the FTC preempt state laws under the FTC Act if it wants to? Definitely. The agency has the authority to promulgate trade regulation rules when it notices a widespread pattern of deceptive or unfair industry behavior. Such rules can have preemptive effect if they run up against conflicting state laws, especially if those laws are less protective of consumers. Courts have consistently found, however, that the FTC can't "occupy the field" of a certain area of law. Rules need specificity in order to spot real inconsistencies with state laws.

An example of this from the 1980s was the FTC's Credit Practices Rule, which banned certain aggressive debt collection tactics. Creditors sued to enjoin the rule, arguing the FTC was improperly overriding state laws that explicitly allowed some of the same tactics. But because the FTC had carefully constructed its rule to only preempt states when direct conflicts arose, the D.C. Circuit upheld the rule.

Here, the FTC is not instructed to promulgate a trade regulation rule, probably for a few reasons, including the fact that doing so takes about seven years. Instead, the executive order asks for a "policy statement," which is a nonbinding guidance document explaining the agency's interpretation of its authority in order to facilitate compliance. Though a policy statement cannot create new preemptive effect, the administration seems to indicate the document would merely explain how the FTC Act already preempts certain state laws.

Given the lack of specificity in the FTC Act's general prohibitions against unfair and deceptive acts or practices, and the clear congressional intent against preemption, the federal government would have an uphill battle to enjoin a state law based on such an interpretation. This may even be true if a state law is discovered that literally mandates deceptive outputs. In such a scenario, one could imagine compliance with both laws via a prominent user disclosure.

Of course, setting aside its advisory role, the FTC is primarily an enforcement agency. At any time, the commission can enforce its existing authorities against companies. So, if the agency finds that companies have taken actions resulting in deceptive outputs from AI systems, it could bring an enforcement action under the FTC Act, even if the outputs are mandated by a state law.

To do so, the FTC would need to meet its longstanding requirements for a deception claim: The untruthful AI output would need to be shown to be likely to mislead a reasonable consumer in a manner that materially affects the consumer's decision or conduct. 

Deception can always be avoided through clear and conspicuous consumer disclosures. If a state law does mandate certain AI outputs, organizations would be well advised to properly educate end users on the factors involved, in line with the general principle of transparency in AI governance, leaving it to policymakers to debate preemption.

Please send feedback, updates and policy statements to cobun@iapp.org

Cobun Zweifel-Keegan, CIPP/US, CIPM, is the managing director, Washington, D.C., for the IAPP.

This article originally appeared in The Daily Dashboard and U.S. Privacy Digest, free weekly IAPP newsletters. Subscriptions to this and other IAPP newsletters can be found here.