Just as U.S. state legislative sessions wind down with more artificial intelligence laws on the books, federal policymakers appear to be seeking limits on state AI enforcement.
In advance of a committee markup scheduled for 13 May, the U.S. House Energy and Commerce Committee published proposed text for its part of a draft budget reconciliation bill, which is on a fast track to passage by the full Republican-controlled Congress.
In relevant part, the proposal seeks to block the enforcement of state AI laws during a time-limited moratorium. Specifically, it reads, "no State or political subdivision thereof may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10-year period beginning on the date of the enactment of this Act."
IAPP's Editorial team reported on the development 12 May, including initial reactions from stakeholders.
A targeted strike?
At first blush, this language may seem expansive, as it incorporates prior broad legislative definitions of AI. Laws with the effect of regulating AI are innumerable.
However, the moratorium is presented alongside a "rule of construction," which explicitly limits its operation to those state and local rules that apply differently to AI systems than to other technical systems. Thus, the ban on enforcement appears designed to avoid applying to any technology-neutral law, meaning it would potentially not affect civil rights, consumer protection, privacy or other laws that treat harmful outcomes the same whether or not they are facilitated by AI systems.
The language takes some effort to parse, especially as it includes double-negative nested exceptions to exceptions, but the end result appears to be as follows. If a state or local law imposes a "substantive design, performance, data-handling, documentation, civil liability, taxation, fee, or other requirement" on covered AI systems, it would be subject to the 10-year enforcement moratorium, unless the requirement (A) is imposed under federal law or (B) is imposed by a generally applicable law that treats non-covered models and systems "that provide comparable functions" the same as covered AI models or systems.
Also exempt are state and local laws that do not "impose a fee or bond." Laws that do impose a fee remain exempt if they treat covered systems the same as non-covered systems and impose only "reasonable and cost-based fees."
By its terms, the rule of construction also excludes those laws designed to facilitate the development and deployment of AI, i.e., "removing legal impediments" or "streamlining" regulatory procedures.
Preempting AI exceptionalism
The moratorium would seem to target laws like the Colorado AI Act, which singles out high-risk AI systems. It would also proscribe enforcement of other common types of legislative proposals, such as those in the employment sector and those governing the development of foundation models. Its intended effect on the dozens of state AI laws that apply to the states' own internal governance of such systems is less clear.
But the intended effect of the provision on comprehensive state privacy laws is immediately apparent. The majority of the 19 states with such laws include certain provisions that apply only to automated decision-making technologies. As currently drafted, the moratorium explicitly calls out "automated decision systems" and appears to echo the relevant definition under the current draft California regulations — systems that "replace human decision making" — while also capturing a broader category of systems that "materially influence" human decision-making.
If given effect, the proposed moratorium would therefore restrict the enforcement of ADMT rules, while preserving the ability of states to enforce the rest of their comprehensive privacy laws — assuming the applicability of the remainder of these laws is severable from the effect of the moratorium.
Less clear is any possible effect on other privacy laws, such as those focused on biometrics. If the scope of legal protections for biometric technologies does not extend beyond the bounds of the federal definition of AI systems, could the enforcement of laws like Illinois' Biometric Information Privacy Act be impacted?
The lead-up to an end-run
The proposed AI moratorium is just one small act of a much longer play.
Congressional Republicans are working to come together around a large package of deep budget cuts and other initiatives that they are seeking to advance as part of a budget reconciliation package. Because reconciliation bills are not subject to the Senate filibuster, the package could pass without Democratic support.
As part of the usual course of the reconciliation process, Congress instructed various committees to submit recommendations on changes in laws within their jurisdiction to help reduce the budget by specified amounts.
The Energy and Commerce Committee was asked to reduce the budget by USD880 billion over 10 years. In pursuit of this obligation, the committee is entering a markup session 13 May to discuss whether to move forward with four committee "prints" covering broad areas within its jurisdiction: energy, environment, health and communications. Though ostensibly a legislative vehicle for the budget, bills like this can include new policy directives, so long as they do not violate strict parliamentary rules.
Among other proposed changes — including authorizing the auctioning of wireless spectrum currently reserved for government uses — the communications print includes the AI moratorium alongside a USD500 million budget for the U.S. Department of Commerce to "modernize and secure Federal information technology systems through the deployment of commercial artificial intelligence, the deployment of automation technologies, and the replacement of antiquated business systems."
Byrd is the word
It is not by accident that the AI moratorium language appears as part of a provision authorizing new spending by the Department of Commerce to acquire commercially available AI systems. There are strict congressional rules on the types of policy provisions that can be included in budget reconciliation bills, and committees must carefully design their component legislative text to survive possible later challenges.
Most importantly, once the reconciliation bill reaches the Senate, it is subject to what is known as the Byrd rule, which prohibits any matter that is "extraneous to the instructions" sent to the committee. Nonbudgetary provisions are prohibited, as are provisions whose budgetary effects are "merely incidental" to their non-budgetary components.
If the language reaches the version of the reconciliation bill considered by the Senate, senators can raise objections under the Byrd rule, which allows for the surgical removal of any offending provisions.
Here, though not explicit in the text itself, the committee appears to be setting up the argument that a moratorium on the enforcement of complex and diverging state AI laws will facilitate the Department of Commerce’s acquisition of AI systems. Though critics will see this as contrived, the logical connection here seems to be that the federal government is unable to modernize systems effectively and efficiently if the spread of commercially available AI is slowed by a patchwork of state AI laws.
Cross-chamber coordination
Such arguments echo a well-timed op-ed by Kevin Frazier and Adam Thierer in Lawfare calling for a "learning period moratorium" if Congress is unable to pass its own preemptive AI governance rules. In turn, the op-ed cites R Street's recent comments in response to the Energy and Commerce Committee's request for information on privacy and security legislation. The comment concludes:
"A pro-freedom, pro-innovation national policy framework for AI and advanced computation is essential if the United States hopes to win the race to be the global leader for the 'most important general-purpose technology of our era.' A learning period moratorium can help ensure that result by preventing a patchwork of rushed regulatory solutions from undermining AI opportunity in America."
The AI enforcement moratorium also has powerful allies in the Senate. Sen. Ted Cruz, R-Texas, who chairs the Senate Commerce Committee, has already voiced public support for a moratorium. In comments at a hearing last week, he adopted R Street's language and questioned witnesses about their thoughts on a "learning period moratorium."
What sounded at the time like a relatively offhand question now suggests that lawmakers have been coordinating across chambers to prepare for this deregulatory initiative.
The dream of the '90s is alive
As supportive evidence for the moratorium idea, its champions routinely cite the success of Clinton-era policies that encouraged the regulation-free growth of the internet economy. Specifically, they point to the Internet Tax Freedom Act, a 1998 law that prohibited states from implementing new taxes on internet access or e-commerce transactions for an initial three-year moratorium period.
Commenters — and probably state attorneys general — will raise constitutional questions about the federal government's ability to limit states' enforcement of their own laws.
Such questions are similar to, but not coextensive with, the complex area of federal preemption. Because preemption requires the federal government to commit to affirmative policies in the areas of law it preempts, the moratorium idea seems to be a fallback option, and one that is potentially more defensible under principles of federalism.
This argument held water with the internet tax moratorium, but there are significant differences here. Both moratoria rely on the federal government's strong interstate commerce powers. Taxes on e-commerce transactions, however, are so inextricably tied to interstate commerce that it is difficult to argue otherwise. The connection to the development of commercially available AI systems will rely on similar arguments but is almost certainly more attenuated.
If the momentum in support of a moratorium results in its final passage, legal challenges are almost certain to follow, and it will take some time before its full impact is felt.
But one thing is clear: Republicans are serious about their commitment to a deregulatory moment for AI.
Cobun Zweifel-Keegan, CIPP/US, CIPM, is the managing director, Washington, D.C., for the IAPP.