xAI v. Bonta: A constitutional clash for training data transparency

Litigation tests AI transparency law while highlighting compliance risks and constitutional questions for developers.

Contributors:
William Simpson
AIGP, CIPP/US
Westin Fellow
IAPP
Editor's note
On 9 April 2026, xAI filed a complaint against Colorado Attorney General Philip Weiser seeking to enjoin enforcement of the Colorado AI Act, citing constitutional violations.
Late last year, xAI, which owns and operates the social media website X and the artificial intelligence chatbot Grok, filed a lawsuit against California Attorney General Rob Bonta in the Central District of California to enjoin enforcement of AB 2013, "Generative artificial intelligence: training data transparency." The law, which went into effect 1 Jan. 2026, requires developers of generative AI systems that are made publicly available to California residents to disclose a high-level summary of the datasets used to develop such systems.
The lawsuit asserts First Amendment, Fifth Amendment and 14th Amendment claims, pitting AI transparency against various constitutional protections. While the district court denied xAI's motion for a preliminary injunction, there is much litigation ahead, including an appeal to the Court of Appeals for the 9th Circuit. As such, this case could still set a precedent that invalidates or, at the very least, defangs all kinds of transparency laws across the nation. At present, transparency requirements are core to regulating AI technology, as political will for other forms of regulation appears to ebb.
The complaint and the contested law
In its complaint, xAI notes the datasets used to develop AI models are oftentimes uniquely curated to confer a competitive advantage upon one model versus another. The company argues that "these datasets are valuable precisely because they are not public." xAI says it works hard to maintain this secrecy through technical controls, arguably making them trade secrets under California and federal law. Accordingly, xAI's complaint describes AB 2013 as "a trade-secrets-destroying disclosure regime" that provides competitors a sneak peek at xAI's proprietary secret sauce while also compelling xAI to speak when it would rather not.
Specifically, AB 2013 requires covered developers to publicly disclose, at a minimum, 12 categories of information pertaining to the datasets used to develop their AI systems. These include the "sources or owners of the datasets," the "number of data points included in the datasets," "whether the datasets include any data protected by copyright, trademark, or patent," "whether the datasets include personal information," and "whether there was any cleaning, processing, or other modification to the datasets by the developer."
Importantly, the law exempts three types of AI systems from its scope: those that ensure security and integrity, those that operate aircraft in the national airspace, and those developed for national security, military or defense purposes and made available only to a federal entity.
A closer look at the legal issues
xAI filed a motion for preliminary injunction to prevent enforcement of the law while the lawsuit plays out. As a threshold matter, the court found xAI has standing to bring the pre-enforcement action because it alleges that disclosure could harm its constitutional interests and the attorney general has refused to disavow enforcement regarding xAI's existing disclosure. With standing established, the court considered the decisive question of whether xAI is likely to succeed on the merits.
Fifth Amendment claim
The Fifth Amendment's takings clause prevents the government from appropriating private property without properly compensating the owner. Determining an unconstitutional taking requires a multistep analysis.
Protected property interests
The first merit-based question, and the only one reached by the court under the takings clause claim, is whether xAI's datasets are trade secrets. xAI cites a case finding "combinations of public information from a variety of different sources when combined in a novel way can be a trade secret." But that does not mean all such combinations are trade secrets, nor does AB 2013 require developers to disclose how datasets are combined. A helpful analogy is food safety laws, which require ingredients to be listed on food packaging but allow companies to retain the methods of combining those ingredients in distinctive ways as trade secrets. Extending that analogy, it seems AB 2013 requires disclosure of an AI system's ingredient list, not its unique recipe.
The court came to a similar conclusion, finding xAI did not allege that it uses unique datasets, that its datasets differ in size from those used by competitors, or that it cleans and processes its datasets in a unique way. Therefore, the court determined xAI was not likely to succeed on its takings clause claim as pled.
Nonetheless, if xAI can better demonstrate its datasets are trade secrets, the next question is whether trade secrets are protected against governmental appropriation. The simple answer is yes: They are protected insofar as they represent a property interest under state law, as declared in Ruckelshaus v. Monsanto.
Per se or regulatory taking
The next issue is whether a per se taking is present when disclosure of a trade secret is required. xAI argues that government intrusion that "interfere[s] with the owner's right to exclude others from [their property]" is a per se taking that automatically triggers the right to compensation. Indeed, binding case law agrees with this assertion, albeit in the context of physical property. On the other hand, by proceeding directly to a regulatory taking analysis, Ruckelshaus implied per se takings are not applicable to trade secrets.
Similarly, a Texas Supreme Court case concluded that government appropriation of a copyrighted work does not qualify as a per se taking because there is no physical taking of a tangible object that deprives the owner of all their associated property rights; however, this reasoning is less applicable to a trade secret, which is essentially destroyed once it is made public.
If not a per se taking, AB 2013's required disclosure may constitute a regulatory taking, depending primarily on the interference with reasonable investment-backed expectations. xAI contends that state and federal trade secret protection regimes enabled it to invest in datasets under the assumption the information would be protected. But Ruckelshaus notes such regimes are "not a guarantee of confidentiality to submitters of data" and "provide no basis for a reasonable investment-backed expectation that data submitted … would remain confidential." Moreover, Ruckelshaus found that such an expectation "must be more than a 'unilateral expectation,'" and only where the government "explicitly guaranteed" confidentiality of a trade secret could one arise. Here, the state may assert it never made such a guarantee.
Bonta also cited Ruckelshaus, which held that investment-backed expectations are reduced in areas that are traditional subjects of public concern and government regulation. Bonta argued that generative AI is clearly a subject of public concern and its world-changing consequences should alert AI developers to the likelihood of regulation.
The other relevant factors are the character of the governmental action and its economic impact. In this case, these are related to the question of whether xAI's datasets are indeed trade secrets. After all, the severity of the action and its consequences depend on whether the real value lies in how the datasets are compiled, culled and processed or in the raw data itself. xAI contends the size, contents and sources of the datasets hold value, but the state argues disclosure is limited to portions of a dataset that do not destroy their economic value. Moreover, disclosure promotes the common good by informing consumers about how their data is used to train AI and allowing them to evaluate training data for biases.
Public use requirement and proper remedy
Two final details on the takings clause claim are worth noting. First, while xAI argues AB 2013 fails the "public use" requirement of the takings clause because it effectively benefits private competitors, Ruckelshaus holds that "so long as the taking has a conceivable public character, 'the means by which it will be attained is … for [the legislature] to determine.'"
Here, the law's implied purpose of assisting users in evaluating the qualities of AI systems arguably meets that threshold for conceivable public character. Second, Ruckelshaus holds that an injunction is not available to prevent an alleged taking "when a suit for compensation can be brought against the sovereign subsequent to the taking." xAI could initiate an inverse condemnation suit in state court or, under Knick v. Township of Scott, bring a federal claim under 42 U.S.C. § 1983 to enforce its Fifth Amendment right without first seeking relief in state court.
First Amendment claim
Among other rights, the First Amendment protects free expression. As such, the threshold question to any First Amendment violation is whether the government is attempting to "restrict expression because of its message, its ideas, its subject matter, or its content."
Compelled speech and standard of review
The right to free speech protects not only against restrictions on speech, but also against compelled speech. Indeed, AB 2013 compels AI developers to disclose certain information about their datasets, speech that xAI would rather avoid.
A law that compels speech is a content-based form of regulation. This typically triggers strict scrutiny, "a demanding standard" which invalidates a law unless it is narrowly tailored to advance a compelling government interest. While xAI maintains AB 2013 fails strict scrutiny, the court held that at this preliminary stage xAI's supporting case law is distinguishable from the facts at hand and AB 2013 is not subject to strict scrutiny.
For example, in National Institute of Family and Life Advocates v. Becerra, the government required clinics to provide a government-drafted notice that addressed the very issue that the plaintiff opposed. In Riley v. National Federation of the Blind of North Carolina, a statute required fundraisers to disclose to potential donors the amount of contributions collected over the past year that were actually given to charity. In addition, other, less restrictive statutes went unchallenged. And in NetChoice v. Bonta, the 9th Circuit found that compelled data protection impact assessment reporting amounted to a content-based law that required businesses to opine on and mitigate the risks that harmful online content poses to children.
The court found this case to be different: It does not involve a government script in conflict with the plaintiff's mission, it does not require a conversation-by-conversation disclosure where other options are available, and it does not mandate disclosure of controversial opinions. Even so, the court noted Pharmaceutical Research and Manufacturers of America v. Stolfi holds that a statute that requires private parties to make disclosures to other private parties, such as AB 2013, must meet strict scrutiny unless it qualifies as commercial speech.
Commercial speech exception
While xAI contends the commercial speech exception applies only to speech that "propose[s] a commercial transaction," the carveout is arguably somewhat larger. The court referred again to Stolfi, which found commercial speech even where an advertisement was not at issue. And like the statute in Stolfi, AB 2013 does not require xAI to opine on the role of or make ideological statements about certain datasets. Moreover, Stolfi found commercial speech in compelled disclosures that provided parties to an actual or potential transaction with information about that transaction, which is just what AB 2013 does.
Under the Central Hudson standard, which the court found applies here, commercial speech passes intermediate scrutiny if it directly advances a substantial governmental interest by means not more extensive than necessary. The court found AB 2013 meets this standard since some consumers are likely capable of using the dataset disclosures to evaluate the plaintiff's models. The court conceded, however, that xAI may be able to demonstrate that AB 2013 fails intermediate scrutiny through litigation.
Viewpoint-based regulation
xAI also argues AB 2013 is a viewpoint-based regulation because it exempts AI systems developed for certain uses — like security and integrity or operating aircraft — from disclosure requirements. In other words, xAI asserts that, under the First Amendment, the state cannot decide that certain ideas — in this case, datasets — are more important to keep confidential than others. Viewpoint-based regulations impose a distinction based on a "specific motivating ideology, opinion or perspective," and the government cannot compel a party to espouse a particular viewpoint.
Here, AB 2013 does not necessarily compel disclosure of one particular viewpoint, but it does give certain AI applications and their raw data components a free pass. While there may be disagreement whether discrimination based upon the ultimate purpose of an AI system equates to discrimination against the developer's viewpoint, Citizens United v. Federal Election Commission did find that restrictions based on a speaker's identity were unconstitutional. This analysis seems relevant where the purpose of an AI system relates more to a developer's identity than to their viewpoint. But whether Citizens United, which discussed the political speech rights of corporations, applies to the speech of AI — let alone equates a developer's product with their identity — remains a vexing question.
Further, xAI contends the law's purported purpose of identifying and mitigating biases cements its status as a viewpoint-based regulation because biases comprise "particular ideas and messages that the state disfavors." This argument appears more in line with viewpoint-based discrimination, though the court noted this intent came from a private supporter of the bill, not a legislator. Likewise, the state may argue the law applies to datasets whether their disclosures uncover biases or not. Finally, the law does not prohibit bias in a dataset; it requires disclosure of the dataset instead, allowing users to determine whether bias, as they understand it, is present.
In a footnote, the court dispensed with xAI's viewpoint discrimination argument at this stage since it was not convinced by the case law that viewpoint discrimination triggers strict scrutiny in commercial speech cases. This issue is likely to return in future proceedings, and where the court lands will be significant.
14th Amendment claim
Finally, the complaint submits that AB 2013 is unconstitutionally vague, failing to provide fair notice as to what information must be disclosed by developers and authorizing discriminatory enforcement.
Indeed, important definitions are missing: It is unclear whether the law applies to datasets that were used to develop a system, train a system, or both. Furthermore, the law requires a high-level summary of the datasets, including enumerated categories of information, but implies this list is not exhaustive. For instance, some listed disclosures require a yes or no answer but fail to clarify whether additional information is necessary. Perhaps this imprecision seeks to avoid any suggestion that developers "recast [their datasets] in language prescribed by the state," a level of specificity which subjected the law in X v. Bonta to strict, rather than intermediate, scrutiny.
To survive a vagueness claim, a law need not have "perfect clarity and precise guidance." That said, unconstitutional vagueness may arise if a law sets out an indeterminate legal standard or relies on "wholly subjective judgments without statutory definitions, narrowing context, or settled legal meanings."
In its order, the court embraced the flexibility of these rules and denied the preliminary injunction on the vagueness claim. For example, the court found the language "high-level summary" was adequate since it was followed by a list of information to be disclosed. Furthermore, while definitions may be missing from the statute, the court found xAI perfectly capable of understanding and using "dataset" throughout its complaint. Moreover, the fact that the statute's enumerated disclosures are nonexhaustive does not automatically make it vague.
Still, these rules may yet pose a problem for AB 2013, which entails subjective judgments as to how much disclosure is necessary. This is especially true considering the free speech issues at play, which require that the regulation in question speak "only with narrow specificity."
How do similar laws address these issues?
The issues raised by xAI in its complaint are not atypical in the context of AI transparency laws. In fact, most state laws in this realm explicitly exempt trade secrets or information protected by the First Amendment from disclosure.
For example, California's SB 53, or the Transparency in Frontier AI Act, requires developers of frontier models to publish a transparency report on their website that includes, among other things, the model's release date, its intended uses and any restrictions or conditions on its use. Furthermore, developers must disclose summaries of catastrophic risk assessments conducted as part of their governance framework. Even so, the law permits a developer to redact from its disclosures anything necessary to protect its trade secrets and to comply with any federal or state law.
Colorado's SB24-205, also known as the Colorado AI Act, requires developers of high-risk AI systems to provide deployers of their systems with documentation disclosing "high-level summaries of the type of data used to train the ... system." However, the law exempts developers from disclosing trade secrets and clarifies that no obligations are imposed that would adversely affect the freedom of speech of any developer or deployer.
Likewise, Massachusetts' Bill H97, active in the state legislature, currently has identical disclosure and exemption language to SB24-205.
New York's A6453B, or the Responsible AI Safety and Education Act, requires developers of frontier models to "conspicuously publish a copy of the[ir] safety and security protocol with appropriate redactions." Those appropriate redactions include those which "protect trade secrets" and those which "prevent the release of information otherwise controlled by state or federal law."
On the other hand, Massachusetts' Bill H94, also an active bill in the state legislature, requires developers to provide deployers with "information on the datasets used for training, including measures taken to mitigate biases," but provides no exemptions for trade secrets or constitutional rights.
Whether the transparency obligations contained in these laws do in fact violate the First or Fifth Amendment remains to be seen, but it is clear that many of these laws were designed to circumvent any such conflict. Of course, AB 2013 does not include such a derogation, which may force a court to invalidate the law in part or in whole should certain provisions prove unconstitutional.
Practical guidance for governance professionals
In response to the recent order from the District Court, how should developers and governance professionals prepare for compliance with AB 2013 while ensuring disclosure of proprietary information is limited?
First, it's important to realize that AB 2013 applies not only to developers of foundation models, like OpenAI or Anthropic, but also to any entity that "substantially modifies" a generative AI system, including the results of retraining or fine-tuning. Even if your organization uses a foundation model, any internal modification that causes material changes to its functionality or performance may bring you within scope as a "developer."
For those entities in scope, data mapping efforts should seek to identify what datasets have been used to train AI systems and their relevance to the disclosures required by AB 2013, documenting the results under privilege if possible. Furthermore, datasets should be categorized according to their degree of novelty. In other words, is a given dataset merely a raw assemblage of values acquired from an outside source, or has it been combined with other datasets, selectively culled or specially curated by your organization to fulfill the unique training strategy of your system? This analysis will help determine whether your datasets qualify as trade secrets or not.
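To make that data mapping step concrete, the following Python sketch shows one purely illustrative way a governance team might record each dataset against the disclosure categories the law enumerates (sources or owners, approximate data point counts, intellectual property status, personal information, and any cleaning or processing), alongside the novelty and confidentiality-control attributes that inform the trade secret analysis. The class names, field names and example values are this sketch's own assumptions, not statutory language or legal advice.

# Illustrative only: a minimal dataset-inventory record for AB 2013 data mapping.
# Field names are shorthand for the disclosure categories discussed above; they
# are not the statute's exact wording.
from dataclasses import dataclass, field, asdict
from enum import Enum
import json


class Novelty(Enum):
    """Rough proxy for how much curation the organization has added."""
    RAW_THIRD_PARTY = "raw third-party assemblage"
    CURATED = "selectively culled or combined in-house"
    PROPRIETARY = "uniquely curated to the system's training strategy"


@dataclass
class DatasetRecord:
    name: str
    sources_or_owners: list[str]
    approximate_data_points: int
    includes_copyrighted_material: bool
    includes_personal_information: bool
    cleaning_or_processing: str          # brief description of modifications
    novelty: Novelty                     # informs the trade-secret analysis
    confidentiality_controls: list[str] = field(default_factory=list)  # e.g., access controls, NDAs

    def public_summary(self) -> dict:
        """High-level view limited to the enumerated categories; omits fields
        the organization may treat as confidential (novelty, controls)."""
        summary = asdict(self)
        summary.pop("novelty")
        summary.pop("confidentiality_controls")
        return summary


if __name__ == "__main__":
    record = DatasetRecord(
        name="web-crawl-2025-q3",               # hypothetical dataset
        sources_or_owners=["public web crawl"],
        approximate_data_points=1_200_000_000,
        includes_copyrighted_material=True,
        includes_personal_information=True,
        cleaning_or_processing="deduplication and PII filtering",
        novelty=Novelty.CURATED,
        confidentiality_controls=["role-based access", "vendor NDA"],
    )
    print(json.dumps(record.public_summary(), indent=2))

Keeping the internal record richer than the public summary mirrors the distinction drawn earlier between an ingredient list and a recipe: the enumerated categories can be disclosed while the curation details that may constitute trade secrets stay behind access controls.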
In tandem with the previous step, technical and organizational measures to limit the exposure of information both internally and externally, such as access controls and non-disclosure agreements, will buttress the claim that datasets have remained confidential and are, in fact, trade secrets.
Organizations should also determine the grounds supporting their expectation that their trade secrets would not be subject to public disclosure. xAI is relying on trade secret protection regimes, but that may not be enough absent an explicit guarantee from the state that trade secrets would remain confidential.
Furthermore, organizations should prepare varying degrees of compliance with AB 2013's disclosure requirements. One version of disclosure might be limited to the black letter of the law, keeping responses as high-level and general as possible, limiting responses to the enumerated categories, providing only a yes or no answer where sufficient, and keeping trade secrets, well, secret. Another version may more closely follow the transparent spirit of the law, sharing unenumerated categories of information and providing an explanation behind any yes or no answer. A third version might serve as a hybrid between these two poles.
Finally, depending on the outcome of this case, expect similar lawsuits against other AI transparency laws. These issues are inherent in many disclosure laws, as evidenced by the derogations that other state AI laws provide to sidestep free speech and property conflicts. But even the existence of derogations doesn't preclude declaratory actions seeking to clarify whether those laws contain unconstitutional provisions.
Although the litigation process between xAI and Bonta is in its early stages, this is an important case to watch. Not only could the case disrupt a primary means of AI regulation, but it also exposes the constitutional questions fundamental to transparency obligations surrounding AI and the clash between fair information principles and individual liberty.




