The breakneck pace of artificial intelligence development is transforming our world, impacting everything from health care to finance and entertainment. This progress, however, presents a fascinating paradox.  

While AI holds immense potential to improve our lives, its innovation and operations often clash with established data privacy principles.

Many have claimed AI is a legal "Wild West," calling to mind images of lawless chaos. While AI innovation is certainly exploring new technological frontiers, it is hardly unrestrained. The reality is a bit more mild than wild.  

AI must still adhere to all existing laws, even if it is a somewhat clumsy fit at times. Without clear and comprehensive regulations, navigating this new frontier requires a nuanced, practical approach.  

Existing privacy laws and governance functions apply to AI and can be leveraged to create a framework for mitigating exposure while fostering responsible AI development.

Privacy principles help corral AI

How might AI follow established privacy principles enshrined in law? The tensions around data minimization, notice and choice, and data subject rights demonstrate how existing laws already help wrangle AI.  

Data and the insatiable AI. Data minimization is a core privacy principle, but AI is incredibly data-hungry, thriving on vast quantities of information. So much, in fact, that we are beginning to see signs of a data shortage.

The more data an AI model is trained on, the better its performance. They do not call it a "large language model" for nothing. The need for so much data stems partly from the fact that the quality of available data is generally low. Like gold panning, models need tons of data to find the good stuff.

While specific data can be removed from a dataset, truly erasing it from a trained model generally requires retraining, which is not always practical or possible. So, data might be kept indefinitely.

This insatiable appetite for data directly contradicts the minimization principle.   

Notice and choice vs. unlimited outputs. Meaningful consent is a cornerstone of data privacy laws. The EU General Data Protection Regulation and California Consumer Privacy Act, for example, require users to understand how their data is collected and used. This use must be tailored to a specific purpose.  

Purpose limitation becomes problematic with AI, especially generative AI. Artificial intelligence models can generate unlimited and unforeseen outputs, creating new data points and user profiles beyond the scope of the original consent.  

These new data points, or profiles, can be unique and sensitive depending on the use case. It will be challenging to provide meaningful consent limited to any particular purpose when the potential uses are uncapped.

Data subject rights and black boxes. Many AI models are complex and opaque, especially deep learning models with several layers of artificial neurons. These layers are interconnected in intricate ways, making it difficult to trace the exact path data takes through the model and how it influences the output.  

Unlike traditional programming, where a clear cause-and-effect relationship exists between code and output, AI models often learn nonlinear and statistical relationships within data. This means minor changes in input data can lead to significant changes in the output data, making it challenging to pinpoint the reasoning behind a specific result.  

This makes it difficult for individuals to understand how their data is used within the model or exercise their data subject rights, such as access and rectification under laws like the GDPR or CCPA.   
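To make the nonlinearity point concrete, consider the toy sketch below, written in Python with entirely invented weights. It is illustrative only, not how any production AI system is built, but it shows how stacked nonlinear units can turn a tiny input change into a large output change.

```python
import math

# Toy two-unit "network" with hypothetical, hand-picked weights. It only
# illustrates how stacked nonlinearities can amplify small input changes.
def tiny_model(x: float) -> float:
    h1 = math.tanh(40.0 * x - 20.0)   # sharp nonlinearity near x = 0.5
    h2 = math.tanh(-35.0 * x + 18.0)
    return 3.0 * h1 + 2.5 * h2

print(tiny_model(0.49))  # about 0.59
print(tiny_model(0.51))  # about 1.51: a 0.02 input shift moves the output markedly
```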

Privacy by design: Where privacy principles meet AI innovation. To responsibly develop, deploy, use and innovate with AI, one must leverage existing privacy principles. Privacy by design, the concept of integrating privacy considerations throughout the AI life cycle, is the cornerstone of this framework.

Privacy enhancing technologies. The problem we face today is not so much that we need more data to innovate effectively, but that we need more high-quality data. Techniques like federated learning, tokenization, homomorphic encryption, edge computing, and multiparty computation can protect data while enabling AI functionalities.

Processing with PETs can require little to no personal data, leading to better protection. Further, companies can use input and output filters to help keep AI models reined in. By adopting PETs, companies can gain user trust and access better-quality data.
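As a rough illustration of an output filter, the Python sketch below redacts obvious personal identifiers from a model response before it is returned. The patterns and sample text are hypothetical; a production filter would be far more thorough.

```python
import re

# Hypothetical patterns for direct identifiers that should not leave the model.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def output_filter(model_response: str) -> str:
    """Redact obvious personal identifiers from a model response before returning it."""
    redacted = EMAIL_RE.sub("[REDACTED EMAIL]", model_response)
    return SSN_RE.sub("[REDACTED SSN]", redacted)

print(output_filter("Reach Jane at jane.doe@example.com; the SSN on file is 123-45-6789."))
```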

All of this could, however, mean heavier data wrangling — the process of cleaning, transforming and restructuring data to make it suitable for a certain use.  

Companies should continue researching and developing PETs for AI. Privacy enhancing technologies can help companies focus on data relevance and accuracy by sifting out unnecessary personal information, which leads to enhanced model performance and reduced privacy risks. 
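One simple PET-style wrangling step is pseudonymization: swapping direct identifiers for keyed tokens and dropping fields the model does not need before data ever reaches training. The sketch below assumes hypothetical record fields and is meant only to show the idea.

```python
import hashlib
import hmac
import secrets

# Hypothetical key; in practice it would live in a key-management service.
TOKEN_KEY = secrets.token_bytes(32)

def tokenize(value: str) -> str:
    """Replace a direct identifier with a keyed, non-reversible token."""
    return hmac.new(TOKEN_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

# Hypothetical raw records containing personal data.
records = [
    {"email": "jane@example.com", "age": 34, "query": "refinance options"},
    {"email": "li@example.com", "age": 29, "query": "first-time buyer programs"},
]

# Wrangling step: tokenize identifiers and keep only the fields the model needs.
wrangled = [{"user_token": tokenize(r["email"]), "query": r["query"]} for r in records]
print(wrangled)
```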

Filter data through the "law" first. Notice and choice, fundamental to consent, are paramount. Crafting clear, concise and actionable consent notices is crucial to avoid "notice burnout" while ensuring users understand how models use their data. 

Meaningful notice means appropriate detail on how the AI should work. Users should not be surprised. A good training dataset cannot be built without appropriate rights to that data.  

It is wise to perform a privacy impact assessment or the more involved data protection impact assessment on the underlying training data before AI models use it. That way, the PIA or DPIA can map data flows, identify potential privacy risks, and propose mitigation strategies.   

Put the black box in a sandbox. Sandbox environments can be a great tool to safely train and test AI models and risk mitigations. The sandbox can be the proving ground. In the sandbox, companies can safely experiment with innovative techniques, such as synthetic data.  

Synthetic data shows promise for data augmentation to reduce bias and increase scale, but it currently faces challenges that can lead to model collapse. The sandbox can maintain innovation while preventing harm.
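To illustrate the synthetic data idea at its simplest, the sketch below fits a basic distribution to one real numeric column and samples statistically similar stand-ins from it. Real synthetic-data pipelines rely on far more sophisticated generative models; the column and numbers here are hypothetical.

```python
import random
import statistics

# Hypothetical numeric feature from a real training set (e.g., customer ages).
real_ages = [23, 31, 45, 38, 52, 29, 41, 36, 48, 27]

# Fit a simple Gaussian to the column and sample synthetic stand-ins from it.
mu = statistics.mean(real_ages)
sigma = statistics.stdev(real_ages)
synthetic_ages = [max(18, round(random.gauss(mu, sigma))) for _ in range(10)]
print(synthetic_ages)
```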

Shining some light into the black box. Transparency is crucial in building trust with AI. Privacy laws already require PIAs and DPIAs on personal data. Algorithmic assessments mirroring PIAs and DPIAs can help practitioners understand and manage potential biases and risks within AI systems.  

It may be impossible to explain all aspects of AI operations, but companies must strive to do so as much as possible. The most practical way is to leverage well-established privacy governance practices.   

The path ahead

Drawing on the above, a loose framework of reliable best practices emerges. Companies and practitioners can use it to implement privacy by design in novel AI operations and to stay on justifiable legal footing despite the current absence of a robust AI regulatory regime in the U.S. — that is, to operate AI responsibly in the "Mild West."

Algorithmic assessments are a consistent theme in emerging AI regulations. In October 2022, the federal Blueprint for an AI Bill of Rights encouraged entities "responsible for the development or use of automated systems" to "provide reporting of an appropriately designed algorithmic impact assessment."

The blueprint proposes such reports to "include at least: the results of any consultation, design stage equity assessments (potentially including qualitative analysis), accessibility designs and testing, disparity testing, document any remaining disparities, and detail any mitigation implementation and assessments."   

Although that blueprint is persuasive only and does not require algorithmic assessments of any AI operations as a matter of law, several state comprehensive privacy laws do formally require some sort of assessment for automated decision-making or profiling activities. Where they exist, these laws apply when these operations significantly affect consumers and/or involve sensitive data.  

Existing privacy laws in Colorado, Connecticut, Delaware, Indiana, Iowa, Montana, New Jersey, Oregon, Texas and Virginia require covered businesses to conduct DPIAs to accompany profiling activity that presents a reasonably foreseeable risk of unfair or deceptive treatment, financial or physical injury, or an intrusion upon the private affairs of consumers.

The California Privacy Protection Agency is in the midst of rulemaking that, if successful, will require businesses employing automated decision-making technology to conduct risk assessments before using such technology in a manner that will produce legal or similarly significant effects on consumers, will profile consumers in their capacities as employees, independent contractors, job applicants or students, or will profile consumers in a publicly accessible place.

Separate California draft regulations would grant consumers the right to opt out of automated decision-making technology that is used for such purposes. Although this patchwork of privacy laws will likely be implemented and adjudicated with nuances across jurisdictions, the underlying principles are sufficiently consistent to inform businesses' privacy by design approach to this shifting regulatory landscape.  

Shifting sands

In April 2024, another potential puzzle piece in the "Mild West" of AI regulation arose as a discussion draft for an overarching federal privacy law known as the American Privacy Rights Act. As originally drafted, the APRA would have created slightly more structure for AI regulation nationwide — without being the final word on AI regulation — by creating a right to opt out of consequential AI decision-making and by requiring certain companies to conduct annual algorithmic assessments reportable to the U.S. Federal Trade Commission.

In May 2024, however, the U.S. House Committee on Energy and Commerce Subcommittee on Data, Innovation, and Commerce released a new draft of the APRA that removed the proposed regulations squarely addressing AI.   

If enacted, the APRA would still impact AI operations generally by broadly regulating the processing of valuable personal information that AI requires to thrive. For instance, exercising one's right under the APRA to opt out of covered data transfers to third parties could limit one's exposure to AI decision-making, as could using a universal opt-out mechanism.

This recent removal of AI-specific provisions, however, raises the question of whether more granular regulation of the technology will be left to the states for the foreseeable future. Indeed, given the latest draft's effective silence on the subject, the APRA would not preempt granular state-level AI regulation.

Perhaps uncoincidentally, on 17 May, Colorado's governor signed a first-in-the-nation comprehensive AI law that goes well beyond the APRA's former AI-related terms. Effective 1 Feb. 2026, Colorado's new AI law recognizes "developers" and "deployers" of AI systems — like the EU Artificial Intelligence Act's "providers" and "deployers" — and requires both to take measures to avoid algorithmic discrimination.

In signing this law, Gov. Jared Polis, D-Colo., expressed hopes of furthering discussions regarding federal AI regulations.  

Notably, the law also creates an affirmative defense for developers and deployers that comply with designated national or international risk management frameworks for AI systems, which currently are the U.S. National Institute of Standards and Technology's AI Risk Management Framework and ISO 42001.

Harnessing the 'Mild West'

Even if some iteration of the APRA becomes law, privacy by design according to the principles above remains essential to designing and deploying AI that can transition smoothly and scale alongside the AI regulations inevitably forthcoming in the U.S.

While federal privacy legislation that comprehensively addresses data privacy and AI would kill two birds with one stone, legislative efforts do not appear to be trending in that direction currently. On the ground, state privacy regulators and the FTC are already positioned to share enforcement duties once the applicable legal frameworks mature. 

While a legal Wild West is chaotic, unpredictable, and undesirable, the current situation is more appropriately called the "Mild West." Existing privacy frameworks, like the one outlined above, allow for responsible AI innovation.

This delicate yet exciting frontier can be confidently explored by applying privacy by design and fostering algorithmic accountability.

Welcome to the Wild West? No. Welcome to the "Mild West." 

Michael Cole, AIGP, CIPM, CIPP/C, CIPP/E, CIPP/US, CIPT, FIP, PLS, is Mercedes-Benz R&D North America Managing Counsel.
John Rolecki, CIPP/US, is a partner at Varnum.