Who you gonna call (and who's gonna call you)?

With artificial intelligence-powered products flooding the market and new draft AI regulations emerging worldwide, regulators are scrambling to clarify their fields of competence and enforcement, and to call for or initiate reforms that better marshal their resources to meet the AI governance challenge. Whether the task will fall to existing agencies or to newly created, potentially supranational, enforcement bodies is an issue playing out in real time. What is certain is that nations are signaling distinct approaches to enforcement, which often reflect their broader philosophies on AI governance.

Among the AI governance enforcement regimes worth following closely are those of the U.S., the EU, the U.K., Canada and China. These jurisdictions offer helpful insights, marking important points on the spectrum of enforcement methodologies taking shape. They also serve some of the largest markets in the world, meaning companies worldwide will be subject to their enforcement measures.

The U.S. approach

Without comprehensive AI legislation in place, AI governance enforcement in the U.S. relies on existing agencies to clarify and enforce the prevailing guardrails for AI deployment. These agencies include the Federal Trade Commission, the Department of Justice, the Consumer Financial Protection Bureau and the Equal Employment Opportunity Commission. Wider AI governance policy also includes efforts by the White House and Congress.

These regulatory bodies recently unveiled a collective pledge to combat discrimination and bias in automated systems. Mindful of the increasing prevalence of AI in individuals' daily lives and its potentially harmful outcomes, the pledge reaffirms that "existing legal authorities apply to the use of automated systems" and commits to protecting "individuals' rights regardless of whether legal violations occur through traditional means or advanced technologies."

The FTC has been particularly vocal about its authority to enforce federal law in the context of AI. In a May 2023 New York Times op-ed, FTC Chair Lina Khan wrote, "Although these tools are novel, they are not exempt from existing rules, and the FTC will vigorously enforce the laws we are charged with administering, even in this new market."

Section 5 of the FTC Act, which prohibits "unfair or deceptive acts or practices in or affecting commerce," is a key mechanism in the FTC's AI enforcement toolbox. "The FTC Act is and has broad applicability, and was actually designed by Congress for exactly this: to confront new technologies and new emerging markets," FTC Division of Privacy and Identity Protection Acting Associate Director Ben Wiseman told Cyberscoop. "Its breadth and scope provide the ability to ensure that consumers are protected when these new technologies hit the marketplace."

For example, the FTC recently opened an investigation into ChatGPT developer OpenAI over whether the company "engaged in unfair or deceptive privacy or data security practices."

In his remarks at the IAPP Global Privacy Summit 2023 in April, FTC Commissioner Alvaro M. Bedoya affirmed the same point, noting that in addition to the FTC Act, civil rights laws, torts and product liability laws also apply to AI. Put simply, according to Bedoya, "AI is regulated."

To that effect, algorithm disgorgement is emerging as an especially potent FTC enforcement tool for corralling harmful AI systems. It requires AI developers to erase algorithmic models and products trained on illicitly acquired or repurposed data.

Since 2019, the FTC has proposed five disgorgement orders, including against Cambridge Analytica and Amazon. The strategy is daunting for companies operating sophisticated but entrenched algorithmic models that were likely never designed to be unwound to prior points in time. The efficacy and longevity of the tool also have critics, some of whom point to the absence of a federal privacy law that could serve as a sufficient legislative basis for such powers.
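For companies, the practical upshot is that model provenance must be traceable: a disgorgement order naming a tainted data set can only be executed if the company knows which models were trained on it. Below is a minimal, purely illustrative sketch of such a lineage registry in Python; every name is hypothetical, and nothing here reflects an FTC-specified process.

```python
# Hypothetical sketch: mapping training data sets to derived model
# artifacts, so that a disgorgement order naming a tainted data set
# can be traced to every model that would have to be deleted.
# Illustrative only; not an FTC-specified process.
from dataclasses import dataclass, field


@dataclass
class ModelArtifact:
    name: str
    training_datasets: frozenset  # IDs of the data sets used in training


@dataclass
class LineageRegistry:
    artifacts: list = field(default_factory=list)

    def register(self, artifact: ModelArtifact) -> None:
        self.artifacts.append(artifact)

    def affected_by(self, tainted_dataset: str) -> list:
        # Every artifact whose lineage includes the tainted data set is a
        # candidate for deletion under a disgorgement order.
        return [a for a in self.artifacts
                if tainted_dataset in a.training_datasets]


registry = LineageRegistry()
registry.register(ModelArtifact("recommender-v3", frozenset({"ds-001", "ds-007"})))
registry.register(ModelArtifact("ranker-v1", frozenset({"ds-002"})))

# If "ds-007" were found to have been illicitly collected, the order
# would reach every downstream model:
for artifact in registry.affected_by("ds-007"):
    print(f"Flagged for deletion: {artifact.name}")
```

In practice lineage is far messier, with data sets feeding fine-tuned and distilled descendants, which is precisely why entrenched models are so difficult to unwind.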

Aside from FTC efforts in AI enforcement, the agencies listed above are also doing their part. For example, the CFPB issued a circular confirming creditors using algorithms must provide specific reasons for adverse credit decisions. Likewise, the DOJ's Civil Rights Division filed a statement of interest in federal court explaining the Fair Housing Act applies to algorithm-based tenant screening services. Finally, the EEOC published a technical assistance document describing the applicability of the Americans with Disabilities Act to automated decision-making in the employment context.

In general, the U.S. is taking an ad hoc, use-case-specific approach to the regulatory enforcement of AI governance, with the FTC as the default regulator. This enforcement strategy reflects the broader federal government response to AI governance: incremental and anticipatory, without becoming precautionary.

The EU approach

As the draft EU AI Act moves through the trilogue process, questions around enforcement remain contentious and unsettled. While EU-wide application of the law will be overseen by a proposed AI Board, the enforcement structure at the individual member state level is less certain.

Two member state-level enforcement structures in particular are being floated by EU bodies. Parliament proposed a single, centralized AI oversight agency in each member state, known as a national surveillance authority (NSA), whereas the Council and Commission proposed that each member state be permitted to establish as many market surveillance authorities (MSAs) as it sees fit.

In either case, data protection authorities, such as France's Commission nationale de l’informatique et des libertés and Italy's Garante (which temporarily banned ChatGPT), are boosting internal capacity and capability as well as asserting regulatory responsibility for AI governance enforcement.

The European Parliament proposal allows for certain sector-specific MSAs (e.g., in finance and law enforcement) but generally requires member states to establish exclusive NSAs to enforce the AI Act. The Council and Commission proposal, on the other hand, would enable member states to extend their existing regulators, say for employment or education, into the realm of AI in those sectors.

Both proposals have benefits and drawbacks. The Parliament's centralized proposal, for example, would put AI regulators in a better position to hire talent, build internal expertise, ensure effective enforcement of the AI Act and coordinate among member states. At the same time, a centralized system would separate AI regulators from the regulators that police human actions in the same domains, a drawback not present in the MSA proposal.

Aside from official regulators, the AI Act also fosters an evaluation ecosystem, whereby independent "notified bodies" can review high-risk AI systems and approve those deemed compliant with the act.

However, this approval process may be undermined by quality standards to which AI developers can self-attest in order to demonstrate compliance. With independent evaluation never strictly required, there is minimal incentive for developers to choose evaluation over self-imposed standards, though large companies may opt for evaluations that come packaged with ongoing monitoring services.

In terms of individual redress, the Parliament proposal includes a provision that individuals must be informed when they are subject to high-risk AI systems, as well as a right to an explanation when they are adversely affected by such systems. Under this proposal, individuals may appeal to their enforcement authority or seek an enforceable judicial remedy if their appeals go unanswered.

Finally, the newly proposed AI Liability Directive attempts to resolve confusion over civil liability for damage caused by AI systems. The directive would make it easier for parties injured by AI-related products to bring claims against the relevant AI developer or user by:

  1. Empowering national courts to compel providers of high-risk AI systems to disclose pertinent evidence to claimants regarding the AI system in question.
  2. Permitting class action lawsuits.
  3. Introducing a presumption of causation between the defendant's fault and the resulting injury, where certain conditions are met.

If lessons from the EU General Data Protection Regulation's "one-stop shop" are anything to go by, these issues may not be resolved until the eleventh hour of the trilogue negotiations.

The U.K. approach

Across the English Channel, a mere 21 miles away at its narrowest point, the U.K. is pursuing a somewhat different approach. In a white paper issued in March, the U.K. argued that its established online platform and digital services regulators are equipped to head off the risks posed by AI systems.

Under the banner of the Digital Regulation Cooperation Forum, the U.K.'s four digital regulators — the Information Commissioner's Office, the Competition and Markets Authority, the Office of Communications (Ofcom) and the Financial Conduct Authority — joined hands to provide a "more coherent, coordinated and clear regulatory approach" to the digital landscape and AI systems in particular.

Notably, in a demonstration of the cross-cutting, ubiquitous nature of privacy and data protection governance, the chair of the DRCF is the Information Commissioner. As this fact makes clear, privacy regulators are increasingly responsible for enforcement, and, oftentimes, it is infringement of privacy and data protections that lands AI developers in hot water.

These regulators are especially alert to both the dangers and the benefits of generative AI, and each has released literature on how it intends to control these outcomes.

For example, Ofcom is combating scams, phishing and "fake news," threats that are easily scalable via generative AI, through various initiatives. These include working with companies under the purview of the Online Safety Bill that develop generative AI tools to assess safety risks and implement effective mitigation procedures; monitoring the public's media literacy; reviewing detection techniques that distinguish between real and AI-generated content; and publishing information on how generative AI may affect the sectors it regulates.

Similarly, the ICO is reminding generative AI developers of applicable data protection laws and has laid out a host of questions for developers to answer when processing personal data. As with any entity covered by the U.K. GDPR, developers need a lawful basis for processing, a determination of whether they are a controller or a processor, a prepared data protection impact assessment, a plan to ensure transparency and mitigate security risks, a data minimization policy, a procedure for complying with individual rights requests, and an awareness of their obligations if using generative AI to make solely automated decisions.
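For teams operationalizing this guidance, the ICO's questions lend themselves to a structured pre-deployment checklist. The sketch below paraphrases the items above in Python; the field names are ours, not an official ICO schema.

```python
# Hypothetical pre-deployment checklist paraphrasing the ICO's questions
# for generative AI developers processing personal data. The field names
# are illustrative, not an official ICO schema.
UK_GDPR_GENAI_CHECKLIST = {
    "lawful_basis_for_processing": False,
    "controller_or_processor_determined": False,
    "dpia_prepared": False,
    "transparency_plan": False,
    "security_risks_mitigated": False,
    "data_minimization_policy": False,
    "individual_rights_procedure": False,
    "automated_decision_obligations_reviewed": False,
}


def outstanding_items(checklist):
    """Return the items still unaddressed before deployment."""
    return [item for item, done in checklist.items() if not done]


if outstanding := outstanding_items(UK_GDPR_GENAI_CHECKLIST):
    print("Not ready to deploy; outstanding items:", outstanding)
```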

Moreover, the CMA is conducting a review to "produce guiding principles to support competition and protect consumers as AI foundation models develop." CMA CEO Sarah Cardell said in a press release, "it's crucial that the potential benefits of this transformative technology are readily accessible to U.K. business and consumers … Our goal is to help this new, rapidly scaling technology develop in ways that ensure open, competitive markets and effective consumer protection." The CMA review will be published in September 2023.

Lastly, the FCA is actively developing its approach to AI regulation in the financial services industry. In a recent speech, CEO Nikhil Rathi outlined the FCA's emerging strategy on AI regulation, including regulating cloud service providers, ensuring fraud prevention and cyber resilience among financial services firms using AI products, establishing a Digital Sandbox to support safe innovations in fintech, improving financial supervision technology, and collaborating with regulators globally to share insights into complex AI regulatory issues.

U.K. regulators also affirm that many of the risks posed by generative AI are already covered by existing laws and regulations. In line with its "pro-innovation" approach to AI, the U.K. is hesitant to introduce any AI-specific regulations that might hamper such developments. Furthermore, the U.K. is keen to project itself as a global convener and leader on AI governance, spearheading the first global AI safety summit later this year.

The Canadian approach

Like the EU, Canada is in the midst of passing comprehensive AI legislation, known as the Artificial Intelligence and Data Act. As a means of mitigating harm and biased outputs by AI systems, the AIDA includes a formidable enforcement mechanism.

Initially, the focus will be on establishing guidelines and assisting AI developers in achieving compliance with the law through voluntary measures.

Thereafter, the Minister of Innovation, Science and Industry will be responsible for all enforcement of the law that does not involve prosecutable offenses. The minister will be complemented by the statutorily created AI and Data Commissioner, a post that will ensure consistent regulatory capacity across different contexts and monitor the systemic effects of AI systems.

Where an AI system might result in harm or a biased output, the minister is authorized to take action, such as ordering the production of records to demonstrate compliance or ordering an independent audit. Where the chance of imminent harm exists, the minister may order cessation of the use of the system or make the violations publicly known. Additionally, the minister maintains the power to impose administrative monetary penalties on AIDA offenders to elicit compliance.

Aside from injunctions and penalties, regulatory offenses may be prosecuted on a discretionary basis by the Public Prosecution Service of Canada following referrals by the minister. Moreover, the AIDA creates three new criminal offenses to directly address intentional conduct that causes serious harm. These are:

  1. Knowingly possessing or using unlawfully obtained personal information to design, develop, use or make available for use an AI system.
  2. Making an AI system available for use, knowing, or being reckless as to whether, it is likely to cause serious harm or substantial damage to property, where its use actually causes such harm or damage.
  3. Making an AI system available for use with intent to defraud the public and to cause substantial economic loss to an individual, where its use actually causes that loss.

Finally, compliance expectations may be graded based on the size of the firm in question. This allows smaller enterprises to enter the market without facing the same regulatory hurdles as large and established companies.

The Chinese approach

Chinese AI governance legislation contains explicit references to enforcement mechanisms, though the mechanisms themselves rest on a web of overlapping authorities.

For example, the Internet Information Service Algorithmic Recommendation Management Provisions are to be interpreted and enforced by the Cyberspace Administration of China, the Ministry of Industry and Information Technology, the Ministry of Public Security and the State Administration for Market Regulation. These agencies may enforce the law "on the basis of their duties and responsibilities ... according to the provisions of relevant laws, administrative regulations, and departmental rules," and may issue warnings, impose fines of between CNY10,000 and CNY100,000 (approximately USD1,400-14,000) or pursue criminal charges when a crime is committed.

Providers of algorithmic recommendation services that submit false materials during the filing process for their services are subject to fines of between CNY10,000 and CNY100,000.
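For orientation, the cited USD figures track an exchange rate of roughly CNY7 per USD; a trivial check, with the rate assumed by us rather than stated in the provisions:

```python
# Rough check of the cited fine range. The exchange rate is our
# assumption, not part of the provisions.
CNY_PER_USD = 7.1

for cny_fine in (10_000, 100_000):
    print(f"CNY{cny_fine:,} is roughly USD{cny_fine / CNY_PER_USD:,.0f}")
```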

Adding to the complexity, the Interim Measures for the Management of Generative Artificial Intelligence Services impose penalties on violators in accordance with China's Cybersecurity Law, Data Security Law and Personal Information Protection Law. Where relevant provisions do not exist within these laws, regulators may circulate criticisms, order corrections or the suspension of services, or pursue criminal charges.

Additionally, the interpretation of the draft Internet Information Service Deep Synthesis Management Provisions is the responsibility of the CAC.

Because the agencies responsible for enforcing AI regulations are the same as those which enforce data and cybersecurity regulations, it is illuminating to see how these agencies operate under adjacent frameworks.

The CAC is responsible for overall "coordination, supervision and management" under the CSL, DSL and PIPL. Under the PIPL, the CAC maintains broad rulemaking authority, oversees enforcement by sectoral regulators and local governments, and administers security assessments for cross-border data transfers.

On the other hand, the MIIT is the regulator for the industry and IT sectors. Of note, the MIIT enforces cybersecurity and data protection regulations against telecommunications and internet providers that violate information privacy regulations.

Finally, the MPS enforces national cybersecurity schemes, including the CII Protection Framework and the Multi-Level Protection Scheme. More specifically, it is authorized to supervise protection efforts and prevent criminal activity related to cybersecurity. The MPS delegates the authority to perform inspections, conduct investigations, and impose fines or criminal liabilities to public security bureaus.

These agencies often work in tandem on enforcement actions, with the CAC acting as the de facto lead enforcer. Enforcement mechanisms for AI regulations may arise from cyber and data security laws rather than from the AI regulations themselves. AI developers, and those concerned with compliance, should therefore view AI-specific legislation within the context of China's wider information privacy and cybersecurity landscape.

Conclusion

AI regulatory enforcement is taking various forms from one jurisdiction to another. Still, a few recommendations can be made around compliance:

  • Maintain fluency in data privacy and cybersecurity laws. Many enforcement bodies are looking to these related types of legislation to bring enforcement actions against AI developers and users.
  • Know the existing laws that regulate your sector. Even if a law does not explicitly apply to AI systems, chances are regulators have interpreted its ambit to include AI.
  • Beware of algorithm disgorgement. While fines can be a nuisance, the compulsory erasure of models and data sets can unwind the core efficacy of a company's business model.
  • Remember that multiple agencies may be keeping tabs on your operation, often in conjunction. Agencies may have overlapping jurisdictions or your AI system may implicate numerous enforcers. Either way, it is not uncommon for regulators to work together.
  • Maintain engagement with, and invest in research on, the privacy regulators that occupy influential regulatory positions. Privacy authorities have been the first to move on AI governance, they are building out AI capabilities, and privacy protection remains a cross-cutting, core issue in AI governance. Savvy professionals in the AI space will pay close attention to developments in privacy and data protection.

In any event, AI regulatory enforcement is at a relatively nascent stage, but this will not last long. As the integration and application of AI permeate our societies and economies, so too will enforcement efforts and the mechanisms that empower them.