While only a few jurisdictions have passed laws specific to artificial intelligence governance, the regulatory application of existing laws to the governance of AI technologies has happened at a much faster pace. Some of the first and fastest movers have been global data protection authorities (DPAs), which are turning to playbooks drawn from current laws to regulate the new technology. Indeed, the task of understanding how to apply current law to AI is being undertaken on a global scale and may even take priority over efforts to pass new AI-specific legislation.
In its October 2023 session, the Global Privacy Assembly adopted a resolution "recalling that data protection and privacy principles and current laws, including data protection and privacy laws, bills, statutes and regulations, apply to generative AI products and services, even as different jurisdictions continue to develop AI-specific laws and policies." Of all the questions AI has raised, exactly how existing law should apply to it is among the most complex and pressing for AI actors.
Unsurprisingly, a consensus has yet to be forged around these questions, although various approaches are emerging. For example, George Washington University Law School Professor Daniel Solove argued in an article, "Existing privacy laws aren't well-designed to address AI's privacy problems." His call for a "privacy rethink" in the age of AI entails moving beyond individual control and self-management as the basis for regulation, undertaking more nuanced harm and risk analysis, and instituting both internal and external accountability mechanisms for AI providers and deployers.
Europe
To promote compliance with the EU General Data Protection Regulation and national data protection laws, DPAs in Europe have considered the unique challenges AI systems pose. Overall, much of their guidance for AI operators centers on implementing the principles of data protection and privacy by design, purpose specification, impact assessments, transparency, and individual rights.
Data protection and privacy by design
DPAs in Europe have uniformly stated that AI systems should be designed, developed and deployed with data protection and privacy principles at the core of their operations. In an article, the U.K. Information Commissioner's Office urged organizations not to underestimate the level of resources these tasks require, insisting AI providers "must be able to demonstrate, on an ongoing basis, how [they] have addressed data protection by design and default obligations."
Purpose specification
AI systems should collect and use personal data only for specified purposes. For example, in its guidance on AI compliance with the GDPR's Article 5 principle of purpose specification, France's data protection authority, the Commission nationale de l'informatique et des libertés (CNIL), noted that the learning phase and the production phase of an AI system have distinct purposes, each of which should be "determined, legitimate and clear."
Data protection and privacy impact assessments
Like many of their global peers, European DPAs have made it clear that data protection and/or privacy impact assessments (DPIAs/PIAs) should be carried out for AI systems, and at each stage of the AI life cycle. As the CNIL explains, these assessments should also consider the possible effects of the AI system on individuals' mental health, including whether it could lead to addictive behaviors or facilitate harassment. Any major change to the functionality of an AI system necessitates a renewed PIA or DPIA.
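As a rough illustration only, and not a format any DPA prescribes, the renewal rule might be operationalized in an internal compliance tracker along the following lines. The life-cycle stages, Assessment type and version scheme are hypothetical assumptions, with a major-version bump standing in for a "major change" to functionality.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class Stage(Enum):
    DESIGN = "design"
    TRAINING = "training"
    DEPLOYMENT = "deployment"

@dataclass
class Assessment:
    stage: Stage          # one assessment per life-cycle stage
    completed_on: date
    system_version: str   # version of the system that was assessed

def needs_renewal(assessment: Assessment, current_version: str) -> bool:
    """Flag an assessment for renewal after a major change, approximated
    here by a change in the major version number."""
    assessed_major = assessment.system_version.split(".")[0]
    return assessed_major != current_version.split(".")[0]

# One DPIA per stage; all three are flagged once version 3.0 ships.
dpias = [Assessment(stage, date(2024, 1, 15), "2.1") for stage in Stage]
print([d.stage.value for d in dpias if needs_renewal(d, "3.0")])
```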
Transparency
AI operators should inform individuals about data collection and their rights in a clear, concise and easily accessible manner. At a minimum, disclosures should cover the purposes of processing, retention periods and the parties with whom personal information will be shared. As the CNIL notes in its guidance, individuals should also be made aware when they are interacting with a machine.
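To make those minimum disclosure categories concrete, an operator might keep them in a machine-readable record backing its user-facing notices. The schema below is a hypothetical sketch, not a format the CNIL or any other regulator mandates.

```python
from dataclasses import dataclass

@dataclass
class TransparencyNotice:
    """Hypothetical minimal record of AI privacy disclosures."""
    purposes: list[str]              # why personal data is processed
    retention_period: str            # how long data is kept
    recipients: list[str]            # with whom data is shared
    machine_interaction_notice: str  # discloses users face a machine

notice = TransparencyNotice(
    purposes=["answering customer-support queries"],
    retention_period="12 months after last interaction",
    recipients=["cloud hosting provider"],
    machine_interaction_notice="You are chatting with an AI assistant.",
)
```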
Exercise of privacy and data protection rights
European regulators have made it clear that AI systems should allow individuals whose data is being processed to exercise their data protection and privacy rights. At a minimum, these include the rights to access data, rectify inaccurate data, request erasure of personal data and not be subject to solely automated decision-making. To enable individuals to exercise their rights, AI providers should understand the impact of their systems on the rights and freedoms of individuals, thinking through the consequences for both groups and individuals based on protected characteristics such as gender, religion and political opinion. They should also make clear whether AI is being used to supplement human decision-making or to make solely automated decisions.
United States
Echoing many of the points of European authorities, the U.S. Federal Trade Commission has been proactive in issuing guidance at the intersection of privacy compliance and AI — guidance that has also served to foreshadow its enforcement priorities. For example, in an Office of Technology blog post, it underlined the importance of AI companies abiding by the privacy commitments they have made. It also cautioned against practices such as "quietly changing" privacy policies to make room for personal data collection and use by AI. As explained in another blog post:
"It may be unfair or deceptive for a company to adopt more permissive data practices—for example, to start sharing consumers' data with third parties or using that data for AI training—and to only inform consumers of this change through a surreptitious, retroactive amendment to its terms of service or privacy policy."
Other federal agencies have also taken a proactive position on the applicability of current law to AI in the absence of federal omnibus AI legislation. In April 2023, a joint public statement issued by the Department of Justice, Consumer Financial Protection Bureau, Equal Employment Opportunity Commission and FTC emphasized that each of the agencies' "existing legal authorities apply to the use of automated systems and innovative new technologies just as they apply to other practices." A year later, several more federal agencies — including the Department of Housing and Urban Development, Department of Education, Department of Health and Human Services, Department of Homeland Security, and Department of Labor — signed on to an updated statement, indicating the growing swath of federal entities concerned with the intersection of AI and existing law.
New Zealand
The Office of the Privacy Commissioner of New Zealand also released specific guidance on the intersection of AI and the 13 information privacy principles enshrined in its Privacy Act 2020. Building upon its earlier statement of expectations around the use of generative AI, the OPC's full set of guidance is based on the principle that "privacy is a good starting point" when an organization is considering uptake of any new AI tool. Before turning to AI, organizations should conduct a preliminary assessment of necessity and proportionality and consider alternatives. The OPC also expects senior leadership to approve the use of AI tools "based on full consideration of risks and mitigation." Conducting PIAs, being transparent with people about the use of AI tools and ensuring human review before acting on AI outputs are all emphasized. The guidance also requires that Te Ao Māori perspectives on privacy be considered, including concerns that AI systems developed overseas may produce biased or inaccurate results for Māori and that Māori may be excluded from decisions to build and adopt AI tools.
South Korea
South Korea's Personal Information Protection Commission produced a set of guidelines, the personal information protection self-checklist, for developers and operators of AI systems. Relevant primarily, but not exclusively, to compliance with the country's Personal Information Protection Act, it consists of 16 legal obligations, each containing multiple verification items for organizations. The checklist proceeds through numerous stages, from planning and design, which involves applying privacy by design principles and conducting impact assessments, to the final stage of continuously improving AI ethics within the organization.
Canada
Aimed at AI developers and providers, the Office of the Privacy Commissioner of Canada released principles for ensuring generative AI technologies incorporate privacy protections. They align with guidance from other global authorities in emphasizing the importance of obtaining valid and meaningful consent; limiting collection, use and disclosure to appropriate purposes; and being transparent with individuals about potential privacy risks, among other things.
The Regulator's AI Toolbox
Some regulators have gone beyond the issuance of compliance guidance and developed various tools, templates, guides and other practical resources for AI operators.
- The U.K. ICO created an AI and data protection risk toolkit to provide operational support to organizations seeking to reduce the risks to personal data that AI systems create. For each stage of the AI life cycle, it provides practical steps organizations can take to reduce risk, relevant references to the U.K. GDPR, and further ICO guidance on the principle of purpose limitation, ensuring individual rights in AI systems and explaining decisions made with AI.
- The CNIL's self-assessment guide for AI systems consists of a series of fact sheets that aim to help organizations specify a clear objective, implement best practices for GDPR compliance, analyze risks and prevent attacks.
Practical recommendations for AI providers, developers and deployers
Across the numerous sets of guidance issued by global DPAs, a list of practical recommendations for AI providers can be distilled:
- Integrate privacy and data protection by default and by design principles at the planning and design stages of any AI project.
- Conduct PIAs and DPIAs before AI tools are made available for public use.
- Process personal data only for specific, explicit and legitimate purposes, and refrain from further processing that is not in line with individuals' reasonable expectations.
- Have a system in place for human oversight and review of both AI inputs and outputs.
- Provide transparent information about what personal data is collected and why and how it is used. Providers should make information on the privacy risks associated with use of the AI system available to deployers.
- Have data governance and technical safeguards in place for reviewing and filtering personal data that is inaccurate or misleading; a minimal sketch combining this step with time-bound retention follows this list.
- Develop a time-bound plan for retention and deletion of any personal information collected.
- Implement baseline cybersecurity controls and controls that prevent attackers from extracting personal data from AI systems.
- Maintain up-to-date technical documentation and be able to demonstrate compliance with privacy and data protection laws and policies.
- Communicate closely with DPAs and privacy authorities.
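To ground two of the recommendations above, the filtering of personal data and time-bound retention, the following is a minimal Python sketch of a pre-processing step, not a production implementation. The Record type, regex patterns and 365-day window are illustrative assumptions; real pipelines would rely on dedicated PII-detection tooling and retention schedules set with counsel.

```python
import re
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical, simplified patterns; real systems would use dedicated
# PII-detection tooling rather than a handful of regexes.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\+?\d[\d\s().-]{7,}\d\b"),
}

@dataclass
class Record:
    text: str
    collected_at: datetime  # when the personal data was collected

def redact(text: str) -> str:
    """Replace detected personal data with typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

def apply_retention(records: list[Record], max_age: timedelta) -> list[Record]:
    """Drop records older than the retention window (time-bound deletion)."""
    cutoff = datetime.now(timezone.utc) - max_age
    return [r for r in records if r.collected_at >= cutoff]

if __name__ == "__main__":
    records = [
        Record("Contact me at jane@example.com", datetime.now(timezone.utc)),
        Record("Old support ticket with stale details",
               datetime.now(timezone.utc) - timedelta(days=400)),
    ]
    kept = apply_retention(records, timedelta(days=365))  # 1-year window
    for record in kept:
        print(redact(record.text))
```

In line with the documentation recommendation above, a real deployment would also log what was filtered or deleted so compliance can be demonstrated later.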
Global regulators' priorities
As evidenced by the EU AI Act, many regulators are taking a risk-based approach to AI regulation, with the focus falling overwhelmingly on high-risk AI systems. Given their public-facing nature, generative AI tools have also been a top concern. In particular, the greatest priority has been placed on AI and automated decision-making that could lead to unfair, unethical or discriminatory treatment, or that could affect vulnerable populations, particularly children. Although all AI systems that collect and use personal data have obligations under existing privacy and data protection legislation, the greatest scrutiny will likely fall on those deemed to have the greatest effects on individuals' fundamental rights.
Lastly, new modes of interagency collaboration on regulatory matters are emerging to address the complex legal challenges brought about by AI. For example, in the U.K., the Digital Regulation Cooperation Forum brings together the ICO, Competition and Markets Authority, Office of Communications, and Financial Conduct Authority to regulate online safety, particularly the use of algorithms. In the U.S., interagency cooperation has been led by the Justice Department, EEOC, CFPB, HHS, FTC and numerous other federal agencies on issues such as advancing equity in AI. As regulators continue to confront challenges at the nexus of AI, privacy, data protection, competition, civil rights and numerous other priorities, AI operators can expect to see more toolkits, recommendations and guidance on how existing legal protections should apply.