The explosive growth of ChatGPT and other generative artificial intelligence platforms has highlighted the promise of AI to every business leader and investor. ChatGPT is an application built on a generative AI language model, a type of machine learning system, that allows users to ask questions and receive answers in a manner that mimics human conversation.
The irony is that while ChatGPT and similar generative AI applications have captured the headlines, other types of AI have been in development and use for quite some time. Many powerful AI business applications, for example, rely on other forms of machine learning that have seen dramatic recent improvements in performance. Regardless, the arrival of ChatGPT marks a significant inflection point in the development of AI, and in technology more broadly, given how quickly and widely ChatGPT and similar AI applications have been adopted by developers and users worldwide.
From a legal and policy perspective, the rapid rise of this type of transformative innovation places stress on existing legal and privacy frameworks. New interpretations will certainly be needed, but legislators, regulators and policymakers may feel pressure to act quickly and adopt entirely new and potentially prescriptive laws, regulations and policies.
Although such new regulatory efforts may be motivated by genuine concerns about protecting citizens from unknown or poorly understood harms, the reality in the data privacy context is that existing laws, regulations and policies already directly regulate many key data-related aspects of AI.
The early adoption of new and prescriptive data privacy regulations may unintentionally stifle innovation and inhibit the realization of AI's benefits for citizens, the economy and society.
Generally stated, data privacy laws directly regulate the collection, use, disclosure, cross-border transfer and other processing of data about identified or identifiable individuals, and also confer various data privacy rights on individuals.
AI development often depends on the ingestion of large data sets (input) that are used to train algorithms and produce models whose predictions (output) assist with decision-making. To the extent that any of the input for an AI model involves personal data, or any output is used to make decisions that affect the rights or interests of individuals, the AI model and its applications are likely already directly subject to various data privacy laws.
Most notably, if any of the personal data at issue is sensitive or involves special categories of personal data (e.g., biometrics, health, credit/financial, race/ethnicity, children's data, or the like), or if the output involves decisions with legal effects for individuals (e.g., credit, employment, discrimination), data privacy laws may already impose significant restraints.
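To make that input/output framing concrete, the short sketch below (a purely illustrative Python example using synthetic records and hypothetical features such as age and income) shows how personal data can serve as training input and how a model's output can drive exactly the kind of decision with legal effects, here a credit approval, that data privacy laws already reach. Nothing in it is specific to any statute; it simply marks where "personal data" enters and where a regulated "decision" emerges.

```python
# Purely illustrative sketch: synthetic records, hypothetical features.
# It shows how personal data (input) trains a model whose prediction
# (output) drives a decision with legal effects for an individual.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Input: records about identifiable individuals.
# Columns (hypothetical): [age, annual_income_usd, years_at_address]
X_train = np.array([
    [25, 40_000, 1],
    [47, 85_000, 9],
    [33, 52_000, 3],
    [58, 120_000, 15],
])
y_train = np.array([0, 1, 0, 1])  # 1 = prior loan repaid, 0 = default

# Training ingests the personal data into the model.
model = LogisticRegression().fit(X_train, y_train)

# Output: an automated credit decision about an individual, the kind
# of processing that existing data privacy laws already regulate.
applicant = np.array([[29, 45_000, 2]])
print("approve loan:", bool(model.predict(applicant)[0]))
```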
Several examples of how existing data privacy laws apply directly to the development and use of AI include the following:
- Advertising technology/consumer marketing. AI has, and promises to have, a dramatic impact on adtech and consumer marketing, enabling advancements that range from trend analysis and sophisticated behavioral targeting based on large amounts of consumer data to human-like chatbots that interact with potential consumers. Virtually every data privacy law already regulates these activities. In fact, certain data privacy laws, such as the California Consumer Privacy Act/California Privacy Rights Act and similar comprehensive U.S. state privacy laws, appear to have been adopted specifically to address these concerns, and include rigorous notice, consent, data subject rights and other requirements. Industry-specific data privacy laws, such as the Health Insurance Portability and Accountability Act, establish rigorous requirements to obtain patient authorization to use and disclose protected health information for adtech and marketing purposes. Outside the U.S., comprehensive data protection regimes, such as the EU General Data Protection Regulation, the China Personal Information Protection Law and Brazil's General Data Protection Law (collectively, non-U.S. privacy laws), apply a broad range of privacy protections. Depending on the geographies and specifics, adtech and consumer marketing activities can attract a broad suite of requirements, from notice and consent (express or implied) to data subject rights, mandatory documentation of data privacy impact assessments, cross-border data transfer restrictions and more.
- Authentication/biometrics. A subset of data privacy laws establishes rigorous protections for the collection and processing of biometric identifiers and information (e.g., fingerprints, facial geometry, retinal scans and the like). Whether an AI application captures biometric information for entertainment (e.g., photo tagging), workplace authentication (e.g., fingerprint time clocks), security (e.g., facial recognition) or other purposes, biometric privacy laws typically impose express written consent, data deletion and other rigorous privacy requirements. Penalties for noncompliance can be significant; some laws, such as the Illinois Biometric Information Privacy Act, confer private rights of action, statutory damages and attorneys' fees. Many non-U.S. privacy laws also contain specific provisions on biometrics.
- Evaluations for credit, employment, insurance and housing. Any use of personal data, AI algorithms or automated decision-making to support credit, employment, insurance, housing or similar decisions can be subject to privacy regulation under the Fair Credit Reporting Act, the Equal Credit Opportunity Act and other U.S. federal and state privacy laws, as well as non-U.S. data privacy laws. Key obligations can include access and correction rights, duties to explain decision-making and other required actions.
- Cybersecurity monitoring. AI-enhanced cybersecurity monitoring of company networks and user activity, including on personally owned devices, is often subject to data privacy regulation in the context of employee monitoring. Applicable data privacy rules can include notice and consent requirements under the Electronic Communications Privacy Act, the Computer Fraud and Abuse Act (for personally owned devices), U.S. state privacy laws and non-U.S. privacy laws.
- Other AI applications that impact individuals. Any AI application that involves decision-making with legal effects, high risk or potential unfair bias toward individuals is likely to be subject to some form of privacy regulation. The Federal Trade Commission has broad authority to take action against activity in or affecting commerce that is deceptive or unfair. Other U.S. federal agencies, including the Consumer Financial Protection Bureau, the Department of Health and Human Services' Office for Civil Rights and the Equal Employment Opportunity Commission, have also asserted their authority to regulate aspects of the development and use of AI, and U.S. state attorneys general have comparable authorities and initiatives. Various emerging U.S. state privacy laws require privacy impact assessments for certain high-risk processing activities. Outside the U.S., many privacy laws establish rules on automated decision-making, privacy by design, data protection impact assessments and other obligations.
To be sure, we may learn over time that data privacy laws need to be updated to address specific features or risks posed to personal data in the context of AI.
Moreover, data privacy laws clearly do not address all of the multifaceted and multilayered legal, regulatory and policy issues arising from AI. Among other areas, AI poses significant legal challenges in intellectual property, health care and financial regulation, competition/antitrust, commercial contracting and more.
AI also raises broad policy and ethics issues. It remains important, however, for business leaders, policymakers and other stakeholders to be mindful that, in the area of data privacy in particular, existing laws, regulations and policies already directly apply to AI. This should help focus any new data privacy requirements on areas of genuine need, implemented only as strictly necessary and in a manner that facilitates the beneficial aspects of AI innovation across business, health care, financial services and more.