The long-awaited first draft of the California Privacy Protection Agency's rulemaking on automated decision-making technologies dropped 27 Nov., setting the stage for some of the most consequential state-level artificial intelligence rules in the U.S.
Automated decision-making technology includes systems that use machine learning, statistics or data processing to evaluate personal information to help humans make decisions. Such technology can also include individual profiling capabilities.
According to a draft provided to the IAPP, the proposed rules are broken down into three major sections: how to provide notice of the technology's use, when and how opting out is allowed, and how consumers can access information used by the business. It also carves out key areas of discussion for the CPPA Board, including how businesses might approach profiling children under 16 and how consumer information can be used to train a given system.
The CPPA Board will discuss the proposal at its 8 Dec. meeting. The formal rulemaking process is not expected to begin until next year.
CPPA Executive Director Ashkan Soltani did not shy away from the enormity of what the draft rules represent for California, the U.S. and potentially the world. He told TechCrunch the proposal is "by far the most comprehensive and detailed set of rules in the 'AI space.'"
In a statement, CPPA Board member Vinhcent Le said the proposed rules are another instance where California is leading the country in privacy protection. "These draft regulations support the responsible use of automated decision-making while providing appropriate guardrails with respect to privacy, including employees' and children's privacy," he said.
Pre-use notice requirements
The proposed rules would require businesses that use personal information in their automated decision-making systems to clearly explain to consumers how the information is used and their right to opt out of that use. If the initial notice is not sufficient, businesses must make more information available through a hyperlink explaining why the information is important to their systems.
The additional information must also describe whether the technology has been evaluated for reliability or fairness, and the outcome of any such evaluation.
There are a few exceptions to the disclosure requirements, including when businesses use automated decision-making software for security purposes, to detect fraudulent or illegal activity, to protect consumer safety, or because the information is critical to the business's ability to provide its goods or services.
Businesses claiming an exception must still inform consumers why they cannot opt out of the technology's use. Businesses that use such systems for behavioral advertising do not qualify for the exception and must provide an opt-out mechanism.
Opt-out requirements
The proposal makes clear there are several instances where consumers have the right to opt out. Those include decisions that produce legal effects, evaluations of their performance as a student, job applicant or employee, and profiling while they are in public places.
In the workplace, this includes when employers deploy keystroke loggers, attention monitors, location trackers or web-browsing monitoring tools for productivity tracking. Businesses that operate in publicly accessible places, like shopping malls and stadiums, and profile consumers using technologies like Wi-Fi or facial recognition must also provide opt-out options.
A critical point of discussion for the board will be how to handle instances where consumers are children. If a business knows a consumer it is profiling is under 13, it must provide a mechanism for parents to consent to that monitoring. Children between 13 and 16 must be informed of their right to opt out of future profiling.
Access right requirements
Under the proposed rules, consumers have the right to ask businesses what automated decision-making technology is used for and how decisions affecting them were made. Businesses must provide details on the system's logic and the possible range of outcomes, as well as how human decision-making influenced the final outcome.
That information can only be provided if a consumer's identity can be verified; otherwise, a request may be rejected.
As with other aspects of the rules, exceptions to that disclosure can be made if an access request could compromise a consumer's safety, or if a business uses the information for security purposes or fraud prevention. But consumers have the right to appeal that decision to the CPPA and the California attorney general's office, and businesses must provide links to those complaint websites.
Mixed reception
Initial reactions to the draft rules suggest stakeholders may be gearing up for a fight, or preparing to meet requirements that are increasingly being adopted around the world.
The CPPA's rules include broad definitions of the technology that could cover many different systems, according to Future of Privacy Forum Director of U.S. Legislation Keir Lamont, CIPP/US. That breadth goes beyond what other states have proposed, particularly when it comes to systems that help humans make decisions.
"It could sweep up common-place technology like calculators and spreadsheets, which may not be the intended target," he said. "How that's interpreted is going to be very important."
Baker McKenzie Partner Lothar Determann called the proposal's scope "excessive," noting e-commerce would be especially affected if customers can opt out of computer systems that help businesses make sales.
On the other hand, Mayer Brown Partner Dominique Shelton Leipzig, CIPP/US, argued the broad nature of the proposal, as well as the idea of consumers being allowed to opt out of having their information shared, is in line with what other countries have been proposing, pointing to legislation in Australia, the proposed EU AI Act and the GDPR.
"The rules are really synergistic with what we've been seeing in 78 countries and six continents," she said. "I think it's incumbent on the business community right now to pivot from a reactive to a proactive approach."
Juggling compliance programs could be a daunting task as well. Determann opined the draft rules might be easier to digest without the international patchwork of privacy requirements already under consideration.
"Businesses have been so overwhelmed with the barrage of unreasonably lengthy, complex, restrictive, and prescriptive data processing regulations in California and elsewhere that they see hardly any viable alternative to pushing back with litigation," Determann said.
The CPPA itself is balancing these efforts with its other rulemaking obligations under the California Privacy Rights Act. Additionally, the agency is appealing a court ruling that delayed enforcement of its finalized CPRA regulations, holding that a grace period is required between rules finalization and enforcement.
Absent federal legislation
The CPPA's rulemaking comes as policymakers around the world are feeling the pressure to regulate AI. In the U.S., President Joe Biden released an executive order on AI at the end of October, marking the first significant piece of federal action on the technology. The EU is also in negotiations to finalize the proposed AI Act, which could serve as a global model for AI regulation.
While the executive order put some strong requirements in place, such as requiring developers to share safety test results with the government before public release, and spurred the majority of U.S. agencies to examine and issue guidance around AI use, its most striking feature was a call for Congress to pass privacy legislation, noting the two go hand in hand.
With no federal law on the books, U.S. states are holding their own discussions about AI. California laid the groundwork for ambitious AI regulation when Gov. Gavin Newsom signed the state's own AI executive order in September.
Editor's note: This story has been updated to include reaction to the CPPA draft rules.