It's going to be a while before the California Privacy Protection Agency's anticipated regulations around automated decision-making technology are complete. If the CPPA Board's discussion at its 8 March meeting is any indication, the agency may need all the time it can get.
The board discussed revisions to its draft ADMT and risk assessment rules and voted 3-2 to move closer to a formal rulemaking process on both topics. The sometimes contentious dialogue among board members showed how far apart they are on what the state's final regulations should look like: CPPA Chair Jennifer Urban, CIPP/US, Vinhcent Le and Jeffrey Worthe voted in favor of advancing, while Alastair Mactaggart and Lydia de la Torre voted against.
The vote was procedural, giving agency staff the go-ahead to draft the paperwork required for final approval to begin formal rulemaking. The board's final decision on whether to begin formal rulemaking is not expected until this summer, with a plan to solicit additional stakeholder feedback right up until a possible rulemaking approval. CPPA General Counsel Phil Laird said the rulemaking process may not be finished until sometime in 2025.
For almost two years, the agency has been working on rules governing how developers of automated decision-making technology, including artificial intelligence, and the businesses deploying it can obtain and use personal data. The rules would detail how consumers can opt out of collection and when businesses would need to conduct risk assessments.
What's different
A significant portion of the revisions expands risk assessment requirements, spelling out what would trigger a developer to study an AI product for potential harm.
The proposed update clarified which operational elements should be identified in an assessment, how to weigh potential impacts to consumer privacy, and what safeguards a business would need to ensure automated decision-making technology is working correctly. Risk assessments would be required when training ADMT or AI models intended for use in significant decisions, profiling, generating deepfakes or establishing identity.
Minors' personal information would be included under "sensitive personal information," and "systematic observation," meaning consistent tracking through Bluetooth, Wi-Fi, drones or livestream technologies that can collect physical or biometric data, would fall under "extensive profiling" when it occurs in a work or educational setting.
Other updates focus on definitions and clarity. What does not count as ADMT was spelled out, including web hosting, networking, caching, data storage, firewalls, antivirus, robocall-filtering, spellchecking, calculators and databases. Those tools are exempt so long as they do not replace or facilitate human decision-making. The definition of "significant decisions" was updated to include educational and employment opportunities.
A new definition for behavioral advertising would cover marketing that is based on information collected from a user's activity, both across businesses and their partners and within businesses' own products.
The thresholds for when pre-use notices, opt-outs and access requirements apply were revised to cover specific uses of ADMT. In the pre-use notice, businesses must disclose that they will not retaliate against consumers who opt out, and they will be required to provide streamlined information.
Friction points
The board's split vote reflected the fundamental dilemmas facing AI regulators: how far regulations should go, and how much time to spend finding a solution while the technology advances at a rapid pace.
Chief among the skeptics was Mactaggart, the chief architect of the California Consumer Privacy Act and founder of Californians for Consumer Privacy. He said staff's proposals went too far in defining who should conduct a risk assessment. He was particularly critical of when opt-outs would be allowed and of the definition of profiling, arguing the "dramatic scope" would encompass a large part of the economy and hamper businesses.
"We will basically be requiring every covered business to do a risk assessment, which I don't think is what we want to do, because we should be focusing on really where the heightened risk is," he said.
Mactaggart repeatedly pointed to Colorado's profiling rules and the EU General Data Protection Regulation to illustrate California's scope, saying the draft went further than both standards in some cases.
Backing Mactaggart was de la Torre, who said it would be unwise to move the regulations forward if there was disagreement on the board, given that stakeholders had been warning the board about the scope for months.
"And I don't think that's going to go away," de la Torre said. "I think it's highly likely these regulations will be litigated, whether we tailor them better or not."
Instead, de la Torre proposed that CPPA staff take another two months to rework the rules and bring all board members into agreement. She suggested the additional drafting work could save the agency lawsuit headaches down the road.
But Le said strong protections may be appropriate in California to shield residents' privacy from harm. He said Californians deserve stronger rights than those in place in Colorado, making clear he felt the current scope of the rules was appropriate.
"I also think a lot of the organizations that say it's not (in scope), they have a vested interest in finding that it's not in scope," he said. "And that's just one side of the story."
Advancing the rules allows staff to begin seeking additional feedback, with the CPPA planning a statewide solicitation tour. Urban said that feedback could be more crucial than anything else in shaping the agency's final rules package.
"I think extra time is not as valuable as extra information," she said.