The California Privacy Protection Agency's proposed automated decision-making technology regulations aim to increase transparency around when and where automated decisions are employed while allowing consumers to exercise their rights.
Stakeholders have a role to play in shaping the final rules: industry, civil society and others have until 14 Jan. 2025 to submit comments during the formal rulemaking process.
ADMTs are deployed in various consumer-focused areas, notably in the employment and housing contexts. The CPPA's proposed rules would require covered entities under the California Consumer Privacy Act to conduct risk assessments to ensure consumers' personal data is protected.
Advocates and academics recently came together for a joint webinar hosted by the Electronic Privacy Information Center and the Privacy Law Section of the California Lawyers Association to begin unpacking the implications of the proposed ADMT rules and the broader landscape for how ADMT and artificial intelligence could affect consumers, both positively and negatively.
TechEquity Chief Program Officer Samantha Gordon said the regulations are an important signal of what a genuine privacy standard looks like at the state level. They come, she said, at a time when establishing an accountability framework that holds companies responsible for the collection and use of personal data is critical to advancing consumer rights ahead of further AI advancement.
Consumer protections
Consumers are increasingly aware of significant developments in AI technology, including generative AI tools. Despite this awareness, transparency about how the technology is used, and what consumer data is collected to support AI models, may not be keeping pace with innovation.
Gordon highlighted her organization's Screened Out of Housing report, which found two-thirds of California landlords used algorithms to make decisions during tenant screening, while just 3% of tenants knew specifics about the practice. The study also found 20% of respondents used predictive scoring, the practice of drawing conclusions about individuals based on collected personal data, in tenant decisions despite the potential for unintended bias or discrimination against certain groups of people.
"These regulations intersect with people's real-life needs and economic realities. I think this is a really poignant case of why we need to make sure there's a really clear framework and guardrails for how these tools are deployed upon the public," Gordon said.
The CPPA's regulations could also affect companies' use of ADMTs to measure employee metrics. Under the proposed rules, companies would be required to give employees the right to opt out of predictive or metric scoring and to push back against inferences ADMTs make about them.
UC Berkeley Labor Center Director of the Technology and Work Program Annette Bernhardt said the CPPA's proposed regulations would advance workers' rights, noting the rules could establish a public interest in ensuring workers' ability to control both the collection and use of their data.
Ethical AI concerns
Groups that oppose the CPPA's proposed rules argue the definitions and contexts the agency outlined are overly broad and could conflict with the California State Legislature's potential work on AI legislation. However, Gordon argued narrowing the definition of ADMTs would mostly benefit organizations, not consumers.
"We want to make sure that the definitions align with how the technology shows up in people's everyday lives," Gordon said. A key concern surrounding the use of ADMTs in crucial industries is the potential for bias within AI systems and the worry that the technology could negatively impact individuals' lives.
While the technology poses discrimination risks stemming from its training data, some argue ADMT is capable of eliminating human bias. Van Lier Law Attorney Danielle Van Lier said it is important to navigate bias by retaining both a technological and a human element when using ADMT for important decisions.
She noted that, while laws and regulations cannot necessarily resolve ethical questions, the CPPA's regulations could be a good step toward providing transparency and control.
Stanford University Institute for Human-Centered Artificial Intelligence Privacy and Data Policy Fellow Jennifer King said the use of ADMTs for economic and social decisions has become pervasive. King suggested that, if the ADMT regulations are drafted properly, they could support consumer trust in future AI tools.
"I would like to see that we are having experiences that serve us ... some more actual control so that we're not just being mined for data," King said. "When we're using an AI assistant, it's doing something for us that we really want it to do, instead of just being a pass-through for all the data you can collect."
Compliance views
Once the rules are finalized, companies that collect, store or sell the data of more than 100,000 California consumers would be required to inform consumers about their use of the systems through labeling.
Organizations obligated to comply with the regulations could struggle with the cost of compliance. The CPPA estimated the overall cost for California businesses to comply with the risk assessment requirements at between approximately USD354 million and USD1.43 billion.
"I do sympathize with the companies, especially in those smaller companies on the spectrum that are going to have to comply with this. I do sympathize that there are, you know, costs to this," Van Lier said. "But I do think that the benefits to society ... outweighs that, and I hope that this does take us a little bit further towards restoring some of that balance of power."
Lexie White is a staff writer for the IAPP.