If you are dreading the impending end to the relative lull of August, just be glad you aren't the governor of California. The day after Labor Day marks the beginning of an autographic marathon for Gov. Gavin Newsom, D-Calif., who starts each September with an average of a thousand bills on his desk.

If Newsom fails to act on any one of the avalanche of bills passed before the end of the session in August, that bill will automatically become law on 30 Sept. To avoid this, the governor's office attempts to thoroughly review each bill, placing it before Newsom to sign or veto and, ideally, scheduling a few breaks to stretch his signing hand.

Perhaps they even give the governor a little treat between ceremonial pens, but definitely not Skittles.

Mixed among the legislative screenings this year will almost certainly be a few artificial intelligence gemstones. California has considered more than a dozen pieces of AI governance legislation this term. Chief among them are Assembly Bill 2930 and Senate Bill 1047, both of which have been the subject of extensive controversy and intrigue.

Until quite recently, AB 2930, known simply as "Automated Decision Tools," was a bill that looked a lot like the Colorado AI Act and the doomed Connecticut AI bill. All three share a common ancestor — AB 331, an earlier California framework that has been subject to significant policy engagement across the country this year.

Though still structurally similar to its siblings — including requirements for developers and deployers to conduct impact assessments, empower responsible AI governance officers and establish safeguards to mitigate algorithmic discrimination — AB 2930 has been drastically limited in its scope. Until 15 Aug., the bill would have applied to automated decision-making systems in the areas of employment, education, housing, utilities, family planning, health care, financial services, criminal justice, legal services and voting.

Now, AB 2930 is a sectoral bill that would only cover automated employment decisions. Specifically, the revised bill covers systems that are a "substantial factor" in making a decision or judgment that has a "legal, material, or similarly significant effect on an individual's life" relating to the "impact of, access to, or the cost, terms, or availability of employment" with respect to hiring, pay, promotion, termination and "automated task allocation."

The changes also remove government entities from the scope of the bill and strip the California Privacy Protection Agency of the power to request a copy of algorithmic impact assessments conducted under the framework. The bill retains the power of California's Civil Rights Department to bring a civil action against violators, while removing the same authority it would have given to the attorney general. The shift in enforcement power stands to reason, as the CRD already has authority to enforce California's employment discrimination laws.

Bloomberg reports the narrowed scope may be an attempt to reduce the expected costs of the legislation.

Regardless of the motivation, the employment context has been the most highly scrutinized area of AI use in the U.S., so the focus of the updated bill is not a big surprise. In fact, this week saw the passage of a bill in Illinois amending the state's Human Rights Act to prohibit the use of AI to subject employees to discrimination with respect to "recruitment, hiring, promotion, renewal of employment, selection for training or apprenticeship, discharge, discipline, tenure, or the terms, privileges, or conditions of employment."

Husch Blackwell's Byte Back blog published a full analysis of the impact of this updated law, which also comes with new rulemaking authority.

To complicate matters in California, the effects of the newly streamlined AB 2930 are far from certain, as the new law would be layered on top of ongoing rule changes proposed by the rulemaking arm of the CRD, the California Civil Rights Council. The proposed regulations seek to modify existing employment discrimination protections to clarify their application to automated decision-making systems.

The Future of Privacy Forum this week published an analysis of the proposed rules along with recommendations in response to the Council's request for comment.

Far more policy attention has been paid to SB 1047, which would impose a swath of governance requirements on certain foundation models, with a particular focus on critical safety concerns. This bill, too, has been substantially amended with an eye toward surviving Gov. Newsom's veto pen.

Other private-sector AI governance bills are still in play too, including three related to generative AI.

AB 2013 would create transparency obligations for developers of generative AI models, while SB 942 would require deployers of such systems to include disclosures that content is produced by AI and to release tools that help detect AI-generated content. Finally, AB 3211 would require watermarks for outputs, plus impose risk assessments and reporting on vulnerabilities of watermarking systems.

We may not know which bills become law in California until the end of September, but one thing is for certain: state lawmakers' interest in building guardrails around AI systems is not going to dwindle any time soon.

Please send feedback, updates and gubernatorial pens to cobun@iapp.org.

Cobun Zweifel-Keegan, CIPP/US, CIPM, is the managing director in Washington, D.C., for the IAPP.