More than a quarter of U.S. state legislatures are considering bills that would regulate the private sector's use of artificial intelligence. With the federal government yet to pass a law governing this topic, state lawmakers are demonstrating a willingness to step into the void.
The current state lawmaking climate around AI appears remarkably similar to the consumer privacy space after California passed the California Consumer Privacy Act in 2018. Following California's lead, state legislatures across the country considered many different types of consumer privacy bills, including bills modeled on the CCPA, Washington state's Privacy Act and a Uniform Law Commission model act. The Washington Privacy Act emerged as the prevailing model for non-California consumer privacy laws only in the last two legislative cycles, although the bill itself, of course, never passed in Washington.
We are beginning to see common categories and themes emerge when analyzing the current landscape of proposed state private sector AI bills. Although some bills blur the lines between these, the categories include algorithmic discrimination, automated employment decision-making, AI Bill of Rights and "working group" bills. In this article, we provide readers with a snapshot of these emerging categories and discuss some of the bills that fit into them. Although the descriptions below are by no means exhaustive, they provide a useful guide to making sense of what may otherwise appear to be chaos. Finally, although not discussed below, it is important to note the California Privacy Protection Agency is currently drafting regulations on automated decision-making technology.
Algorithmic discrimination
The first set of bills takes a broad approach to combating "algorithmic discrimination," which is generally defined as an automated decision tool's differential treatment of an individual or group based on membership in a protected class. These bills place the burden on AI developers and businesses using AI, often referred to as deployers, to proactively ensure the technologies are not creating discriminatory outcomes in the consumer and employment contexts. The nine states currently considering such bills are California (Assembly Bill 2930, formerly AB 331), Connecticut (Senate Bill 2), Vermont (H.710 and H.711), Hawaii (House Bill 1607 and its companion SB 2524), Illinois (HB 5116 and HB 5322), New York (A8129, S8209 and A8195), Oklahoma (HB 3835), Rhode Island (HB 7521) and Washington (HB 1951). Note, however, that the Rhode Island and Washington bills appear to have died.
While each of these bills differs in important respects, many are modeled after one another and impose similar obligations on developers and deployers of AI. Most of these bills require regular impact assessments of AI tools to guard against discrimination; disclosure of such assessments to government agencies; internal policies, programs and safeguards to prevent foreseeable risks from AI; accommodation of requests to opt out of being subject to AI tools; disclosure of the AI's use to affected persons; and an explanation of how the AI tool uses personal information and how risks of discrimination are being minimized.
Among these requirements, some of the bills differentiate between "high-risk" AI systems, generative AI, and general purpose or foundational AI models, imposing different obligations for each. Some of the draft legislation also imposes a duty of reasonable care on developers and deployers to avoid algorithmic discrimination. Notably, most of these bills rely on government enforcement, with only a few providing a private right of action for violations.
Automated employment decision-making
While the bills discussed above take an expansive approach, the next category of pending legislation focuses on the use of AI technologies in the employment context. These bills generally target AI tools, commonly referred to as "automated employment decision tools," or AEDTs, or "predictive data analytics," used by employers to make decisions about hiring, firing, promotion and compensation. To date, the following five states have introduced bills specifically targeting this area: Illinois (HB 3773), Massachusetts (H.1873), New Jersey (S1588), New York (A7859, S5641A and S7623A), and Vermont (H.114). Note that a few laws in this category have already been enacted in Illinois (AI Video Interview Act), Maryland (HB 1202) and New York City (Local Law 144).
These bills commonly require employers to provide advance notice to, and obtain consent from, job applicants and employees who are subject to AEDTs; explain to candidates the qualifications and characteristics the AI will assess; and conduct and disclose regular impact assessments or bias audits of AI tools. Most of these bills, however, include carveouts for the use of AI to promote diversity or affirmative action initiatives. Developers of AEDTs also face requirements to provide bias-auditing services and to make certain disclosures to deployers regarding a tool's intended uses and known limitations.
Several of these bills also include additional provisions regarding employee privacy. They restrict the types of employee personal information employers can collect and disclose, and they require advance written notice of, and impose certain limitations on, the use of employee monitoring devices. A few of these bills, like New York's S7623A and Vermont's H.114, also prohibit employers from relying "solely" on an AEDT's output when making hiring, promotion, termination, disciplinary or compensation decisions.
AI Bill of Rights
The next set of bills introduced this year would establish an AI Bill of Rights. Examples of these bills can be found in Oklahoma (HB 3453) and New York (A8129 and its companion S8209).
While they overlap in some ways with the bills discussed above, these proposed bills would grant state residents the rights to know when they are interacting with AI; to know when their data is being used to inform AI; not to be discriminated against by the use of AI; to have agency over their personal data; to understand the outcomes of an AI system impacting them; and to opt out of an AI system. Oklahoma's HB 3453 would also grant rights to rely on a watermark to verify the authenticity of a creative product and to approve derivative media generated by AI that uses a person's audio recordings or images.
"Working Group" bills
The final category of AI bills takes a more wait-and-see approach by creating government commissions, agencies or working groups to study the implementation of AI technologies and develop recommendations for future regulation. Such bills can be found in Utah (SB 149), Florida (HB 1459), Hawaii (HB 2176 and SB 2572) and Massachusetts (S.2539). Note, however, that some of the bills already mentioned, like Connecticut's SB 2, also provide for the creation of a commission to similarly assess future AI policy.
These bills outline the makeup of the working groups, usually providing appointment authority to the state's governor, legislature and preexisting state departments, and allowing participation by industry stakeholders. The working groups are tasked with developing acceptable use policies and guidelines for the regulation, development and use of AI technologies in the state.
One bill to highlight in this category is Utah's SB 149, which appears likely to pass after receiving Senate approval 13 Feb. 2024. In addition to creating an Office of AI Policy and an AI Learning Laboratory Program to analyze potential AI legislation, the bill includes a regulatory mitigation licensing scheme under which participants in the AI Learning Laboratory Program can avoid regulatory enforcement while developing and analyzing new AI technologies. Outside the working group context, the bill also imposes liability on those who use generative AI in violation of consumer protection laws without properly disclosing its use.