For years, companies have used automated software to screen potential hires and inform employment decisions. But as artificial intelligence takes on more responsibilities traditionally handled by humans, concerns about mistakes and bias are rising as well.

The debate was on display at the California Privacy Protection Agency's 8 Nov. board meeting, where proponents of the agency's proposed automated decision-making technology rules largely focused their comments on workers' rights.

While industry groups argued the rules could harm California's economy and their ability to do business, civil liberties groups said the rules would provide much-needed guardrails around how employers can use AI. The draft provisions cover any technology that helps make "significant decisions," including hiring, the allocation of compensation and work, promotions and punishment.

"With the advent of big data and artificial intelligence, employers in a wide range of industries are increasingly capturing, buying and analyzing worker data, electronically monitoring workers and using algorithmic management to make critical employment related decisions," said Annette Bernhardt, director of the technology and work program at the UC Berkeley Labor Center. "And yet, California is the first and only place in the U.S. where workers are starting to gain basic rights over their data and how employers use that data to make critical decisions about them."

Bernhardt's comments underscore the particular fears around AI in employment, an application frequently cited as carrying bias risks. According to a 2024 report from the Society for Human Resource Management, approximately one-quarter of human resources professionals use AI in their jobs, and 64% of those use it to recruit, interview and hire candidates.

U.S. Equal Employment Opportunity Commission Chair Charlotte Burrows said in January 2023 that up to 83% of employers use automated tools at some point during the hiring process. A majority of those surveyed in the EEOC report said AI allows them to save time and focus on more intensive tasks.

The case for adopting AI centers on the tools' ability to save time in hiring, provide a more uniform first-interview experience and better analyze applicant pools. Those benefits, however, go hand in hand with calls for transparency about when AI is involved in an employment decision and for guardrails to ensure it does not erode the human elements of hiring.

"Whether you're the hiring manager or the job seeker, whether you want to admit it or not, whether you're Walmart or a startup company with two employees, people still want to feel a human touch in the interview process," said Jared Coseglia, CEO and founder of TRU Staffing Partners, "and early enough in the interview process that it sets a tone for what working at the organization will feel like."

How AI in employment has been regulated in the U.S.

Oversight of AI hiring in the U.S. has mostly come through state-level comprehensive privacy legislation in recent years. Connecticut, Montana and Texas allow people to opt out of having their data used for profiling in automated decisions that produce legal or "significant" effects, including those affecting employment opportunities. Minnesota's law goes further, allowing consumers to question an automated decision, learn the reasoning behind it and find out what they might do to obtain a different result in the future.

Illinois, famed for its robust biometric privacy law, protects individuals when AI is used during the interview process. The Artificial Intelligence Video Interview Act requires notification when AI is used to analyze video interviews, along with the applicant's consent and an explanation of the technology's use. Maryland prohibits the use of facial recognition software during an interview unless consent is obtained.

One of the first AI-focused employment laws emerged in New York City, which requires employers that use AI to notify candidates ahead of time and to submit independent audits proving their systems are not biased. The Colorado AI Act, which borrows elements from the EU AI Act to regulate "high-risk" AI systems, covers employment decisions around hiring, retention and promotion. The law does not take effect until 2026 and is being studied by a task force for potential revisions.

Winthrop & Weinstine Shareholder Nadeem Schwen, CIPP/E, CIPM, said the variety of laws creates an uneven patchwork where some employers have to follow different rules than others. He said most AI he sees clients using is software that has been around in some form for a while; think resume filtering, custom screening and scheduling applications. There are also more advanced systems which take a more analytical approach — AI which can conduct interviews itself, or which use audio or video to scrutinize an applicant's body language.

"Everyone's slapped AI on their product, and it's hard to sometimes tell what might be the artificial intelligence in the product," Schwen said. "A lot of times, it's unclear what the AI is doing, what data sources it's using to do what it's doing and what the privacy concerns might be."

Knowing how a system works and what different states require is paramount to governing AI, Schwen added. But employers do not always know the details of a product beyond what salespeople have told them, he said.

"You have to know what something is doing and how it's doing it if you're going to do a risk assessment," he said.

On the federal level, U.S. President Joe Biden's AI executive order directed the U.S. Department of Labor to issue guidance on best practices for preventing bias in hiring and employment, resulting in the AI and Inclusive Hiring Framework. Bills introduced in Congress contain some AI employment provisions, but so far none have made it past initial lawmaker consideration. President-elect Donald Trump has promised to roll back the executive order, making its lasting effect uncertain.

States are likely to continue taking up the mantle during 2025 legislative sessions and beyond. Nearly 40 AI-related bills introduced across the states in 2024 addressed employment, according to a National Conference of State Legislatures report.

What to consider around AI-assisted employment tools

High-profile litigation around AI in hiring is also adding to the regulatory conversation. Notably, a 2024 case involving Workday and its AI screening software has become a touchstone in discussions of potential AI bias.

Derek Mobley v. Workday Inc. centers on claims that Mobley applied for more than 100 jobs at companies that use Workday's screening tools in their application processes. Mobley was rejected from all of them after completing company assessments or personality tests. He alleged the tools are designed to reveal mental health disorders; Mobley suffers from anxiety and depression.

In July, a judge in the U.S. District Court for the Northern District of California rejected some of Mobley's claims but allowed his argument that Workday may be liable for any discrimination he faced because its customers may have left the decision of whether his application moved forward to the algorithm without checking it.

"According to the FAC, Workday's software is not simply implementing in a rote way the criteria that employers set forth, but is instead participating in the decision-making process by recommending some candidates to move forward and rejecting others," the judge wrote. The case is still ongoing.

Matt Scherer, senior policy counsel for workers' rights and technology at the Center for Democracy and Technology, said cases like Mobley's show society is at a stage similar to when automated credit decisions first emerged. Enough instances of people being rejected for applications based on incorrect information eventually led to the Fair Credit Reporting Act.

"If enough people keep getting rejected from jobs and if we have enough information this is happening on a significant scale, and that people are losing jobs arbitrarily and randomly and unfairly, then I think you'll start to see more pressure for a FCRA-like solution for AI and employment," he said.

There have already been high-profile instances of AI discriminating against potential workers. Amazon scrapped its AI recruiting tool after it was found to be biased against female applicants. A Bavarian Broadcasting investigation into the startup Retorio found its facial analysis tool made judgments about a job applicant's personality based on factors such as whether they wore glasses or had a bookshelf in their video interview background.

Scherer said that while there is some data on how widely companies use AI in the employment process, not everyone is transparent about it. A lack of reporting means people cannot exercise their civil and consumer protection rights. He argued anyone who uses AI in the selection process should be upfront about how their algorithms work, such as whether they search for specific keywords, so the playing field can be more level.

TRU Staffing's Coseglia said employers who are upfront about their AI use might find candidates are turned off. He estimated one-third of candidates who are told AI might be used for an interview pass on the opportunity altogether.

Their concerns range from privacy issues, such as whether a video interview will be stored and for how long, to worries about their data being used to train AI models. Coseglia said TRU always asks companies whether AI will be used; if a company declines to say, that information is passed on to the applicant.

For many companies, AI governance policies are being drafted only after the technology is already in use.

"Sometimes we ask these questions (about AI) and it winds up taking a second because they have to go back to their general counsel's office and start baking these things in to their policies," Coseglia said. "Not because they're required to by law, but because the lack of doing so is prohibitive to getting the best possible talent to interview based on the parameters of how they conduct their search using technology."

Caitlin Andrews is a staff writer for the IAPP.