California AI Job Bias Rules Carry ‘Backdoor’ Mandate for Audits

Sept. 30, 2025, 9:15 AM UTC

California regulations targeting AI-powered job discrimination will amplify the pressure on employers to audit their hiring and recruiting technologies for bias, despite state agencies stopping short of mandating those tests.

New rules from the state’s Civil Rights Department will govern the way employers use automated decision systems beginning Oct. 1. Forthcoming regulations from the California Privacy Protection Agency will impose additional obligations when the technologies are used in employment, as well as housing, education, lending, and health care.

As businesses increasingly adopt artificial intelligence tools to aid in personnel decisions, many state legislatures are weighing the best options to shield job seekers and employees from discrimination. Few have enacted laws to date, but more could imitate the approach of California, often the first mover on workplace protections.

Even if other states don’t follow suit, the regulations will pressure multi-state employers that do business in California to apply those standards nationwide.

“The nature of software is global. It’s not local,” said Y. Douglas Yang, a Sheppard Mullin partner in Los Angeles. “Good luck telling an HR manager, ‘oh this person’s in California. We’ve got to use a different AI program that’s been bias tested,’” while using another system in the rest of the country.

The civil rights regulations don’t require bias audits of AI-powered decision tools. Likewise, the privacy regulations call for risk assessments addressing hazards including discrimination, yet stop short of mandating a statistical audit of tools’ output.

But for employers facing discrimination claims, the rules “make clear that the presence or the absence of bias testing audits will be looked at as potential evidence in terms of whether an employer is abiding by its obligations” to prevent bias, said Scott P. Jang, a Jackson Lewis PC principal in San Francisco.

“It’s kind of a backdoor way of mandating a bias audit,” said David J. Walton, a partner and AI practice chair at Fisher & Phillips LLP. “It’s a big deal,” given the California privacy agency’s reputation as an aggressive regulator that asserts broad jurisdiction over companies doing any business in the state.

In the US, only New York City requires companies to conduct bias audits when using automation tools for employment, but its law defines the tools narrowly, covering only those that essentially replace human decision-making.

The California civil rights regulation broadly covers any “computational process that makes a decision or facilitates human decision making.” It gives examples of resume screening tools and cognitive assessment games.

Colorado’s sweeping AI law would require bias audits starting in June 2026, but lawmakers and Gov. Jared Polis (D) are looking to revise it. The California legislature this month rejected an algorithmic discrimination bill similar to Colorado’s law.

As states shape their AI approaches, the White House and tech industry continue to push ways to preempt or discourage states from regulating the technology.

‘Due Diligence’

The rules focus on clarifying that existing anti-discrimination standards apply to employers’ use of automated decision systems. Preventing unintentional bias will require employers to work closely alongside the technology companies selling the tools, employment attorneys said.

“The marketplace may not fully be there” for businesses to hire credible third-party auditors to test their AI systems, Jang said. “That’s why I think it’s very important that employers do their due diligence with respect to their vendors and what their vendors are doing for testing their own systems.”

Bias audits offer a statistical analysis of the employees or candidates that AI tools select for advancement. Under longstanding federal guidelines known as the four-fifths rule, if a particular demographic group is selected less than 80% as often as the most-favored group, that indicates a possible disparate impact.
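
For illustration only, here is a minimal sketch of that four-fifths arithmetic in Python; the group labels and counts are hypothetical, and a real audit would involve far more statistical rigor than this single ratio check:

```python
# Minimal sketch of the four-fifths (80%) rule from federal
# disparate-impact guidelines. Group names and counts below are
# hypothetical, for illustration only.

selections = {
    # group: (candidates screened, candidates advanced)
    "group_a": (200, 60),
    "group_b": (150, 30),
}

# Selection rate = share of each group's candidates who advanced.
rates = {group: advanced / screened
         for group, (screened, advanced) in selections.items()}
top_rate = max(rates.values())

for group, rate in rates.items():
    ratio = rate / top_rate
    flag = "possible disparate impact" if ratio < 0.8 else "within guideline"
    print(f"{group}: selection rate {rate:.0%}, "
          f"{ratio:.0%} of top rate -> {flag}")
```

In this hypothetical, group_a advances at a 30% rate and group_b at 20%, or about 67% of the top rate, which falls below the 80% threshold and would be flagged.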

But employers face more questions than answers, Walton said. Among them, what exactly counts as an audit? And how do you get the data for an audit if you aren’t asking job candidates about traits like race?

“You can’t do a bias audit unless you know that type of data,” he said.

There’s also the liability question—whether employers, tech vendors, or both are legally responsible for discrimination involving an AI tool. Pending litigation that a job seeker brought against tech vendor Workday Inc. is expected to offer one California federal court’s answer.

As the California regulations take hold, the “baseline” expectation will be that employers keep records of vendors’ initial bias testing, Yang said.

The “gold standard” will be for employers to continue testing the systems, monitoring their recommendations on hiring and other personnel moves for bias, he said.

Perhaps the regulations’ most explicit new mandate is that employers keep records related to AI tools for up to four years, as they do for other personnel records under California law.

“Any design and implementation of AI is presumably a collaborative process. It requires employers to work very closely with their third-party vendors,” said Linda Wang, a CDF Labor Law LLP partner in Los Angeles.

Employers that record that process will be better able to defend against bias claims, she said.

“It’s an ongoing process, and there’s probably a lot of trial and error,” Wang said. “You want to preserve your documentation to show a good faith effort.”

Further Mandates

California employers could also face new obligations under legislation (SB 7) awaiting Gov. Gavin Newsom’s (D) signature.

That bill would set limits on employers’ use of AI-powered decision tools and electronic monitoring, including a ban on fully automating personnel moves without involving a human decision-maker.

The privacy regulations will extend existing transparency-disclosure and opt-out requirements for data collection to also cover companies’ use of automated decision systems.

For now, though, the civil rights rules taking effect Oct. 1 serve to reinforce businesses’ existing anti-bias duties and ensure there’s no confusion that they apply to AI decision-making tools.

“If there’s discriminatory hiring practices when using these technologies, you don’t get to blame the robots,” said Brooke Tabshouri, a Duane Morris LLP special counsel in San Diego. “They’re not your ‘get out of jail free’ card.”

To contact the reporter on this story: Chris Marr in Atlanta at cmarr@bloombergindustry.com

To contact the editors responsible for this story: Rebekah Mintzer at rmintzer@bloombergindustry.com; Genevieve Douglas at gdouglas@bloomberglaw.com
