AI Hiring Bias Laws Limited by Lack of Transparency in Tools

Nov. 12, 2024, 10:30 AM UTC

State lawmakers crafting statutes to fight artificial intelligence-based bias in employment decisions are struggling to pry open the black box of private companies’ use of AI tools.

Laws passed this year in Colorado and Illinois regulating AI use in employment decisions, and proposals under consideration in states such as Texas and California, require varying degrees of disclosure. Only one US jurisdiction, New York City, appears to have mandated that employers publicly post bias audits of their AI systems, but the city’s narrow definition of automated decisions allowed most employers to conclude the law doesn’t apply to them.

Crafting audit requirements that effectively prevent bias has proved difficult, and getting policymakers to pass them is politically challenging as tech companies resist heavy regulation. That leaves simpler transparency measures as a potentially more attainable goal.

“We’ve been pushing for audits to be a part of it. That’s the preferred approach. There doesn’t seem to be a lot of appetite for that in the states, which is unfortunate,” said Matt Scherer, senior policy counsel for workers’ rights and technology at the Center for Democracy & Technology. “The first thing that needs to happen is transparency. We need to know which companies are using which tools.”

The Colorado and Illinois laws require employers to notify job candidates and employees when using AI for hiring. But they don’t take effect until 2026 and are subject to state agency rulemaking first.

“The devil’s in the details,” said Tracey E. Diamond, an employment attorney with Troutman Pepper Hamilton Sanders LLP. When employers are “buying a third-party tool, how much insight do they have into the inner machinations?”

Audit Law Flaws

Colorado’s law, enacted in 2024, may be the nation’s broadest effort to prevent discriminatory AI use, but worker advocates like Scherer see shortcomings.

It lets companies conduct in-house impact assessments of their own AI use in employment, instead of requiring independent, third-party audits as New York City does. Other proposals, including the Texas draft bill and California’s AB 2930, take the same approach as Colorado.

“I don’t have a great deal of faith that companies are going to do rigorous impact assessments in-house,” Scherer said. “When you don’t have independence, there is a strong urge to find the answers to the questions that you want to find.”

The Colorado measure also doesn’t require employers to publish a copy or summary of their impact assessments online. The attorney general’s office can request them, but the documents are shielded from public open-records requests.

The attorney general will enforce Colorado’s law, and there is no private right of action letting employees or job candidates sue, even if they become aware of violations.

“There’s no accountability. The statute is totally toothless,” said Adam T. Klein, managing partner at Outten & Golden LLP, who represents workers in employment law cases.

Even where audits are published—as a few New York City employers have done since its law took effect in 2023—it’s unclear how those results might pressure employers or technology developers to address signs of bias.

For example, an audit under the city law of HireVue tools used by JetBlue Airways Corp. and other employers showed some race-plus-gender categories receiving favorable ratings less than 80% as often as the highest-scoring groups on particular assessments. That 80% figure is the threshold federal regulators use to flag potential “disparate impact” of a hiring practice on groups of applicants protected under civil rights law.

The metrics in the audit are aggregated across multiple employers using the same HireVue tools.

Audit results falling below the 80% line don’t necessarily mean the technology merits revising, said Lindsey Zuloaga, HireVue’s chief data scientist.

“That four-fifths rule is kind of used as this end-all-be-all thing, but there’s a lot more nuance,” she said, such as whether the sample size is large enough for results to be statistically meaningful.
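To make the arithmetic concrete, here is a minimal Python sketch of the four-fifths calculation the audit references. The group names and counts are invented for illustration, and the 30-candidate cutoff used to flag small samples is likewise an assumption for this sketch, not a regulatory standard.

```python
# Hypothetical illustration of the "four-fifths" (80%) rule referenced in
# the audit. Group names and counts are invented for this sketch.
selections = {
    # group: (favorable_ratings, total_candidates)
    "Group A": (450, 1000),
    "Group B": (310, 900),
    "Group C": (7, 25),   # small sample: the ratio alone can mislead here
}

# Selection rate = favorable outcomes / total candidates in the group.
rates = {g: fav / total for g, (fav, total) in selections.items()}
benchmark = max(rates.values())  # highest-rated group sets the benchmark

for group, (fav, total) in selections.items():
    impact_ratio = rates[group] / benchmark
    flag = "below 80% threshold" if impact_ratio < 0.8 else "ok"
    caveat = " (small sample; treat with caution)" if total < 30 else ""
    print(f"{group}: rate={rates[group]:.2f}, "
          f"impact ratio={impact_ratio:.2f} -> {flag}{caveat}")
```

With these made-up numbers, Group B falls just below the threshold, while Group C falls well below it but on a sample too small to read much into, which is the kind of nuance Zuloaga describes.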

Although audits under the New York City law could hint at disparate impact, they don’t show demographics for candidates being hired or rejected, only how they scored on specific tests, Klein said. That makes it difficult to identify potential bias victims and bring a claim.

“If a large employer said, ‘Hey, we’re violating Title VII and here’s the report that goes with it,’ cool, I’d like to see that, and I’d find a way to make that case,” Klein said. “So far, I haven’t seen a single example of an employer reporting anything under Local Law 144 that has been useful.”

JetBlue didn’t respond to requests for comment, and auditor DCI Consulting Group declined to comment.

Transparency Focus

Better disclosure to job applicants might be the most likely option for near-term regulation of AI decision tools.

While Colorado’s law and the California and Texas proposals weave transparency mandates into broader requirements, Illinois lawmakers took a transparency-only approach: their law requires employers to notify employees and job candidates of AI use, without audit rules.

“Where I see a lot of this state and local regulation going is requiring a statement when you’re using AI that these are the things that the AI is looking for,” said Mark J. Girouard, employment attorney at Nilan Johnson Lewis PA. “It puts employers through their paces” to explain how systems work.

The details of what’s required will make a big difference in how transparent companies are.

The Illinois law requires a notice but leaves the specifics up to rulemaking by the state’s human rights agency.

The Colorado law also requires companies to explain to unsuccessful candidates why they were rejected and how the AI tool evaluated them, and to give them a chance to appeal.

States should require detailed, actionable explanations similar to those mandated by the federal Fair Credit Reporting Act, Scherer said. That law requires lenders to tell rejected credit applicants which specific factors in their credit history prevented approval, so they can correct errors and improve their chances of future approval.

“That’s the level of detail you need,” Scherer said. “Right now, the Colorado requirement is pretty ambiguous.”

To contact the reporter on this story: Chris Marr in Atlanta at cmarr@bloombergindustry.com

To contact the editors responsible for this story: Rebekah Mintzer at rmintzer@bloombergindustry.com; Alex Ruoff at aruoff@bloombergindustry.com
