- Workplace discrimination among top concerns
- CA, NY among states considering AI bias audit laws
Artificial intelligence plays a growing role in human resources departments around the country, transforming how businesses recruit, hire, monitor, and discipline workers.
But while AI can help make HR departments more efficient, it also carries a big risk: Tools used to help companies hire and manage their workforce can also discriminate against groups of workers.
Policymakers are taking notice. Federal agencies like the Labor Department and the Equal Employment Opportunity Commission are including AI discrimination in their enforcement plans. Meanwhile, states including California, Connecticut, and New York are considering measures that would require bias audits for automated employment tools, similar to a New York City law that took effect in 2023.
It’s crucial that companies vet their AI vendors closely, so they understand what data the tools are collecting and how it’s used to make decisions, lawyers said.
“These tools just give you so much more information than we’ve ever had before,” said Keith Sonderling, an EEOC commissioner, speaking at a conference in April. “If you have all this additional information that you’re not allowed to make an employment decision on, it’s just more evidence that you potentially made an unlawful hiring or unlawful firing based upon those protected characteristics.”
Combating AI Bias
Automated screening can unintentionally import bias against applicants based on race, sex, or other characteristics protected by anti-discrimination law.
Biases can creep into models through bad data, opacity in how the models perform, and incorrect use of AI tools, the EEOC and eight other federal agencies said in an April statement in which they pledged to enforce federal civil rights laws in the context of AI use.
The EEOC has made policing the technology an enforcement priority, issued employer guidance on the topic, and settled its first-ever AI discrimination case against an employer that allegedly programmed its software to reject older applicants.
Potential AI bias can also arise more subtly. A recruiting tool might prefer candidates from ZIP codes near an employer's facility because retention rates are higher for workers who live close to their jobs. But if residents in those areas are overwhelmingly white, the tool could unintentionally discriminate against candidates of color.
To mitigate risks, companies should audit AI tools to ensure they don’t result in unintentional discrimination and should establish processes for human review of algorithm-driven decisions, legal and industry sources say.
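One common screen used in such audits is the impact ratio: each group's selection rate divided by the selection rate of the most-selected group, with ratios below 0.8 flagged for review under the EEOC's four-fifths rule. The sketch below is illustrative only, with hypothetical applicant numbers; a real audit would use a vendor's or auditor's methodology and actual outcome data.

```python
# Minimal sketch of the four-fifths (impact ratio) screen for adverse
# impact. All applicant counts below are hypothetical.

def impact_ratios(outcomes):
    """outcomes: dict mapping group -> (selected, total_applicants).

    Returns each group's selection rate divided by the highest
    group selection rate.
    """
    rates = {g: sel / total for g, (sel, total) in outcomes.items()}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical screening results from an automated resume filter.
results = {"group_a": (48, 100), "group_b": (30, 100)}
for group, ratio in sorted(impact_ratios(results).items()):
    # A ratio under 0.8 is a possible sign of adverse impact.
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

A ratio below 0.8 does not prove discrimination, and a ratio above it does not rule it out; it is a starting point for the kind of periodic review and human follow-up the attorneys describe.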
In-house counsel are also preparing to comply with potential state laws on the horizon that would require employers to independently audit their systems for bias, notify employees and job applicants when businesses use AI, and provide opt-outs for those who don't want to be evaluated by automated systems.
California’s AI policy choices are expected to have the biggest impact on businesses, according to just over half of nearly 400 in-house counsel, HR executives, and business leaders who responded to a survey last summer by management-side law firm Littler Mendelson P.C. One-third of those surveyed said New York City would be their main source of concern.
California privacy enforcers are also crafting regulations on AI-related data protection.
Early-adopter states have enacted narrowly focused restrictions—Illinois limiting employers’ use of AI to analyze job candidates’ video interview submissions and Maryland limiting the use of facial recognition tech in interviews.
“The clients that I work with are really just trying to keep an eye on this space, what the developments look like, what direction the proposals seem to be trending,” said Jennifer Betts, a labor and employment attorney at Ogletree, Deakins, Nash, Smoak & Stewart. “Most of the proposed regulation across the US is more process-oriented, where they are identifying certain steps employers have to take” if they want to use certain AI tools.
Vetting AI Tools
Best practices beginning to emerge among employers include only using AI decision tools that have been audited for potential bias and then periodically auditing them going forward, Betts said.
For companies that voluntarily choose to audit, she recommended involving the company’s in-house or outside attorneys so the results are protected by attorney-client privilege.
Some businesses are setting up cross-disciplinary AI or innovation teams to assess tools before implementation, Betts said. These could include in-house counsel, human resources leaders, and information technology specialists.
Businesses also should consider retaining human review of any AI-powered decision-making, carefully reviewing the terms of contracts with AI vendors, and providing easy-to-understand notices to employees and job applicants when AI decision tools are being used, including information about how to request an accommodation when appropriate, Betts said.
The tech companies that make and sell these AI products are already feeling pressure from the businesses that buy and use them to conduct bias audits on decision-making AI tools, as employers look to avoid workplace discrimination claims and prepare for the rollout of state, federal, and European Union laws, said Shea Brown, founder and CEO of auditing provider BABL AI.
“They know other regulations are coming,” Brown said.
It's crucial to understand what information a tool is collecting, and doing so involves "getting really in the weeds with the vendors before ever agreeing to work with them," Sisco said.
Cisco uses AI tools in recruiting and hiring to help with a scale issue: The company gets hundreds of resumes or more every few months, Sisco said. But for smaller companies, it may not make sense to take on the liability and expense, including paying for bias audits, she said.
Companies should also remember that “anyone along the line could be liable” for discriminatory outcomes from their AI tools, not just the vendors, she warned. “Therefore you owe it to the candidates to do your due diligence.”
The risk of discrimination has led The Planet Group, a recruiting and staffing company, to be cautious when it selects vendor partners for AI tools, said Marni Helfand, the company’s general counsel and top HR officer.
When vetting an AI vendor, Helfand said she asks them to validate their methodology. And any vendor promising the world with its AI tool raises a red flag.
“Like any other thing that consumers buy, if it sounds too good to be true, it probably is,” Helfand said.
Regulating ‘Bossware’
There's also growing interest in regulating digital employee monitoring tools, which can leverage AI to aid decisions on promotions or disciplinary actions.
New York and a handful of states, including California and Washington, have restricted the way large warehouse operators use productivity quotas to monitor and discipline workers.
Beyond the warehouse setting, a pair of bills pending in New York state would restrict employers’ use of monitoring technologies, sometimes called “bossware.”
“The electronic monitoring piece has gotten very big since Covid, with a lot of remote employees,” said Karla Grossenbacher, a labor and employment attorney with a workplace privacy focus at Seyfarth Shaw LLP in Washington, D.C. “How do you make sure they’re doing their jobs and protecting your data?”
Employers generally don’t run into privacy problems by monitoring how much time employees are logged into company computer systems or how often they’re on websites that aren’t work related, Grossenbacher said. For most businesses, those basic forms of monitoring are sufficient, but she added that companies might consider more advanced technologies such as keystroke tracking for workers with access to sensitive information such as customers’ financial data.
The relatively few regulations on electronic employee monitoring generally limit location tracking on employees’ personal vehicles that they also use for work, she said.
“You have a right to know what employees are doing while they’re at work,” Grossenbacher said, adding that most companies don’t need or want to spend the money on more invasive monitoring technologies.