Bloomberg Law
Feb. 13, 2023, 9:59 AM

Employers Seek Clarity on AI Bias Ahead of EEOC Enforcement Push

J. Edward Moreno

The US Equal Employment Opportunity Commission should do more to educate companies on how to prevent bias when using artificial intelligence tools if it’s going to target the area in its new enforcement strategy, employer trade groups and advocates told the workplace civil rights agency.

Dozens of groups submitted comments in the run-up to a Feb. 9 deadline to hear from the public on the EEOC’s draft strategic enforcement plan, which gives the agency’s attorneys a four-year road map for action.

The draft SEP, published in the Federal Register last month, for the first time includes an emphasis on AI, which has been increasingly used by companies to evaluate workers and applicants. It also outlines several other enforcement priorities, like the newly enacted Pregnant Workers Fairness Act, which requires employers to grant reasonable accommodations for pregnant employees.

The EEOC investigates charges of discrimination against employers, which can result in costly litigation or large settlements. The National Industry Liaison Group, an employer trade organization, said the EEOC should adopt an “educate first, then enforce” policy by prioritizing the creation of compliance materials on AI and other top targets.

“Employers would appreciate additional guidance from the EEOC about these issues before the Agency prioritizes them from an enforcement perspective,” NILG wrote.

Guidance Lacking

The EEOC, alongside the Department of Justice, issued guidance in May specifically addressing how employers can avoid violating the Americans with Disabilities Act when using AI tools. But the commission hasn’t released guidance on how employers can comply with other nondiscrimination statutes while using those technologies, such as Title VII and the Age Discrimination in Employment Act.

Employers use AI tools for a range of purposes, including recruitment, screening resumes, and evaluating employees or applicants. The EEOC launched its first lawsuit in this space in May, suing iTutorGroup, an English-language tutoring services company, for allegedly programming its online recruitment software to automatically reject older applicants.

Hogan Assessment Systems Inc., a personality test developer, said the EEOC should emphasize “job relevance” when evaluating tools. “This helps avoid decisions that disproportionately impact individuals due to other characteristics that are not within their control and are not relevant to whether they can successfully do their job,” the company wrote.

A Better Balance, a caregiver advocacy group, also called on the agency to issue guidance for employers, noting that AI tools that monitor employee performance may have disparate impacts on workers who are pregnant or disabled and may require accommodations.

The Center for AI and Digital Policy said the EEOC should also address data privacy in prospective guidance.

“Increasingly, employers use worker surveillance products to monitor the activities of their workers,” the CAIDP said. “However, most of the time, the surveillance and/or tracking systems and devices blur the line between what is necessary to conduct and complete work vs what should be private. Such surveillance and tracking can provide protected information to an employer.”

The EEOC may amend the SEP and publish the final version before it is voted on by the full commission. The EEOC currently has a 2-2 partisan split, with a fifth Democratic commissioner moving through the Senate confirmation process. The agency is still following its previous SEP, which expired in late 2022.

Workers, Employers in the Dark

Discrimination stemming from the use of AI tools can be difficult to spot considering employees may not be aware they were evaluated or recruited that way. There is no requirement for an employer or recruiter to disclose the use of AI tools, though advocates including the American Civil Liberties Union have called for one.

This means the EEOC likely won’t get many charges from alleged victims of AI-based discrimination, so it may have to lean on its authority to launch directed investigations or commissioners’ charges.

Upturn, an advocacy group that has studied the use of AI tools in hiring, said the EEOC should use those tools to investigate how job platforms like LinkedIn, ZipRecruiter, and Indeed rank candidates.

“Doing so will not only help ferret out discrimination against protected groups, but will also diminish the persistent information asymmetries that impede individuals from asserting their civil rights under equal employment laws,” Upturn wrote.

Real Women in Trucking filed a class discrimination charge against Meta Platforms Inc. with the EEOC in December, accusing the tech giant of steering job ads to specific age and gender groups on its Facebook platform. That complaint was filed by Gupta Wessler PLLC as well as Upturn, which has studied alleged algorithmic bias on Meta’s platforms.

But employers, who often buy AI tools from a third-party vendor, may have little insight into how the technology works despite facing the most potential liability in the event of a lawsuit.

At a Jan. 31 hearing on employer use of AI tools, Republican EEOC Commissioners Keith Sonderling and Andrea Lucas noted that no vendors were asked to attend the event, which in part discussed auditing strategies for AI technologies.

Jiahao Chen, owner of Responsible Artificial Intelligence LLC, said in comments on the SEP that the EEOC should do more to hold vendors accountable.

“At present, it is unclear if these vendors have any anti-discrimination compliance obligations: when an employer uses their [AI tool] to make an employment decision, the vendor is neither the employer making an employment decision, nor an employment agency making such decisions on their behalf,” wrote Chen.

The Center for Democracy and Technology said that when vendors are in charge of their own audits, those audits serve “more as vehicles to market vendors’ products” than as an effort to prevent discrimination.

“Companies often conduct audits only when forced to or after extensive harm has been publicized—and even then, the audits they perform may be insufficient or opaque,” the Electronic Privacy Information Center wrote.

To contact the reporter on this story: J. Edward Moreno in Washington at

To contact the editor responsible for this story: Rebekah Mintzer at