ANALYSIS: First AI Bias Settlement With EEOC Spotlights Pitfalls

Aug. 24, 2023, 9:00 AM UTC

Using artificial intelligence in recruitment can be a risky proposition. The first EEOC settlement over AI bias in hiring underscores why employers should carefully review the questions these tools ask to ensure they don’t discriminate against protected classes.

The case, which settled in August via a consent decree for $365,000, alleged that iTutorGroup used an AI tool that automatically rejected male applicants age 60 and older and female applicants age 55 and older. While the defendants denied these allegations, the case serves as a reminder that allegedly discriminatory decisions by hiring tools, regardless of employer intention, can lead to costly legal action.

Employers using AI hiring tools must ensure that the questions asked and the data gathered don’t violate antidiscrimination laws, and they must audit those tools for compliance from day one.

Asking the Right Questions

While using AI can increase efficiency and automate certain aspects of decision making, the US Equal Employment Opportunity Commission (EEOC) has determined that employers can be held liable for the actions and decisions of AI tools—regardless of whether an employer intended to break the law with the tool.

In EEOC v. iTutorGroup, the EEOC alleged that the online education company’s hiring practices violated the Age Discrimination in Employment Act (ADEA). The EEOC argued that iTutorGroup solicited birthdates from applicants through its hiring tool, then unlawfully used that information to discriminate against certain applicants.

The charging party in this case, a woman over 55 years old, alleged that she was rejected from a position she applied for when she submitted her actual birthdate, but received an interview request when she resubmitted the same application with a more recent birthdate.

The EEOC alleged that more than 200 other applicants faced similar age-based rejections.

This case shows how the questions that AI tools ask applicants, and the information gleaned from the answers, can result in legal liability for employers. This is particularly true if a question itself is discriminatory, or if an AI tool’s algorithm is biased against answers it associates with protected classes, resulting in illegal discrimination.

For example, if a tool asks applicants what year they graduated college and then favors applicants with more recent graduation dates, this could lead to ADEA claims, as generally (though not always) more recent graduates will be younger than those with graduation dates further in the past.
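To make that risk concrete, here is a minimal sketch, using entirely hypothetical applicant data, of how an employer might check such a graduation-year screen for age-based disparate impact. It applies the four-fifths rule from the EEOC’s Uniform Guidelines on Employee Selection Procedures, under which a protected group’s selection rate below 80% of the highest group’s rate is conventionally treated as evidence of adverse impact.

```python
# Minimal sketch with hypothetical data: does a graduation-year screen
# select younger applicants at a much higher rate than applicants 40+?

applicants = [
    # (graduation_year, age, passed_screen)
    (2021, 24, True), (2019, 26, True), (2020, 25, True),
    (2018, 28, True), (2017, 29, True), (2001, 45, True),
    (1999, 46, False), (1998, 47, False), (1995, 50, False),
    (1992, 53, False),
]

PROTECTED_AGE = 40  # the ADEA protects workers age 40 and over

def selection_rate(group):
    """Share of the group that passed the screen."""
    return sum(passed for _, _, passed in group) / len(group)

older = [a for a in applicants if a[1] >= PROTECTED_AGE]
younger = [a for a in applicants if a[1] < PROTECTED_AGE]

rate_older, rate_younger = selection_rate(older), selection_rate(younger)
impact_ratio = rate_older / rate_younger

print(f"Selection rate, age 40+:  {rate_older:.0%}")
print(f"Selection rate, under 40: {rate_younger:.0%}")
# A ratio under 0.8 is the conventional four-fifths red flag.
print(f"Impact ratio: {impact_ratio:.2f} "
      f"({'flag for review' if impact_ratio < 0.8 else 'no flag'})")
```

A real audit would run against the tool’s actual decisions at scale, but even a check this simple would surface the graduation-date pattern described above.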

Employers may not be able to mitigate every risk that comes with using AI hiring tools, but a good first step is ensuring that tools don’t ask questions, such as those eliciting an applicant’s age, religion, or another protected characteristic, that could land an employer in court.

To avoid risks, employers should verify that the software they use doesn’t ask legally questionable questions by default, and should avoid programming these types of questions themselves when customizing hiring tools.

AI Bias Audits

The EEOC encourages employers to have vendors confirm that their AI tools don’t ask unlawful questions, but confirming this alone isn’t enough to protect employers against discrimination claims. Employers should also conduct bias audits before implementing AI tools.

AI technology is new and fallible, but bias audits could help prevent violations of equal employment laws. Bias audits holistically assess the technology to understand how the AI was trained, how the vendor accounted for bias and inaccuracies, and how the software performs under the specific circumstances for which it will be implemented.

Bias can be introduced into AI via several avenues. Human bias is the most obvious. Humans who build the algorithms, collect the training data, and test the software all have bias—conscious or otherwise—which can become ingrained in the AI unless extensive bias testing is conducted by diverse teams.

Bias can also be introduced when outliers from data sets get eliminated. Eliminating the atypical data can cause AI to amplify existing human bias. In an employment context, outliers may be underrepresented and/or protected groups in a workplace.
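As a toy illustration, with entirely hypothetical numbers, consider what a routine interquartile-range outlier filter does to training data in which older applicants are already scarce:

```python
# Toy illustration with hypothetical data: a routine IQR outlier filter
# silently removes the only older applicants from a training set.
import statistics

# Applicant ages in a hypothetical training set, mostly 20s and 30s.
ages = [24, 25, 26, 26, 27, 28, 28, 29, 30, 31, 32, 33, 35, 52, 58]

q1, _, q3 = statistics.quantiles(ages, n=4)  # quartile cut points
iqr = q3 - q1
low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr   # the usual 1.5*IQR fences

kept = [a for a in ages if low <= a <= high]
dropped = [a for a in ages if not (low <= a <= high)]

print(f"Kept:    {kept}")     # ages 24 through 35
print(f"Dropped: {dropped}")  # 52 and 58, the only applicants over 40
# A model trained only on `kept` never sees a qualified older applicant,
# so a seemingly neutral preprocessing choice becomes encoded bias.
```

The point isn’t that outlier filtering is always wrong, but that a seemingly neutral data-cleaning choice can erase the very groups the law protects.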

When assessing AI tools, employers should ask vendors if testing has been done to ensure that the software doesn’t deprioritize protected groups or otherwise subject applicants to algorithmic bias.

Knowing where AI bias originates helps prevent these biases from affecting real people.

When evaluating vendors, employers should also ask how the AI was trained, what data sets were used, and how those data sets were created and tested. An EEOC statement on using AI in hiring decisions includes a list of questions that organizations can use to ensure that biases against protected individuals have been accounted for.

It’s also important to involve multiple stakeholders (HR, legal experts) and subject matter experts (data privacy officers, IT specialists) to make sure the AI software is fit for purpose and that staff are trained properly.

Internal Testing

In addition to getting vendors’ assurances on AI bias prevention, the EEOC suggests that employers perform an internal assessment to determine whether the software is fit for purpose and meets the business’s DEI standards before full implementation.

This testing should simulate real-world hiring scenarios to identify vulnerabilities, and should be conducted with HR staff and company employees to ensure the tool meets their needs. Through this testing, the organization should be able to assess potential biases in the tool’s decision making, evaluate its performance in assessing candidates from different demographic groups, and identify staff training needs.
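One simple form of such testing, suggested by the facts of iTutorGroup itself, is matched-pair testing: submit otherwise-identical applications that differ only in a protected characteristic and compare the tool’s outcomes. Below is a minimal sketch; `screen_applicant` is a hypothetical placeholder for whatever interface the vendor’s tool exposes, not an actual API.

```python
# Matched-pair test sketch. `screen_applicant` is a hypothetical
# placeholder for the vendor tool being audited, not a real API.

BASE_APPLICATION = {
    "name": "Test Applicant",
    "degree": "B.A., English",
    "years_experience": 10,
}

def screen_applicant(application: dict) -> bool:
    """Stand-in for the tool's pass/fail decision; wire to the real tool."""
    raise NotImplementedError

def matched_pair_test(birth_years: list[int]) -> dict[int, bool]:
    """Run identical applications that differ only in birth year."""
    return {
        year: screen_applicant({**BASE_APPLICATION, "birth_year": year})
        for year in birth_years
    }

# Usage once wired up: if identical applications pass at 1990 but fail
# at 1960, the tool is drawing an age line that demands human review.
# results = matched_pair_test([1960, 1970, 1980, 1990])
```

Notably, the charging party in iTutorGroup effectively ran this test by hand when she resubmitted her application with a more recent birthdate and received an interview request.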

Discussions with vendors and stakeholders, as well as internal testing, will help employers determine how they can most effectively, and compliantly, use AI for hiring decisions.

In the case of iTutorGroup, EEOC involvement might have been avoided had a human, rather than the software, made the final decisions, or at least reviewed the AI tool’s decisions.

Bloomberg Law subscribers can find related content in our In Focus: Artificial Intelligence (AI) page.

To contact the reporters on this story: Bridget Roddy in Washington at broddy@bloombergindustry.com, Francis Boustany in Washington at fboustany@bloombergindustry.com. To contact the editor: Melissa Heelan in Washington at mstanzione@bloombergindustry.com.
