Employers Ask ‘What Is AI’ as Regulators Probe Hiring Biases

Jan. 30, 2024, 10:10 AM UTC

Employers are struggling with confusion over which technology counts as “artificial intelligence” under emerging legal frameworks, a problem that could complicate federal agencies’ and state lawmakers’ attempts to tackle AI-based discrimination in hiring and employment.

A first-of-its-kind New York City law requiring employers using AI to independently audit their systems for bias has led more states to propose similar bills. Meanwhile, the Equal Employment Opportunity Commission and the Labor Department’s Office of Federal Contract Compliance Programs have indicated an interest in learning more about the AI tools used by employers to hire and promote employees in order to spot possible discriminatory effects.

Labor and employment attorneys who represent companies are concerned that the lack of a clear legal definition of what AI encompasses could lead them to share inaccurate or incomplete information with state and federal agencies. Rapid innovation in human resources technology has complicated the situation, as has the potential inclusion of some old-school tools in the “AI” category.

“Whether it’s regulators, agencies, lawmakers, it is really important that we define our terms,” said James Paretti, a management-side attorney at Littler Mendelson P.C. and former chief of staff to acting EEOC Chair Victoria Lipnic.

Agency Inquiries

Addressing AI-related systems in hiring that may disfavor workers based on race, sex, or other protected characteristics has been a growing concern for both the EEOC and OFCCP. The agencies seek to root out intentional discrimination, as well as unintentional bias, which can occur when a tool has a disparate impact on certain groups of workers.

The OFCCP’s updated mandates for government contractors facing agency audits, which went into effect last August, require them to provide information on recruitment and hiring practices and systems including those using “artificial intelligence, algorithms, automated systems, or other technology-based selection procedures.”

In the same vein, the EEOC has repeatedly emphasized its focus on addressing AI discrimination. The agency’s Strategic Enforcement Plan for 2024-2028, published in September of last year, indicates a focus on hunting down bias in AI hiring tools.

Following criticism that the agency was jumping ahead on enforcement and algorithmic bias without providing employers with sufficient related guidance, the EEOC issued new technical guidance in May 2023 addressing the role of AI in discrimination under Title VII of the 1964 Civil Rights Act and how employers might stay compliant.

According to the EEOC’s guidance, “AI may include machine learning, computer vision, natural language processing and understanding, intelligent decision support systems, and autonomous systems.”

“The EEOC guidance is quite broad and mentions software and other sorts of tools that frankly, employers have used for decades,” said Michael Schulman, a partner at Morrison Foerster. “It could capture, for example, resume screening tools that a lot of employers have used for a very long time, even before the proliferation of AI.”

Resume scanners prioritize certain applications for recruiters and hiring managers based on keywords.
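To see why such long-standing tools might fall under the guidance’s broad description of “AI,” consider a minimal sketch of keyword-based resume ranking. The keywords, applicant names, and resume text below are invented for illustration; real screening products are far more elaborate.

```python
# Hypothetical sketch of a basic keyword-based resume screener -- the kind
# of decades-old tool that could arguably be swept into a broad "AI"
# definition. All data here is invented for illustration.

KEYWORDS = {"python", "sql", "etl"}  # assumed keywords from a job posting

def keyword_score(resume_text: str) -> int:
    """Count how many target keywords appear in a resume."""
    words = set(resume_text.lower().split())
    return len(KEYWORDS & words)

def rank_resumes(resumes: dict[str, str]) -> list[str]:
    """Return applicant names ordered by descending keyword matches."""
    return sorted(resumes, key=lambda name: keyword_score(resumes[name]),
                  reverse=True)

resumes = {
    "applicant_a": "Built ETL pipelines in Python and SQL",
    "applicant_b": "Managed retail operations and staffing",
}
print(rank_resumes(resumes))  # applicant_a ranks first
```

Nothing here learns or adapts, yet the tool still automates a ranking that shapes who a recruiter sees first, which is why attorneys question where the “AI” line is drawn.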

EEOC AI Litigation

In August 2023, the EEOC settled its first AI-based hiring discrimination case against iTutorGroup Inc., a company that allegedly programmed its recruitment software to automatically reject older applicants.

Despite its ongoing efforts to combat AI bias, the agency faces challenges in identifying such cases, and the iTutorGroup matter remains the only publicly known litigation by the commission on the issue so far.

Though the agency tied the case back to its AI-based discrimination initiative, attorneys question whether the hiring activity in question fits the bill.

“That’s just plain old unlawful,” said Paretti. “A) you don’t need a computer to do that. B) you don’t need a computer or artificial intelligence to tell you that that’s not okay.”

Rachel See, a labor and employment attorney at Seyfarth Shaw LLP and former EEOC assistant general counsel for technology, said that under the agency’s broad definitions, an argument could be made that simple activities used to vet job candidates—such as sorting, filtering, or running linear regressions in Excel—could fit the EEOC’s descriptions of algorithmic discrimination or discrimination using technology.
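The spreadsheet-style vetting See describes can be illustrated in a few lines. The weights, thresholds, and candidate records below are invented; the point is that plain filtering and a fixed weighted formula—operations any Excel column can perform—are still automated selection steps.

```python
# Hypothetical illustration of simple sorting/filtering/linear-scoring
# candidate vetting -- easily done in a spreadsheet, yet arguably covered
# by broad descriptions of algorithmic decision-making. Data is invented.

candidates = [
    {"name": "cand_1", "years_exp": 6, "test_score": 80},
    {"name": "cand_2", "years_exp": 2, "test_score": 95},
    {"name": "cand_3", "years_exp": 4, "test_score": 60},
]

def linear_score(c: dict) -> float:
    """A fixed weighted formula, like one an Excel column might hold."""
    return 0.5 * c["years_exp"] + 0.1 * c["test_score"]

# Filter, then sort: each step automatically narrows or orders the pool.
shortlist = [c for c in candidates if c["test_score"] >= 70]
shortlist.sort(key=linear_score, reverse=True)
print([c["name"] for c in shortlist])  # cand_3 is filtered out entirely
```

Whether such a formula counts as an “algorithm” for enforcement purposes is exactly the definitional question the attorneys raise.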

The EEOC and OFCCP follow the Uniform Guidelines on Employee Selection Procedures as a framework for any selection procedures and tests for employment decisions—such as interviews, reviews of work samples, and performance evaluations—to avoid discrimination under Title VII.

Schulman said that unlike the New York City law on AI bias, the EEOC AI guidance interprets federal anti-discrimination laws, such as Title VII, that already apply to any sort of employment decision, regardless of the technology used.

“So even if, for example, the EEOC qualifies what AI is for purposes of their AI guidance, it wouldn’t really change the end result, which is that employers need to be aware of really any tool that they’re using to make employment decisions and how those tools are ultimately impacting, potentially, minority or protected classes,” said Schulman.

Potential State Patchwork

New York City instituted a law in July 2023 requiring employers to independently audit their systems for AI bias and publish their findings on their company websites or risk fines.

Employment attorneys say the law, unlike agency guidance, provides a clearer picture of how AI is defined.

Under the city ordinance, employers are required to take an inventory of the hiring and recruitment software they use to evaluate if any of it qualifies as an “automated employment decision tool,” a term that covers many AI-powered technologies, including automated interviews and resume screening.

Paretti said one of the biggest advantages of the city’s law is that it clearly indicates the evaluation for AI bias applies only when individuals who have applied to a company for a specific position go through a screening or application process. It doesn’t extend to software that is more difficult to evaluate and shouldn’t qualify as AI use, such as LinkedIn searches carried out by an employer, he said.

There’s a “multi-prong test of whether something is AI,” Paretti said. “Is it replacing a human decision maker? Is it making calculations and drawing conclusions of its own accord that I have not told it to do? Then maybe it starts to look like AI.”

State efforts to curb AI-based discrimination, including in employment, are increasing across the nation. Legislators in 31 states have introduced 191 bills focused broadly on AI, according to an analysis by software alliance BSA.

Illinois and Maryland have laws on the books regulating the use of AI in video job interviews, and other states including California, Virginia, and Vermont are drafting their own legislation that includes coverage of AI bias in hiring.

As the list of possible jurisdictions set to monitor and potentially enforce laws to mitigate AI bias grows and definitions evolve, some level of uniformity will likely be key for compliance.

“I do not think we benefit from a multitude of federal, state and city local laws that all have different definitions and all look at things slightly differently,” Paretti said.

To contact the reporter on this story: Riddhi Setty in Washington at rsetty@bloombergindustry.com

To contact the editors responsible for this story: Rebekah Mintzer at rmintzer@bloombergindustry.com; Jay-Anne B. Casuga at jcasuga@bloomberglaw.com
