Bloomberg Law
Sept. 1, 2021, 4:18 PM

Artificial Intelligence Bias Needs EEOC Oversight, Official Says

Paige Smith
Reporter

Artificial intelligence tools in hiring have so far remained unregulated by U.S. civil rights agencies, despite growing use and potential discrimination risks. One EEOC official wants that to change.

“What is unfair is if there are enforcement actions or litigation, both from the government and from the private sector, against those who are using the technologies, and the federal agency responsible for administering the laws has said nothing,” Keith Sonderling, a Republican commissioner on the U.S. Equal Employment Opportunity Commission, told Bloomberg Law in an exclusive interview.

The use of artificial intelligence for recruitment, resume screening, automated video interviews, and other employment tasks has for years been on the radar of federal regulators and lawmakers, as workers began filing allegations of AI-related discrimination to the EEOC. Attorneys have warned that bias litigation could soon be on the horizon.

Those developments, however, have yet to produce laws, regulations, or agency guidance on the issue.

Sonderling hopes to revisit the topic, saying he’s interested in investigating AI discrimination via a commissioner charge, a rarely used procedure that allows EEOC leadership to initiate targeted bias probes.

“I think enforcement has been difficult in this area because the employees do not know that they’re being subjected to this technology,” said Sonderling, who’s also hosting an agency webinar on AI in employment Wednesday. “It’s difficult to bring an EEOC charge or a lawsuit if you don’t know that the decision being made against you was not made by a human.”

Academics, attorneys, and workers’ rights proponents also have called for more oversight of the tools, and said they would like to see input from the EEOC.

“I think the EEOC has a role in this in promulgating what I hope, ultimately, are common-sense regulations,” said Bradford Newman, a litigation partner with Baker McKenzie LLP in Palo Alto, Calif. He has consulted lawmakers on how to proceed with regulation in this area. “I think it’s inevitable, though I think it’s frustrating that it hasn’t happened earlier.”

Federal Inaction

The House Education and Labor Subcommittee on Civil Rights and Human Services last held a hearing in February 2020 aimed at addressing concerns about AI tools, but legislation failed to follow. Ten Democratic senators in December asked the EEOC how it could conduct research on and oversight of the tools.

Scholars, like Brookings Institution fellow Alex Engler and Ifeoma Ajunwa, an associate law professor at the University of North Carolina, also have called for EEOC oversight.

“Despite some of the proven benefits of automated hiring, there remains the potential for misuse, resulting from the opportunities to introduce human bias at any stage of the automated hiring process—from design, to implementation, and finally, to the interpretation of results,” Ajunwa wrote in a 2019 Harvard Journal of Law & Technology article.

EEOC spokeswoman Christine Nazer said the agency will continue to monitor AI developments to ensure companies are complying with the law.

“The Commission has been examining the issue of artificial intelligence, people analytics, and big data in making hiring and employment-related decisions since at least 2016 when the EEOC held a public meeting on the equal employment opportunity implications of big data in the workplace,” she said in an email.

EEOC Options

The EEOC enforces laws that include Title VII of the 1964 Civil Rights Act, the Americans with Disabilities Act, and the Age Discrimination in Employment Act—which collectively ban employment discrimination based on race, sex, color, national origin, religion, disability, and age.

And while the commission and other civil rights agencies have adopted guidelines that govern applicant tests and selection procedures, attorneys said they want regulations or guidance that specifically address AI tools.

“I would love to see the EEOC adopt new regulations and guidelines that flesh out how Title VII and the ADEA and the ADA should be applied in the digital context,” said Peter Romer-Friedman, an attorney with Gupta Wessler PLLC, who represents workers.

That means clarifying when technology can have a disparate impact on certain workers, he said. In other words, when might artificial intelligence, developed and used in a seemingly neutral manner, result in unintentional discrimination?

“What we were trying to tell Congress, and this is still very much relevant and not addressed, is that most employers have no idea whether these applications are promoting equality or undermining it,” said Romer-Friedman, who testified at the February 2020 congressional hearing. “It’s kind of blind faith.”

Sonderling’s Goals

Sonderling said that clarity for employers and workers impacted by artificial intelligence tools should be a priority for the agency.

“The EEOC needs to be providing guidelines or answers or clarity on exactly how our laws apply to this ever-advancing technology,” he said. “The industry has been asking for guidance, all sides of the industry.”

In addition to disparate impact, Sonderling said allegations of disparate treatment, or intentional bias, are a concern with AI. Singling out and discriminating against a protected class of workers “can be used by AI at a scale we’ve never seen before,” he said.

“It’s applying our laws that may be old to new technologies,” he said. “Regardless of whether the algorithm is designed to discriminate, we’re going to look at the results.”

Enforcement actions, like a commissioner charge, could also be how the agency weighs in on the issue. EEOC regulations dictate that “any person or organization may request the issuance of a commissioner charge for an inquiry into individual or systemic discrimination” by coming forth with information to an agency office.

“We’re talking about technology that’s being used in large corporations—right now, today,” he said. “The more I talk about the best way to do this, the more likely it is that someone will come forward armed with this information to tell us what’s going on.”

To contact the reporter on this story: Paige Smith in Washington at psmith@bloomberglaw.com

To contact the editor responsible for this story: Jay-Anne B. Casuga at jcasuga@bloomberglaw.com