Daily Labor Report®

AI Screens of Pandemic Job Seekers Could Lead to Bias Claims (1)

June 30, 2020, 9:30 AM
Updated: July 2, 2020, 7:33 PM

Companies are making more use of algorithmic hiring tools to screen a flood of job applicants during the coronavirus pandemic amid questions about whether they introduce new forms of bias into the early vetting process.

The tools are designed to more efficiently filter out candidates who don’t meet certain job-related criteria, like prior work experience, and to recruit potential hires via their online profiles. Businesses like HireVue offer biometric scanning tools that assess applicants based on their facial expressions, while others like Pymetrics use behavioral tests to home in on ideal candidates.

Companies including Colgate-Palmolive Co., McDonald’s Corp., Boston Consulting Group Inc., PricewaterhouseCoopers LLP, and Kraft Heinz Co. are using them at a time when, according to the Labor Department, 21 million people in the U.S. were jobless and looking for work in May. Job candidates might be unable or unwilling to apply and interview in person because of rules limiting social gatherings, said Monica Snyder, a workplace privacy attorney at Fisher Phillips in Boston.

But efficiency comes with a price, attorneys and technologists say.

For example, AI-powered facial scanning tools that claim to evaluate who could be a good fit for a role based on speech patterns, expressions, or eye movements could discriminate against applicants with disabilities. Resume scanning tools that look for immediate past experience could discriminate against women returning to the workforce from raising children.

“At a high level, the risk there is essentially that you create a class-level discrimination claim based on the impermissible bias that the tool has against a protected class,” said Aaron Crews, Littler Mendelson’s chief data analytics officer.

The science behind reading emotions from facial expressions remains unsettled, according to a study by Northeastern University researchers. “It is not possible to confidently infer happiness from a smile, anger from a scowl, or sadness from a frown, as much of current technology tries to do when applying what are mistakenly believed to be the scientific facts,” according to the study.

AI Hiring

Companies offering algorithmic tools like Pymetrics view themselves as job-matching platforms that use technology to improve inherently biased hiring processes run solely by humans.

“We are never excluding people from employment,” said Frida Polli, the CEO and co-founder of Pymetrics. “This stuff works. If you do it right, it does actually have benefits.”

Pymetrics is deployed when companies first sort through job applicants. Polli said its tool removes the “fast human bias” that occurs when someone is scanning a resume. Applicants with elite credentials, for example, might score an interview in that traditional way.

“Up to 90% of candidates are cut at that stage,” Polli said. “We’re trying to intervene where it’s the most problematic, and then people can go on to interview.”

Pymetrics also uses behavioral data measured through computer exercises to match applicants with the “right job,” Polli said.

Kevin Parker, the chairman and CEO of HireVue, said the company ensures its tools are “legal, explainable, and defensible.”

“We constantly test our algorithms and review our approach with input from external data science researchers, social scientists, and academics to assure that they are valid and meet and exceed the legal requirements for any assessment process,” he said, adding that the “majority” of the tool’s assessments are game-based, and don’t include facial analysis technology.

While intended to weed out human biases, algorithmic hiring tools have been shown in some instances to develop their own biases.

Amazon.com Inc. scrapped an AI recruitment program because it taught itself that male candidates were preferable to women candidates based on the company’s previous hiring patterns. A federal civil rights agency has looked into whether several companies used Facebook’s algorithm to discriminate while recruiting job applicants.

“There is a significant amount of concern from the technological community” and the legal space about the implementation of AI hiring systems, said Lisa Kresge, a graduate student researcher focusing on AI at the University of California, Berkeley.

New Laws

Artificial intelligence is an emerging area of the law, attorneys said, with new pressure from lawmakers and regulators looking to restrict its use. Amazon and Microsoft Corp. have both asked Congress for an AI law but have pushed back against some local measures that would have imposed moratoriums on the technology’s use.

An Illinois law governing the tools’ use took effect at the beginning of the year, Maryland recently enacted an anti-bias measure, and the New York City Council has proposed legislation on the matter.

Algorithmic hiring tools that operate as “black boxes,” giving candidates no clear insight into how they work, can also run afoul of the EU’s General Data Protection Regulation, as well as the Americans with Disabilities Act and workplace anti-discrimination laws like Title VII of the 1964 Civil Rights Act.

“Just because you are using an automated process doesn’t alleviate any of the responsibility for fairness” in the hiring process, said Brenda Leong, senior counsel and director of artificial intelligence and ethics at Future of Privacy Forum.

Having a human involved to audit the hiring process can limit the risks algorithmic hiring tools raise under anti-discrimination laws and labor rules, attorneys said.

“It’s most often the reduction in human control and potentially decision-making that may be a source of increased risk, and warrants additional thought ahead of putting these types of systems in place,” said Mark Lyon, chair of Gibson Dunn’s artificial intelligence and automated systems practice group.

Vetting these AI hiring tools before deploying them, Fisher Phillips’ Snyder said, can limit reputational risk.

“Companies need to ask, ‘Do the tools allow someone that is disabled to interact with the AI? Does it discriminate based on age, race, gender, or any protected status?’” Snyder said.

‘Computer Said No’

Title VII requires unbiased employment decisions. Selection procedures are governed by the Uniform Guidelines on Employee Selection Procedures, jointly adopted by the Equal Employment Opportunity Commission, the Department of Justice, the Civil Service Commission, and the Department of Labor.

The guidelines encourage employers to test the procedures to ensure they don’t result in an adverse impact on job applicants. If the procedure has an adverse impact, it will be considered biased, “unless the procedure has been validated in accordance with these guidelines.”
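
The guidelines’ rule of thumb is the “four-fifths rule”: a selection rate for any protected group that is less than four-fifths (80%) of the rate for the highest-selected group is generally treated as evidence of adverse impact. The sketch below, written in illustrative Python with hypothetical group names and counts rather than any vendor’s actual code, shows how an employer might run that kind of back-end check on a screening tool’s output.

    # A minimal sketch, assuming hypothetical group names and counts, of the
    # guidelines' "four-fifths rule" check: compare each group's selection
    # rate with the highest group's rate. Not any vendor's actual code.

    def selection_rates(applicants_by_group, advanced_by_group):
        """Selection rate (candidates advanced / applicants) per group."""
        return {
            group: advanced_by_group.get(group, 0) / count
            for group, count in applicants_by_group.items()
            if count > 0
        }

    def adverse_impact_flags(applicants_by_group, advanced_by_group, threshold=0.8):
        """Flag groups whose selection rate falls below four-fifths of the
        highest group's rate -- the guidelines' rule of thumb for adverse impact."""
        rates = selection_rates(applicants_by_group, advanced_by_group)
        best = max(rates.values())
        return {group: (rate / best) < threshold for group, rate in rates.items()}

    # Hypothetical screening results, for illustration only.
    applicants = {"group_a": 200, "group_b": 150}
    advanced = {"group_a": 60, "group_b": 27}
    print(adverse_impact_flags(applicants, advanced))
    # {'group_a': False, 'group_b': True}: group_b's 18% selection rate is below
    # four-fifths of group_a's 30% rate, so the screen would warrant validation.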

“When using any selection tool for hiring, employers need to be aware that seemingly neutral tools can violate anti-discrimination laws if they disproportionately exclude protected classes,” said Akin Gump partner Esther Lander.

The House Subcommittee on Civil Rights and Human Services held a hearing in February to learn more about workplace AI tools, but multiple staffers confirmed that there hasn’t been any movement in this area since then.

Employers have expressed more interest in clearing any potential tools of discriminatory impacts before they’re rolled out for use, Littler Mendelson’s Crews said. Firms like Littler “can test, and vet, and pilot,” the tools.

But doing so requires a window into how the technology actually works, something not all tools allow for, he said. Some “black box” algorithms don’t reveal how certain data points are weighed and measured, leaving Crews and his team unable to do more than test for impermissible bias at the back end of the process.

“You need to be on the ‘transparent and explainable’ side of the ledger,” he said. “‘Computer said no’ is a problem.”

(Updates story that published June 30, 2020, to add comments from HireVue.)

To contact the reporters on this story: Paige Smith in Washington at psmith@bloomberglaw.com; Daniel R. Stoller in Washington at dstoller@bloomberglaw.com

To contact the editors responsible for this story: Martha Mueller Neff at mmuellerneff@bloomberglaw.com; Bernie Kohn at bkohn@bloomberglaw.com; Karl Hardy at khardy@bloomberglaw.com
