New Lie-Detecting AI for Job Interviews Risks Violating Old Laws

March 15, 2024, 9:10 AM UTC

A case brought in Massachusetts over the use of artificial intelligence as a “lie detector” in the employee interview process raises broader concerns over whether software that claims to evaluate candidates’ integrity is illegal, and perhaps even discriminatory.

In his state court case against CVS Health Corp. and CVS Pharmacy Inc., Brendan Baker claimed that the drugstore chain’s use of certain HireVue Inc. tools in online job interviews without a disclaimer was illegal under a Massachusetts law that prohibits the use of lie detector tests in hiring and employment. Baker’s lawsuit recently survived a motion to dismiss after being removed to federal court.

Litigation like Baker’s could hurt the development and use of certain AI-based interview technologies and open the door to cases under a decades-old federal lie detector law, the Employee Polygraph Protection Act. The application of older laws on integrity tests to cutting-edge hiring technology poses a threat to the growing universe of tools that companies seek to use to make faster and more informed decisions.

Labor law and AI experts say concerns about this technology are compounded by the possibility that AI's interpretation of facial expressions and eye contact in the screening process could discriminate against people of color and disabled individuals.

“It sends a warning signal,” Jonathan Crotty, an employer-focused partner at Parker Poe Adams & Bernstein LLP, said of the Baker case.

“This is another example of how there may be some unintended effects coming from these products, that maybe the employers really don’t understand how they work or what legal issues they involve,” he added.

Integrity, Honor

In his proposed class action, Baker said CVS used video-interview technology developed by HireVue that recorded job seekers as they answered questions HireVue said were meant to gauge an applicant’s integrity and honor and help with lie detection.

According to Baker’s complaint, HireVue uploaded interview videos to third-party platform Affectiva, an AI company that works to understand human emotions, cognitive states, and activities by analyzing facial and vocal expressions.

“Our assessments are not, and have never been, designed to assess the truthfulness of a candidate’s response,” a representative for HireVue told Bloomberg Law.

“Rather, these tools are based upon over 100 years of validated industrial organizational psychology to evaluate whether a candidate’s answers are statistically linked to important work-related competencies and measures of individual job performance,” they said. I-O psychology is the study of employee behavior and workplace issues.

The market for “emotional AI” tools is projected to reach $61.05 billion in 2028, up from $27.4 billion in 2021.

In January 2021, the same month that Baker said he was interviewed using this technology, HireVue announced that it removed facial recognition technology for hiring interviews following a third-party audit that determined that visual analysis didn’t add significant value to candidate evaluations. The company had also been the subject of a Federal Trade Commission complaint related to the use of facial analysis tools.

Massachusetts Law

Baker’s lawsuit in the US District Court for the District of Massachusetts relies on a strong state protection against lie detectors and integrity testing that makes it illegal to require or administer such tests as a condition of employment. CVS also failed to inform Baker of his statutory rights before the interview, he claimed.

Baker is seeking a minimum of $500 per violation in statutory damages, and to enjoin CVS from using such technology.

“This dates back to even when I was practicing and doing testing and assessment back in the early 2000s,” said Dan Kuang, a BCGi distinguished researcher and fellow at Biddle Consulting Group. “Anything with a veneer of integrity or some type of a lie detection, we just knew in Massachusetts, don’t even go there.”

Baker’s is the second case to present the argument that HireVue’s video-interview technology constituted an illegal lie detector test. The first—filed in Massachusetts state court against TJ Maxx Co. over its use of HireVue in the hiring process—was voluntarily dismissed last May.

Courtney Hinkle, an associate at Outten & Golden LLP, said the state law’s statutory definition is “quite expansive,” and its treatment of polygraph tests could be read to encompass the video interview screenings HireVue provided, as described in Baker’s complaint.

Federal v. State

In states without protections as strong as Massachusetts’, plaintiffs may have the option of bringing cases under the EPPA, a 1988 federal privacy law that bans private employers from using lie detectors both at the time of hire and during employment.

The EPPA defines a lie detector as a “polygraph, deceptograph, voice stress analyzer, psychological stress evaluator, or any other similar device (whether mechanical or electrical)” used to form an opinion on the person’s honesty.

Congress was compelled to pass the law after the use of polygraph tests to screen job applicants and employees roughly tripled in the decade leading up to 1985, according to an academic paper by Hinkle.

Echoing today’s concerns over AI evaluation tools, several studies commissioned by the federal government had concluded that polygraphs are unreliable, inaccurate, and may violate privacy.

While neither the state nor the federal law was originally written with emerging technologies like artificial intelligence in mind, cases such as Baker’s could prompt federal courts to decide whether they apply.

“You’re kind of testing the scope of the statutes and the similarities that do exist between them, even though I do think just based on the language, Massachusetts law appears to be broader in scope,” Hinkle said. “I think it’s too early to really tell whether or not there would be any sort of persuasive impact based on this state law interpretation to the federal law.”

Discrimination Concerns

Though not a part of Baker’s litigation, employment attorneys and AI experts say the lawsuit also brings attention to the potential for discrimination when AI technology uses visual cues to assess an applicant’s honesty.

Under Title VII of the 1964 Civil Rights Act, it is illegal to make hiring decisions based on protected characteristics such as race, sex, religion, or national origin; the Americans with Disabilities Act provides parallel protection for workers with disabilities.

“It is almost impossible” for AI to perform this kind of assessment without risk of discrimination, according to Kuang.

Kuang, who is Asian, said for example “eye contact in conversation is not a very Asian thing. It’s something that I had to learn. And for someone who may not necessarily be raised in the US—I wasn’t raised in the US—it can tap into cultural differences.”

Referring to the AI interviewing system, he said, “It can map onto that and say ‘that person is really shady’ because they just lacked that eye contact, or they didn’t speak with confidence or other things where it lands along cultural lines. And guess what? Cultural lines lands on protected classifications.”

The US Equal Employment Opportunity Commission has already emphasized a focus on combating discriminatory recruitment and hiring practices through an employer’s use of AI in its 2024-2028 Strategic Enforcement Plan.

Last August, the EEOC settled what was referred to as its first “AI bias” lawsuit, which dealt with an employer’s alleged attempt to use AI to screen out candidates aged 40 and over.

The Baker case also raises concerns about how video hiring screenings could impact individuals with disabilities and those who require accommodations during the interview process, Hinkle said.

“These kinds of cases also probably need to test how these screens work and whether or not they’re complying with very long-standing anti-discrimination laws that are on the books and have been on the books for a very long time,” she said.

A HireVue representative said the company’s assessments use machine learning to analyze a candidate’s transcribed answers to interview questions. Its algorithms “do not look at anything visual or analyze tone, pauses or other forms of inflection,” the representative said.

The use of AI to evaluate unspoken cues and emotions remains part of recent human resources technology innovations.

Other companies, such as Yobs Technologies, a smart-interview platform, also evaluate nonverbal indicators across video and text communication to identify an individual’s communication style.

“When we think lie detector, I think we automatically think of the polygraph,” said Hinkle. “But there’s certainly other types of technologies that fall into it and the purpose of the statute, it was sort of to prohibit lie detectors broadly.”

The case is Baker v. CVS Health Corporation, D. Mass., No. 1:23-cv-11483.

To contact the reporter on this story: Riddhi Setty in Washington at rsetty@bloombergindustry.com

To contact the editors responsible for this story: Rebekah Mintzer at rmintzer@bloombergindustry.com; Jay-Anne B. Casuga at jcasuga@bloomberglaw.com
