Bloomberg Law
May 6, 2022, 9:45 AM

Disability Bias Should Be Addressed in AI Rules, Advocates Say

J. Edward Moreno
Reporter

Workers with disabilities are looking to federal regulators to crack down on artificial intelligence tools that could be biased against them.

At a recent American Bar Association event, U.S. Equal Employment Opportunity Commission Chair Charlotte Burrows said she is particularly interested in guidance that could protect people with disabilities from bias in AI tools. As many as 83% of employers, and as many as 90% of Fortune 500 companies, are using some form of automated tools to screen or rank candidates for hiring, according to Burrows.

At issue is the potential for AI-powered games or personality tests used in hiring or performance evaluations to be harder for people with intellectual disabilities, for example. AI software that tracks a candidate’s speech or body language during an interview could also be biased against people with speech impediments, people with visible disabilities, or those whose disabilities affect their movements.

“That is one area that I’ve identified where it might be helpful for us to give some assistance through guidance,” Burrows said regarding the impact of AI tools on people with disabilities.

The EEOC, which enforces federal anti-discrimination laws in the workplace, announced in October that it would study how employers use AI for hiring, promotions, and firing workers. The last time the commission formally weighed in on hiring tools was in 1978.

Among other things, those guidelines establish a “four-fifths rule,” under which a hiring test may show adverse impact if the selection rate for a protected group is less than 80% of the rate for the most-selected group.

“I am not somebody who believes that because they are from 1978 we need to throw it out,” Burrows said, calling the four-fifths rule a starting point, “not the end of the analysis.”
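For readers who want to see the arithmetic, here is a minimal sketch of how a selection-rate comparison under the four-fifths rule can be computed. The group labels and applicant counts are hypothetical, invented for illustration, and not drawn from EEOC data or any specific employer.

```python
# Hypothetical sketch of the EEOC "four-fifths" (80%) rule.
# Group labels and applicant/hire counts are invented for illustration only.

selections = {
    "group_a": {"applicants": 200, "hired": 60},   # selection rate 0.30
    "group_b": {"applicants": 150, "hired": 30},   # selection rate 0.20
}

# Selection rate = hires / applicants for each group.
rates = {group: d["hired"] / d["applicants"] for group, d in selections.items()}
highest_rate = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest_rate
    # A ratio below 0.8 suggests possible adverse impact under the 1978
    # guidelines; as Burrows notes, that is a starting point, not the end
    # of the analysis.
    status = "possible adverse impact" if impact_ratio < 0.8 else "within guideline"
    print(f"{group}: rate={rate:.2f}, ratio={impact_ratio:.2f} -> {status}")
```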

Reasonable Accommodation

Urmila Janardan, a policy analyst at Upturn, a group that advocates for the use of technology to promote equity, has researched AI hiring technologies used in entry-level hourly jobs. She said employers often use personality tests or games to find candidates with certain characteristics, whether or not those traits are relevant to the role.

A hiring game, for example, could measure things like attention span and ability to remember numbers, which may require accommodation for someone with intellectual disabilities. An assessment could also require somebody to identify the emotions of someone in an image, which could be more difficult for a person with autism, for example.

“The farther a job evaluation strays from the essential functions of the job, the more likely it is to discriminate by disability,” Janardan said. “Is this testing for the essential functions of the job or is it just a game? Is this something where we can clearly, obviously, see the connection to the work or not? I think that’s a very critical question.”

The EEOC does not currently track data on discrimination related to artificial intelligence. Tracking such complaints is further complicated by the fact that most candidates wouldn’t know how AI tools factored into their selection process, according to Ridhi Shetty, a policy counsel at the Center for Democracy and Technology.

Job candidates and employees should be informed when AI tools are used in their selection process or evaluations, and employers should have accommodation plans that don’t require candidates to disclose a disability, Shetty said.

But employers are rarely upfront about accommodation options when it comes to AI assessments, according to Upturn’s research.

“It’s hard to know that you need accommodation,” Shetty said. “It’s hard to know that that particular assessment is not going to actually show the employer what you know you’d be able to demonstrate in a different way, and without having that information filled in, you don’t have an opportunity then as the candidate or the employee looking for advancement to be able to show why you would be fitting for the job.”

Who is Liable?

The 1978 guidelines also don’t specify liability for vendors of hiring tools. AI vendors often advertise their products as free of bias, but when bias is found, liability for a discrimination claim falls squarely on the employer unless its vendor contract includes a shared-liability clause.

“More and more we’re seeing vendors get out ahead of this issue and be prepared to work with employers on this issue, but because the ultimate liability rests with the employer, they really have to take the initiative to understand how this will have an impact,” said Nathaniel M. Glasser, a partner at Epstein Becker Green who works with employers and AI vendors.

The guidelines, which predate the Americans with Disabilities Act, focus primarily on discrimination based on race and gender. Adapting AI tools to avoid bias against disabled people is more complicated because disabilities can take many forms and workers are not legally required to disclose that they have a disability.

Glasser said the conversation around AI bias has increasingly shifted to include perspectives from disabled workers. AI tools are useful to employers who need to sift through troves of resumes or assess relevant skills and, if used correctly, could be less biased than traditional assessments, he noted. The attorney said he advises clients to conduct their own due diligence when designing and implementing AI tools.

“It’s important for employers to understand how the tool works and what accommodations may be provided in the tool itself, but also have a plan for requests for reasonable accommodation from people who aren’t able to reasonably utilize the tool or be evaluated by that tool due to the specific nature of their disability,” Glasser said.

Collecting Data

In a July 2021 letter to the Biden administration’s White House Office of Science and Technology Policy, advocacy group Upturn suggested using Commissioner charges (a rarely used procedure that allows EEOC leadership to initiate targeted bias probes) and directed investigations to address discrimination related to hiring technologies. It also pushed the agency to compel companies to share information on how they use AI tools.

According to Janardan, vendors she’s worked with often struggle to audit their own products and algorithms because the employers who use those tools have no incentive to share hiring data that could expose them to lawsuits.

Upturn also called on the Department of Labor’s Office of Federal Contract Compliance Programs to use its authority to request information on AI tools. The OFCCP, which oversees only federal contractors, is an audit-based agency with more direct access to employer data than the EEOC.

“Given the degree to which employers and vendors have an information advantage in this space, agencies should be proactive and creative in their strategies to collect data and gain glimpses into the nature and extent of employers’ use of hiring technologies,” the Upturn letter said.

To contact the reporter on this story: J. Edward Moreno in Washington at jmorenodelangel@bloombergindustry.com

To contact the editors responsible for this story: Andrew Harris at aharris@bloomberglaw.com; Genevieve Douglas at gdouglas@bloomberglaw.com