Bloomberg Law
July 19, 2019, 10:00 AM

Law Firms Fill Void Left by Lawmakers in AI Discrimination Space

Paige Smith
Reporter
Jaclyn Diaz
Reporter

Big Law firms are gearing up for potential discrimination lawsuits stemming from inherent bias in artificial intelligence tools used by employers. Since March, both Paul Hastings and DLA Piper have launched practices focused on artificial intelligence, while firms such as Littler Mendelson, Fisher Phillips, and Proskauer Rose have partners well-versed in the developing technology and its risks.

In the absence of federal policy, lawyers are proactively informing employers of the risks of AI tools, especially since workers have already filed charges with the Equal Employment Opportunity Commission.

Companies have turned to AI tools to headhunt potential applicants, screen resumes for desirable qualities, or monitor when a worker might be thinking of flying the coop, lawyers told Bloomberg Law. Most attorneys are advising clients to vet new technology carefully before deploying it, and to avoid fully automated tests that could have a disparate impact on employees.

Facebook Inc. has already found itself in hot water over its deployed AI tools. The social media giant has faced lawsuits and federal charges over discriminatory advertising tools that let advertisers, including landlords and employers, restrict who could see their postings based on race or age. Facebook settled five cases in March, agreeing to pay out $5 million and to overhaul its advertising policies to avoid discrimination in the future.

Yet very little has been done about workplace AI challenges at the federal level.

In September 2018, Sens. Kamala Harris (Calif.), Elizabeth Warren (Mass.), and Patty Murray (Wash.) sent a letter to the EEOC requesting that the agency address AI’s liabilities, but there’s been no change at the agency since, and it told Bloomberg Law that no policies are in the works. The agency didn’t say why, but its most recent figures showed a focus on sexual harassment and retaliation litigation, which has been one of its central goals following the explosion of the #MeToo movement.

Federal lawmakers also haven’t taken a stab at any policy acknowledging the potential impacts of the spreading technology. In recent months, some members of Congress have begun paying closer attention to the risks associated with artificial intelligence in the workplace, but key members have been resistant to committing to legislation regulating the field based on what they find.

With both Congress and the EEOC dragging their feet, that leaves the responsibility, at least for now, with lawyers representing clients who want to use the technology.

How It Works, and Doesn’t

Artificial intelligence takes many forms in shaping the employment process, appearing everywhere from resume screening to tests that measure cognitive ability, Paul Hastings partner Kenneth Willner said. But it’s important to know that the tools aren’t any smarter, or any less biased, than the people who design them, he said.

One example of inherent bias is resume screening, where Willner said he commonly sees missteps.

“Oftentimes, employers will try to model their AI resume screening tools in order to replicate their previous human-driven resume screening,” he said.

Willner described one client whose tool searched for candidates by filtering for the words “black” and “Africa,” because those words had yielded ideal candidates in the past.

“Those are words one would not want to use in resume screening,” he said.

While that’s a blatant example, others are more nuanced, like analogy testing, a common cognitive measure used in the interview process.

“You can get different answers, different associations, from people with different backgrounds,” resulting in an impact on certain protected classes of people, he said.

Some employers are also willing to take a risk to reap the benefits the tools can provide a business, Paul Hastings partner Carson Sullivan said.

“There is always a risk of adverse impact,” she said. “They can only do what they can do to minimize it.”

Lawyers’ Advice

AI-based discrimination claims haven’t made their way to the courts yet beyond the Facebook lawsuits, but workers have filed charges with the EEOC, Willner said.

The pending charges at the EEOC involve selection tests, which use AI to choose the questions a candidate must answer and then to score the responses, a process that could be inherently biased, he said.

Clients are expressing interest in learning more about the technology, to safeguard themselves from liability, Fisher Phillips partner Randy Coffey said.

“If you don’t know the basis of the algorithms generating decisions, you are making yourself at risk if it generates” an unfair impact, he said.

Sullivan advises clients to look past the often-lofty promises made by sellers of emerging AI tools. The intention of widening candidate pools is noble, but not all tools are made alike.

“I would say we’re watching everything,” she said. “The technologies are ever-changing.”

Back in 2016, the EEOC did hold an educational meeting about AI, though no policies emerged from it. Littler Mendelson shareholder Marko Mrkonich testified at that meeting, but said he’s happy the agency hasn’t moved rashly to limit the tools.

“Trying to create a brand new, comprehensive answer, before you know what the question is, is always a challenge,” he said.

To contact the reporters on this story: Paige Smith in Washington at psmith@bloomberglaw.com; Jaclyn Diaz in Washington at jdiaz@bloomberglaw.com

To contact the editor responsible for this story: Cheryl Saenz at csaenz@bloombergtax.com