As companies turn to artificial intelligence for help making hiring and promotion decisions, contract negotiations between employers and vendors selling algorithms are being dominated by an untested legal question: Who’s liable when a robot discriminates?
The predictive strength of any algorithm depends at least in part on the information it is fed by human sources. That raises concerns the technology could perpetuate existing biases against people applying for jobs, home loans, or unemployment insurance.
Contract talks between vendors who market artificial intelligence services and businesses looking to use algorithms to streamline recruitment are inherently tense. That's because the adoption of artificial intelligence to handle business functions traditionally performed by humans creates uncharted legal territory.
“What we have right now is old laws and new tech,” said Jennifer Betts, an attorney who represents businesses for Ogletree Deakins in Pittsburgh. “Until the laws are updated, there will be a lot of questions.”
Both sides want to avoid the cost of being a party to a potentially drawn-out, precedent-setting case. The Equal Employment Opportunity Commission is already investigating at least two cases involving claims that algorithms unlawfully exclude certain groups of workers during the recruitment process, and seven attorneys told Bloomberg Law it’s just a matter of time until courts are asked to weigh in on similar arguments.
“We think that the next wave of litigation may be to attack the use of algorithmic hiring and recruitment software,” said Adam Forman, a business attorney in Chicago for Epstein Becker Green.
The companies actually making hiring decisions are likely to be on the hook for any discrimination claims. But tech firms that create hiring algorithms can also expect to be hauled into court. That has both sides jostling over questions like whether vendors will allow customers to look under the hood at their algorithms or agree to pick up the legal tab for customers if they later get sued.
How these issues are being resolved varies widely, the attorneys who spoke with Bloomberg Law said.
“There’s potential leverage on both sides,” Betts said. “Vendors want to be able to say to customers that they already have these big companies on board using the algorithm. On the other side, there aren’t that many reputable names in town when it comes to choosing a vendor.”
‘The Chatbot Doesn’t See Gender’
In many ways, hiring algorithms are similar to written tests and other assessments employers use to weed out job candidates.
EEOC guidance on the use of such tests, first published in 1978, stresses the need for the results to be validated—that is, to show that they accurately gauge a person’s ability to perform the job. The guidance also makes clear the responsibility is on the employer using the test, even if it was created by outside vendors.
“While a test vendor’s documentation supporting the validity of a test may be helpful, the employer is still responsible for ensuring that its tests are valid,” the EEOC says in a fact sheet on the agency’s website.
In other words, taking a vendor’s word for it that a hiring algorithm has been properly vetted won’t be much of a defense in court.
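The best-known benchmark in the 1978 Uniform Guidelines is the "four-fifths rule": if one group's selection rate falls below 80 percent of the highest group's rate, the screening tool may be producing adverse impact. A minimal sketch of that check, using hypothetical applicant and hire counts:

```python
# Sketch of the EEOC "four-fifths rule" adverse-impact check from the
# 1978 Uniform Guidelines. All applicant/hire counts are hypothetical.

def selection_rate(hired, applied):
    """Fraction of applicants in a group who were selected."""
    return hired / applied

# Hypothetical screening results by group.
groups = {
    "group_a": selection_rate(hired=60, applied=100),  # 0.60
    "group_b": selection_rate(hired=30, applied=100),  # 0.30
}

highest = max(groups.values())
for name, rate in groups.items():
    ratio = rate / highest
    # A ratio below 0.8 is the Guidelines' rough evidence of adverse impact.
    flag = "possible adverse impact" if ratio < 0.8 else "ok"
    print(f"{name}: rate={rate:.2f}, ratio={ratio:.2f} -> {flag}")
```

Passing a check like this is not a legal safe harbor, but it is the kind of vetting employers cannot simply delegate to the vendor's say-so.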
But vendors have so far largely been unwilling to let customers take a look at their algorithms, for fear of exposing proprietary technology.
“We would not share the algorithm,” said Aida Fazylova, the chief executive officer for XOR, which offers chatbot technology to recruit and screen job applicants. “I think the algorithm itself is not that important; it’s the data set that is important.”
“The chatbot doesn’t see gender, age, or ethnicity,” Fazylova added. “It screens candidates based on a very strict criteria that is unbiased.”
Even if the technology vendor agreed to give a potential customer a closer look at how the algorithm was developed, that would not solve problems arising from the subjective nature of the hiring process.
A smartphone application called “Not Hotdog” is often cited as an example of how data is used to teach an algorithm.
The app, developed for an episode of HBO’s “Silicon Valley,” reviews user-submitted photos of food and other subjects to determine whether an image contains a hot dog. It was crafted by feeding the app a number of photos of hot dogs so that the algorithm learned how to identify the distinguishing features of the popular barbecue food. Developers then tweaked the app by providing it with information on how well it was performing.
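The feed-examples, score, tweak cycle the developers used can be sketched in a few lines. This toy stand-in uses logistic regression on made-up three-number "feature vectors" rather than the deep image networks behind the real app:

```python
# Toy illustration of the training loop behind an app like "Not Hotdog":
# show the model labeled examples, score its guesses, nudge its weights.
# Real image classifiers use deep networks; this stand-in fits logistic
# regression to hypothetical 3-number "feature vectors".
import math

def predict(weights, features):
    """Probability the features describe a hot dog (sigmoid of dot product)."""
    z = sum(w * x for w, x in zip(weights, features))
    return 1 / (1 + math.exp(-z))

# Hypothetical photos reduced to features, labeled 1 = hot dog, 0 = not.
data = [([1.0, 0.9, 0.1], 1), ([0.9, 1.0, 0.2], 1),
        ([0.1, 0.2, 0.9], 0), ([0.2, 0.1, 1.0], 0)]

weights = [0.0, 0.0, 0.0]
lr = 1.0
for _ in range(1000):                     # repeated passes over the examples
    for features, label in data:
        error = predict(weights, features) - label
        # "Tweak the app": shift each weight against its share of the error.
        weights = [w - lr * error * x for w, x in zip(weights, features)]

print(predict(weights, [1.0, 0.9, 0.1]))  # high: looks like a hot dog
print(predict(weights, [0.1, 0.2, 1.0]))  # low: not a hot dog
```

The model only ever learns patterns present in the examples it was shown, which is exactly why the choice of training data matters so much in hiring.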
But quantifying the characteristics that are likely to make a job candidate a good fit and developing ways to identify them — the central hiring challenge for any employer — is nothing like designing artificial intelligence technology that is able to tell the difference between a hot dog and other food items, said New York University professor Julia Stoyanovich.
“We don’t actually know how to state how someone will perform well on the job,” Stoyanovich, who serves on a New York City automated-decision-systems task force, told Bloomberg Law. “Using the past to predict the future often means replaying all of the discrimination from the past. It is not an objective decision.”
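Stoyanovich's point can be made concrete with a deliberately simple, hypothetical example: two groups of past candidates with identical qualification scores, where one group was historically hired at a lower rate. A model fit to those decisions recommends whoever history favored:

```python
# Sketch of "replaying discrimination from the past". The historical
# records below are hypothetical: groups "a" and "b" have identical
# scores, but group "b" was hired at a lower rate.
from collections import defaultdict

# (group, qualification_score, hired?)
history = [
    ("a", 0.9, 1), ("a", 0.8, 1), ("a", 0.7, 1), ("a", 0.6, 0),
    ("b", 0.9, 0), ("b", 0.8, 0), ("b", 0.7, 1), ("b", 0.6, 0),
]

# "Training": per-group hire rate among past candidates scoring >= 0.7.
rates = defaultdict(lambda: [0, 0])
for group, score, hired in history:
    if score >= 0.7:
        rates[group][0] += hired
        rates[group][1] += 1

def predict_hire(group):
    hired, total = rates[group]
    return hired / total > 0.5   # recommend whoever history favored

print(predict_hire("a"))  # True
print(predict_hire("b"))  # False: the model replays the old pattern
```

Nothing in the scores justifies the different recommendations; the bias comes entirely from the historical labels the model was trained on.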
Indemnification in Eye of Beholder
As a result of those challenges, businesses are seeking other ways to validate an algorithm's effectiveness, including by asking whether the technology has been stress-tested. Attorneys told Bloomberg Law they're also advising employers to run hiring algorithms in parallel with their existing hiring processes until they can verify the results.
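A parallel run of that kind amounts to scoring candidates with the vendor's tool while the existing process still makes the decisions, then auditing where the two disagree. A minimal sketch, with entirely made-up candidates and decisions:

```python
# Sketch of a parallel ("shadow mode") validation run: the algorithm
# scores candidates, but humans still decide; disagreements get audited.
# Candidate IDs and decisions below are hypothetical.

# (candidate_id, human_decision, algorithm_decision)
parallel_run = [
    ("c1", True,  True),
    ("c2", False, False),
    ("c3", True,  False),   # disagreement worth auditing
    ("c4", False, False),
    ("c5", True,  True),
]

agree = sum(1 for _, human, algo in parallel_run if human == algo)
agreement_rate = agree / len(parallel_run)
disagreements = [cid for cid, human, algo in parallel_run if human != algo]

print(f"agreement: {agreement_rate:.0%}")    # 80%
print("review these cases:", disagreements)  # ['c3']
```

Tracking the disagreements, rather than just the headline agreement rate, is what lets an employer see whether the tool is screening out candidates its own recruiters would have advanced.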
Employers are also pushing vendors to agree to pay up—or indemnify—in the event the company gets sued for discrimination.
“We don’t see a downside to insisting that the vendor indemnify the employer, should the algorithm be subject to legal challenge,” Forman said. “If you validated your product and you believe in it, back it up. But we also tell the employer: ‘Look, this vendor is selling to a lot of other customers. If some other company gets sued and this vendor indemnifies, how much money are they going to have left?’”
But many vendors have been unwilling so far to make that kind of pledge, said Bradford Newman, a California attorney for Paul Hastings.
“The manufacturers are saying, ‘We will help insulate you from bias, but, by the way, we’re not responsible for any liability if you get sued,’” Newman said. “They’re pitching this naive feeling of comfort for employers that if we’re using an algorithm and the vendor says it has been vetted, we are in good shape.”
Vendors should expect to be roped into litigation over their algorithms, Newman said.
“As soon as the employer is sued, they’re going to bring you in,” he said.