Artificial intelligence (AI) and machine learning products and services have been entering the market with greater frequency. As a result, many of us interact with AI on a frequent (if not daily) basis. From targeted advertisements and suggested articles on our social media feeds to results provided by search engines, AI is ever-present in our lives.
Now, in addition to government agencies and marketing companies who have long been relying on AI, some employers have begun to utilize AI in order to assist with recruitment and performance management functions. Indeed, algorithms are being used to screen resumes, analyze video interviews, and track employees’ productivity.
While streamlining employment-related processes can be useful, employers should carefully consider and evaluate whether it is prudent to unleash AI into their workplace for these and/or other employment-related purposes.
While the prospect of having a machine—a presumably neutral arbiter—review resumes or employment applications may certainly be tempting, recent research shows that algorithms have the potential to discriminate.
Algorithms “learn” by accessing either a specific, designated data set or an algorithm-driven search for data sets residing on the internet or in a confined database. The algorithms then identify and analyze patterns within the training data set(s) in order to apply those patterns to assist in making future decisions.
What happens, however, when the training data contains flawed or biased information? Because machine-learning algorithms are designed to analyze patterns in their training data, they may be susceptible to relying upon any potential biases embedded in the data to make future decisions.
Although the most well-intentioned computer scientist might create a neutrally coded machine learning system, that alone does not guarantee that the algorithm will operate bias-free. Instead, the critical inquiry centers on the quality of the training data provided to the program and the efforts expended to ensure the training data is as neutral as possible. Given the difficulty of identifying bias by manually reviewing data sets, the risk of algorithmic bias may still remain.
Additionally, risk exists in what the algorithm reviews; for example, if in reviewing resumes from a training data set, algorithms are able to determine a person’s age, gender, race, or other protected characteristic, they may begin to consider these traits in making decisions. Simply put, the end result may be bias in, bias out.
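The “bias in, bias out” dynamic can be illustrated with a deliberately simplified sketch. The data, features, and scoring method below are entirely hypothetical and are not drawn from any real hiring tool; the point is only to show that a screener which learns from skewed historical decisions will reproduce that skew, even though its code contains no explicit reference to a protected trait.

```python
# Illustrative sketch only: hypothetical data showing how a naive
# frequency-based resume screener reproduces bias present in its
# training set. All names and values are invented for demonstration.

# Hypothetical historical outcomes: "group" stands in for a proxy for a
# protected characteristic (e.g., a gendered activity listed on a
# resume); "skill" is the job-relevant qualification.
training = [
    {"skill": "high", "group": "A", "hired": True},
    {"skill": "high", "group": "A", "hired": True},
    {"skill": "high", "group": "B", "hired": False},  # biased past decision
    {"skill": "low",  "group": "A", "hired": False},
    {"skill": "low",  "group": "B", "hired": False},
]

def hire_rate(feature, value):
    """P(hired | feature == value), learned purely from the training data."""
    rows = [r for r in training if r[feature] == value]
    return sum(r["hired"] for r in rows) / len(rows)

def score(candidate):
    """Naive screener: average the per-feature historical hire rates."""
    return (hire_rate("skill", candidate["skill"])
            + hire_rate("group", candidate["group"])) / 2

# Two equally skilled candidates who differ only on the proxy feature:
a = score({"skill": "high", "group": "A"})
b = score({"skill": "high", "group": "B"})
print(a > b)  # True: the model penalizes group B -- bias in, bias out
```

A real machine learning system is far more complex than this counting exercise, but the failure mode is the same: the model never “sees” a protected trait directly, yet a correlated proxy in the training data is enough to produce discriminatory scores.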
Federal Law and Algorithmic Bias
While there has not yet been significant litigation concerning whether algorithmic bias is actionable with respect to employment decisions, several lawsuits have been filed concerning discriminatory advertising practices which implicate algorithms.
As discussed by the Proskauer attorneys at greater length in a Bloomberg Law Practical Guidance, Title VII of the Civil Rights Act of 1964 prohibits discrimination on the basis of a protected characteristic (e.g., race, color, religion, sex, or national origin). Generally speaking, plaintiffs who claim discrimination in violation of Title VII often allege that they were subjected to disparate treatment, disparate impact, or both.
A plaintiff alleging that they have been subject to disparate treatment must demonstrate that the employer treated them differently than other similarly situated employees on the basis of their protected status. In order to sufficiently plead their claim, the plaintiff must show that the employer actually intended to discriminate against them on the basis of their protected characteristic.
Although some courts and legal commentators have suggested that algorithms, as machines, cannot discriminate because they lack the requisite “intent,” it is notable that some courts have permitted disparate treatment claims to proceed based on allegations of unconscious or implicit bias.
In addition to alleging disparate treatment, plaintiffs alleging violations of Title VII may also bring claims based upon disparate impact. To prevail on such a claim, a plaintiff must show that a facially neutral employment practice disproportionately impacts or burdens a protected group.
Courts analyzing disparate impact claims have often relied on statistical significance to calculate the likelihood that a challenged practice causes a disparate impact in violation of Title VII. As some commentators have noted, because AI is data-driven, it is theoretically possible for a plaintiff to argue that an AI-driven employment practice has a disparate impact on members of a protected class.
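One commonly cited benchmark in this area is the EEOC’s “four-fifths” guideline (from the Uniform Guidelines on Employee Selection Procedures), under which a selection rate for a protected group that is less than 80% of the highest group’s rate may be treated as evidence of adverse impact; courts have also looked to whether a disparity exceeds roughly two to three standard deviations. The sketch below shows how those two screens might be computed; the applicant numbers are hypothetical, and the calculation is illustrative rather than a substitute for an expert statistical analysis.

```python
# Illustrative sketch: two common screens for disparate impact -- the
# four-fifths selection-rate comparison and a two-proportion z-statistic.
# The applicant counts below are hypothetical.
import math

def impact_ratio(sel_a, n_a, sel_b, n_b):
    """Ratio of the lower group's selection rate to the higher rate."""
    rate_a, rate_b = sel_a / n_a, sel_b / n_b
    return min(rate_a, rate_b) / max(rate_a, rate_b)

def z_statistic(sel_a, n_a, sel_b, n_b):
    """Two-proportion z-test; |z| of roughly 2-3 or more is often
    treated by courts as statistically significant."""
    p_a, p_b = sel_a / n_a, sel_b / n_b
    pooled = (sel_a + sel_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical AI screen: 60 of 100 group-A applicants advance,
# versus 30 of 100 group-B applicants.
ratio = impact_ratio(60, 100, 30, 100)
z = z_statistic(60, 100, 30, 100)
print(ratio)  # 0.5 -- below the 0.8 four-fifths threshold
```

In this hypothetical, the 0.5 impact ratio falls well below the four-fifths threshold and the z-statistic exceeds the two-to-three standard deviation range, the sort of showing a plaintiff might marshal against an AI-driven screening practice.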
Aside from exercising caution from a federal standpoint, employers should also be vigilant that their use of AI complies with the state and/or local anti-discrimination laws in the jurisdictions in which they operate, which often provide greater protections than federal anti-discrimination law, including an expanded number of protected classes.
There is no question that employers have been (and will continue to be) tempted by the promise of AI to streamline systems at rates never before seen. Notwithstanding this alluring benefit, employers would be well-served to seek legal counsel before incorporating AI into their pre- and post-hire processes in order to avoid the risks associated with algorithmic bias.
This column does not necessarily reflect the opinion of The Bureau of National Affairs, Inc. or its owners.
Joseph O’Keefe is a partner at Proskauer Rose LLP in New York and Danielle Moss and Tony Martinez are associates at the firm in New York. They all focus their practices primarily on employment litigation defense in multiple forums, including arbitration, administrative agencies, and federal and state courts. They also counsel employers about a wide array of issues, including the emerging challenges associated with the introduction of AI applications to the workplace.