Ten U.S. senators sent a joint letter to Janet Dhillon, the chair of the Equal Employment Opportunity Commission, on Dec. 8, 2020, urging the EEOC to use its powers under Title VII of the Civil Rights Act of 1964 to “investigate and/or enforce against discrimination related to the use of” AI hiring technologies.
The senators’ letter demanded the EEOC answer the question of whether under Section 705(g)(5) of Title VII, the commission possesses the authority to conduct a technical study and investigation into “the development, design, use and impacts” of AI hiring technologies “absent an individual charge of discrimination,” and to explain “why or why not.”
The senators proactively posed three telling questions:
- Can the EEOC request access to “hiring assessment tools, algorithms, and applicant data from employers or hiring assessment vendors and conduct tests to determine whether the assessment tools may produce disparate impacts?”
- If the EEOC were to conduct such a study, could it publish its findings in a public report? and
- What additional authority and resources would the EEOC need to proactively study and investigate these AI hiring assessment technologies?
Sounding an Alarm
These questions, posed by 10 influential senators to a powerful federal agency, should set off a cacophony of alarm bells.
It is no accident that the letter frames the urgency of the questions posed in the context of businesses beginning to quickly reopen and rapidly hire according to Covid-19 guidelines.
The letter also presents the matter as one requiring deliberate and proactive work by the commission to combat systemic discrimination that job applicants “alone cannot effectively learn about and challenge.”
While the senators note “hiring technologies can sometimes reduce the role of the individual hiring managers’ biases,” they conclude by opining that “they can also reproduce and deepen systemic patterns of discrimination reflected in today’s workforce data.”
Citing a June 2, 2020, Bureau of Labor Statistics population survey and a July 2, 2020, Reuters article, the senators call out that “today, Black and Latino workers are experiencing significantly higher unemployment than their white counterparts and the unemployment gap between Black and white workers is the highest it’s been in five years.”
They posit that “effective oversight of hiring technologies requires proactively investigating and auditing their effects on protected classes” and “enforcing against discriminatory hiring assessment or processes.”
Noting that “far too little is known” about the “design, use, and effects of hiring technologies,” the senators assert it is “essential” that these hiring processes “advance equity in hiring, rather than erect artificial and discriminatory barriers to employment.”
A Clear Message to Investigate and Enforce
The regulatory handwriting on the proverbial wall is clear. These 10 senators want the EEOC to utilize its power and resources to launch investigations and enforcement actions against vendors who create hiring algorithms and companies that utilize them.
They encourage the EEOC to compel production not just of the proprietary algorithms, but the data sets used to train them, and the applicant data from individual employers reflecting the impact of the algorithmic output.
They believe vendors and companies should supply this information so the EEOC can “conduct tests” to determine whether the algorithms produce disparate impacts on the hiring of protected individuals. Finally, they suggest the EEOC publicly publish its findings.
Companies that create this machine-learning-based hiring technology, and the businesses that use it, should prepare now for government inquiries and enforcement actions.
How to Prepare for What’s to Come
Here is what to understand and the steps to take.
First, while there are many proposals under consideration, actual legislation restricting the use of AI in hiring tools does not seem imminent.
Second, under the Biden administration, the EEOC will likely step up its enforcement efforts in the area of AI and machine-learning driven hiring tools.
Third, in response to requests from the EEOC to produce algorithms, training data sets and related information, there will be serious and legitimate trade secret and confidentiality considerations for the innovators who have invested substantial time and resources developing these products.
Innovators should assess their current trade secret protection measures and implement additional steps specifically designed to safeguard competitively valuable information before any government inquiries are received.
Fourth, if the EEOC either states an intent to publicly disclose investigations of this private-sector technology or refuses to commit to maintaining its confidentiality, it is imperative that companies evaluate challenges to the agency's process and its obligations regarding the safeguarding and disclosure of specific information.
Fifth, AI vendors should prepare for plaintiff class action lawsuits by continually testing their algorithms to ensure there is no implicit or unintended bias.
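One common heuristic for such testing is the "four-fifths rule" from the EEOC's Uniform Guidelines on Employee Selection Procedures, under which a selection rate for any protected group that is less than 80% of the rate for the highest-scoring group is generally regarded as evidence of adverse impact. The sketch below illustrates that calculation; the group names and applicant counts are hypothetical, and the rule is a screening heuristic, not a substitute for a full statistical or legal analysis.

```python
# Minimal sketch of a disparate-impact screen using the EEOC's
# "four-fifths rule" heuristic. Group labels and counts below are
# hypothetical illustrations, not real applicant data.

def selection_rates(outcomes):
    """outcomes: dict mapping group -> (selected, total applicants)."""
    return {group: selected / total for group, (selected, total) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Return, per group, (impact ratio, flagged?) where the impact
    ratio is that group's selection rate divided by the highest
    group's selection rate. A ratio below the threshold (80%) is
    flagged as potential adverse impact."""
    rates = selection_rates(outcomes)
    highest = max(rates.values())
    return {
        group: (rate / highest, rate / highest < threshold)
        for group, rate in rates.items()
    }

results = four_fifths_check({
    "group_a": (48, 100),  # 48% selection rate
    "group_b": (30, 100),  # 30% selection rate
})
# group_b's impact ratio is 0.30 / 0.48 = 0.625, below 0.8, so it is flagged.
```

A vendor running this kind of screen continuously, on each retrained model and each client's applicant pool, creates exactly the sort of audit trail that the senators' letter contemplates the EEOC requesting.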
Finally, companies that purchase these AI hiring tools should ensure, when negotiating the initial vendor contract, that the vendor supplies clear representations as to the product's fairness and that indemnification provisions are negotiated with a future government investigation in mind.
The industry of AI hiring tools is growing exponentially based largely on the promise of efficiency and the more elusive hope that algorithms will lead to less, not more, bias by hiring managers. The senators' letter serves as an important reminder that any disconnect between the promise and reality of these tools can lead to substantial liability.
When designing, purchasing, and utilizing these tools, innovators and business-side consumers are well-advised to view these products, the related sales and license agreements, and internal communications and studies regarding output accordingly. For the immediate future, the operating assumption must be that all internal and third-party audits, and related data and communications, will be scrutinized by the government and potentially by the public at large.
This column does not necessarily reflect the opinion of The Bureau of National Affairs, Inc. or its owners.
Bradford Newman is a litigation partner at Baker McKenzie who specializes in matters related to trade secrets and artificial intelligence. He is the chair of the AI subcommittee of the American Bar Association. He has been instrumental in proposing federal AI workplace and IP legislation that in 2018 was turned into a U.S. House of Representatives discussion draft bill.