A novel New York City law that penalizes employers for bias in artificial intelligence hiring tools is leaving companies scrambling to audit their AI programs before the law takes effect in January.
The law, which requires employers to conduct an independent audit of the automated tools they use, marks the first time employers in the US will face heightened legal requirements if they wish to use such automated decision-making tools.
Such tools—which can range from algorithms built to find ideal candidates to software that assesses body language—have faced scrutiny in recent years for their potential to perpetuate bias against protected groups.
But without guidance from the city, employers aren’t clear on what, exactly, is expected of them or how to prepare.
“Notably, the law does not define who or what is meant by an ‘independent auditor,’” said Danielle J. Moss, a partner at Gibson Dunn & Crutcher LLP. Employers will likely rely on a law firm or consulting firm, but the law does not specify who should perform the audit other than that it be “independent.”
Adding an extra burden, the automated tools employers use often come from third-party vendors, which aren’t regulated under the law. That means employers must engage with both those vendors and outside auditors to ensure compliance.
“That’s exactly the concern, where they may be using a third-party vendor; they may not necessarily know what went into the construction of these AI tools,” said Shauneida Navarrete, special counsel at Stroock & Stroock & Lavan LLP.
Employers will need to consult both technical experts who understand how the tools work and attorneys who can spot the potential for discrimination complaints.
“That’s why it requires so much more than your head of HR reviewing the tool or tinkering with the tool,” Navarrete said.
Audit Guidance Lacking
Though it’s the first time US employers will be subject to these requirements, laws addressing AI bias in hiring are on the rise, with several states and the federal government taking steps to regulate those tools.
The New York City Council in November passed Int. No. 1894-A, enacted as Local Law 144 of 2021, which amended the city’s administrative code to protect employees from unlawful bias when automated employment decision tools are used in hiring and promotion.
In addition to mandating the AI tool audits, the law requires employers to make the results publicly available on their websites. Employers also must disclose the data an AI tool is collecting, either by posting that information publicly or by responding to an inquiry.
Employers also must give job candidates residing in New York City at least 10 days’ notice before using automated tools to assess them; that notice must include, among other things, any accommodations that may be available.
The city hasn’t issued guidance on complying with the law, but employers may be able to draw on a US Equal Employment Opportunity Commission technical assistance document that covers artificial intelligence hiring tools. The federal guidance includes questions to ask vendors and advice on providing reasonable accommodations, particularly for workers or candidates with disabilities.
The city didn’t respond to a request for comment on whether it plans to issue guidance.
Enforcement Questions
The New York law will be enforced by the city’s Office of the Corporation Counsel, though it’s unclear how the office will do so. The most likely way the city will catch wind of a problematic AI decision-making process is through a complaint, Navarrete said.
Notably, the law doesn’t include a private right of action, meaning workers can’t go directly to court if they believe they’ve been discriminated against via an employer’s AI tools. But there is a potential for class action complaints in federal court if, for example, the city finds a tool an employer used was discriminatory.
Employers that don’t meet the law’s requirements will be subject to a $500 fine for a first violation and $1,500 for each subsequent violation. Those fines are multiplied by the number of AI tools used and the number of days the employer fails to correct the issue.
“While the penalties don’t seem that large, they add on every day,” Navarrete said.
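Under that formula, to take a hypothetical example, an employer running two covered tools that stayed out of compliance for 30 days could face roughly $88,000 in penalties: $500 for each tool’s first violation, plus $1,500 per tool for each of the remaining 29 days.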
Employers still have time to come into compliance before January, and it’s possible the city will issue some form of guidance before enforcing the law, Moss said. As long as compliance standards are clear, the law likely won’t curb employers’ use of AI tools.
“As with all regulations, there’s going to be an adjustment period,” Moss said. “I don’t think it’s deterring people from using the tools, but I do think employers and vendors are really eager for guidance from the city.”
More to Come?
Although New York’s law is the first of its kind in the US, it likely won’t be the last, given the momentum on the issue.
At the federal level, the EEOC’s May guidance directs employers to assess their AI tools for potential bias against workers with disabilities. That guidance doesn’t impose additional requirements or carry any threat of penalties.
Congressional Democrats last year introduced the Algorithmic Justice and Online Platform Transparency Act, which would establish a cross-government investigation into discriminatory algorithmic processes across sectors, including employment. The bill hasn’t advanced since its introduction.
At the state and local level, Illinois became the first state to regulate employer AI tools in 2019, although its law is narrower and specifically targets tools that evaluate video interviews.
District of Columbia Attorney General Karl Racine introduced a bill last year that would mirror New York City’s law, putting the onus on employers to ensure the AI tools they use aren’t discriminating against certain candidates. The bill is still under review by the D.C. Council, with a public meeting scheduled for next month.
The California Civil Rights Department announced earlier this year that it’s drafting regulations to clarify that the use of automated decision-making tools is subject to employment discrimination laws. The proposed regulations include record-keeping requirements to retain “all machine-learning data” for four years.
Nathan Jackson, a California-based attorney at Liebert Cassidy Whitmore, said that, as with the New York City law, most California employers that use AI tools will need to understand how those tools work as they prepare for compliance.
“Any time law and technology intersect, there’s going to be a learning curve,” Jackson said.