AMS’s Jonathan Kestenbaum explains how a new employment law enacted in New York City will change how employers use AI tools when recruiting and hiring employees, as similar proposals gain traction nationwide.
Starting July 5, New York City’s Automated Employment Decision Tool law requires employers that use AI and other machine learning technology as part of their hiring process to perform an annual audit of their recruitment technology. These audits must be performed by a third party and check for instances of bias—intentional or otherwise—built into these systems.
Failure to comply with the new law, which applies to any company operating and hiring in New York City, could result in fines starting at $500 and reaching a maximum of $1,500 per violation.
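For context, the audits the law contemplates are statistical in nature: they compare how a screening tool treats candidates from different demographic groups. Below is a minimal, hypothetical Python sketch of one such statistic, an impact ratio comparing selection rates across groups in the spirit of the EEOC’s four-fifths rule; the sample data, category names, and 0.8 threshold are illustrative assumptions, not the law’s text.

# Minimal sketch of the kind of statistic a bias audit examines:
# per-group selection rates and each group's "impact ratio" against
# the most-selected group. Data and threshold are illustrative only.

from collections import Counter

# Hypothetical screening outcomes: (demographic category, was_selected)
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

totals = Counter(cat for cat, _ in outcomes)
selected = Counter(cat for cat, ok in outcomes if ok)

# Selection rate: candidates selected / candidates screened, per category
rates = {cat: selected[cat] / totals[cat] for cat in totals}
best = max(rates.values())

# Impact ratio: each category's rate relative to the highest rate.
# The EEOC's "four-fifths" guideline treats ratios below 0.8 as a
# signal of possible adverse impact (an assumption here, not the NYC law's text).
for cat, rate in rates.items():
    ratio = rate / best
    flag = "  <-- review" if ratio < 0.8 else ""
    print(f"{cat}: selection rate {rate:.2f}, impact ratio {ratio:.2f}{flag}")

An actual audit must, of course, be conducted by an independent third party on real applicant data, per the law, but the underlying arithmetic is no more exotic than this.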
You may be thinking: my company doesn’t have offices in New York City, and we don’t use AI, so these arcane laws don’t apply to me. But you would be wrong. Regardless of where your offices are, the rise of remote work makes it more likely that candidates in New York City will apply for roles at non-local organizations, and a similar law could be coming to a city near you.
Rise of AI Bias Laws
The New Jersey Assembly is considering a limit on the use of AI tools in hiring unless employers can prove they conducted a bias audit. Maryland and Illinois have enacted laws that prohibit the use of facial recognition and video-analysis tools in job interviews without candidates’ consent. Meanwhile, the California Fair Employment and Housing Council is mulling new mandates that would outlaw AI tools and tests that could screen applicants based on race, gender, ethnicity, and other protected characteristics.
And claiming ignorance of AI in your talent solution won’t be a viable excuse. Today, 40% of talent tech solutions integrate AI in some fashion. Does your company use AI to filter through the hundreds or thousands of resumes received every week? If so, your organization is almost certainly subject to this regulation.
In short, no hiring executive or counsel, at any company, will be able to hide behind an algorithm to justify their hiring decisions and shield themselves from potential fines or compliance issues.
Preventing AI Bias
While this impending regulation puts many organizations behind the eight ball on compliance, it’s a powerful sign of government catching up to emerging technology before it wreaks havoc on the workforce.
To be sure, AI has impacted corporate hiring in mostly positive ways. It has automated the more mind-numbing aspects of hiring, such as filtering through thousands of resumes, and helped remove unintended bias from hiring processes. Hiring managers receive candidates filtered by skills and experience—not by where they went to school, where they live, or when they graduated from high school.
But left unchecked, AI can also perpetuate unintended biases, violating both local and existing federal laws. For instance, an organization seeking to enhance the diversity of its workforce can’t legally use AI to filter in favor of protected classes, such as pregnant women, just as it can’t filter them out.
While promoting diversity is a positive goal, hiring candidates based exclusively on a protected class is legally no different from excluding candidates based on one, because it gives one pool of candidates an unlawful advantage over the rest.
Organizations that are just now adopting AI in their hiring process and navigating how best to comply with the regulation would do well to keep these simple truths in mind:
Lean on third parties. This is mandated by the NYC regulation, but it’s also good practice. Groundbreaking regulation often comes with speculative best practices, and even with strong attention paid to AI within your organization, you may not be compliant. Third parties can take the guesswork out of executing on the new regulation.
Transparency is key. The NYC regulation is clear on this point: Employers must make the third-party audit results available to the public on their corporate website, and they must inform candidates and current employees who reside in New York City of their use of AI in hiring decisions via e-mail, postal mail, job postings, or the company website. Sure, this applies to NYC-based candidates only, but with an increasingly globalized workforce, it makes sense to apply this level of transparency across all candidates, locations, and stakeholders.
Audit all hiring processes alongside AI technology. Consider diversity hiring as a use case: If you employ AI on the front end to widen the candidate pool and include more diverse candidates in a hiring process, AI is being employed correctly. But if you use AI at the end of the hiring process to determine whom to hire, you are breaking not just NYC AI regulation but federal law. Understand where AI is best applied and where it may be introducing unlawful bias.
New York City’s new anti-bias law is a significant step forward in the ongoing fight against discrimination and bias in the workplace. By mandating comprehensive anti-bias audits, the law pushes employers to confront their biases and create a more inclusive and fair working environment. Rather than look the other way, all employers should view it as the first step toward a rapidly approaching global revolution in the future of work.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.
Author Information
Jonathan Kestenbaum is a licensed attorney and managing director of tech strategy and partners at AMS, a recruitment solution provider and consultancy.