New York State Bills Push AI Worker Bias Guardrails Past NYC Law

March 13, 2024, 9:40 AM UTC

New York lawmakers are looking to police potential discrimination in automated employment decision tools, crafting statewide legislation that worker advocates say fills the gaps left open by New York City’s bias audit law and other states’ vendor-backed proposals.

A pair of bills pending in the state legislature (S7623 and A9315) would require broader and more detailed bias audits and transparency notices for artificial intelligence tools than the first-of-its-kind New York City law that took effect in July. The tools covered could include software that screens job applicant resumes or ranks candidates it deems most qualified.

The bills’ supporters say they’re likely to capture a broader universe of employers, technology developers, and types of employment decisions than those in related proposals backed by HR tech vendor Workday Inc. in California, Connecticut, and other states. Workday has provided “input in the form of technical language” for some state-level legislation, according to a company spokesperson.

The New York bills also include a robust private right of action that isn’t likely to win friends in the business community. The measures would let workers sue for alleged violations—instead of relying on the state to enforce them—and would impose liability jointly on employers and on the AI developers and vendors for any damages awarded in court.

The legislation is part of a recent explosion of government proposals aimed at reducing the potential harms of AI tools in business, elections, criminal justice, litigation, and elsewhere.

President Joe Biden issued a sweeping executive order in October targeting government and private-sector use of AI, and the US Equal Employment Opportunity Commission has made clear its enforcement of workplace discrimination laws will apply to alleged bias by automated tech tools.

The New York legislation goes even further by proposing to limit employers’ tech-enabled surveillance of employees, an area of workplace technology use that most state proposals don’t address.

“We see this really as the strongest bill out there this session among all states that addresses bossware and electronic monitoring and also the hiring systems, automated employment decision systems,” Irene Tung, a senior researcher and policy analyst at the National Employment Law Project, said of S7623.

Uncertain Fate

Groups such as TechNet and the Business Council of New York, which represents large corporations in the state, haven’t yet taken a formal position on the bills. But it’s early in the legislative session, and the bills aren’t likely to get committee hearings before April, after the annual budget process.

The proposals face an uncertain fate in the 2024 session, which ends in early June. The Senate has been more proactive than the Assembly over the past year in approving bills on tech topics like digital privacy.

Gov. Kathy Hochul (D) has expressed openness to restricting AI tools, including proposals aimed at deepfakes in her state budget plan. She also gave state agencies her blessing to procure and use AI tools within certain limits and announced state investment in an Empire AI Consortium aimed at bolstering research, innovation, and job creation in New York.

Assemblymember George Alvarez (D), who’s sponsoring A9315, said the goal of the legislation isn’t to block employers’ use of AI tools, but to prevent discrimination and other abuses.

“Basically, we’re telling the company ‘Hey, nothing against that, use it,’” he said. “That’s the future. However, we want you to release an audit report every single year of how you use it.”

A couple of states have enacted narrower measures that focus on employers’ use of AI in hiring. An Illinois law restricts the use of AI to analyze video interviews submitted by job applicants, while Maryland bans facial recognition software in job interviews without a signed waiver from the candidate.

Worker Focus

The New York state legislation and similar bills pending in Massachusetts (H.1873) and Vermont (H.114) would ensure stronger worker protections than those proposed in California (AB 2930), Connecticut (SB 2), and Washington state (HB 1951), said Matt Scherer, senior policy counsel at the Center for Democracy & Technology.

That’s partly because the New York state approach targets employment-related bias specifically, rather than addressing a wide range of AI decision-making tools in other settings.

The New York bills also would require companies to conduct bias audits and provide transparency notices to workers for automated systems that assist human managers in making employment decisions, not just tools that serve as a “controlling factor” in decision-making, as California’s bill and similar measures provide. An alternate New York bill (S5641) likewise limits the auditing and transparency requirements to automated systems that act as a “controlling factor” in decisions.

The “controlling factor” requirement is similar to but narrower than New York City’s ordinance and related regulations, which say automated decision tools are covered if they play a dominant role in the decision, Scherer said.

“The problem with that language is every single vendor says our tools are not designed to be used on their own. They’re designed to be used to assist humans in making their decisions,” he said. For employers that want to avoid the legal requirements, “all they have to do is have a human rubber stamp the decision, and then voila the tool isn’t covered by the bill.”

Workday helped craft and has publicly endorsed California’s AB 2930.

The California bill and similar measures propose “meaningful regulation that allows room for innovation,” using an approach that focuses on regulating high-risk technology uses, the Workday spokesperson said in an email.

“Risk-based approaches include defining the types of decisions in which AI tools are being applied as well as the level of automation in AI-assisted decision making,” the spokesperson said. “For example, the EU considers high-risk tools those that materially influence outcomes and recent California Privacy Protection Agency draft regulations suggest focusing on AI tools that are a key factor in human decision making.”

Workday won dismissal of a proposed class action in January alleging the company’s job candidate screening tools lead to discrimination against Black and older workers, though the plaintiff was allowed to amend his complaint. The company acknowledged in its latest annual report to shareholders that similar litigation could pose a business and legal risk going forward.

Some business groups, including tech association Chamber of Progress, are urging state lawmakers to focus on risks in government use of AI first, like Connecticut’s new law (SB 1103) that requires impact assessments for automated tools used by its judicial and executive branches.

In the meantime, these groups suggest that states go slow on regulating the private sector, waiting to see how the technology develops and what approach the federal government and European Union take.

“It’s such a rapidly developing space, we don’t want to foreclose innovation and the equitable proliferation of AI’s benefits,” said Todd O’Boyle, senior director of technology policy at the Chamber of Progress.

NYC ‘Loophole’

The New York City law has come under scrutiny for not going far enough, with some critics suggesting it might not be achieving its intended purpose of preventing algorithmic discrimination in hiring.

The law requires employers to conduct an annual bias audit on automated technology tools they use in making hiring and promotion decisions, publish the results on their company websites, and provide notice to job candidates about the automated decision tools they’re using.

Research from Cornell University found that only 5% of a sample of 267 New York City employers publicly posted the results of a bias audit for automated decision systems. The researchers said the law gives employers broad discretion to decide whether the audit and transparency requirements apply to the AI systems they use, making it impossible to know how many companies are noncompliant and how many deemed themselves exempt entirely.

Enforcement of the city’s automated employment decision law is dependent on complaints from the public, and the city hasn’t received any about alleged violations, said Stephany Vasquez Sanchez, a spokeswoman for the city’s Department of Consumer and Worker Protection.

“It’s an ‘I hate to say I told you so’ thing about the New York City law,” Scherer said, pinning the blame largely on the way the department crafted the final rule implementing the law.

“The way they interpreted it, it has a lot of similarities to the ‘controlling factor’ language in these other bills,” he said. “It created loopholes that you could drive a truck through.”

To contact the reporters on this story: Chris Marr in Atlanta at cmarr@bloombergindustry.com; Zach Williams at zwilliams@bloombergindustry.com

To contact the editor responsible for this story: Laura D. Francis at lfrancis@bloomberglaw.com
