New York AI Consumer Protection Bill Puts Process Over Substance

Feb. 19, 2025, 9:30 AM UTC

The New York Artificial Intelligence Consumer Protection Act, pending before the Assembly (A 768) and Senate (SB 1962), puts process before substance and, in doing so, risks derailing New York's progress toward becoming a leader in AI now and for the foreseeable future.

New York has a legacy of embracing innovation and, in doing so, expanding opportunity for the rest of the country. From the Erie Canal to the Niagara Falls Power Project, the state has long adapted to technological change by investing in its development and accelerating its adoption.

That approach in many ways aligns with the growing abundance movement. It’s an approach state lawmakers should apply when evaluating various proposals to regulate AI this session.

Abundance calls for aligning procedure with substantive ends. For too long, the former has undermined the latter. Excessive housing regulations, for instance, have curtailed the supply of housing in New York and around the country. New York is at risk of repeating that disconnect in the context of AI regulations.

The AI Consumer Protection Act contains well-intentioned procedural safeguards that will nevertheless become hurdles to AI development.

It seeks to prevent AI from perpetuating discrimination against protected classes, a worthy and important goal. To that end, the bill would require developers of high-risk AI systems to comply with numerous requirements.

To name a few, developers would need to:

  • Evaluate and mitigate biases in the system that may result in discriminatory outputs.
  • Use reasonable care to shield consumers from known or reasonably foreseeable risks of algorithmic discrimination.
  • Allow an independent third party to analyze the system, which would trigger a rebuttable presumption that the developer used reasonable care to protect consumers in the event of an enforcement action.

On paper, these procedural checks seem benign and easily satisfied. In practice, however, each requirement will impose substantial limitations on the ability of developers to continue their research and development. Computer scientists debate the technical feasibility of eliminating bias, and some argue that attempts to do so may do more harm than good. Efforts to de-bias training data, for instance, often have unpredictable effects on subsequent model training, as the sketch below illustrates.
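To see why de-biasing is harder than it sounds, consider a minimal sketch in Python. The data and the de-biasing approach are hypothetical, not anything the bill prescribes: a developer drops the protected attribute from the training data, but a correlated proxy feature (think ZIP code) lets the model reconstruct it, and the disparity persists.

```python
# A hypothetical sketch of one naive de-biasing attempt: remove the
# protected attribute from training, but leave a correlated proxy in.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Simulated data: a 0/1 protected attribute, a proxy that matches it
# 90% of the time, and an outcome historically tilted by the attribute.
protected = rng.integers(0, 2, n)
proxy = np.where(rng.random(n) < 0.9, protected, 1 - protected)
skill = rng.normal(size=n)
outcome = (skill + 0.8 * protected
           + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

# "De-biased" model: protected attribute excluded, proxy retained.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, outcome)
pred = model.predict(X)

# The gap in positive prediction rates across groups persists, because
# the model recovers the protected attribute from the proxy.
for g in (0, 1):
    rate = pred[protected == g].mean()
    print(f"group {g}: positive prediction rate = {rate:.2f}")
```

Real systems are messier than this toy example, which is precisely the problem: the proxies are neither known in advance nor easily removed, so a mandate to "evaluate and mitigate" bias has no settled procedure behind it.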

Legislators may be sending developers on an impossible and expensive wild goose chase if they pass this bill. Training a model to avoid certain topics, for example, could exacerbate its tendency to generate erroneous outputs.

Calls on developers to take “reasonable care” may likewise result in net-negative outcomes, because ambiguity could lead to confusing, shifting guidelines and arbitrary enforcement.

Auditing requirements presuppose a ready supply of qualified, independent third parties. There’s reason to think that no such supply exists because of the pre-existing ties between would-be auditors and AI labs. A global shortage of AI talent combined with a finite number of common funders for AI projects has forged connections between many AI research outfits and leading AI developers.

Perhaps most importantly, New York already has laws on the books to shield consumers from the harms supposedly animating this act. Companies may not engage in unfair or deceptive practices. The New York State Human Rights Law prohibits discrimination on the basis of protected classes in a wide-ranging set of circumstances. Consumers may also turn to case law setting forth protections against defective products. Attorneys general in other states have outlined entire lists of existing laws for which there is no AI exception. In New York, too, the law is the law.

Rather than try to develop precise regulations for evolving technology, legislators should lean into policies that align processes with positive outcomes. They should consider following the lead of Oklahoma, whose partnership with Google will provide 10,000 residents with an AI education that will help them harness AI’s potential, recognize its limitations, and spot its biases.

To the extent New York lawmakers feel that a similar educational effort wouldn't go far enough to reduce the odds of biased AI systems imperiling the public good, they should allocate more funds to the attorney general's office to bolster enforcement of existing laws.

As with zoning regulations and permitting processes for energy projects, the motives behind the AI Consumer Protection Act are likely good. Good intentions, though, won’t put “innovation, research and technology at the forefront” of New York’s AI investments, despite Gov. Kathy Hochul’s proclamation.

This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.

Author Information

Kevin Frazier is an affiliated scholar of emerging technology and the law at St. Thomas University and an adjunct professor at Delaware Law School.


To contact the editors responsible for this story: Rebecca Baker at rbaker@bloombergindustry.com; Jada Chin at jchin@bloombergindustry.com
