New York’s Bid to Ban AI Chatbot Legal Advice Has Serious Flaws

March 13, 2026, 8:30 AM UTC

New York lawmakers are rightly concerned about artificial intelligence that falsely presents itself as a licensed professional. If a chatbot claims to be a lawyer, fabricates a bar number, and gives harmful legal advice, that presents a genuine consumer-protection issue.

Senate Bill S7263 purports to address that problem by imposing liability for damages caused by a chatbot “impersonating certain licensed professionals.” But as drafted, the bill creates a vague and expansive liability regime that reaches far beyond impersonation and leaves courts to guess where lawful AI-generated information ends and unauthorized professional practice begins.

The problem is evident in the bill’s operative provision. Subsection 2(a) prohibits a chatbot proprietor from “permit[ting]” a chatbot to provide any “substantive response, information, or advice,” or to take any action that, if “taken by a natural person,” would violate Article 15 of the New York Judiciary Law, which governs unauthorized legal practice.

The bill then authorizes private suits for actual damages and, in cases of willful violations, attorneys’ fees and costs. That architecture sounds targeted. It’s not.

The bill’s most serious flaw is that it’s framed as an anti-impersonation measure, as reflected in its stated purpose of addressing chatbots “impersonating certain licensed professionals.” But the operative text reaches far beyond impersonation.

It imposes liability without requiring any false claim of licensure, any fake credential, any false holding-out as a lawyer or doctor, or even any concealment that the system is AI. Put differently, the bill is presented as a narrow response to deceptive professional impersonation but drafted as a much broader restriction on chatbot outputs.

The bill’s disclaimer provision underscores that point. It expressly states that liability can’t be avoided by notifying users that they are interacting with a non-human chatbot. Even a system that clearly discloses it isn’t a lawyer, and therefore isn’t pretending to be one, could face liability if a court later characterizes its output as “substantive” legal information or advice.

That isn’t a narrowly tailored anti-fraud measure. It’s a broad and ill-defined speech restriction masquerading as an anti-impersonation law.

In practice, that overreach matters. To avoid liability, chatbot providers likely will disable or limit features that come close to legal assistance. The burden would fall hardest on pro se litigants and others who already struggle to obtain affordable legal help. At the same time, it could be a windfall for lawyers by suppressing lower-cost informational tools. That isn’t a sound policy result.

The bill also fails to distinguish between consumer-facing chatbots and tools supervised by licensed professionals. That’s a serious flaw. A chatbot marketed directly to a layperson as a substitute for legal guidance presents a different risk than an AI tool a lawyer uses as an internal aid while retaining final decision-making authority.

By treating both contexts the same, the bill creates uncertainty for ordinary professional workflows and may push vendors to disable useful features even if the AI is only assisting, not replacing, licensed judgment.

Beyond the bill’s misaligned policy effects, its operative statutory mechanism is incoherent and unworkable. The bill prohibits proprietors from allowing chatbots to generate outputs or take actions that, if performed by a natural person, would violate Article 15 governing unauthorized legal practice.

Within Article 15, Section 478 governs the unauthorized practice of law. It makes it unlawful for “any natural person” to practice or appear as an attorney without admission and registration, to furnish counsel, to render legal services, or to hold oneself out as entitled to practice.

That creates a fundamental drafting problem. S7263 doesn’t actually amend Section 478 to address AI systems, nor does it create a tailored standard for when chatbot outputs cross the line into unauthorized legal practice. Instead, it imports a human licensing rule applicable only to “natural persons” into the AI context by analogy.

It asks courts to apply a hypothetical: Would this output violate Section 478 if a chatbot were a natural person? That isn’t a workable statutory framework. It leaves the core rule dependent on a legal fiction rather than a clear substantive standard.

If the legislature intends to regulate AI-generated legal advice, it should do so directly by amending the law to address AI systems on their own terms, not by forcing courts to pretend that a machine is a human lawyer for purposes of unauthorized-practice doctrine.

That leads to the bill’s second major drafting flaw—vagueness. The statute prohibits any “substantive response, information, or advice,” yet defines none of those terms. It likewise fails to define what it means for a proprietor to “permit” such a response.

Does that mean strict liability for any model output? Does it require knowledge, negligence, reckless failure to implement safeguards, or deliberate design? The bill doesn’t say.

Nor does the bill define what it means for a chatbot to “take any action” in a software context. These are the statute’s core operative terms, yet the bill leaves them open-ended while giving the general public a private right to sue over “information” and “advice,” not just false claims of licensure or fraud. That invites opportunistic litigation, inconsistent enforcement, and serious First Amendment challenges.

For all its breadth, the bill is underinclusive where it matters most, reflecting a basic misunderstanding of how generative AI systems are built and controlled. Its definition of “proprietor” excludes third-party developers that license chatbot technology to deployers, including the developers behind frontier systems such as ChatGPT.

The result is that liability can fall on the user-facing downstream entity, while the upstream developer, which often controls the model’s architecture, training, safety layers, and core output behavior, is exempt.

There is a better path. If lawmakers want to address real harms, they should draft a narrow false-representation statute. Such a law could prohibit chatbots from falsely claiming licensure, falsely asserting human professional review, or using protected titles in ways likely to mislead consumers.

S7263 doesn’t do that. It’s not a coherent anti-impersonation law. It’s a vague, overbroad, and poorly structured liability regime that should be rewritten, not enacted.

This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law, Bloomberg Tax, and Bloomberg Government, or its owners.

Author Information

Oliver Roberts is an adjunct professor of law at Washington University in St. Louis School of Law, co-head of Holtzman Vogel’s AI practice group, and founder and CEO of Wickard.ai.


To contact the editors responsible for this story: Jada Chin at jchin@bloombergindustry.com; Rebecca Baker at rbaker@bloombergindustry.com
