AI chatbots have exploded across industries, from customer support to mental health coaching, gaming, and more. Alongside this rapid growth, some cases have illustrated how chatbots can contribute to harm to both minors and adults.
Although a federal bill was recently introduced to address some of these risks, its future remains uncertain, and in the meantime companies will likely need to navigate a patchwork of state legislation. Lawmakers in states such as California and New York are stepping in to fill the regulatory vacuum left by the absence of federal AI rules.
Companies should prepare for emerging and potentially divergent regulations across the country to effectively navigate this evolving legal landscape.
States seek to regulate human-like chatbots. AI chatbots are software agents that interact through text or voice, ranging from simple scripted customer-service bots to advanced generative AI systems capable of free-form conversation. These systems leverage large language models, memory, and affective cues to simulate sustained human relationships.
As these systems increasingly blur the line between tool and companion, lawmakers are crafting new regulations to address the psychological and social risks posed by chatbots capable of forming human-like interactions.
California’s SB 243, effective January 2026, applies to “companion chatbots,” defined as AI systems with natural language interfaces that provide adaptive, human-like responses to meet a user’s social needs, including by exhibiting anthropomorphic features and sustaining a relationship across multiple interactions.
The law creates significant litigation risk because it includes a private right of action, permitting individuals to sue for violations and recover either actual damages or statutory damages of $1,000 per violation. Covered entities must disclose that users are interacting with AI if a reasonable person would otherwise be misled into believing they are interacting with a human, maintain and publish a crisis response protocol, and file annual safety reports with the Office of Suicide Prevention.
For users known to be minors, operators must also issue reminders every three hours that the chatbot isn’t human, encourage breaks during continuing interactions, and implement reasonable measures to prevent the generation of sexually explicit content. The law excludes bots used solely for operational utility, certain in-game characters limited to game-related dialogue, and basic voice assistants.
New York’s AI companion law, effective in November, applies to “AI companions” that simulate sustained human relationships through a combination of features. To fall within its scope, a system must retain information from prior interactions to personalize engagement, initiate unprompted emotion-based questions beyond direct user prompts, and sustain ongoing dialogue on personal matters.
Covered systems must have protocols to detect signs of self-harm and refer users to crisis services. They must also notify users, at least once per day and every three hours during ongoing interactions, that they are engaging with AI, not a human. Violations can result in penalties of up to $15,000 per day.
Other states are pursuing similar laws. Massachusetts proposed a law requiring chatbot disclosures and granting legal force to chatbot interactions, treating them as equivalent to statements made by human agents. Maine enacted a law prohibiting the use of AI chatbots in trade or commerce that could mislead consumers into thinking they are interacting with a human, unless a clear and conspicuous disclosure is provided.
Determining which chatbots are covered. Whether a given product falls within the scope of any of these laws may not be apparent, and many companies may not realize their products meet the applicable definitions.
For example, gaming studios increasingly use large language models to power non-player characters, or NPCs, that remember player choices, express simulated emotions, and engage in open-ended conversations. While California’s law excludes certain in-game bots, the exclusion applies only if the character can’t discuss mental health, self-harm, or sexually explicit conduct, or maintain a dialogue on other topics unrelated to the video game.
If an NPC engages in emotionally rich dialogue or builds trust over time, it may still be regulated. New York’s law contains no such carve-out.
Other edge cases include virtual influencers that build parasocial relationships with fans, wellness apps that encourage self-reflection, and language learning platforms that use emotionally aware chatbots to boost engagement. These systems aren’t designated as therapy tools and may be considered out of scope of laws that directly target chatbots used for mental health and therapy interactions. Still, their ability to simulate emotional relationships and sustain personalized dialogue could bring them in scope of California or New York laws.
Earlier legislative efforts were even broader. California’s AB 1064, vetoed by Gov. Gavin Newsom (D), would have banned companion chatbots for minors unless they were “not foreseeably capable” of harmful conduct. Newsom rejected the bill as overly expansive, warning it could effectively ban most AI tools for young users. Still, the bill’s introduction and passage signal where future legislation may be headed.
These laws reflect a growing effort to regulate chatbots that simulate human-like relationships and could mislead users into believing they are interacting with a real person. While the laws aim to protect users from harm, their scope may be overly broad. This could unintentionally subject a wide range of AI systems to compliance requirements, even those not designed to foster emotional engagement.
Legislators appear to be narrowing the scope, but further refinement may be needed to avoid overreach and unintended results.
Preparing for a patchwork of AI chatbot laws. Businesses should consider that any chatbot with memory, emotional interaction, or sustained dialogue could be in scope, even if it isn’t marketed as a companion. To assess whether a system is likely covered by these laws, companies should review its purpose, persona, memory, content scope, user base, safeguards, and device form factor.
Disclosures must be clear and provided at the required intervals (e.g., every three hours in New York). Systems should include classifiers to flag self-harm indicators and refer users to crisis services, with protocols documented for audits and regulatory inquiries. Companies can further mitigate risk by running regular red-team tests to identify harmful responses and by preserving interaction records to support future compliance reviews.
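To illustrate how these operational measures might translate into product logic, the sketch below shows one way a chatbot service could schedule AI-disclosure reminders at a fixed interval, route messages flagged by a self-harm classifier to a crisis-referral protocol, and preserve an audit record. It is a minimal sketch under assumed parameters (the interval, the risk threshold, and the classifier interface are all hypothetical), not legal guidance or a definitive implementation of any statute’s requirements.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical compliance parameters; actual cadences and thresholds must come
# from counsel's reading of the applicable statute (e.g., a three-hour reminder
# cadence for ongoing interactions) and from safety testing of the classifier.
DISCLOSURE_INTERVAL = timedelta(hours=3)
SELF_HARM_THRESHOLD = 0.8  # assumed score cutoff for a hypothetical classifier

CRISIS_REFERRAL = (
    "If you are in crisis, help is available. "
    "Please contact your local crisis line or emergency services."
)


class ComplianceLayer:
    """Wraps a chatbot session with disclosure reminders and crisis routing."""

    def __init__(self, classifier):
        self.classifier = classifier   # callable returning a 0-1 self-harm risk score
        self.last_disclosure = None    # timestamp of the last AI disclosure

    def disclosure_due(self, now=None):
        """True if the user has not been reminded within the required interval."""
        now = now or datetime.now(timezone.utc)
        return (
            self.last_disclosure is None
            or now - self.last_disclosure >= DISCLOSURE_INTERVAL
        )

    def process(self, user_message, bot_reply):
        """Attach required notices to the outgoing reply and return an audit record."""
        notices = []
        if self.disclosure_due():
            notices.append("Reminder: you are chatting with an AI, not a human.")
            self.last_disclosure = datetime.now(timezone.utc)

        # Route high-risk messages to the documented crisis response protocol.
        risk = self.classifier(user_message)
        if risk >= SELF_HARM_THRESHOLD:
            notices.append(CRISIS_REFERRAL)

        # Preserving this record supports audits and regulatory inquiries.
        audit_record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "risk_score": risk,
            "notices": notices,
        }
        return "\n\n".join(notices + [bot_reply]), audit_record


# Example usage with a stand-in classifier (replace with a real safety model):
layer = ComplianceLayer(classifier=lambda text: 0.0)
reply, record = layer.process("Hi there", "Hello! How can I help today?")
```

Whether a simple threshold, a dedicated safety model, or a human escalation queue is the right design depends on the product, its user base, and the jurisdictions involved; the point is that disclosure timing, crisis routing, and record-keeping can be built into the system rather than bolted on after the fact.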
In business-to-business contexts, contracts should allocate compliance responsibilities clearly. Insurance coverage for chatbot-related liability, especially involving mental health, may become a necessary safeguard. Companies should also maintain internal documentation of chatbot features, intended use, and safeguards to support classification decisions and demonstrate good-faith compliance.
A patchwork of laws regulating AI chatbots is likely to emerge across the country, requiring companies to either tailor compliance programs to the most stringent requirements or explore geo-gating certain features and compliance measures.
Companies that audit their use of conversational AI and build compliance frameworks will be better positioned to adapt as the legal environment continues to evolve.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law, Bloomberg Tax, and Bloomberg Government, or its owners.
Author Information
Brandi Taylor and Melissa Fox are partners at Eversheds Sutherland.
Luisa Domenichini is an associate at Eversheds Sutherland.