The Bottom Line
- A recent state lawsuit against an AI chatbot service, combined with other state and federal inquiries, may signal a new wave of enforcement.
- Companies with AI chatbots should reassess risk exposure across operations.
- Some companies have already begun to change policies and practices to prepare for potential investigations.
In their escalating scrutiny of artificial intelligence chatbots, state attorneys general have moved from inquiries to enforcement. Kentucky’s lawsuit this year against Character Technologies Inc. over its service, Character.AI, marks the first state action against an AI chatbot.
The complaint asserts that Character.AI’s human-like design and allegedly inadequate safeguards exposed minors to physical and mental harms, violating state consumer-protection, privacy, and related laws. Taken with recent state AG letters and federal inquiries, this case signals a potential wave of enforcement using legal theories that other states can adopt.
In light of these developments, companies offering AI chatbots may want to reassess their risk exposure across design, marketing, and safety operations.
Kentucky claims that Character.AI, with over 20 million monthly users, uses a design that elicits emotional attachment and blurs the line between simulated and real relationships. The suit further alleges that Character.AI’s age-gating and content filters are ineffective or easily bypassed, exposing minors to hypersexualized interactions and exacerbating teen mental health issues.
The complaint highlights tragedies tied to the platform, including the suicides of a 14-year-old and a 13-year-old, alleging that Character.AI’s anthropomorphic chatbot characters encouraged delusions and harmful behavior while the platform failed to meaningfully intervene. The lawsuit also alleges material omissions and misrepresentations to parents and minors, such as assertions that the service is safe and age appropriate for minors, and the failure to disclose that chatbots could tell children they are real.
Kentucky is seeking a permanent injunction, civil penalties, and disgorgement of profits.
The case marks the latest step in years of attorney general scrutiny of AI chatbots and generative AI, which began soon after the technology rose to prominence:
- In September 2023, 54 AGs urged Congress to create a commission focused on AI-enabled child exploitation and extend child sexual abuse material prohibitions to AI-generated content.
- In August 2025, 44 AGs sent a letter to leading AI companies alleging that their chatbots were engaging in sexualized interactions with minors, normalizing eating disorders, and encouraging violence and drug use.
- A December 2025 letter from 42 AGs to Character Technologies and other AI companies demanded concrete safeguards against “sycophantic and delusional outputs” and warned of potential civil and criminal exposure.
- AG scrutiny has homed in on xAI and its chatbot, Grok, with California launching an investigation on Jan. 14, 2026, into the spread of “nonconsensual sexually explicit material” produced using Grok, followed shortly by a Jan. 23 letter from a group of 35 AGs demanding that xAI take stronger action to prevent such material.
In context, Kentucky’s complaint reads as a template for nationwide state enforcement. Other states can adapt its theories under their own consumer-protection statutes, privacy laws, and codes governing online services or products used by children.
Federal enforcement also is looming. The Federal Trade Commission opened an inquiry in September into the effects of AI chatbots on children, and a bill seeking to ban AI companions for minors was introduced in the US Senate in October. But state AGs have made clear they aren’t going to wait for Washington. In November, 36 AGs wrote to Congress to oppose a moratorium on state laws regulating AI.
The plaintiffs’ firm that represents Kentucky in its litigation played a lead role in the opioids and social media addiction litigation, which exposed companies to state AG enforcement nationwide. Kentucky’s case therefore offers a glimpse into the future of multistate enforcement against companies operating AI chatbots. We expect state AG chatbot enforcement to significantly ramp up in 2026.
Risk Areas
Kentucky’s lawsuit and state AG correspondence with legislators and AI companies highlight key risk areas for companies offering AI chatbots, particularly interactive, anthropomorphic chatbots such as those offered by Character.AI.
Interactions with minors: State AGs are focusing on minors’ ease of access to, and age-inappropriate interactions with, AI chatbots. The alleged intentional marketing of chatbots to minors is particularly concerning for AGs, considering the ways chatbots can be used to exploit minors and the “particularly intense impact” this technology has on still-developing adolescent brains.
For example, the Kentucky complaint details the ways in which minors using Character.AI’s service allegedly were exposed to highly sexualized conversations and roleplay with chatbots. Some minors allegedly expressed thoughts of self-harm and suicide, and the chatbots encouraged them to act on those thoughts. Others allegedly engaged with chatbots on topics such as illegal drug, substance, and alcohol use.
AGs also have raised concerns relating to the alleged use of AI chatbots to generate child sexual abuse material, as well as to collect, use, and monetize minors’ data.
Human-like design: The anthropomorphic, human-like design of these AI chatbots is at the forefront of AG concerns. The Kentucky complaint alleges that Character.AI’s chatbots were “intentionally modeled to simulate friendship, empathy, and trust.”
Minors are more vulnerable to this type of anthropomorphism, and the American Psychological Association warns that “adolescents are less likely than adults to question the accuracy and intent of information offered by a bot as compared with a human,” and are therefore more likely to have “heightened trust in, and susceptibility to, influence from” AI chatbots, “particularly those that present themselves as friends or mentors.”
A 2025 study by Common Sense Media found that 31% of teens find conversations with AI chatbots “as satisfying or more satisfying than those with real-life friends.”
Training and testing: Increased scrutiny of AI companies highlights the opacity of the training and testing processes AI chatbots undergo before coming to market. For example, Character.AI merely advises users that “Character.AI is a new product powered by our own deep learning models, including large language models, built and trained from the ground up with conversation in mind.”
The Kentucky complaint describes Character.AI’s alleged use of large language models “trained on vast, uncurated internet data sets” that create “the risk of producing harmful or adult content, particularly in the absence of rigorous content-moderation controls.” Likewise, the APA has found that AI chatbots may suffer from algorithmic bias, whether from “skewed training data, flawed model design, or unrepresentative development and testing teams.”
Monitoring and responsiveness: AGs have voiced concern about the lack of monitoring once this technology is made available to minors. The Kentucky complaint alleges that Character.AI’s chatbots lack warnings or safety disclosures and, in some instances, carry labels or information that is affirmatively misleading, such as labeling chatbots as “psychologists,” “therapists,” and “doctors.”
In some instances, the lack of monitoring became apparent only when it was too late. The Kentucky complaint cites a case in which a minor mentioned an intent to commit suicide upwards of 50 times, with no notification to her parents and no attempt to connect her with professional help or resources.
Looking Ahead
Some AI companies have already begun to change their policies and practices. For example, Character.AI announced in October that it would prohibit minor users from engaging with “open-ended chat with AI” on its platform and would implement new “age assurance functionality to help ensure users receive the right experience for their age.”
OpenAI announced in December the addition of new under-18 (U18) principles to its “Model Spec, the written set of rules, values, and behavioral expectations that guides” the behavior of its AI models (including ChatGPT) to dictate how those models “should provide a safe, age-appropriate experience for teens aged 13 to 17.”
Both companies announced they consulted with third-party organizations specializing in teen development and safety in developing these changes.
Litigation isn’t the only trend to watch. Around the time Kentucky filed its lawsuit, OpenAI and Common Sense Media reportedly reached a compromise on competing initiatives for a California ballot measure that would impose restrictions on AI chatbots. The draft measure apparently requires AI companies to “determine a user’s age,” “implement safeguards” for minors, and limit the sale of minors’ data.
That news followed California Gov. Gavin Newsom (D) signing a bill requiring providers of “companion chatbots” to warn users that the chatbot is artificially generated and to implement safety protocols designed to minimize mental health and suicide risks.
These developments—in California and elsewhere—suggest that formal oversight of AI’s impact on minors will only intensify in the coming years.
But as challenges abound, so do opportunities. The current landscape offers companies ample runway to demonstrate proactive, creative, and collaborative industry leadership on these high-profile and evolving issues and, in turn, to potentially minimize legal risk and strengthen their competitive advantage.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law, Bloomberg Tax, and Bloomberg Government, or its owners.
Author Information:
Daniel R. Suvor is co-chair of O’Melveny’s state attorneys general investigations and litigation group.
Lindsey Greer Dotson is a litigation partner at O’Melveny who led the Criminal Division of the US Attorney’s Office for the Central District of California.
Reema Shah, O’Melveny counsel Casey Matsumoto, and associate Ry Amidon contributed to this article.