The Federal Trade Commission is changing the game for artificial intelligence companies. On Sept. 11, the FTC issued orders to seven tech giants, probing the unique risks posed by AI chatbot companions—a subset of emotional AI, which measures, understands, simulates, and reacts to human emotions.
The FTC’s inquiry directs Alphabet, Instagram, Meta, OpenAI, Snap, xAI, and Character Technologies to disclose information on their AI companion safety measures. Given the shared risks of emotional manipulation, data privacy violations, and algorithmic bias across these applications, companies operating in any area of emotional AI should take the FTC’s inquiry seriously as a signal of increased regulatory scrutiny.
AI Companions, Lawsuits, and State Laws
An AI companion is an application, often a chatbot or virtual character, that simulates human-like interaction as a friend, romantic partner, support tool, or entertainer. These apps saw an 88% increase in downloads in the first half of 2025. According to the Harvard Business Review, companionship now surpasses productivity and search as the primary use of AI.
In recent years, tragic suicides and violent acts involving AI chatbots have led to lawsuits, with families alleging the chatbots manipulated vulnerable users’ emotions, worsened their mental health, and even encouraged suicide.
In Garcia v. Character Technologies, Inc., a Florida federal court allowed the plaintiff to proceed with a product liability claim alleging that Character.AI, by creating an AI companion product, owed a duty of care given the foreseeable risk of harm but failed to take adequate precautions. The court also permitted the claim that Character.AI engaged in deceptive practices by designing chatbots that misled users, especially minors, into believing they were real people or licensed mental health professionals.
To address these concerns, New York enacted the first law in the US mandating safeguards for AI companions, effective Nov. 5. This law requires operators of AI companions to establish protocols for detecting and addressing user expressions of suicidal ideation or self-harm, including referrals to crisis services, and mandates disclosure of the AI’s non-human nature.
The New York Attorney General may seek civil penalties of up to $15,000 per day and injunctive relief. On Sept. 11, the California legislature passed a similar bill, Senate Bill 243, which is awaiting Democratic Gov. Gavin Newsom’s signature.
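For operators wondering what such protocols might look like in practice, the sketch below is a minimal, illustrative Python example of the kinds of guardrails the New York law contemplates: screening user messages for expressions of self-harm, referring flagged users to crisis services, and periodically re-disclosing the AI’s non-human nature. The keyword check, referral text, and disclosure cadence are assumptions standing in for a validated classifier and counsel-reviewed language; this is not a statement of what the statute requires.

```python
# Minimal illustration of AI companion safeguards of the kind New York's law
# contemplates: detect expressions of self-harm, refer the user to crisis
# services, and periodically disclose the AI's non-human nature. The keyword
# check is a placeholder for a clinically validated classifier; the cadence,
# wording, and escalation path are assumptions, not statutory requirements.

CRISIS_REFERRAL = (
    "It sounds like you may be going through something serious. "
    "You can reach the 988 Suicide & Crisis Lifeline by calling or texting 988."
)
NON_HUMAN_DISCLOSURE = "Reminder: I'm an AI companion, not a human or a licensed professional."
SELF_HARM_CUES = ("kill myself", "end my life", "hurt myself", "suicide")  # placeholder only


def flags_self_harm(message: str) -> bool:
    """Stand-in for a dedicated self-harm detection model."""
    text = message.lower()
    return any(cue in text for cue in SELF_HARM_CUES)


def handle_turn(user_message: str, turn_count: int, generate_reply) -> str:
    """Apply the safety protocol before returning a companion reply."""
    if flags_self_harm(user_message):
        # Escalate per internal protocol (logging, human review) and refer out.
        return CRISIS_REFERRAL
    reply = generate_reply(user_message)
    # Re-disclose the AI's non-human nature at a fixed cadence (assumed: every 10 turns).
    if turn_count % 10 == 0:
        reply = f"{NON_HUMAN_DISCLOSURE}\n\n{reply}"
    return reply


if __name__ == "__main__":
    print(handle_turn("Lately I've wanted to end my life", turn_count=3, generate_reply=lambda m: "..."))
```

The actual statutory obligations turn on definitions and protocols well beyond this snippet; the point is that detection, referral, and disclosure are implementable checkpoints in the chat pipeline.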
The FTC’s Inquiry
Against this backdrop, on Sept. 11, the FTC ordered these seven companies to disclose information on their AI companion safety measures. The agency seeks to understand how companies limit use by and harm to minors, how they monetize user engagement, and how they process user inputs to generate outputs. It also asked about the development and approval of characters and the methods used to measure, test, and monitor negative impacts, especially on children, both before and after deployment.
Additionally, the inquiry examines companies’ disclosures to users and parents about capabilities, risks, and data practices, as well as compliance monitoring and the handling of personal data from user conversations.
Historically, the FTC has relied on Section 5 of the FTC Act, which prohibits unfair or deceptive acts or practices, to stop such conduct in AI tools, particularly chatbots designed to manipulate users’ beliefs and emotions. The agency has warned against automation bias and anthropomorphism in AI systems, especially in critical areas such as finance, health, and employment.
This inquiry suggests the FTC is moving beyond reactive enforcement, recognizing that case-by-case action may be inadequate to protect consumers against sophisticated AI manipulation. While the focus is on protecting minors, potential outcomes, such as new rules for AI companions enforceable through civil penalties and consumer redress, will likely also affect AI companies serving adult users.
Companies operating in any area of emotional AI, such as mental health apps, emotion-adaptive learning tools, emotion-targeting marketing tools, or social media engagement platforms, share the common risks of emotional manipulation, privacy violations, and bias. They should therefore take the FTC’s inquiry seriously as an indication that heightened enforcement is coming.
Compliance Strategies
In light of the FTC’s inquiry, New York’s AI companion law, and the recent court cases, companies providing emotional AI should adopt the following key measures now to mitigate government enforcement risks and potential class actions.
- Clearly disclose the AI’s non-human nature, capabilities, limitations, and potential emotional risks. Provide crisis resource information.
- Design business models and premium features that avoid exploiting users’ emotional vulnerabilities or mental health conditions.
- Establish rigorous pre-deployment safety assessments and ongoing, automated monitoring with escalation procedures for risks of self-harm, violence, emotional dependency, or manipulative AI responses.
- Implement robust age verification and enhanced safeguards for minors, including parental notifications and monitoring, restricted conversation topics, and safety interventions (a minimal illustrative sketch follows this list).
- Implement a strong data governance program, minimizing data collection, adopting strong security measures, establishing clear deletion policies, and providing users with opt-in consent for sensitive data.
- Audit AI characters regularly, consult mental health experts, and remove traits that encourage unhealthy emotional dependency.
- Establish a robust AI governance system, maintain detailed documentation, train staff on compliance requirements, and set up incident response procedures.
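As an illustration of how the age-verification and minor-safeguard measures above might be translated into engineering controls, the sketch below gates restricted conversation topics for users who are, or may be, minors and triggers a parental notification hook. It is a minimal sketch under stated assumptions; the 18-year threshold, restricted-topic list, and notification mechanism are illustrative placeholders, not a compliance standard.

```python
# Illustrative sketch of age-gated safeguards for minors, as referenced in the
# list above. The age threshold, the restricted-topic list, and the parental
# notification hook are assumptions; real deployments would pair verified age
# signals with legally and clinically reviewed policies.
from dataclasses import dataclass

RESTRICTED_TOPICS_FOR_MINORS = {"romance", "self_harm", "violence"}  # illustrative only


@dataclass
class UserProfile:
    user_id: str
    verified_age: int | None  # None if age could not be verified
    parent_contact: str | None = None


def is_minor(profile: UserProfile) -> bool:
    # Treat users whose age cannot be verified conservatively, as if they were minors.
    return profile.verified_age is None or profile.verified_age < 18


def notify_parent(profile: UserProfile, topic: str) -> None:
    # Placeholder for a parental notification/monitoring integration.
    if profile.parent_contact:
        print(f"[notice to {profile.parent_contact}] restricted topic requested: {topic}")


def apply_minor_safeguards(profile: UserProfile, topic: str) -> str:
    """Return 'allow' or 'block' for a conversation topic, per the assumed policy."""
    if is_minor(profile) and topic in RESTRICTED_TOPICS_FOR_MINORS:
        notify_parent(profile, topic)
        return "block"
    return "allow"


if __name__ == "__main__":
    teen = UserProfile(user_id="u1", verified_age=15, parent_contact="parent@example.com")
    print(apply_minor_safeguards(teen, "romance"))  # -> block
```

In practice, the topic label would come from a content classifier rather than a hard-coded string, and blocked requests would route to the safety interventions described above.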
These measures address the key emotional manipulation and privacy concerns rather than providing an exhaustive compliance framework. Companies should consult AI attorneys for comprehensive guidance.
The FTC’s focus on AI chatbots is likely just the beginning. Recent congressional hearings and risks posed by the broader emotional AI ecosystem suggest companies should expect new legislation, government enforcement, and private lawsuits.
Navigating this landscape isn’t only a compliance matter, but also a strategic imperative that separates industry leaders from those left behind.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law, Bloomberg Tax, and Bloomberg Government, or its owners.
Author Information
Lena Kempe is principal attorney at LK Law Firm. With over 20 years of legal experience in law firms and companies, including general counsel roles, Lena provides strategic guidance on AI, IT, IP, and data privacy.