Chatbot developers are facing pressure from state lawmakers and consumer advocates to limit the technology’s potential harm amid a swell of private litigation and legislative activity.
Policymakers on both sides of the aisle have been eyeing increased regulation of children’s online safety and privacy as lawsuits mount over teens who died by suicide or experienced severe emotional distress after prolonged interactions with Character.AI and ChatGPT bots.
Technology and privacy attorneys warn state enforcement actions are expected to ramp up and are just the start of efforts aimed at emerging technology that isn’t squarely covered by existing law.
Kentucky Attorney General Russell Coleman (R) last month was the first to bring a lawsuit on behalf of a state against Character.AI, alleging the app exposes children to sexual conduct, exploitation, and substance abuse.
It won’t be the last, said Joanna Forster, a partner with Crowell & Moring LLP. “I think you’re going to see in the months to come a kind of crescendo effect,” she said.
Tech companies, meanwhile, are on the defensive.
Google LLC and Character.AI settled suits in January, and both Meta Platforms Inc. and Character.AI have banned minors’ access to their AI chatbots.
“Youth protection issues are going to have the most pronounced effect on the chatbot industry,” said Kristen Mathews, a partner at Cooley LLP. “It’s resonating most with people; it has the biggest downside,” she said.
‘Seminal’ Moment
Despite the attention, chatbot legislation is still in its early stages. While definitions vary, an AI chatbot is broadly understood as a computer program that uses artificial intelligence to generate human-like conversation and provide contextually relevant responses.
California and New York, which enacted chatbot safety legislation last year, define “AI companions” as systems that simulate a continuous human-like relationship by asking about and remembering past details. Both states require companies to disclose that users are interacting with a machine and to implement safety protocols to prevent user harm. California also adds protections for minors and auditing requirements.
Nevada and Utah separately enacted laws regulating the use of AI chatbots for mental health.
States started signaling an appetite for greater chatbot enforcement last summer when the National Association of Attorneys General sent a letter to technology companies promising to protect children from “predatory artificial intelligence products.”
The NAAG followed up in December with another letter demanding chatbot developers adopt policies to counteract outputs that encourage user delusions.
The letter was a “seminal” moment because it indicated that more than 40 states are looking at chatbot issues and expecting remediation measures, Forster said.
“Companies should know what their bots can do, what their design features are, and they should know whether or not they maintain ongoing conversations with customers over long periods of time,” said Matthew Ferraro, a partner at Crowell & Moring LLP.
Aside from new legislation, attorneys say companies should be cognizant of how states could use existing consumer protection and privacy laws to bring enforcement actions.
These anticipated actions are setting the scene for privacy and technology regulation as a whole, said Tatiana Rice, senior director for US legislation at the Future of Privacy Forum. The organization is already tracking approximately 70 chatbot bills, Rice said.
“We’re entering this new phase where a lot more entities need to pay attention to not only the new laws that are coming onto the books, but also the enforcement actions and how the laws are getting interpreted,” she said.
Broader Question
The Kentucky AG’s lawsuit relies on data privacy and unfair and deceptive acts and practices laws, but the litigation touches on the broader question of chatbots’ potential psychological harms and whether and how states should adopt new laws that address them directly.
State legislatures are reaching consensus on the need for additional chatbot protections, but there isn’t agreement on the best way to do that, said Laura VanDruff, a partner at Kelley Drye & Warren LLP.
Advocacy groups and the industry alike have been jockeying to shape any regulation that does emerge.
Kids safety group Common Sense Media and OpenAI recently paired up on a California ballot initiative that would require chatbot companies to introduce protections for users under 18, including prohibiting child-targeted advertising and the sales of kids’ and teens’ data. OpenAI didn’t respond to a request for comment.
Critics, who see the initiative as a way to steer away from stricter state requirements, are pushing for broader legislation.
A coalition of more than 70 consumer protection, digital rights, and sexual violence prevention organizations has endorsed the People-First Model Chatbot Bill authored by the Consumer Federation of America, the Electronic Privacy Information Center, and Fairplay.
The model legislation would categorize chatbots as products, create liability standards, and allow a state attorney general to create safety rules.
It’s designed to set “clear rules of the road” for what chatbot companies and those that use the technology can do, said Ben Winters, director of AI and data privacy at CFA.
“Simply, if you don’t take people’s data and sell it—and if you don’t deliver targeted advertising based off the input data—then you don’t have anything to worry about,” he said.
Arizona and Vermont lawmakers have introduced legislation based on the model, and the groups behind it are also engaging with lawmakers in Maryland, Kansas, Illinois, Louisiana, Oregon, Texas, South Carolina, and Utah.
“I think you’re going to see in this legislative session that more and more chatbot laws are going to give rights to the AG or private causes of action to pursue violations of chatbot law” that aren’t just using other data privacy and consumer protection theories, Forster said. The issues the technology raises “sit at the cross section of a lot of different doctrinal concerns.”
If you or someone you know needs help, call or text the Suicide & Crisis Lifeline at 988.