Chatbot User’s Death Spurs Legal Query: What’s an AI ‘Product’?

Nov. 26, 2024, 9:45 AM UTC

Chatbots promising companionship with features that mimic human interaction are sparking a potential new body of law over what constitutes a design defect when it comes to generative AI.

Last month Character.AI, the maker of a customizable chatbot app, was sued after a 14-year-old user died by suicide. The boy’s mother, Megan Garcia, alleged that Character.AI marketed predatory artificial intelligence chatbots that encouraged suicidal ideation and sexually suggestive conversations.

“When you’re targeting kids in their pubescent years with highly sexualized material, that is a pernicious design defect,” said Matthew Bergman of the Social Media Victims Law Center.

Garcia’s first-of-its-kind lawsuit says the app is designed to blur the line between reality and fiction to convince users that the bots are human by presenting an “anthropomorphic user interface design,” and that the company chose to forgo safeguards for young users.

“The best you could say about it is that young people are developing social interactions with bots as opposed to with people at the precise time that they’re supposed to gain the social and intellectual skills they need to survive,” said Bergman, who’s representing Garcia.

Character.AI’s co-founders earlier this summer agreed to join Google LLC as part of a licensing deal valuing the company at $2.5 billion. Google is also a named defendant in Garcia’s suit.

“Our goal is to provide a space that is both engaging and safe for our community. We are always working toward achieving that balance, as are many companies using AI across the industry,” Character.AI said in a statement.

Google didn’t respond to a request for comment.

John Browning, a professor at Faulkner University’s Thomas Goode Jones School of Law, expects more lawsuits to follow.

“This technology is certainly in the crosshairs now,” said Browning, a retired Texas Court of Appeals justice. “We’ve also seen other indications that courts, and I think the public, want to rein in AI because the abuses or the consequences of the AI in terms of conduct really are fairly wide-reaching.”

Changing Definitions

Courts have hesitated to apply product liability principles to speech products, including software services, because of the First Amendment, said Kerry Maeve Sheehan, legal advocacy counsel for the tech industry group Chamber of Progress.

Previous product liability suits against tech companies failed based on legal definitions of “product” that exclude software algorithms, Browning said. But “our understandings of what fits neatly into the different boxes of product liability law are changing.”

The US Court of Appeals for the Third Circuit, for example, recently revived product liability claims against TikTok related to a viral challenge on the platform that prompted users to strangle themselves until they passed out, Browning noted.

The court ruled TikTok’s promotion of the challenge through its “For You Page” algorithm was “first-party speech” that isn’t protected by Section 230 of the Communications Decency Act, which immunizes platforms from civil liability stemming from user-generated content.

But even if courts determine that apps like Character.AI’s are “products,” it’s not yet settled what legal duty developers have to prevent foreseeable harms, said Brenda Leong, managing partner at Luminos.Law LLP, a firm that works with tech companies that build or use AI.

Debates over social media features designed to maximize user engagement, along with experience with AI systems so far, show that chatbot design choices have foreseeable impacts that could lead to emotional dependence, she said.

“Whether you decide that the law has a duty to prevent them or not is perhaps a more contentious or open question,” Leong said.

‘Phone Call With a Friend’

Character.AI, launched in September 2022, lets users create and message their own AI characters based on fictional media, celebrity likenesses, or wholly original concepts. The app also supports voice-enabled conversations that the company has likened to “having a phone call with a friend.”

The company claims it serves 20,000 queries per second, and Character.AI’s subreddit community has nearly 2 million members.

Character.AI announced earlier this month that it was launching a separate model with stricter guidelines for minors, as well as revised disclaimers reminding users that AI “is not a real person.”

“I knew it was a matter of time before we were going to see harms to kids coming from generative AI given how rapidly the technology is developing,” said Meetali Jain of the Tech Justice Law Project, who’s also representing Garcia.

The impacts are similar to the harms alleged in the sprawling social media addiction litigation, she said.

‘Premature’ Allegations?

But research on the physiological effects of these technologies isn’t settled, said Sheehan from the Chamber of Progress, which says it partners with tech giants like Google, Apple Inc., and Amazon.com Inc. to advance a progressive society.

“The science really hasn’t been decided on that issue, and it’s a little premature to be basing litigation or legislation on tertiary conclusions from that research,” Sheehan said. “We’re in kind of an era of a moral panic around technology, and particularly around adolescent use of technology.”

She compared Garcia’s product claims against Character.AI to a lawsuit against the publisher of a mushroom encyclopedia that the Ninth Circuit rejected, holding the publisher wasn’t liable after the plaintiffs got sick from eating mushrooms they foraged based on information in the book.

Additionally, Sheehan said the US Supreme Court’s decision in Moody v. NetChoice means a social media platform’s use of algorithms to curate user-generated content is expressive speech.

But Jain characterized the four concurring opinions in Moody as creating “possibilities for different First Amendment analysis when it comes to AI,” suggesting that judges may not be that quick to side with chatbot developers.

“A lot of the arguments for constricting those doctrines become ever the more powerful in the generative AI space where it’s really hard to say that this is third-party content,” Jain said.

To contact the reporter on this story: Shweta Watwe in Washington at swatwe@bloombergindustry.com

To contact the editors responsible for this story: Laura D. Francis at lfrancis@bloomberglaw.com; Brian Flood at bflood@bloombergindustry.com
