A defamation lawsuit filed against the artificial intelligence company OpenAI LLC will provide the first foray into the largely untested legal waters surrounding the popular program ChatGPT.
Georgia radio host Mark Walters claimed in his June 5 lawsuit that ChatGPT produced the text of a legal complaint that accused him of embezzling money from a gun rights group.
The problem is, Walters says, he’s never been accused of embezzlement or worked for the group in question. The AI-generated complaint, which was provided to a journalist using ChatGPT to research an actual court case, is entirely fake, according to Walters’ lawsuit filed in Georgia state court.
As use of ChatGPT widens in the legal industry, reports of such alleged “hallucinations” of fake facts and legal documents are popping up around the globe. An Australian mayor made news in April when he said he was preparing to sue OpenAI because ChatGPT falsely claimed that he was convicted and imprisoned for bribery.
In New York, a lawyer is facing potential sanctions in a federal court after filing legal briefs he researched using ChatGPT that cited fake legal precedents.
Walters’ lawsuit could be the first of many cases that will examine where legal liability falls when AI chatbots spew falsehoods, although legal experts said it has deficiencies and will face an uphill battle in court.
“In principle, I think libel lawsuits against OpenAI might be viable,” said Eugene Volokh, a First Amendment law professor at UCLA. “In practice, I think this lawsuit is unlikely to succeed.”
OpenAI has admitted that hallucinations are a limitation of its product, and ChatGPT has a disclaimer explaining that its outputs aren’t always reliable.
The company didn’t respond to requests for comment about the lawsuit.
“While research and development in AI is worthwhile, it is irresponsible to unleash a system on the public that knowingly disseminates false information about people,” Walters’ lawyer John Monroe said in an email to Bloomberg Law.
‘Complete Fabrication’
Fred Riehl, the editor-in-chief of the magazine AmmoLand, was researching the real-life federal court case Second Amendment Foundation v. Ferguson when ChatGPT produced the fake legal complaint against Walters, the host of a pro-gun radio show, according to the complaint.
Riehl asked ChatGPT to summarize the Ferguson case, which involves allegations that Washington state Attorney General Robert Ferguson is abusing his power by chilling the activity of the Second Amendment Foundation.
The chatbot produced a summary saying the foundation’s founder, Alan Gottlieb, was suing Walters for embezzling money as the organization’s treasurer and chief financial officer. But Walters has never been employed by the foundation, and the embezzlement lawsuit, including the case number, is a “complete fabrication,” the defamation suit said.
Riehl never published the summary with the fake lawsuit. He asked Gottlieb about the allegations, and the founder confirmed they were false, according to Walters’ complaint.
Volokh, the law professor, said Walters’ complaint doesn’t appear to meet the relevant standards under defamation law. Walters never claimed he told OpenAI that ChatGPT was making fake allegations. The fact that Riehl never published the falsehood would likely limit the economic damages Walters could prove, Volokh said.
“I suppose the claim might be, ‘You knew that your program was outputting falsehoods generally and you were reckless about it,’” Volokh said. “My sense of the case law is that it needs to be knowledge or recklessness as to the falsity of a particular statement.”
Defamation laws vary state by state, and some require a plaintiff to first ask for a retraction before they bring a lawsuit, said Megan Meier, a defamation attorney at Clare Locke LLP who represented Dominion Voting Systems in its suit against Fox News, which recently settled.
Under Georgia law, plaintiffs are “limited to actual economic losses” if they don’t request a retraction at least seven days before suing, she said. “A publisher’s refusal to retract is additional evidence of actual malice,” she noted.
Monroe said in an email to Bloomberg Law: “I am not aware of a request for a retraction, nor the legal requirement to make one.”
“Given the nature of AI, I’m not sure there is a way to retract,” he added.
Section 230 Defense
Many emerging internet firms have been shielded from lawsuits by Section 230 of the Communications Decency Act, a 1996 federal law that has come under intense scrutiny from lawmakers in recent years. It protects internet platforms from legal liability based on content created by their users.
But the question of whether a generative AI program is protected by the legal shield hasn’t yet reached the courts. Many legal observers, including the co-authors of Section 230, have argued that a program like ChatGPT falls outside the immunity.
Jess Miers, legal counsel at the tech-aligned think tank Chamber of Progress, said she believes Section 230 would likely cover generative AI. Users provide their own inputs to ChatGPT, and the outputs are based on predictive algorithms, similar to Google search results snippets, she argued.
“It’s unlikely that there will be evidence that OpenAI materially contributed to the illegal content by hard coding in this disinformation about this one person,” she said.
There’s a chance that OpenAI may not want to “open a can of worms” by raising that defense and will instead fend off the Georgia suit on other grounds, Miers noted.
Volokh argued that the defense wouldn’t apply, especially in a case where a chatbot is generating content that doesn’t come from a user or other public sources.
“The whole point is that ChatGPT isn’t passing along information, it’s just making things up,” he said.
The case is Walters v. OpenAI LLC, Ga. Super. Ct., 23-A-04860-2, complaint filed 6/5/23.
