Popular AI Chatbots Found to Give Error-Ridden Legal Answers

Jan. 12, 2024, 10:00 AM UTC

Popular AI chatbots from OpenAI Inc., Google LLC, and Meta Platforms Inc. are prone to “hallucinations” when answering legal questions, posing particular risks for people who turn to the technology because they can’t afford a human lawyer, according to new research from Stanford University.

Large language models hallucinate at least 75% of the time when answering questions about a court’s core ruling, the researchers found. They tested more than 200,000 legal questions on OpenAI’s ChatGPT 3.5, Google’s PaLM 2, and Meta’s Llama 2—all general-purpose models not built for specific legal use.

Generative artificial intelligence has raised hopes that the powerful ...
