- More companies customizing AI tools for their own use
- Hallucination risk shows need for human involvement
Corporate legal departments and law firms are drawn to developing more AI tools in-house that they can control, but the technology’s popularity comes with a downside: it exposes them to litigation, regulatory misfires and other threats.
Wall Street giants like Citi and Morgan Stanley have flagged the technology’s risks in their securities filings.
Case in point: Personal injury firm Morgan & Morgan, whose attorneys were sanctioned by a judge who said eight of the nine case citations in one of their motions were made up by AI.
“This deeply regrettable filing serves as a hard lesson for me and our firm as we enter a world in which artificial intelligence becomes more intertwined with everyday practice,” T. Michael Morgan, a Morgan & Morgan attorney, said in a filing. “While artificial intelligence is a powerful tool, it is a tool which must be used carefully.”
Elsewhere, pro se litigants and plaintiffs’ attorneys have been fined thousands of dollars for citing made-up cases hallucinated by generative-AI technology, including ChatGPT.
For corporate legal departments, dealing with an emerging technology draws them away from the industry-specific expertise that they lean on for their jobs, Elisa Botero, a partner at Curtis, Mallet-Prevost, Colt & Mosle LLP, said.
“It’s a challenge for in-house departments because this is an area of specialty that they wouldn’t normally have,” Botero said.
An Array of Risks
Building AI tools in-house, proponents say, alleviates some of their top concerns with buying AI tools. In-house tools allow for greater control over sensitive data and customization for easier use—all while potentially reducing the risk of hallucinations, owing to their more limited data sets.
In-house tools are generally built on foundation models like ChatGPT, and proponents say they can cost less than commercial products.
“You end up paying a pretty high premium for grabbing these things off the shelf,” Alden said.
But homemade AI tools still bring multiple risks to the table. For Morgan & Morgan, the hallucinations led to court-imposed fines for its attorneys. The firm also said it offered to pay the legal fees of opposing counsel who contested the motion.
In its annual securities filing, Citi said AI’s emergence could pose compliance risk if laws or regulations were updated to meet the technology’s growth. Hallucinations are another risk, Citi said.
“While Citi has policies which govern the use of emerging technologies, ineffective, inadequate or faulty Generative AI development or deployment practices by Citi or third parties could result in unintended consequences, such as AI algorithms that produce inaccurate or incomplete output or output based on biased, incomplete and/or inaccurate datasets, or cause other issues, concerns or deficiencies,” Citi said in a filing.
Reliance on nascent technology creates new information security risks that might be exacerbated by human error, Morgan Stanley said in a public filing. Existing risk-management policies might not be suited to the new risks, it said.
Morgan Stanley has said it has ChatGPT-powered tools its employees can use.
Human Element
Morgan & Morgan, the personal injury firm, said in court filings that MX2.law “was developed as a private AI tool, specifically customized to meet our firm’s stringent requirements for privacy and security protocol.” The firm declined to comment.
The Morgan & Morgan case shows that AI use raises a human element alongside a technological one. A Wyoming judge declined to sanction the firm itself because it proved it had trained its attorneys on proper AI use.
Teaching lawyers to approach AI answers with the right mindset is a critical step, said Christian Matarese, chair of Dechert LLP’s technology committee.
“You need to focus on training them to be skeptical,” Matarese said. “If you do everything through the lens of skepticism you’ll get a better answer.”
Minimizing Threats
Brandi Pack, an AI specialist at UpLevel Ops, said her company’s clients are increasingly asking for humans to be kept in the loop on AI tools. UpLevel makes custom GPTs for clients.
One of the ways AIs keep humans involved is deep linking, where AI products provide links to the source material they’re drawing answers from. That makes it easier for humans to verify what an AI has told them.
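To make the idea concrete, here is a minimal sketch of how a deep-linked answer might be checked, assuming a hypothetical response format in which each statement carries a source link and the passage it claims to rely on; it does not reflect any vendor’s actual API.

```python
# Illustrative sketch of "deep linking": the AI's answer carries a link to the
# source passage it drew from, so a human reviewer can jump straight to it.
# The response format and helper below are hypothetical, not a real product's API.
from dataclasses import dataclass


@dataclass
class CitedAnswer:
    text: str          # the statement the AI produced
    source_url: str    # deep link to the document it claims to rely on
    quoted_span: str   # the exact passage the AI says supports the statement


def needs_human_review(answer: CitedAnswer, source_text: str) -> bool:
    """Flag the answer if the quoted passage is not actually in the linked source."""
    return answer.quoted_span not in source_text


answer = CitedAnswer(
    text="The agreement allows termination on 30 days' notice.",
    source_url="https://example.com/credit-agreement#section-9",
    quoted_span="either party may terminate upon thirty (30) days' written notice",
)

# A reviewer (or a pre-check script) fetches the linked source and verifies the quote.
source_text = "... either party may terminate upon thirty (30) days' written notice ..."
print("Send to human reviewer:", needs_human_review(answer, source_text))
```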
Another way to minimize hallucination risk is by using multiple AI tools to check each other’s work, Pack said, referring to hallucinations as “basically just little glitches.”
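A rough sketch of that cross-checking pattern follows, under the assumption that each model is wrapped in a simple callable; the stand-in functions are illustrative only, not real vendor SDK calls.

```python
# Sketch of cross-checking: ask several independent models the same question
# and escalate to a human whenever their answers disagree.
from typing import Callable, List


def cross_check(question: str, models: List[Callable[[str], str]]) -> dict:
    answers = [model(question) for model in models]
    unanimous = len({a.strip().lower() for a in answers}) == 1
    return {
        "answers": answers,
        "needs_human_review": not unanimous,  # any disagreement goes to a person
    }


# Stand-in "models" for illustration; in practice each would wrap a different AI service.
model_a = lambda q: "Smith v. Jones, 2014"
model_b = lambda q: "No such case found"

result = cross_check("Which case supports this proposition?", [model_a, model_b])
print(result["needs_human_review"])  # True: the models disagree, so a lawyer checks
```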
Adding additional layers of artificial intelligence is expensive, however, and not every company has the resources to build that kind of system, she said. How much companies should invest depends on what they’re using AI for, she said. Using AI to produce text for public consumption comes with higher risk than using it for internal purposes, she said.
“We would never use a single GPT, even a custom GPT, for a high-risk use case,” she said.
McDermott Will & Emery is able to reduce hallucination risk by limiting its use of AI to certain tasks, the firm’s chief knowledge officer, Hunter Jackson, said. The firm is developing its own AI tools based on ChatGPT. McDermott is training attorneys to use the tools for tasks like document review, which he says has a lower hallucination risk because it relies on limited data.
“If you are operating in a walled garden, that hallucination risk is substantially mitigated,” Jackson said. “If I upload 30 credit agreements to CoCounsel, and I say review those, that’s all it’s doing, it’s not crawling the web to look at SEC credit agreements. And so, if I sort of constrain the experience to a dataset, those hallucination risks effectively go away.”
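A simplified sketch of that walled-garden constraint, assuming a toy keyword retriever in place of a real system’s embeddings or the CoCounsel product itself; the point is only that answers are drawn from the uploaded set and the tool declines when nothing there is relevant.

```python
# Sketch of a "walled garden" assistant: it can only draw on documents the lawyer
# has uploaded, and it declines to answer when nothing in that set is relevant.
# The keyword retrieval is a toy stand-in; the constraint, not the search method,
# is what limits hallucination risk.
def retrieve(question: str, uploaded_docs: dict[str, str]) -> list[str]:
    terms = {w.lower() for w in question.split() if len(w) > 3}
    return [
        name for name, text in uploaded_docs.items()
        if terms & {w.lower() for w in text.split()}
    ]


def answer_from_walled_garden(question: str, uploaded_docs: dict[str, str]) -> str:
    hits = retrieve(question, uploaded_docs)
    if not hits:
        # Nothing in the uploaded set covers this, so say so rather than guess.
        return "No answer: the uploaded documents do not address this question."
    return f"Answer drawn only from: {', '.join(hits)}"


docs = {"credit_agreement_1.txt": "Borrower shall maintain a leverage ratio below 3.0."}
print(answer_from_walled_garden("What leverage ratio must the borrower maintain?", docs))
print(answer_from_walled_garden("What does the SEC require for disclosure?", docs))
```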