Legal Tech Battles to Set Itself Apart From General AI Models

Sept. 2, 2025, 9:00 AM UTC

The warning in the MIT report was blunt: ChatGPT was outperforming a specialized AI legal technology tool that cost thousands of dollars more.

“A corporate lawyer at a mid-sized firm exemplified this dynamic. Her organization invested $50,000 in a specialized contract-analysis tool, yet she consistently defaulted to ChatGPT for drafting work,” said the report from MIT Nanda, which is building infrastructure to support autonomous AI agents.

The July report highlights the risk legal technology companies face in leaning into generative artificial intelligence while cheaper, general models keep getting better. Crunchbase estimates that about $2.2 billion of legal startup investment since 2024, roughly 80% of the total, has gone to AI-related companies.

It’s an existential problem faced by virtually every software company in the AI era, including those making legal-specific tools: Can they differentiate themselves from widely available foundation models? The answer will determine, in part, how lawyers work in the future.

To stay ahead, legal techs have to do more than offer raw computing power; indeed, most of them depend on the big foundation models to function at all. They need to make their tools easier to use than the general models and tailor them for legal teams. Otherwise, they risk having their functions duplicated by the big AI companies, much as Apple has absorbed iPhone and Mac features that were once third-party apps. iPhone users no longer needed to download an app to record calls once Apple made that feature standard, for example.

Daniel van Binsbergen, CEO of contract tech startup DraftPilot, said the odds are against legal tech companies outdoing the top AI models purely on AI. “The changes they’re making, they seem to lift quality in medicine, in law, in all of these disciplines,” he said.

A Tight Race?

Some early comparisons suggest legal-specific AI faces a tough fight to supplant general-purpose tools.

In one study, an OpenAI model led to bigger improvements in the clarity of some legal writing than the legal-specific tool Vincent AI did.

Another comparison found ChatGPT to be as accurate as the legal tech tool Oliver and more accurate than GC AI. Oliver and GC AI, however, were “far more valuable to the lawyer in daily routine work,” the study found.

DLA Piper, meanwhile, has “hundreds” of attorneys who love using the AI legal assistant Harvey, Barbara Taylor, the firm’s chief knowledge officer, said. Attorneys also tell her they still love ChatGPT, she said at a recent legal tech conference.

Legal tech companies believe they’ll stand out on user experience.

“The large language model is not the unique part,” Austin Walters said of his company, IP CoPilot. “It’s the other components around it.”

IP CoPilot identifies patentable ideas within a company in a way that would be very difficult for a large language model to do, Walters said.

Niche Is the Key

General legal research tools are more vulnerable to being overrun by the foundation AI models because that task is best suited to a chat-like interface akin to Claude or ChatGPT. Research platforms and other legal tools with more expansive legal data will have an edge, though, because in theory they can build AI tools trained on data that Claude and ChatGPT don’t have.

“The reason why specialist legal technology makes a difference is because, although the generalist models have access to large data, they have much more limited access to specialist data that may change over time,” said Harry Borovick, general counsel and AI governance officer for contract tech maker Luminance.

He gave the example of a complex lease agreement between two companies where a few mistakes made by an LLM could upend the contract.

There’s a big difference between having access to a wide variety of generalist contractual information, and actually understanding the risk appetite and the nuance of your business, especially under new circumstances, Borovick said.

Not all legal AI runs on general AI models. Some developers have smaller systems trained only on legal-specific data. Others use general models but add retrieval-augmented generation, a technique that retrieves relevant documents, such as a firm’s own contracts, and feeds them to the model to improve accuracy.
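For readers unfamiliar with the technique, here is a minimal, hypothetical sketch of how retrieval-augmented generation can wrap a general model in firm-specific data. The clause library, relevance scoring, and prompt assembly below are toy stand-ins for illustration, not any vendor’s actual system.

```python
# Minimal sketch of retrieval-augmented generation (RAG) for a legal Q&A tool.
# Illustrative only: the clause library and keyword scoring are hypothetical
# stand-ins; a real product would use embeddings and a foundation-model API.

from collections import Counter

CLAUSE_LIBRARY = [
    "Either party may terminate this lease with 90 days' written notice.",
    "Tenant indemnifies Landlord against third-party claims arising from use of the premises.",
    "Rent escalates 3% annually on each anniversary of the commencement date.",
]

def score(query: str, passage: str) -> int:
    """Crude relevance score: number of shared lowercase tokens."""
    q, p = Counter(query.lower().split()), Counter(passage.lower().split())
    return sum((q & p).values())

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k passages most relevant to the query."""
    return sorted(CLAUSE_LIBRARY, key=lambda p: score(query, p), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Prepend retrieved firm-specific context so a general model answers from it."""
    context = "\n".join(f"- {p}" for p in retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

if __name__ == "__main__":
    # In a real tool this prompt would be sent to a foundation model;
    # here we simply print it to show the retrieval step.
    print(build_prompt("How much notice is required to terminate the lease?"))
```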

OpenAI, Anthropic and Google will continue to develop more general-purpose uses for their AI, but they’re less likely to make legal-specific applications, said CJ Saretto, Axiom Law’s chief technology officer.

“The products are niche and they’re very sophisticated,” he said.

“Will some of them get absorbed? Sure,” he added. “Will all of them just sort of die? I don’t think so.”

Convenience and Obligations

Understanding the needs of clients could be the key to survival.

Building convenience and ethics into tools will be the calling cards for legal tech companies, said Nicole Black, a legal insight strategist at 8am, which offers products for legal and accounting firms. “Legal technology companies really are able to stand apart by fully understanding the ethical compliance needs, having a deep understanding of the workflow needs of lawyers, and then being able to understand the way that lawyers work, understand their data sets and understand how to guard rail things,” she said.

And as general LLM technology improves, legal techs can stay ahead by focusing on privacy and security, and by being diligent about telling the market that their systems are secure, said Robert Weiss, a partner at Barnes & Thornburg and a specialist in information technology law.

Being able to predict outcomes would also give legal AI an advantage, Weiss, a former president of the International Technology Law Association, said.

For example, a legal tool might be able to predict the chances of a positive litigation outcome, transactions getting regulatory approval, or the likelihood of lawsuits occurring because of a merger agreement that’s about to be signed, he said.

“That kind of judgment oriented analysis is something that they can get ahead of the large language models on because it’s more expensive,” he said. “It requires more internal expertise.”

To contact the reporters on this story: Evan Ochsner in Washington at eochsner@bloombergindustry.com; Kaustuv Basu in Washington at kbasu@bloombergindustry.com

To contact the editors responsible for this story: Jeff Harrington at jharrington@bloombergindustry.com; David Jolly at djolly@bloombergindustry.com
