The Pentagon’s decision to designate Anthropic as a national security risk has forced the company’s customers to assess the AI titan’s role in powering their own products.
Anthropic has said the March directive—which labeled the company a supply chain risk—leaves the vast majority of its customers unaffected but bars use of its products in direct connection with Defense Department contracts.
But a broader set of Anthropic customers are digesting what the risk designation means for them. Although it “technically applies only to certain covered defense contracts, contractors should be prepared to respond to directives and inquiries from customers on their other government contracts as well,” law firm Mayer Brown said in an analysis on its website.
Many of Anthropic’s customers, such as legal tech vendors, act as intermediaries between the AI giant and users by building additional capabilities on top of Anthropic’s Claude AI model.
Those vendors are closely watching how the landscape might change due to the Pentagon’s actions. Mayer Brown said companies that have federal government contracts need to “be prepared to explain the impact of transitioning away from Anthropic.”
Ryan Anderson, CEO of Filevine, a company that makes AI-powered practice management tools for the legal industry, said the company is conducting a high-priority review of the impacts of the Pentagon’s decision and exploring the ramifications with its government clients.
“We are actively having conversations with federal government agencies, including the military,” Anderson said.
Makers of legal tech tools aim to entice lawyers with the promise of artificial intelligence. But what they’re selling isn’t computing power. They’re selling features—like a user-friendly experience, capabilities tailored for legal work, and bulked-up privacy and security standards—built on top of baseline models like Anthropic’s Claude.
Legal AI makers are competing against each other, but they all get their underlying artificial intelligence from the same places: baseline models built by Anthropic, OpenAI and Google.
Preferred Models Rule
Because of how quickly the landscape changes, most legal AI makers are model agnostic, meaning they can run their platforms using various models. They also use different models, including Gemini and GPT, for different tasks depending on what a particular model is good at. But among the models, many legal techs have a favorite.
“Claude Opus 4.6 is probably the best model for legal right now,” said Scott Stevenson, CEO of the legal contract AI company Spellbook.
Preferences can change quickly as the big AI makers leapfrog each other with new releases and updates. OpenAI, for example, released its newest model, GPT 5.4, earlier this month. Claude is also more expensive to use than some of its competitors, so some legal AI companies opt for cheaper options when they don’t need the most powerful model.
Still, those that rely heavily on Claude might see a performance dip if they switch to another model, Stevenson said.
“It won’t be the same quality necessarily,” Stevenson said. “It would be disruptive because we’d be falling back to a different model.”
Thomas Bueler-Faudree, CEO of the AI startup August, said his company’s software, which is used for both the business and practice of law firms, could run on different AI models without disrupting service. But “forcing the industry toward inferior tools over a contract disagreement doesn’t make anyone safer,” he said in an email.
The Pentagon’s actions are creating unpredictability in the legal industry, Bueler-Faudree said.
“When the government designates a leading American AI company a national security risk, that uncertainty doesn’t stay contained—it ripples across the entire ecosystem,” he said. “We’ve already heard concerns from clients who are evaluating how this could affect their own compliance obligations downstream.”
Mayer Brown said those with government contracts, at least, “should evaluate whether they can perform equivalent functions through alternative means, and document any technical or functional differences.”
Perception of the government’s action matters, too. Some Anthropic customers may base their decisions on how their own clients react to the Pentagon’s move.
“If we’ve got a client that’s themselves spooked by it or influenced by it and wants to make sure that everything’s provided, then we switch over to an OpenAI model,” said Jon Chan, senior managing director at FTI Technology.
San Francisco-based Anthropic, most recently valued at $380 billion, quickly became a titan in the AI space. The company made its first dollar less than three years ago. But now the privately held company has run-rate revenue of $14 billion, it said last month. The spat with the Pentagon originated over a $200 million defense contract.
Waiting for Feud to End
The supply chain risk designation came after weeks of negotiation over the US military’s access to Anthropic’s technology. Talks broke down after the company demanded assurances that its AI wouldn’t be used for mass surveillance of Americans or autonomous weapons deployment.
Anthropic last week sued the Defense Department over the designation, a label typically reserved for companies from countries the US views as adversaries.
Some expect the administration to back down. Mihir Patil—founder of the startup Overstand Labs, which is backed by the prominent tech accelerator Y Combinator—pointed to an instance when defense tech firm Palantir successfully sued the Army, but is now used by the US military.
“This is all temporary,” Patil said. “I think this is all just a hiccup,” he said.
Filevine’s Anderson likewise said in an email that he believes the feud between the government and Anthropic will end. “We are confident that these groups will find a way to work together,” he said.