The startup wanted assurances its AI wouldn’t be used for mass surveillance of Americans or autonomous weapons deployment, while the government cited
Lin questioned the rationale for the ban, saying it didn’t appear to be directed at national security interests.
“If the concern is the integrity of the operational chain of command, the Department of War could just stop using Claude,” the judge wrote. “Instead, these measures appear designed to punish Anthropic.” Such a move “is classic illegal First Amendment retaliation,” she said.
A Pentagon spokesperson didn’t immediately respond to a request for comment sent after office hours.
In a statement, Anthropic welcomed the judge’s ruling. “While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI,” the company said.
Anthropic claims it is being shut out of government contracts for disagreeing with the administration and argues the legal principles at stake affect every federal contractor whose views the government dislikes. The Trump administration has vowed a legal fight to oust Anthropic from all US government agencies.
During a hearing before Lin earlier this week, a government lawyer said trust is a key component of any relationship the military has with its service providers, and that Anthropic destroyed that trust during contract negotiations by trying to dictate Pentagon policy on the use of AI technology.
The lawyer argued that the government is concerned about the risk of “future sabotage” from Anthropic, including changes to the AI software the government purchases from the company.
But in her ruling, Lin said the US Justice Department had no “legitimate basis” to determine that Anthropic’s firm stance regarding restrictions on its AI technology could lead it to “become a saboteur.”
At the hearing, an attorney for Anthropic pointed out that the Pentagon is able to review any AI model before deploying it, and that Anthropic has no way to stop a model from working, change how it works, turn it off, or see how it’s being used by the military.
As part of the legal fight over the ban, Anthropic also filed a complaint in an appellate court in Washington, DC, focusing on a law governing procedures for mitigating supply-chain risks in procurement. In that suit, the company claimed the Defense Department exceeded its authority with actions that were “arbitrary, capricious and an abuse of discretion.”
The case is Anthropic v. US Department of War, 26-cv-01996, US District Court, Northern District of California (San Francisco).
(Updated with comment from Pentagon official in seventh paragraph. An earlier version corrected the date of the suit filing.)
To contact the editors responsible for this story: Steve Stroth, Peter Blumberg
© 2026 Bloomberg L.P. All rights reserved. Used with permission.
