A judge appeared skeptical of the Trump administration’s rationale for banning the federal government from using artificial intelligence technology from Anthropic.
During a hearing Tuesday in San Francisco, US District Judge Lin said the ban that followed the Pentagon’s dispute with the company over AI safety concerns “looks like an attempt to cripple Anthropic.” The judge added that she’s concerned about whether the government is punishing Anthropic for speaking out publicly about the conflict.
Anthropic sued last month to block the Pentagon’s declaration, escalating a high-stakes dispute over safeguards on AI technology used by the military. The startup demanded assurances that its AI wouldn’t be used for mass surveillance of Americans or autonomous weapons deployment, while the government cited national security in arguing it couldn’t accept any restrictions.
Anthropic had asked the judge to issue a preliminary injunction to block the government’s ban from staying in effect while the legal fight plays out. Lin didn’t rule on that request, but said she’d make a decision in the coming days.
Anthropic wants the judge to remove the supply-chain risk designation and require US agencies to withdraw directives related to it. The company claims it is being shut out for disagreeing with the administration and argues the legal principles at stake affect every federal contractor whose views the government dislikes.
The Trump administration has vowed a legal fight to oust Anthropic from all US government agencies.
During the hearing, a lawyer for the federal government said trust is a key component of any relationship the military has with companies providing it services, and that Anthropic destroyed that trust during contract negotiations by trying to dictate Pentagon policies on the use of AI technology.
The lawyer argued that the government is concerned about the risk of “future sabotage” from Anthropic, including changes to the AI software the government purchases from the company.
In response, a lawyer for Anthropic pointed out that the Pentagon is able to review any AI model before deploying it, and that Anthropic has no way to stop a model from working, change how it works, turn it off, or see how it’s being used by the military.
The case is Anthropic v. US Department of War, 26-cv-01996, US District Court, Northern District of California (San Francisco).
© 2026 Bloomberg L.P. All rights reserved. Used with permission.
