The Pentagon earlier declared Anthropic a supply-chain risk after President Donald Trump directed US government agencies to stop using the artificial intelligence giant’s products.
The Pentagon wants to use Anthropic’s Claude chatbot for any purpose within legal limits — but without any usage restrictions from Anthropic. The firm has insisted that Claude not be used for mass surveillance against Americans or in fully autonomous weapons operations.
“Designating Anthropic as a supply chain risk would be an unprecedented action—one historically reserved for US adversaries, never before publicly applied to an American company,” the company said in a
© 2026 Bloomberg L.P. All rights reserved. Used with permission.