Anthropic’s Feud With Pentagon Mushrooms Into Broader Battle

Feb. 27, 2026, 6:27 PM UTC

Anthropic PBC got a vote of support from Silicon Valley workers for its increasingly contentious public-relations battle with the Pentagon over how the military can use artificial intelligence.

Two coalitions of workers – including employees of Amazon.com Inc., Google, Microsoft Corp. and OpenAI – are asking their companies to join Anthropic in refusing to comply with Defense Department demands for unrestricted use of AI products.

“We are writing to urge our own companies to also refuse to comply should they or the frontier labs they invest in enter into further contracts with the Pentagon,” a coalition of labor unions and other groups representing workers at Alphabet Inc., Amazon and Microsoft said in a letter posted early Friday.

The letters, and similar support for Anthropic from tech executives on social media, show how a tussle between one AI company and the Pentagon could mushroom into an industry-wide battle over how best to deploy the powerful technology safely.

Anthropic and the US military have been in talks over what exactly the armed forces can do with its tools. The richly valued startup, which has pitched itself as a cautious and responsible AI developer, insists that its products, including the Claude chatbot, not be used for surveillance of US citizens or to carry out lethal strikes without human involvement.

US Under Secretary of Defense for Research and Engineering Emil Michael discusses talks with Anthropic about loosening restrictions on the use of its artificial intelligence technology by the US military. Michael says the Pentagon offered concessions to the company and is open to continuing talks ahead of the Friday afternoon deadline. Source: Bloomberg

Defense officials have demanded the right to use Claude without restriction, threatening to invoke the Defense Production Act to compel Anthropic to make its products available. They’ve also floated cutting off Anthropic sales to Pentagon suppliers by designating the firm as a supply-chain risk.

Anthropic Chief Executive Officer Dario Amodei said in a statement Thursday that the company could not comply with the Defense Department request, though it continues to negotiate with the Pentagon. In response, a senior defense official took to social media to accuse Anthropic of putting US safety at risk.

In the open letter posted Friday, workers with groups including Amazon Employees for Climate Justice, the Alphabet Workers Union, No Tech for Apartheid and No Azure for Apartheid sought to connect Anthropic’s stand to employee efforts to get their companies to disclose more about the services they sell to state agencies taking part in President Donald Trump’s deportation push.

“Executive leadership at Google, Microsoft and Amazon must reject the Pentagon’s advances and provide workers with transparency about contracts with other repressive state agencies including DHS, CBP and ICE,” they said, referring to the Department of Homeland Security, Customs and Border Protection and Immigration and Customs Enforcement.

Another letter, published earlier this week and signed by Google and OpenAI employees, urged executives to put aside their differences “and stand together to continue to refuse the Department of War’s current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.”

Project Maven

It’s not the first time Silicon Valley’s rank and file have revolted over their employers’ ties to cutting-edge military programs.

In 2018, thousands of Google workers protested after discovering the company was quietly involved in Project Maven, a Pentagon effort to use AI to analyze drone footage. They objected to Google being in the business of war and feared AI could help lead to deaths on the battlefield.

Google subsequently decided not to renew its contract and was roundly criticized by senior defense officials for failing to support US national security while continuing the company’s work in China.

The high-stakes fracture set back the Pentagon’s efforts to develop military applications of artificial intelligence and cast a spotlight on the department’s growing reliance on the tech industry’s workforce.

Even so, some champions of military AI favor certain limits. Jack Shanahan, the retired three-star general who directed Project Maven, expressed sympathy for Anthropic’s position this week.

“No LLM, anywhere, in its current form, should be considered for use in a fully lethal autonomous weapon system,” he said in a LinkedIn post, referring to large language models that power Anthropic’s Claude and other AI tools. “Despite the hype, frontier models are not ready for prime time in national security settings. Over-reliance on them at this stage is a recipe for catastrophe.”

Stop Killer Robots, a coalition of more than 270 civil society groups backing regulation of lethal autonomous weapons, said Anthropic’s participation in AI-enabled military systems was already cause for alarm.

“The standards Anthropic has chosen to maintain are a bare minimum of responsible conduct, not cause for celebration,” said Nicole Van Rooijen, executive director of Stop Killer Robots.

(Updates with background on Google drone protests, comment from former Pentagon official, beginning in the 10th paragraph.)

To contact the reporters on this story:
Matt Day in Seattle at mday63@bloomberg.net;
Katrina Manson in New York at kmanson4@bloomberg.net

To contact the editors responsible for this story:
Robin Ajello at rajello@bloomberg.net;
Seth Fiegerman

© 2026 Bloomberg L.P. All rights reserved. Used with permission.