OpenAI Defends Pentagon Deal, Claims Safety Exceeds Anthropic’s

Feb. 28, 2026, 10:46 PM UTC

OpenAI has agreed to deploy its own artificial intelligence models within the Defense Department’s classified network after rival Anthropic PBC saw its relationship with the Pentagon implode over surveillance and autonomous weapons concerns.

OpenAI Chief Executive Officer Sam Altman said late Friday that he’d reached an agreement with the department that reflects the firm’s principles that prohibit domestic mass surveillance and require “human responsibility for the use of force, including for autonomous weapon systems.” The startup also built safeguards to ensure its models behave as they should as part of the deployment, Altman said in a post on the social media platform X.


OpenAI declined to comment on whether the firm’s services for the department would replace work previously done by Anthropic. The Defense Department didn’t respond to requests for comment.

Just hours earlier, the Pentagon had declared Anthropic a supply-chain risk, an unprecedented move against an American company that could have profound consequences for its business. Dean Ball, a former adviser to US President Donald Trump on AI, described the decision as “attempted corporate murder.” Less than a day after Altman posted about the deal, Trump announced that the US had carried out airstrikes against Iran alongside Israel.

In a statement Saturday, OpenAI shared more of its rationale for agreeing to work with the Pentagon and said its deal had more safeguards than any other for classified AI work, including Anthropic’s. It said the contract would respect OpenAI’s guardrails, including no use of its technology to conduct mass domestic surveillance, direct autonomous weapons systems or run high-stakes automated systems that track behavior.

While other developers have leaned on usage policies to set their red lines, OpenAI said it retains control over safety through the deployment of its AI tools, with strong contractual provisions and company staff with US security clearances working with government personnel.

Read More: Anthropic’s Pentagon Showdown Is About More Than AI Guardrails

OpenAI said that it disagrees with the Pentagon decision to declare Anthropic a supply-chain risk and that it hopes its accord with the military will ease tensions between the government and other top AI developers. “We don’t know why Anthropic could not reach this deal, and we hope that they and more labs will consider it,” the company said.

Anthropic, which has stipulated that its products not be used for surveillance of Americans or to make fully autonomous weapons, said Friday that “no amount of intimidation or punishment from the Department of War will change our position.” The company vowed to challenge any formal notification that it’s been designated a supply-chain risk in court, and its chief called the move “retaliatory and punitive” in an interview with CBS News.

The AI firm has been thrust into the limelight in more ways than one in recent weeks: Its push to expand AI offerings to businesses has triggered selloffs in everything from software to financial services and cybersecurity stocks as investors fear such products will disrupt entire industries.

Anthropic has yet to comment on OpenAI’s argument that the agreement it has detailed with the Pentagon provides safeguards sufficient to prevent fully autonomous weapons.

Read More: Cyber Stocks Slide as Anthropic Unveils Claude Security Tool

OpenAI’s deal with the Pentagon threatens to widen the rift between the Trump administration and Anthropic, which has drawn strong support for its stance in Silicon Valley, where tech workers rallied to the company’s side and urged other major tech companies, including Amazon.com Inc. and Microsoft Corp., to follow suit. The Pentagon has also struck a deal with xAI for its Grok chatbot to start operating on the classified cloud.

The Pentagon had offered terms to Anthropic earlier this week that incorporated some language that the company had proposed on surveillance and autonomy, a person familiar with the situation said, asking not to be identified because the talks weren’t public. But in Anthropic’s opinion, they didn’t go far enough in ensuring the department wouldn’t set those restrictions aside when it deemed it necessary to do so, the person said.

Altman addressed some of the issues of surveillance and autonomous weapons in his post, saying the Defense Department was aligned with OpenAI’s principles and reflected them in its agreement with the company — and asking the department “to offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept.”


Altman’s earlier statement on X, however, stopped short of Anthropic’s red line to prohibit the use of its AI tools in fully autonomous weapons. His commitment to maintaining “human responsibility for the use of force” hews closely to existing Pentagon policy that has governed the development of semiautonomous and autonomous weapons for years and calls for “appropriate levels of human judgment over the use of force.”

One key detail in Altman’s post: OpenAI’s tools will be used in a classified setting “only” on cloud networks, as opposed to edge servers. Cloud networks can potentially run automated decision-making systems, but they also make systems easier to command and control — or cut — if something goes wrong.

Dario Amodei, CEO of Anthropic, used to work at OpenAI and left in 2020 in part because of his concerns that the startup was prioritizing commercialization and speed over safety.

Read More: US Bars Anthropic Products From Agencies, Contractors in AI Feud

OpenAI began as a nonprofit and converted to a more traditional for-profit enterprise last year. Though the company initially prohibited the use of its technology for military applications, OpenAI updated its policy to allow such uses in 2024. It has also dropped the word “safely” from its mission statement, which is currently to “ensure that artificial general intelligence — AI systems that are generally smarter than humans — benefits all of humanity.”

In a post on X, Defense Secretary Pete Hegseth outlined a six-month period for Anthropic to hand over AI services to another provider. “America’s warfighters will never be held hostage by the ideological whims of Big Tech,” Hegseth wrote. “This decision is final.” His post appeared shortly after Trump ordered federal agencies to drop Anthropic.

Both Anthropic and OpenAI have increasingly turned their attention to profits as they push for initial public offerings as soon as this year, tapping frenzied investor interest in AI.

Read More: Pentagon Casts Cloud of Doubt Over Anthropic’s AI Business

On Friday, OpenAI announced it had raised $110 billion in a deal that values the startup at $730 billion, representing the ChatGPT maker’s largest funding round to date. Anthropic raised a $30 billion round earlier this month from some of the same investors backing OpenAI.

Amodei and Altman have publicly clashed. Most recently, during an AI summit in New Delhi this month, the two men ended up standing next to each other with Prime Minister Narendra Modi, and didn’t hold hands while everyone else on stage did.

OpenAI is already involved in a Pentagon effort to develop voice-controlled autonomous drone swarming technology, Bloomberg has previously reported. Earlier this year, the company was selected to compete in a $100 million prize challenge as part of a team led by Applied Intuition Inc. OpenAI models will be used to translate voice commands into digital instructions, according to a document reviewed by Bloomberg.

The company said that effort is in keeping with its usage policy guidelines. SpaceX’s xAI will also compete separately on the effort, Bloomberg has reported, despite Elon Musk’s long-stated opposition to developing “new tools for killing people.”

(Updates with headline and OpenAI statement starting in fourth paragraph.)

--With assistance from Maggie Eastland and John Harney.

To contact the reporters on this story:
Rachel Metz in San Francisco at rmetz17@bloomberg.net;
Katrina Manson in New York at kmanson4@bloomberg.net

To contact the editors responsible for this story:
Anne VanderMey at avandermey@bloomberg.net

Lynn Doan, Michael Shepard

© 2026 Bloomberg L.P. All rights reserved. Used with permission.
