OpenAI Chief Executive Officer Sam Altman
OpenAI declined to comment on whether the firm’s services for the department would replace work previously done by Anthropic. The Defense Department didn’t respond to requests for comment.
Just hours earlier, the Pentagon had declared Anthropic a supply-chain risk, an unprecedented move against an American company that could bar its products from US government agencies and contractors.
In a statement Saturday, OpenAI shared more of its rationale for agreeing to work with the Pentagon and said its deal had more safeguards than any other for classified AI work, including Anthropic’s. It said the contract would respect OpenAI’s guardrails, including no use of its technology to conduct mass domestic surveillance, direct autonomous weapons systems or run high-stakes automated systems that track behavior.
While other developers have leaned on usage policies to set their red lines, OpenAI said it retains control over safety through the deployment of its AI tools, with strong contractual provisions and company staff with US security clearances working with government personnel.
OpenAI said that it disagrees with the Pentagon decision to declare Anthropic a supply-chain risk and that it hopes its accord with the military will ease tensions between the government and other top AI developers. “We don’t know why Anthropic could not reach this deal, and we hope that they and more labs will consider it,” the company said.
Anthropic, which has stipulated that its products not be used for surveillance of Americans or to make fully autonomous weapons, said Friday that “no amount of intimidation or punishment from the Department of War will change our position.” The company vowed to challenge any formal notification that it’s been designated a supply-chain risk in court, and its chief called the move “retaliatory and punitive” in an interview with CBS News.
The AI firm has been thrust into the limelight in more ways than one in recent weeks: Its push to expand AI offerings to businesses has triggered selloffs in everything from software to financial services and cybersecurity stocks as investors fear such products will disrupt entire industries.
Anthropic has yet to comment on OpenAI’s argument that the agreement it has detailed with the Pentagon provides safeguards equivalent to a prohibition on fully autonomous weapons.
OpenAI’s deal with the Pentagon threatens to widen the rift between the Trump administration and Anthropic, which has drawn strong support for its stance in Silicon Valley.
The Pentagon had offered terms to Anthropic earlier this week that incorporated some language the company had proposed on surveillance and autonomy, a person familiar with the situation said, asking not to be identified because the talks weren’t public. But in Anthropic’s view, the terms didn’t go far enough in ensuring the department wouldn’t set those restrictions aside whenever it deemed it necessary to do so, the person said.
Altman addressed some of the issues of surveillance and autonomous weapons in his post, saying the Defense Department was aligned with OpenAI’s principles and reflected them in its agreement with the company, and asking the department “to offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept.”
Altman’s earlier statement on X, however, stopped short of Anthropic’s red line to prohibit the use of its AI tools in fully autonomous weapons. His commitment to maintaining “human responsibility for the use of force” hews closely to existing Pentagon policy that has governed the development of semiautonomous and autonomous weapons for years and calls for “appropriate levels of human judgment over the use of force.”
One key detail in Altman’s post: OpenAI’s tools will be used in a classified setting “only” on cloud networks, as opposed to edge servers. Cloud networks can potentially run automated decision-making systems, but they also make systems easier to command and control — or cut — if something goes wrong.
Read More: US Bars Anthropic Products From Agencies, Contractors in AI Feud
OpenAI began as a nonprofit and converted to a more traditional for-profit enterprise last year. Though the company initially prohibited the use of its technology for military applications, OpenAI updated its policy to allow such uses in 2024. It has also dropped the word “safely” from its mission statement, which is currently to “ensure that artificial general intelligence — AI systems that are generally smarter than humans — benefits all of humanity.”
Both Anthropic and OpenAI have increasingly turned their attention to profits as they push for initial public offerings as soon as this year, tapping frenzied investor interest in AI.
On Friday, OpenAI announced it had raised $110 billion in a deal that values the startup at $730 billion, the ChatGPT maker’s largest funding round to date. Anthropic raised a $30 billion round earlier this month from some of the same investors.
Amodei and Altman have publicly clashed. Most recently, during an AI summit in New Delhi this month, the two men ended up standing next to each other with Prime Minister Narendra Modi.
OpenAI is already
The company said that effort is in keeping with its usage policy guidelines.
