Anthropic, Pentagon Standoff Shows Why AI Company Design Matters

April 22, 2026, 8:30 AM UTC

When OpenAI launched in 2015, it promised that artificial intelligence would serve humanity, not shareholders. Today the company sits at the center of a growing conflict over defense contracts, commercialization, and who will control the future of AI.

These disputes reveal a deeper issue: How artificial intelligence companies are structured may shape how they balance profit, safety, and national security.

OpenAI’s 2025 restructuring was meant to reassure critics. Under a memorandum of understanding with California Attorney General Rob Bonta, the OpenAI Foundation, the nonprofit parent, appoints the entire board of the OpenAI Group Public Benefit Corp., with the attorney general exercising oversight.

On paper, this preserves mission control even as outside investors expand their ownership stakes. But a more consequential feature of the restructuring has received less attention: It changes the underlying organizational design in a way that gives OpenAI greater latitude to compete more aggressively in pursuit of profit.

The nonprofit foundation that controls OpenAI’s operating business is legally structured as a public charity. Prior to the restructuring, the charity’s core function was to govern the for-profit subsidiary and ensure that it developed AI in the interests of humanity. To reinforce this role, the for-profit arm was subject to caps on profit distributions, reflecting the premise that profits would be largely reinvested in the business, which was itself understood to advance the social mission.

Now the OpenAI Foundation resembles European enterprise foundations that control companies such as Novo Nordisk, IKEA, or Carlsberg. The foundation pursues charitable goals and may receive distributions, while the operating company runs a competitive business.

That shift became clearer when OpenAI announced that the foundation would use income from the for-profit to fund philanthropic work, beginning with a $25 billion commitment to health programs, disease research, and AI resilience initiatives. The announcement underscores a growing separation between business activity and philanthropy.

Pentagon Contract Standoff

Tensions surrounding Anthropic, OpenAI, and Pentagon contracts are often framed as a clash of values: commercialization versus safety, expansion versus restraint. But there is another explanation. Both Anthropic and OpenAI are controlled by nonprofits, yet their governance structures differ in ways that may shape how they respond to controversial government contracts.

Anthropic is what we call a socially oriented for-profit. The operating company exists primarily to pursue a social mission of safe AI development, under the supervision of a Delaware purpose trust with the power to appoint a majority of the board. The trust’s role is governance rather than extracting profit.

OpenAI’s structure has evolved in a different direction. Although it was originally structured as a socially oriented for-profit, after the restructuring it functions primarily as an income-generating for-profit whose surplus can be distributed to a nonprofit parent that funds charitable initiatives.

The distinction between socially oriented and income-generating for-profits, which we develop in an academic article, may seem subtle but has important consequences.

Different Incentives

On the surface, both Anthropic and OpenAI operate through Delaware public benefit corporations. Directors must balance shareholder interests with the public benefit stated in the charter and the interests of stakeholders, though courts generally defer to board judgment.

Yet their internal logic differs. Anthropic’s structure is mission-centered. The company exists to build safe AI, and its commercial decisions are filtered through that mission. Opportunities in sensitive areas such as defense contracts are naturally evaluated through the lens of safety-first development.

While Anthropic’s governance structure includes a failsafe allowing sufficiently large stockholder supermajorities to amend the trust and its powers, that provision appears reserved for extreme circumstances, not for directing ordinary business strategy, as the recent clash with the Pentagon suggests.

OpenAI’s structure creates different incentives. While it also claims to pursue beneficial AI, including in the recent negotiations with the Pentagon over safety constraints, its drive toward an IPO at a potential $1 trillion valuation creates pressure to focus primarily on generating profits.

Some of that economic upside may flow to the OpenAI Foundation, which says it will use those resources for philanthropic goals, such as curing diseases.

For-Profit Income Generator

OpenAI’s restructuring removed earlier understandings that profits would remain largely reinvested in the business. Instead, the company can generate and distribute surplus.

This shift aligns investor interests with those of the nonprofit owner, because both benefit when the company produces financial returns. It may also make the company easier to position in traditional capital markets, including a possible future IPO.

Just as importantly, the new structure gives the operating company greater strategic flexibility. Government contracts can be framed as revenue-generating investments that ultimately support charitable purposes through the nonprofit’s distributions.

There are few clear legal rules specifying how OpenAI’s nonprofit parent must balance safe AI development against income generation. “Beneficial AI” isn’t a precise operational standard, and boards retain considerable discretion in interpreting how commercial activity advances the mission.

Securing a Pentagon contract can be presented as an investment that strengthens both AI development and philanthropic output, not as a departure from a mission.

Anthropic’s Limited Flexibility

Anthropic’s structure offers less flexibility. Because its operating company is primarily a socially oriented for-profit, safe AI development is still its core mission. Controversial contracts can’t easily be reframed as income-generating opportunities for a charitable parent.

This makes trade-offs sharper. Anthropic’s investors and customers, including firms involved in defense projects, must weigh the company’s safety commitments against commercial realities. If Anthropic declines certain government engagements, it risks losing revenue and strategic position in a market where government demand is increasingly influential.

Its governance structure therefore narrows the space for presenting commercial expansion as mission-compatible. That focus may limit maneuverability in an industry defined by enormous capital needs and fierce competition.

Legal Design Matters

The Pentagon dispute highlights a broader lesson about corporate governance. Organizational design influences how boards frame decisions, how investors evaluate risk, and how mission commitments are interpreted.

By restructuring itself as an income-generating for-profit, OpenAI has embedded commercialization within its governance architecture. Anthropic has maintained a stronger mission focus on safe AI, but at the cost of reduced strategic flexibility. In frontier AI, where capital demands are vast and national security stakes are rising, those structural differences may matter more than rhetoric about purpose versus profit.

The coming years will test which model of nonprofit control proves more sustainable in the AI economy: the socially oriented for-profit or the income-generating for-profit. That choice may shape how the next generation of AI companies balances safety, profit, and national power.

This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law, Bloomberg Tax, and Bloomberg Government, or its owners.

Author Information

Ofer Eldar is a professor at UC Berkeley School of Law and Senior Research Fellow at the Halle Institute for Economic Research.

Mark Ørberg is an assistant professor at Copenhagen Business School.


To contact the editors responsible for this story: Melanie Cohen at mcohen@bloombergindustry.com; Jada Chin at jchin@bloombergindustry.com
