- Squire Patton Boggs attorneys assess the US shift on AI policy
- Trump prefers AI innovation to oversight and risk mitigation
Early actions from the Trump administration indicate a fundamental shift in US artificial intelligence policy away from the Biden administration’s emphasis on oversight, risk mitigation, and equity, toward a framework centered on deregulation and the promotion of AI innovation as a means of maintaining US global dominance.
The administration believes this shift better positions US tech companies to continue leading on AI development. But challenges remain for companies that also operate in foreign jurisdictions with stricter AI regulations and certain US states that have already enacted their own AI regulatory rules.
The divergence between the federal government’s pro-innovation strategy and the precautionary regulatory model pursued by the EU, South Korea, and individual states underscores the need for companies operating across jurisdictions to adopt flexible compliance strategies that account for varying regulatory standards.
Deregulation and Dominance
Vice President JD Vance laid out the administration’s commitment to US AI dominance in his first major policy speech earlier this month, saying the US is leading in AI and the administration plans to keep it that way.
He explained at a global summit in Paris that the “AI future is not going to be won by hand wringing about safety,” but rather “will be won by building,” and that overregulation would deter innovators from taking the risks necessary to realize AI’s potential.
Vance criticized foreign governments for “tightening the screws” on US tech firms and pressed European countries to view AI with optimism rather than trepidation. At the conclusion of the event, the US and the UK notably declined to sign on to the global pledge emphasizing commitments to AI safety, security, sustainability, and inclusivity endorsed by dozens of other countries.
Vance’s comments solidified the policy outlined in President Donald Trump’s Jan. 23 executive order, which replaced President Joe Biden’s prior executive order on AI.
The Trump order explicitly frames AI development as a matter of national competitiveness and economic strength, prioritizing policies that remove perceived regulatory obstacles to innovation. It criticizes the influence of “engineered social agendas” in AI systems and seeks to ensure that AI technologies remain free from perceived ideological bias.
Biden’s order, by contrast, focused on responsible AI development, placing significant emphasis on addressing risks such as bias, disinformation, and national security vulnerabilities.
Trump’s directive specifically mandates an immediate review and possible rescission of all policies, directives, and regulations established under Biden that could be seen as impediments to AI innovation. This shift will likely result in removing, or at least substantially overhauling, Biden’s structured oversight framework.
While the previous order also promoted innovation and competitiveness, those initiatives were paired with risk mitigation, enhanced cybersecurity protocols and monitoring requirements for AI used in critical infrastructure, and direction to federal agencies to collaborate in the development of best practices for AI safety and reliability.
The Trump administration’s treatment of equity and civil rights, workforce development, and national security also shows an ideological shift. Biden’s order included specific measures on each of these areas and built them into its framework for AI development. By omitting such measures, the Trump order aims to reduce AI governance regulation and federal oversight, on the theory that maintaining US AI leadership will itself advance national interests in areas such as security and workforce development.
Global AI Governance
The Trump administration’s deregulatory approach comes as other jurisdictions are advancing stricter regulatory frameworks for AI. The EU’s 2024 AI Act imposes comprehensive rules on the development and use of AI technologies, with a strong emphasis on safety, transparency, accountability, and ethics.
Globally, Japan, the UK, South Korea, and Australia are also advancing AI laws, many of which align more closely with the EU’s focus on accountability and ethical considerations than with the US’s pro-innovation stance.
The Trump administration’s emphasis on reducing regulatory burdens stands in stark contrast to the EU’s approach, which reflects a precautionary principle that prioritizes societal safeguards over rapid innovation. This divergence could create friction between the US and EU regulatory environments, particularly for global companies that must navigate both systems.
While the EU may soften slightly—EU Commission President Ursula von der Leyen acknowledged at the Paris AI summit the need to make innovation easier and “cut the red tape”—alignment between the US and EU approaches is unlikely.
State AI Frameworks
The administration’s approach also widens the gap between federal and state AI regulatory regimes by presaging federal deregulation of AI. While Trump’s order signals a shift toward prioritizing innovation, the new administration’s approach to regulatory enforcement—including on issues such as data privacy, competition, and consumer protection—will become clearer as newly appointed federal agency leaders begin implementing their agendas.
At the same time, states such as California, Colorado, and Utah have already enacted AI laws of varying scope and degrees of oversight. As with state consumer privacy laws, increased state-level activity on AI would likely deepen regulatory fragmentation, with states implementing their own rules to address concerns related to high-risk AI applications, transparency, and sector-specific oversight.
If Congress enacts an AI law that prioritizes innovation over risk mitigation, stricter state regulations could face federal preemption, under which Congress or the courts would hold that the federal law displaces conflicting state requirements. Organizations must closely monitor international, national, and state developments to navigate this increasingly fragmented AI regulatory landscape.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.
Author Information
Martin J. Mackowski is a partner in Squire Patton Boggs’ antitrust and competition practice.
Pablo E. Carrillo is of counsel with Squire Patton Boggs in its defense public policy and its international trade and national security practice groups.
Julia B. Jacobson is a partner in Squire Patton Boggs’ data privacy, cybersecurity, and digital assets practice.