Holtzman Vogel’s Oliver Roberts says the federal government should consider preempting state laws on the training, deployment, and testing of frontier AI models.
OpenAI Inc.’s policy proposals for President Donald Trump’s artificial intelligence action plan introduce an issue poised to trigger the most pivotal AI policy debate of the year. The company urged the federal government to protect AI developers by preempting state AI laws that risk “bogging down innovation and, in the case of AI, undermining America’s leadership position.”
More than 781 AI bills are currently pending in state legislatures, and AI governance has become a booming $227 million industry. Federal preemption could eliminate those bills instantly.
Federal preemption of state AI laws is fundamentally a debate about national security, economic dominance, and federalism. If the US stifles its AI industry with a fragmented patchwork of restrictive state AI laws, it risks forfeiting industry leadership to China, which is rapidly closing the gap with breakthroughs such as DeepSeek and Manus.
Although bold, OpenAI’s proposal for federal preemption isn’t surprising in substance or timing. It followed a January executive order calling for an AI action plan focused on maintaining US leadership in AI development and eliminating burdens on the private sector.
The White House Office of Science and Technology Policy then issued a request seeking input on policies to incorporate into the plan. OpenAI submitted several recommendations, most notably federal preemption of state AI laws.
In exchange for voluntary data sharing with the federal government, OpenAI requested the private sector receive “relief from the 781 and counting proposed AI-related bills already introduced this year in US states.”
Federal preemption has been on Congress’ radar since last year. In December, the bipartisan House Task Force on Artificial Intelligence issued a comprehensive report on policies and findings related to AI. The report observed that “[p]reemption of state AI laws by federal legislation is a tool that Congress could use.”
Concerns over a fragmented state AI regulatory framework are well-founded from a practical standpoint. No comprehensive federal regulations currently govern AI development or use, leaving a regulatory void individual states have begun filling with varying AI policies.
In May 2024, Colorado became the first state to pass a comprehensive AI bill imposing regulations on developers and deployers of “high risk AI” systems. The Virginia Legislature passed a similar AI bill last month, while a Texas state representative recently introduced a comprehensive AI bill that would be among the most restrictive in the country. Meanwhile, California enacted 18 new AI laws that took effect this year, focusing—like many other states—on domain-specific AI regulation.
The patchwork of state AI regulation is already unfolding. Across several hundred separate state bills, AI companies (and perhaps even companies simply using AI) could be subject to 50 different sets of AI safety standards, reporting requirements, and reporting agencies. This would drive up compliance costs and hinder investment and growth.
The state AI regulations established in California—home to leading AI companies such as OpenAI, Google LLC, Meta Platforms Inc., and Anthropic PBC—could effectively dictate AI policy for the rest of the country. And the state has already demonstrated a propensity for advancing ill-advised AI regulation, as seen in SB 1047, which passed both legislative chambers but was vetoed by Gov. Gavin Newsom (D).
The bill would have imposed liability on developers of large AI models based on ambiguous and underdeveloped testing standards, potentially stifling innovation. Further demonstrating the disconnect between state and federal priorities, eight sitting Democratic members of Congress wrote a letter to Newsom urging him to veto the bill.
States are ill-equipped to regulate a rapidly evolving and complex technology, particularly when it intersects with national security and foreign affairs.
Given AI’s widespread adoption and integration into society—along with its critical role in the economy and national security—it has become analogous to essential infrastructure such as the power grid or the internet. The federal government can’t allow it to falter under a fragmented patchwork of state AI regulations.
But the decision to preempt state AI laws is just the first step—the greater challenge lies in drafting the enabling legislation.
The power of federal preemption comes from Article VI, Clause 2 of the Constitution, also known as the supremacy clause. To preempt state AI laws, Congress must pass legislation providing for federal preemption, which may prove more challenging than it seems.
First, the term “AI” isn’t universally or consistently defined. A definition that is too broad could unintentionally preempt regulations on traditional technologies, while one that is too narrow could fail to preempt regulations on AI technologies that were meant to be covered.
Second, the scope of federal preemption will be as contentious as the fundamental question of whether preemption should be allowed. For example, debates will arise over whether all state AI laws should be preempted or only those affecting specific aspects, such as model development, the application layer, or end-user interactions.
One possible approach is a more targeted form of preemption, focusing specifically on state regulations that govern the training, deployment, and testing of frontier AI models. Under this framework, the federal government could establish standards and regulations exclusively for frontier models (or simply bar states from doing so, even without adopting federal standards of its own), while states would retain authority over application-layer uses of AI and user interactions, such as restricting the use of AI tools in job interviews. But in practice, even these categories may be difficult to define neatly.
AI development demands both rapid progress and long-term investment, and uncertainty at the state level risks hindering US advancement.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.
Author Information
Oliver Roberts is an adjunct professor of law at Washington University in St. Louis School of Law, co-head of Holtzman Vogel’s AI practice group, and founder and CEO of Wickard.ai.