President Donald Trump moved from rhetoric to action on Thursday, signing an executive order that directs the Department of Justice to challenge state artificial intelligence laws. By establishing an “AI Litigation Task Force,” the administration aims to dismantle what it calls a “patchwork of 50 State Regulatory Regimes” in favor of a single federal standard.
This move gives legal teeth to a now-familiar argument against state regulation of AI: that a mosaic of state rules will “overregulate” and stifle innovation. Since AI is a national economic and defense asset, the argument goes, it demands a single federal standard.
This federalist critique, while constitutionally elegant, is practically dangerous. It ignores a fundamental reality of the modern digital economy: In the absence of federal guardrails, the alternative to state regulation isn’t “freedom to innovate,” but fiduciary uncertainty. For corporate counsel, the much-maligned “patchwork” of state laws isn’t a threat to be fought, but a source of authoritative rules in a liability vacuum.
Critics of state regulation often channel James Madison, warning that 50 different AI standards will recreate the “rival, conflicting and angry regulations” that plagued the Articles of Confederation, the agreement that governed the US prior to the Constitution. They argue that just as Connecticut couldn’t enforce land grants in Pennsylvania in 1782, California shouldn’t dictate AI safety standards for the rest of the nation today.
But this reliance on 18th-century agrarian federalism ignores contemporary legal reality. The US Supreme Court laid the “physical presence” argument to rest in South Dakota v. Wayfair, recognizing that in a digital economy, a remote entity can project itself into a state without ever crossing a border. When an AI model makes a biased credit decision against a resident of Denver, the harm is local, even if the server is in Virginia. To argue that states can’t regulate these “remote” harms is to argue that states have lost their police power to protect their citizens simply because the harm was delivered via the cloud.
The most common objection to state AI laws is that they create an impossible compliance environment (the so-called “California Effect”) that forces national companies to bow to the strictest state regulator. Critics argue that AI models can’t be retrained for every jurisdiction.
This argument fundamentally misreads the risk landscape. The true nightmare for a general counsel isn’t complying with a strict state statute; it’s navigating the uncertainty of a regulatory vacuum. In the absence of statutes such as Colorado’s SB24-205, AI liability will default to a variety of state common laws of negligence, product liability, and fraud.
Common law is the ultimate “patchwork.” It’s unpredictable, decided by juries rather than legislatures, and inherently retroactive. In Wayfair, the Supreme Court signaled that state authority over remote sellers is most legitimate when it’s prospective, giving businesses fair notice of the rules. State statutes such as SB24-205 meet this standard; they tell you the speed limit before you drive. By contrast, a common law tort verdict punishes you for speeding after the fact, based on a speed limit that didn’t exist when you built the model.
Furthermore, the claim that complying with differing state standards is an “undue burden” echoes the failed arguments of the e-commerce giants in Wayfair, who claimed that calculating tax rates for thousands of jurisdictions was impossible. The court rejected this, noting that in the modern economy, compliance software bridges the gap. The same is true for AI; “compliance-as-code” tools are already emerging to manage diverging state standards, converting an “impossible” burden into a manageable cost of doing business.
Finally, the argument that state regulations “stifle” innovation ignores the market value of trust. We have seen this movie before with data privacy. The “patchwork” of state breach notification laws didn’t kill the internet; it forced companies to harden their security, which in turn made e-commerce viable for wary consumers.
The same logic applies to AI. If the public believes that AI is an unregulated “black box” capable of discrimination or damaging hallucinations without recourse, adoption will stall. As Bruce Schneier and Nathan Sanders argue in their recent book “Rewiring Democracy,” we can’t passively wait for AI to shape society; we must actively “wire” democratic values into these systems to ensure they serve the public interest. State experiments, the “laboratories of democracy,” are currently the only mechanism doing this work. They build the trust and social license required for AI to be accepted in the daily lives of the people who will use it, be affected by it, and be injured when it fails. If Colorado or California can create a workable liability framework, they provide a template for Congress. If they fail, the damage is contained, exactly as the Founders intended.
The DOJ is unlikely to sue states into submission, nor will Congress pass comprehensive preemption legislation anytime soon. The legal reality is that the “California Effect” is here to stay.
For corporate counsel, the path forward is clear: Stop hoping for federal preemption to wipe the slate clean. Instead, look to the states that are filling the void. In a world of infinite AI risks, a state regulation is the only map we have.
An immaterial amount of this content was drafted by generative artificial intelligence.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law, Bloomberg Tax, and Bloomberg Government, or its owners.
Author Information
Kevin P. Lee is the Intel Social Justice and Racial Equity professor of law at North Carolina Central University School of Law and the founding director of the Institute for AI and Democratic Governance.