Trump Plan Thwarting State AI Laws to Spur Preemption Fights

July 28, 2025, 9:05 AM UTC

The White House’s latest moves to restrict state-level AI regulation are ripe for legal challenges that hinge on how federal agencies assert preemption and which funds the administration seeks to withhold from states.

The AI Action Plan released July 23 calls for reviewing the “AI regulatory climate” in states to determine whether it undermines the goals of related federal funding. It tasks the Federal Communications Commission with evaluating whether state AI regulations interfere with the agency’s duties and authorities under the Communications Act of 1934.

President Donald Trump touted the action plan—which also includes federal policy shifts—as promoting innovation and US dominance over China in the rapidly developing artificial intelligence industry. But the funding threats and potential FCC preemption echo a state-law moratorium that Congress rejected in a federal budget bill just weeks ago after hearing bipartisan opposition from governors, state legislators, and attorneys general.

“States want to be able to regulate AI,” said Ellen P. Goodman, professor at Rutgers Law School. “Even if they’re totally AI accelerationist,” she said, state lawmakers have varied interests they want to protect, from children’s safety to performing artists’ livelihoods.

Among the chief congressional opponents of the moratorium was Sen. Marsha Blackburn (R-Tenn.). Tennessee lawmakers enacted the ELVIS Act in 2024 to protect musicians against having their voices copied by AI-powered deepfakes.

States have passed dozens of AI-specific laws over the last three years, ranging from limits on election-related and pornographic deepfakes to bans on AI-generated bias in hiring and bank lending algorithms. The tech industry has actively lobbied to block or narrow legislation.

Trump’s plan promises not to interfere with states’ rights to pass AI laws unless they’re “unduly restrictive to innovation,” but provides little elaboration.

“It’s all very vague and leaves a lot open to interpretation,” said Eric Null, co-director of the privacy and data project at the Center for Democracy and Technology.

Colorado’s sweeping law targeting algorithmic bias and others that mirror it would likely not pass muster under a federal review, said Null. The law regulates AI-powered decisionmaking across employment, lending, education, health care, housing, and insurance, and includes audit and consumer notice requirements.

Colorado Gov. Jared Polis (D) has urged lawmakers to revise or delay the law before it takes effect in February 2026.

It’s hard to know yet whether other AI regulations with narrower scope, such as the ELVIS Act, might also draw federal scrutiny.

Unknown Funding At Risk

Legal pushback from states against Trump’s plan would depend on which “AI-related discretionary funding programs” are being withheld following an Office of Management and Budget-led evaluation.

“If in fact it’s only that they’re going to make this a condition of genuinely AI-related discretionary funding, this is pretty narrow,” said Mackenzie Arnold, director of US policy at the Institute for Law & AI. “On the other hand, there’s a chance that they’re going to interpret AI-related discretionary funding to include a bunch of different things, like broadband funding,” which would be “a much larger cudgel.”

If federal agencies take too expansive an approach—regarding which funding sources they restrict or which state regulations they deem disqualifying—that would give states a stronger case to challenge the funding restrictions in court, said Cody Venzke, senior policy counsel at the American Civil Liberties Union.

Changing the conditions on existing funding agreements with the states “could run into serious constitutional issues,” he said.

“The federal government especially through Congress has the power of the purse. They can spend money how they choose,” including by putting conditions on spending that act as a sort of contract, Venzke said. But “for that contract to really be fair, you have to know what the terms of the deal are.”

“I’m hopeful that OMB has built up expertise recognizing the potential harms that AI can impose on people,” Venzke said, as the office demonstrated in an April memo instructing federal agencies to use AI tools with certain safeguards.

Trump’s plan includes goals that could yield funding opportunities for states, such as encouraging infrastructure development like data centers and power generation sources that AI needs to operate, Goodman said.

AI-related conditions on such funding opportunities likely would overcome a state’s legal challenge, she said.

The OMB and FCC didn’t respond to requests for comment about the AI action plan.

FCC Preemption

Preempting states’ AI laws under the Communications Act might be a taller order for Trump than restricting funding, as there’s no legal precedent suggesting the FCC has direct regulatory power over AI.

“Courts have been hesitant to preempt state authority unless there is a specific declaration of intent and genuine, not merely hypothetical, conflict between federal policy and state law,” said Scott Kohler, nonresident scholar at the Carnegie Endowment for International Peace.

The FCC has specific authorities under the Communications Act to regulate areas like landline telephones and over-the-air radio and television broadcasts, Venzke said.

“The FCC doesn’t have authority over things like social media, or over TV programming, so it’s really hard to see what authority the FCC has over AI,” he said.

The commission could explore preemption of states requiring labeling of election ads with AI-generated content, since the FCC regulates broadcast and TV ads, Goodman said. The agency also could oppose state laws restricting the siting of AI-related telecommunications infrastructure.

“In the end there probably isn’t much there, but I would expect them to at least start stirring the pot and asking about it,” she said.

The preemption effort and funding restrictions also will be complicated by defining which policy areas are sufficiently AI-related, Kohler said.

State regulatory efforts thus far have ranged widely from elections to digital privacy and consumer protections.

“Those are longstanding areas of state competence and jurisdiction,” Kohler said, “that may be affected by AI but are not specific to it or limited to it.”

— With assistance from Elias Schisgall.

To contact the reporter on this story: Chris Marr in Atlanta at cmarr@bloombergindustry.com

To contact the editors responsible for this story: Rebekah Mintzer at rmintzer@bloombergindustry.com; Jay-Anne B. Casuga at jcasuga@bloomberglaw.com
