AI Bills Stall in GOP States Despite Lingering Safety Concerns

March 11, 2026, 9:00 AM UTC

Republican-led states are failing to advance artificial intelligence legislation as the White House presses for a national standard, even as some GOP lawmakers warn that guardrails are needed to address the risks posed by the emerging technology.

Florida shows how that tension is unfolding in statehouses. Lawmakers there spent months weighing proposals, touted by Gov. Ron DeSantis (R), that would restrict artificial intelligence systems from using customers’ names, images, or likenesses without their consent.

The bills have struggled to gain traction in the state House under Speaker Danny Perez (R), who has said he would rather let federal regulators take the lead. Florida’s legislative session is scheduled to end Friday.

The standoff reflects a broader tension between state lawmakers eager to establish boundaries for artificial intelligence and a White House urging states to hold off while Washington attempts to write federal rules. Utah lawmakers debated several bills this year aimed at regulating the technology, but the legislative session ended March 6 without passing any of them.

“Legislators are pushing these regulations because they believe in them and oftentimes a governor or—as in Florida—a speaker of the House, the people who occupy a choke point in politics, get cowed and intimidated,” said Brad Carson, president of Americans for Responsible Innovation, a nonprofit focused on AI policy.

White House officials say they support safeguards such as protections for children and against deepfakes but argue the rules should ultimately come from Washington. President Donald Trump signed legislation in 2025 aimed at curbing the spread of nonconsensual AI deepfake imagery online.

“Republicans are still going to push AI regulations aggressively, but the question is, will people who occupy those choke points get intimidated?” Carson added. He served as a Democratic US representative from Oklahoma from 2001 to 2005.

Utah state Rep. Doug Fiefia (R) said he isn’t discouraged that his bill, which would have required AI companies to publish safety and child-protection plans, failed to pass this session, given that “these issues aren’t going anywhere.” He expects the debate to continue next year, perhaps with a stronger bill.

“Silencing states is not the answer,” Fiefia said in an interview. “I understand the desire for a federal standard—and I support it—but my job is to protect my constituents.”

The Trump administration is preparing a list of state AI laws it considers “onerous,” expected on Wednesday, with blue states such as New York and California likely to be targeted. The move could lay the groundwork for a federal crackdown on state regulation—an approach favored by much of the tech industry—and reshape who ultimately writes the rules governing the technology.

State Divide

Democratic-led states have generally moved ahead with broader proposals regulating high-risk AI systems and automated decision-making. Republican lawmakers, by contrast, have largely focused on narrower measures targeting specific concerns such as deepfakes, misuse of biometric data, or child safety, as with Fiefia’s bill.

Florida lawmakers were considering similarly targeted proposals. One measure (H.B. 1395) sponsored by state Rep. Alex Rizo (R) would require parental consent for minors to use certain AI chatbots and prohibit companies from selling or disclosing personal data that could be traced back to individual users, highlighting privacy concerns with AI. That bill has been stuck in committee for almost two months without movement, according to the state’s legislative website.

Not every GOP-led state has run into roadblocks. Indiana Gov. Mike Braun (R) signed a bill with language prohibiting health providers from relying solely on automated systems to process claims without review by the provider or another person.

More notably, Texas enacted its own artificial intelligence law last year (H.B. 149) that requires state agencies to tell residents when they’re interacting with AI systems on government websites, bans collecting biometric identifiers without consent, and restricts the use of generative AI to create explicit sexual material involving children.

The bill’s sponsor, state Rep. Giovanni Capriglione (R), said in an interview he was not particularly concerned about the White House targeting the law as part of its review, arguing it strikes a balance between encouraging innovation and setting basic safeguards.

“The legislation we passed is well within the intent of the White House’s framework,” he said.

Capriglione acknowledged differences between how red and blue states approach AI policy, but said the divide often comes down to how broadly lawmakers try to regulate the technology based on possible fears, such as public-safety risks.

“The disagreement, if anything, is, are you going to do this in a way of what AI could theoretically do, or are you going to do it with things that we absolutely do not want artificial intelligence to do?” Capriglione said. “We found that the latter is the most effective way to do this.”

Washington Pressure

In December, Trump issued an executive order directing federal agencies to push back on what the administration called “excessive State regulation” of artificial intelligence, warning that a patchwork of state rules could “stymie innovation.”

The forthcoming list of onerous state laws could intensify pressure on state lawmakers, Carson said, and give federal officials leverage to challenge or discourage certain state regulations.

Carson said pressure from voters could make it difficult for lawmakers to abandon artificial intelligence proposals altogether.

“The problem the White House has is that voters are angry about AI and want something done about it,” he said. “The White House has influence, but in the end, the voters will prevail and the voters are increasingly agitated.”

For many state lawmakers and AI policy advocates, the urgency to act remains. Carson said voters want safeguards addressing risks ranging from child safety to the potential misuse of AI for criminal or terrorist activity.

Carson said what the White House calls onerous is “what 80% of the country calls reasonable and probably not far enough.”

To contact the reporter on this story: Alexandra Samuels in Austin at asamuels@bloombergindustry.com

To contact the editors responsible for this story: Bill Swindell at bswindell@bloombergindustry.com; George Cahlink at gcahlink@bloombergindustry.com
