AI Whistleblowers Can’t Carry the Burden of Regulating Industry

June 3, 2025, 8:30 AM UTC

The law often struggles to keep up with major leaps in technology that challenge our perception of what’s possible in ways that are difficult to anticipate.

The transcontinental railroad was finished nearly two decades before Congress passed the Interstate Commerce Act. Telephone companies were worth a fortune years before the Federal Communications Commission existed. And the computer was deeply integrated into government and corporate systems well before the Computer Fraud and Abuse Act was enacted in 1986.

With the rapid growth of artificial intelligence, machine learning, and large language models, we once again see technology outstripping the law. Instead of trying to keep up, lawmakers are attempting an innovation of their own—empowering whistleblowers to come forward with safety concerns rather than regulating the technology itself.

US Sen. Chuck Grassley (R-Iowa), a longtime whistleblower advocate and founder of the Senate Whistleblower Protection Caucus, introduced the AI Whistleblower Protection Act on May 15 with strong bipartisan support. House members announced a companion bill at the same time.

It isn’t surprising that Grassley focused on the whistleblower protection component of AI regulation. But the question remains whether making whistleblowers our sole regulatory response to technological change is appropriate.

The whistleblower-protection model emerged during the pandemic with Covid-19 safety bills. Colorado created specific protections for workers who raised legitimate workplace safety concerns during a public health emergency. The idea was that lawmaking couldn’t keep up with a fast-moving emergency, so the best way to keep everyone safe was to protect individuals who spoke up against unsafe conditions.

Other states adapted this same approach for AI safety concerns. This past March, Colorado’s House Judiciary Committee voted to advance HB25-1212, prohibiting employers from retaliating against whistleblowers who raise reasonable public safety or security concerns either to their management or to the government—a bill I testified in favor of at the time.

That bill ultimately did not advance, but it is part of a much larger trend. Similar bills have been introduced in the California State Senate, the New York State Senate, and the Illinois General Assembly.

The federal government is now stepping up to create a national version of this same approach. Grassley’s proposal goes further than the state-level versions, protecting any whistleblower who reports “any failure to appropriately respond to a substantial and specific danger that the development, deployment, or use of artificial intelligence may pose to public safety, public health, or national security.”

Whistleblowers are undoubtedly an integral part of our safety system. It’s because of whistleblowers that we know about nuclear safety failures, unethical medical studies, wartime abuses, and, of course, Watergate.

US law has recognized the need to protect whistleblowers as far back as the Continental Congress, which passed the first American whistleblower protection law in 1778 after Continental Navy officers came forward to report egregious abuses by a fellow officer.

Yet our collective recognition of the importance of whistleblowers hasn’t stopped unscrupulous employers from retaliating against individuals who speak up. Protection efforts remain critical because whistleblowers need support to come forward safely. We are all better off for the information they bring to light and for their bravery in speaking out.

But the rush toward a regulatory model that relies entirely on whistleblower protection, while giving up on regulating the technology itself, puts enormous pressure on individual whistleblowers.

Being a whistleblower is hard enough. If we move toward a protection-as-regulation model, whistleblowers will be put under even more pressure as they become the only obstacle to AI’s boundless growth.

More than $100 billion has been invested in building AI products as fast as possible, with minimal regulation. Given the strong incentives AI developers have to “move fast and break things,” it is unfair at best, and dangerous at worst, to expect individuals to bear the entire weight of checking that growth when it puts collective safety at risk.

The AI industry’s influence can be seen in competing proposals that would block regulation altogether.

While Grassley and his bipartisan coalition support AI whistleblowers, other congressional leaders included a proposed 10-year ban on any AI regulations at the state or local level in the 2025 budget reconciliation bill. This raises the stakes for efforts toward meaningful guardrails as we continue to build these products.

It isn’t as if Congress doesn’t know the stakes. The risks of unregulated technological growth remain fresh in everyone’s minds after the explosion of social media, whose far-reaching consequences we are still absorbing. It took whistleblowers, including my client Frances Haugen, coming forward to curb some of the alleged abuses in that industry.

It’s clear we need a different path for AI. But we have yet to seriously examine whether relying on individuals to come forward about risks lessens the pressure to regulate the technology thoughtfully.

A truly effective regulatory system would have both whistleblower protections and clear regulation. Whether there is political will to achieve that end is uncertain.

However, as we saw with the growth of the railroad, the telephone, and the computer, eventually we’re going to need proper laws to guide technological advancement. There is no reason to think this technology will have a different outcome.

This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.

Author Information

Poppy Alexander is a partner at Whistleblower Partners in San Francisco.

To contact the editors responsible for this story: Max Thornberry at jthornberry@bloombergindustry.com; Rebecca Baker at rbaker@bloombergindustry.com
