Artificial intelligence has become nearly ubiquitous. Sixty-two percent of American adults report using AI weekly, and 78% of organizations say they have adopted AI tools as part of their operations.
Reports of AI dangers, however, continue to proliferate, including concerns of ignored safety risks, intellectual property theft, and discrimination.
In an effort to address such risks, a number of states have started enacting legislation regulating AI. Those efforts were thrown into question when President Donald Trump, following lobbying by AI companies such as OpenAI and Microsoft, signed an executive order in December that aims to block states from regulating AI.
With states such as California set to challenge the legality of the order, we’re in for a pitched battle over the fate of these nascent state-level AI regulations. As a result, AI companies may continue to operate unchecked for the time being.
In the absence of meaningful oversight, we will have to continue to rely on whistleblowers to expose serious concerns about safety, privacy, ethics, or legal risks. For example, former OpenAI employee Suchir Balaji blew the whistle in 2024 on the company’s use of copyrighted material to build ChatGPT. Other former OpenAI employees have spoken publicly about the company’s reckless disregard for safety. There is also convincing evidence that AI magnifies stereotypes and bias.
These whistleblowers come forward at great personal and professional risk, as few sector-specific legal safeguards exist to prevent retaliation.
While efforts have increased at the state and federal levels to codify protections for AI whistleblowers, there’s a long road ahead to protect those who raise concerns about this technology.
Current Protections
Leading on watchdog protections is California, the home of many of the largest AI companies. Gov. Gavin Newsom (D) signed the Transparency in Frontier Artificial Intelligence Act in September 2025. It went into effect at the beginning of 2026.
This law moves the ball forward by expressly protecting certain AI whistleblowers and by requiring covered companies to establish anonymous whistleblowing channels. It also prohibits those companies from imposing restrictive confidentiality agreements that would bar employees from speaking publicly, an important provision given the AI industry’s history of using such agreements to prevent employees from speaking out.
It comes up short, however, by failing to protect the majority of workers at AI companies. The law applies only to “frontier” AI companies, defined as the biggest companies with the highest level of computing power.
The law also protects only those who report “catastrophic risk,” defined as a foreseeable and material risk of death or serious injury to more than 50 people, or more than $1 billion in damage, arising from a single incident.
The 2024 whistleblower disclosures have shown that the risks to society from AI—from algorithmic bias in hiring, to skyrocketing energy costs from data centers, to devastating harm to children’s mental health and educational outcomes—are far broader than critical safety incidents. With such a prohibitively high standard, many whistleblowers with important information will remain unprotected.
The law also falls short by failing to offer true protection for whistleblowers. Unlike other California whistleblower protection laws, it fails to explicitly authorize whistleblowers to recover compensation for economic harm or emotional distress.
Ultimately, this law’s problems mean people will continue to stay silent. Few workers can afford to risk losing their livelihoods to come forward, even if it is for the public good. If we want whistleblowers to speak up, we must provide them with a path to recover the losses they incur for doing so.
Unfortunately, California’s imperfect law represents the high-water mark of protection among the states. A patchwork of laws in other states provides weaker protections for AI whistleblowers, often ill-suited to the unique risks of this sector.
General anti-retaliation laws in various states may protect whistleblowers, provided the whistleblower reports conduct that violates other state laws, such as those prohibiting fraud, deceptive practices, or theft, or other criminal activity. Many of these state anti-retaliation laws also provide for the recovery of economic damages and attorneys’ fees and costs, with some allowing recovery for emotional distress and punitive damages.
Employees of publicly traded companies with AI products also may be protected under the federal Sarbanes-Oxley Act if their whistleblower disclosure implicates securities fraud, shareholder fraud, or violations of any US Securities and Exchange Commission rule or regulation.
Proposed Federal Protections
The inadequacy of these laws requires deliberate federal intervention to protect whistleblowers in this sector. In May 2025, Sen. Chuck Grassley (R-Iowa) introduced the Artificial Intelligence Whistleblower Protection Act. The bill would prohibit employers from retaliating against AI insiders who report specific categories of misconduct, including violations of federal rules or regulations.
That includes failure to respond to a “substantial and specific danger that…artificial intelligence may pose to public safety, public health, or national security;” or a failure or lapse in security that could allow AI technology to be acquired by theft or other means.
The proposed law would cover both employees and independent contractors, as well as both internal and external reports.
Unlike the California law, the proposed federal measure would adopt a framework similar to other well-regarded whistleblower retaliation laws. A whistleblower who brings a successful retaliation claim would be entitled to reinstatement, back pay, damages, and attorneys’ fees. Importantly, the bill would override nondisclosure, nondisparagement, and arbitration agreements.
Despite what AI companies themselves describe as the “existential” risk of their technology, most AI whistleblowers remain vulnerable to career retaliation and financial ruin should they sound the alarm. Unless lawmakers, in Congress or in our state houses, move quickly to enact comprehensive, enforceable protections, the public may only learn about future AI failures after the damage is done.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law, Bloomberg Tax, and Bloomberg Government, or its owners.
Author Information
Alexis Ronickher is partner at Katz Banks Kumin and co-author of the firm’s cybersecurity and data privacy whistleblower protections guide.
Isabel Rothberg is a litigation fellow at Katz Banks Kumin and a whistleblower attorney.
