- Current whistleblower laws apply only to certain kinds of violations
- Congress could consider AI legislation as soon as the lame-duck session
Workers at artificial intelligence companies want Congress to grant them specific whistleblower protection, arguing that advancements in the technology pose threats that they can’t legally expose under current law.
“What people should be thinking about is the 100 ways in which these companies can lose control of these technologies,” said Lawrence Lessig, a Harvard law professor.
Current dangers range from deepfake videos to algorithms that discriminate, and the technology is quickly becoming more sophisticated. Lessig called the argument that big tech companies and AI startups can police themselves naïve.
“If there’s a risk, which there is, they’re not going to take care of it,” he said. “We need regulation.”
Congress may have a window to address the issue in the coming months. Leaders could push an AI package in the post-election lame-duck session that would boost research and development and promote guidelines for the technology’s use.
“As Congress considers AI legislation this year, we must ensure that consumers, regulators, and workers have the tools and safeguards to identify and address these harms.”
Existing federal whistleblower protections come from specific statutes including those related to tax evasion, money laundering, foreign bribery, and Medicare and Medicaid fraud.
“We don’t have a general law that regulates tech at all, and to the extent that there are whistleblower protections that exist, they exist in the context of these other regulations,” said Hannah Bloch-Wehba, a law professor at Texas A&M University.
Congress, Companies ‘Lock Step’
Congress needs to create AI whistleblower rights because courts will be reluctant to act when lawmakers do not, said attorney Stephen Kohn, who in 2012 helped a client win a $104 million whistleblower award in a tax-evasion case against UBS AG.
“The vast majority of issues that would be coming up for developing technology will not be covered,” Kohn, a national whistleblowing expert, said.
“Congress and the private sector ought to work in lock step to strengthen whistleblower protections, to make sure employees can provide protected disclosures without the fear of retaliation or illegal restrictions,” said Grassley, who jointly leads the Senate whistleblower caucus. He cited a growing appetite to address shortfalls in the law.
Grassley has immersed himself in the goings-on at OpenAI, examining what the company’s non-disclosure agreements mean for employees who want to speak out about safety issues.
“I am concerned these agreements may be stifling your employees from making protected disclosures to government regulators,” he said in an August letter to Sam Altman, the OpenAI CEO.
Liz Bourgeois, an OpenAI spokesperson, said in a statement: “We believe rigorous debate about this technology is essential. OpenAI’s whistleblower policy protects employees’ rights to raise issues, including to any national, federal, state, or local government agency. As we’ve previously shared, we voided non-disparagement provisions for all current and former employees back in May and we have since updated our documents accordingly.”
The company’s publicly released “Raising Concerns Policy” covers AI safety and privacy. However, the company draws a line between employees raising concerns and revealing trade secrets, which its confidentiality agreements prohibit.
“That clarity also helps regulators make good use of the essential information that brave whistleblowers often provide,” Wyden said.
An AI safety bill in California that would have protected employees’ right to speak out was vetoed in September by the state’s governor.
‘You and You Alone’
Technology workers who have claimed federal whistleblower protections have cited laws that wouldn’t cover some issues raised by AI’s rapid development.
For instance, Frances Haugen, a former Facebook employee who raised the alarm about the company’s practices around hate speech, misinformation, and the effects of its products on children, used whistleblower protections available through the Securities and Exchange Commission. She claimed Facebook misled investors.
At a Senate subcommittee hearing in September, former employees who worked on AI issues pressed lawmakers to act.
“Essentially, if you are considering whistleblowing, it’s you and you alone against, you know, a company making a ton of money with lawyers who are set up to harm you if you make the smallest move incorrectly, whatever it is,” said Margaret Mitchell, a former staff research scientist at Google.
Helen Toner, who quit the board of OpenAI in November 2023 after CEO Sam Altman was fired and then reinstated, said that for many workers it is unclear whether existing whistleblower protections apply at all.
“If you are a whistleblower—potential whistleblower sitting inside one of these companies, you don’t really want to go out on a limb and take a guess at, ‘Well, is this enough of a financial issue that the SEC would be—you know, would cover me?’”