Tracking Corporate AI Threats Via Federal Database Gains Support

December 23, 2025, 10:00 AM UTC

The federal government already has mechanisms to track aviation incidents through the FAA or medical-device failures through the FDA.

Now an AI governance nonprofit is calling for a national artificial intelligence incident database to report and track cases when the technology goes awry.

The group, the Future Society, says the database would let companies report incidents and alert the government about emerging threats. Others in the field, such as the Center for Data Innovation, Federation of American Scientists, and Stanford University’s Human-Centered Artificial Intelligence institute, also have supported the creation of such a database.

“There is no centralized database that aggregates different incidents, and companies could learn from other incidents, because they could investigate the root cause if they see multiple incidents,” said Caroline Jeanmarie, director of US AI governance at the Future Society. That would reduce the “trust deficit” in AI while helping companies that develop or use artificial intelligence products improve their reliability, she said.

The suggestions come as AI incidents appear to be surging, with an almost 50% jump between April and October, according to a database maintained by the OECD, the intergovernmental economic policy organization.

Prominent AI incidents have grabbed headlines in the US as well, such as private conversations individuals had with large language models that turned up in search results. In another miscue, an AI agent purchased eggs for a user when the person had only asked it to check prices.

“Rather than pushing incidents under the carpet, this incentivizes sharing and coordination between companies and the federal government,” Jeanmarie said.

She suggested mandatory reporting of high-consequence AI events, including deaths, critical infrastructure disruption, or threats related to chemical, biological, radiological, and nuclear materials. Voluntary reporting could cover events such as near-misses and operational failures, she said.

Supporting the Industry

Typically, the government learns about AI failures when the media writes about them, Jeanmarie said.

“But aviation doesn’t work that way. Medicine doesn’t work that way,” she said. “And I think that is why it’s in the executive order, we have sort of this opportunity for having this systematic federal mechanism.”

The Trump administration’s action plan directs federal agencies like the National Institute of Standards and Technology to create standards and response frameworks. The White House and the Department of Commerce did not respond to requests for comment.

The White House’s AI Action Plan says that “the U.S. government should promote the development and incorporation of AI Incident Response actions into existing incident response doctrine and best-practices for both the public and private sectors.”

Adam Thierer, a senior fellow at the R Street Institute’s Technology and Innovation Team, said if the requirements for a database are too sweeping, they would be cumbersome for developers and deployers and lead to unnecessary information collection. “It’s always with the best of intentions that these sorts of transparency or disclosure mandates are implemented, but sometimes they can backfire, just ultimately not produce information that’s all that useful,” he said.

“What’s probably needed to get it right, to strike the right balance, is a more focused approach on critical incidents involving very specific capabilities produced through algorithmic systems,” Thierer said.

He said such reporting could expose companies to litigation, and if it did, the proposed policy would face stiff odds of being implemented.

Jeanmarie, of the Future Society, said that a properly designed incident reporting system reduces litigation risk.

“The logic is straightforward. In litigation, companies must demonstrate they acted reasonably,” she said. “A company that participated in industry-wide safety monitoring, learned from reported incidents, and implemented improvements has documented evidence of due diligence.”

Ben Colman, co-founder and CEO of Reality Defender, a deepfake detection company, said one of the challenges he has seen in cybersecurity is that when organizations aren’t required to announce a breach, consumers don’t find out until they get a notification that their information is on the dark web.

“Ultimately, cybersecurity issues tend to become public,” Colman said. “But the challenge is, when they hide things, that means only the bad actors, the hackers, the state level groups, they’re the only ones that know.”

A requirement to disclose incidents supports organizations, their counterparts, and customers, Colman said, and ultimately benefits the whole AI industry.

“In the complete supply chain of software and AI development, knowing that the end user—corporates—will be disclosing when there is an issue, it incentivizes everyone upstream to do a better job at solving issues before they happen,” Colman said.

Without good safety statistics collected by governments, the harm produced by the least responsible company will be blamed on even the most responsible company in the sector, said Sean McGregor, a machine-learning safety researcher who launched the AI Incident Database, an open-source project.

“Customers trust aviation far more because the government collects accidents and knows on a per mile basis how safe it is to fly,” McGregor said in an email.

To contact the reporter on this story: Kaustuv Basu in Washington at kbasu@bloombergindustry.com

To contact the editors responsible for this story: Jeff Harrington at jharrington@bloombergindustry.com; John P. Martin at jmartin1@bloombergindustry.com
