Lawmakers are trying to mandate that artificial intelligence companies share more information about how their products are built, a thorny but crucial issue as Congress crafts a national standard to govern the rapidly evolving technology.
New bipartisan legislation calls on companies creating the largest AI models to disclose more about how their products are built and tested.
Industry players have asked the federal government to lead on so-called transparency requirements so they have one guideline to comply with rather than several rules in states like California, New York, and Colorado. The AI Foundation Model Transparency Act (H.R. 8094), unveiled March 26, follows the White House’s release of a national AI framework to override state laws, and comes as lawmakers are considering a potential package this year.
“We’re trying not to be Europe with the heavy-touch EU AI Act, but we’re also trying not to be the Wild West,” said Rep. Don Beyer (D-Va.).
Finding a path forward on AI transparency will be tricky. Tech companies have preferred a softer approach to let the industry flourish, while consumer advocates have championed stricter measures to hold AI developers accountable for risks. Both sides want Congress to create a level of regulatory certainty that promotes the technology’s safe development and deployment. Threading that needle will be critical to getting such legislation signed into law.
“There is a strong, strong demand right now for transparency, and generally it does start with the foundation models, and that’s why this bill is getting a lot right,” said Akanksha Ray, director of global policy at Credo AI, a startup that helps businesses adopt AI responsibly.

The proposal, she added, would be a “positive addition to a national framework.”
Public Skepticism of AI
Americans are raising concerns about AI quickly seeping into everyday life and its impacts on workers, businesses, and consumers across industries from health care to finance. Recent surveys found a majority of voters believe AI’s risks outweigh its benefits. A Quinnipiac poll found that 76% of Americans think they can trust AI sometimes or hardly ever.
The aim of the legislation is to empower the public with more knowledge about how the products are made to better identify risks and build trust in the technology.
“We don’t want them giving away their secret sauce, their algorithm, their math. That’s not fair,” said Beyer, who’s consulted with industry on the effort. “But we can have some insight into how they’re being trained and how they’re being tested.”
The measure would direct the Federal Trade Commission—working with the Commerce Department, National Institute of Standards and Technology, and Office of Science and Technology Policy—to set disclosure requirements for the largest AI models.
Transparency helps “create a dynamic where companies are incentivized to race to the top on building models that are safe, that respect user privacy, and that adhere to generally good safeguard practices,” said Chris MacKenzie, vice president of communications at Americans for Responsible Innovation, an AI policy advocacy group that supports the bill. The entertainment industry union SAG-AFTRA and nonprofit organization Mental Health America also endorsed the proposal.
AI Framework Challenges
Major AI developers like OpenAI and Google have moved to adopt their own safety frameworks and publicly document some information about their models to address concerns. The bill would help clarify, offer guidance, and standardize that process at the federal level, said Miranda Bogen, director of the Center for Democracy and Technology’s AI Governance Lab.
Transparency should be a “foundational piece of any effort” on a national AI framework and shouldn’t be framed as an “additional burden that’s being imposed,” Bogen said. “We actually need this if people want this technology to go anywhere.”
Member companies of leading industry group TechNet are committed to responsible AI development and support federal transparency regulations that foster public trust and accelerate innovation, said Liz O’Bagy, the group’s director of federal policy and AI policy lead.
Still, the transparency bill sparks some doubts. Companies may welcome a federal standard but might be wary of the FTC’s rulemaking approach, especially as the agency faces criticisms of politicization, according to Neil Chilson, a former chief technologist at the agency who now heads AI policy at the tech-friendly Abundance Institute.
Some policy analysts fear the measure could give rise to more lawsuits against AI companies, potentially stifling the industry.
“This bill to me seems much more focused on, let’s create further avenues for litigation against the technology itself” and “less so about transparency,” said Joshua Levine, director of technology and statecraft at the Foundation for American Innovation, a center-right think tank, adding that he thinks the proposal runs counter to the White House’s goals.
The Trump administration’s AI framework advises Congress to avoid legislation that could hinder the technology and to boost protections for minors, ratepayers, and creators. The White House is proud of the National AI Legislative Framework and happy to engage with legislation that is consistent with the framework, a White House official said in a statement.
The bill is already gaining steam, with Rep. Brian Fitzpatrick (R-Pa.) signing on as a cosponsor.
“If the United States is going to lead in AI, it must also lead in setting standards that earn the public’s trust,” said Fitzpatrick’s communications director Casey-Lee Waldron. “This legislation is a serious bipartisan step toward rules that keep pace with the technology.”