What Businesses Need to Know About California’s AI Safety Law

December 5, 2025, 9:30 AM UTC

Frontier artificial intelligence models, trained on massive datasets with immense computing power, can perform tasks they weren’t explicitly trained to do. These models are being used across industries, from service delivery and customer interaction in financial services to drug development in health care.

While these models offer significant benefits, they may also pose serious risks. To address those risks, California became the first state to enact a law regulating the development of frontier AI models.

California’s Transparency in Frontier Artificial Intelligence Act regulates models trained on computing power exceeding 10²⁶ integer or floating-point operations. Noncompliance could result in civil penalties of up to $1 million per violation.
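For a sense of scale, a common back-of-the-envelope estimate puts the training compute of a dense transformer model at roughly 6 × parameters × training tokens. The sketch below is a rough illustration only, using hypothetical model figures; the statute counts actual integer or floating-point operations, not this approximation:

    # Rough first-pass check against the TFAIA compute threshold.
    # Assumes the common ~6 * parameters * tokens estimate of training
    # compute for dense transformers; this is an approximation, not the
    # statute's measurement method.
    TFAIA_THRESHOLD = 10**26  # integer or floating-point operations

    def estimated_training_ops(parameters: float, tokens: float) -> float:
        """Approximate total training operations for a dense transformer."""
        return 6 * parameters * tokens

    # Hypothetical model: 1 trillion parameters, 30 trillion training tokens.
    ops = estimated_training_ops(1e12, 30e12)
    print(f"Estimated training compute: {ops:.2e} operations")
    print("Above threshold" if ops > TFAIA_THRESHOLD else "Below threshold")

Under these hypothetical figures, the estimate comes out to about 1.8 × 10²⁶ operations, above the statutory line.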

The law requires developers to post information on their websites about each model’s intended use and any restrictions or conditions. Similar to the EU AI Act, it also requires developers to report, within 24 hours of discovery, critical safety incidents that pose an imminent risk of death or serious physical injury.

Unlike the EU AI Act, which imposes certain risk governance obligations on developers of AI models generally, the California law reserves its central obligation for “large frontier developers”: developers that, together with their affiliates, had annual gross revenues exceeding $500 million in the prior calendar year. These developers must publish a “frontier AI framework,” a safety and security protocol for mitigating catastrophic risks.

“Catastrophic risk” refers to the potential for a frontier AI model to cause mass harm—defined as death or serious injury to more than 50 people or property damage exceeding $1 billion—by enabling weapons creation, committing serious crimes, or escaping developer control. Large frontier developers must also submit the results of catastrophic-risk assessments to the California Governor’s Office of Emergency Services every three months or at another frequency the developer specifies.

California Gov. Gavin Newsom (D) signed the TFAIA on Sept. 29, 2025, about a year after vetoing the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), which would have similarly required frontier AI developers to implement a safety and security protocol and report incidents. Newsom agreed with the goals of SB 1047 but opposed “a solution that is not informed by an empirical trajectory analysis of AI systems and capabilities.”

Following last year’s veto, Newsom created the Joint California Policy Working Group on AI Frontier Models to craft a regulatory framework for frontier AI models. The TFAIA implements recommendations from the working group’s California Report on Frontier AI Policy, which aimed to improve transparency in the use, development, and safety of frontier AI models.

The TFAIA’s focus on frontier AI models contrasts with other states’ efforts to regulate AI. Colorado’s AI Act targets high-risk AI systems, such as those used to provide housing, employment, or financial services, and seeks to prevent algorithmic discrimination. Other states have taken narrower approaches: Texas has banned AI systems that cause discrimination, incite crime, or produce certain explicit content, while Montana requires companies using AI in critical infrastructure to implement risk-management policies but not to make those policies public.

Looking Ahead

More states may follow California’s lead, including Massachusetts, which has introduced a bill that would impose similar transparency and reporting obligations on frontier AI developers. The New York Legislature passed the Responsible AI Safety and Education Act, which, if enacted, would impose comparable transparency and reporting requirements on developers that have spent more than $100 million on compute, with civil penalties of up to $10 million for a first violation and up to $30 million for subsequent violations.

In the absence of federal legislation, the US will likely see a patchwork of state laws governing frontier AI. As with California’s leadership on comprehensive state privacy laws, the TFAIA could shape future AI legislation nationwide.

To prepare, frontier AI developers should:

  • Collaborate with policymakers to define an appropriate scope of regulation. The Chamber of Progress has warned that using compute thresholds to flag risky models could both mislabel harmless systems as dangerous and overlook smaller models that pose real-world risks.
  • Establish clear risk-assessment protocols to comply with the TFAIA and similar laws that may follow. Doing so not only supports compliance but also helps shape evolving industry risk standards.
  • Implement strong cybersecurity, governance, and oversight systems. The TFAIA, Massachusetts’s bill, and New York’s RAISE Act all require developers to adopt cybersecurity measures and incident-reporting protocols to address potential harms from developing and deploying frontier AI.
  • Carefully review required public disclosures and, as expressly permitted under the TFAIA, redact sensitive information that, if made public, could expose trade secrets or raise cybersecurity, public safety, or national security concerns.
  • Monitor state-level developments, since California’s new law is likely the first in a broader trend of state efforts to regulate frontier AI.

California’s law sets a precedent that other jurisdictions are likely to follow. As more states introduce similar legislation, engaging proactively with these requirements can help minimize compliance risks and avoid inefficiencies that arise from having to change established practices later.

This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law, Bloomberg Tax, and Bloomberg Government, or its owners.

Author Information

Jennifer Everett is a partner in Alston & Bird’s privacy, cyber, and data strategy group.

Dorian Simmons is a senior associate in Alston & Bird’s privacy, cyber, and data strategy group.


To contact the editors responsible for this story: Jada Chin at jchin@bloombergindustry.com; Jessica Estepa at jestepa@bloombergindustry.com
