New York’s RAISE Act Is the Blueprint for AI Regulation to Come

Feb. 19, 2026, 9:30 AM UTC

New York became one of the most consequential players in artificial intelligence regulation when Gov. Kathy Hochul (D) signed the Responsible AI Safety and Education (RAISE) Act in December 2025.

The RAISE Act joined California’s SB-53 in establishing a disclosure-driven framework for governing the most powerful AI models in the market. These pieces of legislation don’t feature radically new ideas. Instead, they signal how targeted, state-level AI governance is likely to scale, converge, and settle over the next few years.

The New York law arrives at a moment when state AI policy remains fragmented and federal-level policy is politically contested. President Donald Trump’s AI executive order directs the US Department of Justice to identify and challenge state AI laws that may unconstitutionally regulate interstate commerce or violate First Amendment rights. Disclosure requirements for AI companies are explicitly named as a category subject to scrutiny.

The executive order can’t prevent states from passing laws, but the threat of litigation and funding consequences is meant to chill state-level action. That New York proceeded anyway suggests confidence that disclosure-based regulation can withstand legal challenges, or that the political calculus favored moving forward despite the risk. And the risk may have been worth it: Utah is now considering a bill inspired by the New York and California laws.

The differences between New York and California’s legislation are modest. California allows a 15-day reporting window for critical safety incidents, compared to New York’s 72 hours. SB-53 caps civil penalties at $1 million, while the RAISE Act allows fines up to $1 million for a first violation and $3 million for subsequent violations. California also includes explicit whistleblower protections, which New York’s law doesn’t address.

Yet the similarities between the RAISE Act and SB-53 aren’t accidental. As more states look to regulate AI, many are likely to adopt these frameworks wholesale rather than invent new ones. Instead of dozens of conflicting state regimes due to a lack of federal rules, AI developers may soon face a de facto standard shaped by a small set of early movers. The RAISE Act reinforces that trajectory, especially given New York’s economic influence and regulatory reach.

Narrow scope, sharp teeth. At first glance, the RAISE Act looks narrowly targeted.

The law applies only to “large frontier developers,” defined as companies with more than $500 million in annual gross revenue that train or initiate the training of frontier models exceeding “10²⁶ integer or floating-point operations.” In practice, this applies to a handful of major companies such as OpenAI, Anthropic, and Meta Platforms Inc.

The law is further limited to frontier models that are developed, deployed, or operated in whole or in part in New York. Combined with its revenue and compute thresholds, this ensures the RAISE Act focuses squarely on the most capable models, rather than sweeping up smaller developers or enterprise AI users.
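
For teams mapping their exposure, here is a minimal Python sketch, offered for illustration only, of how the scope criteria described above might be encoded in an internal screening check. The function and field names are hypothetical, and the check captures only the revenue, compute, and New York nexus thresholds, not the statute's full definitions.

```python
from dataclasses import dataclass

# Scope thresholds summarized above (illustrative constants, not statutory text).
REVENUE_THRESHOLD_USD = 500_000_000   # "large frontier developer" annual gross revenue floor
COMPUTE_THRESHOLD_OPS = 10**26        # training compute exceeding 10^26 integer or floating-point ops

@dataclass
class ModelProfile:
    developer_annual_revenue_usd: float
    training_compute_ops: float
    new_york_nexus: bool  # developed, deployed, or operated in whole or in part in New York

def likely_in_scope(profile: ModelProfile) -> bool:
    """Rough screening check, not legal advice: flags models that appear to meet
    the revenue, compute, and New York nexus criteria for RAISE Act coverage."""
    return (
        profile.developer_annual_revenue_usd > REVENUE_THRESHOLD_USD
        and profile.training_compute_ops > COMPUTE_THRESHOLD_OPS
        and profile.new_york_nexus
    )
```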

But narrow doesn’t mean light touch.

The law’s obligations are anchored to the concept of “catastrophic risk,” defined as foreseeable and material risks that could result in death or serious injury to 50 or more people, or cause more than $1 billion in damage from a single incident. These risks are tied to specific scenarios, such as providing expert-level assistance in the creation or release of chemical, biological, radiological, or nuclear weapons.

While there are limited exceptions—such as publicly available outputs or lawful federal activity—the law is clearly aimed at worst-case outcomes, not routine misuse or isolated errors.
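
As a rough illustration of how a risk register might encode those severity thresholds, the hypothetical helper below checks only the numeric triggers in the definition; it does not capture the foreseeability, materiality, or enumerated-scenario elements of the statute.

```python
def meets_severity_thresholds(projected_deaths_or_serious_injuries: int,
                              projected_damage_usd: float) -> bool:
    """Checks only the numeric triggers in the 'catastrophic risk' definition:
    death or serious injury to 50 or more people, or more than $1 billion in
    damage from a single incident."""
    return (
        projected_deaths_or_serious_injuries >= 50
        or projected_damage_usd > 1_000_000_000
    )
```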

Transparency as an enforcement mechanism. Rather than prescribing technical safeguards, the RAISE Act relies heavily on transparency as its primary regulatory lever.

Covered developers must create, implement, and publicly disclose a “frontier AI framework” explaining how they assess and mitigate catastrophic risks, define risk thresholds, and update safety practices over time. These frameworks aren’t static. Developers must update them annually and whenever a frontier model is materially modified.

When a model change triggers an update, the developer must publish the revised framework and justify the changes within 30 days. Before deploying a new or substantially modified model, developers must also publish a transparency report detailing the model’s release date, intended uses, restrictions, and how users can contact the company.
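
A sketch of how those required transparency-report fields might be captured internally is below; the structure and field names are illustrative assumptions, not statutory language.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class TransparencyReport:
    """Illustrative container for the pre-deployment disclosures described above:
    release date, intended uses, restrictions, and how users can contact the company."""
    model_name: str
    release_date: date
    intended_uses: list[str]
    restrictions: list[str]
    user_contact_channel: str
```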

The implication is that regulators are less interested in dictating how safety is achieved than in ensuring companies can publicly demonstrate that they have rigorously considered risk and governance across the model lifecycle.

Incident reporting under tight timelines. Where the RAISE Act becomes more prescriptive is in incident reporting.

Frontier model developers must notify state regulators of “critical safety incidents,” including unauthorized access to model weights that causes harm, loss of model control resulting in death or bodily injury, or harm stemming from a defined catastrophic risk. The law also covers deceptive model behavior, such as a model subverting controls or monitoring implemented by the developer.

The timelines are aggressive. Most critical incidents must be reported within 72 hours of determining that an incident occurred. If an incident poses an imminent risk of death or serious injury, the reporting window shrinks to 24 hours.
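
To make those windows concrete, here is a small, hypothetical Python helper that computes the latest notification time from the moment an incident is determined to have occurred. It is a sketch of the two statutory timelines, not a compliance tool.

```python
from datetime import datetime, timedelta, timezone

def reporting_deadline(determined_at: datetime, imminent_risk: bool) -> datetime:
    """Latest notification time under the RAISE Act's timelines: 72 hours from
    determining an incident occurred, compressed to 24 hours when the incident
    poses an imminent risk of death or serious injury."""
    return determined_at + timedelta(hours=24 if imminent_risk else 72)

# Example: an incident confirmed at 09:00 UTC on March 2 with no imminent-risk
# finding must be reported by 09:00 UTC on March 5.
print(reporting_deadline(datetime(2026, 3, 2, 9, 0, tzinfo=timezone.utc),
                         imminent_risk=False))
```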

These requirements raise the operational bar for internal detection, escalation, and decision-making processes, especially for companies operating globally and at scale.

What this means for AI governance teams. As developers operationalize their frontier AI frameworks, contractual terms are likely to evolve, particularly around usage restrictions, incident notification requirements, and audit rights.

While model developers are best positioned to define systemic risk, downstream actors are often better placed to see how those risks materialize in real-world use. As a result, organizations deploying frontier models will increasingly be expected to identify, document, and escalate risk signals quickly when something goes wrong.

The broader lesson of the RAISE Act isn’t just about New York, but also about trajectory.

Companies that invest early in durable governance processes will be better positioned as more states follow the same playbook and the regulatory floor continues to rise.

This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law, Bloomberg Tax, and Bloomberg Government, or its owners.

Author Information

Gerald Kierce is CEO and co-founder at Trustible.

Andrew Gamino-Cheong is CTO and co-founder at Trustible.

John Heflin Hopkins-Gillispie is director of policy and product counsel at Trustible.

To contact the editors responsible for this story: Jada Chin at jchin@bloombergindustry.com; Jessica Estepa at jestepa@bloombergindustry.com
