AI Startups Push to Limit or Kill California Public Safety Bill

June 7, 2024, 9:00 AM UTC

Silicon Valley AI entrepreneurs are ramping up their efforts to stop a California measure that would place first-in-the-nation public safety guardrails around advanced artificial intelligence, saying it could stifle innovation within the sector.

The San Francisco state lawmaker sponsoring the bill (SB 1047) that aims to mitigate catastrophic risks posed by future AI models—such as weapons development or damage to critical infrastructure—is trying to quell those concerns as supporters and critics jostle for leverage this summer. The bill awaits action from the Assembly, which has until Aug. 31 to pass it, after clearing the California Senate in May.

State Sen. Scott Wiener (D), who is eyeing Congresswoman Nancy Pelosi’s seat when she eventually retires, will have to navigate his bill through the increasingly influential AI sector he represents. Last year, Bay Area companies received more than half of all global venture funding for AI-related startups, raising more than $27 billion.

The push comes on the heels of Colorado enacting the nation's first comprehensive AI law last month. Colorado's law deals mostly with AI-driven discrimination, an area California addresses in a comparable bill (AB 2930); Wiener's bill focuses on mitigating public safety risks.

Tech groups representing the industry’s largest players have opposed Wiener’s proposal since it was unveiled in February. But over the last month, some in Silicon Valley’s startup space have become increasingly vocal against the bill, bringing renewed attention and added scrutiny.

Critics include partners at venture capital giant Andreessen Horowitz, a prominent backer of numerous tech entrepreneurs. Developers and engineers who align with “Effective Accelerationism”—a growing movement within the AI community that believes the technology isn’t something to be feared or regulated—have tried to organize community members against the bill.

Wiener, who held a May 30 town hall and hosted a separate roundtable with startup founders, is responding to the pressure. He’s promised further changes to his bill and insists it won’t hurt the smaller tech players, a key argument given Gov. Gavin Newsom (D) recently signaled he’d veto any proposal that overregulates AI.

“I am incredibly proud to represent the beating heart of AI innovation. I want that work and that innovation to continue,” Wiener said at the town hall. “And I’m glad that people are focusing on the bill now.”

Too Much Liability?

Not all tech startups have voiced opposition, but Chris Lengerich, founder of the startup Context Fund, has encouraged those in the local tech space to be aware of what he views as a problematic bill. He and others engaged with Wiener's office early on, he said, but were not satisfied with the changes Wiener has made.

“We didn’t feel they were taking the magnitude of the concerns of the community to heart. That was when we decided, OK, we need to be much more public about this,” Lengerich said.

Some startups say the bill's liability provisions would put a chokehold on their work. Wiener's measure would require AI developers to certify, under penalty of perjury, whether their model could reasonably have hazardous capabilities before any of its training occurs. That's impossible to account for, Lengerich said.

“There’s a lot of things we assess perjury for. This makes a lot of sense if you’re assessing retrospective statements,” he said. “But can you prove that my interpretation of forward-looking risk of a general-purpose tech will be the same as that of an unknown judge and jury in 10 years, especially when there is no case law?”

Much criticism has also been leveled at the Frontier Model Division, an AI regulator the bill would create within the state's technology department. Critics dislike the idea of a single entity that could unilaterally set standards for what counts as safe or what violates the law, saying strict standards from the division could hamper innovative work.

Wiener disputed the idea that the bill creates broad liability. Its scope is narrow, he said, covering only the biggest tech companies with the resources to build the most powerful models, not small startups.

For those who are covered, the hazardous capabilities that companies must assess would be limited to only the most catastrophic, such as creating biological weapons, Ari Kagan, an AI consultant working with Wiener, said at the town hall. Enforcement would also be limited, with only the state attorney general permitted to pursue penalties.

“This is extremely light-touch enforcement,” Kagan said. “The attorney general under this law is not going to be able to investigate every infringement of these duties. The attorney general is going to really only be looking into extraordinarily important cases, because they have very limited capacity.”

Open-Source AI Flexibility

The fear of stifling liability underpins one of the loudest arguments circulating in the AI community: that the bill would effectively kill off open-source AI.

While many companies like OpenAI, Google, and Microsoft Corp. have kept their AI software and code private, academics and many tech workers praise those who have made their AI code open source: freely available for the public to use and tinker with, subject to few restrictions. Most notably, Meta Platforms Inc. open-sourced its Llama model, which has been downloaded more than 1 million times.

Many small startups build on open-sourced models to create new apps and products because they lack the financial resources to develop their own systems. Supporters of open-source software say it enhances not only innovation but also safety: more eyes on an AI model means more people catching bugs and other vulnerabilities.

But the Wiener measure would penalize companies for opening up their models, the AI Alliance—a coalition that includes Meta, IBM, and numerous startups—said in an open letter. Companies have little to no control over what happens after someone else downloads their model, yet the bill would impose some responsibility for whatever gets created afterward, the letter added.

Some academics are also worried about limits on open-source software, which they say drives much AI research and collaboration. Fei-Fei Li, co-director of Stanford University's Human-Centered AI Institute, described Wiener's proposal as "troubling," saying it could concentrate the technology in the hands of a few big companies. The institute receives funding from tech companies including Google and IBM.

“It might protect the Big Tech, coming from public sector, coming from little tech,” she said. “We are also some of the biggest providers of open-source software.”

The ultimate fear is that companies like Meta would close off their models to decrease such liability, leaving small startups and others that can’t afford to build AI models from scratch with nothing, opponents contend.

Wiener has promised changes to address the open-source concerns. New language would clarify that open-source developers are not liable for models that undergo substantial fine-tuning by others, and the bill's requirement to build in a full shutdown capability would not apply to open-sourced models.

“There’s more work to be done around open source and we’re committed to that work,” he said.

To contact the reporter on this story: Titus Wu in Sacramento, Calif. at twu@bloombergindustry.com

To contact the editors responsible for this story: Bill Swindell at bswindell@bloombergindustry.com; Gregory Henderson at ghenderson@bloombergindustry.com
