California Lawmakers Push for Watermarks on AI-Made Photo, Video

Jan. 26, 2024, 10:02 AM UTC

California lawmakers are drawing up multiple plans to require watermarks on content created by artificial intelligence, aiming to curb abuses of the emerging technology that have touched everything from political races to the stock market.

At least five lawmakers have promised or are considering proposals that would require AI companies to implement some form of verification showing that a video, photo, or written work was made by the technology. The activity comes as advanced AI has rapidly evolved to create realistic images and audio at an unprecedented level.

Advocates worry the technology is ripe for abuse and could lead to a wider proliferation of deepfakes, in which a person’s likeness is digitally manipulated, typically to misrepresent them; the technique has already been used in the presidential race. But such measures are likely to face scrutiny from the tech sector.

Amid a pivotal election year and an online world awash in disinformation, the ability to know what’s real is crucial, said Drew Liebert, director of the California Initiative for Technology and Democracy. The harm from AI is already happening, Liebert said, pointing to an AI-generated photo that went viral in May of last year falsely portraying a terrorist attack in the US.

“The famous photograph now that was put on the internet that alleged that the Pentagon was attacked, that actually caused momentarily a [$500 billion] loss in the stock market,” he said. The loss would not have been as severe, he said, “if people would have been able to instantly determine that it was not a real image at all.”

Plethora of Ideas

Legislators have not finalized language for any of the bills, though some lawmakers and interest groups following the topic have shared certain details. Lawmakers acknowledged the competing approaches would likely have to be reconciled later in the legislative process.

Assemblymember Akilah Weber (D) was one of the first to move on the issue, saying in a December press release that she wants to require an industry standard for AI-generated content. That could refer to the C2PA standard, adopted by the likes of Microsoft Corp. and Adobe Inc., which binds provenance information, a record of how digital content was made and altered, to a piece of media throughout its editing journey. Her office said it could not confirm details of the bill, which is still in the works.
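
For illustration, here is a minimal Python sketch of the provenance idea behind such standards: a manifest records how content was made, and a cryptographic hash binds that record to the exact bytes of the file, so any later alteration is detectable. This is a simplified sketch of the concept, not the actual C2PA format, which embeds signed manifests in the media file itself.

    # Simplified illustration of provenance binding, loosely modeled on the
    # C2PA idea. Not the real C2PA data format.
    import hashlib
    import json

    def make_manifest(media: bytes, generator: str, actions: list) -> dict:
        """Build a provenance record bound to the media by its hash."""
        return {
            "claim_generator": generator,  # tool that produced the content
            "actions": actions,            # e.g. ["created", "resized"]
            "content_hash": hashlib.sha256(media).hexdigest(),
        }

    def verify_manifest(media: bytes, manifest: dict) -> bool:
        """Check the media has not changed since the manifest was made."""
        return manifest["content_hash"] == hashlib.sha256(media).hexdigest()

    image = b"...image bytes..."
    record = make_manifest(image, "ExampleImageGenerator/1.0", ["created"])
    print(json.dumps(record, indent=2))
    print(verify_manifest(image, record))   # True; any edit breaks the bind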

Meanwhile, state Sen. Josh Becker (D) has been crafting a bill that would require major AI providers, such as those behind ChatGPT or the image generator DALL-E, to watermark images, video, and audio created by their models. His proposal goes a step beyond watermarks by requiring companies to provide a platform where anyone could verify whether a piece of content was created by their AI models.

The goal of the public verification platform is to account for the fact that watermarks can be forged or removed, said Tom Kemp, a privacy advocate working with Becker on the measure. The platform could also verify whether written text is AI-generated, since text is hard to watermark. Ultimately, it would help prove that human-made work is indeed human-made.
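
How such a verification service might work is easiest to see in a sketch. The Python below assumes, hypothetically, that a provider logs a hash of every output its models produce; a public endpoint can then answer whether an exact piece of content came from those models, even after any embedded watermark has been stripped. All names here are illustrative, not from any proposed bill.

    # Hypothetical sketch of a provider-side verification registry. Because
    # the check is a lookup against the provider's own records, it works
    # even if a watermark has been stripped from the file itself.
    import hashlib

    class VerificationRegistry:
        def __init__(self):
            self._generated = set()  # hashes of everything the models output

        def record_output(self, content: bytes) -> None:
            """Provider-side: log each piece of content a model produces."""
            self._generated.add(hashlib.sha256(content).hexdigest())

        def was_ai_generated(self, content: bytes) -> bool:
            """Public check: was this exact content made by the models?"""
            return hashlib.sha256(content).hexdigest() in self._generated

    registry = VerificationRegistry()
    registry.record_output(b"essay produced by a model")
    print(registry.was_ai_generated(b"essay produced by a model"))   # True
    print(registry.was_ai_generated(b"essay written by a student"))  # False

Exact-hash matching like this breaks as soon as content is edited, so a real service would likely need perceptual or fuzzy matching; the sketch only shows why a provider-side lookup sidesteps watermark removal.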

“Students are going to be wrongly accused of handing in homework assignments that they did,” said Kemp, who added that such a scenario happened to a family friend. “There are companies that say, ‘Oh, we can detect if contents were generated by AI,’ but they have no certainty of that. To me, well, why don’t you just ask ChatGPT directly?”

On another front, Liebert said his organization has been working with Assemblymember Buffy Wicks (D) on a proposal that would also require companies to embed provenance data in AI-generated content. The measure would be partly modeled on the European Union’s upcoming AI Act, which as drafted will require labeling deepfakes and providing disclosures to consumers. The EU act also mandates designing systems so that AI-generated media can be easily detected in a “machine-readable format.”
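
As a rough illustration of what a machine-readable disclosure could look like in practice, the sketch below tags a PNG with a metadata field using the Pillow imaging library. The "ai_generated" field name is invented for illustration; it is not a standardized label.

    # Sketch: embedding a machine-readable AI label in PNG metadata with
    # the Pillow library. The "ai_generated" key is hypothetical.
    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    img = Image.new("RGB", (64, 64))          # stand-in for a model's output
    meta = PngInfo()
    meta.add_text("ai_generated", "true")     # machine-readable disclosure
    meta.add_text("generator", "ExampleModel/1.0")
    img.save("labeled.png", pnginfo=meta)

    # A detector can read the label back without inspecting the pixels:
    loaded = Image.open("labeled.png")
    print(loaded.text.get("ai_generated"))    # "true"

Plain metadata like this is trivially stripped, which is one reason provenance schemes such as C2PA cryptographically sign the record rather than relying on a bare tag.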

In addition, Assemblymember Evan Low (D) has promised legislation on AI watermarks, and Assemblymember Avelino Valencia (D) filed preliminary legislation (A.B. 1824) that would require disclosures for AI-generated content. Their offices did not provide further details on the measures.

The legislative focus would target the large players in the AI space, such as San Francisco-based OpenAI, maker of ChatGPT. The preliminary measures, however, don’t address the pressing problem of rogue individuals who can use open-source or freely available code to create deepfakes and other content without any safeguards.

“I’m not saying this is going to be a cure-all,” Becker said, “but I think we’re all working on this. At the very basic level, we should be able to say was something created by AI.”

Tech Pushback?

Tech lobbyists representing the big AI players, however, may push back. One major hurdle will be enforcement, where details are still pending; it will be a challenge, especially given the constraints of federal law, Liebert said.

“The European Union, incidentally, does not have the First Amendment. They don’t have to worry about the constraints of free speech in that regard,” he said. “We have the First Amendment and we also have the limitations on the enforcement side that they do not have.”

Advocates for AI guardrails are hopeful the tech industry will be amenable to compromise, noting that many of the larger companies have already adopted watermarking standards and made voluntary commitments on them to the White House.

Tech groups said they agree the verification issue is important, but they cautioned against over-regulation.

“Increasing awareness and trust of AI-generated content is crucial, but overly restrictive government mandates can create their own unintended consequences and undermine the goal of transparency,” said Dylan Hoffman, executive director for California at TechNet, which recently added OpenAI and Scale AI to its membership.

Todd O’Boyle, senior tech policy director for Chamber of Progress, a tech industry coalition, noted that as AI becomes more prevalent in everyday life, policymakers should be judicious in writing their rules. Watermarking everything touched by AI would be meaningless, he said.

“Tackling issues like election misinformation is important, but to avoid censoring unrelated speech, legislation would have to be very narrowly focused and place the burden of compliance on content creators themselves,” said O’Boyle.

To contact the reporter on this story: Titus Wu in Sacramento, Calif. at twu@bloombergindustry.com

To contact the editors responsible for this story: Bill Swindell at bswindell@bloombergindustry.com; Gregory Henderson at ghenderson@bloombergindustry.com
