- Potential measures may ban AI usage in political communications
- First Amendment, technical concerns will complicate efforts
California lawmakers are exploring legislation to further crack down on deepfakes, including broader bans on the use of artificial intelligence in political campaigns and in pornography.
California already leads the nation on this issue, having been one of the first states to pass anti-deepfake legislation in 2019, before the current frenzy over AI. Since then, fewer than a dozen states have enacted guardrails on the technology, most recently Michigan last month. Deepfakes are images or videos of a person's likeness that have been digitally manipulated, typically to misrepresent the person.
The Golden State's framework consists mostly of two measures that Assemblymember Marc Berman (D) pushed: one dealing with pornography (A.B. 602) and the other with elections (A.B. 730). Both give a deepfake victim the right to sue the person or organization distributing the material. Lawmakers said they would like to expand the scope of the prohibitions in both areas given how AI technology has advanced.
“We have the internet that is so quick and easy to spread false disinformation and misinformation. And now AI is putting this at a whole new level,” said Assemblymember Gail Pellerin (D), who chairs the state Assembly Elections Committee.
Status Quo
Advocates have lauded the measures already enacted as important protections, but Berman said he's not aware of his laws ever being enforced. That doesn't mean they're not working as a deterrent, he added.
However, some critics say California's deepfake laws still have glaring holes. Brandie Nonnecke, an associate professor of tech policy research at the University of California, Berkeley, said that letting someone sue a bad actor offers recourse only after the fact; it does nothing to preempt the harm in the first place.
“By the time the government or any entity gets wise to the fact that there’s a harmful deepfake out there, it’s too late,” she said. “Once people have already seen it, the cat’s out of the bag. People have seen this deepfake; they’ve made their judgment.”
State law places the onus on users to flag deceptive content, which Nonnecke called a “misplaced responsibility” that should be on social media platforms. Other details of the California deepfake election law make it hard to enforce, such as having to prove someone had “actual malice” and an intent to harm.
Potential Barriers
It remains unclear whether the actions of California and other states will conflict with federal law. The most notable barrier is Section 230 of the Communications Decency Act, which frees social media platforms from liability for posted content.
California's current deepfake law exempts platforms from any obligation to monitor for deepfakes. It's unclear whether a new measure would impose requirements on Big Tech, but such responsibilities would likely draw opposition from tech groups.
Any forthcoming legislation will also have to take into account the First Amendment right of free expression, the reason current law carves out exemptions for satire or parody.
“We think that the courts need to revisit their traditional evaluation of the First Amendment, and take into account the degree of risk and danger to our fundamental democracy that we’re seeing now,” said Drew Liebert, director of the newly formed California Institute for Technology and Democracy. “The time has come with this revolutionary development of artificial intelligence, that we can’t keep just looking back at the way things were.”
Proposed Solutions
One way to prevent harm from deepfakes is to prevent them altogether. Pellerin said she plans to file legislation that would ban AI in political communications and activities, not just on social media but also in mailers, robocalls, and other channels.
Current law provides exceptions for content that has been clearly identified as altered. However, that carve-out has to be reevaluated in light of the dangers advanced AI now poses, such as a realistic video showing an elections official stating false information, Liebert said. His organization is working with Pellerin on the bill and is pushing for as broad a ban on deepfakes as possible.
Nonnecke argued the technical capabilities already exist for social media platforms to identify whether content is synthetically generated before it's posted. A platform, upon detecting signals that something was altered by AI, could automatically allow the content to post with a disclosure attached.
The enforcement mechanism in any forthcoming deepfake bills will have to be worked out, too. It could follow current law—using the threat of lawsuits—or pursue criminal penalties, like other states have done. A California bill this past year (A.B. 1721) that would have criminalized distribution of sexual deepfake content did not make much progress. The lawmaker behind that bill, Assemblymember Tri Ta (R), didn’t respond to an inquiry.
Berman said he's also looking into potential legislation to strengthen the deepfake laws he championed in 2019. His deepfake election law as written may cover only existing videos and images altered by AI, not wholly AI-generated content, he said. He's also aware of how prevalent pornographic deepfakes still are.
“Arguably, both laws need strengthening,” Berman said.
Ultimately, California's laws were never truly tested even against the technology that existed when they were enacted in 2019.
“2024 is likely to be the first national artificial intelligence election in the history of the United States,” said Liebert. “We anticipate this upcoming 2024 election to be the first real test of how any such laws involving deep fakes will work and how effective they’ll be.”