AI Tools Likely to Muddy Reach of Video Privacy Protection Law

Oct. 27, 2025, 9:00 AM UTC

The existence of AI tools that can help translate and interpret code is complicating courts’ application of a decades-old privacy law that’s already the subject of a circuit split.

The Video Privacy Protection Act—adopted in 1988 after Robert Bork’s video rental history was disclosed following his nomination to the US Supreme Court—imposes liability on “video tape service providers” that disclose a person’s rental history.

Appeals courts have differed on where to draw the line on what disclosures are covered by the VPPA, especially as courts have applied the law to streaming services like Netflix and websites that offer video content.

The advent of AI threatens to further scramble those differing approaches.

Recent rulings suggest “artificial intelligence could sort of raise the bar of what an ordinary person can do,” resulting in a liability standard “that isn’t as fixed,” said Brian Sheppard, a professor at Seton Hall Law School. That bar also could change “depending on how much better we get at processing data,” he said.

AI tools let users paste in a string of code and extract at least some of the data it contains. In his own testing, Sheppard said he found that OpenAI’s ChatGPT was confident it could identify a Facebook ID from a raw data input.

Absent a comprehensive federal privacy law, the VPPA has remained one of the primary federal online privacy statutes.

“It turns out the language is broad enough that it’s had a lot of staying power in the system, and is quite popular now to use to pursue general data privacy cases,” Sheppard said.

Court rulings addressing how AI affects the VPPA “will likely result in another circuit split,” said Nicola Menaldo, a partner at Perkins Coie LLP who has represented companies like Amazon.com Inc. and Google LLC in privacy litigation.

Splits Within Splits

The US Court of Appeals for the First Circuit held nearly a decade ago that the VPPA prohibits disclosure of consumer data if the recipient could foreseeably translate the disclosure to discover what someone likes to watch. The Second Circuit held in May that the law doesn’t prohibit disclosure of data that would require sophisticated means to decode.

This may seem like a fine hair to split, but a judge from the US District Court for the Central District of California said recently that modern technology like AI “may indeed alter—or may already have altered—what qualifies” under the Act. That court sits within the Ninth Circuit, which is on the same side as the Second Circuit in that split.

But the Second Circuit, in a June decision, explicitly disagreed with the California court, saying the “existence of tools like ChatGPT would not alter our conclusion in this case.”

The Second Circuit likely thought “enough is enough,” Sheppard said.

Isaac Manoff of Arias Sanguinetti Wang & Team LLP—who represents the plaintiff in the California case—insists the district court’s ruling will be upheld by the Ninth Circuit if the question reaches it. A different federal judge in the state sided with the decision Sept. 16.

If Manoff is right, the Ninth Circuit may create a split within the split, applying the same test as the Second Circuit but reaching results more like the First Circuit would.

Extra Intelligence

“It is implausible that an ordinary person would look at the phrase ‘title%22%3A%22%E2%96%B7%20The%20Roast%20of%20Ric%20Flair’”—part of a larger Facebook-rooted URL—“and understand it to be a video title,” the Second Circuit said in its May decision.
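For context, strings like the one the court quoted are standard percent-encoded (URL-encoded) text rather than bespoke code, and they can be decoded mechanically with common tools—no AI required. A minimal Python sketch, using a cleaned-up version of the fragment the court quoted:

```python
from urllib.parse import unquote

# The percent-encoded fragment from the Facebook-rooted URL the court discussed
encoded = "title%22%3A%22%E2%96%B7%20The%20Roast%20of%20Ric%20Flair"

# unquote() reverses percent-encoding: %22 -> '"', %3A -> ':',
# %E2%96%B7 -> the UTF-8 character '▷', %20 -> a space
decoded = unquote(encoded)

print(decoded)  # title":"▷ The Roast of Ric Flair
```

The decoded output is a slice of JSON whose `title` field plainly names the video, which is the crux of the dispute: whether an “ordinary person” would think to run such a translation step at all.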

But “a growing proportion of the public has the technological fluency to discern information from lines of code, particularly with the advent of artificial intelligence tools,” Judge David O. Carter—the California federal judge—said.

The Second Circuit’s ruling potentially ignores what an ordinary person, trying to discern someone’s personal interests, might do with coded information, said Andrew Selbst, a professor at the UCLA School of Law. The appeals court therefore fails to give sufficient weight to the privacy concern, he said.

“If you’re so strict on the definition of ordinary person that you would not think to, like, Google the piece of information you just got,” that “would essentially eviscerate the test” for what qualifies as a forbidden disclosure under the VPPA, Selbst said. Something like ChatGPT would work similarly, he said.

Anat Lior, an assistant professor at Drexel University’s Thomas R. Kline School of Law, cautioned that AI hasn’t gotten to the point where it can translate coded information perfectly and without hallucinations.

Carter’s ruling “presumes a level of user proficiency and comfort with generative AI that is not currently supported,” she said.

But if AI does get to that point, “then privacy as a principle takes a strong hit and loses a significant part of its scope,” she said.

Whether the code is readable by the recipient may well be a question of fact, rather than a clear legal question one way or another, Selbst said. “Maybe the jury should get to decide.”

To contact the reporter on this story: Ufonobong Umanah in Washington at uumanah@bloombergindustry.com

To contact the editors responsible for this story: Nicholas Datlowe at ndatlowe@bloombergindustry.com; Laura D. Francis at lfrancis@bloombergindustry.com
