AI search’s existential threat to news, as painted by The New York Times’ latest copyright complaint against Perplexity AI Inc., stands out amid the battles raging between the tech and content industries.
It’s the latest twist in a pivotal legal battle that will shape the boundaries of industries including artificial intelligence, news, publishing, and entertainment. Along with a 2024 suit brought by Dow Jones & Co. over similar conduct, the case crystallizes news outlets’ fear that AI “answer engines” will replace visits to their sites.
The prospect of lost web traffic puts the outlets in a particularly vulnerable spot—another blow to an industry struggling ever since the internet ushered in cutthroat competition for advertisers and rendered long-established revenue models obsolete.
“Their whole shtick is that you get to ‘skip the click’—ask about this subject, and we’ll tell you about it,” intellectual property and technology attorney Steve Kramarsky of Dewey, Pegno & Kramarsky said, noting Perplexity dumped the line after Dow Jones sued. “You’re obviously stealing the business model from the people you’re harvesting from.”
Perplexity’s answer in the Dow Jones case said “compiling copyrighted content to create a searchable database” has long been deemed fair use under US copyright law. But by providing fuller, direct answers rather than the short snippets of traditional search platforms, Perplexity amplifies the business risk to news organizations. The Times’ suit cited a study by content-licensing platform TollBit finding AI search engines send roughly 96% less referral traffic to news sites and blogs.
The situation ultimately poses a threat to the AI companies themselves, attorneys said. If Perplexity and others can freely hoover up and repackage news for users, that stream of reliable information will eventually run dry as depleted revenue kills off news outlets, IP attorney Avery Williams of McKool Smith said.
“It’s self-destructive,” Williams said. “Sure, you can have AI slop on top of AI slop, but no one wants that.”
Perplexity didn’t respond to requests for comment.
Measuring Market Impact
News organizations’ litigation against AI companies isn’t new. The New York Times in 2023 sued OpenAI and Microsoft Corp. over their use of copyrighted material to train large language models. But the cases against Perplexity—both filed by Rothwell, Figg, Ernst & Manbeck PC—cut more directly to the heart of the alleged harm than the training and article-replication claims in the OpenAI litigation, Kramarsky said.
“This is different. This is saying the output summaries are an economic replacement for our product,” he said.
Perplexity argues it doesn’t copy protectable expression. “It’s a bedrock principle of copyright law that neither facts nor ideas can be owned by authors,” it said in a court filing, adding it helps users discover content and directs them to the original content. In response to news organizations’ claims that “the sky is now falling,” Perplexity said the law “has always allowed providers of search technology to index the web to provide search information.”
A new technology’s potential to disrupt a business model isn’t an actionable claim, but it could affect IP-based litigation strategies. Courts have issued mixed and sometimes case-specific rulings regarding whether and when AI training is fair use, a defense to infringement claims.
Courts generally deem the copies’ impact on the original works’ commercial market the most important of four fair use factors, and Perplexity’s alleged portrayal of its outputs as a substitute for news outlets could tip the balance. A group of authors’ failure to show market harm from AI training led one otherwise-skeptical federal judge to deem the copying fair use.
Another key factor is the transformativeness of the copying, which can offset the market factor by establishing that the new works serve a distinct purpose. While OpenAI argued ChatGPT’s outputs generally don’t resemble the myriad training inputs, Perplexity likely “shot itself in the foot” with citations making it easy for outlets to trace the source of its information, attorney James Rubinowitz of New York said.
Looking Beyond Copyright
The Times and Dow Jones suits also brought other claims that have yet to be tested in AI-training cases, including trademark infringement and dilution. They argue Perplexity sometimes attributes erroneous information—“hallucinations”—to their publications.
“Using ‘The New York Times,’ attaching false facts to it: it’s bad for our reputation,” Kramarsky said. “It’s a pretty good claim.”
Perplexity’s answer argued that merely citing sources doesn’t suggest to consumers any affiliation with The Times.
The complaints don’t make claims under the Computer Fraud and Abuse Act. That law could offer a viable path to publishers, McKool Smith’s Williams said.
The Times’ complaint said Perplexity ignored “robots.txt” protocols websites use to signal they shouldn’t be scraped by third-party data-collectors. Though attorneys say that protocol is a “keep off the grass” sign with no legal ramifications, The Times also alleged it was “hard-blocking” Perplexity’s content-scraper and asking the company to stop, to no avail. The AI firm’s alleged circumvention of those technical barriers could support a CFAA claim, Williams said.
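For context, a robots.txt file is a short plain-text set of directives a site publishes at its root to tell automated crawlers which pages not to fetch. A generic illustration (not any publisher’s actual file; the crawler name here is hypothetical) might look like this:

```
# Illustrative robots.txt, served at https://example.com/robots.txt

# Bar a hypothetical AI crawler from the entire site
User-agent: ExampleAIBot
Disallow: /

# All other crawlers may index everything except the archive
User-agent: *
Disallow: /archive/
```

As the attorneys quoted above note, compliance is voluntary: the file asks well-behaved crawlers to stay out but imposes no technical barrier itself, which is why the suits pair it with allegations of circumvented hard blocks.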
Reddit Inc.'s suit against Perplexity took another approach to target data-scraping, asserting claims under the Digital Millennium Copyright Act.
It’s still too early to say which claims will prove most effective in content owners’ lawsuits against AI developers, IP attorney Randall K. McCarthy of Hall Estill said.
“Somebody’s doing something new, there’s a sense that there’s something wrong with it,” McCarthy said. “And we’re still learning how to plead it.”
The cases are Dow Jones & Co. v. Perplexity AI Inc., S.D.N.Y., No. 1:24-cv-07984; and The New York Times Co. v. Perplexity AI Inc., S.D.N.Y., No. 1:25-cv-10106.