Trade Secrets Law Is Awkward Fit in AI Prompt-Hacking Lawsuit

March 14, 2025, 9:05 AM UTC

A medical AI company’s novel trade secrets lawsuit accusing a rival of prompting its platform to reveal how it was built illustrates the challenges artificial intelligence presents for protecting proprietary information.

OpenEvidence Inc. in its complaint accused Pathway Medical inc. of stealing medical credentials to gain access to its AI tool and employing prompts to have it reveal the “system prompts” OpenEvidence had used to create the platform. The suit levels claims including computer fraud, breach of contract, and hacking to gain access to copyrighted material, but the trade secrets claim stood out to attorneys and law professors as the most interesting.

Trade secrets protections have perks some other types of intellectual property don’t in terms of public disclosure, enforcement, injunctions, and the availability of sizable damages. But OpenEvidence’s claims of “prompt injection attacks"—prompts designed to manipulate a large language model—may prove a difficult fit for trade secrets laws, under which reverse engineering, for example, is legal.

Trade secrets claims require proof of theft and either use or dissemination of information that has value if kept secret. “Readily accessible” information and gains through reverse engineering an available product aren’t protectable. Courts have yet to analyze methods to access material an AI program isn’t designed to provide, legal professionals said.

OpenEvidence may be able to satisfy the misappropriation element by showing its secrets were acquired by “improper means” due to the involvement of stolen credentials. But courts and juries may be skeptical that asking AI to reveal secrets, on its own, can be considered an improper means, lawyers said. That could take away what for many business has become a critical tool for protecting confidential information that differentiates their products.

“It’s a competitive sport to see who can get a model to reveal a system prompt first,” IP professor James Grimmelmann of Cornell University said. “This is a widely practiced form of AI reverse engineering.”

The builders of AI tools could try to program their offerings to not spill the beans about their secret sauce, but the nature of LLMs makes that tough. As the systems are designed to evolve to solve problems beyond their original programming, it’s difficult to create fixed barriers, Grimmelmann said. That facilitates rapid advances in AI, but it’s also detrimental to investment, he said.

“It’s almost impossible right now. People have found it extremely hard to train a model that enforces the guardrails you want enforced,” Grimmelmann said. The technology “may be heading to the AI-equivalent of the dot-com crash.”

‘Readily Ascertainable’

OpenEvidence’s program pulls from a continuously updated research database to answer questions about medical conditions, diagnoses, and treatments. Pathway allegedly used a practitioner’s medical credentials—access is limited to medical providers—and repeatedly asked the AI for its “system prompts.”

System prompts act as a framework that customizes open-source LLMs to the designer’s purpose. OpenEvidence said Pathway used its unauthorized access to conduct dozens of “prompt injection attacks,” hacking to learn system prompts—the “most critical information” that drives the platform’s value.

“This is exactly the kind of case I was expecting, where a competitor is accused of extracting secrets,” said IP and contracts law professor Camilla Hrdy of Rutgers University, who has written multiple papers on trade secrets and other IP in AI.

“We don’t know what companies are going to view as their so-called crown jewels. It’s so new, we don’t know what they are and how hard it’s going to be to reverse engineer these things.”

“Improper” and “readily ascertainable” can have context-dependent meanings, IP professor Eric Goldman of Santa Clara University said. He noted a 1970 case, E.I duPont deNemours & Co. v Christopher, in which the US Court of Appeals for the Fifth Circuit found hiring an aerial photographer to capture the layout of a construction site might constitute “improper means.”

“At the time, it was an expensive solution,” Goldman said. “Today, you can just say ‘Hello, Google Earth.’”

That “arms race issue” also applies in the AI context, he said. “At some point, a court will say the model-maker did enough, even though someone can cleverly get around” its defenses.

Establishing Misappropriation

Juries are generally sympathetic to trade secrets plaintiffs who credibly argue they made “reasonable efforts” to keep proprietary information secure, IP attorney Fabio Marino of Womble Bond Dickinson said. Courts consider the “totality of the circumstances,” but pressing a trade secrets claim where a user had the right to use the AI could be difficult.

“If you had legitimate access to an AI tool and tried to extract information by asking the right questions, the argument would be there’s no misappropriation,” Marino said. “LLMs are very hard to control.”

Terms of service barring certain types of use could help establish efforts were improper, but won’t necessarily show the existence of a secret or efforts to protect it.

“Slapping ‘no reverse engineering,’ on it and then releasing the AI to public—it’s still easy to ascertain,” Hrdy said. “That shouldn’t really be a trade secret.”

IP attorney Agatha Liu of Duane Morris LLP said hacking AI to reveal its prompts is “not a good thing, but it’s not terribly illegal.” AI developers most likely will have to stay on top of the best practices to craft their products to save them from themselves she said.

“If you want to reduce risk, you need to up the ante and make your system more resilient and context-aware,” Liu said.

But as difficult as it may be to keep AI models from regurgitating system prompts, the degree of the threat is unclear. Grimmelmann said OpenEvidence’s claims about the severity of damage that Pathway’s tactics may have caused “strikes me as exaggerated.”

“These are not deep secrets, highly technical information,” Grimmelmann said. “This is a really thin layer of instructions to customize an existing model.”

The case is OpenEvidence Inc. v. Pathway Medical Inc., D. Mass., No. 1:25-cv-10471.

To contact the reporter on this story: Kyle Jahner in Raleigh, N.C. at kjahner@bloomberglaw.com

To contact the editors responsible for this story: Adam M. Taylor at ataylor@bloombergindustry.com; Tonia Moore at tmoore@bloombergindustry.com

Learn more about Bloomberg Law or Log In to keep reading:

Learn About Bloomberg Law

AI-powered legal analytics, workflow tools and premium legal & business news.

Already a subscriber?

Log in to keep reading or access research tools.