- Law firms vet new tools before allowing attorneys to use them
- DeepSeek’s China ties create added security concerns
Fox Rothschild LLP blocked its lawyers from accessing tools from DeepSeek, the Chinese artificial intelligence startup, citing concerns about the privacy risks it may pose to client data.
The swift action comes as other Big Law firms, such as Polsinelli and Wilson Sonsini Goodrich & Rosati, are responding to the rapid development of generative artificial intelligence by implementing guardrails on their lawyers’ use of new technology.
Fox Rothschild’s 900-plus attorneys use AI tools, and like many other firms, it doesn’t generally bar its lawyers from using ChatGPT, although it imposes restrictions on the use of AI with client data, said Mark G. McCreary, the firm’s chief artificial intelligence and information security officer. But DeepSeek, launched by a Chinese investor, poses unique security challenges.
“It’s one thing to have a risk that somebody makes a mistake with ChatGPT,” McCreary said. “It’s a completely different risk for someone to make a mistake with China.”
McCreary, who chairs Fox Rothschild’s artificial intelligence practice and co-chairs its privacy and data security practice, said it was prudent to ban the app while details are still emerging, such as how and where DeepSeek stores data.
DeepSeek says in its privacy terms that it collects and stores data on servers in China, Bloomberg News reported. “Hundreds” of companies are working to block DeepSeek, whose AI chatbot recently rocketed to the top of Apple’s App Store download charts.
A data breach this week illustrated further security concerns with DeepSeek, beyond the technology’s national origin, McCreary said. The cloud security company Wiz on Wednesday revealed it had discovered chat data and “highly sensitive information” from DeepSeek on a public platform.
“We were not comfortable with the security,” McCreary said of the decision to block DeepSeek.
Vetting new models
New models, like DeepSeek’s R1, have to be vetted by Wilson Sonsini Goodrich & Rosati’s chief information security officer and general counsel before its lawyers can use them, said Annie Datesh, the Silicon Valley firm’s chief innovation officer. DeepSeek’s R1 model hasn’t yet gone through that process, so lawyers can’t use it, she said.
“We don’t know what’s under the hood,” Datesh said. “We’d have to do a whole assessment.”
It’s a similar story at Polsinelli, according to Chase Simmons, the national firm’s chairman and chief executive officer. The firm doesn’t have a specific policy addressing DeepSeek yet, he said, but it doesn’t generally allow AI models to run on firm computers without approval. There are also client restraints regarding AI use, he added.
“We do have a pretty strict regime over here,” Simmons said.
Vendor risk
AI concerns aren’t limited to Wilson Sonsini’s own use of new models, Datesh said. Vendors that law firms use rely on AI models on the back end, and there could be an issue if those vendors switch from a known entity like ChatGPT to DeepSeek’s R1, she said.
“We expect them to kind of make sure that they’re alerting us when LLMs are changing that are not on our approved LLM list,” Datesh said about vendors.
Vendors are required to notify Fox Rothschild of those kinds of changes, McCreary said.
Vendor concerns are less acute with the legal world’s leading technology tools, because the companies that make them understand that the protection of law firm data is integral to their ability to retain customers, McCreary said.
“I’m not real worried about somebody deciding they’re gonna save some money and go use DeepSeek,” he said.
—With assistance from Chris Opfer in New York