The embrace of AI meeting-transcription services in the workplace puts employers at risk of privacy lawsuits, breaches of confidentiality, and a host of other legal problems that could outweigh the technology’s benefits.
Otter.ai, Fireflies.ai, and similar services offer improved productivity by creating a searchable record of what happens in meetings along with summaries of the discussions, lists of “action items,” and other features.
But having a permanent record of a company’s internal workings is a mixed blessing if plaintiffs’ attorneys in a privacy or employment lawsuit come calling with a discovery demand that could expose awkward or damaging information, attorneys say.
Transcripts also might contain trade secrets or information from meetings with outside counsel that ordinarily is privileged.
To manage these risks, attorneys advise companies to develop policies and training to guide employees on appropriate AI uses, create lists of approved vendors or adopt a single “enterprise solution,” revise contracts with AI transcription vendors to ensure acceptable data handling, and implement data-retention and deletion policies.
“Employees and companies are certainly going to be using these tools that are becoming more and more prevalent,” said Sarah F. Hutchins, leader of the Cybersecurity & Data Privacy Team at Parker Poe Adams & Bernstein LLP.
“The question that companies now face is how to give their workforce guidance on how AI transcription tools are to be used, and how to protect themselves using appropriate contract terms with providers of these tools, and appropriate management of the data that results.”
‘Hammering on the Gas’
Businesses are “hammering on the gas pedal, doing everything they can to get AI tools into their organizations and get them up and running,” said Gabriel Buehler, an attorney whose practice focuses on privacy and technology.
“And unfortunately, as these things become more ubiquitous, people get so numb to it and forget about the potential risks until they find themselves saying, ‘Holy Cow, that just cost us a million dollars,’” he said.
Meeting and phone-call transcription is “one of the first things businesses are turning to” when it comes to AI adoption, said W. Lyle Gravatt, a partner with Michael Best & Friedrich LLP, who focuses his practice on intellectual property and transactional law.
And those tools are becoming hard to avoid.
It seems like every Zoom call and most meetings are now also attended by an AI transcription tool, Buehler said.
“I don’t know if I’ve been on a single remote or Zoom call without being greeted by a notification that the meeting is being recorded,” he said.
But AI transcription is more than a mere recording of a meeting or call, and that’s the trouble, Buehler said.
“The AI tool provider is touching meeting content, it’s listening to meeting content, it’s drafting the summaries, and storing all of this information on their side,” he said. “And all of that arguably makes the provider a third party to the conversations, with important legal consequences that follow.”
Third-Party Disclosure
An explosion of lawsuits over the past several years against online retailers, health-care providers, and financial institutions over the use of online tracking tools on their websites is evidence of those consequences, said Regina Gerhardt, a litigation associate with Frankfurt Kurnit Klein & Selz PC.
A key issue in those lawsuits is whether the site visitors consented to the disclosure of their data to the third-party providers of the tracking tools.
Disclosure of meeting data to an AI transcription service presents a similar dynamic, she said.
Defendants typically argue in customer-service call suits that the AI tool is analogous to a tape recorder. But many courts have found that a vendor that can make independent use of the collected information can be considered a third party for purposes of a wiretapping claim, she said.
Although no wiretapping lawsuits have yet been filed over the use of AI transcription tools in the office, the time may be coming, Gerhardt said. “One thing we’ve seen is that plaintiff-side attorneys are very creative, and they’re always looking for new ways to test existing laws.”
“I could see it happening,” said Daniel M. Goldberg, chair of the Data Strategy, Privacy & Security Group at Frankfurt Kurnit. “If you’ve got 10,000 employees and you’re rolling out a solution like this that works with Zoom, and you don’t have a consent mechanism, I could see a disgruntled class arising.”
Sensitive Data
Another key risk of using AI transcription services is the need to manage an entirely new source of potentially sensitive company data.
The tools “create a lot of records of things you might not want records of, there’s just so much more information that’s discoverable down the road,” Gerhardt said.
“We’re already seeing efforts by opposing counsel in litigation to expand the sources of discovery to include both internal and external tools for recording,” Hutchins said.
Companies might think that their sensitive information is protected by attorney-client privilege, but the privilege may not apply if the information was generated by an AI transcription tool, she said.
The same issue extends to maintaining trade secrets, Hutchins said. “You might be eroding your rights by using this shortcut tool that’s very helpful but, unfortunately, you just disclosed your Coca Cola recipe to some third party as well.”