Alex Rodriguez looked stern as he was asked in a “60 Minutes” interview if he ever used performance-enhancing drugs.
“No,” Rodriguez, then the New York Yankees third baseman, said.
The 2007 interview was uploaded into an AI-powered tool that uses an algorithm to read Rodriguez’s tone, facial expressions, and speech patterns in what’s billed as a “truthfulness analysis.” Four red bars flash across the screen along with the word “lie.”
In the legal industry, there’s already plenty of software powered by generative AI and machine learning that helps lawyers summarize documents, conduct research, and draft motions. Now, applications like TruthOrLie and Wexler.ai are trying to go beyond completing legal tasks and provide a way to supplement a lawyer’s gut instinct. As the tech develops, lawyers will have to weigh the competitive advantages AI could bring against the potential for new ethical pitfalls.
“Truth isn’t some app you can download,” said Steven Delchin, a professional ethics attorney at Squire Patton Boggs. “I wish it was that easy, but this is a difficult area that we’re getting into.”
The video clip of Rodriguez, who admitted years after the “60 Minutes” interview that he used steroids, is featured alongside other snippets of notable people like former President Bill Clinton and former congressman Anthony Weiner in TruthOrLie’s demo video.
The company, founded by entrepreneur Michael Breyer, fed videos of known incidents of people lying into an algorithm and asked it to identify similarities in speech, facial expressions, and body language between the liars. Then, when a lawyer feeds TruthOrLie a video or live feed of a deposition, AI looks for the cues it picked up on in its training data to determine the likelihood the subject is lying.
Ajit Bharwani, whose company, NextPhase.ai, uses TruthOrLie to evaluate candidates in job interviews, is an early user of the platform and takes its findings with a grain of salt.
“Do we rely on it 100%? No,” Bharwani said. “It’s still a new technology, so we’re a little cautious.”
A Nebulous Concept
London-based Wexler.ai takes a different approach to its use of artificial intelligence. It recently launched a live fact-checking feature, into which lawyers can upload case documents and evidence. Then, in a deposition, the Wexler tool can flag if the subject says something that contradicts the uploaded documents.
“We prefer to call it fact checking because it’s all about facts, and we’re checking the facts,” Wexler’s founder Gregory Mostyn said. “And it’s not really up for us to determine whether something’s a lie or not. That’s quite a nebulous concept, you know, truth.”
Everything Wexler flags can be independently verified, Mostyn said. It’s speeding up the process of looking through files for inconsistencies and pointing those out when they come up. Lawyers are already trying to do that on their own, but they don’t have a perfect memory or quick access to the same wide range of facts that AI does.
“It’s not like you just plug it in, and you listen to the AI, and it tells you whether or not it’s true,” Mostyn said. “It basically shows you a possible contradiction. You check the source, and you decide what to do.”
Deposition Assistant
Breyer, the TruthOrLie founder, said his platform can help make fishing-expedition depositions more focused.
If a deposition subject tells a lawyer, for example, that they don’t have a close relationship with a board member, and TruthOrLie comes back with a four-bar lie rating, Breyer said that lawyer is more likely to ask follow-up questions like “Have you ever had lunch with this board member?” or “How many times have you gone on vacation with this board member?”
The tool doesn’t suggest specific follow-up questions, or even tell the lawyer directly to pursue certain details, but it points them in the right direction, Breyer said.
“All of a sudden, you know that this might be a relevant avenue to explore,” Breyer said. “It could be critical for your trial, and it undermines the witness’ credibility if they indeed were lying just a moment ago.”
Breyer said internal testing shows TruthOrLie’s accuracy rate as “constantly over 70%,” which he said is an improvement over a lawyer making an educated guess. Humans detect lies about as well as a coin toss: One study found that people spot lies just 54% of the time.
Breyer has strong ties to the legal world. His father, Stephen, is a retired Supreme Court justice. The younger Breyer founded the courtroom transcription service CourtScribes. He said he’s been developing the TruthOrLie technology for over two years.
Ethical Concerns
Lawyers have ethical obligations to understand the tech underlying these new AI tools, the data behind them, and their potential for errors, Delchin said. Those obligations fall under ethical rules for competence, which the American Bar Association has said includes tech usage.
Jonathan Sherman, a partner at Sterlington PLLC who has used TruthOrLie in witness preparation, said he doesn’t have ethical concerns about using it, but would notify opposing counsel and offer it to them to use as well.
Mostyn and Breyer, meanwhile, said human involvement in using the tools minimizes the ethical questions they raise.
“AI in general is kind of breaking new ground in terms of the laws that are being developed,” Breyer said.
He has even loftier hopes about his tech being useful in any situation that depends on the veracity of what someone is saying. What if, for example, TruthOrLie could assist when hiring a baby sitter? Or buying a house? Or going on a first date?
“I do see this technology as doing something that is positive, because just think of the costs that are involved in fraud or deception,” Breyer said.
