Deepfakes Grow More Sophisticated, Putting Companies on Alert
Last summer, the deepfake detection company Reality Defender began asking technology conference attendees to play an online game: Look at 30 photographs and videos and decide which were real and which were digitally created.
Even among hacking and cybersecurity professionals, the results were much the same: a majority got half wrong.
“Here were people who understand the technology,” said Jennifer Ewbank, a former high-ranking CIA staffer in cybersecurity and artificial intelligence who sits on Reality Defender’s advisory council. “They knew that they were trying to spot the deepfakes, and the results were pretty sobering.”
In the months since, deepfake technology has become cheaper, more prevalent and more convincing. Using a publicly available photo, short video or audio clip, perpetrators create what might seem to be ordinary interactions, such as a phone message from the CEO or a quick video chat with a colleague, and add a hint of urgency to act. Fintechs and insurers have become increasingly popular targets.
A scammer in January, for example, duped a Swiss company owner into transferring millions of francs to an untraceable account in Asia by using an AI-voice manipulator and posing as the owner’s business partner on a phone call. The suspects behind the heist remain unknown, Swiss police said.
Companies are bracing for more attacks. They’re bolstering security and training, with some asking consultants to create video and audio deepfakes of their top executives to show their staff how realistic the threat is.
Though the best known deepfakes tend to relate to politics or global events like the Iran war, some of the most consequential attacks are unfolding inside companies, which are often reluctant to publicize or share them, security specialists said. Evidence can be scant as well, with many deepfakes going unrecorded because employees think they are talking to a co-worker.
“There’s been many, many more million-dollar-plus wire transfers,” said Matthew Moynahan, the CEO of GetReal Security, which offers services to protect companies from AI-powered deception.
Gray Area
No US law requires a company to disclose an attack simply because it happened, and even when an attack succeeds, a gray area remains about when the company must disclose it.
Many attacks are traced to Eastern Europe, China, Russia, Myanmar and India. Security specialists say one common scheme involves bad actors pretending to be a vendor and, using deepfake audio, calling the targeted company to ask it to switch the bank account where it sends payments.
The deception can also be multi-layered.
Scammers broke into the cell phone of one Fortune 500 CEO, accessed his WhatsApp account and used it to message the company’s chief financial officer and schedule a Zoom call about a wire transfer, according to Ben Colman, Reality Defender’s CEO and co-founder.
“Then, on Zoom, it was what appeared to be the CEO confirming the details and the urgency and to just get the wire out,” he said.
Ultimately the transfer didn’t occur, but not because the deepfake was detected, said Colman.
Rising Anxiety
One sign of mounting corporate anxiety: More companies are asking for simulated deepfakes of their CEOs and public-facing executives, said Ryan Anschutz, the North America leader for IBM’s X-Force Incident Response team.
IBM runs “cyber ranges” in Washington, Cambridge, Mass., and locations abroad to teach Fortune 500 executives and other clients how to respond to cyberthreats.
The team gathers executives in one place, “essentially a boardroom in the middle,” Anschutz said, then creates a simulated attack, using phone calls and other bits of information that require them to react.
“You’re getting high-pressure situations where you need to be able to communicate,” he said.
Anschutz, who previously investigated cyber and electronic crimes as a member of FBI and Secret Service task forces, mentioned an incident in Ohio where a company’s help desk took a call from what sounded like its director of engineering, asking for a password reset. The caller, using a voice clone of the director, succeeded in getting access.
“It wasn’t for money but it was for credential and initial access into that organization’s infrastructure,” said Anschutz.
A new generation of AI tools is making it easier than ever before to mount these attacks. The latest versions of Kling AI, from a Chinese company, can produce near-perfect video without the obvious glitches of previous tools, said Moynahan, of GetReal.
It’s also increasingly hard to catch the inconsistencies in deepfake audio or video.
A sixth finger, unnatural body motions or a monotone voice are increasingly giveaways of the past. But there are still other ways to detect deepfakes, said Matthew Stamm, a computer engineering professor who leads Drexel University’s multimedia and information security lab.
“Our eyes can’t see them but it leaves behind statistical traces, the same way that if I were to break into your house to steal your TV, I’m going to leave behind fingerprints or hair,” Stamm, who has collaborated with Nvidia, said.
Some detection products produce scores indicating how likely it is that audio, video, images or text, including participants on Zoom or Teams calls, have been manipulated using AI.
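To make the idea concrete, here is a deliberately simplified Python sketch of what frequency-based scoring could look like. It is a toy illustration of Stamm’s “statistical traces” point, not his method or any vendor’s algorithm, and the baseline numbers are invented for illustration.

```python
# Toy frequency-domain score for image manipulation. Generative edits can
# shift an image's high-frequency statistics away from typical camera
# output; this compares band energies against an invented baseline.
import numpy as np

def manipulation_score(gray_image: np.ndarray) -> float:
    """Return a 0-1 score; higher means the frequency-band ratio
    deviates more from the (hypothetical) camera baseline."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray_image)))
    h, w = spectrum.shape
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - h / 2, xx - w / 2)
    high_band = radius > min(h, w) / 4        # outer (high) frequencies
    high = spectrum[high_band].mean()
    low = spectrum[~high_band].mean()
    ratio = high / (low + 1e-9)
    BASELINE, SCALE = 0.02, 0.05              # invented for illustration
    return float(np.clip(abs(ratio - BASELINE) / SCALE, 0.0, 1.0))
```

Real detectors combine many such signals, typically with trained models, rather than relying on a single hand-tuned ratio.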
Human Factor
To spot deepfakes, companies need to take steps beyond technical controls and detection tools, said Richard Bird, the chief security officer at Singulr AI, which helps clients manage their artificial intelligence usage.
“What we’re seeing now can no longer be trusted,” he said. “That means we need to address the physical aspect of seeing things and go through the physical controls that allow us to confirm whether or not somebody is who they say they are.”
For example, he said, a video or audio request for payment approval should always trigger a next step: verifying the request by independently calling the executive or the department. Just as a company wouldn’t let anyone physically into a secure room without a badge or a security check, it shouldn’t do so in digital environments, Bird said.
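A minimal sketch of that callback rule, assuming a hypothetical directory of phone numbers vetted in advance (illustrative only, not any specific product):

```python
# Illustrative out-of-band verification for wire requests. The directory
# and callback step are hypothetical; the point is that the inbound
# channel (video, voice, chat) is never trusted on its own.
VERIFIED_DIRECTORY = {"ceo": "+1-555-0100", "cfo": "+1-555-0101"}

def approve_wire(requester: str, amount_usd: float, confirm_by_phone) -> bool:
    """Release a wire only after a fresh call to a number on file."""
    number = VERIFIED_DIRECTORY.get(requester)
    if number is None:
        return False  # unknown requester: reject, no matter how urgent
    # Place a new outbound call to the vetted number and have a human
    # confirm the amount; never call back a number the requester supplied.
    return confirm_by_phone(number, amount_usd)
```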
And yet too few companies focus on the human layer beyond some basic security awareness training, said Sandy Kronenberg, the founder and CEO of Netarx, an AI-driven cybersecurity platform.
“They’re not actually addressing it other than the check box that they needed from some audit for some security awareness training,” he said.
That’s all the more necessary because companies are in a “daily fight” against these attacks, said John Surma.
“The overall price of poker has gone up,” Surma said. “And everybody’s going to have to be more vigilant.”
Fracturing Trust
Ewbank, the former CIA staffer, likes to say that the real damage of deepfakes goes beyond deception: It’s doubt.
“When anyone can convincingly sound like a CEO or look like a public official,” she wrote in a LinkedIn post, “the foundations of trust begin to fracture.”
Deepfakes fool executives by exploiting that trust, said Perry Carpenter, chief human risk management strategist at KnowBe4, a cybersecurity platform.
“The levers they’re pulling are always some kind of emotion, some kind of urgency, some kind of fear or authority or hope or something,” said Carpenter, who has spent the last two decades studying phishing campaigns and deepfake attacks.
The gateway to a successful deepfake campaign is still human frailty. A really good “bad actor” will build a psychological or narrative frame that prevents the target from even questioning what’s happening, according to Carpenter, who spoke at a Securities and Exchange Commission committee meeting about deepfakes in 2025.
“I’m not going to give you the opportunity,” he said.
And if things don’t go the way the attackers have planned, they can always claim a technical glitch to keep the ruse going.
“I’ll just have to go off camera and talk through chat,” Carpenter said.
Design and graphics: Irfan Uraizee/Bloomberg Law