
Amid the generative artificial intelligence frenzy of the past few months, security researchers have been revisiting concerns that AI-generated voices, or voice deepfakes, have become convincing enough and easy enough to produce that scammers could start using them on a large scale.
In recent years there have been several high-profile incidents in which cybercriminals have reportedly used the deepfaked voice of a company CEO in an attempt to steal large sums of money, not to mention a documentary filmmaker who created a deepfake of Anthony Bourdain's voice after the chef's death. But are criminals at a tipping point where any given spam call might contain the cloned voice of your sibling desperately seeking "bail money"? No, the researchers say, at least not yet.
The techniques for creating convincing, robust voice deepfakes are potent and increasingly common in controlled settings, or in situations where extensive recordings of a person's voice are available. In late February, Motherboard reporter Joseph Cox published findings that he had recorded five minutes of himself talking and then used ElevenLabs, a publicly available generative AI service, to create voice deepfakes that defeated a bank's voice-authentication system. But as with generative AI's shortcomings in other media, including the limitations of text-generating chatbots, voice deepfake services still cannot consistently produce perfect results.
“Depending on the attack scenario, real-time capabilities and the quality of the stolen speech samples have to be considered,” says Lea Schönherr, a security and adversarial machine learning researcher at the CISPA Helmholtz Center for Information Security in Germany. “While it is often said that only a few seconds of stolen speech are needed, quality and length have a large impact on the outcome of an audio deepfake.”
Digital scams and social engineering attacks such as phishing appear to be a growing threat, but researchers point out that scams in which attackers call victims and try to impersonate someone the target knows have existed for decades, no artificial intelligence required. Their very longevity means that these schemes are at least somewhat effective at tricking people into sending money to attackers.
“These scams go on and on. Most of the time they don’t work, but occasionally they reach a victim who, for whatever reason, is primed to believe what they’re saying,” says Crane Hassold, a longtime social engineering researcher. “People will swear that the person they were talking to was whoever was being impersonated, when in reality it’s just their brain filling in the blanks.”
Hassold said his grandmother was the victim of an impersonation scam in the mid-2000s when attackers called and pretended to be him, convincing her to send them $1,500.
“With my grandma, the scammers didn’t say who was calling at first; they just started talking about how they had been arrested while at a music festival in Canada and needed her to send money for bail. Her response was, ‘Crane, is that you?’ and then they had exactly what they needed,” he said. “Scammers are essentially leading victims to believe what they want them to believe.”
As with many social engineering scams, voice impersonation scams work best when the target is caught up in a sense of urgency and is simply trying to help someone or complete a task they feel is their responsibility.
“I was driving to work when my grandmother left me a voicemail saying something like, ‘I hope you’re doing well. Don’t worry, I sent the money, and I’m not telling anyone,’” Hassold said.
Justin Hutchens, director of research and development at cybersecurity firm Set Solutions, said he thinks deepfake voice scams are getting more attention, but he’s also concerned that AI-driven scams will become more automated in the future.
“I expect that in the near future we will start to see threat actors combine deepfake speech technology with conversational interactions powered by large language models,” Hutchens said, referring to platforms like OpenAI’s ChatGPT.
For now, though, Hassold cautions against being too quick to assume that voice-impersonation scams are powered by deepfakes. After all, the old-fashioned, human-powered version of the scam still exists and can still be convincing to the right target at the right time.