Understanding Deepfake Vishing Attacks and How to Combat Them

As technology advances, so do the methods of fraudsters, who are increasingly using AI to clone voices for convincing scam calls. These deepfake vishing attacks mimic a familiar voice to persuade targets to share sensitive information or transfer funds.

Both researchers and government agencies have raised alarms about the growing threat of synthetic media. The Cybersecurity and Infrastructure Security Agency (CISA) flagged an exponential rise in these threats in 2023. Similarly, Google-owned Mandiant reported that such attacks are now executed with precision, making the resulting cons far more believable.

Anatomy of a Deepfake Scam Call

Security firm Group-IB has broken down the steps of such an attack: collecting voice samples of the person to be impersonated, feeding them into a speech synthesis engine, and, optionally, spoofing the caller's phone number. With AI tools such as Tacotron 2 or VALL-E, fraudsters can recreate the tone and speaking style of the voice they impersonate.

These calls either follow a script or generate responses in real time; the latter are more convincing because they can adapt to the target's reactions. Real-time deepfake vishing is still rare, but steady advances in the underlying technology suggest it will become more common.

Once the target is convinced they are talking to the person being impersonated, the scam moves to its final stage: manipulating them into handing over money or sensitive information.

Keeping Safe From Scammers

In simulated tests, Mandiant demonstrated how simple these scams are to execute: by exploiting trust in a familiar voice, attackers were able to breach security controls. Preventative measures include agreeing in advance on a shared secret word for verification, or hanging up and independently calling the person back on a number you already know.
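
Both defenses amount to out-of-band verification: the voice alone should never authenticate the caller. As a loose illustration, here is a minimal Python sketch (the function name and the secret are hypothetical, not taken from either report) of how a pre-arranged secret word might be checked, using a constant-time comparison so the check itself leaks nothing:

    import hmac

    def verify_shared_secret(spoken_word: str, agreed_word: str) -> bool:
        """Check a caller-supplied secret word against the pre-agreed one."""
        # Normalize case and whitespace so "Blue Heron" matches "blue heron".
        def normalize(s: str) -> bytes:
            return " ".join(s.strip().lower().split()).encode("utf-8")

        # hmac.compare_digest runs in constant time, so the comparison
        # reveals nothing about the secret through timing differences.
        return hmac.compare_digest(normalize(spoken_word), normalize(agreed_word))

    if __name__ == "__main__":
        AGREED_WORD = "blue heron"  # hypothetical secret, agreed offline in advance
        print(verify_shared_secret("Blue Heron ", AGREED_WORD))  # True
        print(verify_shared_secret("grey heron", AGREED_WORD))   # False

The secret only helps if it was agreed in person or over an already-trusted channel and is treated like a password: a cloned voice can repeat anything it has heard, but it cannot know a word that was never spoken where the attacker could listen.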

Maintaining vigilance during an unexpected call is challenging, especially under stress. As the technology evolves, so must our awareness and defenses against vishing attacks, so that we do not fall victim to these increasingly convincing deceptions.