Voice deepfakes are the new frontier of scamming. Here’s how you can detect them

You have just returned home after a long day at work and are about to sit down for dinner when suddenly your phone starts buzzing. On the other end is a loved one, perhaps a parent, a child, or a childhood friend, begging you to send them money immediately.

You ask them questions, attempting to understand. There is something off about their answers, which are either vague or out of character, and sometimes there is a peculiar delay, almost as though they were thinking a little too slowly. Yet you are certain it is your loved one speaking: that is their voice you hear, and the caller ID shows their number. Chalking up the strangeness to their panic, you dutifully send the money to the bank account they provide you.

The next day, you call them back to make sure everything is all right. Your loved one has no idea what you are talking about. That is because they never called you—you have been tricked by technology: a voice deepfake. Thousands of people were scammed this way in 2022.

As computer security researchers, we see that ongoing advancements in deep-learning algorithms, audio editing and engineering, and synthetic-voice generation have meant that it is increasingly possible to convincingly simulate a person’s voice.

Even worse, chatbots like ChatGPT are starting to generate realistic scripts with adaptive real-time responses. By combining these technologies with voice generation, a deepfake goes from being a static recording to a live, lifelike avatar that can convincingly have a phone conversation.

Cloning a voice

Crafting a compelling high-quality deepfake, whether video or audio, is not the easiest thing to do. It requires a wealth of artistic and technical skills, powerful hardware, and a fairly hefty sample of the target voice.

A growing number of services offer to produce moderate- to high-quality voice clones for a fee, and some voice deepfake tools need a sample only a minute long, or even just a few seconds, to produce a voice clone that could be convincing enough to fool someone. However, to convince a loved one, for example, to use in an impersonation scam, it would likely take a significantly larger sample.

Protecting against scams and disinformation

With all that said, we at the DeFake Project of the Rochester Institute of Technology, the University of Mississippi, and Michigan State University, along with other researchers, are working hard to be able to detect video and audio deepfakes and limit the harm they cause. There are also straightforward and everyday actions that you can take to protect yourself.

For starters, voice phishing, or “vishing,” scams like the one described above are the most likely voice deepfakes you might encounter in everyday life, both at work and at home. In 2019, an energy firm was scammed out of $243,000 when criminals simulated the voice of its parent company’s boss to order an employee to transfer funds to a supplier. In 2022, people were swindled out of an estimated $11 million by simulated voices, including those of close personal connections.

What can you do?

Be mindful of unexpected calls, even from people you know well. This is not to say you need to schedule every call, but it helps to at least email or text message ahead. Also, do not rely on caller ID, since that can be faked too. For example, if you receive a call from someone claiming to represent your bank, hang up and call the bank directly to confirm the call’s legitimacy. Be sure to use the number you have written down, saved in your contacts list, or that you can find on Google.

Additionally, be careful with your personal identifying information, like your Social Security number, home address, birth date, phone number, middle name, and even the names of your children and pets. Scammers can use this information to impersonate you to banks, realtors, and others, enriching themselves while bankrupting you or destroying your credit.

New: we proved it could be done. I used an AI replica of my voice to break into my bank account. The AI tricked the bank into thinking it was talking to me. Could access my balances, transactions, etc. Shatters the idea that voice biometrics are foolproof https://t.co/YO6m8DIpqR

— Joseph Cox (@josephfcox) February 23, 2023

Here is another piece of advice: Know yourself. Specifically, know your intellectual and emotional biases and vulnerabilities. This is good life advice in general, but it is key for protecting yourself from being manipulated. Scammers typically seek to suss out and then prey on your financial anxieties, your political attachments, or other inclinations, whatever those may be.

This alertness is also a decent defense against disinformation using voice deepfakes. Deepfakes can be used to take advantage of your confirmation bias, or what you are inclined to believe about someone.

If you hear an important person, whether from your community or the government, saying something that either seems very uncharacteristic for them or confirms your worst suspicions of them, you would be wise to be wary.


Matthew Wright is a professor of computing security at the Rochester Institute of Technology. Christopher Schwartz is a postdoctoral research associate of computing security at the Rochester Institute of Technology.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

https://www.fastcompany.com/90869134/voice-deepfakes-are-the-new-frontier-of-scamming-heres-how-you-can-detect-them?partner=rss&utm_source=rss&utm_medium=feed&utm_campaign=rss+fastcompany&utm_content=rss

Published 24.03.2023, 11:21:03
