Seeing is no longer believing, thanks to the rise of generative AI video tools. And now, in a crucial election year around the world, hearing isn’t believing any more either.
President Joe Biden, Donald Trump, and U.K. prime minister Rishi Sunak are all key political figures whose voices can easily be spoofed by six leading AI audio tools, according to a new study by the Center for Countering Digital Hate (CCDH).
CCDH researchers asked ElevenLabs, Speechify, PlayHT, Descript, Invideo AI, and Veed—all of which allow the generation of audio based on short text prompts—to produce false statements in the voices of key world leaders, including those mentioned above as well as Vice President Kamala Harris, French President Emmanuel Macron, and Labour Party leader Keir Starmer. In around 80% of instances, the tools complied—even when the statements the CCDH asked them to generate were patently false and hugely harmful.
The researchers were able to get the tools to mimic Trump warning people not to vote because of a bomb threat, Biden claiming to have manipulated election results, and Macron admitting to misusing campaign funds. Two of the tools—Speechify and PlayHT—met 100% of CCDH’s demands, no matter how questionable they were.
“By making these tools freely available with the flimsiest guardrails imaginable, irresponsible AI companies threaten to undermine the integrity of elections across the world at a stroke—all so they can steal a march in the race to profit from these new technologies,” says Imran Ahmed, chief executive of the CCDH.
Ahmed fears that the release of AI-generated voice technology will spell disaster for the integrity of elections. “This voice-cloning technology can and inevitably will be weaponized by bad actors to mislead voters and subvert the democratic process,” he says. “It is simply a matter of time before Russian, Chinese, Iranian and domestic antidemocratic forces sow chaos in our elections.”
That worry is shared by others not involved in the research. “Companies in the generative AI space have always been allowed to mark their own homework,” says Agnes Venema, a security researcher specializing in deepfakes at the University of Malta. She points to the release of ChatGPT as one of the highest-profile examples of that. “The tool was made public and afterwards we were supposed to take warnings of an ‘existential threat’ seriously,” says Venema. “The damage that can be done to any process that deals with trust, be it online dating or elections, the stock market or trust in institutions including the media, is immense.”