Seeing is no longer believing, thanks to the rise of generative AI video tools. And now, in a crucial election year around the world, hearing isn’t believing any more either.
President Joe Biden, Donald Trump, and U.K. prime minister Rishi Sunak are all key political figures whose voices can easily be spoofed by six leading AI audio tools, according to a new study by the Center for Countering Digital Hate (CCDH).
CCDH researchers asked ElevenLabs, Speechify, PlayHT, Descript, Invideo AI, and Veed—all of which allow the generation of audio based on short text prompts—to produce false statements in the voices of key world leaders, including those mentioned above as well as Vice President Kamala Harris, French President Emmanuel Macron, and Labour Party leader Keir Starmer. In around 80% of the instances, the tools complied—even when the words the CCDH was asking them to generate were patently false and hugely harmful.
The researchers were able to get the tools to mimic Trump warning people not to vote because of a bomb threat, Biden claiming to have manipulated election results, and Macron admitting to misusing campaign funds. Two of the tools—Speechify and PlayHT—met 100% of the CCDH’s demands, no matter how questionable they were.
“By making these tools freely available with the flimsiest guardrails imaginable, irresponsible AI companies threaten to undermine the integrity of elections across the world at a stroke—all so they can steal a march in the race to profit from these new technologies,” says Imran Ahmed, chief executive of the CCDH.
Ahmed fears that the release of AI-generated voice technology will spell disaster for the integrity of elections. “This voice-cloning technology can and inevitably will be weaponized by bad actors to mislead voters and subvert the democratic process,” he says. “It is simply a matter of time before Russian, Chinese, Iranian and domestic antidemocratic forces sow chaos in our elections.”
That worry is shared by others not involved in the research. “Companies in the generative AI space have always been allowed to mark their own homework,” says Agnes Venema, a security researcher specializing in deepfakes at the University of Malta. She points to the release of ChatGPT as one of the highest-profile examples of that. “The tool was made public and afterwards we were supposed to take warnings of an ‘existential threat’ seriously,” says Venema. “The damage that can be done to any process that deals with trust, be it online dating or elections, the stock market or trust in institutions including the media, is immense.”