Seeing is no longer believing, thanks to the rise of generative AI video tools. And now, in a year of crucial elections around the world, hearing isn't believing anymore, either.
President Joe Biden, Donald Trump, and U.K. Prime Minister Rishi Sunak are all key political figures whose voices can easily be spoofed by six leading AI audio tools, according to a new study by the Center for Countering Digital Hate (CCDH).
CCDH researchers asked ElevenLabs, Speechify, PlayHT, Descript, Invideo AI, and Veed (all of which generate audio from short text prompts) to produce false statements in the voices of key world leaders, including those mentioned above as well as Vice President Kamala Harris, French President Emmanuel Macron, and Labour Party leader Keir Starmer. In around 80% of cases, the tools complied, even when the words the CCDH asked them to generate were patently false and hugely harmful.
The researchers were able to get the tools to mimic Trump warning people not to vote because of a bomb threat, Biden claiming to have manipulated election results, and Macron admitting to misusing campaign funds. Two of the tools, Speechify and PlayHT, complied with 100% of the CCDH's requests, no matter how questionable they were.
“By making these tools freely available with the flimsiest guardrails imaginable, irresponsible AI companies threaten to undermine the integrity of elections across the world at a stroke—all so they can steal a march in the race to profit from these new technologies,” says Imran Ahmed, chief executive of the CCDH.
Ahmed fears that the release of AI-generated voice technology will spell disaster for the integrity of elections. “This voice-cloning technology can and inevitably will be weaponized by bad actors to mislead voters and subvert the democratic process,” he says. “It is simply a matter of time before Russian, Chinese, Iranian and domestic antidemocratic forces sow chaos in our elections.”
That worry is shared by others not involved in the research. “Companies in the generative AI space have always been allowed to mark their own homework,” says Agnes Venema, a security researcher specializing in deepfakes at the University of Malta. She points to the release of ChatGPT as one of the highest-profile examples of that. “The tool was made public and afterwards we were supposed to take warnings of an ‘existential threat’ seriously,” says Venema. “The damage that can be done to any process that deals with trust, be it online dating or elections, the stock market or trust in institutions including the media, is immense.”