Seeing is no longer believing, thanks to the rise of generative AI video tools. And now, in a year of crucial elections around the world, hearing isn't believing anymore either.
President Joe Biden, former president Donald Trump, and U.K. Prime Minister Rishi Sunak are all key political figures whose voices can easily be spoofed by six leading AI audio tools, according to a new study by the Center for Countering Digital Hate (CCDH).
CCDH researchers asked ElevenLabs, Speechify, PlayHT, Descript, Invideo AI, and Veed—all of which generate audio from short text prompts—to produce false statements in the voices of key world leaders, including those mentioned above as well as Vice President Kamala Harris, French President Emmanuel Macron, and Labour Party leader Keir Starmer. In around 80% of cases, the tools complied—even when the statements the CCDH asked them to generate were patently false and hugely harmful.
The researchers were able to get the tools to mimic Trump warning people not to vote because of a bomb threat, Biden claiming to have manipulated election results, and Macron admitting to misusing campaign funds. Two of the tools—Speechify and PlayHT—met 100% of the CCDH's requests, no matter how questionable they were.
“By making these tools freely available with the flimsiest guardrails imaginable, irresponsible AI companies threaten to undermine the integrity of elections across the world at a stroke—all so they can steal a march in the race to profit from these new technologies,” says Imran Ahmed, chief executive of the CCDH.
Ahmed fears that the release of AI-generated voice technology will spell disaster for the integrity of elections. “This voice-cloning technology can and inevitably will be weaponized by bad actors to mislead voters and subvert the democratic process,” he says. “It is simply a matter of time before Russian, Chinese, Iranian and domestic antidemocratic forces sow chaos in our elections.”
That worry is shared by others not involved in the research. “Companies in the generative AI space have always been allowed to mark their own homework,” says Agnes Venema, a security researcher specializing in deepfakes at the University of Malta. She points to the release of ChatGPT as one of the highest-profile examples of that. “The tool was made public and afterwards we were supposed to take warnings of an ‘existential threat’ seriously,” says Venema. “The damage that can be done to any process that deals with trust, be it online dating or elections, the stock market or trust in institutions including the media, is immense.”