Raising awareness around the risks of AI-powered deepfake content has become a prime concern for those worried about election integrity and mis- and disinformation spreading on social media. But efforts to warn people about deepfakes can inadvertently make them more suspicious of authentic content too.
That’s the finding of a new study by John Twomey, a researcher in applied psychology at University College Cork in Ireland, and colleagues. The researchers analyzed more than 1,200 tweets about the Russian invasion of Ukraine, some of which warned about the risk of encountering deepfake content on social media.
“It’s kind of a double-edged sword with deepfakes, in many ways,” says Twomey. “It’s going to increase our distrust in real media as well.” In other words, once deepfakes become the norm, it’s easier for people to point to legitimate, real information and claim it’s also bogus content (similar to the way the term “fake news” was co-opted and redefined by former U.S. president Donald Trump).
Twomey says that real-time redefinition was happening in the Twitter conversations he and his colleagues analyzed. “It’s what we saw, where people were throwing ‘deepfakes’ around like a buzzword,” he says. Some of the tweets he encountered even co-opted the term “deepfake” to describe (authentic) individual accounts on the social platform. “That’s the main worry for me,” he says.
The issue will become particularly pertinent in the next year, as more than 150 elections worldwide will take place in an era where it’s never been easier to produce fake content using AI.
Twomey and his team witnessed plenty of positive, nuanced conversation on Twitter (now officially called X) about the risks of deepfakes. “In many cases, it was positive skepticism,” says Twomey. “People were saying, ‘Oh, we should be careful when we share this video, which might be a deepfake.’” But he was concerned with the way that conspiratorially minded corners of social media weaponized the labeling of content as deepfaked, muddying the understanding of what was and was not legitimate content.
Twomey was less certain about how to tackle the real and present danger deepfakes pose to public conversation without fueling the risk of the term’s meaning being twisted by bad actors. “The genie’s kind of out of the bottle,” he admits. He suggests platforms should invest more to stop the sharing of deepfake content to begin with. For individuals, he has two pieces of advice. One is to ask for proof to back up claims that a video is somehow inauthentic. “If someone calls a video deepfake, they should be providing evidence for it,” he says.
The second is to consider your personal responsibility to others. “With a lot of these videos, it can take a while to figure out if it’s a fake,” he says. “No one really wants to hear that they should have to wait a while before posting or retweeting. But it can sometimes be good just to wait for opinions to come in that have some elements of expertise.”