Meta announced on Tuesday it’s taking steps to label AI-generated content, including misinformation and deepfakes, on its Facebook, Instagram, and Threads social platforms. But its mitigation strategy has some major holes, and it’s arriving long after the threat of deepfakes has become real.
Meta said that “in the coming months” (when we’re in the thick of the 2024 presidential election), it will be able to detect AI-generated content on its platforms created by tools from the likes of Adobe and Microsoft. It’ll rely on the toolmakers to inject encrypted metadata into AI-generated content, according to the specifications of an industry standards body, the Coalition for Content Provenance and Authenticity (C2PA). Meta points out that it has always added “visible markers, invisible watermarks, and metadata” to identify and label images generated by its own AI tools.
But those labeling tools are for the good guys; bad actors who spread AI-generated mis/disinformation use lesser-known, open-source tools to create content that’s hard to trace back to the tool or the creator. Or they may choose tools that make it easy to disable the addition of metadata or watermarks.
There’s little evidence that Meta has the technology to detect and label that kind of content at scale. The company says it’s “working hard” to develop classifier AI models to detect AI-generated content that lacks watermarks or metadata. It also says it isn’t yet able to detect AI-generated videos or audio recordings. Instead, Meta says it’s relying on users to label “photorealistic video or realistic-sounding audio that was digitally created or altered” when they post it, and says it may “apply penalties” for those who don’t.
In a blog post, Meta’s president of global affairs, Nick Clegg, portrays the problem of AI-generated mis/disinformation as an industry problem, a society-wide problem, and a problem of the future. “As it becomes more common in the years ahead, there will be debates across society about what should and shouldn’t be done to identify both synthetic and non-synthetic content.” But Meta controls, by far, the biggest distribution network for such content now—and the need to detect and label deepfakes is now, not a few months from now. Just ask Joe Biden or Taylor Swift. It’s too late to be talking about future plans and approaches when another high-stakes election cycle is already upon us.
“Meta has been a pioneer in AI development for more than a decade,” Clegg says. “We know that progress and responsibility can and must go hand in hand.”
Meta has been developing its own generative AI tools for years now. Can the company really say that it has devoted equal time, resources, and brain power to mitigating the disinformation risk of the technology?
Since its Facebook days, and for more than the past decade, the company has played a profound role in blurring the lines between truth and misinformation, and now it appears to be slow-walking its response to the next major threat to truth, trust, and authenticity.