Meta’s oversight board spotlights the social network’s problem with explicit AI deepfakes

Meta’s Oversight Board says the social media company fell short in its response to a pair of high-profile explicit, AI-generated images of female public figures on its sites, and it is calling for the company to update its policies and make their language clearer to users.

The decision, announced Thursday, comes after a three-month investigation following user posts of a deepfake nude image of a public figure from India, as well as a more graphic image of a public figure from the U.S. Neither Meta nor the Oversight Board named the victims of the deepfakes.

The Board found that both images violated Meta’s rule prohibiting “derogatory sexualized photoshop” images, part of its Bullying and Harassment policy, even though Photoshop was not used in their creation.

“Removing both posts was in line with Meta’s human rights responsibilities,” the finding reads.

Unfortunately, that removal didn’t happen as fast as it should have. In the case of the Indian woman, a user reported the content to Meta, citing pornography. That report was closed as it was not reviewed within 48 hours. An appeal was made, but that was also automatically closed, meaning the image remained viewable. The user then appealed to the Oversight Board, at which point Meta reversed course and said the decision to leave the image up was a mistake and removed it.

However, the image of the American celebrity, which showed her nude while a man groped her breasts, was removed promptly and added to the company’s automated enforcement system, which instantly removes reposts of offensive images.

When the Oversight Board asked about the discrepancy, Meta replied that the company relied on media reports to add the image resembling the American public figure to the bank, but there was no such news coverage in the Indian case.

“This is worrying because many victims of deepfake intimate images are not in the public eye and are forced to either accept the spread of their non-consensual depictions or search for and report every instance,” the Board’s ruling reads.

Clearer language

One of the chief takeaways from the Board’s recommendations is that the language Meta uses to prohibit this sort of image needs to be updated. It suggested the company move the prohibition on “derogatory sexualized photoshop” into the Adult Sexual Exploitation Community Standard.

It also asked for some changes to the “derogatory sexualized photoshop” term, urging Meta to modify “derogatory” to “non-consensual” and to replace “Photoshop” with a more generalized term for manipulated media.

Additionally, it recommended that policies on non-consensual content include AI-generated or manipulated images and more. “For content with this specific context, the policy should also specify that it need not be ‘non-commercial or produced in a private setting’ to be violating,” it wrote.

The Board also noted concerns about the auto-closing of appeals for image-based sexual abuse, saying “even waiting 48 hours for a review can be harmful given the damage caused. The Board does not yet have sufficient information on Meta’s use of auto-closing generally but considers this an issue that could have a significant human rights impact, requiring risk assessment and mitigation.”

Deepfake sexually explicit or nude images and videos are overwhelmingly aimed at women, who made up 99% of the targeted individuals, according to a 2023 report from Home Security Heroes.

“Experts consulted by the Board noted that this content can be particularly damaging in socially conservative communities,” the Oversight Board wrote. “For instance, an 18-year-old woman was reportedly shot dead by her father and uncle in Pakistan’s remote Kohistan region after a digitally altered photograph of her with a man went viral.”

As offensive as the posting of deepfakes often is, those images are not always necessarily meant to harass the victim. External research, the Board wrote, shows that “users post such content for many reasons besides harassment and trolling, including a desire to build an audience, monetize pages or direct users to other sites, including pornographic ones.”

By clarifying the rules and putting the emphasis on the lack of consent, the Board said, Meta could potentially reduce the proliferation of AI deepfakes on its social media sites.

https://www.fastcompany.com/91162426/meta-oversight-board-ai-created-deepfakes?partner=rss&utm_source=rss&utm_medium=feed&utm_campaign=rss+fastcompany&utm_content=rss

Created 11mo | Jul 25, 2024, 12:20:09


