We’re unprepared for the threat GenAI on Instagram, Facebook, and WhatsApp poses to kids

Waves of Child Sexual Abuse Material (CSAM) are inundating social media platforms as bad actors target these sites for their accessibility and reach.

The National Center for Missing and Exploited Children (NCMEC) received 36 million reports of suspected CSAM in 2023, containing 100 million files. An overwhelming 85% came from Meta, primarily Facebook, Instagram, and WhatsApp.

As if NCMEC and law enforcement didn’t have their hands full identifying victims and perpetrators, a new threat is now turbocharging the already rampant spread of this illicit content: Artificial Intelligence-Generated Child Sexual Abuse Material (AIG-CSAM). Bad actors are using widely available AI tools to produce this material, and AI-generated CSAM is still CSAM; possessing it is still a federal crime.

President Biden recently signed the REPORT Act into law, which mandates that social platforms report all kinds of CSAM, but the proliferation of AIG-CSAM is outpacing our institutions’ capacity to combat it. Offenders often create these harmful and illegal deepfakes both from benign images of minors found online and by manipulating existing CSAM, revictimizing their subjects. In June of last year, the FBI warned the public that AI-generated sextortion schemes were on the rise.

Navigating the complexities of detection

This urgent problem is only becoming more complex, creating strong headwinds for everyone involved. The influx of AIG-CSAM reports makes it harder for law enforcement to identify authentic CSAM endangering real minors. NCMEC has responded by adding a “Generative AI” field to its CyberTipline form to triage incoming reports, but it notes that many reports lack this metadata. This may be because people can’t discern AI-generated content from the real thing, further hampering NCMEC with an influx of low-quality reports.

The good news is that AI is getting better at policing itself, but there are limitations and challenges. OpenAI’s newly released “Deepfake Detector” claims to detect synthetic content from its own image generator, DALL-E, but it is not designed to detect images produced by other popular generators such as Midjourney and Stability AI’s Stable Diffusion. Companies like Meta are also increasingly flagging and labeling AI-generated content on their platforms, but most of it is relatively benign (think: Katy Perry at the Met), making AIG-CSAM detection like finding a needle in a haystack.

To fight AIG-CSAM, developers must dig into design

Much more can be done along the pipeline of responsibility, beginning with AI developers making these tools inaccessible to those who would exploit them. Developers must embrace a more stringent set of core design practices, including removing CSAM from training data, since models trained on such material can learn to generate or replicate it, further spreading harmful content. Additionally, developers should invest in stress-testing models to understand how they can be misused and in limiting the child-related queries users can ask.
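To make that last point concrete, here is a minimal, hypothetical sketch of a pre-generation query filter. Everything in it is an assumption for illustration: the function name, the keyword lists, and the regex approach are invented here, and production systems rely on trained safety classifiers, policy engines, and human review rather than keyword matching.

```python
# A deliberately simplified, hypothetical pre-generation guardrail that
# screens a text-to-image prompt before it ever reaches the model.
# Real deployments use trained safety classifiers, not keyword lists;
# this only shows where such a check sits in the request path.
import re

# Assumed, illustrative term lists; real policies are maintained by
# trust-and-safety teams and are far more nuanced than this.
MINOR_TERMS = re.compile(r"\b(child|children|kid|minor|teen)\b", re.IGNORECASE)
SEXUAL_TERMS = re.compile(r"\b(nude|explicit|sexual)\b", re.IGNORECASE)

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt may proceed to image generation."""
    # Refuse prompts that combine references to minors with sexual content.
    if MINOR_TERMS.search(prompt) and SEXUAL_TERMS.search(prompt):
        # In a real system: refuse, log, and escalate per legal obligations.
        return False
    return True

# Example: screen_prompt("a child at a birthday party") returns True,
# while a prompt pairing minor-related and sexual terms is refused.
```

The design point is where the check lives: refusals happen before generation, so the model never produces content that then has to be caught downstream.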

Platforms must invest in CSAM detection

From a technological perspective, platform investment in CSAM detection involves a combination of digital fingerprint hashing, which matches uploads against databases of known CSAM; machine-learning classifiers for previously unseen CSAM; and models that can detect AI-generated content.
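The hashing piece can be illustrated with a short sketch. Industrial systems such as Microsoft’s PhotoDNA or Meta’s PDQ use purpose-built robust hashes whose details differ; the toy below substitutes the generic perceptual hash from the open-source imagehash library, and the fingerprint database and threshold value are assumptions made up for illustration.

```python
# A minimal illustration of hash-based matching against a database of
# known-bad image fingerprints. This is NOT PhotoDNA or PDQ; it uses a
# generic perceptual hash only to demonstrate the matching logic.
from PIL import Image
import imagehash

# Hypothetical fingerprint database (placeholder values). In practice,
# hash lists are maintained and shared by organizations like NCMEC.
KNOWN_HASHES = [
    imagehash.hex_to_hash("d1d1b1a1c1e1f101"),
    imagehash.hex_to_hash("a3a3b3c3d3e3f303"),
]

# Maximum Hamming distance at which two fingerprints count as the same
# image; the right value is tuned empirically (an assumption here).
MATCH_THRESHOLD = 8

def matches_known_material(path: str) -> bool:
    """Return True if the image's fingerprint is within MATCH_THRESHOLD
    bits of any entry in the known-hash database."""
    candidate = imagehash.phash(Image.open(path))
    # Subtracting two hashes yields their Hamming distance, so resized
    # or re-encoded copies of a known image still match.
    return any(candidate - known <= MATCH_THRESHOLD for known in KNOWN_HASHES)
```

The strength of this approach is robustness to trivial edits such as resizing or recompression; its limit is that it only catches material that has already been identified and hashed, which is why the classifier and AI-detection layers are needed for novel content.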

But machine learning alone isn’t enough: classifiers in this area are known to generate significant false positives, making it difficult to find the signal in the noise. What’s more, bad actors constantly change their tactics, using seemingly innocuous hashtags and coded language known to their community to find each other and exchange illegal material.

Politicians must translate bipartisan support into funding

From a governmental perspective, child safety is thankfully an area with resounding bipartisan support. Although the REPORT Act represents positive governmental action to uphold platform accountability, the legislation has received criticism for compounding the overreporting problem NCMEC already faces. Platforms are now incentivized to err on the side of caution for fear of being fined. To address this, the government must appropriately fund organizations like NCMEC to tackle the surge of reports spurred by both the legislation and AI.

Parents must understand the latest threats

Finally, parents can play an integral role in protecting their children. They can discuss the very real risk of online predators with their kids. Parents should also make their own social media profiles private, since those profiles likely contain images of their kids, and ensure privacy settings are in place on their kids’ profiles.

Reverse image searches on Google can help parents identify photos they don’t realize are on the open web, and services like DeleteMe will remove private information that has been scraped and shared by shady data brokers.

The future of child safety in the AI era

Child sexual abuse material is not a new challenge, but its exacerbation by generative AI represents a troubling evolution in how such material proliferates. To effectively curb this, a unified effort from all stakeholders—AI developers, digital platforms, governmental bodies, nonprofits, law enforcement, and parents—is essential.

AI developers must prioritize robust, secure systems that are resistant to misuse. Platforms need to diligently identify and report any abusive content, while the government should ensure adequate funding for organizations like NCMEC. Meanwhile, parents must be vigilant and proactive.

The stakes could not be higher; the safety and well-being of our children in this new AI-driven age hang in the balance.
