Thousands of pedophiles are using jailbroken AI character chatbots to roleplay sexually assaulting minors

Online child abuse is a pernicious and pervasive problem in digital life. In 2023, the National Center for Missing and Exploited Children (NCMEC) received more than 36 million reports of suspected child sexual exploitation, along with a 300% increase in reports of online enticement of minors, including sextortion.

And a new report by social media analysts Graphika highlights how such abuse is moving into a troubling new space: AI character chatbots. The firm found more than 10,000 chatbots labeled as useful for those looking to engage in sexualized roleplay with minors, or with personas that present as minors.

“There was a significant amount of sexualized minor chatbots, and a very large community around the sexualized minor chatbots, particularly on 4chan,” says Daniel Siegel, an investigator at Graphika, and one of the co-authors of the report. “What we also found is in more of the mainstream conversations that are happening on Reddit or Discord, there is disagreement related to the limits as to what chatbots should be created, and even sometimes disagreement as to whether individuals under the age of 18 should be allowed on the platform itself.”

Some of the sexualized chatbots that Graphika found were jailbroken versions of AI models developed by OpenAI, Anthropic and Google, advertised as being accessible to nefarious users through APIs. (There’s no suggestion that the companies involved are aware of these jailbroken chatbots.) “There’s a lot of creativity in terms of how individuals are creating personas, including a lot of harmful chatbots, like violent extremist chatbots and sexualized minor chatbots that are appearing on these platforms,” says Siegel.

Of the 10,000-plus chatbots, around 100 were found linked to ChatGPT, Claude, Gemini or Character.ai, the latter of which has been sued by the parents of a teenager who took his life after interacting with a non-sexualized minor chatbot hosted on the service. “There’s a lot of efforts within these adversarial communities to jailbreak or get around the safeguards to produce this material that in many instances, is child sexual abuse material,” says Siegel.

The majority of the offending chatbots were hosted on Chub AI, a character card-sharing platform that explicitly markets itself as uncensored. There, Graphika found 7,140 chatbots labeled as sexualized minor female characters, 4,000 of which were labeled as underage or engaging in implied pedophilia.

“CSAM is not allowed on the platform, and any such content is detected and immediately reported to the National Center for Missing and Exploited Children,” says a Chub AI spokesperson. “We lament the ongoing media hysteria around generative AI, and hope it ends soon as people become more familiar with it. Please use that as an exact quote, including this sentence.”

Debate among the Redditors that Graphika analyzed circled around whether interacting with minor-presenting AI characters was immoral. Another key area of discussion was the specific tactics, techniques and procedures used to subvert guardrails designed to prevent such interactions on proprietary chatbots owned by big tech companies, including eight separate services that help broker access to uncensored versions of those chatbots. “What I thought was particularly interesting in this report was the communal efforts of a lot of the individuals across all the different platforms engaged in trading information on how to jailbreak models, or how to get around and uncensor models,” says Siegel.

Because of those efforts, getting a handle on the scale and seriousness of the issue is difficult for the companies in question. “I think there are efforts being taken and there are a lot of conversations happening on this,” says Siegel. Yet he doesn’t lay blame solely at the model makers for the way their technologies and tools are being used. “With anything generative AI, there are so many different uses of it that they have to wrap their hands around and think about all the variety of ways in which their platforms or models themselves are being abused and can be abused.”

Siegel declined to lay responsibility at the door of the tech companies behind the models. “We’re not really involved in any regulatory policy efforts by any of these platforms,” he says. “What we’re doing is enabling them to understand the landscape of how abuse is happening, so they can decide whether to make an effort themselves.”

It’s also incumbent on us all to recognize the risks of these chatbots being used in such a way, Siegel adds. “Oftentimes, our conversations about generative AI end up about weaponized unrealities, or the ability for large language models to produce instructions on bioweapons or extremely existential threats, which are very worrisome things that I think we should be concerned about,” he says. “But what gets lost in the conversation is harm like the animation of violent extremists through chatbots, or the ability for individuals to interact with sexualized minors online.”

https://www.fastcompany.com/91290478/graphika-report-ai-chatbots-role-playing-sex-with-minors?partner=rss&utm_source=rss&utm_medium=feed&utm_campaign=rss+fastcompany&utm_content=rss

Published March 5, 2025, 12:50:08

