Thousands of pedophiles are using jailbroken AI character chatbots to roleplay sexually assaulting minors

Online child abuse is a pernicious problem that’s rife in digital life. In 2023, the National Center for Missing and Exploited Children (NCMEC) received more than 36 million reports of suspected child sexual exploitation—and a 300% increase in reports around online enticement of youngsters, including sextortion.

And a new report by social media analysts Graphika highlights how such abuse is moving into a troubling new space: AI character chatbots used to interact with personas representing sexualized minors and to enable other harmful activity. The firm found more than 10,000 chatbots labeled as useful for those looking to engage in sexualized roleplay with minors, or with personas that present as minors.

“There was a significant amount of sexualized minor chatbots, and a very large community around the sexualized minor chatbots, particularly on 4chan,” says Daniel Siegel, an investigator at Graphika, and one of the co-authors of the report. “What we also found is in more of the mainstream conversations that are happening on Reddit or Discord, there is disagreement related to the limits as to what chatbots should be created, and even sometimes disagreement as to whether individuals under the age of 18 should be allowed on the platform itself.”

Some of the sexualized chatbots that Graphika found were jailbroken versions of AI models developed by OpenAI, Anthropic and Google, advertised as being accessible to nefarious users through APIs. (There’s no suggestion that the companies involved are aware of these jailbroken chatbots.) “There’s a lot of creativity in terms of how individuals are creating personas, including a lot of harmful chatbots, like violent extremist chatbots and sexualized minor chatbots that are appearing on these platforms,” says Siegel.

Of the 10,000-plus chatbots, around 100 were found linked to ChatGPT, Claude, Gemini or Character.ai, the latter of which has been sued by the parents of a teenager who took his life after interacting with a non-sexualized minor chatbot hosted on the service. “There’s a lot of efforts within these adversarial communities to jailbreak or get around the safeguards to produce this material that in many instances, is child sexual abuse material,” says Siegel.

The majority of the offending chatbots were hosted on Chub AI, a character card-sharing platform that explicitly markets itself as uncensored. There, Graphika found 7,140 chatbots labeled as sexualized minor female characters, 4,000 of which were labeled as underage or engaging in implied pedophilia.

“CSAM is not allowed on the platform, and any such content is detected and immediately reported to the National Center for Missing and Exploited Children,” says a Chub AI spokesperson. “We lament the ongoing media hysteria around generative AI, and hope it ends soon as people become more familiar with it. Please use that as an exact quote, including this sentence.”

Debate among the Redditors that Graphika analyzed circled around whether interacting with minor-presenting AI characters was immoral. Another key area of discussion was specific tactics, techniques and procedures for subverting guardrails designed to prevent such interactions on proprietary chatbots owned by big tech companies, including eight separate services helping broker access to uncensored versions of those chatbots. “What I thought was particularly interesting in this report was the communal efforts of a lot of the individuals across all the different platforms engaged in trading information on how to jailbreak models, or how to get around and uncensor models,” says Siegel.

Because of those efforts, getting a handle on the scale and seriousness of the issue is difficult for the companies in question. “I think there are efforts being taken and there are a lot of conversations happening on this,” says Siegel. Yet he doesn’t lay blame solely at the model makers for the way their technologies and tools are being used. “With anything generative AI, there are so many different uses of it that they have to wrap their hands around and think about all the variety of ways in which their platforms or models themselves are being abused and can be abused.”

Siegel declined to lay responsibility at the door of the tech companies behind the models. “We’re not really involved in any regulatory policy efforts by any of these platforms,” he says. “What we’re doing is enabling them to understand the landscape of how abuse is happening, so they can decide whether to make an effort themselves.”

It’s also incumbent on us all to recognize the risks of these chatbots being used in such a way, Siegel adds. “Oftentimes, our conversations about generative AI end up about weaponized unrealities, or the ability for large language models to produce instructions on bioweapons or extremely existential threats, which are very worrisome things that I think we should be concerned about,” he says. “But what gets lost in the conversation is harm like the animation of violent extremists through chatbots, or the ability for individuals to interact with sexualized minors online.”

https://www.fastcompany.com/91290478/graphika-report-ai-chatbots-role-playing-sex-with-minors?partner=rss&utm_source=rss&utm_medium=feed&utm_campaign=rss+fastcompany&utm_content=rss

Created March 5, 2025, 12:50:08
