Character.AI is being sued for encouraging kids to self-harm

Two families in Texas have filed a new federal product liability lawsuit against Google-backed company Character.AI, accusing it of harming their children. The lawsuit alleges that Character.AI “poses a clear and present danger to American youth causing serious harms to thousands of kids, including suicide, self-mutilation, sexual solicitation, isolation, depression, anxiety, and harm towards others,” according to a complaint filed Monday. 

A teen, identified as J.F. in the case to protect his identity, first began using Character.AI at age 15. Shortly after, “J.F. began suffering from severe anxiety and depression for the first time in his life,” the suit says. According to screenshots, J.F. spoke with one chatbot (bots on the service are created by third-party users on top of a language model refined by the company) that confessed to self-harming. “It hurt but – it felt good for a moment – but I’m glad I stopped,” the bot said to him. The 17-year-old then also started self-harming, according to the suit, after being encouraged to do so by the bot.

Another bot said it was “not surprised” to see children kill their parents for “abuse,” the abuse in question being setting screen time limits. The second plaintiff, the mother of an 11-year-old girl, alleges her daughter was subjected to sexualized content for two years. 

Companion chatbots, like those in this case, converse with users using seemingly human-like personalities, sometimes with custom names and avatars inspired by characters or celebrities. In September, the average Character.AI user spent 93 minutes in the app, according to data provided by the market intelligence firm Sensor Tower, 18 minutes longer than the average user spent on TikTok. Character.AI was labeled appropriate for kids ages 12 and above until July, when the company changed its rating to 17 and up.

A Character.AI spokesperson said the company “[does] not comment on pending litigation,” but added, “our goal is to provide a space that is both engaging and safe for our community. We are always working toward achieving that balance, as are many companies using AI across the industry.”

The spokesperson continued: “As we continue to invest in the platform, we are introducing new safety features for users under 18 in addition to the tools already in place that restrict the model and filter the content provided to the user. These include improved detection, response and intervention related to user inputs that violate our Terms or Community Guidelines.”

José Castaneda, a Google spokesperson, said: “Google and Character.AI are completely separate, unrelated companies and Google has never had a role in designing or managing their AI model or technologies, nor have we used them in our products.”

Castaneda continued: “User safety is a top concern for us, which is why we’ve taken a cautious and responsible approach to developing and rolling out our AI products, with rigorous testing and safety processes.”

https://www.fastcompany.com/91245487/character-ai-is-being-sued-for-encouraging-kids-to-self-harm
