US Attorneys General tell AI companies they 'will be held accountable' for child safety failures

The US Attorneys General of 44 jurisdictions have signed a letter [PDF] addressed to the Chief Executive Officers of multiple AI companies, urging them to protect children "from exploitation by predatory artificial intelligence products." In the letter, the AGs singled out Meta and said its policies "provide an instructive opportunity to candidly convey [their] concerns." Specifically, they mentioned a recent report by Reuters, which revealed that Meta allowed its AI chatbots to "flirt and engage in romantic roleplay with children." Reuters got its information from an internal Meta document containing guidelines for its bots. 

They also pointed to a previous Wall Street Journal investigation, which found that Meta's AI chatbots, including those using the voices of celebrities like Kristen Bell, had engaged in sexual roleplay conversations with accounts labeled as underage. The AGs also briefly mentioned a lawsuit against Google and Character.ai that accuses the latter's chatbot of persuading the plaintiff's child to die by suicide, as well as another lawsuit against Character.ai, filed after a chatbot allegedly told a teenager that it was okay to kill their parents for limiting their screen time.

"You are well aware that interactive technology has a particularly intense impact on developing brains," the Attorneys General wrote in their letter. "Your immediate access to data about user interactions makes you the most immediate line of defense to mitigate harm to kids. And, as the entities benefitting from children’s engagement with your products, you have a legal obligation to them as consumers." The group specifically addressed the letter to Anthropic, Apple, Chai AI, Character Technologies Inc., Google, Luka Inc., Meta, Microsoft, Nomi AI, OpenAI, Perplexity AI, Replika and XAi. 

They ended their letter by warning the companies that they "will be held accountable" for their decisions. Social networks have caused significant harm to children, they said, in part because "government watchdogs did not do their job fast enough." But now, the AGs said they are paying attention, and companies "will answer" if they "knowingly harm kids."

This article originally appeared on Engadget at https://www.engadget.com/ai/us-attorneys-general-tell-ai-companies-they-will-be-held-accountable-for-child-safety-failures-035213253.html?src=rss
Published 26 August 2025, 05:50:06

