Why Elon Musk’s ‘Baby Grok’ has child safety advocates alarmed

The AI companion space will soon see another new entrant. Elon Musk, the owner of xAI and social media platform X, announced on July 20, 2025, “We’re going to make Baby Grok @xAI, an app dedicated to kid-friendly content.”

The decision to enter the AI chatbot and companion market seems logical for X: Around three in every four U.S. teens have already used AI companions, and the platform will naturally want to build brand loyalty early.

However, experts in child protection and social media use are raising concerns. Musk, xAI, and child-focused apps may not be a good combination, they warn. “The concern is that if X or xAI are going to try to get into the children products zone, clearly they just have a terrible track record with prioritizing child safety,” says Haley McNamara, SVP of strategic initiatives and programs at the National Center on Sexual Exploitation (NCOSE). “They’ve just proven themselves to not really care, so I think that they should stay away from kids.”

McNamara is not alone in her concerns. The apprehension is shared internationally. “Elon Musk’s plans to launch a child-focused version of Grok will cause alarm across civil society, with growing evidence about the risks posed by persuasive design choices in AI chatbots, a lack of effective safeguarding in most major industry models, and no clear strategy to prevent hallucinations,” says Andy Burrows, CEO of the Molly Rose Foundation, an organization founded by the parents of U.K. teenager Molly Russell, a 14-year-old who died by suicide after being exposed to harmful content on social media.

Beyond the fact that “Baby Grok” would come from the same organization that developed “Ani,” a sexualized AI chatbot that users quickly coerced into explicit conversations, and “Bad Rudi,” a red panda chatbot that defaults to insults, experts see broader dangers. Burrows is particularly worried about introducing AI chatbots to children, since children may easily form emotional attachments to such technology.

“Chatbots can simulate deep and emotional relationships with child users, and there are evident risks that children may use chatbots to seek mental health support or advice in ways that may ultimately prove harmful,” Burrows says. Even adults have formed inappropriate emotional bonds with AI chatbots, struggling to differentiate between artificial and real relationships.

For more impressionable children, these connections could take hold more quickly, with potential long-term effects on their mental health. McNamara says companies have an obligation to consider how their platforms affect kids and to take steps to protect them—something she believes a Grok-bot for children fails to do. (Neither xAI nor Musk responded to Fast Company’s request for comment.)

NCOSE also raises concerns about whether Musk’s platforms can adequately protect young users. McNamara notes that after Musk acquired what was then Twitter, many child safety staff were let go.

“X also allows pornography on its platform, which does not require any kind of stringent age or consent verification for those videos,” she says, contending that such “lax policies have led to a widespread presence of abusive material,” and so far there’s been little sign that the company is taking meaningful action to address these issues.

Burrows, for his part, points to the U.K.’s new Online Safety Act as one layer of oversight that would apply to Baby Grok, though he notes that X has been slow to meet the requirements of the legislation. His larger concern is global. In many countries, he warns, “the lack of regulation will mean the rollout of badly designed products will go effectively unchecked.”

Musk may see a business opportunity. But for those responsible for protecting children online, the stakes are far higher.


https://www.fastcompany.com/91375163/why-elon-musks-baby-grok-has-child-safety-advocates-alarmed
