Why Elon Musk’s ‘Baby Grok’ has child safety advocates alarmed

The AI companion space will soon see another new entrant. Elon Musk, the owner of xAI and social media platform X, announced in a July 20 post on X, “We’re going to make Baby Grok @xAI, an app dedicated to kid-friendly content.”


The decision to enter the AI chatbot and companion market seems logical for X: Around three in every four U.S. teens have already used AI companions, and the platform will naturally want to build brand loyalty early.

However, experts in child protection and social media use are raising concerns. Musk, xAI, and child-focused apps may not be a good combination, they warn. “The concern is that if X or xAI are going to try to get into the children products zone, clearly they just have a terrible track record with prioritizing child safety,” says Haley McNamara, SVP of strategic initiatives and programs at the National Center on Sexual Exploitation (NCOSE). “They’ve just proven themselves to not really care, so I think that they should stay away from kids.”

McNamara is not alone in her concerns. The apprehension is shared internationally. “Elon Musk’s plans to launch a child-focused version of Grok will cause alarm across civil society, with growing evidence about the risks posed by persuasive design choices in AI chatbots, a lack of effective safeguarding in most major industry models, and no clear strategy to prevent hallucinations,” says Andy Burrows, CEO of the Molly Rose Foundation, an organization founded by the parents of U.K. teenager Molly Russell, a 14-year-old who died by suicide after being exposed to harmful content on social media.

Beyond the fact that “Baby Grok” would come from the same organization that developed “Ani,” a sexualized AI chatbot that users have quickly coerced into explicit conversations, and “Bad Rudi,” a red panda chatbot that defaults to insults, experts see broader dangers. Burrows is particularly worried about introducing AI chatbots to children since they may easily form emotional attachments to such technology.

“Chatbots can simulate deep and emotional relationships with child users, and there are evident risks that children may use chatbots to seek mental health support or advice in ways that may ultimately prove harmful,” Burrows says. Even adults have formed inappropriate emotional bonds with AI chatbots, struggling to differentiate between artificial and real relationships.

For more impressionable children, these connections could take hold more quickly, with potential long-term effects on their mental health. McNamara says companies have an obligation to consider how their platforms affect kids and to take steps to protect them—something she believes a Grok-bot for children fails to do. (Neither xAI nor Musk responded to Fast Company’s request for comment.)

NCOSE also raises concerns about whether Musk’s platforms can adequately protect young users. McNamara notes that after Musk acquired what was then Twitter, many child safety staff were let go.

“X also allows pornography on its platform, which does not require any kind of stringent age or consent verification for those videos,” she says, contending that such “lax policies have led to a widespread presence of abusive material,” and so far there’s been little sign that the company is taking meaningful action to address these issues.

Burrows, for his part, points to the U.K.’s new Online Safety Act as one layer of oversight that would apply to Baby Grok, though he notes that X has been slow to meet the requirements of the legislation. His larger concern is global. In many countries, he warns, “the lack of regulation will mean the rollout of badly designed products will go effectively unchecked.”

Musk may see a business opportunity. But for those responsible for protecting children online, the stakes are far higher.




