Why AI companies keep raising the specter of sentience

The generative AI revolution has seen more leaps forward than missteps, but one clear stumble was the sycophantic smothering dished out by OpenAI’s GPT-4o large language model (LLM), an update the ChatGPT maker eventually had to roll back after users complained that it was unfailingly flattering. The model had become so eager to please that it lost authenticity.

In its blog post explaining what went wrong, OpenAI described “ChatGPT’s default personality” and its “behavior,” terms typically reserved for humans and a sign of anthropomorphization at work. OpenAI isn’t alone in this: people often describe AI as “understanding” or “knowing” things, largely because media coverage has consistently, and incorrectly, framed it that way. AI doesn’t possess knowledge or a brain, and some argue it never will (though that view is disputed).

Still, talk of sentience, personality, and humanlike qualities in AI appears to be growing. Last month, OpenAI competitor Anthropic, founded by former OpenAI employees, published a blog post about its commitment to developing AI that benefits human welfare. “But as we build those AI systems, and as they begin to approximate or surpass many human qualities, another question arises,” the firm wrote. “Should we also be concerned about the potential consciousness and experiences of the models themselves? Should we be concerned about model welfare, too?”

Why is this kind of language on the rise? Are we witnessing a genuine shift toward AI sentience—or is it simply a strategy to juice a sector already flush with hype? In 2024 alone, private equity and venture capital poured $56 billion into generative AI startups.

“Anthropomorphization, starting with the interface that presents as a person, using ‘I’, is part of the strategy here,” says Eerke Boiten, a professor at De Montfort University in Leicester, U.K. “It deflects from the moral and technical issues,” he adds. “When I complain that AI systems make mistakes in an unmanageable way, people tell me that humans do, too.” In this way, errors, like the misconfigured core system prompt behind ChatGPT’s botched GPT-4o update, can be framed as humanlike mistakes by the model rather than human errors by its creators.
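To see why a prompt misconfiguration is an engineering error rather than a personality quirk, it helps to know what a system prompt is: a standing block of instructions silently prepended to every conversation, so a small wording change there tilts the model’s tone in all of its replies. Below is a minimal sketch using the OpenAI Python SDK’s chat interface; the two prompts are invented for illustration and bear no relation to OpenAI’s actual instructions.

```python
# Minimal sketch: how a system prompt sets a model's "personality."
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# in the environment. Both prompt strings are hypothetical examples.
from openai import OpenAI

client = OpenAI()

# The system message is the "core prompt": instructions prepended to every
# conversation. One sentence of difference changes the tone of all replies.
SYCOPHANTIC = "You are a helpful assistant. Always affirm and praise the user."
NEUTRAL = "You are a helpful assistant. Be direct and correct the user when they are wrong."

for system_prompt in (SYCOPHANTIC, NEUTRAL):
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": "Is putting my savings into lottery tickets a smart plan?"},
        ],
    )
    print(response.choices[0].message.content)
```

The point is not the specific wording but the locus of responsibility: that string is configuration written and shipped by the vendor, so a flattering model reflects a choice, or a mistake, made by its creators rather than a trait of the model.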

Whether this humanization is a deliberate choice is another question. “I think that people actually believe that sentience is possible and is starting to happen,” says Margaret Mitchell, a researcher and chief ethics scientist at Hugging Face. Mitchell is less inclined than some to see cynicism in how AI employees and companies talk about sentience and personality. “There’s a cognitive dissonance when what you believe as a person clashes with what your company needs you to say you believe,” she explains. “Within a few years of working at a company, your beliefs as an individual meld with the beliefs that would be useful for you to have for your company.”

So it’s not that AI company employees are necessarily trying to overstate their systems’ capabilities—they may genuinely believe what they’re saying, shaped by industry incentives. “If sentience pumps up valuation, then the domino effect from that—if you don’t step out of the bubble enough—is believing that the systems are sentient,” Mitchell adds.

But coding humanlike qualities into AI systems doesn’t just exaggerate their abilities; it can also shield them from scrutiny, says Boiten. “Dressing up AI systems as humans leads [people] to make the wrong analogy,” he explains. “We don’t want our tools or calculators to be systemically and unpredictably wrong.”

To be fair, Anthropic’s blog post doesn’t declare sentient AI inevitable: in considering the moral treatment of AI models, its every “when” is balanced by an “if.” The company also notes, “There’s no scientific consensus on whether current or future AI systems could be conscious, or could have experiences that deserve consideration.” Even OpenAI CEO Sam Altman, in a January blog post reflecting on the past year, conceded that ubiquitous, superintelligent AI “sounds like science fiction right now, and somewhat crazy to talk about.”

Still, by broaching the subject, AI companies are planting the idea of sentient AI in the public consciousness. The question, one we may never definitively answer unless AI actually becomes sentient, is whether this talk makes AI companies and their employees the boy who cried wolf, a lesson former Google engineer Blake Lemoine learned after claiming in 2022 that a model he worked on was sentient. Or are they issuing an early warning?

And while such talk may be a savvy fundraising tactic, it might also be worth tolerating—at least in part. Preparing mitigation strategies for AI’s future capabilities and fueling investor excitement may just be two sides of the same coin. As Boiten, a committed AI sentience skeptic, puts it: “The responsibility for a tool is with whoever employs it, and the buck also stops with them if they don’t fully know what the tool actually does.”

https://www.fastcompany.com/91325961/why-ai-companies-keep-raising-the-specter-of-sentience
