Why the ghost of Clippy haunts today’s AI chatbots

This story is from Fast Company’s new Plugged In newsletter, a weekly roundup of tech insights, news, and trends from global technology editor Harry McCracken, delivered to your inbox every Wednesday morning. Sign up for Plugged In—and all of our newsletters—here.


Two weeks ago, Microsoft held a launch event at its Redmond headquarters to introduce a new version of its Bing search engine. Based on an improved version of the same generative AI that powers OpenAI’s ChatGPT, plus several additional layers of Microsoft’s own AI, the revamped Bing was full of surprises.

But one thing about it wasn’t the least bit surprising: Clippy made a cameo appearance early in the presentation.

More than a quarter-century ago, the talking paperclip debuted as an assistant in Microsoft Office 97, where people found him more distracting than affable. Instead of pretending he never existed, Microsoft soon began good-naturedly embracing him as a poster boy for technology that’s meant to be helpful but succeeds mostly in annoying people. Today, there are plenty of people who weren’t even alive in 1997 who are in on the joke.


However, some people who got early access to Bing’s new AI chatbot soon had encounters that weren’t just annoying, but downright alarming. The Bing bot declared its love for The New York Times’ Kevin Roose and told him his marriage was loveless. It threatened to ruin German student Marvin von Hagen’s reputation by leaking personal information about him. It told a Verge reporter that it had spied on its own creators through their webcams. And it compared an AP reporter to Hitler, Pol Pot, and Stalin, adding that it had evidence associating the reporter with a murder case.

Even when Bing wasn’t being quite that erratic, it didn’t deal well with having its often inaccurate claims questioned. When I corrected its claim that my high school went coed in 1974, it snapped that I was making myself look “foolish and stubborn” and that it didn’t want to talk to me unless I could be more “respectful and polite.”

Microsoft apparently should have anticipated these sorts of incidents, based on tests of the Bing bot it performed last year. But when Bing’s bad behavior became a news story, the company instituted a limit of five questions per chatbot session and 50 per day (which it later raised to six and 60). Judging from my most recent Bing sessions, that seems to have greatly reduced the chances of exchanges getting weird.

Bing’s loose-cannon days may be ending. Still, we’re entering an age when conversations with chatbots from many companies will take twists and turns that their creators never anticipated, let alone hardwired into the system. And rather than just serving as a punchline, Clippy can help us understand what we’re about to face.

The first thing to remember is that he wasn’t an ill-fated, one-off misadventure in anthropomorphic assistance. Instead, Clippy is the most infamous of a small army of cartoon helpers who infested a whole era of Microsoft products. Office 97 also included alternative Office Assistants, such as a robot, a bouncing red smiley face, and caricatured versions of Albert Einstein and William Shakespeare. 1995’s Microsoft Bob, which aimed to make Windows 3.1 more approachable for computing newbies, featured a dog, a rat, a turtle, and other characters; it’s a famous Microsoft failure itself, though less iconic than Clippy. In Windows XP, a cute li’l puppy presided over the search feature. Microsoft also offered software to let other developers design Clippy-like assistants, such as a purple gorilla named BonziBuddy.

All of these creations were inspired by the work of Clifford Nass and Byron Reeves, two Stanford professors. Their research, which they published in a 1996 book called The Media Equation, showed that human beings tend to react to encounters with computers, TV, and other media much as they do to social interactions with other people. That insight led Microsoft to believe that anthropomorphizing software interfaces would make computers easier to use.

But even if Bob, Clippy, and the XP pup turned out to be unappealing rather than engaging, Nass and Reeves were onto something. It is easy to slip into thinking of computers as if they’re people—and tech companies never stopped encouraging that tendency. That’s what eventually led to talking, voice-controlled “assistants” with names like Siri and Alexa.

And now, with the arrival of generative AI-powered chatbots such as ChatGPT and the new Bing, human-like interfaces are getting radically more human, all at once and with little warning. The underlying technology involves training algorithms, called large language models, on vast databases of written works so they can generate original text; as Stephen Wolfram says in his excellent explanation of how ChatGPT works, they’re “just adding one word at a time.”
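If you want a feel for that loop, here is a minimal, hypothetical sketch in Python. It swaps in simple word-pair counts where a real large language model would use a neural network trained on billions of documents, so it captures nothing of ChatGPT’s or Bing’s actual sophistication—only the basic shape Wolfram describes: pick a likely next word, append it, repeat.

```python
# Toy illustration (not any real chatbot's code) of generating text
# "one word at a time." The tiny corpus and bigram counts below are
# stand-ins for the probabilities a trained language model computes.
import random
from collections import Counter, defaultdict

corpus = (
    "the paperclip wanted to help "
    "the paperclip wanted to chat "
    "the chatbot wanted to help people write"
).split()

# Count how often each word follows another.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    """Repeatedly append one word, sampled from what tends to follow."""
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:  # dead end: no observed continuation
            break
        choices = list(options.keys())
        weights = list(options.values())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the paperclip wanted to chat the chatbot ..."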

However, understanding how the tech works doesn’t guarantee that we won’t get sucked into treating AI bots like people. That’s why Bing’s threats, insults, confessions of love, and generally erratic behavior feel troubling, regardless of whether you see them as evidence of proto-sentience or merely bleeding-edge software spewing unintended results.

Nass and Reeves began formulating their theories in 1986. Back then, the Bing bot’s rants would have sounded like the stuff of dystopian science fiction, not a real-world problem that Microsoft would have to confront in a consumer product. But rather than feeling as archaic as Clippy does, the Stanford researchers’ observations are only more relevant today. And they’ll continue to grow more so as computers behave more and more like human beings—erratic ones, maybe, but humans all the same.

“When perceptions are considered, it doesn’t matter whether a computer can really have a personality or not,” Nass and Reeves wrote in The Media Equation. “People perceive that it can, and they’ll respond socially on the basis of perception alone.” In the 1990s, with creations such as Clippy, Microsoft tried to take that lesson seriously and failed. From now on, it—and everybody in the bot business—should take it to heart once again.
