How chatbots can win over crackpots 

Technology is inherently neutral. Whether it is used for good or evil depends upon whose hands it lands in—and what they do with it. At least, so goes the argument that most of us have come to accept as a framework for assessing—and potentially regulating—the role of artificial intelligence.

We have all read about AI’s benefits as celebrated by techno-optimists, and about the risks warned of by techno-dystopians. Those dangers include technology’s ability to spread misinformation and conspiracy theories, including easily created deepfakes.

As the CEO of the Center of Science and Industry—one of America’s leading science museums and educational institutions—I am a close observer of social media’s ability to feed conspiracy theories through misinformation. Examples abound. Posts on social media still claim that vaccines cause autism, even though the theory is based on a study that was retracted 14 years ago. Nonetheless, this debunked “science” feeds the social media misinformation machine, and extends to the alleged dangers of the COVID vaccines.

For all these reasons I was thrilled to read the results of a recent, brilliantly designed study conducted by researchers from MIT and Cornell. It demonstrated that generative AI, using GPT-4 Turbo, is capable of encouraging people to reexamine and change a fixed set of conspiracy-related beliefs.

It worked like this: First, more than 2,000 Americans “articulated, in their own words, a conspiracy theory in which they believe, along with the evidence they think supports this theory.”

After that, they were asked to participate in a three-round conversation with the chatbot, which was trained to respond accurately to the false examples referenced by the subjects to justify their beliefs. 
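The study’s setup can be pictured in code. The sketch below is purely illustrative: the system prompt, model, and message format are my assumptions modeled on common chat-API conventions, not the researchers’ actual materials. It shows how a participant’s stated belief and evidence could seed a multi-round conversation in which the model replies to the participant’s specific claims.

```python
# Hypothetical sketch of the study's setup: a participant states a conspiracy
# belief plus supporting evidence, and a chat model is asked to respond to that
# evidence factually over several rounds. Prompt wording is illustrative only.

SYSTEM_PROMPT = (
    "You are a careful, factual assistant. The user believes a conspiracy "
    "theory. Respond to their specific evidence accurately and respectfully, "
    "citing verifiable facts, without mocking or lecturing."
)

def build_conversation(belief, evidence, prior_turns=None):
    """Assemble the message list for one round of a multi-turn debunking chat.

    prior_turns is a list of (bot_reply, user_followup) pairs from earlier
    rounds; each pair extends the transcript before the next model call.
    """
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"I believe: {belief}\nMy evidence: {evidence}"},
    ]
    for bot_reply, user_followup in prior_turns or []:
        messages.append({"role": "assistant", "content": bot_reply})
        messages.append({"role": "user", "content": user_followup})
    return messages
```

A real implementation would pass this message list to an LLM chat endpoint once per round, three times in total, appending each exchange before the next call.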

The results were deeply encouraging for those of us committed to creating a world safe for the truth.  In fact, given the conventional wisdom in behavioral psychology that changing people’s minds is near impossible, the results are nothing short of astounding. 

The study found that conversing with the chatbot reduced participants’ belief in their chosen conspiracy theory by about 20% on average. This is a dramatically large effect, given how deeply held the views were, and it lasted for at least two months.

Even the researchers were surprised. Gordon Pennycook, an associate professor at Cornell, said the research upended the authors’ preconceived notions about how receptive people are to solid evidence debunking not only conspiracy theories, but also other beliefs that are not rooted in good-quality information.

It is hard to move minds because belief in conspiracies makes people feel good. It satisfies unmet needs for security and recognition—whether those beliefs are related to science or politics. We support a candidate or a theory because of how it makes us feel. 

Thus, when we argue with another human, it is a battle of feelings versus feelings, which is why those debates are often unproductive. But a calm and reasonable conversation with a chatbot, which marshals evidence without emotion, demonstrated the power of perceived objectivity.

Conversation with AI creates a healthy dissociation from another human being. I suspect that separation is what enabled the subjects to rethink their feelings. It gave them emotional space. They did not become defensive because their feelings were not hurt, nor their intelligence demeaned. That was all washed away, so the subjects were able to actually “hear” the data—to let it in to trigger reconsideration.

Interestingly, the non-feeling chatbot allowed them to feel heard. And guess what? This is exactly how the best science educators work their magic. They meet the audience where they are. They do not shame or demean anyone for holding inaccurate beliefs or not understanding the basic science. Instead, they listen humbly, work to unpack what they are hearing, and then—with sensitivity—respond and share information in a nonauthoritarian exchange.

The best educators also do not flaunt their expertise; they “own” it with confidence and communicate with authority but not arrogance. Surprisingly, the same proved true for AI; in the study it provided citations and backup, but never elevated its own stature. There was no intellectual bullying.

Another powerful attribute of chatbot learning is that it replicates what happens when people do the research themselves. The conversation made subjects more inclined to agree with the conclusions because they “got there” on their own. In behavioral psychology, this is known as the IKEA effect: something has more value when you participate in its creation.

I am also excited by the study because—again, like the best educators—the chatbots were accurate. Of the claims made by the AI, “99.2% were found to be true, while 0.8% were deemed misleading. None were found to be completely false.”

Like yours, I am sure, my brain is spinning with the possibilities that this work opens up. For example, the study authors imagine that “social media could be hooked up to LLMs to post corrective responses to people sharing conspiracy theories, or we could buy Google search ads against conspiracy-related search terms.”
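To make the authors’ first idea concrete, here is a deliberately minimal sketch of how such a pipeline might be triggered. Everything in it is hypothetical: the keyword list is a crude stand-in for a real misinformation classifier, and the prompt builder only prepares text for an LLM call that is not made here.

```python
# Illustrative-only sketch of wiring an LLM to social media for corrective
# replies. The trigger phrases and prompt are hypothetical placeholders; a
# production system would need a trained classifier and an actual LLM call.

CONSPIRACY_TERMS = {"chemtrails", "flat earth", "faked moon landing"}

def needs_correction(post):
    """Crude trigger: does the post mention a known conspiracy phrase?"""
    text = post.lower()
    return any(term in text for term in CONSPIRACY_TERMS)

def correction_prompt(post):
    """Build the prompt a moderation bot would send to an LLM (not called here)."""
    return (
        "Write a brief, respectful, well-sourced reply correcting the factual "
        f"errors in this post, without insulting its author:\n\n{post}"
    )
```

The design choice worth noting is the same one the study highlights: the prompt asks for respect and sources, not ridicule, because the evidence suggests people change their minds when they feel heard rather than shamed.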

Science denialism has been with us for millennia. Internet technology married to social media has made it even more dangerous. Wouldn’t it be a delicious irony if, thanks to AI technology, misinformation finally meets its match?

https://www.fastcompany.com/91203559/how-chatbots-can-win-over-crackpots

Created 7 months ago | Oct 7, 2024, 13:30:05

