How chatbots can win over crackpots 

Technology is inherently neutral. Whether it is used for good or evil depends upon whose hands it lands in—and what they do with it. At least, so goes the argument that most of us have come to accept as a framework for assessing—and potentially regulating—the role of artificial intelligence.

We have all read about AI’s benefits as celebrated by techno-optimists, and the risks warned of by techno-dystopians. Those dangers include technology’s ability to spread misinformation and conspiracy theories, including easily created deepfakes.

As the CEO of the Center of Science and Industry—one of America’s leading science museums and educational institutions—I am a close observer of social media’s ability to feed conspiracy theories through misinformation. Examples abound. Posts on social media still claim that vaccines cause autism, even though the theory is based on a study that was retracted 14 years ago. Nonetheless, this debunked “science” feeds the social media misinformation machine, and extends to the alleged dangers of the COVID vaccines.

For all these reasons I was thrilled to read the results of a recent, brilliantly designed study conducted by researchers from MIT and Cornell. It demonstrated that generative AI, in this case GPT-4 Turbo, is capable of encouraging people to reexamine and change firmly held conspiracy beliefs.

It worked like this: First, more than 2,000 Americans “articulated, in their own words, a conspiracy theory in which they believe, along with the evidence they think supports this theory.”

After that, they were asked to participate in a three-round conversation with the chatbot, which was trained to respond accurately to the false examples referenced by the subjects to justify their beliefs. 

The results were deeply encouraging for those of us committed to creating a world safe for the truth.  In fact, given the conventional wisdom in behavioral psychology that changing people’s minds is near impossible, the results are nothing short of astounding. 

The survey found that 20% of the sample, after conversing with the chatbot, changed their opinions. This is a dramatically large effect, given how deeply held the views were, and it lasted for at least two months. 

Even the researchers were surprised. Gordon Pennycook, an associate professor from Cornell, noted, “The research upended the authors’ preconceived notions about how receptive people were to solid evidence debunking not only conspiracy theories, but also other beliefs that are not rooted in good-quality information.”

It is hard to move minds because belief in conspiracies makes people feel good. It satisfies unmet needs for security and recognition—whether those beliefs are related to science or politics. We support a candidate or a theory because of how it makes us feel. 

Thus, when we argue with another human, it is a battle of feelings versus feelings. That is why those debates are often unproductive. But a calm and reasonable conversation with a chatbot, which marshals evidence without emotion, demonstrated the power of perceived objectivity.

Conversation with AI creates a healthy dissociation from another human being. I suspect that separation is what enabled the subjects to rethink their feelings. It gave them emotional space. They did not become defensive because their feelings were not hurt, nor their intelligence demeaned. That was all washed away, so the subjects were able to actually “hear” the data—to let it in to trigger reconsideration.

Interestingly, the non-feeling chatbot allowed them to feel heard. And guess what? This is exactly how the best scientific educators work their magic. They meet the audience where they are. They do not shame or demean anyone for holding inaccurate beliefs or not understanding the basic science. Instead, they listen humbly, work to unpack what they are hearing, and then—with sensitivity—respond and share information in a nonauthoritarian exchange.

The best educators also do not flaunt their expertise; they “own” it with confidence and communicate with authority but not arrogance. Surprisingly, the same proved true for AI; in the study it provided citations and backup, but never elevated its own stature. There was no intellectual bullying.

Another powerful attribute of chatbot learning is that it replicates what happens when someone does the research themselves. The conversation made them more inclined to agree with the conclusions because they “got this” on their own. In behavioral psychology, that is known as the IKEA effect: something has more value when you participate in its creation.

I am also excited by the study because—again, like the best educators—the chatbots were accurate. Of the claims made by the AI, “99.2% were found to be true, while 0.8% were deemed misleading. None were found to be completely false.”

I am sure that, like mine, your brain is spinning with the possibilities this work opens up. For example, the study’s authors imagine that “social media could be hooked up to LLMs to post corrective responses to people sharing conspiracy theories, or we could buy Google search ads against conspiracy-related search terms.”

Science denialism has been with us for millennia. Internet technology married to social media has made it even more dangerous. Wouldn’t it be a delicious irony if, thanks to AI technology, misinformation finally meets its match?

https://www.fastcompany.com/91203559/how-chatbots-can-win-over-crackpots?partner=rss&utm_source=rss&utm_medium=feed&utm_campaign=rss+fastcompany&utm_content=rss

Created 11 months ago | Oct 7, 2024, 13:30:05

