What happened when climate deniers met an AI chatbot?

If you’ve heard anything about the relationship between Big Tech and climate change, it’s probably that the data centers powering our online lives use a mind-boggling amount of energy. And some of the newest energy hogs on the block are artificial intelligence tools like ChatGPT. Some researchers suggest that ChatGPT alone might use as much electricity in a typical day as 33,000 U.S. households, a number that could balloon as the technology becomes more widespread.

The staggering emissions add to a general tenor of panic driven by headlines about AI stealing jobs, helping students cheat, or, who knows, taking over. Already, some 100 million people use OpenAI’s most famous chatbot on a weekly basis, and even those who don’t use it likely encounter AI-generated content often. But a recent study points to an unexpected upside of that wide reach: Tools like ChatGPT could teach people about climate change, and possibly shift deniers closer to accepting the overwhelming scientific consensus that global warming is happening and caused by humans.

In a study recently published in the journal Scientific Reports, researchers at the University of Wisconsin-Madison asked people to strike up a climate conversation with GPT-3, a large language model released by OpenAI in 2020. (ChatGPT runs on GPT-3.5 and 4, updated versions of GPT-3). Large language models are trained on vast quantities of data, allowing them to identify patterns to generate text based on what they’ve seen, conversing somewhat like a human would. The study is one of the first to analyze GPT-3’s conversations about social issues like climate change and Black Lives Matter. It analyzed the bot’s interactions with more than 3,000 people, mostly in the United States, from across the political spectrum. Roughly a quarter of them came into the study with doubts about established climate science, and they tended to come away from their chatbot conversations a little more supportive of the scientific consensus.

That doesn’t mean they enjoyed the experience, though. They reported feeling disappointed after chatting with GPT-3 about the topic, rating the bot’s likability about half a point or lower on a five-point scale. That creates a dilemma for the people designing these systems, said Kaiping Chen, an author of the study and a professor of computational communication at the University of Wisconsin-Madison. As large language models continue to develop, the study says, they could begin to respond to people in a way that matches users’ opinions—regardless of the facts.

“You want to make your user happy; otherwise, they’re going to use other chatbots. They’re not going to get onto your platform, right?” Chen said. “But if you make them happy, maybe they’re not going to learn much from the conversation.”

Prioritizing user experience over factual information could lead ChatGPT and similar tools to become vehicles for bad information, like many of the platforms that shaped the internet and social media before it. Facebook, YouTube, and Twitter, now known as X, are awash in lies and conspiracy theories about climate change. Last year, for instance, posts with the hashtag #climatescam got more likes and retweets on X than ones with #climatecrisis or #climateemergency.

“We already have such a huge problem with dis- and misinformation,” said Lauren Cagle, a professor of rhetoric and digital studies at the University of Kentucky. Large language models like ChatGPT “are teetering on the edge of exploding that problem even more.”

The University of Wisconsin-Madison researchers found that the kind of information GPT-3 delivered depended on whom it was talking to. For conservatives and people with less education, it tended to use words associated with negative emotions and talk about the destructive outcomes of global warming, from drought to rising seas. For those who supported the scientific consensus, it was more likely to talk about the things you can do to reduce your carbon footprint, like eating less meat or walking and biking when you can.

What GPT-3 told them about climate change was surprisingly accurate, according to the study: Only 2 percent of its responses went against the commonly understood facts about climate change. Even so, these AI tools reflect what they’ve been fed and are liable to slip up sometimes. Last April, an analysis from the Center for Countering Digital Hate, a U.K. nonprofit, found that Google’s chatbot, Bard, told one user, without additional context: “There is nothing we can do to stop climate change, so there is no point in worrying about it.”

It’s not difficult to use ChatGPT to generate misinformation, though OpenAI does have a policy against using the platform to intentionally mislead others. It took some prodding, but I managed to get GPT-4, the latest public version, to write a paragraph laying out the case for coal as the fuel of the future, even though it initially tried to steer me away from the idea. The resulting paragraph mirrors fossil fuel propaganda, touting “clean coal,” a misnomer used to market coal as environmentally friendly.

There’s another problem with large language models like ChatGPT: They’re prone to “hallucinations,” or making up information. Even simple questions can turn up bizarre answers that fail a basic logic test. I recently asked GPT-4, for instance, how many toes a possum has (don’t ask why). It responded, “A possum typically has a total of 50 toes, with each foot having 5 toes.” It corrected course only after I questioned whether a possum had 10 limbs. “My previous response about possum toes was incorrect,” the chatbot said, updating the count to the correct answer, 20 toes.

Despite these flaws, there are potential upsides to using chatbots to help people learn about climate change. In a normal, human-to-human conversation, lots of social dynamics are at play, especially between groups of people with radically different worldviews. If an environmental advocate tries to challenge a coal miner’s views about global warming, for example, it might make the miner defensive, leading them to dig in their heels. A chatbot conversation presents more neutral territory.

“For many people, it probably means that they don’t perceive the interlocutor, or the AI chatbot, as having identity characteristics that are opposed to their own, and so they don’t have to defend themselves,” Cagle said. That’s one explanation for why climate deniers might have softened their stance slightly after chatting with GPT-3.

There’s now at least one chatbot aimed specifically at providing quality information about climate change. Last month, a group of startups launched “ClimateGPT,” an open-source large language model trained on climate-related research in the natural sciences, economics, and other social sciences. One of the goals of the ClimateGPT project was to generate high-quality answers without sucking up an enormous amount of electricity. It uses 12 times less computing energy than a comparable large language model, according to Christian Dugast, a natural language scientist at AppTek, a Virginia-based artificial intelligence company that helped fine-tune the new bot.

ClimateGPT isn’t quite ready for the general public “until proper safeguards are tested,” according to its website. Despite the problems Dugast is working on addressing—the “hallucinations” and factual failures common among these chatbots—he thinks it could be useful for people hoping to learn more about some aspect of the changing climate.

“The more I think about this type of system,” Dugast said, “the more I am convinced that when you’re dealing with complex questions, it’s a good way to get informed, to get a good start.”


This article originally appeared in Grist, a nonprofit, independent media organization dedicated to telling stories of climate solutions and a just future.

https://www.fastcompany.com/91022516/what-happened-when-climate-deniers-met-an-ai-chatbot

Published Feb 5, 2024, 07:50:07

