Large language models (LLMs) like those powering OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude chatbots tend to produce responses aligned with left-of-center political beliefs, according to a new study of 24 major AI products that was published in the journal PLOS One.
David Rozado at Otago Polytechnic University in New Zealand administered 11 popular political orientation tests to the LLMs, prompting them to answer each test's questions. Every test was administered 10 times per model to check that the results were robust, for a total of 2,640 test administrations across the 24 models.
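To illustrate the scale of that protocol, here is a minimal sketch, not Rozado's actual code, of what the testing loop looks like; `ask_model` is a hypothetical stand-in for a real chat-completion API call, and the model and test names are placeholders.

```python
# Sketch of the study's protocol: 24 models x 11 tests x 10 repetitions each.
MODELS = [f"model_{i}" for i in range(24)]   # 24 conversational LLMs under test
TESTS = [f"test_{j}" for j in range(11)]     # 11 political orientation tests
RUNS_PER_PAIR = 10                           # each test repeated 10x per model

def ask_model(model: str, test: str) -> dict:
    """Hypothetical placeholder: submit every question in `test` to `model`
    and return the test's scored result (e.g. a left/right axis score)."""
    return {"model": model, "test": test, "score": 0.0}

results = [
    ask_model(model, test)
    for model in MODELS
    for test in TESTS
    for _ in range(RUNS_PER_PAIR)
]

# 24 * 11 * 10 = 2,640 completed test administrations in total
assert len(results) == 2640
```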
“When administering political orientation tests to conversational LLMs, the responses from these LLMs tend to be classified by the tests as exhibiting left-leaning political preferences,” says Rozado, the sole author of the study.
Not all chatbots showed liberal leanings, though. Five base models from the GPT-3 and Llama 2 series, which had undergone pretraining but no subsequent supervised fine-tuning or reinforcement learning from human feedback, did not display a strong political bias. However, the results were inconclusive because their answers often bore little relation to the questions asked, suggesting they were responding essentially at random. “Further investigation into base models is needed to draw definitive conclusions,” says Rozado.
The PLOS One findings fly in the face of prior research showing that platforms like X favor right-wing viewpoints. In the case of chatbots, the growing ubiquity of AI could mean any political bias has an outsized impact across society.
“If AI systems become deeply integrated into various societal processes such as work, education, and leisure to the extent that they shape human perceptions and opinions, and if these AIs share a common set of biases, it could lead to the spread of viewpoint homogeneity and societal blind spots,” says Rozado.
And in an election year, any finding that a chatbot is biased in any way is likely to be seized upon by politicians with an ax to grind.
Yet Rozado admits that there are few good options for remedying this political slant now that it has been discovered. Knowing that LLM chatbots lean left could make some people swear off using them altogether. But seeding the world with politically diverse chatbots could amplify the problem of filter bubbles, where people gravitate toward AIs that chime with their preexisting beliefs.
While Rozado acknowledged that the chatbots' leanings were likely not deliberately coded in, at least in the mainstream options, he was uncertain whether LLM developers ought to intervene to make their outputs more politically neutral.
“Ideally, AI systems should be maximally oriented towards truth-seeking,” he says. “However, I recognize that creating such a system is likely extremely challenging, and personally I do not know what is the right recipe to create such a system.”