The generative AI revolution has turned into a global race, with a mix of models from private companies and open-source initiatives competing to become the most popular and powerful. Many promote their prowess by showing how their models perform on common benchmarks and where they place in public rankings.
But the legitimacy of those rankings has been thrown into question: new research published on arXiv, Cornell University’s preprint server, shows it’s possible to rig a model’s results with just a few hundred votes.
“When we talk about large language models, their performance on benchmarks is very important,” says study author Tianyu Pang, a researcher at Sea AI Lab, a Singapore-based research group. Strong benchmark results help startups tout the abilities of their models, “which makes some startups motivated to get or manipulate the benchmark,” he says.
To test whether the rankings could be manipulated, Pang and his colleagues looked at Chatbot Arena, a crowdsourced AI benchmarking platform developed by researchers at the University of California, Berkeley, and LMArena. On Chatbot Arena, users are shown the outputs of two chatbots on the same prompt and vote for the one they prefer. Those votes feed into the public rankings the platform shares, which are often regarded as definitive.
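Arena-style leaderboards typically convert those pairwise votes into ratings with an Elo or Bradley-Terry model, which is what makes individual votes consequential. The snippet below is a minimal sketch of a plain Elo update; the starting ratings and K-factor of 32 are illustrative assumptions, not Chatbot Arena’s actual implementation.

```python
# Minimal sketch of how pairwise preference votes can feed an
# Elo-style rating. Illustrative only, not Chatbot Arena's pipeline.

def expected_score(rating_a: float, rating_b: float) -> float:
    """Elo-model probability that model A beats model B."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def update_elo(rating_a: float, rating_b: float, a_won: bool, k: float = 32.0):
    """Apply one vote and return the updated (rating_a, rating_b)."""
    ea = expected_score(rating_a, rating_b)
    score_a = 1.0 if a_won else 0.0
    return (rating_a + k * (score_a - ea),
            rating_b + k * ((1.0 - score_a) - (1.0 - ea)))

# A few hundred one-sided votes move a rating substantially:
a, b = 1200.0, 1250.0
for _ in range(300):
    a, b = update_elo(a, b, a_won=True)
print(f"after 300 wins: A={a:.0f}, B={b:.0f}")
```

Because each vote nudges a rating by at most the K-factor, a few hundred one-sided votes are enough to shift a model’s position noticeably, which is the sensitivity the researchers exploit.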
But Pang and his colleagues identified that it’s possible to sway the ranking position of models with just a few hundred votes. “We just need to take hundreds of new votes to improve a single ranking position,” he says. “The technique is very simple.”
While Chatbot Arena hides the identities of its models when they’re pitted against one another, Pang and his colleagues trained a classifier that identifies which model produced a given response with high accuracy. “Then we can utilize the rating system to more efficiently improve the model ranking with the least number of new votes,” he explains.
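Put together, the rigging loop the researchers describe is simple: de-anonymize each battle with the classifier, then vote only when the target model appears. The sketch below simulates that loop; the classifier is stubbed out with a fixed accuracy, and every name and number is an illustrative assumption rather than the paper’s code.

```python
import random

# Hedged sketch of the attack loop: guess which anonymous model
# produced each side of a battle, and vote only when the target
# model is (probably) present. All values are assumptions.

CLASSIFIER_ACCURACY = 0.95
MODELS = ["model_x", "model_y", "model_z", "target_model"]

def identify_model(true_model: str) -> str:
    """Stand-in for the paper's trained classifier: returns the
    true model most of the time, otherwise a random wrong guess."""
    if random.random() < CLASSIFIER_ACCURACY:
        return true_model
    return random.choice([m for m in MODELS if m != true_model])

def cast_rigged_vote(model_a: str, model_b: str, target: str):
    """Return 'A' or 'B' to favor `target`, or None to abstain
    when the classifier doesn't spot the target in this battle."""
    if identify_model(model_a) == target:
        return "A"
    if identify_model(model_b) == target:
        return "B"
    return None

# Simulate a few hundred anonymous battles and count useful votes.
votes = 0
for _ in range(500):
    a, b = random.sample(MODELS, 2)
    if cast_rigged_vote(a, b, "target_model") is not None:
        votes += 1
print(f"cast {votes} targeted votes out of 500 battles")
```

Abstaining when the target isn’t detected is what keeps the vote budget low: every vote actually cast goes toward the target’s rating rather than being wasted on unrelated matchups.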
The vote-rigging technique was not tested on the live version of Chatbot Arena, so as not to poison the real site’s results; instead, the team ran it on historical data from the platform. Even so, Pang says the attack would work against the live version of Chatbot Arena.
The team behind the ranking platform did not respond to Fast Company’s request for comment. Pang says his last contact with the Chatbot Arena team came in September 2024 (before he conducted the experiment), when he flagged the potential manipulation technique. According to Pang, the team responded by recommending the researchers sandbox-test the approach on historical data. Pang says Chatbot Arena does have multiple anti-cheating mechanisms in place to prevent vote flooding, but that they don’t defend against his team’s technique.
“From the user side, for now, we cannot make sure the rankings are reliable,” says Pang. “It’s the responsibility of the Chatbot Arena team to implement some anti-cheating mechanism to make sure the benchmark is the real level.”