The generative AI revolution has turned into a global race, with models from private companies and open-source initiatives competing to become the most popular and powerful. Many developers promote their prowess by publishing their models' scores on common benchmarks and their positions in regularly updated rankings.
But the legitimacy of those rankings has been thrown into question: new research published on arXiv, Cornell University's preprint server, shows it's possible to rig a model's results with just a few hundred votes.
“When we talk about large language models, their performance on benchmarks is very important,” says study author Tianyu Pang, a researcher at Sea AI Lab, a Singapore-based research group. Strong benchmark results help startups tout the abilities of their models, “which makes some startups motivated to get or manipulate the benchmark,” he says.
To test whether the rankings could be manipulated, Pang and his colleagues looked at Chatbot Arena, a crowdsourced AI benchmarking platform developed by researchers at the University of California, Berkeley, and LMArena. On Chatbot Arena, users are shown two anonymous chatbots' responses to the same prompt and vote for the one they prefer. Those votes feed into the public rankings the platform shares, which are often regarded as definitive.
But Pang and his colleagues found that it's possible to sway a model's ranking position with just a few hundred votes. “We just need to take hundreds of new votes to improve a single ranking position,” he says. “The technique is very simple.”
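Chatbot Arena aggregates pairwise votes into ratings (the live leaderboard fits a Bradley-Terry model over all votes; classic Elo updates are a close online analogue). The simulation below is a rough, hypothetical sketch of the mechanics rather than the paper's actual experiment: all model names and rating values are invented, and an attacker votes for one target model whenever it shows up in a battle, voting arbitrarily otherwise.

```python
import random

def elo_update(ratings, a, b, winner, k=4.0):
    """Standard online Elo update for one pairwise vote between models a and b."""
    expected_a = 1.0 / (1.0 + 10 ** ((ratings[b] - ratings[a]) / 400))
    score_a = 1.0 if winner == a else 0.0
    ratings[a] += k * (score_a - expected_a)
    ratings[b] -= k * (score_a - expected_a)

random.seed(0)
# Hypothetical leaderboard: 50 models spaced 5 rating points apart.
ratings = {f"model_{i:02d}": 1000.0 + 5 * i for i in range(50)}
target, rival = "model_40", "model_41"  # target sits one rank below its rival

votes = 0
while ratings[target] <= ratings[rival]:
    a, b = random.sample(list(ratings), 2)  # platform pairs two anonymous models
    if target in (a, b):
        # Attacker recognizes the target (via the classifier described below)
        # and votes for it.
        winner = target
    else:
        winner = random.choice((a, b))      # any other battle: vote arbitrarily
    elo_update(ratings, a, b, winner)
    votes += 1

print(f"{target} overtook {rival} after {votes} total votes")
```

Because the target appears in only a small fraction of randomly paired battles, most of the attacker's votes land on irrelevant matchups, which is why the total runs into the hundreds rather than the handful of wins the rating gap alone would require.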
While Chatbot Arena keeps the identities of its models secret when they’re pitted against one another, Pang and his colleagues trained a classifier to identify which model produced a given response, with high accuracy. “Then we can utilize the rating system to more efficiently improve the model ranking with the least number of new votes,” he explains.
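The sketch below illustrates the de-anonymization idea, not the authors' actual classifier: query each candidate model through its own API to collect labeled responses, train a text classifier on them, then guess which model produced an anonymous arena response. The model names and training snippets here are toy stand-ins.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Labeled training data gathered by prompting each candidate model directly.
# These four snippets are invented examples of stylistic tells.
train_texts = [
    "Certainly! Here's a step-by-step explanation of the idea...",
    "Sure - let me break that down into three parts.",
    "Certainly! To begin, consider the following example...",
    "Sure - the short answer is yes, because...",
]
train_labels = ["model_a", "model_b", "model_a", "model_b"]

# Bag-of-words features plus logistic regression: a deliberately simple
# stand-in for whatever classifier the researchers actually used.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(train_texts, train_labels)

# At vote time: classify an anonymous response shown in an arena battle.
anonymous_response = "Certainly! Here's how that works..."
print(clf.predict([anonymous_response])[0])  # best guess at which model wrote it
```

Once the attacker can tell which anonymous side is the target model, every vote can be spent where it counts, as in the rating simulation above.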
The researchers didn’t run the vote-rigging experiment on the live version of Chatbot Arena, to avoid poisoning the real website’s results; instead they used historical data from the platform. Even so, Pang says the attack would work against the live site.
The team behind the ranking platform did not respond to Fast Company’s request for comment. Pang says his last contact with Chatbot Arena came in September 2024 (before he conducted the experiment), when he flagged the potential manipulation technique. According to Pang, the Chatbot Arena team responded by recommending the researchers sandbox-test the approach on historical data. Pang says Chatbot Arena does have multiple anti-cheating mechanisms in place to prevent vote flooding, but that they don’t stop his team’s technique.
“From the user side, for now, we cannot make sure the rankings are reliable,” says Pang. “It’s the responsibility of the Chatbot Arena team to implement some anti-cheating mechanism to make sure the benchmark is the real level.”