The generative AI revolution has turned into a global race, with a mix of models from private companies and open-source initiatives all competing to become the most popular and powerful. Many promote their prowess by publishing their performance on common benchmarks and their positions in regularly updated rankings.
But the legitimacy of those rankings has been thrown into question: new research published on the preprint server arXiv shows it's possible to rig a model's results with just a few hundred votes.
“When we talk about large language models, their performance on benchmarks is very important,” says study author Tianyu Pang, a researcher at Sea AI Lab, a Singapore-based research group. Strong benchmark results help startups tout the abilities of their models, “which makes some startups motivated to get or manipulate the benchmark,” he says.
To test whether manipulation of the rankings was possible, Pang and his colleagues looked at Chatbot Arena, a crowdsourced AI benchmarking platform developed by researchers at the University of California Berkeley and LMArena. On Chatbot Arena, users pit two chatbots against each other on the same prompts and vote for the output they prefer. Those votes feed into the wider rankings that the platform shares publicly, and which are often regarded as definitive.
But Pang and his colleagues found that it's possible to sway a model's ranking position with just a few hundred votes. “We just need to take hundreds of new votes to improve a single ranking position,” he says. “The technique is very simple.”
While Chatbot Arena keeps the identities of its models secret when they're pitted against one another, Pang and his colleagues trained a classifier that identifies which model produced a given output with high accuracy. “Then we can utilize the rating system to more efficiently improve the model ranking with the least number of new votes,” he explains.
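To see why de-anonymizing the models matters, consider a minimal sketch of the mechanism (this is illustrative, not the authors' code): an attacker only casts a rigged vote when a classifier flags the response as coming from the target model, and each such win feeds an Elo-style rating update of the kind arena leaderboards popularized. The K-factor, starting ratings, and the stand-in classifier below are all assumptions for illustration, not Chatbot Arena's actual settings.

```python
# Hypothetical sketch of targeted vote-rigging against an Elo-style
# rating. All parameters here (K=4, starting ratings of 1150) are
# illustrative assumptions, not Chatbot Arena's real configuration.

def elo_update(r_winner, r_loser, k=4.0):
    """Standard Elo update: the winner gains what the loser drops."""
    expect = 1.0 / (1.0 + 10 ** ((r_loser - r_winner) / 400.0))
    delta = k * (1.0 - expect)
    return r_winner + delta, r_loser - delta

def looks_like_target(response: str) -> bool:
    # Stand-in for the trained classifier; the researchers report it
    # can fingerprint a model from its outputs with high accuracy.
    return "target-style" in response

target, other = 1150.0, 1150.0
battles = ["target-style answer"] * 100 + ["unrelated answer"] * 100
for response in battles:
    if looks_like_target(response):
        # Rigged vote: always prefer the identified target model.
        target, other = elo_update(target, other)
    # Otherwise abstain, so no votes are wasted on the wrong pairing.

print(round(target - 1150.0, 1))  # net rating gained from 100 rigged votes
```

The classifier is what makes the attack cheap: without it, rigged votes scatter across random model pairings, and far more of them are needed to move the target's rating by the same amount.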
The researchers did not run the vote-rigging attack on the live version of Chatbot Arena, to avoid poisoning the real website's results; instead they tested it on historical data from the ranking platform. Even so, Pang says it would be possible to carry out against the live version of Chatbot Arena.
The team behind the ranking platform did not respond to Fast Company's request for comment. Pang says his last contact with Chatbot Arena came in September 2024 (before he conducted the experiment), when he flagged the potential technique for manipulating the results. According to Pang, the Chatbot Arena team responded by recommending that the researchers sandbox-test the technique on historical data. Pang says that Chatbot Arena does have multiple anti-cheating mechanisms in place to prevent vote flooding, but that they don't mitigate his team's technique.
“From the user side, for now, we cannot make sure the rankings are reliable,” says Pang. “It’s the responsibility of the Chatbot Arena team to implement some anti-cheating mechanism to make sure the benchmark is the real level.”