Saturday, May 03, 2025

AI hype: The Leaderboard Illusion, a comprehensive analysis and review of the Chatbot Arena

Food for thought! This critique should not be limited to a single leaderboard, since new models are often submitted to several leaderboards. It should also extend to the so-called benchmarks themselves.

Caveat: I did not read the paper.

"The authors of this paper argue that the over-reliance on a single leaderboard can lead to overfitting and gaming of the system, rather than genuine technological advancement. They conducted a systematic review of the Chatbot Arena, analyzing data from 2 million battles, 42 providers, and 243 models over a fixed time period.
Their analysis revealed that a small group of preferred providers were given disproportionate access to data and testing, and that there were significant data asymmetries between proprietary, open-weight, and open-source model providers. They also found that the Arena's deprecation policies led to unreliable model rankings."
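For context on the mechanics being gamed: Arena-style leaderboards rank models from pairwise human-preference "battles". Below is a minimal Elo-style update for such battles. This is my own illustrative sketch, not code from the paper; the actual Chatbot Arena ranking is fit with a Bradley-Terry model over all battles, and the constants here (starting rating, k-factor) are assumptions.

# Illustrative sketch: Elo-style rating update from pairwise battles.
# Chatbot Arena's real ranking uses a Bradley-Terry fit; constants here
# are assumed values for illustration only.

def expected_score(r_a: float, r_b: float) -> float:
    # Probability that model A beats model B under a logistic (Elo) model.
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def elo_update(r_a: float, r_b: float, a_won: bool, k: float = 32.0):
    # Return updated ratings for A and B after a single battle.
    e_a = expected_score(r_a, r_b)
    s_a = 1.0 if a_won else 0.0
    return r_a + k * (s_a - e_a), r_b - k * (s_a - e_a)

# Toy usage: two models start equal; A wins two of three battles.
ra, rb = 1000.0, 1000.0
for a_won in (True, True, False):
    ra, rb = elo_update(ra, rb, a_won)
print(round(ra, 1), round(rb, 1))

The only point of the sketch is that every battle a model is allowed to fight, or has quietly removed, shifts its published rating, which is why the sampling and deprecation policies the authors examine matter.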

From the abstract:
"Measuring progress is fundamental to the advancement of any scientific field. As benchmarks play an increasingly central role, they also grow more susceptible to distortion.
Chatbot Arena has emerged as the go-to leaderboard for ranking the most capable AI systems. Yet, in this work we identify systematic issues that have resulted in a distorted playing field.
We find that undisclosed private testing practices benefit a handful of providers who are able to test multiple variants before public release and retract scores if desired.
We establish that the ability of these providers to choose the best score leads to biased Arena scores due to selective disclosure of performance results.
At an extreme, we identify 27 private LLM variants tested by Meta in the lead-up to the Llama-4 release.
We also establish that proprietary closed models are sampled at higher rates (number of battles) and have fewer models removed from the arena than open-weight and open-source alternatives.
Both these policies lead to large data access asymmetries over time.
Providers like Google and OpenAI have received an estimated 19.2% and 20.4% of all data on the arena, respectively.
In contrast, a combined 83 open-weight models have only received an estimated 29.7% of the total data.
We show that access to Chatbot Arena data yields substantial benefits; even limited additional data can result in relative performance gains of up to 112% on the arena distribution, based on our conservative estimates.
Together, these dynamics result in overfitting to Arena-specific dynamics rather than general model quality. The Arena builds on the substantial efforts of both the organizers and an open community that maintains this valuable evaluation platform.
We offer actionable recommendations to reform the Chatbot Arena's evaluation framework and promote fairer, more transparent benchmarking for the field."
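To see why the "test many private variants, publish only the best" practice described in the abstract inflates scores, here is a small simulation of my own. Variant scores are modeled as noisy draws around a single true quality, and only the maximum is disclosed. The true rating, noise level, and trial count are assumptions for illustration; the paper's estimates are its own.

# Illustrative sketch of selective disclosure: test n private variants,
# publish only the best observed score. All numbers are assumed.

import random
import statistics

random.seed(0)

TRUE_QUALITY = 1200.0   # assumed true Arena-style rating of the model family
NOISE_SD = 25.0         # assumed run-to-run measurement noise
TRIALS = 10_000

def best_of(n: int) -> float:
    # Expected published score when n variants are tested privately
    # and only the maximum observed score is disclosed.
    return statistics.mean(
        max(random.gauss(TRUE_QUALITY, NOISE_SD) for _ in range(n))
        for _ in range(TRIALS)
    )

for n in (1, 3, 10, 27):   # 27 mirrors the Llama-4 example in the abstract
    print(f"variants tested: {n:>2}  expected published score: {best_of(n):.1f}")

Even with purely random variation, reporting the best of 27 noisy measurements yields a score well above the true quality. That selection effect, rather than any real capability gain, is the bias the authors flag.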

These are the authors' affiliations: "Cohere Labs, Cohere, Princeton University, Stanford University, University of Waterloo, Massachusetts Institute of Technology, Allen Institute for Artificial Intelligence, University of Washington"

Last Week in AI #308 - The Leaderboard Illusion, ChatGPT Glazing, Qwen 3, Ernie X1