Tuesday, November 22, 2022

Large Language Models: Towards a holistic benchmark

What a Herculean undertaking!

Of course, these Stanford University researchers exhibit a strong desire to be politically correct: out of the 7 metrics, 3 deal with the dominant ideologies of our time (i.e., toxicity, fairness, bias).

"Stanford studied 30 large language models so you don’t have to 
The university’s Center for Research on Foundation Models has combined several different metrics into one big, holistic benchmark that evaluates the accuracy, calibration, robustness, fairness, bias, toxicity, and efficiency of large language models. I was surprised to see that bigger models didn’t actually translate to better performance."

"... But the AI community lacks the needed transparency: Many language models exist, but they are not compared on a unified standard, and even when language models are evaluated, the full range of societal considerations (e.g., fairness, robustness, uncertainty estimation, commonsense knowledge, disinformation) have not be addressed in a unified way. ...
We believe holistic evaluation involves three elements:
  1. Broad coverage and recognition of incompleteness. ...
  2. Multi-metric measurement. Societally beneficial systems are characterized by many desiderata, but benchmarking in AI often centers on one (usually accuracy). ...
  3. Standardization. ... we should evaluate all the major LMs on the same scenarios to the extent possible.
..."

From the abstract:
"Language models (LMs) are becoming the foundation for almost all major language technologies, but their capabilities, limitations, and risks are not well understood. We present Holistic Evaluation of Language Models (HELM) to improve the transparency of language models. First, we taxonomize the vast space of potential scenarios (i.e. use cases) and metrics (i.e. desiderata) that are of interest for LMs. Then we select a broad subset based on coverage and feasibility, noting what's missing or underrepresented (e.g. question answering for neglected English dialects, metrics for trustworthiness). Second, we adopt a multi-metric approach: We measure 7 metrics (accuracy, calibration, robustness, fairness, bias, toxicity, and efficiency) for each of 16 core scenarios when possible (87.5% of the time). This ensures metrics beyond accuracy don't fall to the wayside, and that trade-offs are clearly exposed. We also perform 7 targeted evaluations, based on 26 targeted scenarios, to analyze specific aspects (e.g. reasoning, disinformation). Third, we conduct a large-scale evaluation of 30 prominent language models (spanning open, limited-access, and closed models) on all 42 scenarios, 21 of which were not previously used in mainstream LM evaluation. Prior to HELM, models on average were evaluated on just 17.9% of the core HELM scenarios, with some prominent models not sharing a single scenario in common. We improve this to 96.0%: now all 30 models have been densely benchmarked on the same core scenarios and metrics under standardized conditions. Our evaluation surfaces 25 top-level findings. For full transparency, we release all raw model prompts and completions publicly for further analysis, as well as a general modular toolkit. We intend for HELM to be a living benchmark for the community, continuously updated with new scenarios, metrics, and models."

"Language Models are Changing AI. We Need to Understand Them": Scholars benchmark 30 prominent language models across a wide range of scenarios and for a broad range of metrics to elucidate their capabilities and risks.
