Thursday, January 08, 2026

Evaluating AI’s ability to perform scientific research tasks | OpenAI

Good news! A new benchmark!

"... As accelerating scientific progress is one of the most promising opportunities for AI to benefit humanity, we’re improving our models on difficult math and science tasks and working on the tools that will help scientists get the most from them. ..."

From the abstract:
"We introduce FrontierScience, a benchmark evaluating AI capabilities for expert level scientific reasoning.
FrontierScience consists of two tracks:
(1) Olympiad, which contains international olympiad problems (at the level of IPhO, IChO, and IBO), and
(2) Research, which contains PhD-level, open-ended problems representative of sub-problems in scientific research.
In total, FrontierScience is composed of several hundred questions (160 in the open-sourced gold set) covering subfields across physics, chemistry, and biology, from quantum electrodynamics to synthetic organic chemistry.
Recent model progress has nearly saturated existing science benchmarks, which often rely on multiple-choice knowledge questions or already published information.
In contrast, all Olympiad problems are originally produced by international olympiad medalists and national team coaches to ensure standards of difficulty, originality, and factuality.
All Research problems are research sub-tasks written and verified by PhD scientists (doctoral candidates, postdoctoral researchers, or professors). For Research, we also introduce a granular rubric-based architecture to evaluate model capabilities throughout the process of solving a research task, as opposed to judging a standalone answer. In initial evaluations of several frontier models, GPT-5.2 is the top performing model on FrontierScience, scoring 77% on the Olympiad set and 25% on the Research set."
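The rubric-based grading for the Research track is the most interesting methodological detail: instead of marking a single final answer, the grader awards partial credit for each step of the research process. As a rough illustration only (the rubric fields, weights, and scoring function below are my own assumptions, not taken from the paper), a process-level grader might aggregate weighted per-criterion credit like this:

```python
from dataclasses import dataclass

@dataclass
class RubricItem:
    # One criterion in the rubric, e.g. "states the correct governing equation".
    description: str
    weight: float        # relative importance of this criterion
    awarded: float = 0.0 # fraction of credit granted by the grader, in [0, 1]

def score_submission(rubric: list[RubricItem]) -> float:
    """Aggregate weighted partial credit into a single 0-100 score."""
    total_weight = sum(item.weight for item in rubric)
    earned = sum(item.weight * item.awarded for item in rubric)
    return 100.0 * earned / total_weight if total_weight else 0.0

# Hypothetical rubric for one Research-track sub-problem.
rubric = [
    RubricItem("Identifies the relevant reaction mechanism", weight=2.0, awarded=1.0),
    RubricItem("Derives the rate law correctly", weight=3.0, awarded=0.5),
    RubricItem("States the final numerical answer with units", weight=1.0, awarded=0.0),
]

print(f"Rubric score: {score_submission(rubric):.1f}%")  # -> 58.3%
```

The point of this kind of scheme is that a model can earn credit for sound intermediate reasoning even when the final answer is wrong, which matches the paper's stated goal of "evaluating model capabilities throughout the process of solving a research task."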

Evaluating AI’s ability to perform scientific research tasks | OpenAI: "We introduce FrontierScience, a new benchmark that evaluates AI capabilities for expert-level scientific reasoning across physics, chemistry, and biology."

FrontierScience: Evaluating AI’s Ability to Perform Expert-Level Scientific Tasks (the paper appears to be published only internally for now)

Credits: Last Week in AI




