Tuesday, May 20, 2025

Study reveals why large language models may struggle to make better decisions when playing games

These models are all too human! Just kidding!

"Researchers from JKU Linz and Google DeepMind identified three key weaknesses that prevent large language models from making good decisions in games like multi-armed bandits and tic-tac-toe. The study found models suffer from greediness (sticking with early promising actions), frequency bias (choosing frequently seen options regardless of success), and a “knowing-doing gap” where models correctly identify optimal actions but choose differently.
Testing with Google’s Gemma 2 models showed reinforcement learning fine-tuning could significantly improve performance, with the smallest model’s tic-tac-toe win rate jumping from 15 percent to 75 percent after training.
The researchers discovered simple interventions like forcing models to try every possible action once at the beginning dramatically improved results, while chain-of-thought reasoning and increased token budgets also proved crucial for better decision-making. Reinforcement learning and increased test-time compute have become hallmarks of LLM-based reasoning models."
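The "try every possible action once at the beginning" intervention is essentially the forced initial sweep long used in classic bandit algorithms: it guarantees every arm gets at least one sample, so a greedy agent cannot lock onto an early mediocre choice forever. A minimal sketch of the idea on a Gaussian multi-armed bandit (illustrative only; not the paper's code, and the arm means are made up):

```python
import random

def run_bandit(arm_means, steps, force_initial_sweep):
    """Greedy bandit agent, optionally forced to try each arm once first.

    Returns (total_reward, per-arm pull counts).
    """
    n_arms = len(arm_means)
    counts = [0] * n_arms
    totals = [0.0] * n_arms
    collected = 0.0
    for t in range(steps):
        if force_initial_sweep and t < n_arms:
            arm = t  # forced sweep: pull each arm exactly once at the start
        else:
            # purely greedy afterwards: pick the best empirical mean so far
            arm = max(range(n_arms),
                      key=lambda a: totals[a] / counts[a] if counts[a] else 0.0)
        reward = random.gauss(arm_means[arm], 1.0)  # noisy reward
        counts[arm] += 1
        totals[arm] += reward
        collected += reward
    return collected, counts

random.seed(0)
means = [0.1, 0.5, 0.9]  # hypothetical arm qualities
# With the sweep, every arm is sampled at least once before going greedy;
# without it, the agent can fixate on whichever arm it happened to try first.
_, counts_with_sweep = run_bandit(means, 200, True)
_, counts_without = run_bandit(means, 200, False)
```

The same mechanism maps onto the LLM setting: prompting the model to emit each legal action once before optimizing gives it the evidence that pure greediness would otherwise never collect.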

From the abstract:
"The success of Large Language Models (LLMs) has sparked interest in various agentic applications.
A key hypothesis is that LLMs, leveraging common sense and Chain-of-Thought (CoT) reasoning, can effectively explore and efficiently solve complex domains. However, LLM agents have been found to suffer from sub-optimal exploration and the knowing-doing gap, the inability to effectively act on knowledge present in the model.
In this work, we systematically study why LLMs perform sub-optimally in decision-making scenarios. In particular, we closely examine three prevalent failure modes: greediness, frequency bias, and the knowing-doing gap.
We propose mitigation of these shortcomings by fine-tuning via Reinforcement Learning (RL) on self-generated CoT rationales. Our experiments across multi-armed bandits, contextual bandits, and Tic-tac-toe, demonstrate that RL fine-tuning enhances the decision-making abilities of LLMs by increasing exploration and narrowing the knowing-doing gap. Finally, we study both classic exploration mechanisms, such as ε-greedy, and LLM-specific approaches, such as self-correction and self-consistency, to enable more effective fine-tuning of LLMs for decision-making."
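The ε-greedy mechanism the abstract refers to is the textbook exploration rule: with probability ε take a uniformly random action, otherwise take the action with the highest current value estimate. A short illustrative sketch (generic, not tied to the paper's implementation):

```python
import random

def epsilon_greedy(estimates, epsilon, rng=random):
    """Return an arm index: random with prob. epsilon, else the greedy pick."""
    if rng.random() < epsilon:
        # explore: uniform random arm
        return rng.randrange(len(estimates))
    # exploit: arm with the highest estimated value
    return max(range(len(estimates)), key=estimates.__getitem__)

# With epsilon=0 this is purely greedy; with epsilon=1 it is pure exploration.
arm = epsilon_greedy([0.1, 0.9, 0.3], 0.1)
```

Applied to an LLM agent, the random branch corresponds to occasionally overriding the model's preferred action, which counteracts both the greediness and the frequency bias the study identifies.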

