Food for thought! I just finished reading this critical paper by Dacheng Tao and his team.
"... From the above analysis, we observe a fundamental limitation of dLLMs [Diffusion-based Large Language Models]: despite their efficiency gains, parallel decoding weakens causal dependency and induces fuzzy intermediate states, hindering stable commitment to partial plans or structured outputs. ...
As a result, dLLMs perform poorly on long-horizon reasoning and strictly structured tasks ..."
From the abstract:
"The pursuit of real-time agentic interaction has driven interest in Diffusion-based Large Language Models (dLLMs) as alternatives to auto-regressive backbones, promising to break the sequential latency bottleneck.
However, do such efficiency gains translate into effective agentic behavior? In this work, we present a comprehensive evaluation of dLLMs (e.g., LLaDA, Dream) across two distinct agentic paradigms:
Embodied Agents (requiring long-horizon planning) and
Tool-Calling Agents (requiring precise formatting).
Contrary to the efficiency hype, our results on Agentboard and BFCL reveal a "bitter lesson": current dLLMs fail to serve as reliable agentic backbones, frequently leading to systematic failures.
(1) In Embodied settings, dLLMs suffer from repeated attempts, failing to branch under temporal feedback.
(2) In Tool-Calling settings, dLLMs fail to maintain symbolic precision (e.g., strict JSON schemas) under diffusion noise.
To assess the potential of dLLMs in agentic workflows, we introduce DiffuAgent, a multi-agent evaluation framework that integrates dLLMs as plug-and-play cognitive cores.
Our analysis shows that dLLMs are effective in non-causal roles (e.g., memory summarization and tool selection) but require the incorporation of causal, precise, and logically grounded reasoning mechanisms into the denoising process to be viable for agentic tasks."
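To make "symbolic precision" concrete: a tool-calling backbone must emit output that both parses as JSON and matches an exact schema, so a single corrupted token invalidates the entire call. A minimal stdlib-only sketch of such a strict check (the schema and tool names here are hypothetical illustrations, not from the paper):

```python
import json

# Hypothetical tool-call schema: a function name plus a dict of arguments.
# Illustrative only; real agent frameworks use richer schemas.
TOOL_SCHEMA = {"name": str, "arguments": dict}

def validate_tool_call(raw: str):
    """Return the parsed call if it is valid JSON matching TOOL_SCHEMA, else None."""
    try:
        call = json.loads(raw)
    except json.JSONDecodeError:
        return None  # not even parseable JSON
    if not isinstance(call, dict):
        return None
    for key, expected_type in TOOL_SCHEMA.items():
        if key not in call or not isinstance(call[key], expected_type):
            return None  # missing key or wrong type: reject the whole call
    return call

# A well-formed call passes:
ok = validate_tool_call('{"name": "get_weather", "arguments": {"city": "Paris"}}')
# One corrupted token (an unquoted key) makes the whole call unusable:
bad = validate_tool_call('{name: "get_weather", "arguments": {"city": "Paris"}}')
```

The all-or-nothing rejection is the point: unlike free-form text, where a noisy token degrades output gracefully, structured tool calls fail entirely under even slight diffusion noise.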