Friday, February 20, 2026

Meet OAT: The New Action Tokenizer Bringing LLM-Style Scaling and Flexible, Anytime Inference to the Robotics World


"Ordered Action Tokenization (OAT), developed by researchers at Harvard and Stanford, is a new framework that enables robots to learn and move using the same autoregressive methods as large language models. Traditional robot action tokenizers either produced prohibitively long token sequences, lacked structure, or yielded token sequences that could not be reliably decoded back into valid actions. OAT addresses these issues by satisfying three "desiderata":
high compression,
total decodability, and a 
left-to-right causal ordering.
Using a technique called Nested Dropout, OAT forces the most important global movements into the first few tokens, while later tokens add fine-grained details. This unique "ordered" structure allows for anytime inference, where a robot can stop generating tokens early to react quickly or continue for higher precision. Across more than 20 tasks, OAT consistently outperformed industry-standard diffusion policies and other tokenization methods, offering a more scalable and flexible foundation for future robotic control ..."
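The nested-dropout idea described above can be sketched in a few lines. This is a toy illustration, not the paper's implementation: the token count, dimensions, uniform prefix sampling, and function names are all illustrative assumptions. During training, everything after a randomly sampled prefix length is zeroed, so the decoder must recover the action chunk from the earliest tokens alone; that pressure is what pushes coarse, global movement into the front of the sequence.

```python
import numpy as np

rng = np.random.default_rng(0)

def nested_dropout_mask(num_tokens, rng):
    """Sample a keep-length k, then zero out every token after it.
    Because any prefix [0, k) must reconstruct the action on its own,
    the earliest tokens are forced to carry the most global information."""
    k = rng.integers(1, num_tokens + 1)   # keep tokens [0, k), drop the rest
    mask = np.zeros(num_tokens)
    mask[:k] = 1.0
    return mask

# Toy token sequence: 8 tokens, each a 16-dim latent (sizes are made up).
tokens = rng.standard_normal((8, 16))
masked = tokens * nested_dropout_mask(8, rng)[:, None]
```

At inference time the same structure gives the "anytime" behavior: decoding after only the first few tokens yields a coarse but valid action, and each additional token refines it.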

From the abstract:
"Autoregressive policies offer a compelling foundation for scalable robot learning by enabling discrete abstraction, token-level reasoning, and flexible inference. However, applying autoregressive modeling to continuous robot actions requires an effective action tokenization scheme. Existing approaches either rely on analytical discretization methods that produce prohibitively long token sequences, or learned latent tokenizers that lack structure, limiting their compatibility with next-token prediction. In this work, we identify three desiderata for action tokenization - high compression, total decodability, and a left-to-right causally ordered token space - and introduce Ordered Action Tokenization (OAT), a learned action tokenizer that satisfies all three. OAT discretizes action chunks into an ordered sequence of tokens using a transformer with registers, finite scalar quantization, and ordering-inducing training mechanisms. The resulting token space aligns naturally with autoregressive generation and enables prefix-based detokenization, yielding an anytime trade-off between inference cost and action fidelity. Across more than 20 tasks spanning four simulation benchmarks and real-world settings, autoregressive policies equipped with OAT consistently outperform prior tokenization schemes and diffusion-based baselines, while offering significantly greater flexibility at inference time."
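For readers unfamiliar with finite scalar quantization (FSQ), which the abstract names as OAT's quantizer, here is a minimal sketch of the core operation; the bounding function, grid size (`levels=8`), and helper name are illustrative choices, not values from the paper. The key property is that every code lies on a fixed grid, which is what makes the token space totally decodable: there are no "dead" entries that fail to map back to an action.

```python
import numpy as np

def fsq_quantize(z, levels=8):
    """Finite scalar quantization, per latent dimension:
    bound the value to (-1, 1), then snap it to one of `levels`
    evenly spaced grid points. Every possible code decodes by
    construction -- no learned codebook, no unused entries."""
    z = np.tanh(z)                    # bound each dimension to (-1, 1)
    half = (levels - 1) / 2.0
    return np.round(z * half) / half  # snap to the `levels`-point grid

# Toy latent vector (values are made up).
z = np.array([0.3, -2.0, 0.05])
print(fsq_quantize(z))
```

In the straight-through training setup usually paired with FSQ, gradients flow through the rounding step as if it were the identity; the sketch above shows only the forward quantization.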

OAT: Ordered Action Tokenization (preprint, open access)



