Teaching LLMs to Self-Reflect with Reinforcement Learning with Maohao Shen - #726

Apr 08, 2025 · 52 min · Ep. 726

Episode description

Today, we're joined by Maohao Shen, a PhD student at MIT, to discuss his paper, “Satori: Reinforcement Learning with Chain-of-Action-Thought Enhances LLM Reasoning via Autoregressive Search.” We dig into how Satori leverages reinforcement learning to improve language model reasoning—enabling model self-reflection, self-correction, and exploration of alternative solutions. We explore the Chain-of-Action-Thought (COAT) approach, which uses special tokens—continue, reflect, and explore—to guide the model through distinct reasoning actions, allowing it to navigate complex reasoning tasks without external supervision. We also break down Satori’s two-stage training process: format tuning, which teaches the model to understand and use the special action tokens, and reinforcement learning, which optimizes reasoning through trial-and-error self-improvement. We cover key techniques such as “restart and explore,” which allows the model to self-correct and generalize beyond its training domain. Finally, Maohao reviews Satori’s performance and how it compares to other models, the reward design, the benchmarks used, and the surprising observations made during the research. The complete show notes for this episode can be found at https://twimlai.com/go/726.
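To make the COAT idea concrete, here is a minimal Python sketch of how a reasoning trace built from special action tokens might be segmented into discrete reasoning moves. The token strings (`<|continue|>`, `<|reflect|>`, `<|explore|>`) and the `split_coat_trace` helper are assumptions for illustration only, not the paper’s actual token vocabulary or code; see the paper for the real formulation.

```python
# Illustrative sketch of a Chain-of-Action-Thought (COAT) style trace.
# The meta-action token names below are assumed for illustration; the
# Satori paper defines the actual token vocabulary.

CONTINUE = "<|continue|>"   # keep extending the current reasoning path
REFLECT = "<|reflect|>"     # pause and verify the steps taken so far
EXPLORE = "<|explore|>"     # abandon the current path and try an alternative

META_ACTIONS = (CONTINUE, REFLECT, EXPLORE)

def split_coat_trace(trace: str) -> list[tuple[str, str]]:
    """Split a generated trace into (action, reasoning_segment) pairs,
    so the model's output reads as a sequence of reasoning 'moves'."""
    segments: list[tuple[str, str]] = []
    action, start, i = CONTINUE, 0, 0
    while i < len(trace):
        for token in META_ACTIONS:
            if trace.startswith(token, i):
                if i > start:
                    segments.append((action, trace[start:i].strip()))
                action, start = token, i + len(token)
                i = start
                break
        else:
            i += 1
    if start < len(trace):
        segments.append((action, trace[start:].strip()))
    return segments

# Toy trace: the model reflects on a wrong guess, then explores a new path.
trace = (
    "<|continue|>Try x = 2: 2^2 + 1 = 5, not 7."
    "<|reflect|>That guess was wrong; re-check the equation x^2 + 1 = 7."
    "<|explore|>Try x = sqrt(6) instead: 6 + 1 = 7. Correct."
)
for action, text in split_coat_trace(trace):
    print(action, "->", text)
```

In this framing, format tuning would teach the model to emit traces in this token-delimited shape, and the subsequent reinforcement learning stage would reward traces that reach correct answers, including those that recover from mistakes via reflect and explore moves.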