
Are reasoning models fundamentally flawed?

Jun 20, 2025 · 17 min · Ep. 306

Episode description

AI reasoning models have emerged in the past year as a beacon of hope for large language models (LLMs), with AI developers such as OpenAI, Google, and Anthropic selling them as the go-to solution for the most complex business problems.

However, a new research paper from Apple has cast significant doubt on the efficacy of reasoning models, going so far as to suggest that when a problem becomes too complex, they simply give up. What's going on here? And does it mean reasoning models are fundamentally flawed?

In this episode, Rory Bathgate speaks to ITPro's news and analysis editor Ross Kelly about the report's key findings and what they mean for the future of AI development.
