SN 1029: The Illusion of Thinking - Meta Apps and JavaScript Collusion - podcast episode cover

SN 1029: The Illusion of Thinking - Meta Apps and JavaScript Collusion

Jun 11, 2025 · 2 hr 46 min · Ep. 1029

Episode description

  • In memoriam: Bill Atkinson
  • Meta native apps & JavaScript collude for a localhost local mess.
  • The EU rolls out its own DNS4EU filtered DNS service.
  • Ukraine DDoS's Russia's Railway DNS ... and... so what?
  • The Linux Foundation creates an alternative WordPress package manager.
  • Court tells OpenAI it must NOT delete ANYONE's chats. Period! :(
  • A CVSS 10.0 in Erlang/OTP's SSH library.
  • Can Russia intercept Telegram? Perhaps.
  • Spain's ISPs mistakenly block Google sites.
  • Reddit sues Anthropic.
  • Twitter's new encrypted DMs are as lame as the old ones.
  • The Login.gov site may not have any backups.
  • Apple explores the question of recent Large Reasoning Models "thinking"

Show Notes - https://www.grc.com/sn/SN-1029-Notes.pdf

Hosts: Steve Gibson and Leo Laporte

Download or subscribe to Security Now at https://twit.tv/shows/security-now.

You can submit a question to Security Now at the GRC Feedback Page.

For 16kbps versions, transcripts, and notes (including fixes), visit Steve's site: grc.com, also the home of the best disk maintenance and recovery utility ever written, SpinRite 6.

Join Club TWiT for Ad-Free Podcasts!
Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit

Transcript

The Illusion of Thinking: Apple's Research Reveals the Hidden Limits of AI Reasoning Models

Jun 13th 2025

AI-created, human-edited.

In a recent episode of Security Now, hosts Leo Laporte and Steve Gibson dove deep into Apple's fascinating new research paper that challenges our fundamental assumptions about artificial intelligence capabilities. The study, titled "The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity," offers a sobering look at what AI can and cannot actually do.

Apple's researchers made a startling discovery about Large Reasoning Models (LRMs) like OpenAI's o1 and o3, DeepSeek R1, and Claude 3.7 Sonnet Thinking. Despite their sophisticated "thinking" mechanisms and impressive performance on established benchmarks, these models experience what the researchers call "complete accuracy collapse beyond certain complexities."

As Steve Gibson explained on the show, "There's a cliff" where performance simply falls off entirely. This isn't a gradual decline—it's a dramatic failure that reveals fundamental limitations in how these systems operate.

Apple's researchers chose the classic Towers of Hanoi puzzle as one of their primary testing grounds—a brilliant choice that both hosts appreciated. Leo Laporte reminisced about solving this puzzle as a child, noting how it helped him understand recursion later in programming.

The puzzle's beauty lies in its scalability. With just one or two disks, both standard language models and reasoning models perform perfectly. But as complexity increases:

• 1-2 disks: Both models achieve 100% success
• 3 disks: The "thinking" model surprisingly underperformed the simpler LLM by about 4%
• 4 disks: The reasoning model maintained 100% while the standard LLM collapsed to 35%
• 8 disks: The reasoning model managed about 10% success
• 10 disks: Complete failure for both model types
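To make that scaling concrete, here is a minimal Python sketch of the classic recursive solution Leo alluded to (our illustration, not code from Apple's paper). The optimal solution for n disks always takes 2^n - 1 moves, so each added disk roughly doubles the length of a correct answer, which is what gives the researchers a smooth, controllable dial for problem complexity.

    # Classic recursive Tower of Hanoi: move n disks from source to target.
    def hanoi(n, source, target, spare, moves):
        if n == 0:
            return
        hanoi(n - 1, source, spare, target, moves)   # clear the n-1 smaller disks out of the way
        moves.append((source, target))               # move the largest remaining disk
        hanoi(n - 1, spare, target, source, moves)   # restack the smaller disks on top of it

    # The move count grows exponentially: 2^n - 1.
    for disks in (3, 8, 10):
        moves = []
        hanoi(disks, "A", "C", "B", moves)
        print(f"{disks} disks -> {len(moves)} moves")   # 7, 255, and 1023 moves

A three-disk game is a 7-step answer, while the ten-disk games where both model types failed require 1,023 error-free steps in a row.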

The research revealed three fascinating performance patterns:

• Low Complexity: Standard LLMs surprisingly outperform reasoning models (more efficient, less "overthinking")
• Medium Complexity: Reasoning models show their advantage with additional thinking capability
• High Complexity: Both model types experience complete performance collapse

Perhaps the most eye-opening discovery was that even when researchers provided the exact algorithm for solving the Tower of Hanoi puzzle, the models' performance didn't improve. As Gibson emphasized, "They gave them the answer and it didn't help."

This suggests that these models aren't actually following logical steps or understanding algorithms—they're engaging in sophisticated pattern matching that breaks down when patterns become too complex or unfamiliar.

Both hosts discussed a critical issue plaguing AI evaluation: data contamination. Many benchmarks may have been encountered during training, making it impossible to distinguish between genuine reasoning and high-level memorization. Apple's use of controllable puzzle environments helps address this concern by enabling systematic complexity manipulation.

Leo Laporte's early characterization of AI as "fancy spell correction" appears increasingly prescient. The Apple research supports this view, suggesting that while these systems demonstrate impressive pattern matching capabilities, they don't exhibit genuine understanding or reasoning.

Steve Gibson concluded that we need new terminology to describe these capabilities—one that doesn't anthropomorphize what these systems actually do. "AI does not need to become AGI or self-aware to be useful," he noted, "and, frankly, I would strongly prefer that it did not."

The research raises crucial questions about the fundamental approach underlying current AI systems. Will scaling up these language model-based systems overcome these limitations, or do they represent inherent barriers to generalizable reasoning?

As Gibson pointed out, this is a rapidly evolving field where any conclusions need "date stamps" and "expiration dates." However, the study provides valuable insights into the current state of AI capabilities and limitations.

For those building applications with AI or relying on these systems for complex reasoning tasks, Apple's research offers important guidance:

• AI excels at medium-complexity problems where additional "thinking" provides advantages
• Performance can collapse entirely at high complexity levels
• Simple problems might be better handled by standard language models
• Understanding these limitations is crucial for appropriate deployment

While AI systems continue to impress and provide genuine utility, Apple's research reinforces that we're dealing with sophisticated pattern matching rather than true reasoning or understanding. This doesn't diminish their value—it simply helps us understand what we actually have and set appropriate expectations for what these systems can and cannot do.

As the AI landscape continues to evolve rapidly, studies like Apple's provide crucial insights into the nature of these remarkable but ultimately limited systems. The "illusion of thinking" may be compelling, but understanding the reality behind it is essential for making informed decisions about AI's role in our future.
