
Scott & Mark Learn To… Induced Hallucinations

May 28, 2025 · 25 min · Season 1, Ep. 17

Episode description

In this episode of Scott and Mark Learn To…, Scott Hanselman and Mark Russinovich dive into the chaotic world of large language models, hallucinations, and grounded AI. Through hilarious personal stories, they explore the differences between jailbreaks, induced hallucinations, and factual grounding in AI systems. With live prompts and screen shares, they test the limits of AI's reasoning and reflect on the evolving challenges of trust, creativity, and accuracy in today's tools.


Takeaways:    

  • AI is getting better, but we still need to be careful and double-check our work
  • AI sometimes gives wrong answers confidently 
  • Jailbreaks break the rules on purpose, while hallucinations are just AI making stuff up 

   

Who are they?     

View Scott Hanselman on LinkedIn  

View Mark Russinovich on LinkedIn   

 

Watch Scott and Mark Learn on YouTube 

       

Listen to other episodes at scottandmarklearn.to  

         

Discover and follow other Microsoft podcasts at microsoft.com/podcasts   

Hosted on Acast. See acast.com/privacy for more information.
