903: LLM Benchmarks Are Lying to You (And What to Do Instead), with Sinan Ozdemir

Jul 08, 2025 · 1 hr 28 min

Episode description

Has AI benchmarking reached its limit, and what can fill the gap? Sinan Ozdemir speaks to Jon Krohn about the lack of transparency in training data, the necessity of human-led quality assurance for detecting AI hallucinations, when and why to be skeptical of AI benchmarks, and the future of benchmarking agentic and multimodal models.

Additional materials: www.superdatascience.com/903

This episode is brought to you by Trainium2, the latest AI chip from AWS; by Adverity, the conversational analytics platform; and by the Dell AI Factory with NVIDIA. Interested in sponsoring a SuperDataScience Podcast episode? Email [email protected] for sponsorship information.

In this episode you will learn:
(16:48) Sinan's new podcast, Practically Intelligent
(21:54) What to know about the limits of AI benchmarking
(53:22) Alternatives to AI benchmarks
(1:01:23) The difficulties in getting a model to recognize its mistakes
Super Data Science: ML & AI Podcast with Jon Krohn