903: LLM Benchmarks Are Lying to You (And What to Do Instead), with Sinan Ozdemir
Jul 08, 2025 • 1 hr 28 min
Episode description
Has AI benchmarking reached its limits, and what can fill the gap? Sinan Ozdemir speaks to Jon Krohn about the lack of transparency in training data, the necessity of human-led quality assurance for detecting AI hallucinations, when and why to be skeptical of AI benchmarks, and the future of benchmarking agentic and multimodal models.
Additional materials: www.superdatascience.com/903
This episode is brought to you by Trainium2, the latest AI chip from AWS; by Adverity, the conversational analytics platform; and by the Dell AI Factory with NVIDIA.
Interested in sponsoring a SuperDataScience Podcast episode? Email [email protected] for sponsorship information.
In this episode you will learn:
(16:48) Sinan’s new podcast, Practically Intelligent
(21:54) What to know about the limits of AI benchmarking
(53:22) Alternatives to AI benchmarks
(1:01:23) The difficulties in getting a model to recognize its mistakes