650: SparseGPT: Remove 100 Billion Parameters but Retain 100% Accuracy

Feb 03, 2023 · 8 min

Episode description

SparseGPT is a noteworthy one-shot pruning technique that can halve the size of large language models like GPT-3 without adversely affecting accuracy. In this episode, Jon Krohn provides an overview of this development and explains its commercial and environmental implications.

Additional materials: www.superdatascience.com/650

Interested in sponsoring a SuperDataScience Podcast episode? Visit JonKrohn.com/podcast for sponsorship information.
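To make "one-shot pruning" concrete: the idea is to zero out a large fraction of a model's weights in a single pass, with no retraining. SparseGPT itself does this by solving a layer-wise sparse reconstruction problem and adjusting the surviving weights; the sketch below is NOT that algorithm, just a minimal magnitude-pruning illustration of removing 50% of a weight matrix in one shot.

```python
import numpy as np

def one_shot_prune(weights: np.ndarray, sparsity: float = 0.5) -> np.ndarray:
    """Zero out the smallest-magnitude weights in a single pass.

    Simplified illustration of one-shot pruning. SparseGPT's actual method
    instead chooses weights via a layer-wise reconstruction objective and
    updates the remaining weights to compensate.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to remove
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    mask = np.abs(weights) > threshold  # keep only larger-magnitude weights
    return weights * mask

# Demo: prune a random 8x8 weight matrix to 50% sparsity.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))
W_sparse = one_shot_prune(W, sparsity=0.5)
print(f"sparsity: {(W_sparse == 0).mean():.2f}")
```

At GPT-3 scale this kind of mask removes tens of billions of parameters; the episode's point is that SparseGPT's smarter weight selection lets that happen with essentially no loss in accuracy.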