
670: LLaMA: GPT-3 performance, 10x smaller

Apr 14, 2023 · 13 min

Episode description

How does Meta AI's natural language model, LLaMA, compare to the rest? Based on the Chinchilla scaling laws, LLaMA is designed to be smaller but more performant. But how exactly does it achieve this feat? It's done by training a smaller model on more data for longer. Discover how LLaMA compares to its competition, including GPT-3, in this week's episode.

Additional materials: www.superdatascience.com/670

Interested in sponsoring a SuperDataScience Podcast episode? Visit JonKrohn.com/podcast for sponsorship information.
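The sizing intuition the episode refers to can be sketched with the parametric loss fit from the Chinchilla paper (Hoffmann et al., 2022): L(N, D) = E + A / N^alpha + B / D^beta, where N is the parameter count and D is the number of training tokens. Below is a rough, illustrative sketch only; the coefficients are the approximate published fit, and the token counts are ballpark public figures (GPT-3 trained on roughly 300B tokens, LLaMA 13B on roughly 1T), not numbers quoted in the episode.

    # Back-of-the-envelope comparison using the Chinchilla parametric loss fit:
    #   L(N, D) = E + A / N**alpha + B / D**beta   (Hoffmann et al., 2022)
    # Coefficients are the approximate published fit; token counts are rough
    # public figures, not values taken from the episode.

    E, A, B = 1.69, 406.4, 410.7
    ALPHA, BETA = 0.34, 0.28

    def predicted_loss(n_params: float, n_tokens: float) -> float:
        """Predicted pretraining loss for a model with n_params parameters
        trained on n_tokens tokens, per the Chinchilla fit."""
        return E + A / n_params**ALPHA + B / n_tokens**BETA

    # GPT-3: ~175B parameters trained on ~300B tokens.
    gpt3 = predicted_loss(175e9, 300e9)

    # LLaMA 13B: roughly 13x smaller, but trained on ~1T tokens.
    llama_13b = predicted_loss(13e9, 1e12)

    print(f"GPT-3 (175B params, 300B tokens): predicted loss ~ {gpt3:.2f}")
    print(f"LLaMA (13B params, 1T tokens):    predicted loss ~ {llama_13b:.2f}")
    # Both land around ~2.0: the much smaller model closes the gap by
    # training on far more data, which is the trade-off the episode covers.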