674: Parameter-Efficient Fine-Tuning of LLMs using LoRA (Low-Rank Adaptation)


Apr 28, 2023 · 5 min

Episode description

Models like Alpaca, Vicuña, GPT4All-J, and Dolly 2.0 have relatively small model architectures, but they're prohibitively expensive to train even on a small amount of your own data. The standard model-training protocol can also lead to catastrophic forgetting. In this week's episode, Jon explores a solution to these problems, introducing listeners to Parameter-Efficient Fine-Tuning (PEFT) and the leading approach: Low-Rank Adaptation (LoRA).

Additional materials: www.superdatascience.com/674

Interested in sponsoring a SuperDataScience Podcast episode? Visit JonKrohn.com/podcast for sponsorship information.
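To make the LoRA idea mentioned above concrete (this is an illustrative sketch, not code from the episode): instead of updating a pretrained weight matrix W directly, LoRA freezes W and learns a low-rank update BA, where A and B are small trainable matrices. The dimensions and initialization scheme below are assumptions chosen for illustration.

```python
import numpy as np

# Hypothetical sizes for illustration: model dimension d, LoRA rank r (r << d).
d, r = 512, 8
rng = np.random.default_rng(0)

W = rng.normal(size=(d, d))          # pretrained weight, kept frozen
A = rng.normal(size=(r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                 # initialized to zero so BA starts as a no-op

x = rng.normal(size=(d,))

# Forward pass: frozen output plus the low-rank adaptation.
h = W @ x + B @ (A @ x)

# Only A and B are trained: 2*d*r parameters instead of d*d.
trainable = A.size + B.size          # 8192
full = W.size                        # 262144
print(f"trainable fraction: {trainable / full:.4f}")  # ~3% of the full matrix
```

Because B starts at zero, the adapted model initially reproduces the frozen model exactly, and fine-tuning only touches the small A and B factors, which is what makes the approach parameter-efficient and helps avoid catastrophic forgetting of W.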