DeepSeek, a name almost no one had heard of until last week, has since given Big Tech a run for its money.
Soon after the news broke, roughly $1 trillion was wiped from the market caps of Nvidia and other giants.
AI researchers call it “distillation,” but aren’t LLMs recursive by design? Everyone’s distilling someone else’s work.
The real question: did DeepSeek actually crack efficient training, or did it just drop the right meme at the right time?
We’re also on ↓
X: https://twitter.com/moreorlesspod
Instagram: https://instagram.com/moreorless
Spotify: https://podcasters.spotify.com/pod/show/moreorlesspod
Connect with us here:
1) Sam Lessin: https://x.com/lessin
2) Dave Morin: https://x.com/davemorin
3) Jessica Lessin: https://x.com/Jessicalessin
4) Brit Morin: https://x.com/brit
(00:00:00) Trailer
(00:01:03) Under the weather
(00:01:43) News overload
(00:05:16) Subscriptions, trust, parasocial relationships
(00:14:21) DeepSeek and meme warfare
(00:29:09) Meme coins and the future of digital economy
(00:39:03) Regulation and trends
(00:44:04) AI is the tale of two cities
(00:49:40) Meta's strategic position
(00:55:29) DeepSeek questions
(00:59:13) Sam's watchlist
(01:01:35) Outro