
Latent Space: The AI Engineer Podcast

swyx + Alessio · www.latent.space
The podcast by and for AI Engineers! In 2024, over 2 million readers and listeners came to Latent Space to hear about news, papers and interviews in Software 3.0. We cover Foundation Models changing every domain in Code Generation, Multimodality, AI Agents, GPU Infra and more, directly from the founders, builders, and thinkers involved in pushing the cutting edge. Striving to give you everything from the definitive take on the Current Thing to the first introduction to the tech you'll be using in the next 3 months! We break news and exclusive interviews from OpenAI, Anthropic, Gemini, Meta (Soumith Chintala), Sierra (Bret Taylor), tiny (George Hotz), Databricks/MosaicML (Jon Frankle), Modular (Chris Lattner), Answer.ai (Jeremy Howard), et al. Full show notes always on https://latent.space

Episodes

From RLHF to RLHB: The Case for Learning from Human Behavior - with Jeffrey Wang and Joe Reeve of Amplitude

Welcome to the almost 3k latent space explorers that joined us last month! We’re holding our first SF listener meetup with Practical AI next Monday; join us if you want to meet past guests and put faces to voices! All events are in /community . Who among you regularly clicks the ubiquitous 👍/👎 buttons in ChatGPT/Bard/etc? Anyone? I don’t see any hands up. OpenAI has told us how important reinforcement learning from human feedback (RLHF) is to creating the magic that is ChatGPT, but we know fro...

Jun 08, 2023 · 49 min

Building the AI × UX Scenius — with Linus Lee of Notion AI

Read: https://www.latent.space/p/ai-interfaces-and-notion Show Notes * Linus on Twitter * Linus’ personal blog * Notion * Notion AI * Notion Projects * AI UX Meetup Recap Timestamps * [00:03:30] Starting the AI / UX community * [00:10:01] Most knowledge work is not text generation * [00:16:21] Finding the right constraints and interface for AI * [00:19:06] Linus' journey to working at Notion * [00:23:29] The importance of notations and interfaces * [00:26:07] Setting interface defaults and stand...

Jun 01, 2023 · 1 hr 10 min

Debugging the Internet with AI agents – with Itamar Friedman of Codium AI and AutoGPT

We are hosting the AI World’s Fair in San Francisco on June 8th! You can RSVP here . Come meet fellow builders, see amazing AI tech showcases at different booths around the venue, all mixed with elements of traditional fairs: live music, drinks, games, and food! We are also at Amplitude’s AI x Product Hackathon and are hosting our first joint Latent Space + Practical AI Podcast Listener Meetup next month! We are honored by the rave reviews for our last episode with MosaicML! They are also welcom...

May 25, 2023 · 1 hr 3 min

MPT-7B and The Beginning of Context=Infinity — with Jonathan Frankle and Abhinav Venigalla of MosaicML

We are excited to be the first podcast in the world to release an in-depth interview on the new SOTA in commercially licensed open source models - MosaicML MPT-7B! The Latent Space crew will be at the NYC Lux AI Summit next week, and have two meetups in June. As usual, all events are on the Community page ! We are also inviting beta testers for the upcoming AI for Engineers course. See you soon! One of GPT3’s biggest limitations is context length - you can only send it up to 4000 tokens (3k word...

May 20, 2023 · 1 hr 7 min

Guaranteed quality and structure in LLM outputs - with Shreya Rajpal of Guardrails AI

Tomorrow, 5/16, we’re hosting Latent Space Liftoff Day in San Francisco. We have some amazing demos from founders at 5:30pm, and we’ll have an open co-working starting at 2pm. Spaces are limited, so please RSVP here ! One of the biggest criticisms of large language models is their inability to tightly follow requirements without extensive prompt engineering. You might have seen examples of ChatGPT playing a game of chess and making many invalid moves, or adding new pieces to the board. Guardrail...

May 16, 2023 · 1 hr 2 min

The AI Founder Gene: Being Early, Building Fast, and Believing in Greatness — with Sharif Shameem of Lexica

Thanks to the over 42,000 latent space explorers who checked out our Replit episode ! We are hosting/attending a couple more events in SF and NYC this month. See you if you’re in town! Lexica.art was introduced to the world 24 hours after the release of Stable Diffusion as a search engine for prompts, gaining instant product-market fit as a world discovering generative AI also found they needed to learn prompting by example. Lexica is now 8 months old, serving 5B image searches/day, and just shipped V3...

May 08, 2023 · 51 min

No Moat: Closed AI gets its Open Source wakeup call — ft. Simon Willison

It’s now almost 6 months since Google declared Code Red , and the results — Jeff Dean’s recap of 2022 achievements and a mass exodus of the top research talent that contributed to it in January, Bard’s rushed launch in Feb, a slick video showing Google Workspace AI features and confusing doubly linked blogposts about PaLM API in March, and merging Google Brain and DeepMind in April — have not been inspiring. Google’s internal panic is on full display now with the surfacing of a well written memo...

May 05, 2023 · 44 min

Training a SOTA Code LLM in 1 week and Quantifying the Vibes — with Reza Shabani of Replit

Latent Space is popping off! Welcome to the over 8500 latent space explorers who have joined us. Join us this month at various events in SF and NYC , or start your own! This post spent 22 hours at the top of Hacker News . As announced during their Developer Day celebrating their $100m fundraise following their Google partnership , Replit is now open sourcing its own state of the art code LLM: replit-code-v1-3b ( model card , HF Space ), which beats OpenAI’s Codex model on the industry standard H...

May 03, 2023 · 1 hr 10 min

Mapping the future of *truly* Open Models and Training Dolly for $30 — with Mike Conover of Databricks

The race is on for the first fully GPT3/4-equivalent, truly open source Foundation Model! LLaMA’s release proved that a great model could be released and run on consumer-grade hardware (see llama.cpp ), but its research license prohibits businesses from running it and all its variants (Alpaca, Vicuna, Koala, etc) for their own use at work. So there is great interest and desire for *truly* open source LLMs that are feasible for commercial use (with far better customization, finetuning, and priva...

Apr 29, 2023 · 1 hr 16 min

AI-powered Search for the Enterprise — with Deedy Das of Glean

The most recent YCombinator W23 batch graduated 59 companies building with Generative AI for everything from sales and support to engineering and data: Many of these B2B startups will be seeking to establish an AI foothold in the enterprise. As they look to recent success, they will find Glean, started in 2019 by a group of ex-Googlers to finally solve AI-enabled enterprise search. In 2022 Sequoia led their Series C at a $1b valuation and Glean has just refreshed their website touting new log...

Apr 22, 2023 · 1 hr 4 min

Segment Anything Model and the Hard Problems of Computer Vision — with Joseph Nelson of Roboflow

2023 is the year of Multimodal AI , and Latent Space is going multimodal too! * This podcast comes with a video demo at the 1hr mark and it’s a good excuse to launch our YouTube - please subscribe! * We are also holding two events in San Francisco — the first AI | UX meetup next week (already full; we’ll send a recap here on the newsletter) and Latent Space Liftoff Day on May 4th ( signup here ; but get in touch if you have a high profile launch you’d like to make). * We also joined the Chroma/O...

Apr 13, 2023 · 1 hr 20 min

AI Fundamentals: Benchmarks 101

We’re trying a new format, inspired by Acquired.fm ! No guests, no news, just highly prepared, in-depth conversation on one topic that will level up your understanding. We aren’t experts, we are learning in public. Please let us know what we got wrong and what you think of this new format! When you ask someone to break down the basic ingredients of a Large Language Model, you’ll often hear a few things: You need lots of data. You need lots of compute. You need models with billions of parameters....

Apr 07, 2023 · 51 min

Grounded Research: From Google Brain to MLOps to LLMOps — with Shreya Shankar of UC Berkeley

We are excited to feature our first academic on the pod! I first came across Shreya when her tweetstorm of MLOps principles went viral: Shreya’s holistic approach to production grade machine learning has taken her from Stanford to Facebook and Google Brain, to being the first ML Engineer at Viaduct, and now a PhD in Databases (trust us, it’s relevant) at UC Berkeley with the new EPIC Data Lab . If you know Berkeley’s history in turning cutting edge research into gamechanging startups, you should be ...

Mar 29, 2023 · 42 min

Emergency Pod: ChatGPT's App Store Moment (w/ OpenAI's Logan Kilpatrick, LindyAI's Florent Crivello and Nader Dabit)

This blogpost has been updated since original release to add more links and references. The ChatGPT Plugins announcement today could be viewed as the launch of ChatGPT’s “App Store”, a moment as significant as when Apple opened its App Store for the iPhone in 2008 or when Facebook let developers loose on its Open Graph in 2010. With a dozen lines of simple JSON and a mostly-english prompt to help ChatGPT understand what the plugin does, developers will be able to add extensions to ChatGPT to get...

Mar 24, 2023 · 1 hr 36 min

From Astrophysics to AI: Building the future AI Data Stack — with Sarah Nagy of Seek.ai

If Text is the Universal Interface , then Text to SQL is perhaps the killer B2B business usecase for Generative AI. You may have seen incredible demos from Perplexity AI , OSS Insights , and CensusGPT where the barrier of learning SQL and schemas goes away and you can intuitively converse with your data in natural language. But in the multi-billion dollar data engineering industry, Seek.ai has emerged as the forerunner in building a conversational engine and knowledge base that truly democratize...

Mar 10, 2023 · 38 min

97% Cheaper, Faster, Better, Correct AI — with Varun Mohan of Codeium

OpenAI just rollicked the AI world yet again yesterday — while releasing the long awaited ChatGPT API, they also priced it at $2 per million tokens generated, which is 90% cheaper than the text-davinci-003 pricing of the “GPT3.5” family. Their blogpost on how they did it is vague: Through a series of system-wide optimizations, we’ve achieved 90% cost reduction for ChatGPT since December; we’re now passing through those savings to API users. We were fortunate enough to record Episode 2 of our pod...

Mar 02, 2023 · 51 min

ChatGPT, GPT4 hype, and Building LLM-native products — with Logan Kilpatrick of OpenAI

We’re so glad to launch our first podcast episode with Logan Kilpatrick ! This also happens to be his first public interview since joining OpenAI as their first Developer Advocate. Thanks Logan! Recorded in-person at the beautiful StudioPod studios in San Francisco. Full transcript is below the fold. Timestamps * 00:29: Logan’s path to OpenAI * 07:06: On ChatGPT and GPT3 API * 16:16: On Prompt Engineering * 20:30: Usecases and LLM-Native Products * 25:38: Risks and benefits of building on OpenAI...

Feb 23, 2023 · 52 min