
879: Serverless, Parallel, and AI-Assisted: The Future of Data Science is Here, with Zerve’s Dr. Greg Michaelson

Apr 15, 2025 · 1 hr 7 min

Episode description

Greg Michaelson speaks to Jon Krohn about the latest developments at Zerve, an operating system for developing and delivering data and AI products, including a revolutionary feature that lets users run multiple parts of a program’s code at once, without extra cost. You’ll also hear why LLMs might spell trouble for SaaS companies, Greg’s ‘good-cop, bad-cop’ routine that improves LLM responses, and how RAG (retrieval-augmented generation) can be deployed to create even more powerful AI applications.

Additional materials: www.superdatascience.com/879

This episode is brought to you by Trainium2, the latest AI chip from AWS, and by the Dell AI Factory with NVIDIA. Interested in sponsoring a SuperDataScience Podcast episode? Email [email protected] for sponsorship information.

In this episode you will learn:

- (04:00) Zerve’s latest features
- (35:26) How Zerve’s built-in API builder and GPU manager lowers barriers to entry
- (40:54) How to get started with Zerve
- (41:49) Will LLMs make SaaS companies redundant?
- (52:29) How to create fairer and more transparent AI systems
- (56:07) The future of software developer workflows