
Episode description
This week, Google found itself in more turmoil, this time over its new AI Overviews feature and a trove of leaked internal documents. Then Josh Batson, a researcher at the A.I. startup Anthropic, joins us to explain how an experiment that made the chatbot Claude obsessed with the Golden Gate Bridge represents a major breakthrough in understanding how large language models work. And finally, we take a look at recent developments in A.I. safety, after Casey’s early access to OpenAI’s new souped-up voice assistant was taken away for safety reasons.
Guests:
- Josh Batson, research scientist at Anthropic
Additional Reading:
- Google’s A.I. Search Errors Cause a Furor Online
- Google Confirms the Leaked Search Documents are Real
- Mapping the Mind of a Large Language Model
- A.I. Firms Mustn’t Govern Themselves, Say Ex-Members of OpenAI’s Board
We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.
Unlock full access to New York Times podcasts and explore everything from politics to pop culture. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify.