342. Superalignment with Sam Altman’s Values

May 23, 2024 · 1 hr 17 min

Episode description

We talk about how everybody on the superalignment team at OpenAI (focused on safety, risk, adversarial testing, societal impacts, and existential concerns) is resigning, including high-profile people like Ilya Sutskever. And nobody can talk about it because of rules, draconian even by Silicon Valley standards, requiring the non-disclosure and non-disparagement agreements that departing employees must sign (or risk losing their vested equity). For us, the turmoil at OpenAI is indicative of a conflict between true believers (superalignment) and cynical operators (Sam Altman).

Outro: Aunty Donna – Real Estate Agents
https://www.youtube.com/watch?v=VGm267O04a8

••• “I lost trust”: Why the OpenAI team in charge of safeguarding humanity imploded
https://www.vox.com/future-perfect/2024/5/17/24158403/openai-resignations-ai-safety-ilya-sutskever-jan-leike-artificial-intelligence

••• ChatGPT can talk, but OpenAI employees sure can’t
https://www.vox.com/future-perfect/2024/5/17/24158478/openai-departures-sam-altman-employees-chatgpt-release

Subscribe to hear more analysis and commentary in our premium episodes every week!
https://www.patreon.com/thismachinekills

Hosted by Jathan Sadowski (www.twitter.com/jathansadowski) and Edward Ongweso Jr. (www.twitter.com/bigblackjacobin). Production / Music by Jereme Brown (www.twitter.com/braunestahl)