
106 - Why GPT and other LLMs (probably) aren't sentient

Apr 11, 2023

Episode description

In this episode, I chat to Robert Long about AI sentience. Robert is a philosopher who works on issues related to the philosophy of mind, cognitive science, and AI ethics. He is currently a philosophy fellow at the Center for AI Safety in San Francisco. He completed his PhD at New York University. We do a deep dive on the concept of sentience, why it is important, and how we can tell whether an animal or AI is sentient. We also discuss whether it is worth taking the topic of AI sentience seriously.

You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon, or whatever your preferred service might be.


Relevant Links
Subscribe to the newsletter