AIAP: Astronomical Future Suffering and Superintelligence with Kaj Sotala

Future of Life Institute Podcast · Jun 14, 2018 · 1 hr 15 min

Episode description

In the classic taxonomy of risks developed by Nick Bostrom, existential risks are characterized as risks that are both terminal in severity and transgenerational in scope. If we were to keep a risk's scope transgenerational but increase its severity beyond terminal, what would such a risk look like? What would it mean for a risk to be transgenerational in scope and hellish in severity?

In this podcast, Lucas speaks with Kaj Sotala, an associate researcher at the Foundational Research Institute. He previously worked for the Machine Intelligence Research Institute and has publications on AI safety, AI timeline forecasting, and consciousness research.

Topics discussed in this episode include:

- The definition and taxonomy of suffering risks
- How superintelligence has special leverage for generating or mitigating suffering risks
- How different moral systems view suffering risks
- What is possible for minds in general, and how this bears on suffering risks
- The probability of suffering risks
- What we can do to mitigate suffering risks