
109 - How Can We Align Language Models like GPT with Human Values?

May 30, 2023

Episode description



In this episode of the podcast I chat to Atoosa Kasirzadeh. Atoosa is an Assistant Professor and Chancellor's Fellow at the University of Edinburgh. She is also the Director of Research at the Centre for Technomoral Futures at Edinburgh. We chat about the alignment problem in AI development: roughly, how do we ensure that AI acts in a way that is consistent with human values? We focus, in particular, on the alignment problem for language models such as ChatGPT, Bard, and Claude, and on how some old ideas from the philosophy of language could help us to address this problem.

You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon, or whatever your preferred service might be.


Relevant Links


Subscribe to the newsletter