Liv Boeree joins the podcast to discuss Moloch, beauty filters, game theory, institutional change, and artificial intelligence. You can read more about Liv's work here: https://livboeree.com Timestamps: 00:00 Introduction 01:57 What is Moloch? 04:13 Beauty filters 10:06 Science citations 15:18 Resisting Moloch 20:51 New institutions 26:02 Moloch and WinWin 28:41 Changing systems 33:37 Artificial intelligence 39:14 AI acceleration Social Media Links: ➡️ WEBSITE: https://futureoflife.org ➡️ TWITTE...
Mar 16, 2023•42 min•Transcript available on Metacast Tobias Baumann joins the podcast to discuss suffering risks, space colonization, and cooperative artificial intelligence. You can read more about Tobias' work here: https://centerforreducingsuffering.org. Timestamps: 00:00 Suffering risks 02:50 Space colonization 10:12 Moral circle expansion 19:14 Cooperative artificial intelligence 36:19 Influencing governments 39:34 Can we reduce suffering? Social Media Links: ➡️ WEBSITE: https://futureoflife.org ➡️ TWITTER: https://twitter.com/FLIxrisk ➡️ INS...
Mar 09, 2023•43 min•Transcript available on Metacast Tobias Baumann joins the podcast to discuss suffering risks, artificial sentience, and the problem of knowing which actions reduce suffering in the long-term future. You can read more about Tobias' work here: https://centerforreducingsuffering.org. Timestamps: 00:00 Introduction 00:52 What are suffering risks? 05:40 Artificial sentience 17:18 Is reducing suffering hopelessly difficult? 26:06 Can we know how to reduce suffering? 31:17 Why are suffering risks neglected? 37:31 How do we avoid accid...
Mar 02, 2023•47 min•Transcript available on Metacast Neel Nanda joins the podcast for a lightning round on mathematics, technological progress, aging, living up to our values, and generative AI. You can find his blog here: https://www.neelnanda.io Timestamps: 00:00 Introduction 00:55 How useful is advanced mathematics? 02:24 Will AI replace mathematicians? 03:28 What are the key drivers of tech progress? 04:13 What scientific discovery would disrupt Neel's worldview? 05:59 How should humanity view aging? 08:03 How can we live up to our values? 10:...
Feb 23, 2023•35 min•Transcript available on Metacast Neel Nanda joins the podcast to talk about mechanistic interpretability and how it can make AI safer. Neel is an independent AI safety researcher. You can find his blog here: https://www.neelnanda.io Timestamps: 00:00 Introduction 00:46 How early is the field of mechanistic interpretability? 03:12 Why should we care about mechanistic interpretability? 06:38 What are some successes in mechanistic interpretability? 16:29 How promising is mechanistic interpretability? 31:13 Is machine learning analogo...
Feb 16, 2023•1 hr 2 min•Transcript available on Metacast Neel Nanda joins the podcast to explain how we can understand neural networks using mechanistic interpretability. Neel is an independent AI safety researcher. You can find his blog here: https://www.neelnanda.io Timestamps: 00:00 Who is Neel? 04:41 How did Neel choose to work on AI safety? 12:57 What does an AI safety researcher do? 15:53 How analogous are digital neural networks to brains? 21:34 Are neural networks like alien beings? 29:13 Can humans think like AIs? 35:00 Can AIs help us discov...
Feb 09, 2023•1 hr 5 min•Transcript available on Metacast Connor Leahy from Conjecture joins the podcast for a lightning round on a variety of topics ranging from aliens to education. Learn more about Connor's work at https://conjecture.dev Social Media Links: ➡️ WEBSITE: https://futureoflife.org ➡️ TWITTER: https://twitter.com/FLIxrisk ➡️ INSTAGRAM: https://www.instagram.com/futureoflifeinstitute/ ➡️ META: https://www.facebook.com/futureoflifeinstitute ➡️ LINKEDIN: https://www.linkedin.com/company/future-of-life-institute/
Feb 02, 2023•1 hr 6 min•Transcript available on Metacast Connor Leahy from Conjecture joins the podcast to discuss AI safety, the fragility of the world, slowing down AI development, regulating AI, and the optimal funding model for AI safety research. Learn more about Connor's work at https://conjecture.dev Timestamps: 00:00 Introduction 00:47 What is the best way to understand AI safety? 09:50 Why is the world relatively stable? 15:18 Is the main worry human misuse of AI? 22:47 Can humanity solve AI safety? 30:06 Can we slow down AI development? 37:1...
Jan 26, 2023•1 hr 5 min•Transcript available on Metacast Connor Leahy from Conjecture joins the podcast to discuss AI progress, chimps, memes, and markets. Learn more about Connor's work at https://conjecture.dev Timestamps: 00:00 Introduction 01:00 Defining artificial general intelligence 04:52 What makes humans more powerful than chimps? 17:23 Would AIs have to be social to be intelligent? 20:29 Importing humanity's memes into AIs 23:07 How do we measure progress in AI? 42:39 Gut feelings about AI progress 47:29 Connor's predictions about AGI 52:44 ...
Jan 19, 2023•1 hr 4 min•Transcript available on Metacast On this special episode of the podcast, Emilia Javorsky interviews Sean Ekins about regulating AI drug discovery. Timestamps: 00:00 Introduction 00:31 Ethical guidelines and regulation of AI drug discovery 06:11 How do we balance innovation and safety in AI drug discovery? 13:12 Keeping dangerous chemical data safe 21:16 Sean’s personal story of voicing concerns about AI drug discovery 32:06 How Sean will continue working on AI drug discovery
Jan 12, 2023•37 min•Transcript available on Metacast On this special episode of the podcast, Emilia Javorsky interviews Sean Ekins about the dangers of AI drug discovery. They talk about how Sean discovered an extremely toxic chemical (VX) by reversing an AI drug discovery algorithm. Timestamps: 00:00 Introduction 00:46 Sean’s professional journey 03:45 Can computational models replace animal models? 07:24 The risks of AI drug discovery 12:48 Should scientists disclose dangerous discoveries? 19:40 How should scientists handle dual-use technologies...
Jan 05, 2023•39 min•Transcript available on Metacast Anders Sandberg joins the podcast to discuss various philosophical questions about the value of the future. Learn more about Anders' work: https://www.fhi.ox.ac.uk Timestamps: 00:00 Introduction 00:54 Humanity as an immature teenager 04:24 How should we respond to our values changing over time? 18:53 How quickly should we change our values? 24:58 Are there limits to what future morality could become? 29:45 Could the universe contain infinite value? 36:00 How do we balance weird philosophy with c...
Dec 29, 2022•50 min•Transcript available on Metacast Anders Sandberg joins the podcast to discuss how big the future could be and what humanity could achieve at the limits of physics. Learn more about Anders' work: https://www.fhi.ox.ac.uk Timestamps: 00:00 Introduction 00:58 Does it make sense to write long books now? 06:53 Is it possible to understand all of science now? 10:44 What is exploratory engineering? 15:48 Will humanity develop a completed science? 21:18 How much of possible technology has humanity already invented? 25:22 Which sciences...
Dec 22, 2022•1 hr 3 min•Transcript available on Metacast Anders Sandberg from The Future of Humanity Institute joins the podcast to discuss ChatGPT, large language models, and what he's learned about the risks and benefits of AI. Timestamps: 00:00 Introduction 00:40 ChatGPT 06:33 Will AI continue to surprise us? 16:22 How do language models fail? 24:23 Language models trained on their own output 27:29 Can language models write college-level essays? 35:03 Do language models understand anything? 39:59 How will AI models improve in the future? 43:26 AI s...
Dec 15, 2022•58 min•Transcript available on Metacast Vincent Boulanin joins the podcast to explain how modern militaries use AI, including in nuclear weapons systems. Learn more about Vincent's work: https://sipri.org Timestamps: 00:00 Introduction 00:45 Categorizing risks from AI and nuclear 07:40 AI being used by non-state actors 12:57 Combining AI with nuclear technology 15:13 A human should remain in the loop 25:05 Automation bias 29:58 Information requirements for nuclear launch decisions 35:22 Vincent's general conclusion about military mach...
Dec 08, 2022•48 min•Transcript available on Metacast Vincent Boulanin joins the podcast to explain the dangers of incorporating artificial intelligence in nuclear weapons systems. Learn more about Vincent's work: https://sipri.org Timestamps: 00:00 Introduction 00:55 What is strategic stability? 02:45 How can AI be a positive factor in nuclear risk? 10:17 Remote sensing of nuclear submarines 19:50 Using AI in nuclear command and control 24:21 How does AI change the game theory of nuclear war? 30:49 How could AI cause an accidental nuclear escalati...
Dec 01, 2022•45 min•Transcript available on Metacast Robin Hanson joins the podcast to discuss AI forecasting methods and metrics. Timestamps: 00:00 Introduction 00:49 Robin's experience working with AI 06:04 Robin's views on AI development 10:41 Should we care about metrics for AI progress? 16:56 Is it useful to track AI progress? 22:02 When should we begin worrying about AI safety? 29:16 The history of AI development 39:52 AI progress that deviates from current trends 43:34 Is this AI boom different than past booms? 48:26 Different metrics for p...
Nov 24, 2022•52 min•Transcript available on Metacast Robin Hanson joins the podcast to explain his theory of grabby aliens and its implications for the future of humanity. Learn more about the theory here: https://grabbyaliens.com Timestamps: 00:00 Introduction 00:49 Why should we care about aliens? 05:58 Loud alien civilizations and quiet alien civilizations 08:16 Why would some alien civilizations be quiet? 14:50 The moving parts of the grabby aliens model 23:57 Why is humanity early in the universe? 28:46 Couldn't we just be alone in the univers...
Nov 17, 2022•1 hr•Transcript available on Metacast Ajeya Cotra joins us to talk about thinking clearly in a rapidly changing world. Learn more about the work of Ajeya and her colleagues: https://www.openphilanthropy.org Timestamps: 00:00 Introduction 00:44 The default versus the accelerating picture of the future 04:25 The role of AI in accelerating change 06:48 Extrapolating economic growth 08:53 How do we know whether the pace of change is accelerating? 15:07 How can we cope with a rapidly changing world? 18:50 How could the future be utopian?...
Nov 10, 2022•45 min•Transcript available on Metacast Ajeya Cotra joins us to discuss how artificial intelligence could cause catastrophe. Follow the work of Ajeya and her colleagues: https://www.openphilanthropy.org Timestamps: 00:00 Introduction 00:53 AI safety research in general 02:04 Realistic scenarios for AI catastrophes 06:51 A dangerous AI model developed in the near future 09:10 Assumptions behind dangerous AI development 14:45 Can AIs learn long-term planning? 18:09 Can AIs understand human psychology? 22:32 Training an AI model with nai...
Nov 03, 2022•54 min•Transcript available on Metacast Ajeya Cotra joins us to discuss forecasting transformative artificial intelligence. Follow the work of Ajeya and her colleagues: https://www.openphilanthropy.org Timestamps: 00:00 Introduction 00:53 Ajeya's report on AI 01:16 What is transformative AI? 02:09 Forecasting transformative AI 02:53 Historical growth rates 05:10 Simpler forecasting methods 09:01 Biological anchors 16:31 Different paths to transformative AI 17:55 Which year will we get transformative AI? 25:54 Expert opinion on transfo...
Oct 27, 2022•48 min•Transcript available on Metacast Alan Robock joins us to discuss nuclear winter, famine and geoengineering. Learn more about Alan's work: http://people.envsci.rutgers.edu/robock/ Follow Alan on Twitter: https://twitter.com/AlanRobock Timestamps: 00:00 Introduction 00:45 What is nuclear winter? 06:27 A nuclear war between India and Pakistan 09:16 Targets in a nuclear war 11:08 Why does the world have so many nuclear weapons? 19:28 Societal collapse in a nuclear winter 22:45 Should we prepare for a nuclear winter? 28:13 Skepticis...
Oct 20, 2022•41 min•Transcript available on Metacast Brian Toon joins us to discuss the risk of nuclear winter. Learn more about Brian's work: https://lasp.colorado.edu/home/people/brian-toon/ Read Brian's publications: https://airbornescience.nasa.gov/person/Brian_Toon Timestamps: 00:00 Introduction 01:02 Asteroid impacts 04:20 The discovery of nuclear winter 13:56 Comparing volcanoes and asteroids to nuclear weapons 19:42 How did life survive the asteroid impact 65 million years ago? 25:05 How humanity could go extinct 29:46 Nuclear weapons as a...
Oct 13, 2022•49 min•Transcript available on Metacast Philip Reiner joins us to talk about nuclear, command, control and communications systems. Learn more about Philip’s work: https://securityandtechnology.org/ Timestamps: [00:00:00] Introduction [00:00:50] Nuclear command, control, and communications [00:03:52] Old technology in nuclear systems [00:12:18] Incentives for nuclear states [00:15:04] Selectively enhancing security [00:17:34] Unilateral de-escalation [00:18:04] Nuclear communications [00:24:08] The CATALINK System [00:31:25] AI in nucl...
Oct 06, 2022•47 min•Transcript available on Metacast Daniela and Dario Amodei join us to discuss Anthropic: a new AI safety and research company that's working to build reliable, interpretable, and steerable AI systems. Topics discussed in this episode include: -Anthropic's mission and research strategy -Recent research and papers by Anthropic -Anthropic's structure as a "public benefit corporation" -Career opportunities You can find the page for the podcast here: https://futureoflife.org/2022/03/04/daniela-and-dario-amodei-on-anthropic/ Watch the...
Mar 04, 2022•2 hr 1 min•Transcript available on Metacast Anthony Aguirre and Anna Yelizarova join us to discuss FLI's new Worldbuilding Contest. Topics discussed in this episode include: -Motivations behind the contest -The importance of worldbuilding -The rules of the contest -What a submission consists of -Due date and prizes Learn more about the contest here: https://worldbuild.ai/ Join the discord: https://discord.com/invite/njZyTJpwMz You can find the page for the podcast here: https://futureoflife.org/2022/02/08/anthony-aguirre-and-anna-yelizaro...
Feb 09, 2022•33 min•Transcript available on Metacast David Chalmers, Professor of Philosophy and Neural Science at NYU, joins us to discuss his newest book Reality+: Virtual Worlds and the Problems of Philosophy. Topics discussed in this episode include: -Virtual reality as genuine reality -Why VR is compatible with the good life -Why we can never know whether we're in a simulation -Consciousness in virtual realities -The ethics of simulated beings You can find the page for the podcast here: https://futureoflife.org/2022/01/26/david-chalmers-on-re...
Jan 26, 2022•2 hr 43 min•Transcript available on Metacast Rohin Shah, Research Scientist on DeepMind's technical AGI safety team, joins us to discuss: AI value alignment; how an AI Researcher might decide whether to work on AI Safety; and why we don't know that AI systems won't lead to existential risk. Topics discussed in this episode include: - Inner Alignment versus Outer Alignment - Foundation Models - Structural AI Risks - Unipolar versus Multipolar Scenarios - The Most Important Thing That Impacts the Future of Life You can find the page for the ...
Nov 02, 2021•2 hr 44 min•Transcript available on Metacast Future of Life Institute President Max Tegmark and our grants team, Andrea Berman and Daniel Filan, join us to announce a $25M multi-year AI Existential Safety Grants Program. Topics discussed in this episode include: - The reason Future of Life Institute is offering AI Existential Safety Grants - Max speaks about how receiving a grant changed his career early on - Daniel and Andrea provide details on the fellowships and future grant priorities Check out our grants programs here: https://grants....
Oct 18, 2021•25 min•Transcript available on Metacast Dr. Filippa Lentzos, Senior Lecturer in Science and International Security at King's College London, joins us to discuss the most pressing issues in biosecurity, big data in biology and life sciences, and governance in biological risk. Topics discussed in this episode include: - The most pressing issue in biosecurity - Stories from when biosafety labs failed to contain dangerous pathogens - The lethality of pathogens being worked on at biolaboratories - Lessons from COVID-19 You can find the pag...
Oct 01, 2021•58 min•Transcript available on Metacast