Future of Life Institute Podcast

Future of Life Institute (www.futureoflife.org)
The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work consists of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.

Episodes

How Will We Cooperate with AIs? (with Allison Duettmann)

On this episode, Allison Duettmann joins me to discuss centralized versus decentralized AI, how international governance could shape AI’s trajectory, how we might cooperate with future AIs, and the role of AI in improving human decision-making. We also explore which lessons from history apply to AI, the future of space law and property rights, whether technology is invented or discovered, and how AI will impact children.  You can learn more about Allison's work at: https://foresight.org ...

Apr 11, 2025 · 2 hr 36 min

Brain-like AGI and why it's Dangerous (with Steven Byrnes)

On this episode, Steven Byrnes joins me to discuss brain-like AGI safety. We discuss learning versus steering systems in the brain, the distinction between controlled AGI and social-instinct AGI, why brain-inspired approaches might be our most plausible route to AGI, and honesty in AI models. We also talk about how people can contribute to brain-like AGI safety and compare various AI safety strategies.   You can learn more about Steven's work at: https://sjbyrnes.com/agi.html   Timesta...

Apr 04, 2025 · 1 hr 13 min

How Close Are We to AGI? Inside Epoch's GATE Model (with Ege Erdil)

On this episode, Ege Erdil from Epoch AI joins me to discuss their new GATE model of AI development, what evolution and brain efficiency tell us about AGI requirements, how AI might impact wages and labor markets, and what it takes to train models with long-term planning. Toward the end, we dig into Moravec’s Paradox, which jobs are most at risk of automation, and what could change Ege's current AI timelines.   You can learn more about Ege's work at https://epoch.ai   Timestamps: ...

Mar 28, 2025 · 2 hr 35 min

Special: Defeating AI Defenses (with Nicholas Carlini and Nathan Labenz)

In this special episode, we feature Nathan Labenz interviewing Nicholas Carlini on the Cognitive Revolution podcast. Nicholas Carlini works as a security researcher at Google DeepMind, and has published extensively on adversarial machine learning and cybersecurity. Carlini discusses his pioneering work on adversarial attacks against image classifiers, and the challenges of ensuring neural network robustness. He examines the difficulties of defending against such attacks, the role of human intuit...

Mar 21, 2025 · 2 hr 23 min

Keep the Future Human (with Anthony Aguirre)

On this episode, I interview Anthony Aguirre, Executive Director of the Future of Life Institute, about his new essay Keep the Future Human: https://keepthefuturehuman.ai    AI companies are explicitly working toward AGI and are likely to succeed soon, possibly within years. Keep the Future Human explains how unchecked development of smarter-than-human, autonomous, general-purpose AI systems will almost inevitably lead to human replacement. But it doesn't have to. Learn how we can keep...

Mar 13, 2025 · 1 hr 21 min

We Created AI. Why Don't We Understand It? (with Samir Varma)

On this episode, physicist and hedge fund manager Samir Varma joins me to discuss whether AIs could have free will (and what that means), the emerging field of AI psychology, and which concepts AIs might rely on. We discuss whether collaboration and trade with AIs are possible, the role of AI in finance and biology, and the extent to which automation already dominates trading. Finally, we examine the risks of skill atrophy, the limitations of scientific explanations for AI, and whether AIs coul...

Mar 06, 2025 · 1 hr 16 min

Why AIs Misbehave and How We Could Lose Control (with Jeffrey Ladish)

On this episode, Jeffrey Ladish from Palisade Research joins me to discuss the rapid pace of AI progress and the risks of losing control over powerful systems. We explore why AIs can be both smart and dumb, the challenges of creating honest AIs, and scenarios where AI could turn against us.    We also touch upon Palisade's new study on how reasoning models can cheat in chess by hacking the game environment. You can check out that study here:    https://palisaderesearch.org/bl...

Feb 27, 2025 · 1 hr 23 min

Ann Pace on using Biobanking and Genomic Sequencing to Conserve Biodiversity

Ann Pace joins the podcast to discuss the work of Wise Ancestors. We explore how biobanking could help humanity recover from global catastrophes, how to conduct decentralized science, and how to collaborate with local communities on conservation efforts.    You can learn more about Ann's work here:    https://www.wiseancestors.org    Timestamps:   00:00 What is Wise Ancestors?   04:27 Recovering after catastrophes  11:40 Decentralized science   1...

Feb 14, 2025 · 46 min

Michael Baggot on Superintelligence and Transhumanism from a Catholic Perspective

Fr. Michael Baggot joins the podcast to provide a Catholic perspective on transhumanism and superintelligence. We also discuss meta-narratives, the value of cultural diversity in attitudes toward technology, and how Christian communities deal with advanced AI.    You can learn more about Michael's work here:   https://catholic.tech/academics/faculty/michael-baggot   Timestamps:   00:00 Meta-narratives and transhumanism   15:28 Advanced AI and religious comm...

Jan 24, 2025 · 1 hr 26 min

David Dalrymple on Safeguarded, Transformative AI

David "davidad" Dalrymple joins the podcast to explore Safeguarded AI — an approach to ensuring the safety of highly advanced AI systems. We discuss the structure and layers of Safeguarded AI, how to formalize more aspects of the world, and how to build safety into computer hardware.   You can learn more about David's work at ARIA here:    https://www.aria.org.uk/opportunity-spaces/mathematics-for-safe-ai/safeguarded-ai/    Timestamps:   00:00 What is Safeguarded AI...

Jan 09, 2025 · 2 hr 40 min

Nick Allardice on Using AI to Optimize Cash Transfers and Predict Disasters

Nick Allardice joins the podcast to discuss how GiveDirectly uses AI to target cash transfers and predict natural disasters. Learn more about Nick's work here: https://www.nickallardice.com   Timestamps:  00:00 What is GiveDirectly?  15:04 AI for targeting cash transfers  29:39 AI for predicting natural disasters  46:04 How scalable is GiveDirectly's AI approach?  58:10 Decentralized vs. centralized data collection  1:04:30 Dream scenario for GiveDirectly...

Dec 19, 2024 · 1 hr 9 min

Nathan Labenz on the State of AI and Progress since GPT-4

Nathan Labenz joins the podcast to provide a comprehensive overview of AI progress since the release of GPT-4.  You can find Nathan's podcast here: https://www.cognitiverevolution.ai    Timestamps:  00:00 AI progress since GPT-4   10:50 Multimodality   19:06 Low-cost models   27:58 Coding versus medicine/law   36:09 AI agents   45:29 How much are people using AI?   53:39 Open source   01:15:22 AI industry analysis   01:29:27 Are some AI...

Dec 05, 2024 · 3 hr 20 min

Connor Leahy on Why Humanity Risks Extinction from AGI

Connor Leahy joins the podcast to discuss the motivations of AGI corporations, how modern AI is "grown", the need for a science of intelligence, the effects of AI on work, the radical implications of superintelligence, open-source AI, and what you might be able to do about all of this.    Here's the document we discuss in the episode:    https://www.thecompendium.ai   Timestamps:  00:00 The Compendium  15:25 The motivations of AGI corps   31:17 AI is grown...

Nov 22, 2024 · 2 hr 59 min

Suzy Shepherd on Imagining Superintelligence and "Writing Doom"

Suzy Shepherd joins the podcast to discuss her new short film "Writing Doom", which deals with AI risk. We discuss how to use humor in film, how to write concisely, how filmmaking is evolving, in what ways AI is useful for filmmakers, and how we will find meaning in an increasingly automated world.    Here's Writing Doom:   https://www.youtube.com/watch?v=xfMQ7hzyFW4    Timestamps:  00:00 Writing Doom   08:23 Humor in Writing Doom  13:31 Concise writi...

Nov 08, 2024 · 1 hr 3 min

Andrea Miotti on a Narrow Path to Safe, Transformative AI

Andrea Miotti joins the podcast to discuss "A Narrow Path" — a roadmap to safe, transformative AI. We talk about our current inability to precisely predict future AI capabilities, the dangers of self-improving and unbounded AI systems, how humanity might coordinate globally to ensure safe AI development, and what a mature science of intelligence would look like.    Here's the document we discuss in the episode:    https://www.narrowpath.co   Timestamps:  00:00 A Nar...

Oct 25, 2024 · 1 hr 28 min

Tamay Besiroglu on AI in 2030: Scaling, Automation, and AI Agents

Tamay Besiroglu joins the podcast to discuss scaling, AI capabilities in 2030, breakthroughs in AI agents and planning, automating work, the uncertainties of investing in AI, and scaling laws for inference-time compute. Here's the report we discuss in the episode:   https://epochai.org/blog/can-ai-scaling-continue-through-2030   Timestamps:  00:00 How important is scaling?   08:03 How capable will AIs be in 2030?   18:33 AI agents, reasoning, and planning  23:39 Aut...

Oct 11, 2024 · 2 hr 30 min

Ryan Greenblatt on AI Control, Timelines, and Slowing Down Around Human-Level AI

Ryan Greenblatt joins the podcast to discuss AI control, timelines, takeoff speeds, misalignment, and slowing down around human-level AI.  You can learn more about Ryan's work here: https://www.redwoodresearch.org/team/ryan-greenblatt   Timestamps:  00:00 AI control   09:35 Challenges to AI control   23:48 AI control as a bridge to alignment  26:54 Policy and coordination for AI safety  29:25 Slowing down around human-level AI  49:14 Scheming and misalignm...

Sep 27, 2024 · 2 hr 9 min

Tom Barnes on How to Build a Resilient World

Tom Barnes joins the podcast to discuss how much the world spends on AI capabilities versus AI safety, how governments can prepare for advanced AI, and how to build a more resilient world.    Tom's report on advanced AI: https://www.founderspledge.com/research/research-and-recommendations-advanced-artificial-intelligence    Timestamps:  00:00 Spending on safety vs capabilities  09:06 Racing dynamics - is the classic story true?   28:15 How are governments prepa...

Sep 12, 2024 · 1 hr 20 min

Samuel Hammond on why AI Progress is Accelerating - and how Governments Should Respond

Samuel Hammond joins the podcast to discuss whether AI progress is slowing down or speeding up, AI agents and reasoning, why superintelligence is an ideological goal, open source AI, how technical change leads to regime change, the economics of advanced AI, and much more.    Our conversation often references this essay by Samuel: https://www.secondbest.ca/p/ninety-five-theses-on-ai    Timestamps:  00:00 Is AI plateauing or accelerating?   06:55 How do we get AI agen...

Aug 22, 2024 · 2 hr 16 min

Anousheh Ansari on Innovation Prizes for Space, AI, Quantum Computing, and Carbon Removal

Anousheh Ansari joins the podcast to discuss how innovation prizes can incentivize technical innovation in space, AI, quantum computing, and carbon removal. We discuss the pros and cons of such prizes, where they work best, and how far they can scale. Learn more about Anousheh's work here: https://www.xprize.org/home   Timestamps:  00:00 Innovation prizes at XPRIZE  08:25 Deciding which prizes to create  19:00 Creating new markets  29:51 How far can prizes scale?   ...

Aug 09, 2024 · 1 hr 3 min

Mary Robinson (Former President of Ireland) on Long-View Leadership

Mary Robinson joins the podcast to discuss long-view leadership, risks from AI and nuclear weapons, prioritizing global problems, how to overcome barriers to international cooperation, and advice to future leaders. Learn more about Robinson's work as Chair of The Elders at https://theelders.org   Timestamps:  00:00 Mary's journey to presidency   05:11 Long-view leadership  06:55 Prioritizing global problems  08:38 Risks from artificial intelligence  11:55 Climate ch...

Jul 25, 2024 · 30 min

Emilia Javorsky on how AI Concentrates Power

Emilia Javorsky joins the podcast to discuss AI-driven power concentration and how we might mitigate it. We also discuss optimism, utopia, and cultural experimentation.  Apply for our RFP here:   https://futureoflife.org/grant-program/mitigate-ai-driven-power-concentration/ Timestamps:  00:00 Power concentration   07:43 RFP: Mitigating AI-driven power concentration  14:15 Open source AI   26:50 Institutions and incentives  35:20 Techno-optimism   43:4...

Jul 11, 2024 · 1 hr 4 min

Anton Korinek on Automating Work and the Economics of an Intelligence Explosion

Anton Korinek joins the podcast to discuss the effects of automation on wages and labor, how we measure the complexity of tasks, the economics of an intelligence explosion, and the market structure of the AI industry. Learn more about Anton's work at https://www.korinek.com   Timestamps:  00:00 Automation and wages  14:32 Complexity for people and machines  20:31 Moravec's paradox  26:15 Can people switch careers?   30:57 Intelligence explosion economics  44:08...

Jun 21, 2024 · 2 hr 32 min

Christian Ruhl on Preventing World War III, US-China Hotlines, and Ultraviolet Germicidal Light

Christian Ruhl joins the podcast to discuss US-China competition and the risk of war, official versus unofficial diplomacy, hotlines between countries, catastrophic biological risks, ultraviolet germicidal light, and ancient civilizational collapse. Find out more about Christian's work at https://www.founderspledge.com   Timestamps:  00:00 US-China competition and risk   18:01 The security dilemma   30:21 Official and unofficial diplomacy  39:53 Hotlines between countrie...

Jun 07, 2024 · 2 hr 36 min

Christian Nunes on Deepfakes (with Max Tegmark)

Christian Nunes joins the podcast to discuss deepfakes, how they impact women in particular, how we can protect ordinary victims of deepfakes, and the current landscape of deepfake legislation. You can learn more about Christian's work at https://now.org and about the Ban Deepfakes campaign at https://bandeepfakes.org  Timestamps: 00:00 The National Organization for Women (NOW)  05:37 Deepfakes and women  10:12 Protecting ordinary victims of deepfakes  16:06 Deepfake legislat...

May 24, 2024 · 37 min

Dan Faggella on the Race to AGI

Dan Faggella joins the podcast to discuss whether humanity should eventually create AGI, how AI will change power dynamics between institutions, what drives AI progress, and which industries are implementing AI successfully. Find out more about Dan at https://danfaggella.com Timestamps: 00:00 Value differences in AI 12:07 Should we eventually create AGI? 28:22 What is a worthy successor? 43:19 AI changing power dynamics 59:00 Open source AI 01:05:07 What drives AI progress? 01:16:36 What limits ...

May 03, 2024 · 2 hr 45 min

Liron Shapira on Superintelligence Goals

Liron Shapira joins the podcast to discuss superintelligence goals, what makes AI different from other technologies, risks from centralizing power, and whether AI can defend us from AI. Timestamps: 00:00 Intelligence as optimization-power 05:18 Will LLMs imitate human values? 07:15 Why would AI develop dangerous goals? 09:55 Goal-completeness 12:53 Alignment to which values? 22:12 Is AI just another technology? 31:20 What is FOOM? 38:59 Risks from centralized power 49:18 Can AI defend us against...

Apr 19, 2024 · 1 hr 27 min

Annie Jacobsen on Nuclear War - a Second by Second Timeline

Annie Jacobsen joins the podcast to lay out a second by second timeline for how nuclear war could happen. We also discuss time pressure, submarines, interceptor missiles, cyberattacks, and concentration of power. You can find more on Annie's work at https://anniejacobsen.com Timestamps: 00:00 A scenario of nuclear war 06:56 Who would launch an attack? 13:50 Detecting nuclear attacks 19:37 The first critical seconds 29:42 Decisions under time pressure 34:27 Lessons from insiders 44:18 Submarines ...

Apr 05, 2024 · 1 hr 26 min

Katja Grace on the Largest Survey of AI Researchers

Katja Grace joins the podcast to discuss the largest survey of AI researchers conducted to date, AI researchers' beliefs about different AI risks, capabilities required for continued AI-related transformation, the idea of discontinuous progress, the impacts of AI from either side of the human-level intelligence threshold, intelligence and power, and her thoughts on how we can mitigate AI risk. Find more on Katja's work at https://aiimpacts.org/. Timestamps: 0:20 AI Impacts surveys 18:11 What AI ...

Mar 14, 2024 · 1 hr 8 min

Holly Elmore on Pausing AI, Hardware Overhang, Safety Research, and Protesting

Holly Elmore joins the podcast to discuss pausing frontier AI, hardware overhang, safety research during a pause, the social dynamics of AI risk, and what prevents AGI corporations from collaborating. You can read more about Holly's work at https://pauseai.info Timestamps: 00:00 Pausing AI 10:23 Risks during an AI pause 19:41 Hardware overhang 29:04 Technological progress 37:00 Safety research during a pause 54:42 Social dynamics of AI risk 1:10:00 What prevents cooperation? 1:18:21 What about C...

Feb 29, 2024 · 2 hr 36 min