
Future of Life Institute Podcast

Future of Life Institute (www.futureoflife.org)
The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.

Episodes

Sam Barker and David Pearce on Art, Paradise Engineering, and Existential Hope (With Guest Mix)

Sam Barker, a Berlin-based music producer, and David Pearce, philosopher and author of The Hedonistic Imperative, join us on a special episode of the FLI Podcast to spread some existential hope. Sam is the creator of euphoric soundscapes inspired by the writings of David Pearce, largely exemplified in his latest album, aptly named "Utility." Sam's artistic excellence, motivated by blissful visions of the future, and David's philosophical and technological writings on the potential for the b...

Jun 24, 2020 · 2 hr 42 min

Steven Pinker and Stuart Russell on the Foundations, Benefits, and Possible Existential Threat of AI

Over the past several centuries, the human condition has been profoundly changed by the agricultural and industrial revolutions. With the creation and continued development of AI, we stand in the midst of an ongoing intelligence revolution that may prove far more transformative than the previous two. How did we get here, and what were the intellectual foundations necessary for the creation of AI? What benefits might we realize from aligned AI systems, and what are the risks and potential pitfall...

Jun 15, 2020 · 2 hr 53 min

Sam Harris on Global Priorities, Existential Risk, and What Matters Most

Human civilization increasingly has the potential both to improve the lives of everyone and to completely destroy everything. The proliferation of emerging technologies calls our attention to this never-before-seen power — and the need to cultivate the wisdom with which to steer it towards beneficial outcomes. If we're serious both as individuals and as a species about improving the world, it's crucial that we converge around the reality of our situation and what matters most. What are the most ...

Jun 01, 2020 · 2 hr 33 min

FLI Podcast: On the Future of Computation, Synthetic Biology, and Life with George Church

Progress in synthetic biology and genetic engineering promises to bring advancements in human health sciences by curing disease, augmenting human capabilities, and even reversing aging. At the same time, such technology could be used to unleash novel diseases and biological agents which could pose global catastrophic and existential risks to life on Earth. George Church, a titan of synthetic biology, joins us on this episode of the FLI Podcast to discuss the benefits and risks of our growing know...

May 15, 2020 · 1 hr 13 min

FLI Podcast: On Superforecasting with Robert de Neufville

Essential to our assessment of risk and our ability to plan for the future is our understanding of the probability of certain events occurring. If we can estimate the likelihood of risks, then we can evaluate their relative importance and apply our risk mitigation resources effectively. Predicting the future is, obviously, far from easy, and yet a community of "superforecasters" is attempting to do just that. Not only are they trying, but these superforecasters are also reliably outperforming subj...

Apr 30, 2020 · 1 hr 20 min

AIAP: An Overview of Technical AI Alignment in 2018 and 2019 with Buck Shlegeris and Rohin Shah

Just a year ago we released a two-part episode titled An Overview of Technical AI Alignment with Rohin Shah. That conversation provided details on the views of central AI alignment research organizations and many of the ongoing research efforts for designing safe and aligned systems. Much has happened in the past twelve months, so we've invited Rohin, along with fellow researcher Buck Shlegeris, back for a follow-up conversation. Today's episode focuses especially on the state of current resea...

Apr 15, 2020 · 2 hr 21 min

FLI Podcast: Lessons from COVID-19 with Emilia Javorsky and Anthony Aguirre

The global spread of COVID-19 has put tremendous stress on humanity’s social, political, and economic systems. The breakdowns triggered by this sudden stress indicate areas where national and global systems are fragile, and where preventative and preparedness measures may be insufficient. The COVID-19 pandemic thus serves as an opportunity for reflecting on the strengths and weaknesses of human civilization and what we can do to help make humanity more resilient. The Future of Life Institute's E...

Apr 09, 2020 · 1 hr 27 min

FLI Podcast: The Precipice: Existential Risk and the Future of Humanity with Toby Ord

Toby Ord’s “The Precipice: Existential Risk and the Future of Humanity” has emerged as a new cornerstone text in the field of existential risk. The book presents the foundations and recent developments of this budding field from an accessible vantage point, providing an overview suitable for newcomers. For those already familiar with existential risk, Toby brings new historical and academic context to the problem, along with central arguments for why existential risk matters, novel quantitative ...

Apr 01, 2020 · 1 hr 11 min

AIAP: On Lethal Autonomous Weapons with Paul Scharre

Lethal autonomous weapons represent the novel miniaturization and integration of modern AI and robotics technologies for military use. This emerging technology thus represents a potentially critical inflection point in the development of AI governance. Whether we allow AI to make the decision to take human life and where we draw lines around the acceptable and unacceptable uses of this technology will set precedents and grounds for future international AI collaboration and governance. Such regul...

Mar 16, 2020 · 1 hr 16 min

FLI Podcast: Distributing the Benefits of AI via the Windfall Clause with Cullen O'Keefe

As with the agricultural and industrial revolutions before it, the intelligence revolution currently underway will unlock new degrees and kinds of abundance. Powerful forms of AI will likely generate never-before-seen levels of wealth, raising critical questions about its beneficiaries. Will this newfound wealth be used to provide for the common good, or will it become increasingly concentrated in the hands of the few who wield AI technologies? Cullen O'Keefe joins us on this episode of the FLI ...

Feb 28, 2020 · 1 hr 5 min

AIAP: On the Long-term Importance of Current AI Policy with Nicolas Moës and Jared Brown

From Max Tegmark's Life 3.0 to Stuart Russell's Human Compatible and Nick Bostrom's Superintelligence, much has been written and said about the long-term risks of powerful AI systems. When considering concrete actions one can take to help mitigate these risks, governance and policy-related solutions become an attractive area of consideration. But just what can anyone do in the present-day policy sphere to help ensure that powerful AI systems remain beneficial and aligned with human values? Do to...

Feb 18, 2020 · 1 hr 11 min

FLI Podcast: Identity, Information & the Nature of Reality with Anthony Aguirre

Our perceptions of reality are based on the physics of interactions ranging from millimeters to miles in scale. But when it comes to the very small and the very massive, our intuitions often fail us. Given the extent to which modern physics challenges our understanding of the world around us, how wrong could we be about the fundamental nature of reality? And given our failure to anticipate the counterintuitive nature of the universe, how accurate are our intuitions about metaphysical and persona...

Jan 31, 2020 · 2 hr 45 min

AIAP: Identity and the AI Revolution with David Pearce and Andrés Gómez Emilsson

In the 1984 book Reasons and Persons, philosopher Derek Parfit asks the reader to consider the following scenario: You step into a teleportation machine that scans your complete atomic structure, annihilates you, and then relays that atomic information to Mars at the speed of light. There, a similar machine recreates your exact atomic structure and composition using locally available resources. Have you just traveled, Parfit asks, or have you committed suicide? Would you step into this machine? ...

Jan 16, 2020 · 2 hr 3 min

On Consciousness, Morality, Effective Altruism & Myth with Yuval Noah Harari & Max Tegmark

Neither Yuval Noah Harari nor Max Tegmark needs much in the way of introduction. Both are avant-garde thinkers at the forefront of 21st century discourse around science, technology, society and humanity’s future. This conversation represents a rare opportunity for two intellectual leaders to apply their combined expertise — in physics, artificial intelligence, history, philosophy and anthropology — to some of the most profound issues of our time. Max and Yuval bring their own macroscopic perspect...

Dec 31, 2019 · 1 hr 1 min

FLI Podcast: Existential Hope in 2020 and Beyond with the FLI Team

As 2019 comes to an end and the opportunities of 2020 begin to emerge, it's a great time to reflect on the past year and our reasons for hope in the year to come. We spend much of our time on this podcast discussing risks that could lead to the extinction of Earth-originating intelligent life, or to the permanent and drastic curtailing of its potential. While this is important and useful, much has been done at FLI and in the broader world to address these issues in service of the common ...

Dec 28, 2019 · 2 hr 39 min

AIAP: On DeepMind, AI Safety, and Recursive Reward Modeling with Jan Leike

Jan Leike is a senior research scientist who leads the agent alignment team at DeepMind. His team is one of three within DeepMind's technical AGI group; each focuses on a different aspect of ensuring advanced AI systems are aligned and beneficial. Jan's journey in the field of AI has taken him from a PhD on a theoretical reinforcement learning agent called AIXI to empirical AI safety research focused on recursive reward modeling. This conversation explores his movement from theoretical to empiri...

Dec 16, 2019 · 58 min

FLI Podcast: The Psychology of Existential Risk and Effective Altruism with Stefan Schubert

We could all be more altruistic and effective in our service of others, but what exactly is it that's stopping us? What are the biases and cognitive failures that prevent us from properly acting in service of existential risks, statistically large numbers of people, and long-term future considerations? How can we become more effective altruists? Stefan Schubert, a researcher at University of Oxford's Social Behaviour and Ethics Lab, explores questions like these at the intersection of moral psyc...

Dec 02, 2019 · 59 min

Not Cool Ep 26: Naomi Oreskes on trusting climate science

It’s the Not Cool series finale, and by now we’ve heard from climate scientists, meteorologists, physicists, psychologists, epidemiologists and ecologists. We’ve gotten expert opinions on everything from mitigation and adaptation to security, policy and finance. Today, we’re tackling one final question: why should we trust them? Ariel is joined by Naomi Oreskes, Harvard professor and author of seven books, including the newly released "Why Trust Science?" Naomi lays out her case for why we shoul...

Nov 26, 2019 · 51 min

Not Cool Ep 25: Mario Molina on climate action

Most Americans believe in climate change — yet far too few are taking part in climate action. Many aren't even sure what effective climate action should look like. On Not Cool episode 25, Ariel is joined by Mario Molina, Executive Director of Protect our Winters, a non-profit aimed at increasing climate advocacy within the outdoor sports community. In this interview, Mario looks at climate activism more broadly: he explains where advocacy has fallen short, why it's important to hold corporations...

Nov 21, 2019 · 35 min

Not Cool Ep 24: Ellen Quigley and Natalie Jones on defunding the fossil fuel industry

Defunding the fossil fuel industry is one of the biggest factors in addressing climate change and lowering carbon emissions. But with international financing and powerful lobbyists on their side, fossil fuel companies often seem out of public reach. On Not Cool episode 24, Ariel is joined by Ellen Quigley and Natalie Jones, who explain why that’s not the case, and what you can do — without too much effort — to stand up to them. Ellen and Natalie, both researchers at the University of Cambridge’s...

Nov 19, 2019 · 54 min

AIAP: Machine Ethics and AI Governance with Wendell Wallach

Wendell Wallach has been at the forefront of contemporary emerging technology issues for decades now. As an interdisciplinary thinker, he has engaged at the intersections of ethics, governance, AI, bioethics, robotics, and philosophy since the earliest formulations of what we now know as AI alignment were being codified. Wendell began with a broad interest in the ethics of emerging technology and has since become focused on machine ethics and AI governance. This conversation with Wendell explor...

Nov 15, 2019 · 1 hr 13 min

Not Cool Ep 23: Brian Toon on nuclear winter: the other climate change

Though climate change and global warming are often used synonymously, there’s a different kind of climate change that also deserves attention: nuclear winter. A period of extreme global cooling that would likely follow a major nuclear exchange, nuclear winter is as of now — unlike global warming — still avoidable. But as Cold War era treaties break down and new nations gain nuclear capabilities, it's essential that we understand the potential climate impacts of nuclear war. On Not Cool Episode 2...

Nov 15, 2019 · 1 hr 3 min

Not Cool Ep 22: Cullen Hendrix on climate change and armed conflict

Right before civil war broke out in 2011, Syria experienced a historic five-year drought. This particular drought, which exacerbated economic and political insecurity within the country, may or may not have been caused by climate change. But as climate change increases the frequency of such extreme events, it’s almost certain to inflame pre-existing tensions in other countries — and in some cases, to trigger armed conflict. On Not Cool episode 22, Ariel is joined by Cullen Hendrix, co-author of ...

Nov 13, 2019 · 36 min

Not Cool Ep 21: Libby Jewett on ocean acidification

The increase of CO2 in the atmosphere is doing more than just warming the planet and threatening the lives of many terrestrial species. A large percentage of that carbon is actually reabsorbed by the oceans, causing a phenomenon known as ocean acidification — that is, our carbon emissions are literally changing the chemistry of ocean water and threatening ocean ecosystems worldwide. On Not Cool episode 21, Ariel is joined by Libby Jewett, founding Director of the Ocean Acidification Program at t...

Nov 07, 2019 · 39 min

Not Cool Ep 20: Deborah Lawrence on deforestation

This summer, the world watched in near-universal horror as thousands of square miles of rainforest went up in flames. But what exactly makes forests so precious — and deforestation so costly? On the 20th episode of Not Cool, Ariel explores the many ways in which forests impact the global climate — and the profound price we pay when we destroy them. She’s joined by Deborah Lawrence, Environmental Science Professor at the University of Virginia, whose research focuses on the ecological effects of t...

Nov 06, 2019 · 43 min

FLI Podcast: Cosmological Koans: A Journey to the Heart of Physical Reality with Anthony Aguirre

There exist many facts about the nature of reality which stand at odds with our commonly held intuitions and experiences of the world. For example, the simultaneity of events is relative, and there is no universal "now." Are these facts baked into our experience of the world? Or are our experiences and intuitions at odds with these facts? When we consider this, the origins of our mental models, and what modern physics and cosmology tell us about the nature of reality, we are beckoned ...

Oct 31, 2019 · 2 hr 31 min

Not Cool Ep 19: Ilissa Ocko on non-carbon causes of climate change

Carbon emissions account for about 50% of warming, yet carbon overwhelmingly dominates the climate change discussion. On Episode 19 of Not Cool, Ariel is joined by Ilissa Ocko for a closer look at the non-carbon causes of climate change — like methane, sulphur dioxide, and an aerosol known as black carbon — that are driving the other 50% of warming. Ilissa is a senior climate scientist with the Environmental Defense Fund and an expert on short-lived climate pollutants. She explains how these non...

Oct 31, 2019 · 38 min

Not Cool Ep 18: Glen Peters on the carbon budget and global carbon emissions

In many ways, the global carbon budget is like any other budget. There’s a maximum amount we can spend, and it must be allocated to various countries and various needs. But how do we determine how much carbon each country can emit? Can developing countries grow their economies without increasing their emissions? And if a large portion of China’s emissions come from products made for American and European consumption, who’s to blame for those emissions? On episode 18 of Not Cool, Ariel is joined ...

Oct 30, 2019 · 51 min

Not Cool Ep 17: Tackling Climate Change with Machine Learning, part 2

It’s time to get creative in the fight against climate change, and machine learning can help us do that. Not Cool episode 17 continues our discussion of “Tackling Climate Change with Machine Learning,” a nearly 100-page report co-authored by 22 researchers from some of the world’s top AI institutes. Today, Ariel talks to Natasha Jaques and Tegan Maharaj, the respective authors of the report’s “Tools for Individuals” and “Tools for Society” chapters. Natasha and Tegan explain how machine learning...

Oct 24, 2019 · 58 min