
Future of Life Institute Podcast

Future of Life Institute (www.futureoflife.org)
The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.

Episodes

Anthony Aguirre and Anna Yelizarova on FLI's Worldbuilding Contest

Anthony Aguirre and Anna Yelizarova join us to discuss FLI's new Worldbuilding Contest.

Topics discussed in this episode include:
- Motivations behind the contest
- The importance of worldbuilding
- The rules of the contest
- What a submission consists of
- Due date and prizes

Learn more about the contest here: https://worldbuild.ai/
Join the Discord: https://discord.com/invite/njZyTJpwMz
You can find the page for the podcast here: https://futureoflife.org/2022/02/08/anthony-aguirre-and-anna-yelizaro...

Feb 09, 2022 · 33 min

David Chalmers on Reality+: Virtual Worlds and the Problems of Philosophy

David Chalmers, Professor of Philosophy and Neural Science at NYU, joins us to discuss his newest book, Reality+: Virtual Worlds and the Problems of Philosophy.

Topics discussed in this episode include:
- Virtual reality as genuine reality
- Why VR is compatible with the good life
- Why we can never know whether we're in a simulation
- Consciousness in virtual realities
- The ethics of simulated beings

You can find the page for the podcast here: https://futureoflife.org/2022/01/26/david-chalmers-on-re...

Jan 26, 2022 · 2 hr 43 min

Rohin Shah on the State of AGI Safety Research in 2021

Rohin Shah, Research Scientist on DeepMind's technical AGI safety team, joins us to discuss AI value alignment, how an AI researcher might decide whether to work on AI safety, and why we don't know that AI systems won't lead to existential risk.

Topics discussed in this episode include:
- Inner Alignment versus Outer Alignment
- Foundation Models
- Structural AI Risks
- Unipolar versus Multipolar Scenarios
- The Most Important Thing That Impacts the Future of Life

You can find the page for the ...

Nov 02, 2021 · 2 hr 44 min

Future of Life Institute's $25M Grants Program for Existential Risk Reduction

Future of Life Institute President Max Tegmark and our grants team, Andrea Berman and Daniel Filan, join us to announce a $25M multi-year AI Existential Safety Grants Program.

Topics discussed in this episode include:
- The reason Future of Life Institute is offering AI Existential Safety Grants
- Max speaks about how receiving a grant changed his career early on
- Daniel and Andrea provide details on the fellowships and future grant priorities

Check out our grants programs here: https://grants....

Oct 18, 2021 · 25 min

Filippa Lentzos on Global Catastrophic Biological Risks

Dr. Filippa Lentzos, Senior Lecturer in Science and International Security at King's College London, joins us to discuss the most pressing issues in biosecurity, big data in biology and the life sciences, and the governance of biological risk.

Topics discussed in this episode include:
- The most pressing issue in biosecurity
- Stories from when biosafety labs failed to contain dangerous pathogens
- The lethality of pathogens being worked on at biolaboratories
- Lessons from COVID-19

You can find the pag...

Oct 01, 2021 · 58 min

Susan Solomon and Stephen Andersen on Saving the Ozone Layer

Susan Solomon, internationally recognized atmospheric chemist, and Stephen Andersen, leader of the Montreal Protocol, join us to tell the story of the ozone hole and their roles in helping to bring us back from the brink of disaster.

Topics discussed in this episode include:
- The industrial and commercial uses of chlorofluorocarbons (CFCs)
- How we discovered the atmospheric effects of CFCs
- The Montreal Protocol and its significance
- Dr. Solomon's, Dr. Farman's, and Dr. Andersen's crucial roles ...

Sep 16, 2021 · 2 hr 45 min

James Manyika on Global Economic and Technological Trends

James Manyika, Chairman and Director of the McKinsey Global Institute, joins us to discuss the rapidly evolving landscape of the modern global economy and the role of technology in it.

Topics discussed in this episode include:
- The modern social contract
- Reskilling, wage stagnation, and inequality
- Technology-induced unemployment
- The structure of the global economy
- The geographic concentration of economic growth

You can find the page for this podcast here: https://futureoflife.org/2021/09/06/...

Sep 07, 2021 · 2 hr 38 min

Michael Klare on the Pentagon's view of Climate Change and the Risks of State Collapse

Michael Klare, Five College Professor of Peace & World Security Studies, joins us to discuss the Pentagon's view of climate change, why it's distinctive, and how this all ultimately relates to the risks of great-power conflict and state collapse.

Topics discussed in this episode include:
- How the US military views and takes action on climate change
- Examples of existing climate-related difficulties and what they tell us about the future
- Threat multiplication from climate change
- The risks of c...

Jul 30, 2021 · 2 hr 35 min

Avi Loeb on UFOs and if they're Alien in Origin

Avi Loeb, Professor of Science at Harvard University, joins us to discuss unidentified aerial phenomena and a recent US Government report assessing their existence and threat.

Topics discussed in this episode include:
- Evidence counting for the natural, human, and extraterrestrial origins of UAPs
- The culture of science and how it deals with UAP reports
- How humanity should respond if we discover UAPs are alien in origin
- A project for collecting high-quality data on UAPs

You can find the page f...

Jul 09, 2021 · 41 min

Avi Loeb on 'Oumuamua, Aliens, Space Archeology, Great Filters, and Superstructures

Avi Loeb, Professor of Science at Harvard University, joins us to discuss a recent interstellar visitor, if we've already encountered alien technology, and whether we're ultimately alone in the cosmos.

Topics discussed in this episode include:
- Whether 'Oumuamua is alien or natural in origin
- The culture of science and how it affects fruitful inquiry
- Looking for signs of alien life throughout the solar system and beyond
- Alien artefacts and galactic treaties
- How humanity should handle a potent...

Jul 09, 2021 · 2 hr 4 min

Nicolas Berggruen on the Dynamics of Power, Wisdom, and Ideas in the Age of AI

Nicolas Berggruen, investor and philanthropist, joins us to explore the dynamics of power, wisdom, technology, and ideas in the 21st century.

Topics discussed in this episode include:
- What wisdom consists of
- The role of ideas in society and civilization
- The increasing concentration of power and wealth
- The technological displacement of human labor
- Democracy, universal basic income, and universal basic capital
- Living an examined life

You can find the page for this podcast here: https://future...

Jun 01, 2021 · 1 hr 8 min

Bart Selman on the Promises and Perils of Artificial Intelligence

Bart Selman, Professor of Computer Science at Cornell University, joins us to discuss a wide range of AI issues, from autonomous weapons and AI consciousness to international governance and the possibilities of superintelligence.

Topics discussed in this episode include:
- Negative and positive outcomes from AI in the short, medium, and long term
- The perils and promises of AGI and superintelligence
- AI alignment and AI existential risk
- Lethal autonomous weapons
- AI governance and racing to pow...

May 20, 2021 · 2 hr 41 min

Jaan Tallinn on Avoiding Civilizational Pitfalls and Surviving the 21st Century

Jaan Tallinn, investor, programmer, and co-founder of the Future of Life Institute, joins us to discuss his perspective on AI, synthetic biology, unknown unknowns, and what's needed for mitigating existential risk in the 21st century.

Topics discussed in this episode include:
- Intelligence and coordination
- Existential risk from AI, synthetic biology, and unknown unknowns
- AI adoption as a delegation process
- Jaan's investments and philanthropic efforts
- International coordination and incentive s...

Apr 21, 2021 · 1 hr 27 min

Joscha Bach and Anthony Aguirre on Digital Physics and Moving Towards Beneficial Futures

Joscha Bach, Cognitive Scientist and AI researcher, as well as Anthony Aguirre, UCSC Professor of Physics, join us to explore the world through the lens of computation and the difficulties we face on the way to beneficial futures.

Topics discussed in this episode include:
- Understanding the universe through digital physics
- How human consciousness operates and is structured
- The path to aligned AGI and bottlenecks to beneficial futures
- Incentive structures and collective coordination

You can fi...

Apr 01, 2021 · 2 hr 38 min

Roman Yampolskiy on the Uncontrollability, Incomprehensibility, and Unexplainability of AI

Roman Yampolskiy, Professor of Computer Science at the University of Louisville, joins us to discuss whether we can control, comprehend, and explain AI systems, and how this constrains the project of AI safety.

Topics discussed in this episode include:
- Roman's results on the unexplainability, incomprehensibility, and uncontrollability of AI
- The relationship between AI safety, control, and alignment
- Virtual worlds as a proposal for solving multi-multi alignment
- AI security

You can find the pa...

Mar 20, 2021 · 1 hr 12 min

Stuart Russell and Zachary Kallenborn on Drone Swarms and the Riskiest Aspects of Autonomous Weapons

Stuart Russell, Professor of Computer Science at UC Berkeley, and Zachary Kallenborn, WMD and drone swarms expert, join us to discuss the highest-risk and most destabilizing aspects of lethal autonomous weapons.

Topics discussed in this episode include:
- The current state of the deployment and development of lethal autonomous weapons and swarm technologies
- Drone swarms as a potential weapon of mass destruction
- The risks of escalation, unpredictability, and proliferation with regard to autonom...

Feb 25, 2021 · 2 hr 40 min

John Prendergast on Non-dual Awareness and Wisdom for the 21st Century

John Prendergast, former adjunct professor of psychology at the California Institute of Integral Studies, joins Lucas Perry for a discussion about the experience and effects of ego-identification, how to shift to new levels of identity, the nature of non-dual awareness, and the potential relationship between waking up and collective human problems. This is not an FLI Podcast, but a special release where Lucas shares a direction he feels has an important relationship with AI alignment and existen...

Feb 09, 2021 · 2 hr 46 min

Beatrice Fihn on the Total Elimination of Nuclear Weapons

Beatrice Fihn, executive director of the International Campaign to Abolish Nuclear Weapons (ICAN) and Nobel Peace Prize recipient, joins us to discuss the current risks of nuclear war, policies that can reduce the risks of nuclear conflict, and how to move towards a nuclear-weapons-free world.

Topics discussed in this episode include:
- The current nuclear weapons geopolitical situation
- The risks and mechanics of accidental and intentional nuclear war
- Policy proposals for reducing the risks of ...

Jan 22, 2021 · 1 hr 18 min

Max Tegmark and the FLI Team on 2020 and Existential Risk Reduction in the New Year

Max Tegmark and members of the FLI core team come together to discuss favorite projects from 2020, what we've learned from the past year, and what we think is needed for existential risk reduction in 2021.

Topics discussed in this episode include:
- FLI's perspectives on 2020 and hopes for 2021
- What our favorite projects from 2020 were
- The biggest lessons we've learned from 2020
- What we see as crucial and needed in 2021 to ensure and make improvements towards existential safety

You can find t...

Jan 08, 2021 · 1 hr 1 min

Future of Life Award 2020: Saving 200,000,000 Lives by Eradicating Smallpox

The recipients of the 2020 Future of Life Award, William Foege, Michael Burkinsky, and Victor Zhdanov Jr., join us on this episode of the FLI Podcast to recount the story of smallpox eradication, William Foege's and Victor Zhdanov Sr.'s involvement in the eradication, and their personal experience of the events.

Topics discussed in this episode include:
- William Foege's and Victor Zhdanov's efforts to eradicate smallpox
- Personal stories from Foege's and Zhdanov's lives
- The history of smallpox ...

Dec 11, 2020 · 2 hr 54 min

Sean Carroll on Consciousness, Physicalism, and the History of Intellectual Progress

Sean Carroll, theoretical physicist at Caltech, joins us on this episode of the FLI Podcast to comb through the history of human thought, the strengths and weaknesses of various intellectual movements, and how we are to situate ourselves in the 21st century given progress thus far.

Topics discussed in this episode include:
- Important intellectual movements and their merits
- The evolution of metaphysical and epistemological views over human history
- Consciousness, free will, and philosophical blu...

Dec 02, 2020 · 2 hr 31 min

Mohamed Abdalla on Big Tech, Ethics-washing, and the Threat on Academic Integrity

Mohamed Abdalla, PhD student at the University of Toronto, joins us to discuss how Big Tobacco and Big Tech work to manipulate public opinion and academic institutions in order to maximize profits and avoid regulation.

Topics discussed in this episode include:
- How Big Tobacco uses its wealth to obfuscate the harm of tobacco and appear socially responsible
- The tactics shared by Big Tech and Big Tobacco to perform ethics-washing and avoid regulation
- How Big Tech and Big Tobacco work to influen...

Nov 17, 2020 · 1 hr 22 min

Maria Arpa on the Power of Nonviolent Communication

Maria Arpa, Executive Director of the Center for Nonviolent Communication, joins the FLI Podcast to share the ins and outs of the powerful needs-based framework of nonviolent communication.

Topics discussed in this episode include:
- What nonviolent communication (NVC) consists of
- How NVC is different from normal discourse
- How NVC is composed of observations, feelings, needs, and requests
- NVC for systemic change
- Foundational assumptions in NVC
- An NVC exercise

You can find the page for this p...

Nov 02, 2020 · 1 hr 13 min

Stephen Batchelor on Awakening, Embracing Existential Risk, and Secular Buddhism

Stephen Batchelor, a Secular Buddhist teacher and former monk, joins the FLI Podcast to discuss the project of awakening, the facets of human nature which contribute to extinction risk, and how we might better embrace existential threats.

Topics discussed in this episode include:
- The projects of awakening and growing the wisdom with which to manage technologies
- What might be possible from embarking on the project of waking up
- Facets of human nature that contribute to existential risk
- The dange...

Oct 15, 2020 · 2 hr 39 min

Kelly Wanser on Climate Change as a Possible Existential Threat

Kelly Wanser from SilverLining joins us to discuss techniques for climate intervention to mitigate the impacts of human-induced climate change.

Topics discussed in this episode include:
- The risks of climate change in the short term
- Tipping points and tipping cascades
- Climate intervention via marine cloud brightening and releasing particles in the stratosphere
- The benefits and risks of climate intervention techniques
- The international politics of climate change and weather modification ...

Sep 30, 2020 · 2 hr 46 min

Andrew Critch on AI Research Considerations for Human Existential Safety

In this episode of the AI Alignment Podcast, Andrew Critch joins us to discuss a recent paper he co-authored with David Krueger titled AI Research Considerations for Human Existential Safety. We explore a wide range of issues, from how the mainstream computer science community views AI existential risk, to the need for more accurate terminology in the field of AI existential safety and the risks of what Andrew calls prepotent AI systems. Crucially, we also discuss what Andrew sees as being the m...

Sep 16, 2020 · 2 hr 51 min

Iason Gabriel on Foundational Philosophical Questions in AI Alignment

In the contemporary practice of many scientific disciplines, questions of values, norms, and political thought rarely explicitly enter the picture. In the realm of AI alignment, however, the normative and technical come together in an important and inseparable way. How do we decide on an appropriate procedure for aligning AI systems to human values when there is disagreement over what constitutes a moral alignment procedure? Choosing any procedure or set of values with which to align AI brings i...

Sep 03, 2020 · 2 hr 55 min

Peter Railton on Moral Learning and Metaethics in AI Systems

From a young age, humans are capable of developing moral competency and autonomy through experience. We begin life by constructing sophisticated moral representations of the world that allow us to successfully navigate complex social situations with sensitivity to morally relevant information and variables. This capacity for moral learning allows us to solve open-ended problems with other persons who may hold complex beliefs and preferences. As AI systems become increasingly ...

Aug 18, 2020 · 2 hr 42 min

Evan Hubinger on Inner Alignment, Outer Alignment, and Proposals for Building Safe Advanced AI

It's well-established in the AI alignment literature what happens when an AI system learns or is given an objective that doesn't fully capture what we want. Human preferences and values are inevitably left out and the AI, likely being a powerful optimizer, will take advantage of the dimensions of freedom afforded by the misspecified objective and set them to extreme values. This may allow for better optimization on the goals in the objective function, but can have catastrophic consequences for h...

Jul 01, 2020 · 2 hr 37 min

Barker - Hedonic Recalibration (Mix)

This is a mix by Barker, a Berlin-based music producer, that was featured on our last podcast: Sam Barker and David Pearce on Art, Paradise Engineering, and Existential Hope (With Guest Mix). We hope that you'll find inspiration and well-being in this soundscape. You can find the page for this podcast here: https://futureoflife.org/2020/06/24/sam-barker-and-david-pearce-on-art-paradise-engineering-and-existential-hope-featuring-a-guest-mix/

Tracklist: Delta Rain Dance - 1 John Beltran - A Differen...

Jun 26, 2020 · 44 min