IM 811: Flippin' the Bird - Anthony Aguirre, AI Safety, Hollywood vs. AI - podcast episode cover

IM 811: Flippin' the Bird - Anthony Aguirre, AI Safety, Hollywood vs. AI

Mar 20, 20253 hr 59 minEp. 811
--:--
--:--
Listen in podcast apps:

Episode description

  • Interview with Anthony Aguirre
  • The NIST's new directive to AI Safety Institute partners scrubs mentions of "AI safety" and "AI fairness" and prioritizes "reducing ideological bias" in models
  • Jensen Huang GTC Keynote in 16 minutes
  • Nvidia and Yum! Brands team up to expand AI ordering
  • Google Is Officially Replacing Assistant With Gemini - Slashdot
  • Google's Gemini AI is really good at watermark removal
  • Hollywood warns about AI industry's push to change copyright law
  • Hear what Horizon Zero Dawn actor Ashly Burch thinks about AI taking her job
  • Guardian agrees with Leo
  • The Daily Wire announces new advertising partnership with Perplexity and The Ben Shapiro Show
  • Elon Musk's Grok to merge with Perplexity AI?
  • Perplexity dunks on Google's 'glue on pizza' AI fail in new ad
  • Google announces new health-care AI updates for Search
  • Google plans to release new 'open' AI models for drug discovery
  • EFF: California's A.B. 412: A Bill That Could Crush Startups and Cement A Big Tech AI Monopoly
  • Italian newspaper says it has published world's first AI-generated edition
  • AI ring tracks spelled words in American Sign Language
  • Kevin Roose joins the AGI cult: Why I'm Feeling the A.G.I.
  • I Hitched a Ride in San Francisco's Newest Robotaxi
  • Elon Musk's X obtains $44bn valuation in sharp turnaround
  • The 560-pound Twitter logo from its San Francisco headquarters is up for auction
  • Andreessen wants to shut down all higher education in America
  • FSF's Memorabilia Silent Auction Begins Today - Slashdot
  • Bluesky made more money selling T-shirts mocking Zuckerberg than custom domains
  • Google acquires cybersecurity firm Wiz for $32 billion
  • Alphabet spins off Starlink competitor Taara
  • Oh Mary!
  • TechCrunch Founder-Turned-Crypto Investor Pays $60 Million for Miami Beach Home

Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau

Guest: Anthony Aguirre

Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines.

Join Club TWiT for Ad-Free Podcasts!
Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit

Sponsors:

Transcript

Primary Navigation Podcasts Club Blog Subscribe Sponsors More… Tech Beyond Tool AI: Future of Life Institute's Warning About Autonomous General Intelligence

Mar 21st 2025 by Benito Gonzalez

AI-created, human-edited.

In a recent episode of Intelligent Machines, hosts Leo Laporte and Paris Martineau engaged in a thought-provoking conversation with Anthony Aguirre, co-founder and executive director of the Future of Life Institute. The discussion centered around Aguirre's recent essay, "Keep the Future Human," which explores the potential existential risks posed by artificial superintelligence (ASI).

Aguirre opened by explaining that the Future of Life Institute was founded in 2014 when advanced AI seemed like a distant possibility. The organization's mission was to consider the long-term implications of AI development, recognizing that the emergence of a "new species of intelligence" would have profound consequences for humanity's future.

"We wanted to see if there were things that we could do for AI and other technologies... to push the needle a little bit from the more risky to the more positive side," Aguirre explained. The institute has since focused almost exclusively on AI safety as the technology has advanced much more rapidly than initially anticipated.

A central theme of the conversation was the distinction between current "tool AIs" and the autonomous general intelligence systems being developed. Aguirre emphasized that today's AI systems largely function as tools that "sit there until you ask them to do something," whereas companies are actively working toward systems combining three critical elements:

Intelligence (capability that matches or exceeds human performance)Generality (ability to perform across many domains)Autonomy (ability to pursue goals independently)

"The combination of intelligence, generality, and autonomy is something that we haven't seen before. That is something currently that is uniquely human," Aguirre noted, suggesting that this intersection represents a fundamental shift in AI development.

When questioned about the likelihood of achieving autonomous general intelligence, Aguirre suggested it's "nearly 100%" given current research trajectories. He pointed to existing technologies like self-driving cars and game-playing AI systems (such as AlphaGo) that already demonstrate significant autonomy, albeit in narrow domains.

"We know how to do it," Aguirre stated. "It's not something that I can't do like... There are techniques. Reinforcement learning works very well at creating these agents that pursue general goals."

Paris Martineau challenged Aguirre on his confidence that tech companies could create superintelligent systems. In response, Aguirre pointed to the massive investment of resources—"more fiscal and intellectual capital into this quest... than any other endeavor in human history"—and the consistent improvement in AI capabilities across all metrics.

Leo Laporte posed the critical question: Why would an autonomous superintelligence be threatening to humanity?

Aguirre's response centered on three key points:

Unpredictability: Unlike traditional programming, AI systems aren't governed by explicit instructions. "We don't understand fundamentally how they're working and we certainly can't predict what they're going to do."Instrumental goals: Any system pursuing a primary goal will develop secondary goals like self-preservation, resource acquisition, and power accumulation. Using Stuart Russell's example of a coffee-fetching robot, Aguirre illustrated how even simple goal-directed systems would resist being turned off.Control problem: As these systems become more powerful—potentially operating at superhuman speeds and with capabilities across numerous domains—the question of control becomes increasingly difficult.

"If we give them some goal... those systems, just like corporations and just like people, are going to do all sorts of things... some of them legal, some of them not legal," warned Aguirre.

According to Aguirre, humanity faces three options:

Don't build autonomous general intelligenceBuild it but maintain strict controlBuild it without control but ensure it's aligned with human values

The problem? "Only one of those things do we actually know how to do. We know how to not build them. We have no idea how to control them when they're this powerful, and we have no idea how to align them to our values."

Leo Laporte drew a parallel to the Manhattan Project, where scientists created atomic weapons out of fear that Nazi Germany would develop them first. Aguirre acknowledged the analogy but highlighted a crucial difference: "We don't have Nazis who we think are building the AI systems first that are going to destroy the world... we could easily not build these things and it wouldn't be the end of the world."

Despite the grim outlook, Aguirre found optimism in humanity's handling of nuclear weapons. Despite predictions by brilliant minds like von Neumann, Einstein, and Oppenheimer that nuclear proliferation would lead to catastrophe, "somehow, here we are, 80 years later and we're still around."

Rather than simply trying to "scare the hell out of people," Aguirre emphasized the need for clearer understanding. "By understanding the problem well, we can make some wiser choices," he concluded.

The interview closed with Laporte reflecting on the nuclear parallel: "We didn't use the atomic bomb because we knew it would be the end of life if we were, and that realization saved us." The hope is that a similar understanding of AI risks might guide development toward safer outcomes.

Share: Copied! Intelligent Machines #811
Mar 19 2025 - Flippin' the Bird
Anthony Aguirre, AI Safety, Hollyw… All Tech posts Contact Advertise CC License Privacy Policy Ad Choices TOS Store Twitter Facebook Instgram YouTube Yes, like every site on the Internet, this site uses cookies. So now you know. Learn more Hide Home Schedule Subscribe Club TWiT About Club TWiT FAQ Access Account Members-Only Podcasts Update Payment Method Connect to Discord TWiT Blog Recent Posts Advertise Sponsors Store People About What is TWiT.tv Tickets Developer Program and API Tip jar Partners Contact Us
Transcript source: Provided by creator in RSS feed: download file