OpenAI's $20,000 AI Agent, Nauru Sells Citizenship for Relocation, and Eric Schmidt Opposes AGI Manhattan Project

Mar 10, 2025 · 8 min

Episode description

We're experimenting and would love to hear from you!

In this episode of ‘Discover Daily’ by Perplexity, we delve into the latest developments in tech and geopolitics. OpenAI is set to reshape its business model with a line of advanced AI agents, offered through monthly subscription plans ranging from $2,000 to $20,000. These agents are designed to perform complex tasks autonomously, leveraging advanced language models and decision-making algorithms. The move is backed by a $3 billion investment from SoftBank, underscoring the potential for these agents to become a major source of OpenAI's future revenue.

The Pacific island nation of Nauru is also making headlines with its controversial 'golden passport' scheme. For $105,000, individuals can gain citizenship and visa-free access to 89 countries. This initiative aims to fund Nauru's climate change mitigation efforts, as the island faces existential threats from rising sea levels. However, the program raises ethical concerns about criminal exploitation, vetting issues, and the commodification of national identity. As Nauru navigates these challenges, it will be crucial to monitor the program's effectiveness in providing necessary funds for climate adaptation without compromising national security or ethical standards.

Our main story focuses on former Google CEO Eric Schmidt's opposition to a U.S. government-led 'Manhattan Project' for developing Artificial General Intelligence (AGI). Schmidt argues that such a project could escalate international tensions and trigger a dangerous AI arms race, particularly with China. Instead, he advocates for a more cautious approach, emphasizing defensive strategies and international cooperation in AI advancement. This stance reflects a growing concern about the risks of unchecked superintelligence development and highlights the need for policymakers and tech leaders to prioritize AI safety and collaboration.

From Perplexity's Discover Feed:

https://www.perplexity.ai/page/openai-s-20000-ai-agent-nvz8rzw7TZ.ECGL9usO2YQ

https://www.perplexity.ai/page/nauru-sells-citizenship-for-re-mWT.fYg_Su.C7FVaMGqCfQ

https://www.perplexity.ai/page/eric-schmidt-opposes-agi-manha-pymGB79nR.6rRtLvcqONIA 


Introducing Perplexity Deep Research:

https://www.perplexity.ai/hub/blog/introducing-perplexity-deep-research 




Perplexity is the fastest and most powerful way to search the web. Perplexity crawls the web and curates the most relevant and up-to-date sources (from academic papers to Reddit threads) to create the perfect response to any question or topic you’re interested in.

Take the world's knowledge with you anywhere. Available on iOS and Android

Join our growing Discord community for the latest updates and exclusive content.


Transcript

Speaker 1

Welcome to Discover Daily, by Perplexity, your AI-curated digest of breakthroughs in tech, science, and culture. I'm Alex. Today, we're exploring a range of topics, including Eric Schmidt's opposition to an AGI Manhattan Project. But first, let's look at what else is happening across the tech and geopolitical landscape.

Our first story today is about OpenAI's ambitious new AI agent plans. The company behind ChatGPT is set to introduce a line of AI agents with monthly subscription plans ranging from $2,000 to $20,000. These agents are expected to contribute up to 25% of OpenAI's future revenue.

The pricing structure is tiered: low-end agents at $2,000 per month, targeting high-income knowledge workers; mid-tier agents around $10,000 per month for software development tasks; and high-end agents up to $20,000 per month, functioning as PhD-level research assistants. This move signals a significant shift in OpenAI's business model.

The company anticipates that agent products will make up 20% to 25% of its total revenue in the future. This projection is supported by a major $3 billion investment from SoftBank specifically for agent development this year. These AI agents represent a leap in artificial intelligence capability.

They're designed to autonomously perform complex tasks, utilizing advanced language models and decision-making algorithms to interact with digital environments, execute actions, and solve problems with minimal human intervention.

Moving on to our second story, we turn to the Pacific island nation of Nauru, which has launched a controversial golden passport scheme to fund its climate change mitigation efforts. Nauru is now offering citizenship for $105,000 through its Economic and Climate Resilience Citizenship Program.

This initiative provides visa-free access to 89 countries and is expected to generate $5.7 million in its first year, potentially scaling up to $42 million annually. The urgency behind this program is clear.

Nauru, a tiny coral atoll with an average elevation of just three metres above sea level, faces existential threats from climate change, particularly rising sea levels. The island nation's limited resources and small population of around 10,000 make financing large-scale adaptation projects particularly challenging.

Funds from the citizenship program are earmarked for inland relocation and infrastructure development, with the initial relocation phase estimated to cost over $60 million. President David Adeang emphasized that this initiative is about securing a viable future for upcoming generations. However, the program has sparked debate due to potential risks and ethical concerns.

Critics worry about criminal exploitation, vetting issues, and the commodification of national identity. There are also concerns about transparency and the potential for corruption in the application process.

Now let's dive into our main story of the day: former Google CEO Eric Schmidt's opposition to a US government-led Manhattan Project for developing artificial general intelligence, or AGI.

Schmidt, along with other tech leaders, has published a policy paper titled Superintelligence Strategy that argues against a government-led AGI development program modeled after the 1940s atomic bomb project. The paper outlines several key concerns.

First, Schmidt and his co-authors warn that a US-led AGI Manhattan Project could escalate international tensions and potentially trigger a dangerous AI arms race, particularly with China. They argue that rival nations, fearing a global power imbalance in superintelligence, might resort to sophisticated cyber attacks to disrupt US AI advancements.

The authors also challenge the assumption that competitors would simply accept an enduring imbalance or potential existential threat rather than taking preventive action. This perspective marks a significant shift in Schmidt's stance on AI competition. In the past, Schmidt had been more supportive of aggressive AI development.

However, this paper reflects a growing concern about the risks of unchecked superintelligence development. He now advocates for a more cautious approach, emphasizing defensive strategies and international cooperation in AI advancement. A key concept introduced in the paper is Mutual Assured AI Malfunction.

This strategy draws parallels to nuclear deterrence while addressing the unique challenges of AGI development. Mutual Assured AI Malfunction suggests proactively disabling threatening AI projects.

Rather than waiting for adversaries to weaponize AGI, the authors propose expanding cyber attack capabilities to neutralize dangerous AI developments in other nations and limiting adversaries' access to advanced AI chips and open-source models.

This approach represents a shift from winning the race to superintelligence to deterring other countries from creating potentially harmful AGI. Instead of a high-stakes race for AGI supremacy, Schmidt and his co-authors advocate for a more measured, defensive strategy that prioritizes AI safety, focuses on deterring hostile AI development, and promotes international cooperation.

They argue that fostering collaboration and shared safety standards could lead to more stable and beneficial outcomes in the long-term pursuit of AGI. That wraps up today's episode of Discover Daily. Our new Deep Research feature, launched earlier this month, now analyzes hundreds of sources in minutes.

Think of it as deploying a personal research team through our web and mobile platforms. This cutting-edge tool combines autonomous reasoning with rapid processing to deliver exhaustive reports on specialized topics. Deep Research excels at expert-level tasks across various domains, from finance and marketing to product research, and is available on our desktop and mobile apps.

Thanks for listening. We'll be back with more stories that shape our world. Until then, stay curious.

Transcript source: Provided by creator in RSS feed.