3 Takeaways Podcast Transcript
Lynn Thoman
(https://www.3takeaways.com/)
Ep 225: Former Google CEO Eric Schmidt on AI: Shaping the Next Era of Humanity
This transcript was auto-generated. Please forgive any errors.
Lynn Thoman: To quote from the introduction of Eric Schmidt's new book, Genesis, “The latest capabilities of artificial intelligence, impressive as they are, will appear weak in hindsight as its powers increase at an accelerating rate. Powers we have not yet imagined are set to infuse our daily lives.” Will artificial intelligence be humanity's final act or a new beginning?
Lynn Thoman: Hi, everyone, I'm Lynn Thoman, and this is 3 Takeaways. On 3 Takeaways, I talk with some of the world's best thinkers, business leaders, writers, politicians, newsmakers, and scientists. Each episode ends with three key takeaways to help us understand the world and maybe even ourselves a little better.
Today, I'm excited to be with Eric Schmidt. Eric is the former CEO of Google and the co-founder of Schmidt Sciences. He has chaired the Defense Department's Defense Innovation Advisory Board and co-chaired the National Security Commission on Artificial Intelligence. He has also been a member of the President's Council of Advisors on Science and Technology and the National Security Commission on Emerging Biotechnology. In addition, Eric has served on a variety of academic, corporate, and nonprofit boards, including Carnegie Mellon University, Princeton University, Apple, the Mayo Clinic, the Institute for Advanced Study, and Khan Academy. And I've probably left some out.
He also currently chairs the board of the Broad Institute and the Special Competitive Studies Project. He is also the author of multiple best-selling books, including The Age of AI. His most recent book, co-authored with Dr. Henry Kissinger and Craig Mundie, is Genesis. Genesis is an extraordinary book written with the knowledge that we are building new intelligences, which will bring into question human survival and written with the objective of securing the future of humanity.
Welcome, Eric, and thanks so much for joining 3 Takeaways for the second time, today.
Eric Schmidt: Lynn, it was great to be on your show last time. I'm really glad to be back. It's always great to see you.
Lynn Thoman: It is my pleasure and great to see you as well.
Eric, machines don't yet have what's called AGI, Artificial General Intelligence. They're also not yet implementing machines. They're primarily thinking machines that rely on humans to do the interfacing with reality. Where do you think AI [artificial intelligence] and machines will be present in our lives and running our lives in 5 or 10 years?
Eric Schmidt: Well, thank you for that. So let's start with where we are right now. Folks are very familiar now with ChatGPT and its competitors, which includes Claude and my favorite, of course, Gemini from Google, and a number of others.
And people are amazed that this stuff can write better than certainly I can. They can do songs. They can even write code.
So what happens next? The next big change is the development of what are called agents. An agent is something that sits in a little loop and learns to do something.
So you build an agent that can do the equivalent of a travel agent; it learns how to do what a travel agent does. The key thing about agents is that you can concatenate them.
You give it an English command, and it gives you an English result. And so then you can take that result and put it into the next agent. And with that, you can design a building, design a ship, design a bomb, whatever.
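To make the chaining idea concrete, here is a minimal sketch of what concatenating agents might look like in code. It is an illustration, not anything from the conversation: `call_llm` is a hypothetical stand-in for whatever model API you use, and the pipeline simply feeds each agent's English output into the next agent as input.

```python
# Minimal sketch of concatenated agents. Each agent takes English in and returns
# English out, and the result of one agent becomes the input of the next.
# call_llm is a hypothetical stand-in for a real model API (ChatGPT, Claude, Gemini, etc.).

def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your model provider of choice")

def make_agent(instructions: str):
    """Build an agent: a function that applies fixed instructions to any English input."""
    def agent(task: str) -> str:
        return call_llm(f"{instructions}\n\nInput: {task}")
    return agent

def run_pipeline(agents, task: str) -> str:
    """Concatenate agents: feed each agent's English result into the next one."""
    result = task
    for agent in agents:
        result = agent(result)
    return result

# A rough design pipeline in the spirit of "design me a building, just make it beautiful".
architect = make_agent("Turn a rough description into a detailed building design brief.")
engineer = make_agent("Turn a design brief into a structural engineering plan.")
estimator = make_agent("Turn an engineering plan into a cost and materials estimate.")

# plan = run_pipeline([architect, engineer, estimator], "A small, beautiful lakeside library.")
```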
So agents look like the next big step. Once agents are generally available, which will take a few years, I expect that we're going to see systems that are super powerful, where the architect can say, design me a building. I'll describe it roughly and just make it beautiful.
And the system will be capable of understanding that. That's not AGI. That's just really powerful AI.
AGI, artificial general intelligence, is general intelligence, which is what we have: the ability to have an idea in the morning that you didn't have the day before and pursue it. The consensus in the industry is that that's well more than 5 years from now. There's something I call the San Francisco school, which says it will be within 5 years.
I think it's more like 8 to 10, but nobody really knows. And you can see this with the most recent announcement from OpenAI of something called o1, which can begin to show you the work it does as it solves math problems. And the latest models are good enough to pass graduate-level exams in physics and chemistry and computer science and materials science and art and political science.
At some point in the next, say, 5 years, these things are going to be super brilliant, but they're still going to be under our control. The key point is what we technically call recursive self-improvement, when the system can begin to improve itself. And at that point, I think we're in a different ballgame.
And it goes something like this. I say to the computer, learn everything, start now, don't do any serious damage. That's the command.
Okay. And the system is programmed to be curious, but also to aggregate power and influence. What would it do?
We don't know. So that strikes me as a point where we better have a really good way of watching what this thing is doing. And if you think about it for a while, the only way to watch what it's doing is to have another AI system watching it, because people won't be able to follow it fast enough.
Lynn Thoman: And where do you think machines will be deciding and acting?
Eric Schmidt: Well, the most important thing right now is, don't put them anywhere near human life or mission-critical infrastructure. Don't use it to do heart operations. Don't use it to fly an airplane, those sorts of things.
The systems today can best be understood as fantastic advisors. In my case, I had a complicated question. I used one of the LLMs and it sorted it out.
My complicated question had to do with which of the speakers I wanted to buy was the best and cheapest with the strongest performance in a particular tonal range. It was something I could have done with Google by looking at all the documents, but I had AI answer the question. And by the way, it gave me the right answer and saved me the effort.
I could have done it, but I let the system do it for me. That's the next step. So the question is, at what point does the thing begin to actually run my life?
Like in the morning, I say, organize my life. Look at my schedule. I have to fit Joe, Bob, and Harry in, you figure it out.
And by the way, I don't like Harry. Joe, you should just sort of be mean to him. And so on. That's the way people actually work, the way they organize their lives. And we don't have that yet.
But I think the agentic revolution, as it's called, is for sure going to happen. Governments are watching this. So far, they're not doing anything too foolish, except that the Europeans who love to regulate are regulating it more than they should.
And President Trump has indicated that he's going to scrap the minimal regulation that was in place under Biden. So I think it's going to be pretty open for a while. If you look at China, which, given its monopoly on power, obviously has an interest in not allowing this kind of thing, they don't want threats.
So China will undoubtedly have a rule that means you can do anything you want as long as you don't threaten the state, which is sort of how their speech rules work. So I think everyone will adapt, but the race is on. One of the things that I want to say to you and your listeners is that this race is very intense, and it's happening very fast, in the next few years.
So get ready. I'm not sure that societies are ready; if I can just be blunt, I just don't think our political systems and societies are ready. Our concepts of governance are prehistoric, and we ourselves, as analog devices, as biological beings, are prehistoric compared to what is possible with the technology.
Lynn Thoman: Humans and machines operate on different time scales, with machines operating at everything from instantaneous, inhuman speeds to the very long term. They also operate on different data scales and potentially different execution scales. Will we be able to understand how AI makes decisions and acts when it starts to act in the real world?
Eric Schmidt: The question you're asking is known as the explainability problem. And today, the systems cannot explain why they know something. If you say, how do you know that?
It doesn't have the ability to say, I learned it from Lynn or whatever, because it learned it in complicated ways. There's a lot of progress on explainability, but it's not solved. I personally think that the systems will be relatively explainable.
One of the thought experiments is what happens when the system designs itself to be unexplainable. And my answer is pretty simple. If you can't figure out what it's doing, unplug it.
Just unplug it. And then think about it for a while.
Lynn Thoman: You can unplug it as long as there are only a few super intelligent AIs. If there's a world where there are many of them, and the cost is coming down for each one of them, that becomes a less viable alternative. How do you see that?
Eric Schmidt: This is generally known as the proliferation problem, which we spent a lot of time on in the book. And the question here is, if there are 10 or 20 or 30 of these things in 10 years, they'll be regulated. The governments will have all sorts of rules.
The Europeans will over-regulate. The Chinese will regulate in a Chinese way. The US will under-regulate.
But they'll fundamentally be regulated. So what happens when an evil terrorist gets full access to one of these things? We need to prevent that.
Now, one way that could occur is technically called exfiltration, where you take the model, and you literally steal it, and put it on a hard drive, and then copy it. Put it on the dark web. That would be really a bad thing.
The industry is very focused on the security of these models for that reason. But it's important that if China releases... So here's an example.
China just released two incredibly powerful models this week based on open source, and they fully released them. I'm not suggesting that those are dangerous, but I'm suggesting that a future release like that could be dangerous. We have to be very careful here about proliferation.
We understand proliferation can be used to harm a lot of people.
Lynn Thoman: What do you see as the upside of AI and learning machines?
Eric Schmidt: Well, let's start with climate change. I don't think we'll get climate change solved without very, very powerful new energy sources and materials. All of that will come as a result of AI applied to science.
Let's think about drugs, drug discovery. AlphaFold [AlphaFold is an AI system developed by Google's DeepMind that predicts a protein's 3D structure from its amino acid sequence] and the revolution in proteins, single-cell proteins, all of this kind of stuff is happening very, very quickly. There are huge companies being set up to essentially identify drug candidates and test them in computers rather than in humans.
People believe that the gains will be massive in terms of human health. What about education? People don't learn the same way.
Why are we still sitting in front of a teacher with 30 kids in a row? Wouldn't it be better if they had their own self-supervised learning with the same teacher in the room saying, Johnny, how are you doing? And Mary, how are you doing?
And they're different. And the system adapts to their education.
What about healthcare?
A lot of people in the world have very poor healthcare. We're very fortunate here in the US to have very good healthcare, although we complain about it all the time. Can you imagine if you had the equivalent of a nurse practitioner that was better at doctor stuff than most doctors?
There was an article today, for example, saying that there are many, many cases in cancer where the system can actually detect the cancer more quickly and more accurately than the cancer doctor. So these are huge systemic changes. They will affect billions of people to the positive.
So please don't shut down what we're doing. Just watch what we're doing and keep an eye on us. And Dr. Kissinger was very concerned that people like me not be put in charge. He wanted society as a whole involved: humanists, political leaders, artists, people like that. He did not trust that the tech people alone would get it right. And I agreed with him.
Lynn Thoman: Do you think there will come a point where machines will assume judgments and actions? And if so, what do you think the impact will be on both humanity and machines of machines assuming and humans surrendering independent judgment and action?
Eric Schmidt: So are we the dogs to their humanity? Will ultimately AI be our overlords? I certainly hope not.
The theoretical argument is that the computers are running the world and we're the dogs. That's unlikely. A much more likely scenario, which I do worry about, is that the concentration of power that a dictator, an autocrat type of person, can accumulate under the guise of efficiency can also restrict liberty.
I'll give you an old example. Google engineers design a car that is perfect for New York City, and they optimize the traffic so you have maximum occupancy of the roads all the time, and every car is controlled by a single computer. So a pregnant woman, or a man who has an emergency, gets in their car, and there's no button to say, I have to drive faster than everyone else because I'm in a real emergency.
Lynn Thoman: Such as a woman in the beginning of childbirth.
Eric Schmidt: Yeah, she's in labor or something. My point is that the human systems tolerate flexibility and sometimes that flexibility comes at the cost of total efficiency. And yet all of us would agree that that pregnant lady about to go into labor should get priority.
So if you're going to automate systems, you had better have them be flexible to human conditions, the good and the bad. And I worry a lot that the path to power, from a leadership perspective, involves the restriction of liberty. And the best way to do that is by aggressively implementing AI tools to restrict freedom, is my personal view.
In the Genesis book, what we say is that there's been a long-standing debate between the good king and rule by many. And we collectively agree that rule by many is better, for lots of reasons. But what if, in fact, the average person is, on a day-to-day basis, happier under a benevolent king with an efficient system?
We're going to run that experiment. I obviously know what my vote is, and I'm sure I know your vote, which is rule by many, not by a dictator. But you see my point.
I'm using the word dictator in the sense of centralized authority.
Lynn Thoman: Yeah. Is it possible that AI would ask how much agency a human should have?
Eric Schmidt: Well, a better way of saying that is that we better give instructions to the AI to preserve human agency. You can imagine a scenario where the agentic revolution, which I mentioned, actually does things so well, humans don't really control it on a tactical basis. In other words, it works so well and then we discover one day we've given something up and that's not good.
We want to preserve that freedom for human agency. I think for most people, having the travel agent be automatic, and not having to fiddle with the lights in their room because the computer gets it all set up (it's a pain in the ass, excuse my language), having those efficiencies is a good thing.
Having all the world's information at your fingertips is a good thing. But when it ultimately prevents you from having freedom, then it's not such a good thing. And I think people will discover that boundary.
Lynn Thoman: How about if AI and machines are used in the judicial system?
Eric Schmidt: One of the most interesting things about the judicial system right now is that machine learning is being used to give you summaries of outcomes. The best example is that if you're on trial, which thankfully neither you nor I are, you basically want to be sentenced in the morning, because by the end of the afternoon the judges are so tired that they give you a harder sentence. Now, how was that discovered?
That was discovered using machine learning. I don't think that computers should be judges because I think part of the principle of our democracy is that humans make decisions and they're held accountable. You want to make sure that you have human agency over everything.
There's nothing wrong with the computer making a recommendation to the judge. What is wrong is if the judge just listens to it. Let me give you an example where this doesn't work.
If it's a judge in a courtroom, it's perfectly fine. There are appeals, so if the judge makes a mistake and so forth, it gets worked out. I mean, it's painful, but it gets worked out.
But now suppose we're on a ship. You're the commander of the ship, and the system has detected a hypersonic missile coming toward you with some high probability. You have 29 seconds to press the button, and the system recommends pressing the button.
28, 27, 26. How many times do you think the captain of that ship will not press the button? They'll press the button.
So that's an example where the system is designed to have human agency, but there's not enough time. So the compression of time is very important here. And one of the core issues, and you mentioned this before, is these computers are moving so quickly.
Another example that I like to use is, I don't know if you know, but there was a war: North Korea attacked America in cyberspace, America got ready to counterattack, and China shut North Korea down. Oh, and by the way, the entire war took 100 milliseconds, less than a second. Now, how do you think about that?
Now, obviously that war has not occurred yet, but is it possible? Absolutely. How do you preserve human agency under that compression of time?
Lynn Thoman: But could machines decide that they are meant to be autonomous, and that the programming of machines by humans either doesn't make sense or is even a type of enslavement?
Eric Schmidt: Well, there are many such scenarios and they go something like this. At some point, the computer's objective function, what it's being trained against, is broad enough that it decides that lying to us is a good idea because it knows we're watching. Now, is this a possible scenario?
Absolutely. Am I worried about it? No, because I think I'm much more worried about...
I think the positive is clear. Human plus AI is incredibly powerful. That also means that human plus AI is incredibly dangerous with the wrong human.
I know these are all very interesting, the AI war overlords and so forth and they could take us and turn us into dogs as I mentioned earlier. It's much more likely that the dangers will be because of human control over systems that are more powerful than they should be. I'll give you a simple example.
The social media algorithms select the most inflammatory statements, which are often from the most deranged people, and that's because the algorithm works: the algorithm says, oh, this is interesting, and a lot of people are listening to it, and so forth. That's not a good way to run a democracy. Maybe we should have a rule that if you make a claim, you have to write a paragraph, right?
And actually justify your argument, as opposed to, oh my God, the following thing is about to kill us. We're all going to die. But that's an example where humans have control, but we've chosen to allow inflammatory speech without the benefit of wisdom, and that's not good.
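As a toy illustration of the feedback loop Eric is describing, and not any platform's actual code, consider a feed that ranks purely by engagement. Because outrage draws reactions, the most inflammatory post rises to the top; the weights below are invented for the sketch.

```python
# Toy illustration (not any real platform's algorithm): ranking purely by engagement
# naturally pushes the most inflammatory post to the top, because outrage draws reactions.
posts = [
    {"text": "Measured, well-sourced analysis of the policy.",   "likes": 40,  "shares": 5,   "comments": 12},
    {"text": "THEY are coming for you. We're all going to die!", "likes": 900, "shares": 400, "comments": 700},
    {"text": "A calm correction of yesterday's viral claim.",    "likes": 25,  "shares": 3,   "comments": 8},
]

def engagement(post: dict) -> int:
    # Shares and comments weighted above likes; the weights are made up for this sketch.
    return post["likes"] + 3 * post["shares"] + 2 * post["comments"]

for post in sorted(posts, key=engagement, reverse=True):
    print(engagement(post), post["text"])  # the inflammatory post prints first
```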
Lynn Thoman: Definitely not good. Could machines or AI develop self-consciousness?
Eric Schmidt: We don't know the definition of consciousness. My own opinion is that this will not occur in my lifetime. I think that what will be true is that we will coexist with these systems and they'll take on more and more of the drudgery.
They'll make the systems more efficient. Efficiency is generally a good thing in economic systems. People will be wealthier.
People will be more productive. My own view is that in my lifetime, everyone's productivity will double. You can do twice as many podcasts.
I can do twice as many speeches. Whatever it is that each of us is doing because the tools make us more efficient and that's the nature of technology invention. It's been true for 200 years.
The car made us more efficient. Google made us more efficient and so forth. I think that will continue.
Because we can't define consciousness, we can imagine that the system could itself imagine consciousness, but it's highly unclear, first, whether it could detect it, and second, how we would know, because it could have just decided to fool us.
Lynn Thoman: Scary thought. The power and ability of ChatGPT surprised even its creators. Do we know what super intelligences will look like in 50 or 100 years or even in 20 years?
Eric Schmidt: We do not. A simple answer is that the systems will automate a more and more complex world. So if you look at a young person (at the moment I'm at Harvard, surrounded by students), they are so comfortable with the world of clicking and moving around.
They're in this infinite information space, and they're comfortable, whereas people in my generation find it overwhelming. So people adapt to this explosion of information.
The right system is to have the equivalent of an assistant that sort of organizes your digital world in a way that is net positive for you. Now that has a lot of negative implications but I don't think that humans will be able to be very productive without their own AI assistant telling them what's most important, reading things. We have this huge problem around misinformation right now.
I just want an AI system to say, this is likely to be true, and this is probably somewhat true, and then give me the analysis, and then I can form my own opinions. And the point, going back to your point earlier about agency, which I really liked, is that when you give agency to the computer, you're giving up something very important. Don't lose your critical thinking.
Don't just believe it, even if it's Google. Check.
Lynn Thoman: You mentioned negative implications. What are those?
Eric Schmidt: Well, the biggest one would be things like access to weapons, and what I mentioned, recursive self-improvement, where the system can actually learn on its own and we don't know what it's doing. I worry about those, and about misuse in biology.
There are plenty of people working on what the capabilities of these models are, and on making sure that they can't produce pathogens: take the equivalent of smallpox and make it even deadlier. We had a long conversation in the industry about this a few weeks ago.
The consensus was that the models that cost less than $100 million don't have this capability.
But the ones that are going to cost more than $100 million might have this capability in the future. This is what everybody said.
So that's today's idea. So if the cost of models drops down, we're in trouble. If the cost of models goes up, then we're good.
So you see how the answer is dynamic based on what happens to the technology. In my industry, there are open source people, of whom I'm one, who basically believe that proliferation is net positive because it allows for creativity, it allows for expansion of human knowledge, it empowers everybody. This is a great position.
There are plenty of people who disagree, arguing that the tool is so powerful that if you put it in even one evil person's hands, by the time you discover the evil, the harm has occurred. That debate is an age-old debate in my industry, and it's not obvious to me how it will play out. I'm an optimist, but I worry about this one.
Lynn Thoman: Let me ask you quickly about several different areas that we haven't yet touched upon. How should businesses and organizations think about AI?
Eric Schmidt: Well, a simple answer to any business is if you're not using AI in your business, your competitor is, and you're going to get screwed. Excuse my language. It's a serious problem because it's happening very quickly.
For most businesses, AI begins with customer service. So, for example, chatbots to replace call centers in India, things like that. Very mild improvements in efficiency.
You see targeted marketing now. The real change is going to be in generative AI. Generative AI, think of it as making pictures.
So why do I have to spend a million dollars on a photo shoot for the product? Why don't I just have the system generate that video, and not just generate one, but generate a million versions of it that are targeted to a million different kinds of customers? That's something humans can't do.
So I think if you think about business efficiency, AI is the first one. That's the tactic. And then it's basically customer adoption.
And then eventually it's business planning. Over time, there's a lot of evidence that a lot of programming can be replaced by computers. Now, I've been a computer scientist and a programmer by trade for more than 50 years at this point.
And I don't wish my trade to go away, but I do acknowledge that the computer can probably write code equal to or better than a lot of programmers. There are plenty of examples of that today. And there are plenty of startups that are working on automating most software development.
I was talking to one scientist, and one of the questions I like to ask scientists is, what system and programming language do you use? And he said, it doesn't matter. And I said, it matters to me.
And he said, it matters to you, but it doesn't matter to anyone else. I said, why? And he said, because as long as I understand what I'm trying to do, I don't care how the computer gets me there.
And because he's a scientist and because he knows exactly what he wants, he'll just keep generating code. He doesn't care what the language is, as long as it gets him to his outcome, which in this case was a very complicated science question. So the fact that the innards of the system don't matter anymore is a big deal in my little Eric world.
Lynn Thoman: That is a big deal.
Eric Schmidt: Think about the millions of people who either aren't good programmers or don't have a programmer. Lots of my friends say, well, if only I had a programmer who could adapt the following. Well, now they will.
Lynn Thoman: That is a big deal, the fact that AI is essentially multimodal. Where do you think innovations will come from, entrepreneurs and startups or large companies?
Eric Schmidt: It's a general rule that innovation always comes from small teams. But this next generation of agents and so forth will come from both; I would expect the answer to your question, unfortunately, is both.
I think the big companies are so well run and so focused on this area that they will do interesting things. And I also think that the startups are sufficiently specialized and sufficiently important that they'll do well. So I'll give an example.
Microsoft did a very good job with something called Copilot, which is a programmer's assistant.
And there are now at least 5 startups that I'm aware of that are trying to build a product which is far, far better. Now, that competition is good; it keeps both on their toes.
I'm not going to predict the winner, but I can tell you that it's very competitive.
Lynn Thoman: How will AI and machines reorder the power of nations? I know that's something you've thought a lot about.
Eric Schmidt: I'll make as blunt a statement as I can. It's clear to me that the U.S. and China will dominate. The U.S., because we invented it and because we're inventing it as we speak.
China, because they have made a decision to focus on this regardless of cost.
And they're good. They're catching up. And I'm worried about that.
I'm sure that the U.K. will be part of the success of the U.S. But what about all the other countries? What do the European countries, which are busy regulating themselves to death and are perfectly happy doing so, do when AI is imposed on them from a foreign power, in this case probably with U.S. values? That's a loss for them. And that's a mistake. I've told them this.
They just aren't listening to me. What happens when Africa, which will have the majority of the world's population growth over the next 50 years, gets systems that reflect U.S. or Chinese values and not local values? And there are differences in culture and values that matter a lot.
One of the things about this technology that's important is that it's very expensive. It takes large teams and, again, billions of dollars of hardware. Elon [Musk] has done his new data center.
The press reports 200,000 GPUs. A GPU costs about $50,000. So for purposes of argument, that's $10 billion right there in the ground. There aren't that many Elons. But how many people can afford that?
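For anyone checking the back-of-the-envelope math, the round numbers from the conversation multiply out as follows (press-reported figures, not exact prices):

```python
# Rough cluster cost using the round numbers quoted above.
gpus = 200_000           # press-reported GPU count
cost_per_gpu = 50_000    # approximate dollars per GPU
total = gpus * cost_per_gpu
print(f"${total:,}")     # $10,000,000,000, i.e. about $10 billion
```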
We did a survey of the number of clusters with 100,000 GPUs or more, and there are 7 or 8, two of which appear to be in China or related to China, while the rest appear to be in the West, mostly under U.S. control. What happens to every other country? If you think about it, let's say you're in Germany.
The first thing you would say is, we need one of those too. Well, Germany is in terrible financial straits. They're having a whole identity crisis because of energy costs and the China problem and the Russia energy problem, and on and on.
It's not on their list, but they have to do it right now if they want to stay a player. France, under President Macron, is doing a good job of focusing on this, but the training is all being done outside of France because of the cost of electricity, which is subsidized for the companies doing the training. So again, these are very complicated problems.
And in the reordering question, what I'm most interested in is as general intelligence is invented, I want to make sure it reflects Western values, and I want it to be for more than just the U.S. I want it to benefit the entire world.
Lynn Thoman: Nations' power has historically come from the size of their militaries and their ability to deploy them, as well as from their engines of scientific progress. As you say in the book, their [Albert] Einsteins and their [Robert] Oppenheimers. You believe that the power of countries will be reordered based on their AI. Can you explain?
Eric Schmidt: I call this innovation power. We've all been indoctrinated that there's soft power and hard power. I'm arguing that there's innovation power.
Innovation power is inventing new things with all this new technology. And I'm going to argue that if you can get to scale in these new technologies quicker, you're going to have a lot more military power. So for the U.S., what this means is very complicated national security systems, very complicated vision systems, very complicated military management systems, and the adoption of autonomy and drones, which is occurring in Ukraine, but not in the U.S. yet.
Lynn Thoman: You've spent a lot of time thinking about AI and war, and you've advised the Secretary of Defense about it. How will AI change war?
Eric Schmidt: Well, the generals want the following. They want a battlefield management system that shows all the sensors and shooters. So they have sensors and they have things that shoot, and they want the AI to assemble all of that.
They've wanted this for a decade. And various people have promised it to them. I don't think that's how it's going to actually work.
I think what's really going to work is that every aspect of the battlefield will be re-imagined to be more autonomous. And autonomy, a simple example is, why do you need a soldier with a gun? Why don't you have an automatic gun?
Why do you need a jet fighter with a bomb? Why don't you just have an automatic drone with a bomb? And there will be a supervisory system, and the supervisory system will do the planning.
But ultimately, as we discussed before, the human control is essential. So under no circumstances should we give up human control to these machines. But it will change war in the sense that the general will sit there and there'll be a button saying, do you approve of my battle plan based on these autonomous systems? And with that, a war is started or a war is ended.
Lynn Thoman: If one country has a human in the loop, and another entity or rogue AI does not have a human in the loop, does that rogue AI win because it's faster?
Eric Schmidt: It could. I just watched a play in London two days ago about Dr. Strangelove. You remember the movie from 1964; it came out of a RAND study in 1960.
In the story, the Russians had secretly created a doomsday machine but had not bothered to tell the US, so that if they were attacked, the doomsday machine would kill everyone. It's the best possible story for why you don't want automatic systems that just decide on their own, because you can get into all sorts of situations where there's a misunderstanding and terror occurs.
Lynn Thoman: Your co-author, Dr. Henry Kissinger believed it was not certain that humanity would survive. What do you think?
Eric Schmidt: We actually, the three authors, all disagreed on this. I'm quite sure humanity will survive. And I am an optimist, more so than Henry was.
And we miss him, by the way. But the reason to be on my side of this is that we as humans have faced all of these challenges before. And in all cases, we have survived at various levels of pain.
So we will survive. Let's reduce the possible pain. And let's certainly avoid conflict using these new tools at the scale that we're discussing.
It would be horrendous.
Lynn Thoman: Eric, what are the 3 takeaways you'd like to leave the audience with today?
Eric Schmidt: I think the first point, which I cannot emphasize enough, is that this stuff is happening much, much faster than I expected, and than almost anyone understands. I have never in my almost 50-year career doing this had a situation where there's a surprise every day. And almost all of the surprises are toward more power, more insight, better-than-human performance.
And as that arrives, it changes huge human systems because humans have organized themselves in various ways. We need to have a map of the arrival and the impact.
I'd say the second point is that there's a set of questions that we don't know. And one of them is, where is the limit of this kind of intelligence?
Let me give you an example. It's clear to me that these things will be fantastic scientists, whatever you want to call them.
They can analyze things. They can figure stuff out. They can do math better, all that kind of stuff.
Much of human behavior is really invention, strategy, and responses. We haven't seen that yet emerge in these systems. So is there a limit to where the current technology will go compared to humans?
In other words, will the system do 90%, while the 10% that the humans do is the stuff that humans are particularly good at, strategy, thinking about impact, understanding the subtleties of humans, things like that? Or will that go too? My own opinion right now is that there will be a space for us for a long time.
That is my opinion.
And I think the third thing I would mention is this question of proliferation. In all my work here, everyone likes to compare this to the nuclear proliferation problem, on which Dr. Kissinger was the world's expert, because he did it. And his view, which is also Craig's and mine, is that this is different. The proliferation problem here is that it's so much easier to make this technology broadly available, so we have to really think about how we're going to make sure that evil people don't get access to it. And that's an issue for everyone, not just for you and me.
Lynn Thoman: Thank you, Eric. I really enjoyed Genesis. It is by far the best book that I've read on AI and the future.
Eric Schmidt: Thank you very much, Lynn. Thank you for everything and thank you for your support.
OUTRO: If you’re enjoying the podcast, and I really hope you are, please review us on Apple Podcasts or Spotify or wherever you listen. It really helps get the word out. If you’re interested, you can also sign up for the 3 Takeaways newsletter at 3takeaways.com where you can also listen to previous episodes.
You can also follow us on LinkedIn, X, Instagram and Facebook.
I’m Lynn Thoman and this is 3 Takeaways. Thanks for listening!