This is Science Friday. I'm Ira Flatow. And I'm Flora Lichtman. Today on the podcast, tech companies are in a race to create AGI, Artificial General Intelligence. But what does that actually mean? I'm not fond of the term because it is so ill-defined. If you're not familiar with AGI, don't worry. We're going to try to get you in the loop.
Let's start. OpenAI is an organization founded in 2015. It makes the chatbot ChatGPT. It has also said that its founding goal was to build AGI. That is a system that, according to the founders, quote, benefits all of humanity and gives everyone incredible new capabilities. Others in the field, like DeepMind co-founder Demis Hassabis, have called AGI, quote, a system that should be able to do pretty much any cognitive tasks that humans can do.
On the other hand, OpenAI CEO Sam Altman said AGI would, quote, matter much less than people think. So which is it? Will AGI elevate all of humanity? Or will it end up not mattering that much? And just what is artificial general intelligence supposed to mean anyhow? A lot to unpack here. Who better to answer these questions than someone who researches the intersection of machine and human intelligence? That would be Dr. Melanie Mitchell, professor of complexity.
at the Santa Fe Institute, where she researches cognition in artificial intelligence and machine systems. Welcome back to Science Friday. Been a while. Yeah, thanks for having me. Nice to have you back. All right.
Now that we've been rattling off all these definitions of AGI, what is your definition of AGI, and do you think it is really definable? Well, honestly, I'm not one to give a definition of AGI, because I'm not fond of the term, I have to say. It is so ill-defined, and it makes the assumption that humans have something like general intelligence, which I think is not really true. Humans have very specific kinds of intelligence that are good for
the kinds of environments that we find ourselves in, but not for everything. But AGI has been defined in so many different ways that it's almost lost any rigorous meaning. Really? So how is AGI different, then, from, let's say, ChatGPT or Perplexity or any of those services where you give it a prompt or ask a question? Well, it depends on how you define it, of course. Originally, AGI was defined as some system that goes beyond doing a narrow kind of task or capability. So if you think back to, say,
Deep Blue, which played chess, or AlphaGo, which played Go. They were superhuman in their abilities to play those games, but they couldn't do anything else. So they were what people call narrow AI. General AI was supposed to be machines that are more like humans, that have the range of human intelligence, and that are more, you might say, like the AI that we see in movies
that can do all kinds of things. So ChatGPT is more general than, say, AlphaGo or Deep Blue, but it certainly can't do the range of things that humans can do. So I think that when OpenAI says its goal is to produce AGI, what they mean in some sense is to get machines that can do really all of the kinds of things humans can do.
But then they put a caveat on it. You mentioned it at the beginning when you said cognitive tasks. So they want to separate the idea of being able to do all the so-called cognitive things that we do from the physical things that we do. You know, ChatGPT is not going to go fix your plumbing or re-roof your house. So they don't count those interactions with the physical world in their definition of AGI. And lately, some people, like Sam Altman,
have started throwing around the word superintelligence more often in these conversations. Not to open another can of worms here, but how is that different from artificial general intelligence? Good question. So I think... You know, we have this notion of human-level AI, which is sort of the equivalent to AGI, aside from all those physical things that I mentioned. But superintelligence is...
AI that's better than humans across the board. We already have AI systems that are much better than humans at many different tasks. We've had that for a long time, including playing chess, or perhaps navigating a city very quickly with maps and so on. But superintelligence in this context, the kind that Sam Altman is looking for, is AI systems that are better than humans at everything. Is this like Ray Kurzweil's singularity moment?
When AI surpasses human intelligence and we become the robots? Yeah, in some sense, that's right. So Kurzweil's singularity is the moment that AI systems become smarter than humans. And in his view, they're then going to be able to improve themselves. And so you get this kind of feedback loop, a positive feedback loop, where they're getting smarter and smarter and smarter.
And then we have this singularity, where machine intelligence sort of becomes, in some sense, incomprehensible to humans. That's Kurzweil's vision. Yeah. So that's not the superintelligence Sam Altman is talking about? Or is it? I think it is, you know. And Sam Altman has this idea that once we get superintelligence, we'll get super superintelligence, and then super super superintelligence, and then we'll, you know, have machines that have cured cancer and figured out how to colonize Mars and all of these things. Right. Let's start
talking about this word intelligence. And this is a big question. How do you study intelligence? What do people who do this work think about the concept of intelligence itself? Right. So intelligence is kind of an umbrella term for a lot of different kinds of capabilities. And, you know, we think of our abilities to reason,
to figure out what's causing what in the world, to figure out how to interact with other people and understand them better. You know, I don't think intelligence is any one thing. It's a whole host of capabilities. And some people have strengths in some areas of intelligence and others have other kinds of strengths. Some animals are more intelligent in some areas than even humans, but humans have this capability for reasoning and for reasoning about our own reasoning,
kind of being able to understand the world in a deeper way than perhaps any other species. Does that include self-awareness? I think it does. I think self-awareness is a key part of intelligence because it helps us understand and reason about our own thinking. And I think self-awareness is something that current machines lack. ChatGPT doesn't have self-awareness. It doesn't have the concept of itself as an entity, I believe. And that's part of what's keeping it from...
being more intelligent. It doesn't have any sense of whether what it's saying is true or false, or whether it has more confidence versus less confidence in some of its statements. And from that, we see that it can produce untrue things that it's just as confident about as the true things that it generates. Well, you know, people who claim to be intelligent also can produce untrue things. Absolutely. That they know to be untrue. Right. And when you say they know it
to be untrue, that's a kind of self-awareness. That's an intention, whereas these systems like ChatGPT don't have these kinds of intentions. You know, they don't have the intention to be deceptive or to be truthful. They're really just generating text according to some probabilities that they've calculated about what are sort of the most likely kinds of things that they should be saying. After the break, did you know...
There's probably a pretty good correlation between the people who are drawn to study AI and the people who like Star Trek. Support for Science Friday comes from the Alfred P. Sloan Foundation, working to enhance public understanding of science, technology, and economics in the modern world. Let's talk about the history a bit behind this term AGI, but also AI itself. I know you've been studying this for a while. How far does it go back? So the term AI goes back to the 1950s.
That's when a group of people had a meeting at Dartmouth College about this new field, and they had a kind of argument about what to call it. And one of the founders of the field, John McCarthy, suggested artificial intelligence as a way to distinguish the field from other kinds of fields that were studying intelligence at the time. He later regretted calling it artificial intelligence because, you know, why are we calling it artificial? We should be seeking actual, real intelligence.
Yeah, it makes sense. But other people proposed other terms for this field. One of them was Herbert Simon, who suggested complex information processing, which avoided the anthropomorphism of the notion of intelligence. So you can imagine maybe the way that we think about these systems might have been a little different if we didn't call them artificial intelligence.
Right. And you've written how Star Trek has had a kind of outsized influence on the direction of the field as a whole. Yeah, exactly. You know, as I said in the book I wrote on AI, there's probably a pretty good correlation between the people who are drawn to study AI and the people who like Star Trek. And one of the elements in the early Star Trek episodes was a computer. It was just called Computer, and it
would answer any question. You could ask it anything and it would give you a very cogent, concise, correct answer. It knew everything. That computer was really what a lot of people in AI said they felt was sort of their North Star for building AI systems. They wanted an AI system that was like the computer in Star Trek.
But we're close to that. I mean, on a superficial level, aren't we? You can ask ChatGPT, speak to it, and get an answer. Yeah. So we are closer than we've ever been, for sure. But ChatGPT and these other generative AI systems lack something that that computer had, which is trustworthiness. You know, you could trust anything the Star Trek computer told you. But with ChatGPT...
While most of the things that it tells you are correct, it does have a tendency to do what people have called hallucinating, which is to generate very confident-sounding answers that are actually untrue. So this, I think, is the next frontier, or if you will, the final frontier: trustworthiness with these kinds of machines. Do we need another breakthrough somewhere down the line to get this to be more self-aware, to have this intelligence that we can't define? You can't really define AGI right now, so, you know, it's the old phrase: I don't know exactly what it is, but I'll know it when I see it. Yeah, I think we need a couple of breakthroughs. One would be in how to make these systems more trustworthy, more self-aware, with a better notion of what they're talking about and whether it's true or false.
We also need to better understand what we mean by intelligence. As you said, we know it when we see it. But one of the problems is that we've thought that for maybe millennia, and it turns out we've often been wrong. Just as an example, people used to think, before
the age of Deep Blue, the chess-playing computer, that in order to play chess at a grandmaster level or a superhuman level, you'd have to have superhuman general intelligence. But it turned out that you could... get a computer to play chess at this superhuman level without anything like we call general intelligence. The same thing has been said of things like speech recognition and conversation, like the kind that we have now with ChatGPT.
It was thought that to get those kinds of abilities, you'd need something like general human intelligence, human-level intelligence. But it's turned out that we can accomplish these kinds of capabilities without having this idea of general intelligence, the kind that the pioneers of AI really were looking for. So it's really taught us a lot about how
hard it is to define what we mean by intelligence and to know when we have a system that's close to having that. Well, with that caveat in mind, especially about how wrong we always are in predicting the future, I want you to predict the future for me. Here it is, the beginning of 2025. What should we be paying attention to this year? What kind of developments are you expecting to hear about in AGI or from any of these companies? Well, I think...
One thing is that this word AGI has gotten so much cachet that people will be trying to sort of redefine it into existence, to say, well, what we have at the end of 2025 is clearly AGI, and then have some definition that captures what we have. So I do think that that is likely to happen. But I also think that people are going to realize that these systems are actually lacking a lot of the very important aspects of what makes human intelligence more sort of trustworthy
when it is, and more general, and that people are going to start focusing much more effort on those things in their development of AI systems. Yeah, because people in general are afraid. They're fearful of computers from what they see in science fiction and what they're watching in their real lives. Now you have to become more trustworthy. Absolutely. You know, there's
the fear that the machines will get too intelligent and will take over. But there's also the opposite fear: that the machines won't be intelligent enough to do the things we give them to do, but we'll trust them too much, and they will fail in ways that we didn't expect. So I think both of those fears might be worth considering.
Yeah, I can see that both in good and bad, like in medicine and even in computer warfare. You trust your machines, right? Yeah, we don't want to trust them too much when they haven't in some sense earned our trust. Well, you have earned our trust, Dr. Mitchell. I want to thank you for taking time to be with us today. Well, thanks very much. This is a very important topic.
I'm thrilled to be able to talk about it with your audience. Well, you're welcome. We're happy to have you. Dr. Melanie Mitchell, professor of complexity at the Santa Fe Institute in Santa Fe, New Mexico. That's about all the time we have for now. A lot of people helped make this show happen. Jason Rosenberg. Phyllis Amaz. Beth Ramey. Sandy Roberts. I'm Ira Flatow. Thanks for listening.