It's like asking, is the University of California, Berkeley library smarter than I am? Well, it definitely has more information in it than I do, but it just feels like that's not really the right question. From the Santa Fe Institute, this is COMPLEXITY. I'm Melanie Mitchell. And I'm Abha Eli Phoboo. Today's episode kicks off a new season for the COMPLEXITY podcast and with a new season comes a new theme. This fall we are exploring the nature and complexity of intelligence in six episodes.
What it means, who has it, who doesn't, and if machines that can beat us at our own games are as powerful as we think they are. The voices you'll hear were recorded remotely across different locations, including countries, cities, and workspaces. But first, I'd like you to meet our new co-host. My name is Melanie Mitchell. I'm a professor here at the Santa Fe Institute. I work on artificial intelligence and cognitive science. I've been interested in the nature of
intelligence for decades. I want to understand how humans think and how we can get machines to be more intelligent and what it all means. Melanie, it's such a pleasure to have you here. I truly can't think of a better person to guide us through what exactly it means to call something intelligent. Melanie's book, Artificial Intelligence: A Guide for Thinking Humans, is one of the top books on AI recommended by the New York Times. It's a rational voice among all the AI hype in the media.
And depending on whom you ask, AI is either going to solve all of humanity's problems or it's going to kill us. When we interact with systems like Google Translate or hear the buzz around self-driving cars or wonder if ChatGPT actually understands human language, it can feel like AI is going to transform everything about the way we live. But before we get carried away making predictions about AI, it's useful to take a step back. What does it mean to call anything intelligent,
whether it's a computer or an animal or a human child? In this season, we're going to hear from cognitive scientists, child development specialists, animal researchers, and AI experts to get a sense of what we humans are capable of and how AI models actually compare. And in the sixth episode, I'll sit down with Melanie to talk about her research and her views on AI. To kick us off, we're going to start with the broadest, most basic question. What really is
intelligence anyway? As many researchers know, the answer is more complicated than you might think. Part One. What is intelligence? I'm Alison Gopnik. I'm a professor of psychology and affiliate professor of philosophy and a member of the Berkeley AI Research group. And I study how children manage to learn as much as they do,
particularly in a sort of computational context. What kinds of computations are they performing in those little brains that let them be the best learners we know of in the universe? Alison is also an external professor with the Santa Fe Institute and she's done extensive research on children and learning. When babies are born, they're practically little blobs that can't hold up their own heads, but as we all know, most babies become full blown adults who can move, speak,
and solve complex problems. From the time we enter this world, we're trying to figure out what the heck is going on all around us and that learning sets the foundation for human intelligence. Yeah, so one of the things that is really, really important about the world is that some things
make other things happen. So everything from thinking about the way the moon affects the tides, to just the fact that I'm talking to you and that's going to make you change your minds about things, or the fact that I can pick up this cup and spill the water and everything will get wet. Those really basic cause and effect relationships are incredibly important and they're important
partly because they let us do things. So if I know that something is going to cause a particular effect, what that means is if I want to bring about that effect, I can actually go out in the world and do it. And it underpins everything from just our everyday ability to get around in the world, even for an infant, to the most incredible accomplishments of science. But at the same time, those causal relationships are kind of mysterious, and always have been. After all,
all we see is that one thing happens and another thing follows it. How do we figure out that causal structure? So how do we? Yeah, good question. So that's a problem philosophers have thought about for centuries, and there are basically two pieces, and anyone who's done science will recognize these two pieces. We analyze statistics. So we look at what the dependencies are between
one thing and another, and we do experiments. We go out, and perhaps the most important way that we understand causality is you do something and then you see what happens, and then you do something again and you see, oh, wait a minute, that happened again. And part of what I've been doing recently, which has been really fun, is just looking at babies, even one-year-olds. And if you just sit and look at a one-year-old, mostly what they're doing is doing experiments. I have a lovely
video of my one-year-old grandson with a xylophone and a mallet. Of course, we had to ask Alison to show us the video. Her grandson is sitting on the floor with a xylophone while his grandfather plays an intricate song on the piano. Together, they make a strange duet. And it's not just that he makes the noise. He tries turning the mallet upside down. He tries with his hand a bit. That doesn't make a
noise. He tries with a stick end. That doesn't make a noise. Then he tries it on one bar and it makes one noise. Another bar makes another noise. So when the babies are doing experiments, we call it getting into everything. But I increasingly think that's like their greatest motivation. So babies and children are doing these cause and effect experiments constantly. And that's a
major way that they learn. At the same time, they're also figuring out how to move and use their bodies, developing a distinct intelligence in the motor system so they can balance, walk, use their hands, turn their heads, and eventually move in ways that don't require much thinking at all. One of the leading researchers on intelligence and physical movement is John Krakauer. He's a professor of neurology, neuroscience, physical medicine, and rehabilitation at the Johns Hopkins
University School of Medicine. John's also in the process of writing a book. I am. I've been writing it for much longer than I expected. But now I finally know the story I want to tell. I've been practicing it. Well, let me ask. I just want to mention that the subtitle is Thinking Versus Intelligence in Animals, Machines, and Humans. So I wanted to get your take on what is thinking and what is intelligence. Oh my gosh. Thanks, Melanie, for such an easy softball question.
Well, you're writing a book about it. Well, yes. So I think I was very inspired by two things. One was how much intelligent adaptive behavior your motor system has even when you're not thinking about it. The example I always give is: when you press an elevator button, before you lift your arm to press the button, you contract your gastrocnemius in anticipation, because your arm is sufficiently heavy that if you didn't do that, you'd fall over because your center of gravity has shifted.
So there are countless examples of intelligent behaviors. In other words, they're goal-directed and accomplish the goal below the level of overt deliberation or awareness. And then there's a whole field, you know, of what are called long-latency stretch reflexes. These occur below the time of voluntary movement, but are sufficiently flexible to deal with quite a lot of variation in the environment and still get the goal accomplished, but they're still involuntary.
There's a lot that we can do without actually understanding what's happening. Think about the muscles we use to swallow food or balance on a bike, for example. Learning how to ride a bike takes a lot of effort, but once you've figured it out, it's almost impossible to explain it to someone else. And so it's what Daniel Dennett, you know, who recently passed away but was very influential for me, called competence with comprehension versus competence without
comprehension. And I think he also was impressed by how much competence there is in the absence of comprehension. And yet, along came this extra piece, the comprehension, which added to competence and greatly increased the repertoire of our competences. A body is competent in some ways,
but when we use our minds to understand what's going on, we can do even more. To go back to Alison's example of her grandson playing with a xylophone, comprehension allows him, or anyone playing with a xylophone mallet, to learn that each side of it makes a different sound. If you or I saw a xylophone for the first time, we would need to learn what a xylophone is, what a mallet is, how to hold it, and which end might make a noise if we knocked it against a musical
bar. We'd be aware of it. Over time, we internalize these observations so that every time we see a xylophone, we don't need to think through what it is and what the mallet is supposed to do. And that brings us to another crucial part of human intelligence: common sense. Common sense is knowing that you hold a mallet by the stick end and use the round part to make music. And if you see another instrument like a marimba, you know that the mallet is going to work the same way.
Common sense gives us basic assumptions that help us move through the world and know what to do in new situations. But it gets more complicated when you try to define exactly what common sense is and how it's acquired. Well, I mean, to me, common sense is the amalgam of the stuff that you're born with (so, you know, any animal will know that if it steps over the edge, it's going to fall) and what you've learned through experience that allows you to do quick inference. So in other words,
you know, an animal, it starts raining, it knows it has to find shelter. Right? So in other words, presumably it learns that you don't want to be wet. And so it makes the inference, it's going to get wet. And then it finds the shelter. It's a common sense thing to do in a way. And then there's the thought version of common sense. Right? It's common sense that if you are approaching a narrow alleyway,
your car's not going to fit in it. Or if you go into a slightly less narrow one, your door won't open when you try to open it. It's countless interactions between your physical experience, your innate repertoire and a little bit of thinking. And it's that fascinating mixture of fact and inference and deliberation. And then we seem to be able to do it over a vast number of situations. Right? In other words, we just seem to have a lot of facts, a lot of innate understanding of the physical
world. And then we seem to be able to think with those facts and those innate awarenesses. That to me is what common sense is. It's this almost language-like flexibility of thinking with our facts and thinking with our innate sense of the physical world and combinatorially doing it all the time, thousands of times a day. Yeah, I know that's a bit waffly. I'm sure Melanie can do a much better job at it than me, but that's how I see it.
No, I think that's actually a great exposition of what it means. I totally agree. I think it is fast inference about new situations that combines knowledge and sort of reasoning, fast reasoning. And a lot of very basic knowledge that's not really written down anywhere that we happen to know because we exist in the physical world and we interact with it. So observing cause and effect, developing motor reflexes and strengthening common sense are all happening and overlapping
as children get older. And we're going to cover one more type of intelligence that seems to be unique to humans, and that's the drive to understand the world. It turns out, for reasons that physicists have puzzled over, that the universe is understandable, explainable and manipulable. The side effect of understanding that the world is understandable is that you begin to understand sunsets and why the sky is blue and how black holes work and why water is a liquid and then a gas.
It turns out that these are things worth understanding because you can then manipulate and control the universe. And it's obviously advantageous because humans have taken over entirely. I have a fancy microphone that I can have a Zoom call with you with. An understandable world is a manipulable world. As I always say, an Arctic fox trotting very well across the Arctic
tundra is not going, hmm, what's ice made out of? It doesn't care. Now we, at some point between chimpanzees and us, started to care about how the world worked, and it obviously was useful because we could do all sorts of things. Fire, shelter, blah, blah, blah. And in addition to understanding the world, we can observe ourselves observing, a process known as metacognition. If we go back to the xylophone, metacognition is thinking, I'm here learning about
this xylophone. I now have a new skill. And metacognition is what lets us explain what a xylophone is to other people, even if we don't have an actual xylophone in front of us. Alison explains more. So the things that I've been emphasizing are these kind of external exploration and search capacities, like going out and doing experiments. But we know that people, including little kids,
do what you might think of as sort of internal search. So they learn a lot. And now they just intrinsically, internally want to say, what are some new things that I could, new conclusions I could draw or new ideas I could have based on what I already know. And that's really different from just what are the statistical patterns in what I already know. And I think two capacities that are really important for that are metacognition and also one that Melanie's looked at more than
anyone else, which is analogy. So being able to say, okay, here's all the things that I think. But how confident am I about that? Why do I think that? How could I use that learning to learn something new? Or saying, here are the things that I already know; here's an analogy that would be really different. So I know all about how water works. Let's see, if I think about light, does it have waves the same way that water has waves? So actually learning by just thinking about what you
already know. I find myself constantly changing my position. On the one hand, there's this human capacity to sort of look at yourself computing, a sort of metacognition, which is consciousness not just of the outside world and of your body, but consciousness of your processing of the outside world and your body, right? It's almost as though you used consciousness to look inward at what you were doing. Humans have computations and feelings; they have a special type of feeling and
computation which together is deliberative, and that's what I think thinking is. It's feeling your computations. What John is saying is that humans have conscious feelings, or sensations such as hunger or pain, and that our brain performs unconscious computations, like the muscle reflexes that happen when we press an elevator button. What he calls deliberative thought is when we have conscious
feelings or awareness about our computations. You might be solving a math problem and realize with dismay that you don't know how to solve it, or you might get excited if you know exactly what trick will work. This is deliberative thought, having feelings about your internal computations. To John, the conscious and unconscious computations are both intelligent, but only the conscious computations count as thinking. So Melanie, having listened to John and Alison, I'd like to go
back to our original question with you. What do you think intelligence is? Well, let me recap some of what Alison and John said. Alison really emphasized the ability to learn about cause and effect: what causes what in the world and how we can predict what's going to happen. And she pointed out that the way we learn this, adults and especially kids, is by doing little experiments, you know, interacting with
the world and seeing what happens and learning about cause and effect that way. She also stressed our ability to generalize, to make analogies, to see how situations might be similar to each other in an abstract way. And this underlies what we would call our common sense, that is, our basic understanding of the world. Yeah, that example of the xylophone and the mallet was very intriguing. As both John and Alison said, humans seem to have a unique drive to gain an understanding of the world,
you know, by experiments, like making mistakes, trying things out. And they both emphasized this important role of metacognition, or reasoning about one's own thinking. What do you think of that? You know, how important do you think metacognition is? Oh, it's absolutely essential to human intelligence. It's really what underlies, I think, our uniqueness. John, you know, made this distinction between intelligence and thinking. To him, you know, most of our, what he would call our intelligent behavior
is unconscious. It doesn't involve metacognition. He called it competence without comprehension. And he reserved the term thinking for conscious awareness of what he called one's internal computations. So even though John and Alison have given us some great insights about what makes us smart, I think both would admit that no one has come to a full, complete understanding of how
human intelligence works, right? Oh, yeah, we're far from that. But in spite of that, big tech companies like OpenAI and DeepMind are spending huge amounts of money in an effort to make machines that, as they say, will match or exceed human intelligence. So how close are they to succeeding? Well, in part two, we'll look at how systems like ChatGPT learn and whether or not they're even intelligent at all. Part two. How intelligent are today's machines?
If you've been following the news around AI, you may have heard the acronym LLM, which stands for large language model. It's the term that's used to describe the technology behind systems like ChatGPT from OpenAI or Gemini from Google. LLMs are trained to find statistical correlations in language using mountains of text and other data from the internet. In short, if you ask ChatGPT a question, it will give you an answer based on what it has calculated to be
the most likely response, based on the vast amount of information it's ingested. Humans learn by living in the world. We move around, we do little experiments, we build relationships, and we feel. Large language models don't do any of this. But they do learn from language, which comes from humans and human experience, and they're trained on a lot of it. So does this mean that LLMs could be considered to be intelligent? And how intelligent can they, or any form of AI, become?
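To make that idea of picking "the most likely response" concrete, here is a minimal sketch of next-token prediction. It's an illustration only: it uses the small, openly available GPT-2 model through the Hugging Face transformers library, while systems like ChatGPT are far larger and add further training stages on top of this basic step.

```python
# A minimal sketch of next-token prediction, the core operation behind an LLM.
# Assumptions for illustration: we use the small, open GPT-2 model via the
# Hugging Face transformers library; ChatGPT itself is much larger and adds
# further training stages (instruction tuning, RLHF) on top of this step.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "The xylophone makes a sound when you strike it with a"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits  # a score for every word piece in the vocabulary
next_token_probs = torch.softmax(logits[0, -1], dim=-1)  # probabilities for the next token only

# Print the five continuations the model considers most likely.
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([token_id.item()])!r}: {prob.item():.3f}")
```

Everything the model "knows" here comes from statistical patterns in its training text, and generating a whole answer is just this pick-the-next-token step repeated over and over.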
Several tech companies have an explicit goal to achieve something called artificial general intelligence, or AGI. AGI has become a buzzword and everyone defines it a bit differently, but in short, AGI is a system that has human level intelligence. Now this assumes that a computer, like a brain in a jar, can become just as smart or even smarter than a human with a feeling
body. Melanie asked John what he thought about this. You know, I find it confusing when people like Demis Hassabis, who's one of the co-founders of DeepMind, say, as he said in an interview, that AGI is a system that should be able to do pretty much any cognitive task that humans can do. And he said he expects there's a 50% chance we'll have AGI within a decade. Okay, so I emphasize that phrase, cognitive task, because that term is confusing to me, but it seems so obvious to them.
Yes, I mean, I think it's the belief that everything non-physical at the task level can be written out as a kind of program or algorithm. I just don't know, and maybe it's true when it comes to, you know, ideas, intuitions, creativity. I also asked John if he thought that maybe that separation between cognition and everything else was a fallacy.
Well, it seems to me, you know, it always makes me a bit nervous to argue with you of all people about this, but I would say, I think there's a difference between saying, can we reach human levels of intelligence when it comes to common sense the way humans do it, versus can we end up with the equivalent phenomenon without having to do it the way humans do it? The problem for me with that is that we, like this conversation we're having right now, are capable of open-ended
extrapolative thought. We go beyond what we're talking about. I struggle with it, but I'm not going to put myself in this precarious position of denying that a lot of problems in the world can be solved without comprehension. So maybe we're kind of a dead end; comprehension was a great trick, but maybe it's not needed. But if comprehension requires feeling, then I don't quite see how we're going to get AGI in its entirety. But I don't want to sound dogmatic. I'm just practicing my
unease about it. Do you know what I mean? I don't know. Alison is also wary of overhyping our capacity to get to AGI. One of the great old folktales is called Stone Soup. Or you might have heard it called Nail Soup. There are a few variations. She uses the Stone Soup story as a metaphor for how much our so-called AI technology actually relies on humans. And the basic story of Stone Soup is that there's some visitors who come to a village, and they're hungry, and the villagers won't
share their food with them. So the visitors say, that's fine. We're just going to make a stone soup, and they get a big pot, and they put water in it, and they say, we're going to get three nice stones and put them in, and we're going to make wonderful stone soup for everybody. They start boiling it, and they say, this is really good soup, but it would be even better if we had a carrot or an
onion that we could put in it. And of course, the villagers go and get a carrot and an onion, and then they say, oh, this is much better, but when we made it for the king, we actually put in a chicken, and that made it even better. And you can imagine what happens. All the villagers contribute all their food. And then in the end, they say, this is amazingly good soup, and it was just made with three stones. And I think there's a nice analogy to what's happened with generative AI.
So the computer scientists come in and say, look, we're going to make intelligence just with next token prediction and gradient descent and transformers. And then they say, but you know, this intelligence would be much better if we just had some more data from people that we could add to it. And then all the villagers go out and add all of the data of everything that they've uploaded
to the internet. And then the computer scientists say, this is doing a good job at being intelligent, but it would be even better if we could have reinforcement learning from human feedback and get all you humans to tell it what you think is intelligent or not. And all the humans say, oh, okay,
we'll do that. And then they say, you know, this is really good. We've got a lot of intelligence here, but it would be even better if the humans could do prompt engineering to decide exactly how they were going to ask the questions so that the systems could give intelligent answers. And then at the end of that, the computer scientists say, see, we got intelligence just with our algorithms. We didn't have to depend on anything else. I think that's a pretty good metaphor for
what's happened in AI recently. The way AGI has been pursued is very different from the way humans learn. Large language models in particular are created with tons of data shoved into the system over a relatively short training period, especially when compared to the length of human childhood. The stone soup method uses brute force to shortcut our way to something akin to human intelligence. I think it's just a category mistake to say things like, are LLMs smart? It's like asking,
is the University of California Berkeley library smarter than I am? Well, it definitely has more information in it than I do, but it just feels like that's not really the right question. So one of the things about humans in particular is that we've always had this great capacity to learn from other humans. And one of the interesting things about that is that we've had different kinds
of technologies over history that have allowed us to do that. So obviously language itself, you could think of as a device that lets humans learn more from other people than other creatures can do. My view is that the LLMs are kind of the latest development in our ability to get information from other people. But again, this is not trivializing or debunking it. Those changes in our cultural
technology have been among the biggest and most important social changes in our history. So writing completely changed the way that we thought and the way that we functioned and the way that we acted in the world. At the moment, as people have pointed out, the fact that, you know, I have in my pocket a device that will let me get all the information from everybody else in the world mostly just makes me irritated and miserable most of the time. We would have thought that
that would have been like a great accomplishment. But people felt that same way about writing and print when they started, too. The hope is that eventually we'll adjust to that kind of technology. Not everyone shares Alison's view on this. Some researchers think that large language models should be considered to be intelligent entities, and some even argue that they have a degree of
consciousness. But thinking of large language models as a type of cultural technology, instead of sentient beings that might take over the world, helps us understand how completely different they are from people. And another important distinction between large language models and humans is that they don't have an inherent drive to explore and understand the world. They're just sort of sitting there and letting the data waft over them rather than actually going out and acting and sensing and
finding out something new. This is in contrast to the one-year-old saying, the stick works on the xylophone, will it work on the clock or the vase or whatever else that you're trying to keep the baby away from? That's a kind of internal basic drive to generalize, to think about, okay, it works in the way that I've been trained. But what will happen if I go outside of the environment
in which I've been trained? Because we have caregivers who have a really distinctive kind of intelligence that we haven't studied enough, I think, who are looking at us, letting us explore. And caregivers are very well designed, even if it feels frustrating when you're doing it. We're very good at kind of getting this balance between how independent should the next agent be, how much should we be constraining them, how much should we be passing on our values, how much should we let them
figure out their own values in a new environment. And I think if we ever do have something like an intelligent AI system, we're going to have to do that. Our role, our relationship to them should be this caregiving role rather than thinking of them as being slaves on the one hand or masters on the other hand, which tends to be the way that we think about them. And as I say, it's not just in computer science, in cognitive science, probably for fairly obvious reasons, we know almost nothing
about the cognitive science of caregiving. How is it that we manage these relationships with other people? That's actually what I've just gotten a big grant for, and that's actually what I'm going to do for my remaining grandmotherly cognitive science years. That sounds very fascinating. I'd be curious to see what comes out of that work. Well, let me give you just a very simple first pass, our first experiment. If you ask three- and four-year-olds, here's Johnny and he can go on the high slide or
he can go on the slide that he already knows about. And what will he do if mom's there? And your intuitions might be, maybe the kids will say, well, you don't do the risky thing when mom's there because she'll be mad about it, right? And in fact, it's the opposite. The kids consistently say, no, if mom is there, that will actually let you explore, that will let you take risks. Because she'd take you to the hospital. Exactly. She's there to actually protect you and make sure that you're
not doing the worst thing. But of course, for humans, it should be a cue to how important caregiving is for our intelligence that we have a much wider range of people investing in much more caregiving. So not just mothers, but my favorite, postmenopausal grandmothers, and fathers, older siblings, what are called alloparents, just people around who are helping to take care of the kids.
And it's having that range of caregivers that actually seems to really help. And again, that should be a cue for how important this is in our ability to do all the other things we have, like be intelligent and have culture. If you just look at large language models, you might think we're nowhere near anything like AGI. But there are other ways of training AI systems. Some researchers are trying to build AI models that do have an intrinsic drive to explore
rather than just consume human information. So one of the things that's happened is that quite understandably the success of these large models has meant that everybody's focused on the large models. But in parallel, there's lots of work that's been going on in AI that is trying to get systems that look more like what we know that children are doing. And I think actually if you look at what's gone on in robotics, we're much closer to thinking about systems that look like they're
learning the way that children do. And one of the really interesting developments in robotics has been the idea of building intrinsic motivation into the systems. So to have systems that aren't just trying to do whatever it is that you programmed them to do, like open up the door, but systems that are looking for novelty, that are curious, that are trying to maximize this value of empowerment, that are trying to find out the whole range of things they could do that have consequences in the
world. And I think, you know, at the moment, the LLMs are the thing that everyone's paying attention to, but I think that route is much more likely to be a route to really understanding a kind of intelligence that looks more like the intelligence that's in those beautiful little fuzzy heads.
And I should say we're trying to do that. So we're collaborating with computer scientists at Berkeley who are exactly trying to see what would happen if we, say, give an intrinsic reward for curiosity, what would happen if you actually had a system that was trying to learn in the way that children are trying to learn. So are Alison and her team on their way to an AGI breakthrough? Despite all this, Alison is still skeptical. I think it's just again a category mistake to say we'll have something like artificial general intelligence because we don't have natural general intelligence. In Alison's view, we don't have natural general intelligence because human intelligence is not really general. Human intelligence evolved to fit our very particular human needs. So Alison, likewise, doesn't think it makes sense to talk about machines with general
intelligence or machines that are more intelligent than humans. Instead, what we'll have is a lot of systems that can do different things. You know, that might be able to do amazing things, wonderful things, things that we can't do, but that kind of intuitive theory that there's this thing called intelligence that you could have more of or less of, I just don't think it fits anything that we know
from cognitive science. It is striking how different the view of the people, not all the people, but some of the people who are also making billions of dollars out of doing AI are from, I mean, I think this is sincere, but it's still true that their view is so different from the people who are actually studying biological intelligences. John suspects that there's one thing computers may
never have: feelings. It's very interesting that I always used pain as the example. In other words, what would it mean for a computer to feel pain, and what would it mean for a computer to understand a joke? So I'm very interested in these two things. We have this physical, emotional response. We laugh, we feel good. So when you understand a joke, where should the credit go? Should it go to understanding it
or should it go to the laughter and the feeling that it evokes? And you know, to my sort of chagrin or surprise, or maybe not surprise, Daniel Dennett wrote a whole essay in one of his early books on why computers will never feel pain. He also wrote a whole book on humor. So in other words, it's kind of wonderful in a way. I wonder whether he would have ended up where I've ended up, but at least
he understood the size of the mystery and the problem. And I agree with him, if I understood his pain essay correctly, and it's influential on what I'm going to write, I just don't know what it means for a computer to feel pain, be thirsty, be hungry, be jealous, have a good laugh. To me, it's a category error. Now if thinking is the combination of feeling and computing, then there's
never going to be deliberative thought in a computer. Do you see what I'm saying? Well, talking to John, he frequently referred to pain receptors as the example of how humans feel with their bodies. But we wanted to know: what about the more abstract emotions, like joy or jealousy or grief? It's one thing to stub your toe and feel pain radiating up from your foot. It's another to feel pain during a romantic breakup or to feel happy when seeing an old friend. We usually think of those as
all in our heads, right? You know, I'll say something kind of personal: a close friend of mine called me today to tell me that his younger brother had been shot and killed in Baltimore. Now I don't want to be a downer. I'm saying it for a reason. And he was talking to me about the sheer overwhelming physicality of the grief that he was feeling. And I was thinking, what can I say with words to do anything about that pain? And the answer is nothing other than just to try.
But seeing that kind of grief and all that it entails, even more than seeing the patients that I've been looking after for 25 years, is what leads to a little bit of testiness on my part when one tends to downplay this incredible mixture of meaning and loss and memory and pain, and to know that this is a human being who knows, forecasting into the future, that he'll never see this person again. Right? It's not just now. Part of that pain is into the infinite future.
Now all I'm saying is we don't know what that glorious and sad amalgam is. But I'm not going to just dismiss it away and explain it away as some sort of peripheral computation that we will solve within a couple of weeks, months or years. Do you see? I find it just slightly enraging, actually. And I just feel that as a doctor and as a friend, we need to know that we don't know how to think about these things yet. I just don't know and I am not convinced of anything yet.
So I think that there is a link between physical pain and emotional pain, but I can tell you from the losses I felt, it's physical as much as it is cognitive. So grief, I don't know what it would mean for a computer to feel grief. I just don't know. I think we should respect the mystery. So Melanie, I noticed that John and Alison are both a bit skeptical about today's approaches to AI. I mean, will it lead to anything like human intelligence? What do you think? Yeah, I think that
today's approaches have some limitations. Alison put a lot of emphasis on the need for an agent to be actively interacting in the world, as opposed to just passively receiving language input, and for an agent to have its own intrinsic motivation in order to be intelligent. Alison, interestingly, sees large language models more like libraries or databases than like intelligent agents. And I really loved her stone soup metaphor, where her point is that all the important
ingredients of large language models come from humans. Yeah, it's such an interesting illustration because it sort of tells us everything that goes on behind the scenes, you know, before we see the output that an LLM gives us. John seemed to think that full artificial general intelligence is impossible, even in principle. He said that comprehension requires feeling, or the ability to feel one's own internal computations. And he didn't seem to see how computers could ever have such
feelings. And I think most people in AI would disagree with John. Many people in AI don't even think that any kind of embodied interaction with the world is necessary. They'd argue that we shouldn't underestimate the power of language. In our next episode, we'll go deeper into the importance of this cultural technology, as Alison would put it. How does language help us learn and construct meaning?
And what's the relationship between language and thinking? You can be, in principle, good at language without having the ability to do the kind of sequential multi-step reasoning that seems to characterize human thinking. That's next time on COMPLEXITY. COMPLEXITY is the official podcast of the Santa Fe Institute. This episode was produced by Katherine Moncure. Our theme song is by Mitch Mignano, and additional music from Blue Dot Sessions. I'm Abha, thanks for listening.