Welcome to Tech Stuff, a production from I Heart Radio. Hey there, and welcome to Tech Stuff. I'm your host, Jonathan Strickland. I'm an executive producer with I Heart Radio, and I love all things tech, and I owe you guys an apology because you're having a rerun episode today. That's because I came down with food poisoning. You can probably hear I don't quite sound like my normal self.
I apologize for that as well, but it completely laid me low and I was unable to do any work, and so I wasn't able to research, write, and record an episode, and I feel badly about that. I really love you guys, and I love being able to create great podcasts for you. But yeah, I was just in bed with a fever pretty much all day yesterday. So this is a rerun from two thousand nineteen about
machine consciousness. I figured this is a good companion piece to some of the things we've talked about recently with artificial intelligence and machine learning and artificial neural networks and all of that sort of stuff. And it's again one of those tricky concepts. Consciousness is a difficult thing to define, even for humans. So sit back and enjoy this rerun, and I am going to go and eat some applesauce. I'll talk to you again at
the end of the episode. There's a topic I have touched on on several occasions in past episodes, but I really wanted to dig down today into this topic because it's one of those that's fascinating and is an underpinning for tons of speculative fiction and horror stories. And since we're now in October, I think this would be kind of thematically linked to Halloween, tech style. It turns out that's pretty hard to do. Halloween technology stories
have already covered stuff like haunted house technology. So today we're going to talk about consciousness and whether or not it might be possible that machines could one day achieve consciousness. Now, I could start this off by talking about the Turing test, which many people have used as the launch point for
machine intelligence and machine consciousness debates. The way we understand that test today, which, by the way, is slightly different from the test that Alan Turing first proposed, is that you have a human interviewer who, through a computer interface, asks questions of a subject, and the subject might be another human, or it might be a computer program posing as a human, and the interviewer just sees text
on a screen. So if the interviewer is unable, past a certain threshold, to tell the difference, to determine whether it was a machine or a person, then the program or machine that's being tested is said to have passed the Turing test. It doesn't mean the program or machine is conscious or even intelligent, but rather says that to outward appearances, it
seems to be intelligent and conscious. See, we humans can't be absolutely sure that other humans are conscious and intelligent. We assume that they are because each of us knows of our own consciousness and our own intelligence. We have direct personal experience with that, and other people seem to display behaviors that indicate they too possess
those traits, and they too have a personal experience. But we cannot be those other people, and so we have to grant them the consideration that they too are conscious and intelligent. And I agree, that is very big of us. This is actually called the problem of other minds in the field of philosophy, and the problem is this: it is impossible for any one of us to step outside
of ourselves and into any other person's consciousness. We cannot feel what other people are feeling or experience their thoughts firsthand. We are aware of our own abilities, but we are only aware of the appearance that other people share those abilities. So assuming that other people also experience consciousness rather than imitating it really, really well, that's a step we all have to take. Turing's point is that if we do grant that consideration to other people, why would we not
do it to machines as well? I mean, the machine appears to possess the same qualities as a human. This is a hypothetical machine, so we cannot experience what that machine is going through, just as we can't experience what another person is going through, at least not on an intrinsic, personal level. So why would we not grant the machine
the same consideration that we would grant to people? And Turing was being a little cheeky. But while I just gave kind of a super fast, high level description of the Turing test, that's not actually where I want to start. I want to begin with the concept of consciousness itself. Now, the reason I want to do this isn't just to
make a longer podcast. It's because I think one of the most fundamental problems with the discussion about AI intelligence, self awareness, and consciousness is that there tends to be a pretty large disconnect between the biologists and the doctors who specialize in neuroscience, particularly cognitive neuroscience, and who do have some understanding about the nature of consciousness in people. And then you have computer scientists who have a deep understanding
of how computers process information. And while we frequently will compare brains to computers, that comparison is not one to one. It is largely a comparison of convenience, and in some cases you could argue it's not terribly useful, it might actually be counterproductive. And so I think at least some of the speculation about machine consciousness is based on a lack of understanding of how complicated and mysterious this topic is in the first place, and this ends up being
really tricky. Consciousness isn't an easily defined quality or quantity. Some people like to say we don't so much define consciousness by what it is, but rather by what it isn't. And this will also kind of bring us into the realm of philosophy. Now, I'm gonna be honest with you, guys, the realm of philosophy is not one I'm terribly comfortable in. I'm pretty pragmatic, and philosophy deals with a lot of stuff that is, at least for now, unknowable.
Philosophy sometimes asks questions that we do not and cannot have the answer to, and in many cases we may never ever be able to answer those questions. And the pragmatist in me says, well, why bother asking the question if you can never get the answer? Let's just focus on the stuff we actually can answer. Now, I realize this is a limitation on my part. I'm owning that. I'm not out to upset the philosophical apple cart. I'm
just of a different philosophical bent. And I realize that just because we can't answer some questions right now, that doesn't necessarily mean they will all go unanswered for all time. We might glean a way of answering at least some of them, though I suspect a few will be forever unanswered. If we go with the basic dictionary definition of consciousness, it's quote, the state of being awake and aware of one's surroundings, end quote. But what this doesn't tell us
is what's going on that lets us do that. It also doesn't talk about being aware of oneself, which we largely consider to be part of consciousness. It's not just being aware of your surroundings, but being aware that you exist within those surroundings, your relationship to your surroundings, and things that are going on within you yourself, your feelings, and your thoughts. The fact that you can process all of this, that you can reflect upon yourself, we tend to group that into
consciousness as well. So how is it that we can feel things and be aware of those feelings? How is it that we can have intentions and be aware of our intentions. We are more complex than beings that simply react to sensory input. We are more than beings that respond to stuff like hunger, fear, or the desire to procreate. We have motivations, sometimes really complex motivations, and we can reflect on those. We can examine them, we can question them, we can even change them. So how do we do
this? Now, we know this is special because some of the things we can do are shared among very few species on Earth. For example, we humans can recognize our own reflections in a mirror starting at around age two or so. We can see the mirror image and we recognize that the mirror image is us. Now, there are only eight species that can do this that we know
about anyway. Those species are the great apes, so you've got humans, gorillas, orangutans, bonobos, and chimpanzees, the magpie, the dolphin, and that's it. Oh, and magpies are birds, right? That's all of them. Recognizing one's own form in a mirror shows a sense of self awareness, literally, an awareness of one's self. Now, there are a lot of great resources online and offline that go into the theme of consciousness. Heck, there are numerous college level courses and
graduate level courses dedicated to this topic. So I'm not going to be able to go into all the different hypotheses, arguments, counter arguments, et cetera in this episode, but I can cover some basics. I also highly recommend you check out Vsauce's video on YouTube that's titled What Is Consciousness?, because it's really good. And no, I don't know Michael. I have no connection to him. This is just an honest recommendation from me. And I have
no connection whatsoever to that video series. The video includes a link to what Vsauce dubs a lean back, which is a playlist of related videos on the subject at hand, in this case, consciousness. Those are also really fascinating. But I do want to point out that, at least at the time of this recording, a couple of the videos in that playlist have since been delisted from YouTube for whatever reason. So there are a couple of blank
spots in there. But what those videos show, and what countless papers and courses and presentations also show, is that the brain is so incredibly complex and nuanced that we don't know what we don't know. We do know that there are some pretty funky things going on in the gray matter up in our noggins, and we also know that many of the explanations given to describe consciousness rely upon some assumptions that we don't have any substantial evidence for.
You can't really assert something to be true if it's based on a premise that you also don't know to be true. That's not how good science works. This is also why I reject the arguments around stuff like ghost hunting equipment. The use of that equipment is predicated on the argument that ghosts exist and they have certain influences on their environment. But we haven't proven that ghosts exist in the first place, let alone that they can affect
the environment. So selling a meter that supposedly detects a ghostly presence from electromagnetic fluctuations makes no logical sense. For us to know that to be true, we would already have to have established that, one, ghosts are real, and two, that they have these electromagnetic fluctuation effects, and we haven't done that. It's like working science in reverse. That's not how it works. Anyway. There are a lot of arguments about consciousness that suggest perhaps there's some ineffable
force that informs it. You can call it the spirit or the soul or whatever. So that argument suggests that this thing we've never proven to have existed is what gives rise to consciousness, and that's a problem. We can't really state that. I mean, you can't say the reason this thing exists is that this other thing, which we've never proven to exist, makes it exist. You've just made it harder to even prove anything. And we have evidence that also shows that that whole idea doesn't
hold water. The evidence comes in the form of brain disorders, brain diseases, and brain imaging. We have seen that disease and damage to the brain affects consciousness, which suggests that consciousness manifests from the actual form and function of our brains, not from any mysterious force. Our ability to perceive, to process information, to have an understanding of the self, to have an accurate reflection of what's going on around us within our own conceptual reality, all of that appears to
be predicated primarily upon the brain. Now, originally I was planning to give a rundown on some of the prevailing theories about consciousness. In other words, I wanted to summarize the various schools of thought about how consciousness actually arises. But as I dove down into the research, it became apparent really quickly that such a discussion would require so much groundwork and, more importantly, a much deeper understanding on
my part than would be practical for this podcast. So instead of talking about the higher order theory of consciousness versus the global workspace theory versus integrated information theory, I'll take a step back and I'll say there's a lot of ongoing debate about the subject, and no one has conclusively proven that any particular theory or argument is most likely true. Each theory has its strengths and its weaknesses, and complicating matters further is that we haven't refined our
language around the concepts enough to differentiate various ideas. That means you can't talk about an organism being conscious of something as if that degree of consciousness is somehow inherently specific; it's not. That's the issue. So, for example, I could say a rat is conscious of a rat terrier, a type of dog that hunts down rats, and so as a result of this consciousness of the rat terrier, the rat attempts to remain hidden so as not to be killed.
But does that mean the rat merely perceives the rat terrier and thus is trying to stay out of its way, and that's as far as the consciousness goes? Or does it mean that the rat actually has a deeper, more meaningful awareness of the rat terrier? The language isn't much help here, and moreover, there's debate about what degrees of consciousness there even are. Also, while I've been harping on consciousness, that's not the only concept we have to consider.
Another is intelligence, which is distinct from consciousness, though there are some similarities. Like consciousness, intelligence is predicated upon brain functions. Again, a long history of investigating brain disorders and brain damage indicates this, as it can affect not just consciousness but also intelligence. So what is intelligence? Well, get ready for this: like consciousness, there's no single agreed upon definition or
theory of intelligence. In general, we use the word intelligence to describe the ability to think, to learn, to absorb knowledge, and to make use of it to develop skills. Intelligence is what allowed humans to learn how to make basic tools, to gain an understanding of how to cultivate plants and develop agriculture, to develop architecture, to understand mathematical principles, and all sorts of stuff. So in humans, we tend to
lump consciousness and intelligence together. We tend to think in terms of being intelligent and being self aware, but the two need not necessarily go hand in hand. There are many people who believe that it could be possible to construct an artificial intelligence or an artificial consciousness independently of one another. When we come back, I'll explain more, but first let's take a quick break. So in a very
general sense, the group of hypotheses that fall under the integrated information theory umbrella states that consciousness emerges through linking elements in our brains, these would be neurons, processing large amounts of information, and that it's the scale of this endeavor that then leads to consciousness. In other words, if you have enough processors working on enough information and they're all interconnected with each other and it's very complicated, bang,
you get consciousness. Now, it is clear our brains process a lot of information. If you do a search in textbooks or online, you'll frequently encounter the stat that our brains have around one hundred billion neurons in them and ten times as many glial cells. Neurons are like the processors in a computer system, and glial cells would be the
support systems and insulators for those processors. Anyway, those numbers have since come under some dispute. An associate professor at Vanderbilt University named Suzana Herculano-Houzel explained that the old way of estimating how many neurons the brain had appeared to be based on taking slices of the brain, estimating the number of neurons in that slice, and then kind of extrapolating that number to apply across
the brain in general. But that ignores stuff like the density of cells and the distribution of the cells across the brain. So what she did, and this also falls into the category of Halloween horror stories, is she took a brain and she freaking dissolved it. She could then get a count of the neuron nuclei that were in the soupy mix. By her accounting, the brain has closer to eighty-six billion neurons and just as many glial cells.
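If it helps to see why extrapolating from a slice is risky, here's a toy sketch in Python. Every number in it is invented purely for illustration, it's not her data or her code, but it shows the basic idea: when one region is far denser than the others, an estimate scaled up from a single slice swings wildly depending on where you slice, while counting nuclei from a well-mixed, dissolved sample tracks the true total.

```python
# Toy illustration with invented numbers: why scaling up a count from one
# slice misleads when neuron density varies across brain regions, and why
# counting nuclei in a dissolved, well-mixed "brain soup" does not.

# Hypothetical regions: (volume in cm^3, neurons per cm^3) -- made-up values
regions = {
    "cortex":     (1000.0, 1.6e7),   # large volume, comparatively low density
    "cerebellum": (150.0, 4.6e8),    # small volume, very high density
    "rest":       (120.0, 6.0e6),
}

true_total = sum(vol * dens for vol, dens in regions.values())
total_volume = sum(vol for vol, _ in regions.values())

# Slice-and-extrapolate: measure density in one slice, multiply by total volume.
# The answer swings by more than an order of magnitude depending on the slice.
for name, (_, dens) in regions.items():
    print(f"slice from {name:10s}: estimate {dens * total_volume:.2e} neurons")

# Dissolve-and-count: homogenize the tissue so the nuclei are evenly mixed,
# count a tiny sample, and scale by the sampled fraction.
suspension_ml, sample_ml = 100.0, 0.01
nuclei_in_sample = true_total * (sample_ml / suspension_ml)  # perfect mixing assumed
print(f"soup estimate: {nuclei_in_sample * suspension_ml / sample_ml:.2e} neurons")
print(f"true total:    {true_total:.2e} neurons")
```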
Still a lot of cells, mind you, but you gotta admit it's a bit of a blow to lose fourteen billion neurons overnight. Still, we're talking about billions of neurons that interconnect through an incredibly complex system in our brains, with different regions of the brain handling different things. And so, yeah, we're processing a lot of information all the time, and
we do happen to be conscious. So could it be possible that with a sufficiently powerful computer system, perhaps made up of hundreds or thousands or tens of thousands of individual computers, each with hundreds of processors, that you could end up with an emergent consciousness, or, as some people have proposed, could the Internet itself become conscious due to the fact that it is an enormous system of interconnected nodes that's pushing around incredible amounts of information. Well maybe,
maybe it's possible. But here's the kicker. This theory doesn't actually explain the mechanism by which the consciousness emerges. See, it's one thing to process information; it's another thing to be aware of that experience. So when I perceive a color, I'm not just perceiving a color. I'm aware that I'm experiencing that color. Or to put it another way, I can relate something to how it makes me feel,
or some other subjective experience that's personal to me. So a machine might objectively be able to return data about stuff like what the color of a piece of paper is: it analyzes the light that's being reflected off that piece of paper and compares that light to a spectrum of colors. But that's still not the same thing as having the
subjective experience of perceiving the color. And there may well be some connection between the complexity of the interconnected neurons in our brains and the amount of information that we're processing and our sense of consciousness, but the theory doesn't actually explain what that connection is. It's more like saying, hey, maybe this thing we have, this consciousness experience, is also linked to this other thing, without actually making the link
between the two. It appears to be correlative but not necessarily causal. To relate that to our personal experience, imagine that you've just poofed into existence. You have no prior knowledge of the world, or the physics of that world, or basic stuff like that, so you're drawing conclusions about the world around you based solely on your observations as
you wander around and do stuff. And at one point you see an interesting looking rock on the path, so you bend over and you pick up the rock, and when you do, it starts to rain, and you think, well, maybe I caused it to rain because I picked up this rock. And maybe it happens a few times where you pick up a rock and it starts to rain, which seems to support your thesis. But does that mean
you're actually causing the effects that you are observing. If so, what is it about picking up the rock that's making it rain? Now, even in this absurd case that I'm making, you could argue that if there's never an instance in which picking up the rock wasn't immediately followed by rain, there's a lot of evidence to suggest the two are linked, but you still can't explain why they are linked, why
does one cause the other. And that's a problem because without that piece, you're never really totally sure that you're on the right track. That's kind of where we are with consciousness. We've got a lot of ideas about what makes it happen, but those ideas are mostly missing key pieces that explain why it's happening. Now, it's possible that we cannot reduce consciousness any further than we already have, and maybe that means we never really get a handle
on what makes it happen. It's also possible that we could facilitate the emergence of consciousness in machines without knowing how we did it. Essentially, that would be like stumbling upon the phenomenon by luck. We just happened to create the conditions necessary to allow some form of artificial consciousness to emerge. Now, I think this might be possible, but
it strikes me as a long shot. I think of it like being locked in a dark warehouse filled with every mechanical part you can imagine, and you start trying to put things together in complete darkness, and then the lights come on and you see that you have created a perfect replica of an F fifteen fighter jet. Is that possible? Well, I mean, yeah, I guess, but it seems overwhelmingly unlikely. But again, this is based off ignorance. It's based off the fact that it hasn't happened yet,
so I could be totally wrong here. Now, on the flip side of that, programmers, engineers, and scientists have created computer systems that can process information in intricate ways to come up with solutions to problems that seem, at least at first glance, to be similar to how we humans think. We even have names for systems that reflect biological systems, like artificial neural networks. Now the name might make it
sound like it's a robot brain, but it's not quite that. Instead, it's a model for computing in which components in the system act kind of like neurons. They're interconnected and each
one does a specific process. The nodes in the computer system connect to other nodes, so you feed the system input, whatever it is you want to process, and then the nodes that accept that input perform some form of operation on it and send that resulting data, the answer after they've processed this information, on to other nodes
in the network. It's a nonlinear approach to computing, and by adjusting the processes each node performs, also known as adjusting the weights of the nodes, you can tweak the outcomes. Now, this is incredibly useful. If you already know the outcome you want, you can tweak the system so that it learns or is trained to recognize something specific. For example, you could train a computer system to recognize faces, so you would feed it images.
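If it helps to picture what nodes and weights mean in practice, here's a minimal sketch in Python. It's a toy, not any particular library's implementation, and every weight in it is an arbitrary number I picked for illustration.

```python
import math

def sigmoid(x):
    # Squash any number into the range 0 to 1; a common node "activation."
    return 1.0 / (1.0 + math.exp(-x))

def node(inputs, weights, bias):
    # One artificial "neuron": a weighted sum of its inputs, then a squash.
    return sigmoid(sum(i * w for i, w in zip(inputs, weights)) + bias)

def tiny_network(inputs, hidden_weights, hidden_biases, out_weights, out_bias):
    # Two hidden nodes feed one output node: the smallest possible layered net.
    hidden = [node(inputs, w, b) for w, b in zip(hidden_weights, hidden_biases)]
    return node(hidden, out_weights, out_bias)

x = [0.8, 0.2]  # some made-up input, say two pixel intensities

# Same wiring, two different weight settings. "Training" is just the process
# of nudging these numbers until the outputs match the answers you want.
initial  = tiny_network(x, [[0.1, 0.1], [0.1, 0.1]], [0.0, 0.0], [0.1, 0.1], 0.0)
adjusted = tiny_network(x, [[4.0, -2.0], [-1.5, 3.0]], [0.5, -0.5], [2.5, -2.0], 0.3)

print(f"output with initial weights:  {initial:.3f}")
print(f"output with adjusted weights: {adjusted:.3f}")
```

Training on those face images is that nudging process carried out many, many times, with far more nodes than this.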
Some of the images would have faces in them, some would not have faces in them. Some might have something that could be a face, but it's hard to tell. Maybe it's a shape in a picture that looks kind of like a face, but it's not actually someone's face. Anyway, you train the computer model to try and separate the faces from the non faces, and it might take many iterations to get the model trained up using your starting
data, your training data. Now, once you do have your computer model trained up, you've tweaked all the nodes so that it is reliably producing results that say yes, this is a face, or no, this isn't. You can now feed that same computer model brand new images that it has never seen before, and it can perform the same functions. You have taught the computer model how to do something. But this isn't like spontaneous intelligence, and it's not connected
to consciousness. You couldn't really call it thinking so much as just being trained to recognize specific patterns pretty well. Now, that's just one example of putting an artificial neural network to use. There are lots of others, and there are also systems like IBM's Watson, which also appears, at a casual glance, to think. This was helped in no small part by the very public display of Watson competing on special episodes of Jeopardy, where it went up
against human opponents who were former Jeopardy champions themselves. Watson famously couldn't call upon the Internet to search for answers. All the data the computer could access was self contained in its undeniably voluminous storage, and the computer had to parse what the clues in Jeopardy were actually looking for, then come up with an appropriate response. And to make matters more tricky, the computer wasn't returning a guaranteed right answer.
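As described next, Watson would only offer a response when its confidence cleared a threshold. Here's a toy sketch of that kind of decision rule; the candidate answers, scores, and threshold are all invented for illustration and have nothing to do with IBM's actual code.

```python
# Toy confidence-threshold rule: answer only when the best candidate is
# confident enough, otherwise stay silent. All values here are made up.

def choose_response(candidates, threshold=0.7):
    # candidates maps a candidate answer to a confidence score between 0 and 1.
    if not candidates:
        return None
    best_answer, best_confidence = max(candidates.items(), key=lambda kv: kv[1])
    return best_answer if best_confidence >= threshold else None

clue_candidates = {
    "Who is Isaac Newton?": 0.84,
    "Who is Gottfried Leibniz?": 0.31,
    "Who is Robert Hooke?": 0.12,
}

answer = choose_response(clue_candidates)
print(answer if answer is not None else "(stays silent)")
```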
The computer had to come to a judgment on how confident it was that the answer it had arrived at was the correct one. If the confidence met a certain threshold, then Watson would submit an answer. If it did not meet that threshold, Watson would remain silent. It's a remarkable achievement, and it has lots of potential applications, many of which
are actually in action today. But it's still not quite at the level of a machine thinking like a human, and I don't think anyone at IBM would suggest that it possesses any sense of consciousness. When we come back, I'll talk about a famous thought experiment that really starts to examine whether or not machines could ever attain intelligence
and consciousness. But first let's take another quick break. And now this brings me to a famous thought experiment proposed by John Searle, a philosopher who questioned whether we could say a machine, even one so proficient that it could deliver reliable answers on demand, would ever truly be intelligent, at least on a level similar to what we humans identify as being intelligent. It's called the Chinese room argument, which Searle included in his article titled Minds, Brains, and
Programs for the journal Behavioral and Brain Sciences. Here's the premise of the thought experiment. Imagine that you are in a simple room. The room has a table and a chair. There's a ream of blank paper, there's a brush, there's some ink, and there's also a large book within the room that contains pairs of Chinese symbols. Oh, and we also have to imagine that you don't understand or recognize these Chinese symbols. They mean nothing to you.
There's also a door to the room, and the door has a mail slot, and every now and again someone slides a piece of paper through the slot. The piece of paper has one of those Chinese symbols printed on it, and it's your job to go through the book and find the matching symbol in the book, plus the corresponding symbol in the pair, because remember I said there were
symbols that were paired together. You then take a blank sheet of paper, you draw the corresponding symbol from that pair onto the sheet of paper, and finally you slip that piece of paper through the mail slot, presumably to the person who gave you the first piece of paper, the original prompt in this exchange. So to an outside observer, let's say it's actually the person who's slipping the piece of paper to you, it would seem that
whoever is inside the room actually understands Chinese symbols. They can recognize the significance of whatever symbol was sent in through the mail slot, and then match it to whatever the corresponding data is for that particular symbol, and then return that to the user. So to the outside observer, it appears as though whatever is inside the room comprehends what it is doing. But, argues Searle, that's only an illusion, because the person inside the room doesn't
know what any of those symbols actually mean. So, if this is you, you have no context. You don't know what any individual symbol stands for, nor do you understand why any symbol would be paired with any other symbol. You don't know the reasoning behind that. All you have is a book of rules. But the rules only state
what your response should be given a specific input. The rules don't tell you why, either on a granular level of what the symbols actually mean, or on a larger scale when it comes to what you're actually accomplishing in this endeavor. All you are doing is performing a physical action over and over based on a set of rules you don't understand. And Searle then uses this argument to say that, essentially, we have to think the same way about machines. The machines process information based on the input
they receive and the program that they are following. That's it. They don't have awareness or understanding of what the information is. Searle was taking aim at a particular concept in AI, often dubbed strong AI or general AI. It's a sort of general artificial intelligence, so it's something that we could or would compare directly to human intelligence, even if it didn't work the same way as our intelligence works. The argument is that the capacity and the outcomes would be
similar enough for us to make the comparison. This is the type of intelligence that we see in science fiction doomsday scenarios where the machines have rebelled against humans, or the machines appear to misinterpret simple requests, or the machines come to conclusions that, while logically sound, spell doom
for us all. The classic example of this, by the way, is appealing to a super smart artificial intelligence and you say, could you please bring about world peace, because we're all sorts of messed up. And the intelligence processes this and then concludes that while there are at least two humans, there can never be a guarantee of peace, because there's always the opportunity for disagreement and violence between two humans, and so to achieve true peace, the computer then goes
on a killing spree to wipe out all of humanity. Now, Searle is not necessarily saying that computers won't contribute to a catastrophic outcome for humanity. Instead, he's saying they're not actually thinking or processing information in a truly intelligent way.
They are arriving at outcomes through a series of processes that might appear to be intelligent at first glance, but when you break them down, they all reveal themselves to be nothing more than a very complex series of mathematical processes.
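A minimal sketch of that point, going back to the room: the rulebook can be modeled as nothing but a lookup table pairing symbols, applied with no meaning attached anywhere. The symbols below are placeholders I made up, not real Chinese characters.

```python
# The Chinese room as pure syntax: a lookup table pairing symbols, applied
# mechanically. Nothing in here represents what any symbol means.

RULEBOOK = {
    "SYMBOL_A": "SYMBOL_X",
    "SYMBOL_B": "SYMBOL_Y",
    "SYMBOL_C": "SYMBOL_Z",
}

def person_in_the_room(slip_of_paper):
    # Match the incoming symbol, copy out its paired symbol, pass it back.
    # This is lookup and transcription, not understanding.
    return RULEBOOK.get(slip_of_paper, "SYMBOL_UNKNOWN")

for incoming in ["SYMBOL_B", "SYMBOL_A", "SYMBOL_Q"]:
    print(incoming, "->", person_in_the_room(incoming))
```

To an outside observer, the responses look fluent; inside, it's table lookups all the way down.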
You could even break it down further into binary and say that ultimately each apparent decision would just be a particular sequence of switches that are in the on or off position, and the state of each switch would be determined by the input and the program you were running, not by some intelligent artificial creation that is reasoning through a problem. Essentially, Searle's argument boils down to the difference between syntax and semantics. Syntax would be the set of rules that you would
follow with those symbols. For example, in English, the letter Q is nearly always followed by the letter U. The few exceptions to this rule mostly involve romanized words from other languages, in which the letter Q represents a sound that's not natively present in English. So you could program a machine to follow the basic rule that the symbol Q should be followed by the symbol U, assuming you're eliminating all those exceptions I just mentioned. But that
doesn't lead to a grasp of semantics, which is actual meaning. Moreover, Searle asserts that it's impossible to come to a grasp of semantics merely through a mastery of syntax. You might know those rules flawlessly, but Searle argues, you still wouldn't understand why there are rules, or what the output of
those rules means, or even what the input means. There are some general counter arguments that philosophers have made to Searle's thought experiment, and according to the Stanford Encyclopedia of Philosophy, which is a phenomenal resource, though also an incredibly dense one, these counter arguments tend to fall into three groups. The first group agrees with Searle that the person inside the
room clearly has no understanding of the Chinese symbols. But the group counters the notion that the system as a whole can't understand it. In fact, they say the opposite. They say, yes, the person inside the room doesn't understand, but you're looking at a specific component of a larger system. And if we consider the system, or maybe a virtual mind that exists due to the system, that does have an understanding. This is sort of like saying a neuron
in the brain doesn't understand anything. It sends along signals that collectively and through mechanisms we don't fully understand, become thoughts that we can become conscious of. So in this argument, the person in the room is just a component of an overall system, and the system possesses intelligence even if
the component does not. The second group argues that if the computer system either could simulate the operation of a brain, perhaps with billions of nodes, approaching the complexity of a human brain with billions of neurons, or if the system were to inhabit a robotic body that could have direct interaction with its environment, then the system could manifest intelligence.
The third group rejects Searle's arguments more thoroughly and on various grounds, ranging from Searle's experiment being too narrow in scope to an argument about what the word understand actually means. This is where things get a bit more loosey goosey, and sometimes I feel like arguments in this group amount to, oh yeah? But again, I'm pragmatic, so I tend to have a pretty strong bias against these arguments, and I recognize that this means I'm not
giving them fair consideration because of those biases. A few of these arguments take issue with Searle's assertion that one cannot grasp semantics through an understanding of syntax. And here's something that I find really interesting. Searle originally published this argument way back in nineteen eighty. It's been nearly forty years since he first proposed it, and to this day, there is no consensus on whether or not his argument is sound.
So why is that? Well, it's because, as I've covered in this episode, the concepts of intelligence and, more to the point, consciousness are wibbly wobbly, though not, as far as I can tell, timey wimey. When we can't even nail down specific definitions for words like understand, it becomes difficult to even tell when we're agreeing or disagreeing on certain topics. It could be that while people are in a debate and are using words in different ways, it
turns out they're actually in agreement with one another. Such is the messiness that is intelligence. Further, we've not yet observed anything in the machine world that seems, upon closer examination, to reflect true intelligence and consciousness, at least in the way we experience it. In fact, we can't say that we've seen any artificial constructs that have experienced anything, because, as far as we know, no such device has any
awareness of itself. Now, I'm not sure if we'll ever create a machine that will have true intelligence and consciousness, using the word true here to mean human like. Now, I feel pretty confident that, if it is possible, we will get around to it eventually. It might take way more resources than we currently estimate, or maybe it will just require a different computational approach, maybe it'll rely on bleeding edge technologies like quantum computing. I figure, if it's
something we can do, we will do it. It's just a question of time, really. And further, it's hard for me to come to a conclusion other than it will ultimately prove possible to make an intelligent, conscious construct. Now, I believe that because I believe our own intelligence and our own consciousness are firmly rooted in our brains. I
don't think there's anything mystical involved. And while we don't have a full picture of how it happens in our brains, we at least know that it does happen, and we know some of the questions to ask and have some ideas on how to search for answers. It's not a complete picture, and we still have a very long way to go, but I think if it's possible to build a full understanding of how our brains work with regard to intelligence and consciousness, we'll get there too,
sooner or later. Probably later. I suppose there's still the chance that we could create an intelligent and or conscious machine just by luck or accident. And while I intuitively feel that this is unlikely, I have to admit that intuition isn't really reliable in these matters. It feels to me like it is the longest of long shots, but that's entirely based on the fact that we haven't managed to do it up until and including now. Maybe the right sequence of events is right around the corner.
Just because it hasn't happened yet doesn't mean it can't or won't happen at all. And it's good to remember that machines don't need to be particularly intelligent or conscious to be useful or potentially dangerous. We can see examples of that playing out already with devices that have some limited or weak AI. And by limited, I mean it's not general intelligence. I don't mean that the AI itself
is somehow unsophisticated or primitive. So it may not even matter if we never create devices that have true or human like intelligence. We might be able to accomplish just as much with something that does not have those capabilities. In other words, this is a very complicated topic, one that I think gets oversimplified in a lot of fiction and also in a lot of speculative prognostications about the future.
I mean, you'll see a lot of videos about how in the future AI is going to perform a more intrinsic role, or maybe it will be an existential threat to humanity or whatever it may be. And I think a lot of that is predicated upon, uh, a deep misunderstanding or underestimation of how complicated cognitive neuroscience actually is and how little we really understand when it comes to our own consciousness, let alone how we would bring about
such a thing in a different device. I hope you enjoyed that rerun, and I promise we'll be back to new episodes very soon. I hope to have a news episode for you tomorrow. That's the plan. I have an interview I have to do for another show today, but after that, I plan on jumping on the Tech Stuff news episode. So, trying to get back up and running. You know, I'm not a hundred percent yet, but gosh darn it, this show is really what keeps me going.
So we're gonna soldier on. The show, as they say, must keep on going. I know, I make that joke a lot. All right. Well, that's it. If you have any suggestions for future episodes of Tech Stuff, please reach out to me. The best way to do that is on Twitter. The handle is TechStuffHSW. You guys, take care. I can tell you food poisoning is no fun, but the show sometimes is. All right, that's it for me. Bye, I'll talk to you again really soon. Tech Stuff is an
I Heart Radio production. For more podcasts from I Heart Radio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen to your favorite shows.