AI is a promising and inevitable technology. But today's AI is too brittle to trust, at least not enough that we would run it hands-off. We're not going to make the next leap in AI unless we can trust it. So must we wait around until AI is good enough to trust? I argue that we have to trust AI before it can make the next great leap in intelligence. This has to do with the fact that genuine intelligence works via flawed people working in groups.
I argue that the next paradigm in AI research will need to have humans and AIs cooperating and competing with each other, biases and flaws included. I'm Sean McClure and you're listening to Nontrivial. So ever since the dawn of the information age, we've wanted to make our machines smarter. We wanted to add a kind of intelligence to the tools that we create, even if it's not genuine intelligence.
So, you know, we think of computers obviously as these machines that can chug through numbers a lot more effectively than humans can, at least in terms of raw computation. And so it seems to make sense to take advantage of that raw computation and apply it to a lot of the machines that we create, right?
If we're talking about the assembly line and we have a bunch of people using their hands to piece things together, maybe we can take some of the humans out of the loop and get things done a little more quickly, a little more efficiently and possibly even a little more safely, right?
Because computers can do a kind of low-level decision making, if you will, based on the information they're given, the data that's input. A computer can essentially map inputs to outputs in a deterministic fashion, and it does that very reliably, right? Because the software wouldn't work if it didn't.
That's the rules-based paradigm of computing, which is what we're talking about right now, the classic paradigm of here's the input and then I produce the output. We can hard-code those rules into the machine, and as long as it's running, it will reliably produce those outputs. And it can do that at scale, it can do that rapidly.
So if we can fit our problem to that paradigm of, look, this is just a bunch of input and we just need to produce the output, then it can often make sense to let the machine take over as opposed to having the human do that. And so we've been trying to add that kind of smartness-type automation to our machines ever since computers have been here.
And there are upsides to that. From a company perspective, like I said, it could be more efficient. There are potential downsides because you might be removing people from the job market. In war, there could be an upside if you're applying a kind of smartness to, say, guided missiles or drones: the upside may be that you're taking more people out of the war.
Their lives aren't in danger because machines are doing it. But of course, there's a downside to that too, because now you're automating killing, essentially. So there are upsides and downsides, but automation is something that information technology has given to machines that would otherwise have required essentially full human interaction in order to use them.
But the smartness that we've been able to add to machines through computing technology has never really been that smart, right? It's pretty fragile smartness. We would never confuse it with genuine intelligence. And this is true for almost all of computing history, which uses what I would call the classic paradigm of computing, which is rules-based.
In other words, if you want your computer software to do something, you have to know beforehand what it needs to do. And then you basically pre-bake that information into the machine as rules, right? If you see this, you produce this output; if you have this kind of data, produce that output. You have to know beforehand what you want your machine to do.
And if you decide that's what it should do, then you go program that, almost as a kind of knowledge, into the machine and it will carry out those steps very reliably, but they're exact deterministic steps, right? And because of that exact determinism, it lacks the flexibility that we would normally associate with genuine intelligence. People don't walk around as rules machines, right? With inputs and outputs.
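Just to make that concrete, here's a minimal sketch of what that rules-based paradigm looks like in code. The thresholds and bin names are made up for illustration; the point is only that every behavior is pre-baked and deterministic.

```python
# A toy rules-based "smart" machine: every behavior is pre-baked by the
# programmer as explicit, deterministic input-to-output rules.
def classify_part(width_mm: float, weight_g: float) -> str:
    # Hypothetical thresholds, chosen up front by a human who already
    # knows exactly what the machine should do.
    if width_mm < 10 and weight_g < 50:
        return "small-part bin"
    if width_mm < 25:
        return "medium-part bin"
    return "large-part bin"

# The same input always yields the same output; anything the rules
# didn't anticipate just falls through to the default.
print(classify_part(8.0, 42.0))    # -> "small-part bin"
print(classify_part(30.0, 900.0))  # -> "large-part bin"
```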
We don't chug numbers in that fashion. We definitely don't make decisions in real life that way. We use approximations, we have biases, we have heuristics that we use, we're very messy, we're very creative. We don't think the way that computers seem to think. And actually we would just say that computers don't think at all; they're just kind of dumbly converting inputs to outputs.
So even though since the dawn of the information age we've been wanting to add intelligence to machines, we never really do. Although they can sometimes seem smart, we really just pre-bake a bunch of rules so that inputs can very effectively and reliably get converted to outputs.
And if we can fit the problem to that, if the assembly line has deterministic things to it where inputs just become outputs, then we can potentially apply information technology. That's why computers are in almost every product today. They're in automobiles, they're in assembly lines, the military uses them for drones and guided missiles, right?
Any time you can leverage the computer's ability to rapidly convert inputs to outputs, it makes sense to just let the machine do that, aside from other arguments about removing people from the job market and things like that. Without getting into that debate, it does make sense to just use computers, but they're not that smart. And we still have this push to make our tools smarter and smarter, right?
We want to leverage machines for that. We might want humans to be out of the loop more. We often want more automation, trying to improve the quality of our lives, which again is of course debatable, but there's always this push to increase automation in the things that we create. So if we're going to keep pushing the needle on that, we're going to have to have machines that are smarter, right?
I think we're kind of maxing out a lot of what basic rules-based calculation can do with our tools, and we need to enter this new realm where machines can kind of decide the way humans do.
If we're talking about military technology, if the drone still has to wait for the human to make the final call, then the drone is only going to increase efficiency in warfare by so much. Again, ethics aside, whether we agree or disagree that this is a good way to use technology, we can apply this to anything, right? Even the assembly line, or the automobile.
Adding information technology to a vehicle is going to automate quite a few things, but if you really want to automate it, something like self-driving cars, then you're going to have to really hand the reins over to the machine, and it's going to have to be able to think a lot better than it thinks now. Right?
You can't just chug a bunch of numbers together and try to come up with pre-baked rules, because in real-life complex situations we don't know what's coming. Imagine a car driving down a road.
Even if you try to pre-program everything about that road that you possibly could, maybe you have data from Google Maps and you have extra survey teams going out, you try to get the topography and whatever it is. You've also got the weather, you've got different lighting conditions. If it snows, you could have ice, you could have black ice, you could have all these kinds of things. There's just no way to pre-bake that into the machine.
And so classic computing, that kind of dumb, if you will, rules-based input-to-output paradigm of computation, isn't going to cut it if we're going to continue to push the needle on adding smartness to our machines. And that's because genuine intelligence doesn't work off a bunch of pre-baked rules, right? We've got all kinds of evidence to suggest that. At a high level, people just don't think like that; we use approximations, we use heuristics.
If you look at computer science and the approaches that actually solve genuinely hard problems, they're using approximations, they're using heuristics, in a fashion similar to the way people would. Recall in my last episode when we talked about facial recognition. You could try to create a piece of software yourself that does facial recognition, and you could try to just add a bunch of rules together.
You could try to define what a face is in terms of distance between the eyes, distance between eyes and nose, width of nose, width of lips and all this kind of stuff.
And you can go ahead and collect thousands, even a million different rules, and you'll never do it. You'll never get at the essence of what a face is by trying to pre-bake that knowledge into the machine, hard-code it, and then expect that every time it sees an image of a face and translates it into data, it's somehow going to spit out the identity of the face or do some other type of object recognition, right?
If we're going to create intelligence and keep pushing the needle on automation, keep handing over responsibility to the machine, we need a different type of computing paradigm. And that's why today's artificial intelligence technology isn't based on classic computing, even though classic rules-based computing still provides the scaffolding for the way software gets put together.
But the meat of the solution, the actual software that gets at the essence of a face in facial recognition, for example, or the ability to self-drive a car, is not based on deterministic input-output like that. It's not based on that pre-baked knowledge. It's not based on those hard-coded rules. It instead uses a technology called machine learning, which is a fundamentally different computing paradigm.
It's a different way to figure out how to convert an input to an output. It is not an exact mapping. It instead uses statistical models and different approaches to build what is essentially an approximation. It's kind of a soft way of trying to figure out how the input becomes the output. So instead of trying to hard-code the rules, it takes a look at a bunch of data, and it knows the target that it's trying to reach.
Initially it gets it very wrong, and then it iterates thousands, if not millions, of times as it tries to converge. It tries to basically close the distance between a large error, when it's not guessing correctly, and a very low error, when it starts to guess correctly.
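To make that iterate-and-converge idea concrete, here's a minimal sketch of a learning loop, just fitting a straight line to toy data with gradient descent. The data, learning rate and step count are invented for illustration, not taken from any real system.

```python
import random

# Toy data: outputs roughly follow y = 3x plus noise. The model is never
# told the rule; it only sees examples and its own error.
data = [(x, 3.0 * x + random.uniform(-0.5, 0.5)) for x in range(20)]

w = 0.0     # initial guess: very wrong
lr = 0.001  # how big a correction to make each step

for step in range(5000):
    # Gradient of the mean squared error tells us which way to nudge w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # shrink the error a little on each iteration

print(round(w, 2))  # converges near 3.0 without ever being told "3"
```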
And this is very much a black-box approach, because even the engineers who are creating these machine learning models don't really know exactly how it's able to produce the output. The software hasn't been created using a bunch of known, pre-baked rules about the situation.
Instead, what gets coded is the process of iteration and convergence, the ability to build a statistical model from massive amounts of data that tries to get at the quote-unquote essence of what a face is, or what a convoluted road a machine has to navigate is, or whatever the application might be. So AI is a promising and inevitable technology.
And it's just a continuation of this story where we always try to make our machines smarter; we always have. Ever since the information age began, the way we've done that is through computers, but we have to realize that until very recently they have never been that smart.
Because the overwhelming history of computation is based on this classic approach of hard-coded rules, inputs and outputs are handled deterministically and they depend on foreknowledge, the existing knowledge of the situation, because that's what you have to program into the machine. So it's always been very brittle.
And so there's this realization that we need something other than those classic computers to push the needle on automation, to try to get something more intelligent that makes the kind of decisions that humans make. Genuine intelligence is not going to be possible through a bunch of pre-baked rules.
And so artificial intelligence today looks at approaches like machine learning, which doesn't rely on those rules and instead takes a look at a bunch of data about a situation, finds correlations about how inputs seem to map to outputs, and just iterates and iterates until it starts to converge, in ways that we don't really understand, but we know it's converging.
And if it does converge to a good solution, it seems to have something within its makeup that allows it, even in a narrow sense, to reliably map the inputs of a situation to some intelligent outputs. Of course, that mirrors in some way how humans seem to make their decisions, the way that we learn. Think of a child learning a language: they don't use a dictionary, they don't study syntax, they don't study grammar, right?
They just talk to people, and they mumble and they do things very incorrectly for quite a while until, through iteration, through trial and error, they start to converge on what we call language. Eventually they get the right words, piece them together correctly and become sensible. So it mirrors that process, and that's the computing paradigm we use today to try to continue to add smartness to our machines.
But today's AI is very narrow, right? It only works so well. By narrow, I mean there's a certain problem the AI has been trained to solve or to produce outputs for, a certain type of decision or prediction that it's supposed to make. And if you change the situation even just a little bit, today's AI will often fail; it won't work that well.
So even though it's very impressive, say it's facial recognition, image recognition, speech recognition, self-driving cars, whatever it is, it's doing something that we didn't think even five or ten years ago machines would be able to do. It's very impressive. But then you change some of the conditions and all of a sudden it just goes out of whack.
And so we see this with things like adversarial attacks, where nefarious people can actually try to take advantage of this weakness of AI. If there are products out there using today's artificial intelligence technology for their features, and you change the data it's trying to predict against just a little bit, it can throw it for a loop, and sometimes that can expose vulnerabilities.
So you might have an object recognition model that generally knows when it sees a school bus, but you change the picture of the school bus to purple and all of a sudden it calls it an eggplant or something like that. This is not something that humans do, right? Generally speaking, we're not going to confuse a school bus for an eggplant very often.
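Here's a rough sketch of the idea behind those adversarial attacks. The "model" below is just a hand-picked linear scorer on made-up features, not a real attack on a real product, but it shows how a tiny, targeted nudge to the input can flip a decision.

```python
# Toy illustration of why small, targeted input changes can flip a
# model's decision. Real attacks do the same thing against deep nets
# by following the gradient of the loss.
weights = [2.0, -1.5, 0.5]            # "learned" weights (hypothetical)

def score(features):
    return sum(w * f for w, f in zip(weights, features))

x = [0.6, 0.9, 0.4]                   # original input
print(score(x))                       # 0.05 -> barely "school bus"

# Nudge each feature a tiny amount in the direction that hurts the
# score most (the sign of the corresponding weight).
eps = 0.1
x_adv = [f - eps * (1 if w > 0 else -1) for f, w in zip(x, weights)]
print(score(x_adv))                   # -0.35 -> the decision flips
```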
So there's something about genuine intelligence, something we obviously see in humans, that is not being showcased in today's artificial intelligence technology; it's actually quite brittle. And what does this mean? Well, we only use AI as long as humans have the majority of control.
And this makes sense, whether you're in banking or certain industries like healthcare particularly. If today's AI technology is going to be making critical decisions, you obviously don't want to hand over the reins completely. You want humans to have the final say; you probably want humans to have quite a bit of the say. And that's just because we can't trust it that much. It's cool, it works really well.
It can be a very powerful ally to decision making for businesses, and for individuals as well, but we're not going to hand over the reins completely. We want people to have that final say. Well, a consequence of that is we don't really work with AI correctly. At least I would argue that we don't work with AI correctly, because it cannot be trusted to any great degree. What do I mean by this?
Well, we usually frame technology as though it's complementary, right? Obviously a car is a complement to my lifestyle because I can get from A to B a lot more quickly. I could walk, but it would take longer. So a car is a complement to my life. Pick any example: the assembly line, a pocket calculator, my cell phone. Technology obviously complements our existence. A lot of it is not strictly needed.
Although in modern life you could argue a lot of it is, it definitely helps you get things done more effectively. So technology is a complement to the human being. We always think of humans as having the creative, strategic ability to solve problems, right? I talked about using approximations, heuristics, being a lot more flexible than what a computer would do.
Whereas the machine is just this raw calculator that can do a very narrow task very fast. So you bring those two together and they're a complement to each other. Well, things being complementary requires a lack of overlap, right? That's why things are complementary, or at least it's a necessary condition for things to be complementary to each other. If you and I do the exact same things, we're not really a complement to each other, right?
Things are complementary when you bring them together and they have a kind of synergistic effect: A plus B leads to something better than just A and B by themselves. Your skills make up for my lack of skills and vice versa. That's when things are complementary. But AI, at least what AI is supposed to be, is something much more akin to what human beings are in terms of cognition and thinking, right? That's the goal.
Artificial intelligence is supposed to think at least similarly to how humans think; those are the kinds of decisions we want it to make. Again, when we're pushing the needle on automation, we're not just trying to do what classic machines have done already. We've kind of reached that threshold, that limit. What we want to do now, to continue to push that needle, is have machines make the kind of decisions that people make. So that's not a complementary thing. That's an overlapping thing.
So we have today's AI, which is very narrow, and we have humans in the mix to a great degree because we can't really trust it. But that's making us not work with AI correctly, because it's not really supposed to be an us-versus-them thing, humans versus machines. Not with AI. There's actually quite a bit of overlap between humanity and AI, or at least there is supposed to be, and that's what's needed to work with AI technology correctly.
We need to allow AI to make important decisions on its own if we're going to continue to improve automation. So something's kind of out of whack here. We normally think of progress in AI in terms of waiting until it's good enough to trust, right? That makes sense. If we're going to hand over the reins to artificial intelligence in our products and services, then we have to trust that it's going to do its job. And that means it's strong enough, right?
It's not that brittle, it's not that fragile. Take the self-driving car: we're not going to take the human out of that loop altogether until there's sufficient evidence to suggest that it really is able to self-drive.
In other words, even if there are wet roads or black ice, or a pedestrian suddenly darts into the street, it's able to handle these situations, or at least able to handle them with a higher success rate than humans can. There has to be some evidence of that, it would seem. So normally we think of progress in AI in terms of waiting until it's good enough that we can trust it.
But what if we have to trust AI before it can make the necessary progress? What if we have to reverse the directionality on that? What if we have to trust AI while it isn't that great, in order for it to get to the point where it is, you know, on parity with human intelligence? Well, just think about that for a second. It would mean that we would have to reframe how we work with today's AI. OK.
Assuming that's true, that we have to trust AI before it makes the necessary progress, it means we have to reframe how we work with AI. We talked about the overlap in the last section, how we usually frame technology as complementary, and we still do that with AI technology because of the way it gets used. And I think that's not the correct way to use it.
If we reframe how we work with AI, if we have to trust it, then it can't be that complementary type of technology anymore. It has to be a very overlapping technology. It's like working with people. Right? If I'm going to trust AI, I'm going to hand the reins over, like I would to a coworker.
It's like handing the wheel over to another driver, to another pilot of the plane, to another assembly line worker, or trusting decisions or even policies created at the strategic level by another person you're working with, someone else in the organization. You would have to work with AI similar to how you work with people.
You'd have to reframe how you work with AI if you had to trust AI before it could make the necessary progress. Here's why I'm framing it like this: if you look at intelligence in humans, it works via massive collaboration. I've talked a lot about this before. In other words, I'm not a fan of thinking of intelligence in a localized fashion. If you think of human intelligence, most of us are probably envisioning the human brain, right?
It's sitting inside a skull, and inside that skull is all this intelligence, and the individual human being is really, really smart. But as we heard in my economics episode, one of the biggest problems with traditional economics is that it frames individuals as though they're really, really smart people acting in a kind of dumb world, right?
When it's actually the precise opposite of that. It makes much more sense to view people as quite simple creatures who don't make super rational decisions, who don't take into account inflation rates and government spending in their everyday decisions. They don't take into account much. We're very approximate, we're pretty simple creatures, at least in terms of the way we think about decision making.
The world itself is what's complex, and we operate very effectively in that complex world because we have this sophisticated machinery of heuristics and approximations. But it makes sense, when you think of the mass of people getting together in society, that we're kind of simple creatures making basic decisions throughout life, yet in aggregate are able to make very sophisticated decisions at the group level.
And the reason I'm saying this is that when we think about intelligence in human beings, I argue that it makes more sense to think of it as a social phenomenon, something that happens via massive collaboration. OK, so if we go back to what I was saying before: we normally think of progress in AI in terms of waiting until it's good enough to trust. But what if we have to trust AI before it can make the necessary progress, right?
It means we have to reframe how we work with AI; we have to work with them as we do people. And if I'm arguing that intelligence in humans works via massive collaboration, and that that's a better way to think of intelligence, then what that means is that we need to allow today's AI to participate with humans as though it were another human, as though it were collaborating.
So machines are collaborating with people, people are collaborating with people, people are collaborating with machines, and it's just one group that gets together. In other words, stop waiting for artificial intelligence technology to get to a point where we can trust it. And this is what I think is kind of wrong with the current paradigm of artificial intelligence research: it's very much focused on trying to improve the architecture.
If you take the hottest type of artificial intelligence technology today, it falls under the banner of machine learning, but it's deep learning, right? It's a specific type of approach, called connectionist AI, that basically takes a bunch of artificial neurons, wires them into a network, and is able to approximate functions, which means it can map input to output very effectively.
But in an approximate fashion that mimics how we believe humans do it.
The point is that with deep learning, a lot of the work is trying to get the best architecture, the architecture being essentially how the neurons are arranged: how many of them, how to stack them, how they communicate, and how to use different techniques like dropout and all this that I don't need to get into, but how to set all those artificial neurons up in such a way as to produce the best results.
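Just to give a feel for what "architecture" means here, this is a rough sketch of a tiny network in PyTorch. The layer sizes and dropout rate are arbitrary choices for illustration, not a recommended design.

```python
import torch.nn as nn

# A tiny feed-forward network. The "architecture" decisions are things
# like how many layers, how wide each one is, and where to put dropout.
model = nn.Sequential(
    nn.Linear(128, 64),   # 128 input features -> 64 artificial neurons
    nn.ReLU(),            # non-linearity between layers
    nn.Dropout(p=0.2),    # randomly silence 20% of neurons during training
    nn.Linear(64, 10),    # 10 output scores, one per class
)
print(model)
```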
But it puts the focus on the individual, almost like an android or a robot, as if it were an artificial deep learning neural net sitting within the single skull of an android or something. It's a very localized idea: get this one architecture to work. And I think that's really problematic.
In fact, some of the best results come from stepping away from that localized notion of intelligence. There are these meta-learning techniques where you get a lot of different models to work together, and none of them in particular may be that good, but you take the vote of all the predictions and you get a better prediction overall.
You can use genetic algorithms to sweep through all kinds of different architectures, almost at random, and allow the best architecture to pop out. So if you look at these kinds of techniques in artificial intelligence, you're getting a hint at a much better approach, which is to not just think of intelligence in a singular, localized fashion, but to think about it as something that works at a more aggregate level.
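As a rough sketch of that vote-of-many-models idea, here's what simple majority voting over a handful of weak predictors might look like. The individual "models" below are just noisy stand-ins invented for illustration, each right only about 65% of the time on its own.

```python
import random

def weak_model(truth):
    # A deliberately mediocre predictor: correct ~65% of the time.
    return truth if random.random() < 0.65 else 1 - truth

def ensemble_vote(truth, n_models=9):
    votes = [weak_model(truth) for _ in range(n_models)]
    return max(set(votes), key=votes.count)   # majority vote

# Compare one weak model against the vote of the group.
trials = 10_000
solo = sum(weak_model(1) == 1 for _ in range(trials)) / trials
group = sum(ensemble_vote(1) == 1 for _ in range(trials)) / trials
print(f"single model: {solo:.2f}, ensemble of 9: {group:.2f}")
```

None of the nine models is good on its own, but the aggregate answer is noticeably better, which is the hint the episode is pointing at.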
Now, I still think even with the meta-learning and some of these genetic algorithm approaches to finding good architectures, they're still trying to find the best architecture, singular. So I still think it's only hinting at the right approach, because going meta, which is something I always argue for, means stepping outside and focusing on that higher-level process.
But they're still targeting that one ultimate architecture, as though that's going to be the AI, that's when it's going to be good enough, that's going to be something we can trust. And I think that's a mistake. I think a localized view of intelligence, as though it's something that sits within an individual's head, is severely problematic. I think it's wrong. I think it's backwards. Here's how I think intelligence in humans actually works.
We are simple creatures living in a very complex world, and we navigate this complex world because of our social interaction. We share ideas. I've talked about patterns like simultaneous invention before, where the same invention pops up independently in different places at around the same time.
It's virtually guaranteed. You didn't need Einstein in order to get relativity, all this kind of stuff, because it's not about the individual, it's not about the so-called quote-unquote genius. It's about massive collaboration, lots of iteration and convergence, a ton of failure, the exact same thing as natural selection.
At a high level, it really is the exact same thing, because it all subsumes into the ultimate universal algorithm. That's how creativity happens, and that's a much better way to frame intelligence: as a phenomenon that works in aggregate.
So if we normally think of progress in AI in terms of waiting for a good-enough model, or waiting until it's good enough to trust, there's something wrong with that, because that's a singular, localized notion of intelligence. Wait until we get that architecture in deep learning good enough; wait until we make enough progress in AI research that we have the model that does what it needs to do.
But if we have to trust AI before we make the necessary progress, that would mean we have to reframe how we work with today's AI. And it makes sense that you would have to trust it before you use it, because of that process that I'm talking about. If you think about things working as massive collaboration, like I said, there's a lot of failure there. Most people bring dumb ideas to the table, right?
But those dumb ideas are critical, and they're quote-unquote dumb only in the sense that they're not the idea that ended up being the solution. It's never really an individual's idea that ends up being the solution.
Although we might give someone credit because they might be the one who, out of all the collaboration, finally put it all together and said, oh wait, there's the final picture, it's only because of all those inputs and all that massive failure, all that surrounding fragility that had to be there, that we got something that was not brittle, the true output that was needed. I hope that makes sense.
So the point is that to get really intelligent outputs, you have to have a lot of things come together, a lot of imperfect things, quote-unquote dumb agents, dumb individuals working in unison, which at the aggregate level produces really good outputs.
So we need to allow AIs to participate in that process, to be part of that process of working with each other, machine to machine, and with humans, human to machine, in a very error-filled fashion. Error-filled meaning lots of bad ideas coming together, as though you're searching through the search space and trying to let the right thing pop out, as opposed to specifically engineering something in a localized fashion.
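A toy way to picture that, lots of error-filled guesses feeding a group process that still converges, is a crude collaborative search like the one below. The target phrase, scoring and mutation scheme are invented purely for illustration.

```python
import random, string

TARGET = "trust the group"            # hypothetical goal of the search
CHARS = string.ascii_lowercase + " "

def propose(best):
    """One 'agent' offers a mostly-bad idea: a small random tweak."""
    i = random.randrange(len(best))
    return best[:i] + random.choice(CHARS) + best[i + 1:]

def score(guess):
    return sum(a == b for a, b in zip(guess, TARGET))

best = "".join(random.choice(CHARS) for _ in TARGET)
while score(best) < len(TARGET):
    # Many agents throw out flawed proposals; most are useless,
    # but the group keeps whichever one moves things forward.
    proposals = [propose(best) for _ in range(50)]
    best = max(proposals + [best], key=score)

print(best)   # reaches the target despite mostly-bad individual ideas
```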
OK. So even though we think of progress in AI in terms of waiting until it's good enough to trust, I think we have to trust it before it makes the necessary progress. We have to allow it to be error-filled. We have to allow it to work with humans the way faulty humans work together. And that's what's going to allow machines to really become good, and to approach this dream of artificial general intelligence.
And even if we don't reach that, we'd at least get AI that's much less brittle. Going back to the previous point, it is too brittle right now to trust, and because of that lack of trust in AI, we don't really work with AI in the correct way. We work with it as a complementary technology. It's not supposed to be a complementary technology. That's not intelligence; intelligence would be non-complementary. It would be very similar to, if not the same as, us. It's supposed to overlap.
It's not complementary; it's supposed to do what we're doing. Right? Otherwise we're not pushing the needle on automation, we're not adding more smartness to the machine. OK. So let's not wait until it's good enough. Let's bring it in now, work with it as humans work with humans, and reframe how we work with AI so that AI technology can make its proper progress. I hope that makes sense.
So let's talk more about that underlying mechanism of genuine intelligence working via flawed groups, because I really think that's at the heart of this from a mechanistic standpoint. Again, intelligence in humans is not really a local phenomenon. I don't believe that, although it's the easiest way to think of it, and it's not to say there is no locality to it.
Of course, if just me as an individual is stranded on an island, I'm not completely stupid, bumping into trees. I can navigate around, I can try to craft shelter, I can attempt to make a fire, and obviously I've got a bunch of knowledge in my head from things that I've read. I'm definitely no survivalist, but I'm not an idiot. The point is that there is some local aspect to intelligence just within my skull, right?
But that's very much a byproduct of decades' worth of interacting with other human beings, decades' worth of failing and also having some successes in there, and just sitting on top of an absolute mountain of contributions from other individuals. It's because I'm an individual in a society that anybody gets to look at me as an individual and associate any level of intelligence with me. So again, it's not a local phenomenon.
But again, AI research keeps trying to make the best model, as though this one architecture is going to be it. Go online, Google artificial intelligence, look at deep learning networks: this architecture, that architecture, this one does this thing, that one does that thing. And don't get me wrong, there's obviously a lot of success here, but again, they're all brittle, they're all very brittle.
They're all subject to the adversarial attacks that I mentioned previously, where you change the data a little bit and then it fails. They're all based on the assumption that the distribution of data the model was trained on is going to be the distribution of data that it faces when it meets the real world.
If that's the case, great. And when it is not the case, which inevitably it won't be, it's just going to break. So it's good, it does some kind of gimmicky, nice stuff, and I shouldn't just say gimmicky; I'm in AI, I do AI work with my company, and there are definitely useful product features we can create with it. But the human always has to be there, right?
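Here's a tiny sketch of that train-versus-real-world distribution assumption. The numbers are invented, but the pattern is the general one: a rule fit to one slice of data quietly breaks on a shifted slice.

```python
import random

# "Training" data: daytime photos. Brightness happens to separate
# school buses (bright yellow) from everything else on this slice.
bus_bright = [random.uniform(0.6, 0.9) for _ in range(100)]
other_bright = [random.uniform(0.1, 0.4) for _ in range(100)]

# Fit a one-number "model": the midpoint between the two class means.
threshold = (sum(bus_bright) / 100 + sum(other_bright) / 100) / 2

def predict(brightness):
    return "bus" if brightness > threshold else "not bus"

# The real world at night: the same buses, now dim. Same objects,
# shifted data distribution, and the learned rule quietly collapses.
night_buses = [random.uniform(0.1, 0.3) for _ in range(100)]
hits = sum(predict(b) == "bus" for b in night_buses)
print(f"night-time bus accuracy: {hits / 100:.2f}")   # close to 0.00
```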
It's like an ally to the human, and that's not necessarily a bad thing, but it's not the dream of AI, and we're already kind of hitting that wall. So again, I just think AI research needs to get away from that localized idea of what intelligence is. And I think society needs to get away from that idea writ large, but that is a broader conversation.
So we realize, or at least I am arguing, that AI needs to move away from a localized notion of intelligence. I used that traditional economics example before, right? It goes back to that same kind of mechanism that we see in so many places.
We kind of see it with physics envy, we see it where social science tries to reverse engineer and decompose things and think of something in a very reductionist sense, and it just ignores what occurs in aggregate. Emergence is not some funny thing that happens only in certain biological worlds; emergence is everything, in my opinion. It really is what matters. In my last episode, Shifting Up, I talked about how it's all about what happens in aggregate.
So anyway, I think that's where, from a mechanistic standpoint if you will, there's a better framework for AI research to help push that needle: thinking more in terms of the aggregate. And I think what's really interesting here, just to kind of wrap the whole thing up, is trying to involve humans in that process. What would that look like?
Because I'm saying let's not work with AI as though it's just another pretty face, just another shiny piece of technology where we turn the knobs in that localized fashion and try to get this piece of machinery to work better. Instead, step back and think: it needs to be treated as another human being. Now, obviously not exactly the same.
There are all kinds of ethical considerations to make, and there are all kinds of things that we would do with people that we wouldn't do with AIs, at least not yet. But we should be thinking about that when it comes to HCI. Human-computer interaction, or HMI, human-machine interaction, kind of the same thing, basically looks to study the interfaces between humans and machines.
But that entire field is really based on this outdated, or classic, machine-based idea: machines as rules-based things that do what humans don't do well and can't do what humans do well. That clear distinction between human and machine. So they think about how the interface between the human and the machine should exist, and the entire software industry has really been built off this, right?
But with AI, especially when we really start getting into powerful deep learning networks, when AI really starts to take over the kind of decision making that we see in humans, that is not really a human-machine interaction anymore. It should be considered more like a human-human interaction, because whereas HCI, human-computer interaction, assumes that technology is complementary, I'm saying that AI technology is not complementary; it's more like us.
So we need to actually understand human-human interaction, which we probably have a lot more data on, because we've been studying our own species pretty much since we began, thinking about how humans interact with other humans; there's all kinds of research on that. And more to the point, from a non-academic standpoint, we have that built into us, right? Because humans predominantly work in that social, aggregate fashion.
We are equipped from an evolutionary standpoint with the skill sets to work with people. Obviously some do that better than others, but we have it. We know how to talk to people, we can pick up on body language; there's this rich tool set for how to interact with humans. I think a lot of that is going to be useful in interacting with AI, and I think we need to start doing that.
Now, I think human-human interaction is actually the more useful frame when it comes to AI. So what might this actually look like? Well, humans interacting with AIs could allow for cross-pollination of various ideas, and many of them, again, would be flawed.
But just as if you and I were to go for coffee and have a conversation, there'd be all kinds of stupid ideas put on the table, because we'd just be talking like people do.
But then all of a sudden a smart one pops out, because you said something and I said something and you remember this guy or girl said something over here, especially if it was a bigger group, or we're online having a session and a few weeks go by and we do this again. We know what it's like to work in teams, to collaborate; collaboration is a very powerful thing.
And arguably the overwhelming majority of progress in science has come from the sharing of ideas, from making those analogical connections between ideas that superficially seem very different but at a deeper level actually share properties. Analogy plays such a strong role.
And I don't know exactly what that's going to look like with AI. Maybe it's literally a conversational bot, and obviously the conversation wouldn't be as rich as it would be with humans, but there needs to be some way to interact with AI technology, and it would be very flawed. And we're still very flawed, in more ways than we're willing to admit.
But allow those kinds of ideas to go back and forth. One cool area that's popping up now is in mathematics: some mathematicians, anyway, are starting to use AI programs to help them with proofs, because mathematical proofs are extremely lengthy at this point. They can go for thousands of pages, and every little line has to be exactly precise and can't have an error in it.
So they have to check and recheck and recheck. And of course, in the proofs themselves, they'll be doing mathematics for a little bit and then they'll run into a pattern they should recognize, and then, whether it's via analogy or just pattern recognition, they'll remember: oh wait, that's used in this other area of mathematics.
Then they realize they can take a chunk out of there and put it into their proof, because this already leads to that, yada yada. So there's a lot of this kind of pattern recognition and analogy that AI could help with, because it could scour tons of mathematical journals, maybe textbooks too, whatever.
And it would have a lot of learned pattern recognition about patterns leading to other patterns, which is what a lot of mathematical proofs are, right? So it can assist. And again, maybe a lot of what the AI suggests is pretty stupid, but you've got the human there to correct that, and it might make them think of something different even if it was a dumb idea.
And this is the thing: even if the AI recommends something completely out of this world that doesn't even make any sense, it could still trigger something in the individual, and that might take them down an interesting creative path, and that might lead to something good. You might say, well, is that really useful? Do we really want stupid ideas?
But again, I think if we're being honest, that is how it is with humans. We talk back and forth, we say random things, not completely random, but more random than we're willing to admit. We make these jumps and it's very nonlinear, but that's the beauty of it. And that's literally the tractability of it.
That's what allows really hard problems to get solved, because you've got to bounce around that design space, that possibility space, rapidly and in an almost haphazard fashion in order to make the hard problem tractable.
So anyway, I think the next paradigm in AI should look very different than it looks right now, and I think it's going to involve a lot of that kind of human-human interaction as opposed to the classic human-machine interaction. So, to wrap the whole thing up: I don't think AI is just another pretty face. It's not just another shiny piece of technology to be treated as we have treated other pieces of technology. It's different, because it's supposed to be making decisions as people do.
That's the point of it. And that means it's not really a complementary technology; it actually has a lot more overlap. Things aren't complementary when they're overlapping. And so I think creating intelligence will necessitate human flaws. It is something that needs to be worked out in aggregate, getting us away from this localized idea of what intelligence is and seeing things at a larger group level. That's going to be it for this episode. Thanks so much for listening, as always.
Until next time. Take care.