What is intelligence? That is the question a lot of us have been asking ourselves as AI shows up in more and more places. Some of us are using it now at our jobs. Some of us fear that someday AI may make our jobs obsolete. Some of us only know AI through our phones and our other devices. A lot of us have a lot of questions.
What is AI, and how does it work? What exactly does AI know? How does AI know what it knows? And what does AI not know? These questions are critically important because they relate directly to our safety. They relate to the reliability of the products we use that have AI built into them. They relate directly to our ability to do what we do as humans, in our hobbies and in other pursuits. These are the kinds of questions we are going to talk about right here on this show.
So, for the last several months I have been reading a book called A Brief History of Intelligence by Max S. Bennett. Essentially, this book talks about brains and brain-like structures all throughout nature. There are chapters on brains in insects, fish, reptiles, mammals, and humans.
It talks about how they know what they know and, essentially, what hardware is required for that. It is fabulous. Quick example from the book, my favorite: it says that reptiles, unlike humans, are not able to form an internal model of their own bodies.
So, if an alligator is walking and it steps over an obstacle, it is unable to move its back legs to account for that obstacle. Its back legs just run right into it. I don't want to say they are on autopilot, but it certainly doesn't have the ability to visualize. That's amazing. That fascinates me.
So, I suggest to any of our viewers or listeners: check out this book and read along. We also have a discussion happening on Slack at Breaking Math Podcast, or Breaking Math Pod, as well as on Discord, or even over email. Now, I have a very special guest today. My guest is a creator who has worked with Khan Academy. He has worked with Pixar, with Disney. He has worked with Google. He runs a YouTube channel all about electrical engineering, computer science, and machine learning.
For the last five or so years, he has been making a series of documentary-style videos on machine learning. His name is Brit Cruise, and I would like to introduce him. Brit Cruise, welcome to the show. How are you, sir? Hi, Gabe. Thrilled to be here. I am thrilled to have you here. You have been in this field for the last five years, but really, you have been doing your channel on computer science for the last 11 years?
You have a lot to say on this topic. Can you tell our audience a little bit about the history of your YouTube channel, as well as your work at Khan Academy? Sure. Actually, it is funny because it connects to memories. When I think back, there is one strong memory. I was maybe 20 or something, and while I was in university, or right after I finished, I went and did the classic working-on-a-farm-for-the-summer thing, way up north in Quebec.
And thinking back now as a parent, so busy with kids, I had those weeks or months of total clarity: just literally digging holes and doing farm work, and then at night having no internet. Towards the end of that trip, I remember thinking, all right, I've got to go back to school and the working world. I'm 21. What am I going to do?
And in one notebook, I wrote: well, what are you good at? List some things. The two things I knew I was good at, and good usually means what you're excited about, were video making in all its forms, and explaining things. And that's an interesting skill; I don't know if it's learned or innate. I also liked TV shows like Connections, and I thought, I bet if I did my own show, that would be good.
And I wrote, literally: I would do a show on cryptography, and then I would do information theory, computer science, physics. And one day I'd get to AI, and it's at the end of the notebook. And I know because it's one of the one or two notebooks I've kept in my life, thrown in a box in the attic. I actually climbed up and looked at it over a decade later, and I'm like, wow, that little insight on that farm was a thread that continues to this day.
Wow, very cool. Very cool. And I know there's a lot more to the story from there; you worked as a computer science teacher. But I saw that you have a very unique approach, and I'm sorry, I don't mean to skip around; I'll go back to the farm story in a second. No, skip around. Why not? You have a unique approach to computer science. I've heard that traditionally a lot of folks don't like a degree program that focuses on: here are all of your tools, which you may or may not use.
And I know that on your channel specifically, you don't start with the tools. You start by looking at the problem. Can you tell us a little bit about that? Yeah. I can put it in one line, which is: you've got to teach forwards, not backwards. And just as you're saying, I actually got chills thinking about the literal torture of going through it.
Let's call it a computer science program, circa whatever, not this year; I don't know what it is this year. But for most of its history, it was a lot of pain and struggle that didn't need to be there. I think it's partly imposed on purpose to filter people out. That's an argument about universities. But in the context of computer science, teaching backwards is trying to keep up with a bunch of modern things, a lot of which won't even matter in the future.
And then sometimes in a course it's like: oh, 200 years ago, so-and-so said that. And I remember in school there were one or two moments where I'm like, what?
There was something before we even had electricity that matters in computer science? I want to know more. And so the takeaway, when I was thinking about how I would teach, is kind of simple: I would just teach forwards. Teaching forwards means you have to unlearn what you know and go back to a blank slate.
Oh man, that's important. And any good teacher can naturally jump to a blank slate and rebuild the explanation in that moment. That is what you have to do. So that's how I approached it. Wow. Here's the amazing thing: what you're talking about relates directly to the philosophy of machine learning itself. And I'm sure you're probably aware of this as well.
With machine learning, you don't tell the machine how to solve the problem. You literally just give it the parameters and say: teach yourself how to solve the problem. And then, through trial and error and playing things out, it discovers the solution itself. Machine learning can teach itself how to play chess, or the game Go, or a wide swath of other problems.
And so the optimal learning for humans is similar to AI, where we figure out the tools and just have the freedom to explore. I love that. You said earlier that you had a... Oh, sorry, go ahead. Yeah, you just gave me a thought, which is that the philosophy of machine learning can be looked at, on one hand, as a very simple question: can humans take their hands off the controls? And the divide goes back to the beginning.
You have humans who want to have their hands on the controls, because we're smart and we are going to have good ideas, and we feel good when we have good ideas. And that's what we call good old-fashioned AI: literally writing the code, step by step, for how to be smart. The classic example is that this hits a wall with images, because images are too complex for any hand-written human algorithm to work. The other camp, going back to the beginning, said: we're going to model biology, and we mean it.
We're not going to fake-model biology and then write human code around the edges. And that's the connectionist theory, which is: we're going to build a net, a mesh of connections, and, this is important because it's going to connect to future questions, we don't even care about those connections. They could be random, or we could have all of them.
And we're going to learn what we need to do to perform, based on some reward. That thread has always been there, and people are usually in one camp or the other, very rarely both. And so, just like a political divide, it's been fun to watch the history of this: the people who need to have their hands on the controls and the people who don't. And that leads to the divide today. It's so simple, but I can't overstate how important that divide is.
Oh my god. Even in the research today, you could put the papers in two different piles. How about I do this: how about I ask an AI if, in the style of Jordan Peterson, it can summarize the divide in philosophy between chaos and order, and getting order in your life by using chaos appropriately, or something like that. As long as it doesn't have a Canadian accent, I'll take it seriously.
Oh, for sure. Yeah, I love it. Okay, so many things I was going to say before. So, you mentioned a penchant for education. I'm actually a former educator. I've been told that I'm very, very good at explaining things. But I had a miserable time in education. I had anxiety and panic attacks, and that crippled my classroom management.
I had a rough go at it, but I salute teachers who are good at it. They are an inspiration. Still, I've been told at least that I have an ability to explain things with stories and analogies. Which is also a part of machine learning, and how knowledge is stored distributively; we'll get to that here in a bit. So it's interesting. I want to mention my late co-host, Sophia Baca.
She would love this conversation. She would love talking to you, and I miss her very much. We've already done an episode about how Sophia passed away this last year. Part of why I want this show to go on is in honor of her and the way she liked to do things. She was very, very creative, but she was also my math tutor. She was somehow able to code-switch between creative ideas and analytical ideas.
How often do you meet people who are good at doing both? Sophia was definitely one of those people, and she left her imprint on this show for sure. An amazing individual. So yeah, happy to be part of that effort. Yeah, thank you. I also admire, if you don't mind me saying, that you and I are not too dissimilar. We both started a show that came out of our enjoyment of math, science, and creativity. In your case, your show is Art of the Problem on YouTube.
And our show is the Breaking Math podcast. We are totally riding the coattails of Breaking Bad; somebody once said, what if you called your show Crystal Math, and I said no, I don't think we're going to go that route. But I love the idea of creative storytellers who are sharing scientific and analytical knowledge, so it's kind of a cool way of reflecting on the history of knowledge.
Now, to give our audience a quick preview, in this show we are going to talk about some of Brit's previous videos, which follow the same format as this book: they break down knowledge in brains in nature, and then they get to AI. Happy accident. Yeah, very cool, very cool. So, I watched every one of your videos these last three weeks, and I tried to summarize them.
Essentially, whenever I watch a good sermon or a good talk, I usually come away with a big five takeaways, or a big three. I had a big three from your videos, but then I made it a big five: three takeaways about layers of learning, and then two additional takeaways about what neural networks are and how they learn.
Let me know how I do on the big takeaways, and let me know if you can elaborate. What I wrote, from watching your videos, is that the first of the three takeaways on learning is that in nature you have examples of trial and error, which involves randomly trying something and then reinforcing it. That is used all throughout nature, including when toddlers learn, or when bacteria are spreading, or just about anywhere we look.
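To make that first layer concrete for readers of the transcript, here is a minimal sketch of trial-and-error learning, in which an agent tries actions at random and reinforces whichever ones happen to pay off. The actions and the reward rule below are invented purely for illustration.

```python
import random

# Trial-and-error learning: try actions at random, reinforce what works.
# The "environment" is invented for illustration: action 2 is secretly best.
weights = [1.0, 1.0, 1.0]           # initial preference for each of 3 actions

def reward(action):
    return 1 if action == 2 else 0  # the hidden rule the learner must discover

for trial in range(1000):
    # Pick an action at random, biased by the current preferences.
    action = random.choices(range(3), weights=weights)[0]
    # Reinforce: actions that were rewarded become more likely next time.
    weights[action] += reward(action)

print(weights)  # the preference for action 2 dominates after many trials
```

Nothing in the sketch tells the learner which action is good; the bias toward action 2 emerges purely from random trials and reinforcement, which is the point being made above.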
There's another layer that's a little more complex, and that layer is what we know as classical conditioning, for those who have studied psychology. That's when Ivan Pavlov trained his dogs to salivate whenever he rang a bell: the dogs began to associate the bell with food, because he would always feed them when he rang it, and eventually the bell alone would cause salivation.
So that's associating a sense and an experience. And then the third form of learning is the most impressive. I think of it as abstract imagining, and according to this book, it's how humans and mammals have the ability to imagine scenarios: you can play out a scenario in your head. You can imagine what happens if I walk over that pothole and I fall, but you don't actually do it. It's a step beyond associative learning, and I've heard it called internal modeling and simulation.
Would you say that's a pretty good summary of the layers of learning? Yeah, that's really good, and I'll just compress your summary. In the three layers, the first one is genetic learning, and the reward is life or death. The way it manifests is that our genes spill out into pre-wired connections that stay fixed our whole life. That's why the insects you were saying earlier, and that alligator, and it's a he, oh yeah,
are on autopilot in that context. More specifically, when I start the video, it's that insect brain: fixed connections. The second layer is that our DNA spills out into the ability for connections to change during life. Changeable connections, that's the key. And then what's neat about the third layer is that it has nothing to do with connections changing.
It's that thought patterns in our brain, at a kind of higher level of abstraction, are making the connections. Nice, nice. And right now, if we're talking about neural networks: I've been trying to find the list of activities that neural networks can't yet do quite right. This is where there's some ambiguity with our current technology and what neural nets can do. Can you name some activities that right now a neural network, whether it's ChatGPT or anything else, cannot do?
Sure, yeah, two quick points. Point one is: be very careful when you hear someone say what neural networks can't do. 99.94% of the time they're wrong, and I'll give an example where I even found myself wrong, because there's a human instinct to draw the line way out and do the meta thing on the machines, like: you'll never catch me up here.
The one practical example I had, and this is true any time it has a strongly learned pattern, something it knows very well, and in the human analogy it would be some over-habituated behavior like looking at your phone: it's very hard to do the opposite.
This connects to the no-free-lunch theorem: when you learn something, there's a cost. And so I was using, even in some talks, my favorite example: you can't play tic-tac-toe backwards. I charted all the models and they were all failing, and I got to be up on my high horse, my human horse, and make fun of it, and everyone laughs in the crowd.
In one talk, maybe four months ago, though it feels like ten years, I said: once it does this and other similar examples, then I'll be kind of scared. And guess what: now it can easily do that, as of the GPT-4 model.
So, (a), be careful. Where I'm still fairly sure it fails is self-awareness. The practical issue there is runaway errors: not being aware of errors as it is making them, which leads to an explosion of errors. By the way, I want to talk about hallucinations, because people got that wrong.
But even as I say that it's not self-aware and it's going to have runaway errors, I already know of research where they're adding yet another layer: a neural network, in this case a large language model, looking at itself, and that way you can get out of that error. So the point I'm trying to make is that it's not a fixed list of what it can and can't do. It's very blurry; everyone's walking in the dark with their hands out right now.
Oh, that's so philosophical, so existential, and that's the world we're in right now, isn't it? I love it, I love it. Oh man, you set the stage high. Yeah, in fact, that's literally what we're talking about at work right now: when ChatGPT hallucinates. I've got a guy at work who says, well, let's just create an app that does a quick fact-check on it, that splices out the factual information, which, you know, is a patch.
But even humans continually make mistakes; the way our brains work, we always have our own runaway mistakes. So I think it'll be an ongoing process, and every patch is going to be somewhere between 0 and 100% effective. And, you know, with Gödel's incompleteness in there, I think it's always an evolving game, moving so fast.
Just back to the divide of hands on the controls or not hands on the controls: where would you put the patches in that divide? Yeah, absolutely. Okay, so for those who are following along, real quick: you said there are three modes of learning, three layers. There's trial and error, randomly trying something; there's classical conditioning; and then there's imagination and simulation. I want to talk about points four and
five real quick. These are points about the philosophy of machine learning. The first, which we'll come back to later in this episode, is that in a neural network, a concept, or concepts plural, like dogs and cats, is stored distributively, like a constellation of stars, spread throughout the layers of a neural net. And multiple concepts, like a dog and a cat, will share a bunch of the same things: they have a lot of similar architecture and similar attributes. They've both got
two eyes, you know, and a mouth, but they're different as well. So in a neural network, as in our own brain, concepts are stored distributively, and they're also connected. Is there a better way to word that, Brit? You're doing a great job, by the way. Thank you. So, I like to go back to feelings. Let's teach forwards. We have feelings, and we can use any example, so let's use a tree.
When we both think about a tree, we simulate a tree, and that means a repeatable, unique set of neurons is activated in our brain. There's technology today that can actually get a sense of what you're thinking just by looking at your neurons. Why that's important: a unique group of neurons, what is that from our perspective? It's a feeling, when we're feeling different things.
And this is the thing: there are two layers to feeling. I think about concepts as a mental feeling. The feeling of an apple versus a sponge: those are unique neural sets, and we just feel them as different feelings. So neural networks are storing the things we perceive as unique patterns. I like how you said constellation of stars, that's a good one.
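As a rough illustration of that constellation idea, here is a sketch of a distributed representation: each concept is a pattern of activity spread across many units, and related concepts share much of their pattern. The feature activations below are invented, not taken from any real network.

```python
import math

# Distributed representation: each concept is a pattern of activity across
# many units (a "constellation"), not one dedicated neuron per concept.
# These hand-made activations are purely illustrative.
dog  = [0.9, 0.8, 0.7, 0.9, 0.0]   # eyes, mouth, fur, four legs, leaves
cat  = [0.9, 0.8, 0.8, 0.9, 0.0]   # shares most of its pattern with dog
tree = [0.0, 0.0, 0.1, 0.0, 0.9]   # mostly disjoint from both

def cosine(a, b):
    """Overlap between two activation patterns (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine(dog, cat))    # high: overlapping constellations
print(cosine(dog, tree))   # low: very little shared structure
```

The overlap is the point: dog and cat reuse most of the same units, which is why related concepts end up connected in the network.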
Thank you, I used that in a comment on your YouTube. In fact, a quick story for our audience: I probably put 20-plus comments on Brit's YouTube asking a million questions about AI and consciousness, and you thankfully responded, which brings us to our conversation today. So yeah, go to the YouTube channel Art of the Problem; Brit is very responsive, and so are the other folks there. Thank you so much, I appreciate that. Now, that concept of distributed concepts, a concept stored in a constellation, is a recurrent theme as we talk about what machine learning knows, because there's a push to understand all those connections. It's very random at first, but as we understand it, we can say: okay, at this layer we're putting together textures; at this layer we're assembling the textures. That helps us know what machine learning knows, and even that itself is knowledge, which is helpful for other things.
One of my favorite sections here, after our background stuff, is examples of AI. There's an AI that identifies bread in a Japanese bakery that has now been repurposed, or I shouldn't say repurposed: a similar architecture has been used to identify cancer. How does it do it?
Because if we're able to peel back those layers and say, oh, okay, here it's identifying pixels in this one pattern, then we can ask: is that something that humans didn't know? We can now have that knowledge and use that same emergent pixel pattern on other cancers, or maybe on skin textures that aren't read as successfully by this AI.
What I'm trying to say is, we need to know how AI knows what it knows, or rather, if we figure that out, it'll help us to better design our AI. Would you say that's a good goal?
Yeah, it's called interpretability, and where it is now, there's a really close boundary in terms of what people can understand. When I dug into this, you can follow, say, the first two layers, but you always hit this point, I call it the magic zone, where our explainability just goes to zero. It's interesting. And so, the more we know, the more we know. Yeah, yeah.
It's a little scary, in one sense, that there's knowledge out there that we don't grasp yet; yet still, that knowledge is key to improving our own understanding, and even an overall theory of knowledge. Okay, so in this outline, I want to real quick shift to going over your first video. You have some fabulous diagrams of what's happening in basic brains, and then we have examples.
Some of my favorite examples from your first video: there's a bacterium that can only sense smell and can only either go in a random direction or a fixed direction; we talk about Venus flytraps; as well as, oh, I didn't get the name of the leaf, there's a leaf that curls up, but it can learn to not curl up. Can we pull up the diagram, Allegra? By the way, my producer today is Allegra; she's in the back room pulling up all the images. How are you doing, Allegra?
Yeah, Allegra, you're awesome. Can you pull up the sense-action diagram? I think it's the second image in the folder. Okay, well, there's a... oh, not that one. Oh, that's a cool one, though. And that's a cool one too, keep them coming. These are the sense maps. That's my whole portfolio. Yeah, we have a whole portfolio of them, which is fine; these are great. That is one of the AIs that is learning about itself, I believe. Sorry, not learning about itself; it's drawing a picture of itself. Oh, there it is. There it is, okay. Simple, simple diagram, very simple diagram: there's a blue circle with goals, and then it has an action and a sense. Would you mind explaining the simplicity of this diagram?
Sure. So in the middle there, and I know we might be audio-only, there's just a circle representing, call it an entity, which has some goal. The goal, just to be clear, could be survival, could be something else, could be a sub-goal of that. And it can sense things, perceptions, that's the arrow going in, and it can act, sometimes by changing its body in some way. And what I've done here is loop the line around, so the action loops back and becomes part of its sense.
And that's a simple but important idea. Actually, looking at it now, it's so simple that I lose sight of why I drew it. It's all good. All right, thank you so much; you can go back to the main cameras now. Awesome. So what I love about this is the simplicity, because the question in your videos is: what is the simplest brain out there?
And we were talking, I forget if it was on Twitter or X, whatever it is, about how sense folds in on itself into action: if something is able to smell something and then choose an action from that, that is essentially the most basic type of brain. It's something that responds to its environment. One example I've heard brought up is a thermostat.
You could look at a thermostat as a kind of brain: a coil that, in response to hot or cold, expands or contracts. So that's one example, but I like your examples better. The first one you gave, as we said earlier, was: imagine a bacterium, a single-celled organism that only senses smell. And this baffles me, because at a philosophical level, what does it mean to smell something?
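Here's a minimal sketch of that sense-action loop, using the thermostat example: a fixed rule maps sensation to action, and each action feeds back into what is sensed next. The numbers and the rule are invented for illustration.

```python
# The simplest "brain": a fixed sense -> act loop, where each action feeds
# back in as part of the next sensation. Entirely illustrative numbers.
def sense(world):
    return world["temperature"]

def act(sensation):
    # Fixed action, like a thermostat coil: no learning, same response forever.
    return -1 if sensation > 20 else +1   # cool down if hot, warm up if cold

world = {"temperature": 25}
for step in range(6):
    action = act(sense(world))
    world["temperature"] += action        # the action changes the next sense
    print(step, world["temperature"])     # settles around the set point
```

The loop from action back into sense is the whole diagram: the entity's own behavior becomes part of what it perceives next.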
It's a very hard sensation to pin down. But from an evolutionary standpoint, is that the first sense that ever evolved, to your knowledge? For smell specifically, I want to say yes, but I'm not 100% sure. What I will say is that all senses are the same: they're just energy levels that hit a sensory neuron. Cool. Yeah, I think that's a great way to think of a sense: you pick up information about your environment and you make a choice based on it.
And in a bacterium, one cell, it can just smell, and then it can either move randomly or in one direction, and from that it can find a food source. Yeah. And what's cool about senses like smell is that it's kind of like predicting something that's about to happen, which is quite advanced. The other, more primitive sense I'm aware of is just physical contact, and now that I think of it, that likely came first, right?
And we have examples of that: you touch coral, you touch a flytrap, it knows when something hits it. Smell is sensing something before it happens, which is neat. Yeah, yeah. A big question that we're going to explore further in this podcast is: how do our senses work?
A long-term project is: if you were to make an AI that had many neural nets integrating a bunch of different senses, how could we approximate, you know, mammal behavior to some degree? That's why I ask this question now. Now, real quick, I want to mention something very important here. This, without any more complexity, this very, very simple brain, this one diagram, one loop: it's a fixed action. You use the words fixed action in your video. That means it can't change.
Something like, and I think the example used is a Venus flytrap, or any kind of trap: if you stimulate the sense on a Venus flytrap, it closes, and that's just about it. It'll always close, and it's always the exact same response. There's a slight evolution from here when, and I think this is the example you gave, let's say you've got a mutation, and some more connections pop up inside that brain, some more wiring. And there's a diagram for that as well.
Allegra, if you could pull up the second diagram, or I'm sorry, I think it's labeled the third diagram; it looks just like this last one, but it's got more red lines. Ah, look at this one, this one. Look at this diagram. I'm thinking about a very, very early neural network.
You may have a different diagram in your videos, but I was thinking of those red lines on the inside: basically, in a neural network, there's a bunch of possible pathways, and not every sense will have the same result, and it can change over time. Thank you so much, I appreciate that.
And do you remember the example you used: there's a plant where if you touch it, it rolls up, but eventually, if you're not a threat, it learns to not roll up at certain sensations? Do you remember what plant it was? Yeah, and so it's good to repeat what machine learning is really doing: it's learning how to act, and how to act means, given an input, what's the output? I just like to repeat that, because things can get confusing really quickly for anyone new to this. How to act: in, out.
And so I was looking for examples where the connection between input and output can change during life, which is a huge, huge advantage. And the way it first changes is not through new connections growing or anything; it's actually just turning down a connection, inhibiting it gradually. And this is so cool: it's where habituation comes from. It's just not doing something as much. That's kind of like the baby step of in-life learning.
And this is a leaf that, after a while, realizes that getting hit by water is not a bad thing. Yeah, it's funny, it's all in how you explain it, right? You know, I've had a whole lot of philosophical conversations with people about whether plants are conscious, no offense to hippies or anything. And with my background I would think: no, a plant's not conscious, there's no brain.
Yet if we just break down the single task of learning through this slightly more complex internal wiring: yeah, it just doesn't curl up as much when it doesn't die, or isn't threatened. And it's not that it understands what you are specifically; it is just able to build an understanding by virtue of still existing and being healthy. The sensation to curl up weakens over time. That's all it is.
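A minimal sketch of habituation as Brit describes it: nothing grows, one existing connection just gets inhibited a little each time the stimulus turns out to be harmless. The decay rate and the stimulus here are invented.

```python
# Habituation: the earliest in-life learning is not growing new connections,
# just gradually inhibiting an existing one. The numbers are invented.
curl_strength = 1.0               # how strongly the leaf curls when touched

def touch(harmful):
    global curl_strength
    response = curl_strength
    if not harmful:
        curl_strength *= 0.8      # harmless stimulus: inhibit the reflex a bit
    return response

for raindrop in range(8):
    # The response weakens with every harmless raindrop.
    print(round(touch(harmful=False), 3))
```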
Yet we could still say, if you choose to word it this way, that it gets used to you touching it, or it gets used to raindrops and no longer curls up. So it's fascinating, it's really fascinating. Wow. In the next section, I talk more about conditional learning, but I think we touched on that pretty well already, where we talked about dogs and bells ringing and salivating. You do talk in the video about... again, sorry, just to pause. Oh, sorry, just to interject there.
Because this is an example where I would get really confused: okay, you've got the one Pavlov experiment everyone knows about, but there are so many experiments, and you can think you need to know them all. And if you're thinking from the top down, the human brain is so complicated, you get lost very quickly. So I'm glad you moved on, because really, all you have to know is: can a connection change in life? I don't care about the context; we'll get all confused thinking about context.
It's: can a connection change? Yeah, and that's basically it. And then, in other examples in this video, you move on to abstract thought in human brains and all that. It all comes down to these basics, and just through mutations in the hardware, we get human brains as we know them today. All right, so what I'd actually like to do is move on to your video series on machine learning. Now, this is one where I didn't include a whole lot of videos in this outline.
There were so many videos to choose from; I only got a sampling here, and part of me wishes I had more. But I'd like to talk a little bit about the early research into machine learning, and some of the clumsy early models of a neural network, ones where you had to manually change each dial. I'm hoping you could give us a quick preview of the early research and those clumsy early models. Sure. Yeah, I'm glad you brought up the dial.
So the original dream was: let's make a mesh with just neurons and wires connected. And a neuron, in electrical terms, is like a transistor: if it gets enough energy, it turns on. So you need something to be your neuron, and for that you can basically use a transistor. The only other thing you need is connections. But we need to be able to change the connections. And so this is why I used, and I made this up,
I don't think they actually used this, but I used a dimmer switch to hint at the idea: a dimmer switch is a variable resistor, which allows you to change the strength of a connection electrically. That's all you need. Then you have to give it, again, machine learning, all of learning: input, output, how to act. And the first experiment was so great. Rosenblatt used something like 50 neurons all connected together, thousands of wires.
He would draw, on a Lite-Brite-style screen, very low resolution, a circle or a square, and then have the machine learn circle versus square. And how do you learn? We've got to give it some experience. So he'd draw a circle and put it through an initially random mesh of connections. And I need to make that super clear, because there's not much more to machine learning once you get the core right. A random mesh of wires? Well, it doesn't work at all at first. What does "not work" mean?
Well, the output is forced to be one of two things, and we can call them circle or square; in this case, that's what he wanted. But it does nothing at first. So you put a circle into this machine, and what happens with the two light bulbs at the end? The machine doesn't know anything; they're both kind of lit up, just randomly. And so it has to learn. What's learning? Well, in this case, the human kind of cheats a bit: we show it a square.
It does some random thing that doesn't work. Then, one by one, we go through and wiggle each dimmer switch, and sometimes going one way will help. Meaning: I'm the human, and I'm going to call the top light bulb "square" and the bottom light bulb "circle." Any time I put a square in and I do a wiggle, if the right light bulb gets brighter, I keep that wiggle; and if it doesn't help, I go the other way. And you literally just do that through all the neurons.
Keep doing that, and you hit a point where you can put circles and squares into this network and it doesn't need you to do any more wiggling. That's the point at which it has generalized, which means it doesn't just work on what it was trained on; most importantly, it will work on new circles and squares you draw, different people with different handwriting, different pixels, basically.
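A minimal sketch of that wiggle-each-dial procedure: perturb one weight at a time and keep the change only if it reduces mistakes. The tiny "images" and labels below are invented, and Rosenblatt's actual perceptron used a more direct update rule; this sketch only captures the trial-and-error flavor described above.

```python
import random

# "Wiggling dimmer switches": nudge one weight at a time and keep the nudge
# only if the machine makes fewer mistakes. Tiny 4-pixel "images" plus an
# always-on bias input stand in for the low-resolution screen; invented data.
examples = [([1, 1, 1, 1, 1], 1),   # filled pattern  -> "square" bulb
            ([1, 0, 0, 1, 1], 0),   # sparse pattern  -> "circle" bulb
            ([0, 1, 1, 0, 1], 0)]   # last input is the constant bias

weights = [random.uniform(-1, 1) for _ in range(5)]  # random mesh at first

def output(pixels):
    return 1 if sum(w * p for w, p in zip(weights, pixels)) > 0 else 0

def mistakes():
    return sum(output(pixels) != label for pixels, label in examples)

while mistakes() > 0:
    i = random.randrange(5)              # pick one dimmer switch
    before = mistakes()
    nudge = random.choice([-0.1, 0.1])
    weights[i] += nudge                  # wiggle it
    if mistakes() > before:              # made things worse: undo the wiggle
        weights[i] -= nudge

print("trained:", [round(w, 2) for w in weights])  # no more wiggling needed
```

When the loop exits, no further wiggling helps on the training examples, which is the stopping point Brit describes; whether it then generalizes is tested by feeding it patterns it has never seen.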
That was the very first half of my interview with Brit Cruise. In this first half, we focused on what intelligence is and examples of intelligence in the natural world: different kinds of brains and that sort of thing. The next half of the interview, which will air next week at the same time and place, is all about machine learning specifically, and the architectures in artificial intelligence that allow it to be so successful: things like attention and transformers.
So next week is all about the artificial intelligence side. Stay tuned; we'll have a great interview next week.