Welcome to Part 2 of a two-part interview featuring Brit Cruise, creator of the YouTube channel Art of the Problem. In Part 1, we explored examples of intelligence in nature, including how brains work in fish, mammals, reptiles, birds, insects, and humans. In Part 2, we'll explore how artificial intelligence works, with machine learning and neural networks, and how various architectures in machine learning
have allowed machines to think a little bit more like humans. The transcript for this entire episode is available to anyone who's interested by emailing us at breakingmathpodcast@gmail.com, and it will be posted shortly to our website at breakingmath.io. And for listeners interested in a commercial-free experience, all episodes are posted to Patreon for our $3 monthly tier.
We'll talk a little bit about the early research into machine learning and some of the clumsy early models of a neural network, ones where you had to manually change each dial. I was hoping you could give us a quick preview of that early research and those clumsy early models.
Sure, yeah, I'm glad you brought up dials. So the original dream was: let's make a mesh of wires, just neurons and wires connected. And a neuron, in electrical terms, is like a transistor: if it gets enough energy, it turns on. So you need something to be your neuron, and for that you can basically use a transistor.
And the only other thing you need is connections, but we need to be able to change those connections. This is why I use the dimmer switch, and I made this up, I don't think they actually used one, to hint at this idea: a dimmer switch is a variable resistor, which allows you to change the strength of a connection electrically.
That's all you need. Then, as with all machine learning, you have to give it learning: input, output, how to act. And the first experiment was so great. Frank Rosenblatt used something like 50 neurons all connected together with thousands of wires, and he would draw, on a very low resolution, Lite-Brite-style screen, either a circle or a square.
Then he'd have the machine learn circle versus square. And how do you learn? You've got to give it some experience. So he'd draw a circle and put it into an initially random mesh of connections, and I want to make that point super clear, because there isn't much more to machine learning once you've got that core idea. A random mesh of wires doesn't work at all at first. What does "not work" mean? Well, the output is forced to be one of two things, which we can call circle or square, and in this case that's what he wanted it to do.
It does nothing at first. You put a circle into this machine, and what happens with the two light bulbs at the end? The machine doesn't know anything; they're both lit up more or less randomly. So it has to learn. What's learning? Well, in this case the human cheats a bit: we show it a square.
It does some random thing that doesn't work, and one by one we go through and wiggle each dimmer switch. Sometimes going one way will help. Meaning: I'm the human, so let's call the top light bulb "square" and the bottom light bulb "circle."
Any time I put a square in and do a wiggle, if the right light bulb gets brighter, I keep that wiggle; if it doesn't help, I go the other way. You literally just do that through all the neurons, and you keep doing it until you hit a point where you can put circles and squares into this network and it doesn't need any more wiggling. That's when it has generalized, which means it works on what it was trained on.
But most importantly, it will work on new circles and squares you draw: different people, different handwriting, different pixels, basically. And that is what we're still doing today; nothing has really changed.
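The dial-wiggling procedure Brit describes, perturb one connection at a time and keep any change that reduces the error, can be sketched in a few lines. Everything here is hypothetical: the 4x4 "screen," the crude square and circle patterns, and the margin-style error function are stand-ins for Rosenblatt's hardware, not a reconstruction of it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Crude stand-ins for drawings on a low-resolution screen (hypothetical 4x4).
square = np.zeros((4, 4)); square[[0, 3], :] = 1; square[:, [0, 3]] = 1  # hollow border
circle = np.zeros((4, 4)); circle[1:3, 1:3] = 1                          # blob in the middle
data = [(square.ravel(), 0), (circle.ravel(), 1)]   # bulb 0 = square, bulb 1 = circle

weights = rng.normal(size=(16, 2)) * 0.1            # one "dimmer switch" per wire

def bulbs(x, w):
    return x @ w                                    # brightness of the two output bulbs

def error(w):
    # A drawing counts as wrong until its own bulb is clearly the brighter one.
    return sum(max(0.0, 1.0 + bulbs(x, w)[1 - y] - bulbs(x, w)[y]) for x, y in data)

step = 0.05
for _ in range(100):                 # sweep over every dial, many times
    for i in range(16):
        for j in range(2):
            for delta in (step, -step):
                trial = weights.copy()
                trial[i, j] += delta                # wiggle one dial...
                if error(trial) < error(weights):   # ...and keep it only if it helps
                    weights = trial
                    break

assert error(weights) == 0.0         # both drawings now light the right bulb
```

In Rosenblatt's actual perceptron the keep-the-helpful-wiggle rule was replaced by a fixed weight-update formula, and modern networks compute the helpful direction for every dial at once via gradients, but the keep-what-helps loop above is the same basic idea.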
That answers one of the questions I had while watching your videos: theoretically, you could build a neural network out of almost anything. With a bunch of breadboard pieces and manual switches you could lay one out on a table, and yes, it would be very clumsy to go over and manually change everything. That's one of the reasons machine learning didn't really take off until the hardware caught up. I think it was around 2009, when graphics cards became available for this kind of computation, that was a major turning point for artificial neural networks.
That's right, because a graphics card can multiply a big chunk of numbers at once instead of one at a time. It wasn't anything special beyond a happy accident. Very cool. There are two videos from your series that we're going to watch here in a second. One of them is on deep learning and involves the paper folding analogy, and the other, from the same video shortly after, is on probing neural network layers.
If you've got a neural net that is trained to recognize images, what happens when you peel back every single layer and look at what's built up throughout? So without further ado, let's go and do it. Allegra, if you could play the first video; I think it's called "paper folding." [Video] Organic brains use layers of neural activations to process their inputs.
The importance of depth, or many layers, is the least understood aspect of neural networks, so let's pause and consider a simple analogy to understand why multi-layered networks are better at partitioning the perception space than a single-layer network. Imagine this is our perception space, and we have two kinds of input data types. Each neuron we add in the first layer acts like a fold in this space.
With two neurons we can make two folds, like this, and we keep going, folding and unfolding the paper to carve out regions that separate these points. This will take six separate folds. We can then group regions containing the same type of points using a final neuron, which activates if any of those regions are active. But now consider what happens if we layer our folds, that is, we don't unfold after each fold.
So let's do the first fold again, then the second, then the third fold across that layer, like this. That ends up carving the space in the exact same way using three folds instead of six. And if we continue this process, a fourth fold results in 16 regions. Practically, this means that neurons deep in a network are not simple linear partitions but are instead activated by a complex pattern of linear partitions.
Awesome, awesome. You may have noticed that, in order to fit all these videos into this interview, I sped them up to 1.5x, except the very last section, which went back to normal speed. But still, I think that was a fabulous video. Can you tell us a little bit about it and put it into even more layman's terms? Yeah, you got it. So when we're talking about neurons, they just have inputs and an output.
One way to think about it mathematically, let's use the example of temperature: say we're making a neural network that's just a thermostat. You set the temperature you want, your dividing line, and in the math world you're just plotting temperature on a line, like 60 degrees versus 62.
What is the neuron? It's actually just a divider. Think of it as a point on that line, or, in 2D space, if you have a couple of variables, say a pressure sensor also attached, it's a line, a partition. It's just dividing sets of points, which are senses (a sense, in mathematical terms, is just a point on a line or a point on a plane), into two categories. That's how it knows one thing versus another, and that's the essential distinction we build everything on.
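The neuron-as-divider picture above is easy to make concrete. This is a minimal sketch; the thermostat set point of 61 degrees and the weights are made-up numbers for illustration.

```python
# A neuron is just a weighted sum followed by a threshold: a divider.
def neuron(inputs, weights, bias):
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0     # which side of the dividing line are we on?

# One input: the divider is a point on the temperature line (here, 61 degrees).
def heat_on(temp):
    return neuron([temp], weights=[-1.0], bias=61.0)

assert heat_on(60) == 1   # below the set point: heat on
assert heat_on(62) == 0   # above it: heat off

# Two inputs (temperature plus a pressure sensor): the same neuron now
# draws a dividing line through a 2D plane instead of marking a point on a line.
print(neuron([60, 1.2], weights=[-1.0, 0.5], bias=61.0))
```

Everything a network does is built from stacks of these little dividers.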
So that's all a neuron is, and when you put lots of neurons together you get lots of partitions. What do you do with lots of partitions? Well, you can define complicated shapes, and that allows you to capture more complex concepts. This relates to a result in AI which says you really only need one layer to do anything: a neural network with an infinitely wide layer of neurons and all the connections going in and out.
In theory it can do everything, but in practice it doesn't work, because you can't make something big enough for real problems; you'd need too many partitions, too many neurons. The trick they found was layering neurons: instead of trying to build a network that's, say, 100 million neurons wide, you just use something like 10.
Then another layer connected to those 10, and another layer after that. The layers effectively multiply together, giving the same number of partitions with very few neurons. This sounds complicated, so if you're confused, that makes sense, which is why I needed a physical analogy. My physical analogy is this:
You fold a piece of paper, and that fold is a neuron. If you fold and unfold the paper each time, you have to do one fold per crease. But if you fold and then fold again, like that classic thing where folding a piece of paper a few dozen times would reach the moon, then when you unfold your paper, every crease you end up with is like a neuron. So with just a few folds you can get many creases.
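The paper-folding analogy can be played with numerically. Below, one "fold" of the unit interval is the map x to |2x - 1|; summing separate folds (fold-and-unfold) adds creases one at a time, while composing folds (folding the already-folded paper) doubles the creases with every fold. The specific maps and the piece-counting trick are my own illustration, not from the video.

```python
import numpy as np

def fold(x):
    return np.abs(2 * x - 1)          # fold the unit interval onto itself

def count_pieces(y, x):
    # Count linear pieces by counting changes of slope between samples.
    slopes = np.round(np.diff(y) / np.diff(x), 6)
    return 1 + np.count_nonzero(np.diff(slopes))

x = np.linspace(0, 1, 100001)

# Fold-and-unfold: three separate creases give only four linear pieces.
shallow = abs(x - 0.25) + abs(x - 0.5) + abs(x - 0.75)

# Fold the already-folded paper: three composed folds give 2^3 = 8 pieces.
deep = fold(fold(fold(x)))

print(count_pieces(shallow, x), count_pieces(deep, x))
```

That linear-versus-exponential growth in pieces is the whole argument for depth: each extra composed layer multiplies how finely the space can be carved.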
I think that concept right there is key to understanding some of the alchemy of machine learning: how conclusions are arrived at through the combination of neurons. Very cool. Next we've got another video, and coming up we're going to talk about a whole lot of examples of AI, which is one of my favorite parts of the conversation, and then AI in business as well, just to give a quick heads-up on what's left in this episode. So in this next video,
I don't know what AI this was, but it's something that's trained on images, and we're going to see what happens at multiple layers. Allegra, if you could play the third video; I sped this one up too, so it goes a little faster. [Video] The next layers are activated by different lines and textures, and deeper into the network these textures get more specific and complex, until we reach
neurons that are activated by entire objects, such as dogs, wheels, houses, or trees. These complex activation patterns are possible due to the layered structure of the network. So if we cut open a neural network, we'll find that deep layers contain representations of a perception based on what different things or patterns it contains, which is defined by how active those specific neurons are. Very cool. Yeah, I think that was a fabulous video.
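The probing idea in the clip can be imitated, very loosely, with a toy network: pick one deep neuron and nudge an input uphill until that neuron fires strongly, a crude version of the feature-visualization technique. The network here is random, not trained, so its "preferred input" is meaningless; the point is only the mechanics of probing a chosen neuron, and all the sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# A tiny random two-layer network standing in for a trained image model.
W1 = rng.normal(size=(32, 64))        # layer 1: texture-like detectors (pretend)
W2 = rng.normal(size=(16, 32))        # layer 2: combinations of layer-1 patterns
relu = lambda z: np.maximum(z, 0)

def deep_layer(x):
    return relu(W2 @ relu(W1 @ x))    # activations of the deep layer

x = rng.normal(size=64) * 0.1           # start from a faint random "image"
neuron = int(np.argmax(deep_layer(x)))  # probe the most active deep neuron
start = deep_layer(x)[neuron]

for _ in range(200):
    # Numerical gradient of the probed neuron with respect to the input.
    grad = np.zeros(64)
    base = deep_layer(x)[neuron]
    for i in range(64):
        xp = x.copy(); xp[i] += 1e-4
        grad[i] = (deep_layer(xp)[neuron] - base) / 1e-4
    x = x + 0.01 * grad               # nudge the input uphill...
    x = x / np.linalg.norm(x)         # ...keeping the "image" at fixed brightness

print(start, "->", deep_layer(x)[neuron])   # activation before and after probing
```

In real feature-visualization work the same ascent is done with exact gradients through a trained model, and the optimized input is rendered as an image to show what the neuron has learned to detect.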
Where did you find that example, if I may ask? I believe that visualization is probably from the work of Chris Olah; look him up. He and the journal Distill have done a lot of great work visualizing the innards of neural networks. OK, I may ask you for that link, and I'll include it with this video as well, if anyone wants to see more visualizations of neural networks. I found some amazing ones myself. I'm a visual guy, so they were very, very helpful.
In the next part of this podcast, I have a section where I want to talk about some of our favorite examples of AI, and why, as well as some of the examples brought up in your videos that we haven't discussed yet. Do you have any favorite examples of AI in use in industry, entertainment, or elsewhere? What I find most impressive is not any specific use, but the pattern of what's happening.
To put it simply, it can now handle such messy, unstructured, noisy input that all we need to interface with computers is utterances and gestures, one or the other. With the simplest utterances or the simplest hand-waving, the machine has everything it needs to do what it needs to do. So whether that means there's new graphic design software where I just kind of go, "make an ocean,"
and even if the sound's all fuzzy when I say it, and maybe I pronounce it wrong, and I say "make the wave look like this" and just swish my arm around, it's going to output a beautiful painting. I remember as soon as this was possible, I made some one year for holiday guests. And you see this again and again: simple utterances, look at ChatGPT, but also simple gestures, are all you need to interface with the machine.
And I think we're still getting our heads around that. That's amazing. I think of how, when a husband and wife have known each other for many, many years, you know just from facial expressions how the other feels about a certain situation. You know them so well, you know them even from the types of silence. Yeah, just the way people breathe. It's incredible how our brains can absorb just what they need to know.
Oh yeah, absolutely. There's a part at the very end of our outline where we talk about your talk at Google, and you had mentioned some really impressive AIs. I think one of them was a student you saw build a social media platform with AI in a very short time? Can you talk about that a little? Yeah, so this is an example of a woman who had essentially no coding experience, and you'll see this pattern everywhere now.
That's why this year is exciting. She said, "I want to make a Duolingo-meets-Netflix for learning languages." How the app works isn't so important; what's important is that she was able to prototype it in three days and be out walking around on the street, handing her app to people. Anyone who's tried to build anything knows that going from an idea to someone holding it on their phone and testing it never happens, or takes months or years.
It takes three days now, with some simple utterances. She was able to do that, and that's where we live now, so everything is being restructured around that truth. That is amazing. Okay, so she made a Netflix-meets-Duolingo. Do you have more details? I'm actually curious about the app now. I know, right? So traditionally you just get subtitles; she gamified the subtitles.
Imagine, no one had even thought of that. I don't have the app, but imagine some words are in different colors, or maybe questions pop up on top of the video. I don't know exactly, but it was this idea of gamifying subtitles, the way Duolingo so brilliantly gamifies things. Fascinating. And how would you build that traditionally? You'd need a couple of engineers, or one brilliant one, and a lot of time. But she didn't.
Okay, that is amazing. It worked. We talk about AI all the time, and a coworker of mine really loves the idea of AI avatars and has been trying to think of use cases for them. We thought it would be amazing to have some kind of entertainment, like a murder mystery, where you cast your audience as the investigators and they can interview AI witnesses, or AI ghosts of characters who have passed on, to get information.
And obviously, as an AI, you could load all this information into it, including some hidden knowledge, so that if you ask the right question with the right reference, it gives you a key piece of information. Then you'd see how successful distributed problem solving really is, because we've seen how frighteningly good fans are at figuring out what movie and screenplay writers are doing in mysteries. So that's one application that hasn't been done yet.
But I'd love to see it. That would be really cool. I wanted to bring up this section about AI specifically because it's the unique examples, I don't want to say strange, that really show the capability of AI, and those are the ones I find most interesting. The example I mentioned previously is the AI called BakeryScan, built for a Japanese bakery, a model of which is now used to identify tumors in medical scans.
What fascinated me about this AI is that in this Japanese bakery, two AIs were created. One was done through reinforcement learning over a period of five years by a team of programmers who really understood the bakery, and the other was done entirely unsupervised, through deep learning alone. At the original purpose, identifying baked goods in the bakery, they were about on par, about equal.
But then there's a critical task where one outperformed the other: learning new baked goods. If you introduce a new shape or a new recipe, a new item, how soon until your machine has learned it? What they found, and I've got my citations here, is that the supervised learning model learned it multiple orders of magnitude faster than the deep learning model. That, to me, was fascinating. I didn't see it coming.
I honestly thought the deep learning model would learn it much faster, but that wasn't the case. For that use case, I'm dying to know: why is the model trained by a human who knew the bakery inside and out, who knew what to look for, able to learn new things faster than the deep learning model? I don't have an answer, but those are the kinds of questions that are relevant right now. I don't know if you can answer that.
There's one potential confusion there, which is that both are deep learning, meaning they both use layers of neurons, but they use two different approaches: supervised and unsupervised. That's the key difference. What supervised really means is that the human provides the reward every time: you give it bread and say what it is, and give counterexamples alongside, versus unsupervised, where you let it learn that by itself.
In many cases, and still to this day, I call this narrow, siloed AI. When you use deep learning, and again, it is deep learning, but with humans providing the labels and the examples over and over, you get very high-performing systems, but they're not generalizable, so they can't do other things. That's why there's a cost-benefit tradeoff when you go to more general systems, which may fail at narrow tasks. I hope that was helpful. It is; you're right. And thank you for the correction, I always want to be corrected.
That's how I learn best. They are both deep learning; one of them is supervised. I also thought that, in terms of acquiring knowledge for a specific task, in this case it's a human-centric task: humans are the ones who make baked goods, and humans are the ones who eat them and evaluate what's good and what's not. Frankly speaking, we have insider knowledge on what makes something good for us.
That also is very helpful for something that's learning new tasks. I think that's an optimistic sign that humans will stay in the loop for a lot of tasks in the future; we won't be obsolete quite so soon. At least that's my assumption; I don't know if you'd argue that one. I don't want to argue, but I do want to throw in one example I didn't tell you about. Again, is this a failure mode or a feature?
On a different cancer detection task, they had an AI looking at cells, and it could tell whether or not something was cancer. They gave it, say, two batches of data, had it learn, and it did better than a human. So, hey, you could stop there and say, wow, it's brilliant. But they looked into why.
And again, it learned how to act, where "act" means saying cancer or no cancer, with what we thought was a hack: it noticed that in all the cancer images in the dataset there was a ruler in the image, and in all the non-cancer images there wasn't. So what did it learn? It immediately learned: you see a ruler, it's cancer. That's a fun example of how it will learn whatever it needs to learn to do the task. So if you take that ruler out, well, guess what?
It's going to notice other subtle differences, and it will work. But I love that ruler example, because is that a bug or a feature? I would say it's a feature, but as humans we have to be careful with our datasets; that's the key. And when we try to hand-engineer the dataset, I'm also very skeptical of that; that's the humans wanting to get back at the controls, and I'm usually like, I don't know if that's the way forward.
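The ruler story can be reproduced in miniature. In this made-up dataset, three weakly informative "real" features sit next to one "ruler" feature that perfectly matches the label during training; a simple logistic-regression classifier (standing in for the network) leans hard on the ruler, and its accuracy drops toward chance when the ruler is removed at test time.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 400
labels = rng.integers(0, 2, n).astype(float)
real = rng.normal(size=(n, 3)) + 0.3 * labels[:, None]  # weak genuine signal
ruler = labels.copy()                                   # spurious, but perfect in training
X = np.column_stack([real, ruler])

# Train a logistic-regression classifier with plain gradient descent.
w = np.zeros(4)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - labels) / n

print("weight on the ruler feature:", w[3])
print("weights on the real features:", w[:3])

# Take the ruler away at test time: the model falls back on the weak signal.
X_test = X.copy()
X_test[:, 3] = 0
preds = (1 / (1 + np.exp(-X_test @ w))) > 0.5
print("accuracy without the ruler:", (preds == labels).mean())
```

The classifier isn't broken; it found the cheapest divider the data offered, which is exactly the point of the anecdote.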
So, obviously, the lesson here is to be very, very aware of your dataset. Even if you're doing the unsupervised deep learning model, be very aware of what you're training it on and what you're not, what's missing. For our purposes, that requires a lot of awareness. Oh, and I keep being told that we are down to under 10 minutes. I want to ask my producers if we can extend it a little bit, if that's all right.
Mark, is that okay if we extend a little past our 10-minute deadline? Thank you, sir, I do appreciate that. Awesome, because this conversation is just so fun. One other thing I'll bring up: there are countless examples we could talk about, even the history of chess and AI. We know that early models were programmed by encoding what we thought of as the best human tricks into the AI; later models said, no, let the AI figure out how to play chess itself.
And that was extremely successful. From the last I read, which was from 2020, so almost four years out of date now, in some cases the best chess players were combinations of humans and AI for certain tasks. I don't remember the parameters; I only saw a bunch of press releases in mainstream media. But have you heard about humans and AI working together on certain tasks, and how successful they are? Yeah. Broadly speaking, it never hurts.
But again, it's tempting to say, oh, that means the human was the reason why, and I'm like, no, it's actually just that state-of-the-art AI plus a human doesn't get worse. That's not so amazing to me, but it is the case. Yeah, for sure. Okay. For this next section, given our time limit, I'm trying to prioritize what we talk about in the rest of the conversation, because this whole conversation is fascinating, so I may jump around a little.
Are there any interesting, complete failures of AI that you'd like to talk about? Maybe just the one I mentioned earlier. I want to say it's a lack of awareness of itself, but I don't know how strongly I feel about that right now, so I'll just describe the current boundary of what it can't seem to do. On the one hand, people say it doesn't have a world model; it doesn't really understand the physical world, because it didn't grow up crawling around and getting dirty.
So that's the boundary. With language, it can sound intelligent, but people's argument, and this is not my argument, is that when ChatGPT says "water," it hasn't felt water. Fine, but that is currently the edge of research, and it's amazing. I sent you one video of just teaching a simple robot to walk in the grass.
We went from that being practically impossible, to taking 20 weeks, to two hours, with this thing babbling around learning to walk. So that video was a cool rebuttal to the supposed failure. Yeah, right? It was absolutely incredible.
Actually, the way that came about, since you mention how quickly that robot learned to walk: I tried to argue on the side of humans. I said, wait a minute, AI isn't as good as humans at everything, and I showed some videos of the awesome robots from Boston Dynamics. These robots do amazing gymnastics and cartwheels and all these scary things. You look at them and think, oh no, it's like the Terminator; these things are better than us.
But in those specific examples, those robots have a very hard time with tasks like sitting in chairs of different sizes or walking on a surface that isn't flat. They're clumsy, and they fall down if they don't perfectly understand their environment. So I naively thought that was an example of where AIs still have a long way to go, and I sent it to you, Brit, and you just came back with a video, like, ah, not so fast.
As you said earlier, in a period of two hours a robot was able to learn how to walk on all kinds of different surfaces, just because it was trained on that task with modern machine learning methods. That was incredible. And one of my questions is: is this like an artificial brain? One of the things I've been emailing you a lot about lately is comparing digital learning in a neural network with learning in a biological brain.
Because we can learn to play chess, and I imagine there's some rewiring involved; I think there's rewiring, though I'm not sure how much actual physical rewiring happens in our brain. And I think about the power consumption of our brain versus a machine, and what the limitations are. A machine can learn in two hours, but we're fighting against the fact that our rewiring is not just a digital thing.
It's actually a physical thing, so it's more like a muscle. This is where my neuroscience lacks; I don't know if you know a lot about the actual human process of learning and the pruning of neurons. How is that different from machines? I'll give two examples on this power consumption question, because of course evolution has found a million awesome hacks.
One is that neurons in the brain aren't simply on or off all the time; they send spikes at different rates. Fast is on, slower is off, and the rate can vary continuously. On the one hand, that's smart because it only spends bits of energy on occasional spikes rather than staying on; but it also gives the ability to send continuous signals, which can carry much more information. So it's always: can you do more with less?
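The spikes-at-different-rates idea can be sketched as a toy rate code: a continuous value between 0 and 1 travels over a binary wire as the probability of a spike at each time step, and the receiver recovers it by averaging. The step count and the encoding scheme here are invented for illustration, not a model of real neurons.

```python
import numpy as np

rng = np.random.default_rng(42)

def encode(value, steps=10000):
    # Spike with probability equal to the value: faster spiking means "more on".
    return rng.random(steps) < value

def decode(spikes):
    return spikes.mean()              # the spike rate carries the value

signal = 0.73
spikes = encode(signal)
print("spikes sent:", int(spikes.sum()), "out of", spikes.size, "steps")
print("decoded value:", decode(spikes))
```

A plain binary wire can only say on or off at each instant; the rate code sends a continuous quantity over that same wire, and energy is spent only when a spike actually fires.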
Another one, which in machine learning we call pruning: as we know with the human brain, you could go in there and snip random neurons, and often nothing happens. It's only when you damage a big chunk, billions of neurons, that something breaks, and even then sometimes nothing happens. It's incredible: you'd have to torch your brain and pull chunks out for it to fail, unless you happen to hit one perfect neuron that's critical.
So pruning, as we do it in machine learning, means that instead of having a fully connected network where everything is connected to everything, you just randomly start deleting connections until it stops working. The result is called a sparse network, and you can get by with a very sparse network. Nature is doing those things and many more, things we don't understand, but it's doing more with less power. It's really cool. Yeah, that's incredible.
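The snip-random-connections experiment is easy to simulate. Here a random, untrained two-layer network stands in for a trained model, and we delete growing fractions of its first-layer weights and measure how similar the output stays; the sizes and fractions are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

W1 = rng.normal(size=(32, 16))        # dense first layer: every input to every neuron
W2 = rng.normal(size=(1, 32))         # readout layer
relu = lambda z: np.maximum(z, 0)
X = rng.normal(size=(200, 16))        # a batch of random inputs

def output(w1):
    return (relu(X @ w1.T) @ W2.T).ravel()

dense_out = output(W1)
corrs = []
for frac in (0.25, 0.5, 0.75):
    mask = rng.random(W1.shape) > frac           # snip this fraction of the wires
    corr = np.corrcoef(dense_out, output(W1 * mask))[0, 1]
    corrs.append(corr)
    print(f"snipped {frac:.0%} of connections, output correlation {corr:.2f}")
```

In practice, pruning usually removes the smallest-magnitude weights of a trained network rather than random ones, but the finding is the same: heavily sparsified networks keep working surprisingly well.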
Wow. There's a whole cluster of topics in this area. One thing I also thought about is that we exist for, I don't know what the average human lifespan is, but we obviously shuffle off our mortal coil; we die at some point. But we also have kids; we can propagate our DNA, and through mutations we can get new hardware. That got me thinking about new hardware for a neural network.
Right now, a neural network is fixed on a server; there's no real talk of a neural network itself producing new hardware. But I don't know how far along we are. That's where it gets scary: a neural network could be aware of its own hardware and design its own next iteration. So maybe we're not that far off. I don't know.
Have you ever thought about a neural net designing its own next iteration, kind of like the Avengers movie, Age of Ultron, where Ultron kept designing his next iteration? Yes. This is called meta-learning, and the results are amazing; of course, it helps and it works. A lot of work has been done on going up to the meta level, where the neural network changes its own structure to become more efficient.
Okay, and it's cool; check it out: meta-learning. Meta-learning, let me add that to this outline, because I need to add that. New hardware; we're going to add that to our outline too and do a lot more on this topic. By the way, I'm going to take a quick little break here. Obviously, my passion right now is machine learning, and I'd love to be involved in machine learning research. We have a community that I've just started.
There's a Discord server for the Breaking Math podcast, and there's also, oh gosh, what's the app that businesses use to collaborate? My wife just told me to download it and use it for this podcast. Oh, Slack, thank you. That was kind of funny, of all things to forget the name of. I just created a Slack, and it's either Breaking Math Pod or Breaking Math Podcast.
We're going to post interesting papers on both Slack and Discord, along with interesting concepts to explore further, and talk about what will be on future episodes. We'll also provide the papers and resources we discuss, so you as a listener can get smarter, smarter than we are right now, by reading along. I'll also post Brit's channel, and I found a professor who teaches data science and machine learning and just put out a textbook.
His Twitter, or X, handle is Eigensteve, and he has an entire YouTube series with hundreds of videos where he goes through his machine learning textbook; it's phenomenal. Through his handle you can find his entire textbook for free as a PDF, and he also has all of his Python code, as well as his MATLAB code, for almost all of the examples in the textbook.
I only mention this because, for our listeners and viewers who really want to dive deeper, be part of the conversation, and see what's on the cutting edge, I want to make those resources available to you. It would be wonderful to publish something; I'm not opposed to collaborating on things or just sharing these ideas. When we first began this interview, Brit, I wanted to say I was very encouraged, because my background is more creative.
I come from a storyteller background, and I'm okay at engineering, but I've found people who are just amazing at traditional engineering tasks, and I'm not that way. I'm much more creative, at least historically, and I find that designing machine learning architectures is a chance for creatives to really shine. Can you talk about that, in the context of what you said to me before the interview started?
Yeah, well, any field can stagnate, regardless of what it is. And what we think of as fresh ideas, novel ideas, or just ideas that aren't common, what does that mean? Often, experts know so much about a certain area that it can blind them.
It doesn't mean they're not creative, because you can have experts who are both, but it also means non-experts are almost as important as experts, because they look at something new and ask: why isn't it this way? Why don't you turn it around? Generative networks are just a neural network run backwards; that's one example.
Actually, when you go through the recent discoveries, from Ian Goodfellow onwards: as he tells it, he was out with his friends having some beers, and someone said, what if we had two networks compete with each other? And that led to this whole explosion in image generation. Being creative, I used to think of as either embracing unknowns or being good at being around unknowns. It's nice when you're new to a field and you're excited about the unknowns.
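That "two networks competing" idea can be sketched in a deliberately tiny form. This is an illustrative toy, not Goodfellow's actual experiment: a one-parameter-pair generator g(z) = a*z + b tries to mimic "real" numbers drawn from a made-up target distribution, while a logistic discriminator D(x) tries to tell real samples from generated ones, and each is nudged by hand-derived gradients.

```python
import numpy as np

# Toy sketch of the adversarial idea (illustrative only; not
# Goodfellow's original setup). Generator: g(z) = a*z + b.
# Discriminator: D(x) = sigmoid(w*x + c). "Real" data ~ N(4, 1.5).
rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

a, b = 1.0, 0.0            # generator parameters
w, c = 0.0, 0.0            # discriminator parameters
lr_d, lr_g, batch = 0.1, 0.02, 64

for step in range(3000):
    real = rng.normal(4.0, 1.5, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b

    # Discriminator step: ascend log D(real) + log(1 - D(fake)).
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    w -= lr_d * (np.mean((d_real - 1) * real) + np.mean(d_fake * fake))
    c -= lr_d * (np.mean(d_real - 1) + np.mean(d_fake))

    # Generator step: descend -log D(fake), i.e. learn to fool D.
    d_fake = sigmoid(w * fake + c)
    a -= lr_g * np.mean((d_fake - 1) * w * z)
    b -= lr_g * np.mean((d_fake - 1) * w)

print(f"generator output centers near {b:.2f} (real data centers at 4.0)")
```

Even in this stripped-down version, the generator's offset b drifts toward the real data's mean purely because the discriminator punishes samples that look fake, which is the core of the "explosion in image generation" mentioned above.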
If you're excited about the unknowns, that can be very useful. Nice. Nice. One thing I also want to mention is that you said before our episode today that you have some future videos you're working on with IEEE. Is there anything at this stage that you're able to tell us about? If not that, then other ideas for the future? Yeah. I make videos sometimes with the IEEE Information Theory Society.
We should be working on two this year. I think we're going to do one on quantum communication. For the other one, I've been bugging them that information theory has to say something about modern-day neural networks and the explosion in applications, because information theory is kind of at the core, even more core than all the applied research. There is information theory research happening, but we want to pull out what's called the information bottleneck problem.
You could think of it as: given a neural network, how much can you squeeze it before it stops functioning? This gets at your question of optimization and how nature would do it. Information bottleneck and quantum: those are the two videos I want to do with them. The video I'm also thinking about in my own time is the follow-up to my last video on ChatGPT, which I think is the end of the AI series, and it will be on reinforcement learning. Awesome.
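One crude way to get a feel for the "how much can you squeeze it" question, purely as an analogy and not the information-bottleneck method itself, is to take a stand-in weight matrix, keep only its top k singular components, and watch the layer's output drift from the original as k shrinks.

```python
import numpy as np

# Analogy for "squeezing" a network layer (not the information
# bottleneck itself): keep only the top k singular components of a
# stand-in weight matrix and measure how far the output drifts.
rng = np.random.default_rng(1)
W = rng.normal(size=(64, 64))     # stand-in for a trained layer's weights
x = rng.normal(size=64)           # stand-in for an input vector
y = W @ x                         # the unsqueezed layer's output

U, s, Vt = np.linalg.svd(W, full_matrices=False)
errs = []
for k in (64, 32, 8, 2):          # keep k of 64 components
    Wk = (U[:, :k] * s[:k]) @ Vt[:k]
    errs.append(np.linalg.norm(Wk @ x - y) / np.linalg.norm(y))
    print(f"rank {k:2d}: relative output error {errs[-1]:.3f}")
```

The error grows smoothly as the rank drops, which mirrors Brit's framing: you can squeeze quite a lot before the layer stops functioning, but there is a point where it falls apart.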
Actually, you sent me an email asking me about some resources. I think I gave you two. The absolute best quantum mechanics book I've ever read, and I've wanted to get the author on this show, is by Jim Baggott. His book is phenomenal. It's the history of quantum mechanics, going back to Max Planck, inspired by Boltzmann, and it goes all the way to about 2008. I believe it doesn't get far past that, but I've never read something as well written as that.
The other channel I'd love to give a shout-out to is Looking Glass Universe. The name has changed; it used to be just Looking Glass. Yes, she's awesome. She talks about the nuance of things, like how the collapse of the wave function is a very misleading term because there's no actual collapse. She's very much a many-worlds proponent, whereas somebody like Sabine Hossenfelder, who's also very, very good, doesn't specialize as much in this area specifically.
She offers some very, very good critiques of the many-worlds interpretation. So there are some great resources; I hope those are at your disposal. I would love to work with IEEE. If they ever need a deep-dive dialogue, oh man, if you know somebody, please send them my way. I haven't established a relationship with them, but that would be my dream. I'd love to work with IEEE. For our audience that doesn't know, IEEE, I sometimes say, is like the Vatican of electrical engineering.
Essentially, it's the Institute of Electrical and Electronics Engineers. If you pick up the nearest electronic thing you have and look close enough at the numbers in there, you're going to find something IEEE was part of creating. Yes. Yeah, I believe it. It's standards, standards underlying everything. They're incredible.
There are so many things on this outline that sadly we didn't get to, and I know we're running out of time here. I want to do one more video. There's one video that talks about art and language, and it involves the field of linguistics, which is a human activity. I'm hoping to play this video real quick. Allegra, can you play the TikTok from the creator Etymology Nerd?
There are actually a few of them, but this creator made a video on art and language, and I want us to watch it thinking about ChatGPT and AI's experience and engagement with language. That's the one. I don't think people realize that there is no difference between art and language. If you start with the earliest cave paintings, they do the same thing words do. They serve as symbols representing a concept.
Just like we can use the word ox to talk about the concept of an ox, this picture also communicates the idea of an ox. It's precisely because of the blurred boundary between art and language that writing systems developed in the first place. Eventually, people realized that an ox could represent more than just an ox.
So they started using stylized ox heads to represent sounds, and that evolved into the letter A. The same thing happened when the Impressionists and the Post-Impressionists realized they didn't have to paint reality as it is, but could abstract their representations of it. Then, slowly but surely, painters like Picasso realized they could push their representations further and further from reality.
Which is why I hate it when people look at a painting like Elaine de Kooning's abstraction of a standing bull and say, oh, my child could draw that. This is actually an anarchic critique of the relationship between symbols and meaning. And poetry does the same thing. This description of a pair of oxen doesn't describe the oxen as they exist in reality, but it serves as a representation that each of us will visualize slightly differently. At its core, art and language do the same thing.
They are inextricable, desperate attempts to capture pieces of our conscious experience, which is so beautiful. I don't think... Okay, awesome. Thank you. Watching that video, and watching all of your videos, I had this whole idea about how an image generator like Midjourney or DALL-E has been trained on these images, and its knowledge of any concept, like a cat or a dog, exists distributively in its head.
But just like humans, who at some point have the ability, or the desire, or the need to draw a minimalist depiction of the images in their head and have it be recognized: there's this key data captured in the drawing of a cat that multiple people can recognize. So I'm curious how soon it is until Midjourney or DALL-E is able to create minimalist pictures and then further abstract them into their own alphabet.
I'd love to write a paper on an artificial alphabet created entirely by an image generation network. What do you think about that idea? Yeah, it's a fun area to think about. I love information theory because communication at its core is how one mind influences another and how efficiently can you do that. So of course our language and art are the same. But if I wanted to tell you something and I'm going to draw a whole picture of it, that's going to take so much time.
So the more compressed you can get that representation, the better. Yeah, I've seen a bit of research on neural networks talking to each other in their own code or their own language, and it's a super fruitful area to think about, especially when we want to download our current mental states and recall them later. Yeah. Now, I want to recognize from our lovely producers that we are in a bit of a time crunch right now.
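Brit's point about compressed representations being better can be felt with any off-the-shelf compressor: a verbose "picture" of a concept takes far more symbols to transmit than a compact code for it. Here zlib simply stands in for any compressed representation; the sentence is made up.

```python
import zlib

# A verbose, redundant "description" of a concept versus its
# compressed code. The repeated sentence is an illustrative stand-in
# for drawing "a whole picture" instead of using a compact symbol.
verbose = ("a large horned bovine animal standing in a field " * 40).encode()
packed = zlib.compress(verbose, level=9)
ratio = len(packed) / len(verbose)
print(f"{len(verbose)} bytes -> {len(packed)} bytes ({ratio:.1%})")
```

The redundant description collapses to a small fraction of its size, which is exactly the efficiency pressure, how one mind influences another with as few symbols as possible, that Brit describes information theory studying.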
So I'm not going to go too much longer, but there are areas we haven't gone into yet. For example, I'll just give a little preview here: we didn't talk at all about the innovation called attention, or transformer networks, which was introduced in your last video on ChatGPT. So for our viewers, I recommend you go to Brit's channel and watch his last video on the history of ChatGPT.
When I first saw attention networks, and I saw a layer within the layers that links every neuron to the others, I thought, whoa, that network layer is self-aware. What I mean by that is it considers the information in each individual neuron and how it relates to other parts. And to me, I don't know, that seemed like an emergent self-awareness.
So then I thought, well, could you have more attention networks at higher levels, and would you have a higher level of self-awareness? And I'm intentionally using that term. At least that was the significance I took from it. Is that roughly what you would say, or am I talking woo-woo science at this point? No, you're on the right path. I'll just go back to our three layers of learning, that mysterious third one where you're learning by closing your eyes and imagining things.
Yeah, that's not growing new connections; attention is a way to create a kind of synthetic connection, a short-term one, as needed. That's what attention allows you to do. It's really neat. And so there's just a lot to do there in thinking about where it goes.
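That "synthetic, short-term connection" is visible in a minimal sketch of scaled dot-product attention, the mechanism transformers use. The token vectors below are made up; the point is that the weight matrix is computed fresh for each input, rather than being fixed learned wiring.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax along the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: every position queries every
    other position, and the softmax weights act as temporary,
    input-dependent connections between them."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = softmax(scores)        # one row of weights per position
    return weights @ V, weights

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))          # 4 made-up token vectors
out, w = attention(X, X, X)          # self-attention: Q = K = V = X
print(w.round(2))                    # each row sums to 1
```

Each row of the weight matrix says how much that position "looks at" every other position for this particular input, which is the sense in which the connections are synthesized on the fly rather than grown.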
Yeah. So that's the technical term, attention, as it's adopted and used in machine learning. I encourage our audience to watch Brit's video and ask yourself if that could be thought of as emergent self-awareness at an individual layer, and if you took that same concept to higher and more complex layers, what that might mean. Oh, that sends shivers down my spine. All right. Before we wrap it up, is there anything else that you would like
to talk about, anything you'd like to plug? Sure. This was a lot of fun. Outside of YouTube, the other thing I do is a company that created a product called Story Xperiential, that's Xperiential with an X, with my two partners, Tony and Elise. It's a kind of online peer-to-peer learning model I'm really excited about. Talking about what AI won't do: what it won't do is facilitate human-to-human creation. It won't get rid of the humans.
And so I'm really interested in this peer-to-peer learning model online, and I want to build other Xperientials in other domains as well. So check out Story Xperiential. Awesome. Awesome. Thank you so much. Oh, yeah, that was our last topic. We didn't even touch on businesses that have gone under or been seriously challenged by GPT. That's very interesting.
In short, I think those in our audience who are interested should really familiarize yourselves with AI and think about how you can build your business, and I'm totally stealing, not stealing, but riffing on your idea here: how can your business be community-centric so that your business model stays afloat even if AI automates certain things? All right. With that, thank you so much for coming on the show. I've had a blast.
This has been a journey, both with preparing this outline, but also just preparing the body of knowledge that is artificial intelligence and where we're at. One of the biggest challenges was to do this in a way that breaks it down for an audience that may not be familiar with it. I've tried very hard to do that. But again, we will continue this conversation. If you're on Slack, I'll put all the links in the YouTube video as well as in the podcast link.
Hopefully this conversation continues, through email, through Twitter or X, through Slack or Discord; all that information will be there, along with everything we mentioned, from the textbook by the professor who goes by Eigensteve on Twitter to everything else. So I think that's it. Thanks again, Brit, it's been a pleasure. I hope to continue our conversations as time goes on. Thank you, Gabe. It was a lot of fun. Yes, sir. Sure was.