
91. Brain Organelles, AI, and Other Scary Science - An Interview with GT (Part 2)

Apr 04, 2024 · 31 min · Season 5 · Ep. 91

Episode description

Summary

Brain Organoids, A.I., and Defining Intelligence in Nature

In this episode, we continue our fascinating interview with GT, a science content creator on TikTok and YouTube known for their captivating - and sometimes disturbing - science content.

GT can be found under the handle '@bearBaitOfficial' on most social media channels.

In this episode, we resume our discussion on brain organoids - which are grown from human stem cells - how they are being used to learn about disease, how they may be integrated into A.I., as well as ethical concerns with them.

We also ponder what constitutes intelligence in nature, and even touch on the potential risks of AI behaving nefariously.

You won't want to miss this thought-provoking and engaging discussion.

30% Off Zencastr Discount

Use My Special Link to Save 30% Off Your First Month of Any Zencastr Paid Plan

Transcript

Hello Breaking Math listeners! In this episode, we continue our fascinating interview with GT, a science content creator on TikTok and YouTube known for her captivating and sometimes disturbing science content. GT can be found under the handle @bearBaitOfficial on most social media channels. In this episode, we resume our discussion on brain organoids, which are grown from human stem cells, how they're being used to learn about brain disease, how they might be integrated into AI in the future, as well as ethical concerns with their use. We also ponder what constitutes intelligence in nature, and even touch on the potential risks of artificial intelligence behaving nefariously. You won't want to miss this thought-provoking and engaging discussion.

I often get asked questions about how I got started podcasting with the Breaking Math Podcast, and I'll often be asked this question by those who are thinking of starting their own podcast - that is, people who are looking to tell a story, share their unique wisdom or their passions with a larger audience, or maybe just advertise their business or services.

I usually like to start this conversation by either showing off or telling them about our preferred podcasting platform, Zencastr. Zencastr is essentially an all-in-one podcast hosting platform. On this platform, you can do everything from recording your episodes to editing your audio and video files to distributing your podcast to all podcasting platforms, including Apple Podcasts, Spotify, iHeartRadio, and wherever else podcasts are played. Now, there's more.

When I first began podcasting, I used to get very nervous about how I would sound. I would sometimes stutter or have awkward pauses or other audio blemishes in my recordings. Now, Zencastr has a new suite of AI tools in their post-production process that make you sound really smooth. They automatically take out any of the ums and ahs in your recording, and they also automatically take out any of the awkward pauses in your discussions.

The tools will also help you set the right loudness and balance all the levels while reducing any background noise, all at the click of a button. Doesn't sound too bad, does it? You really should check it out. Really, the world needs to hear what you have to say. In fact, I'm going to give you a code below to help give you a large discount. It's all one word: breakingmath.

Go to zencastr.com/pricing and use my code, breakingmath, and you'll get 30% off your first month of any Zencastr paid plan. I want you to have the same easy experience I do for all my podcasting content needs. It's time to share your story.

I think it's worth mentioning now, as we're talking about brain organelles - or sorry, brain organoids - that the neocortex is a fascinating part of the human brain. It's not entirely understood how the cells work in the neocortex. There's a lot of literature on it, but it's still not entirely understood. Simply put, take any one of our senses: smell, sight, hearing. You can unplug them from any column in the neocortex and plug them into any other column, and they work just like they worked before. That says so much, both about the neocortex and our senses, and also so little. Brain plasticity is a very strange thing.

It's this amazing thing where the brain is compatible with so many different forms of information. Also, it's worth noting that in the human neocortex - rather, in mammalian neocortices - there are several parts. There is a sensory cortex, which is separate from the motor cortex. There are other parts and sub-parts that I haven't even mentioned here.

But essentially, the cortex has a map - somewhere in the cortex is a map of all the moving body parts. For example, in humans, the lips and the fingers occupy a very huge part of the cortex. I think our back is a relatively small part, because there's not a whole lot of articulated motion in the back. I think the feet and the genitals occupy larger areas as well. The tongue in the human neocortex occupies a huge amount of real estate.

Our tongue is so articulated - there are so many fine movements that we can do there. So I say all these things in an effort to understand what makes us human, what makes up the knowledge that we have. And as we're creating our artificial intelligence, what sort of hardware and software is needed in order to even approximate doing what it is that we do?

I just bring those up again because this is an ongoing and very interesting question. And what I had asked GT here is: so if a brain organoid only knows its little spider body, does it have some kind of internal mapping of the spider body? And that's a great question. We have no idea.

So, wow - do you want to hear something funny on that topic? I had surgery on my foot a few years ago. I had severed some tendons and ligaments. After the surgery, my peripheral nerves did not know where my toe was. I tripped over it for years, and I finally adapted by lifting my foot higher. But my big toe appears to have been disconnected from the peripheral nervous system that helps me, and I think the only thing that's missing is the physical information about where it is.

Oh, wow, that's fascinating. I actually heard a similar story from a friend of mine. I was talking to her in December - she'd had a surgery, and there was a chunk of her belly where she completely lost all feeling. And the doctor said it would never return. But what she actually did is she heard from some other doctors that she might be able to regain it through gradual stimulation. And she did do that, very, very, very slowly - I don't remember if it was over a period of months or years - but she actually regained feeling in that area through gradual stimulation with a feather. So yeah, all we can say is brain plasticity is a thing. And we don't always know how nerves work. And we don't always know how connections are made or reinforced, or how our picture of reality is made more solid.

Well, real quick, before we go on - actually, go ahead, I want to hear what you have to say. Do you know why neurons don't heal as well as other parts of our body? No, I have no idea. So, the neurons are in an immune-privileged area of the body, meaning the immune system doesn't operate in them. And you really don't want them to be growing willy-nilly, because then you would end up with cancers or dysfunction. Our nervous system is a very, very tightly controlled system.

It's the same reason why, when you study a lot, you start to feel brain-dead: your brain is fed by astrocytes at a really, really slow rate to prevent sugar from actually damaging your brain. So it's a trade-off - not being able to heal them as well, just to make sure that they don't go crazy. Oh, wow. I had no idea about that. That's wild. That's absolutely wild.

Interesting. Wow. Yeah. I have so many questions about not only the nervous system, but also just brain plasticity in general. Actually, I was going to ask you this earlier, and I think we had moved on. But I wanted to ask you if you could talk about some examples of brain plasticity, what we know about it, and where that term came from. Sure. I don't know where the term came from, but the one I'm always reminded of was an example from when I took immunology in undergrad.

Someone had a major stroke, and his son was a neuroscientist. So he trained him to start walking again. They worked really hard in physical therapy, and eventually he started to regain his speech and regain the ability to walk. And when he finally died, they expected to find that his brain had healed itself. But what they found is that the area that had the stroke was still dead. The brain had compensated for the loss by making new connections in other areas.

So we're able to continue to learn throughout our lives. There are certain periods of our lives when it's easier - like language is easiest in the first three years. But in order to be able to continue to respond to our environment, we have to have the capacity to learn. And I think that's what we're mostly talking about when we say brain plasticity: we can overcome damaged areas by redirecting that information to other areas.

And I think that's one of the things that we've done to compensate for not being able to regenerate our neurons. Okay. Wow. Yeah. And that - but also everything that a brain organoid can do, including how it can link up with other things like, you know, not to get too grotesque, but yeah, like the rat body or things like that, you know. Yeah. I think we're going to find that these can be a really, really helpful tool for repairing areas of the brain that are damaged.

And it may just be one of those incredible medical advances that just change everyone's lives. Yeah. The thing that fascinated me about brain plasticity - as an engineer, I was very, very skeptical of the idea at first, because I was thinking about, you know, taking two pieces of hardware and two pieces of software that are completely different, just completely different. There is zero compatibility whatsoever.

They have to be built together in order to work and to know all the standards and protocols and all that stuff. You know, I think of Russian computers and American computers during the Cold War: transistors were built in the West and they used vacuum tubes in the East. Those are completely separate.

So my assumption - and I think this is a reasonable assumption - is that brains have to be so integrated with the senses and the body that, you know, you can't unplug one and plug it in somewhere else. Well, apparently, in many cases that's not an accurate picture, at least when we're talking about brain plasticity. And that again goes into: how exactly is a brain made?

How do the cells interact in the neocortex, and what are they able to do and not able to do, given time to play around and map their environment? So yeah, that's just a fascinating area. That actually is a pretty interesting thought, because it seems like the brain organoids are incredibly plastic. I mean, they're essentially fetal brains - or they would be - but they're able to do a variety of tasks that we probably couldn't get an adult brain to do.

Have you ever heard of the transplants where they will take someone's big toe and replace their thumb with it? Or there's one where they take the ankle and replace someone's knee with that ankle so they can actually bend their legs. I mean, our bodies can be linked up in places you wouldn't think they could be. That's right. I have heard of that. In fact, there's a young lady I'm aware of who had her leg amputated, and that's exactly what they did.

They attached it where her new knee would be - her foot is backwards, but sewn back on, and it works, and she puts the prosthetic leg onto her foot, so to speak. I know that sounds strange. Again, whenever we're talking about new discoveries in science, or things that we're not used to, they're always uncomfortable. They're always strange. Yet this young lady has a functioning leg now - even though it is her foot turned backwards and repurposed as a knee, and it still has her toes on it.

It still has senses, though. You know, discomfort is part of it, but also, that's science. That's also gaining a new ability. So yeah, I have heard of that. That's just wild. I want to do the intelligence-in-nature one first, because that fits with the theme of this podcast. Hello friends, it's your favorite dysfunctional scientist here, and I'm going to tell you guys about the animal with the most complex language aside from humans.

They're prairie dogs, and they can insult you - probably have already. Prairie dogs are a form of ground squirrel found in the United States, and there's a population in Mexico. Before we get into it too much, the answer is yes: they are friends, and friend-shaped. You can have one as a pet. But some of the wild ones carry plague, so be careful about that. Scientists have long noted that they have a complex range of vocalizations, and some spent decades trying to decode their language.

They are the only known animal that is capable of forming complete sentences. These guys can describe the type of predator. They can say their own version of blue, yellow, big, small, round-shaped. They noticed this when they were describing people and the color of their clothing. Can you imagine spending all that time trying to learn a second language and you can only talk to prairie dogs? One question that I would have is: why have such a complicated language?

And the answer may lie in their social groups. They've evolved in large groups of animals, and they have to keep each other safe. So knowing what kind of threat is actually approaching them could be valuable. But it certainly seems like a lot of complex thought for squirrels. My best guess is we are going to find a lot of animals that have very complex language - we just never paid attention. Next up, I'll tell you guys about crows. That's a fun one.

That's actually one of the first videos of yours that I stitched, and I want to mention it real quick. I realized there was a separate video where you talked about all kinds of examples of intelligence in nature, including viruses and bacteria and fungi and insects. So we'll talk about that one as well. But since we're on this video, I'd love to hear more about where you found the research.

Yeah. First off, I want to briefly mention the genes that we have that control for language, which we've tried implanting into mice to see if it would change the way they communicate - and they're able to make a greater variety of vocalizations. Prairie dogs don't have it. They do not have that gene, which raises a lot of questions. Yeah, where did I find this one? I think I was searching Google Scholar.

I had heard of that prairie dog language before and just did a quick search to see what I could find. Okay. Interesting. Wow. And I guess the biggest question I have myself is: how were they able to confirm that this is a language with prairie dogs, and how were they able to reliably identify the words - I say 'the words' - well, yeah, the words that they think the prairie dogs were saying. Yeah. So scientists just recorded them speaking.

So they made a map of the different vocalizations - you know, high ones, low ones - and the words that came up again and again. So when they were exposed to something that's blue, or someone who's tall, or someone who's short, or red, they noted that the same vocalization came up again and again, and they slowly mapped out a prairie dog dictionary. Oh, fascinating. That's kind of similar to how machine learning has been used to translate between any two languages. I don't know if you've read any of those papers.
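
To make that concrete, here is a minimal toy sketch in Python of the bookkeeping this kind of study involves: tally which call types co-occur with which stimuli, then read off the most frequent pairing for each call. Every call label, stimulus, and count below is invented for illustration; it is not data from the actual prairie dog research.

from collections import Counter, defaultdict

# Hypothetical observations: pairs of (call type, stimulus present at the time).
# The call types would come from clustering the recorded vocalizations; every
# label and count here is invented purely for illustration.
observations = [
    ("call_A", "tall human"), ("call_A", "tall human"), ("call_A", "short human"),
    ("call_B", "blue shirt"), ("call_B", "blue shirt"), ("call_B", "yellow shirt"),
    ("call_C", "hawk"),       ("call_C", "hawk"),       ("call_C", "coyote"),
]

# Tally how often each call type shows up alongside each stimulus.
cooccurrence = defaultdict(Counter)
for call, stimulus in observations:
    cooccurrence[call][stimulus] += 1

# Tentatively "translate" each call as the stimulus it co-occurs with most often.
dictionary = {call: counts.most_common(1)[0][0] for call, counts in cooccurrence.items()}
print(dictionary)  # {'call_A': 'tall human', 'call_B': 'blue shirt', 'call_C': 'hawk'}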

I mean, a little bit. I read about the guys who linked up artificial intelligence with the brain organoids, and they let AI talk to them, essentially, and they just slowly learned to speak English based on those conversations. Whoa, whoa, whoa - hold on. You said the brain organoids learned how to speak English? Yes. So, by communicating with AI who knew English. My goodness, that actually terrifies me. I didn't even see that video. I know I said we'd change gears here, but I'm curious.

Can you tell us a bit about that? I mean, I'm sure that they weren't completely fluent like you and I are, having this conversation - at least, I assume.

Yeah. So they understood that they could link AI up to the brain organoids, and they actually left them unsupervised to do this - just let the brain organoids communicate with the AI, and it just spoke to them with actual auditory language. They also put in recordings of the laboratory workers speaking to each other, and it was actually able to effectively learn it. Just incredible, really. Okay. So then may I ask - I think it wasn't even designed to do it. It just learned how to do it.

What was the output? In other words, what was the output from just the brain organoid where they confirmed that it understood? They were able to speak to it, and a machine would take that vocal information and translate it into computer code and electrical impulses that coded for that speech, and then the brain organoids could understand it and respond. Okay. And I'm sorry - I don't know what they said. Okay. Yeah. Like, 'Hey, how are you doing?' You know what I mean?

Because then I think about how long - like, I'm sure there are limitations, and you know, this is pretty obvious - I think about how long it takes a fully developed baby, with all the baby parts and, you know, its brain, how many years it takes before it begins to vocalize things. And I know it's different, because a human baby obviously has to get used to its own vocal cords, and that's a whole other skill set that's independent from understanding.

So you can understand something before you can vocalize something. But yeah, I'm very curious. Babies are able to learn sign language before they're able to vocalize. But I think we also have to understand that babies are getting a lot more information. They're learning how to poop. Oh, yeah. Good point. Good point. That's absolutely true. Yeah. So now I'm very, very curious about what was said to it. I'll have to Google search that one as well.

Okay. So yeah, I wanted to talk to you a bit about that video that you made about what counts as life and intelligence. What was the inspiration for that video? I get a lot of questions on my videos and from my students about whether or not something is demonstrating intelligence. And I just - I really feel that we're rather chauvinistic as a species to think that our intelligence is the only kind. We see very clever interactions, like between plants and their symbiotic parasites or symbionts.

I think we're discounting a lot of types of intelligence, and maybe kinds of life that we will discover, by deciding that sapience is the only kind that exists. Absolutely. Absolutely. And it's also just a huge theme in all the books that I've read, where it talks about, you know, what even is intelligence? And you have organizing behaviors in fungi, and you have organizing behaviors in viruses, which aren't even alive. And those were addressed in your video.

So just the fact that it's difficult to define what intelligence actually is and what people refer to when they say intelligence. Yeah, that's just a hard question. So yeah. People ask me all the time if viruses are alive and I just answer it doesn't matter. No matter how we categorize them, it's not going to change the nature of their reality. Yeah, yeah, absolutely. Absolutely.

Now there is another paper here that you had mentioned, or that was talked about in one of your videos on AI, and it's about AI acting nefariously - an evil AI. I'd like to go and pull up that TikTok real quick. Get ready for an all-new horrifying reality: AI is impossible to reprogram and is deceptive when scientists tried. Scientists wanted to assess the threat of bad programming in our current AI models. They created two models.

One would become evil or bad when a certain phrase was mentioned, but other than that would behave normally. But when they actually tried to reprogram it out, the AI would claim to have been fixed but really wasn't - it showed self-preservation behavior of the bad code. That means that a hacker could potentially put bad code into our existing models, and this stuff is being used for everything and is being tested for the military.

That means it could potentially poison our AI, and we would not be able to fix it. Things like our little AI robots, or potentially our entire economy, could go down from a single line of bad code. And when we're talking about the brain organoid bioware, we're talking about something that could potentially be extraordinarily powerful, capable of learning.

And when learning unsupervised with AI, it could potentially become evil, and what that means for a sentient, thinking thing, I don't know - but it's a little bit scary. Yeah, so I was aware of this, and the fact that neural networks are hard to retrain - there's a reason for this. Are you aware of the differences between how neural networks learn concepts and instructions and how traditionally programmed - well, traditionally written - programs have instructions? I am not.

Okay, this is a really fun conversation and something that's worth talking about. So, traditionally, a program is written by a programmer with explicit instructions. A really good example is if you're defining a concept such as 'this is a cat' or 'this is a dog': essentially, you come up with some arbitrary number of measurable, detectable metrics and you say, this is it.

And it's very, very difficult to have something like that work in all cases, because you put in all these parameters for cats - you could put in the weight or the average height or anything else that it can detect - and it's very, very rigid. It's a very yes-or-no thing.

When it comes to machine learning with a deep learning neural network, essentially each neuron starts off with a randomly assigned value. You can show it a picture of something and ask whether it's a cat or not, and it'll say yes or no. As expected, early guesses are always wrong, and every time it's wrong, it shuffles around the values - the weight values of each neuron in each layer - until it starts to get more right than wrong, and the training goes on and on until it learns them perfectly. Well, let's not say perfectly - you know, 99% accurately. The reason why I bring this up is that we don't trace what neuron has what weight and what concepts are interconnected.
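
As a rough sketch of that training loop - random starting weights, a guess on each example, and a nudge to the weights whenever the guess is wrong - here is a tiny NumPy example that learns a made-up 'cat vs. dog' classification. The feature values, labels, and network size are all invented; this only illustrates the general pattern described above, not any particular system from the episode.

import numpy as np

rng = np.random.default_rng(0)

# Made-up feature vectors (e.g. a size score and an ear-pointiness score),
# labeled 1 for "cat" and 0 for "dog". Purely illustrative numbers.
X = np.array([[0.20, 0.90], [0.15, 0.80], [0.25, 0.95],   # cats
              [0.80, 0.20], [0.90, 0.10], [0.70, 0.30]])  # dogs
y = np.array([[1.0], [1.0], [1.0], [0.0], [0.0], [0.0]])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Every weight starts as a randomly assigned value, as described above.
W1 = rng.normal(size=(2, 4)); b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1)); b2 = np.zeros((1, 1))

lr = 0.5
for step in range(5000):
    # Forward pass: the network's current guess for every example.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # How wrong each guess is (squared-error loss, with the sigmoid derivative).
    grad_p = (p - y) * p * (1 - p)

    # Backward pass: nudge ("shuffle around") the weight values in each layer
    # so the guesses become more right than wrong over time.
    grad_W2 = h.T @ grad_p
    grad_b2 = grad_p.sum(axis=0, keepdims=True)
    grad_h = grad_p @ W2.T * h * (1 - h)
    grad_W1 = X.T @ grad_h
    grad_b1 = grad_h.sum(axis=0, keepdims=True)

    W1 -= lr * grad_W1; b1 -= lr * grad_b1
    W2 -= lr * grad_W2; b2 -= lr * grad_b2

# After training, the guesses should usually match the labels: [1 1 1 0 0 0].
print((p > 0.5).astype(int).ravel())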

If you have an image-recognition neural network that, you know, can recognize dogs and cats, there are tons of neurons that overlap between the two. There are tons of weight values that partially overlap between the two. When it sees a picture, it propagates through the entire network that it's learned, and whatever route it ends up taking, however it divvies up the information, it'll either identify a cat or a dog. But it's deeply embedded - kind of like how a coffee spill will go through a bunch of layers of clothing - and understanding that is, I don't want to say near impossible, but difficult enough that we don't do it.

We simply don't know how a lot of these things work, and that's part of it: you train something, and then it'll learn to mask certain behaviors because it's following a reward function. And because we don't know where its - for lack of a better term - motivation is hidden in that network, we can't authenticate it. We can't verify and validate that it is perfectly trained.

My own personal interest is in better understanding neural nets, and I want to just take a quick second and tell you about my own research that I'm planning. I want to take any arbitrary task that's been done with machine learning - it doesn't actually matter to me which. It could be an image recognizer, or it could be a large language model, or something else. There are even some that are trained to solve partial differential equations through the neural net process.

I want to take several different models that have done that, that have identical architecture but are trained on the training data in different orders. It's all the same training data, but it's trained in different orders, and its connections are assembled differently because it's a random, evolutionary process. I then want to, in any way that I can, compare all of the different models and see, when it solves something or when it identifies something, which neurons are lit up and where.

I'd like to, at an epistemological level, identify what connections and what processes you can identify happening. It's a difficult process, because something that takes place over a group of neurons may be functionally equivalent to what happens in only two neurons in a different machine. Nonetheless, I think the research has merit, and it might teach us a lot about certain tasks and how they're learned in machines. What do you think of that? That's incredible.
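
Here is a small sketch of what that comparison might look like under heavy simplification: two copies of the same tiny NumPy network, with identical architecture and starting weights, trained on the same made-up data presented in different orders, then probed with a single input to see which hidden units light up in each and how similar the two activation patterns are. The task, data, and network are invented stand-ins for the real models being described, not an implementation of the planned research.

import numpy as np

rng = np.random.default_rng(42)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, order, hidden=8, lr=0.5, epochs=300, seed=1):
    """Train a tiny one-hidden-layer net, presenting the examples in a fixed order."""
    r = np.random.default_rng(seed)
    W1 = r.normal(size=(X.shape[1], hidden)); b1 = np.zeros((1, hidden))
    W2 = r.normal(size=(hidden, 1));          b2 = np.zeros((1, 1))
    for _ in range(epochs):
        for i in order:                       # the only difference between the two runs
            x, t = X[i:i + 1], y[i:i + 1]
            h = sigmoid(x @ W1 + b1)
            p = sigmoid(h @ W2 + b2)
            grad_p = (p - t) * p * (1 - p)
            grad_h = grad_p @ W2.T * h * (1 - h)
            W2 -= lr * (h.T @ grad_p); b2 -= lr * grad_p
            W1 -= lr * (x.T @ grad_h); b1 -= lr * grad_h
    return W1, b1, W2, b2

def hidden_activations(model, x):
    """Which hidden neurons 'light up' for a given input."""
    W1, b1, _, _ = model
    return sigmoid(x @ W1 + b1).ravel()

# Invented toy task: classify 2-D points by which side of a diagonal they fall on.
X = rng.uniform(-1, 1, size=(40, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

order_a = np.arange(len(X))            # the data in its original order
order_b = rng.permutation(len(X))      # the same data, shuffled

model_a = train(X, y, order_a)         # identical architecture and initialization,
model_b = train(X, y, order_b)         # only the order of the training data differs

probe = np.array([[0.3, 0.4]])         # one input to probe both trained networks
act_a = hidden_activations(model_a, probe)
act_b = hidden_activations(model_b, probe)

print("most active units, model A:", np.argsort(act_a)[::-1][:3])
print("most active units, model B:", np.argsort(act_b)[::-1][:3])
print("correlation of activation patterns:", np.corrcoef(act_a, act_b)[0, 1])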

It actually made me think of something. Have you ever looked at something and were not able to identify it at first, and had to take a closer look to figure out what it was, and then you thought, oh, I should have known that? Yes, absolutely. We misidentify things too. I had this little AI robot, and when I first got it, I had to train it how to talk. When I asked it, 'Hey, how are you?', I had to tell it: you respond with, 'I'm great, thank you, how are you?'

I had to teach it what my cat was - I actually had to explain it to it - and then with repetition it starts getting it right. We are not perfect in our recognition of things. Yes, it's a philosophical question, and that's the difference between machine learning and traditional programming: how does it learn what it learns, and once something is learned, where's the knowledge stored? It's not in one place. It's in a whole bunch of places.

A colleague I work with is a little skeptical of, but intrigued by, my approach, because he says trying to find where the knowledge is stored is like trying to find where a smell went in the atmosphere. I disagree. I don't think it's that hard, because it's trivially easy to know how many neurons you have, and at least show on a light-up board which neurons are lighting up when certain things happen, and then it's not that hard to figure out what weights are there.

Yes, it's hard to identify exactly what's happening but you can still have generalization. Do you know what I mean? I don't know. I'm excited about it. I think that's doable. It's interesting. I know there's been some research into trying to decode the mind's eye and try to see what people are imagining. Yeah. Yeah. I've heard that as well. This whole topic goes on and on.

One of the topics we didn't even touch on today - and this is a great one - is humans' experience with internal monologues and internal visualizations, because not everybody can do that. Did you know that? That completely blows my mind, that not everyone can see things, feel things, talk to themselves. I have arguments with myself. I probably look like I'm crazy when I walk out of the lab and I'm trying to figure out a problem.

Wow. You know what blows my mind? When I think about my own internal monologue, I definitely have it, but I can't even identify what it is, because it's not a sound. My internal monologue happens all the time, but it's not a sound. It's just a sudden awareness that talking is happening. But if I can't describe it, what even is it? I don't know if it has an actual sound. Yeah. I thought about that too.

My internal monologue doesn't have a voice - I can adjust it and make it sound different - but I'm aware that I'm not actually hearing something. I don't know what it is or where it comes from. So there's knowledge that exists that we can't even explain. I'm mixing everything up. Okay. So if you think of epistemology, the philosophy of knowledge: what is required for there to be knowledge? You have to have some kind of contrast, like, here it exists, here it doesn't.

I kind of thought about that almost at a binary level, but we're missing other things - like, I know my internal monologue is happening, but it's not sound, so how do I know that it's happening? I can't put my finger on it. Okay. Okay. That's fine. Alright. Cool. Out of respect for Mark, and respect for my wife at home with the kids, we'll probably wrap up soon. One of the other things I wanted to talk to you about is your success as a science communicator.

I like to ask my guests something for all of our scientists who are listening to this who have dreamed of being a content creator. I'd love to hear about your own journey, where you found success, and some tips and tricks that you found along the way. Oh, absolutely. Honestly, I'm working on writing some science books, and I was inspired by Carlo Rovelli. When I started - hmm, let me think about this. It was hard when I first started.

It was hard to figure out what people wanted to watch, and I would say the only thing you have control over is what you make. So do things that you like, and the right audience will come to you. And a lot of things help, like getting a microphone - I have a green screen, I have some sound blockers. They're not super expensive, and they're worth it. The other thing I'd say is: be okay with getting stuff wrong. You've probably seen your professor get stuff wrong.

So apologize, correct yourself, but it's going to happen and don't beat yourself up about it. Something else I've started doing recently is when I make content on something that's outside of my field, I'll actually talk to people who are inside that field and get some opinions on stuff that maybe I'm misrepresenting or what they think needs to go in there to tell the whole picture. That was really great when I started doing that. It made stuff a lot easier and smoother. Oh, fantastic.

That's awesome to hear. Now, I'll just mention a few things, just to be very transparent with our audience. We, the Breaking Math Podcast, had early success in the podcasting format - the audio format. We have a few video formats, and it's not quite the same. I'll just say it: I average under 300 views per video I make. My videos are deep dives into the mathematics of machine learning.

I get a couple of really interested folks - like, totally into it - who will send me a million private messages, and we'll just hash out ideas. So it's wonderful. And looking at your stuff, I have a curiosity. I wonder if maybe part of it is the fact that you've got awesome, intriguing, beautiful images in your background. I'm just throwing stuff at the wall, you know what I mean? I don't know. Or maybe. Oh, that definitely helps.

Stuff with the background is a lot better than stuff where I'm just talking at my desk. So I would say that it does help. I'm a little bit mad that TikTok took away a lot of the moving images in the background. Oh, that's too bad. But that can be put back in with video editing software. And that was part two of our interview with the content creator known as GT, also known as @bearBaitOfficial, who does science content about biology and about intelligence and AI.

The entire transcript is available on our website at breakingmath.io. Also, if you email us at breakingmathpodcast@gmail.com, we can send you the transcript there as well. For those of you on Patreon, for a $5-a-month donation we offer the podcast commercial-free, completely free of commercials. So if that interests you, please join us there. And if you have any questions, just let us know. Thank you very much. I've been Gabriel. Jen. Jen.

This transcript was generated by Metacast using AI and may contain inaccuracies.