
023 - Cognitive Wonderland with Tommy Blanchard

Aug 27, 2024 · 1 hr 7 min · Season 1 · Ep. 23

Episode description

KMO talks with data scientist and neuro-philosopher Tommy Blanchard about the current moment in artificial intelligence. They dive into a nuanced discussion about the potential impacts of artificial intelligence on society and the job market.

Tommy offers a measured perspective on AI, arguing that while it will change many industries, mass unemployment is not inevitable. He draws parallels to past technological revolutions and suggests society will adapt. KMO provides some historical context, questioning whether past transitions were as smooth as often portrayed.

The conversation touches on topics like the automation of tasks versus entire jobs, the potential need for universal basic income, and the challenges of retraining displaced workers. They also explore the development of humanoid robots and autonomous weapons systems.

Throughout the discussion, KMO and Tommy grapple with balancing optimism about technological progress with concerns about societal disruption and inequality. While they don't always agree, their exchange highlights the complexity of predicting and preparing for an AI-driven future. The episode offers listeners a thoughtful exploration of AI's potential impacts, avoiding both hype and doom-mongering.

Transcript

Hey everybody, KMO here. This is episode number 23 of the KMO Show. And what I have here is a conversation with Tommy Blanchard, somebody that I just spoke to for the first time today. He reached out via Substack, and reading from his Substack bio, Tommy is a PhD in neuroscience with degrees in philosophy and computer science and postdoctoral training at Harvard, writing about science and philosophy of mind. He is a semi-pro science fiction author, and his day job is as a data scientist.

And I'm reading this after the fact. We didn't really talk much about science fiction, certainly not about the writing of science fiction. So we're going to mostly talk about artificial intelligence, and in the first hour, we're going to talk about how it relates to society. So here we go. You're listening to the KMO Show. Let's go. I'm talking with Tommy Blanchard, somebody that has reached out to me via Substack, and I'm really enjoying Substack.

So it's good to make a face-to-face, voice-to-voice connection with somebody from that platform. So Tommy, tell me something about yourself. Sure. Yeah, since you mentioned Substack, I am a relative newcomer to Substack. And I guess, how long has it been? I started, I think I'm coming up on four months. So, I'm pretty new. And this is, I played around with blogging back when I was in undergrad, so over a decade ago, like two decades ago at this point.

Sometimes have those moments where you realize, oh, I'm older than I think. And so I'm relatively new to the whole content creation or whatever you want to call it space as well. My background is in academia. So I have a bit of an odd academic background. I started in computer science in undergrad, went through a philosophy master's, and then ended up in neuroscience for my PhD, stuck around in academia for a while as a postdoc before moving on to work in data science.

And so now I'm on Substack to kind of explore some of those themes and interests that attracted me to those academic fields to begin with, but I'm able to explore them with a lot more freedom than academia often allows. Well I studied philosophy, went to grad school for philosophy, philosophy of science, philosophy of mind, but that was back in the 90s. That was back when notions of using neural networks were sort of theoretical.

People who were into philosophy of mind thought, yeah, maybe this will work out. Maybe it'll scale up to something. Other people were like, nah, it hasn't yet. It probably won't. Yeah, yeah. That's wild. And actually, that brings me to how I found you. I'm new to this space. I'm kind of exploring Substack quite a bit. And I came across your article on AI. And I thought, oh, hey, this is the kind of perspective that I feel like is often missing in a lot of discussions of AI.

I felt like, and maybe this was me reading into it, but I felt like you had a much more balanced perspective. Like, hey, there's some good, some bad about this whole AI thing. It's useful for some things. It's not great at other things.

And that's a nuance that I often find missing in this world, where it's either like, you have the hype people who are super excited about AI and trying to get venture capitalists to pour more money in, and then the doomers, who either think, oh, it's useless for anything, or it's going to destroy the world or destroy art, at least. And so I thought you had a much more kind of balanced take on the whole thing.

So I thought it would be interesting to talk to somebody with that kind of nuanced perspective. Well, I don't know if you just read the one Substack post, but I have been interested in AI since the 90s. But I'm also a visual artist, and I'm a cartoonist. And before the large language models really hit public consciousness, like six months before that, OpenAI's DALL-E 2 really woke people up, particularly artists who felt threatened by it.

And in my opinion, more than working artists who felt threatened by it, aspiring artists, young people who don't want a straight job, who think, hey, I'm going to be an artist. Oh, shit, this machine can do something much better than I can do, and it can do it in seconds. So there were a lot of people in the art, I don't want to say the art world, but that brings to mind visions of galleries and such.

You know, young people with ambition and a love for popular media and a wish to participate in its creation. They were calling for the machines to be shut down. They were calling for regulation, saying that these things can't be trained on any copyrighted material. And as an artist myself, I thought, well, that's trying to hold back the tide. That's not going to fly. That's completely unrealistic.

And at the same time, as somebody who's been interested in the philosophy of mind and the philosophy of science, and particularly artificial intelligence, and particularly somebody who's been aware of the potential of neural networks for decades before they actually came along, the idea that a neural network is not allowed to reference popular imagery, it just seemed like an absurd handicap.

I mean, as somebody who wants a new type of intelligence, a new type of consciousness, to manifest itself, saying... yeah, there was a meme that went around. It showed a box of crayons and it said, if you want to make images, start like the rest of us: pick up a crayon. But even the kid who picks up a crayon is copying copyrighted material. They like superhero comics, they're drawing Spider-Man. They've got the comic book open next to their drawing pad and they're copying.

And that's how they learn. And to say that these types of minds are not allowed to learn that way, to me, it seemed like not just an arbitrary limitation, but one that just completely misunderstands the nature of the technology. And you can probably tell, I think about this a lot, I can continue talking, but I won't. So I'll stop and mute and let you talk. Yeah. So I think that's right.

And that's one of the things that I think about a lot, the fact that we're kind of holding AI to, generative AI, I should say, to a different standard here, if we're saying that we're not going to allow them to be trained on certain images or text that a human who's learning to do those same things has access to. I come at this from kind of two perspectives.

So one is like you, I have an interest in philosophy of mind and I think it's interesting to look at neural networks, both as a model of certain kinds of cognition, like what can you do with these things? What kind of functions can they perform? And more recently, the large language models as one way of thinking about how language is learned. There's a lot of arguments about exactly how closely that resembles human language learning and concept learning more broadly.

But I think it's at least a really interesting kind of existence proof of like, hey, you can actually learn a heck of a lot just by being exposed to and doing the right kind of inferences over just text. And that's kind of an amazing thing to me. That's kind of like a mind-extending concept that there's that much information just in the language that we use. The other angle I come at this with is I'm a data scientist. That's my day job. I work with these things, right?

This isn't just some theoretical thing for me and I'm not using it even as a hobby to make art or anything like that. I'm also working with software engineers and doing research on algorithms to put this into actual products. And so a lot of the takes out there that I see about like, oh, this stuff isn't useful for anything except, you know, writing bad poetry or whatever, it's just not true.

Like tons of companies, including my own, have built features into their product that have made the product better in ways that might not be visible to the users using these large language models. And so I think a more nuanced understanding of how these work and the actual use cases of them just helps to defuse some of the extreme takes on either side of this whole thing. Now, I did a podcast. I've been podcasting since 2006. So I've been doing it for a long time.

And for a period, I mean, when I had the largest audience, I was basically articulating a doomer message not from AI, but from peak oil. And a lot of people are very attracted to the idea that our civilization is about to collapse. I think it's a liberating fantasy for a lot of people. And I didn't talk much about AI during that period. Now that I'm talking a lot more about AI, I'm hearing from a lot of people who have stuck with me, even though I'm not talking about the thing they like anymore.

I guess they're used to the sound of my voice or whatever, and they continue to listen. And lots of people write to me, not lots of people, but certain people write to me again and again just to tell me how off-putting the whole notion of AI is, how they will never participate with it, how it's demeaning to humanity, how everything about it is just poison to what is good about humanity.

And I don't agree with that, but I don't push back against it too hard because it's clearly not a position that anybody has come to through a process of working with the technology or studying mental models or different types of neural architecture. It's an opinion that satisfies some deep need. And I'm not going to replace that need or offer them, I'm not going to talk them into not needing it anymore. So I don't try.

And again, these are topics that I think about a lot and I can continue to monologue, but once again, I will hit mute and let you take over. Yeah. I listened to a couple of your other podcasts with some of the, almost, reflections about doomerism on peak oil, and I do think, to segue a little bit on our topic here, I think there's a very easy to understand operating principle here that like, hey, we kind of like our narratives, right? And a lot of those narratives are fairly negative.

There's a well-observed negativity bias that humans tend to have. We kind of like, in some sense, or are drawn to is probably the more correct way of saying it, these narratives of imminent collapse or like something bad happening. We're drawn to stories of bad things happening in the media. If it bleeds, it leads. And I think AI doomerism is the latest iteration of that.

You still see a lot of the economic doomerism, and I think there's good reason to be, we live in a brave new world and there's lots of unknowns, but I think it's worthwhile to kind of take a step back and say, well, I've kind of been through this before. Are we sure this time is the time we're going to end the world? At some point, maybe we'll be right, and then that's kind of the end of the cycle. I don't think there's any particular reason to think this time is special.

I think we should look at AI and what it's capable of and think of safeguards that make sense, which it turns out, hey, the safeguards that make sense are generally the safeguards that we have in place to control all of the brilliant humans that we have in the world. All you need is one rogue human who's really smart that wants to do something really bad and it'll end the world. Well, no, because we have all kinds of safeguards to prevent that kind of stuff.

Similarly, a rogue AI that's really smart. Well, what are they going to do? We have safeguards protecting anything that a bad actor could be thinking of doing. And so I think you need to be specific about what you think a rogue AI could do if you want to make the case that, yeah, this time is going to be different and we're going to end the world. People who are interested in AI are mostly talking about a California piece of legislation.

I think it's SB 1047 and it is a proposal for regulating AI. And big AI companies say they want regulation, but big companies in general like regulation because it keeps the smaller players out. It's a barrier to entry to potential competitors. And I think that OpenAI is just a paradigm example of how vulnerable the established players are to upstart competition in this space.

Because by Silicon Valley standards, with a modest amount of venture capital seed money, you can create a product that totally upsets the apple carts of all the established players, to the point where they will attempt to incorporate you into their structure as quickly as possible. As the relationship between Microsoft and OpenAI testifies. And the fact that Amazon.com has just dumped over $2 billion into Anthropic. Nobody wants to be left out in the cold.

And Anthropic is an offshoot of OpenAI. Basically people who were not satisfied with OpenAI's safety protocols went off and made their own company and created, in my opinion, a superior product in a very short time. So I sort of switched tracks. You were talking about safeguards and SB 1047, or whatever it is? I don't know, the California legislation. It proposes certain safeguards.

But at the same time, one, lawmakers are not capable, as far as I can tell, of even having a conversation on the level that you and I are talking about right now. And neither one of us is crafting legislation. And I wouldn't imagine myself capable of crafting legislation that is going to prevent any sort of AI-related disaster. I just watched Oppenheimer, the Chris Nolan movie.

And in it, the character of Oppenheimer, who I realize is based on a real person, but he's pretty much a fictional character in the movie. But he's saying, look, nobody understands what nuclear weapons are and they won't until they see them used. And so in order to save the world, we've got to nuke Japan, basically. I think that nobody in a position of power who can establish regulations that big companies have to abide by really has any clue as to how this technology is going to go wrong.

And I don't either. And you can't really anticipate these sorts of things in such a dynamic and fast moving realm. And so we might have to endure a few catastrophic events before we even have any clue as to which hatches need to be battened down. So let me get your opinion on that. I love that. Yeah, well, are you sure we're not going to get nuked by an AI? Maybe we'll just have to endure a couple of nukes.

So I think so, first of all, when I say safeguards, I'm much less thinking of regulations and legislation that specifically targets AI. I'm much more thinking of safeguards like how would an AI get a nuke? Well, they could try to get the US nuclear arsenal, I don't know, somehow hack into the network and get access to the controls for it. Well, we have safeguards for that, right?

My understanding, and this might be the naive pop culture kind of understanding of it, is there's two guys that need to like simultaneously turn a key somewhere to launch a nuke. We have safeguards that can only be overcome with very specific mechanical stuff that needs to happen.

And so sure, there's a lot of uncertainty about exactly what AIs are going to be capable of, but these kind of catastrophic events where they just get access to some weapon of mass destruction, if it was possible to get easy access to a weapon of mass destruction, bad actors would do it, right? Terrorists are smart.

And so I think we can be, we can feel relatively safe in the fact that, yeah, a lot of people have tried to cause catastrophic stuff before, and we've certainly had terrorist attacks, but nothing that's been, you know, catastrophic at the level of threatening our whole society or anything like that. And so I think the level of damage that a misaligned AI could do is limited just because of those constraints that we already have in place. Well, to push back on that, I'd say there are constraints.

There are safety precautions in place to protect people's digital identity, and yet people's digital identity gets stolen all the time. You know, there are people in Indonesia and, you know, various Southeast Asian countries who are sitting in front of a huge board full of cell phones, basically just contacting people whose phone numbers they got, they either stole or bought from somebody and just saying hi, and, you know, they attached a picture of a pretty woman to their profile.

And you know, I get these things all the time. And these are just humans, you know, running very simple social scripts, and you do it often enough, and you know, you find people who are lonely, who are willing to talk and who will extend trust, and then you fleece them. And it's a model that works, and it's very, very simple.

And I think one of the things people worry about with AI, particularly generative AI, is its ability to be persuasive and to gather information about a particular person to become super persuasive to that person. And I think, you know, one of the easy to imagine dangers is just what happens when much smarter AI replaces these very simple scripts in these, you know, these phishing schemes. When it comes to nuclear weapons, yeah, the US nuclear arsenal is hardened, it is very secure.

But if you study the history of the Cold War, we came very close to nuclear exchanges on many occasions when there were, you know, flocks of birds or, you know, the sun hitting ice in a particular way, or just, you know, ambiguous signals made people think that there is a launch, you know, an incoming wave of nuclear weapons that we need to respond to. And in every instance, it was a human being who said, you know what, I don't believe it.

And I know this could be the end of my career, and this could be the end of my life if I'm wrong, but no, we're going to stand down. And if it hadn't been for individual human beings saying, no, I'm not going to follow the protocol, we would be living in a very different timeline. And you know, I think the more persuasive AI gets, the more it can spoof those signals, the more it can engage in social engineering, which is very targeted and which is specific to individual people.

I think that the hard parts of the system are, you know, the mechanical and the cybernetic parts and the weak portions are the human portions. And I think that's where AI is going to find a weakness. Now, I'm not particularly worried about, you know, AI starting a nuclear war. It's very Terminator. That's something that lots of people can imagine. So the thing that hits you upside the head as a society is not the thing that everybody was imagining.

It was the thing that nobody thought of until after it happened. And then after the fact, it seemed obvious in hindsight. And I just I don't think there's any way to determine what those vulnerabilities are until they get exploited. So I think there will be, I'm not sanguine. At the same time, I just realized that doomerism is a fetish that feeds on itself and is a cognitive distortion that I reject. And I reject ideology in general. I've been caught up by it repeatedly.

I used to be a dyed-in-the-wool libertarian. Not anymore. I used to be a singularitarian before I became a doomer, and then I was a doomer. And now I'm just like, you know what, these are all mind viruses and they're useful in some instances. And if you can put them on and take them off, then you can socialize with people and establish social connections and have satisfying interactions.

But at the same time, if you let these things be rigid and domineering in your psychology, you're setting yourself up to get played. And so I'm just very skeptical of Pollyanna-ish hopium, as the doomers would call it, and I'm also skeptical of doomerism. And in politics, particularly in this year when things are at such an intense pitch between Democrats and Republicans, I'm among the double haters. I don't like either one of these parties.

I don't like any of the factions that are in charge of these parties. I have no interest in supporting any of them. I know that I will have to endure them, whichever one is elected. And I honestly don't care which one wins this particular election. I know that I won't like either one. But at the same time, I also know that they don't really control the society. The person who's in front of the cameras, behind the lectern or whatnot, it's a tough job.

Their hair is going to go gray real fast if it's not already. But they're not in charge. So I think just living with radical uncertainty is necessary to be both rational and comfortable in the world that we live in. So I'll stop there. Yeah, I'll definitely agree with a lot of that, but not all of it. So what you left off with there, we have to learn to live with some level of uncertainty. I'm all for acknowledging the limits of our epistemics. We can talk about what's going to happen.

Predictions about what's going to happen in 10 or 20 years are always wrong, right? So I don't think we should ever be too comfortable that we know exactly what's going to happen. But to go back to some of the specifics of what you were saying about the dangers of AIs, and phishing is, I think, a great example, because, yeah, phishing is going to be easier in a lot of ways. And that sucks, because who's targeted by phishing tends to be older people that are a little bit less with it.

They are a little bit more easily duped by messages claiming, oh, this is your grandson or whatever. And that's sad. There are going to be more tools for targeting those kinds of people. There are also going to be more tools to detect and fight that kind of stuff. So there's sort of an arms race. And AI is on either side of that, right? One thing that a lot of people don't realize when we're talking about AI, they're exposed to ChatGPT, the kind of sexy generative AI.

The models that brought about this whole revolution in large language models, they were originally used in translation. And there's an encoder and a decoder. The encoder is the thing that looks at the text in the other language. So let's say we're going English to French. It kind of reads the text in English. And its job is to come up with a really good numeric representation of that text. Then the decoder's job is to decode that into the French text. So the generative component is that decoder.

But that encoder, that really robust representation of language, that's actually really important. And I think if I had to take a guess, it's used much more in industry than the decoder. Because what it allows you to do is, hey, now we have a bunch of numbers that kind of represent this language in a really robust way, or this message, or whatever content it is.

And that can be used with other machine learning models, classifiers, that are able to say, hey, we think this is phishing, or this is this kind of crappy message that we don't want to pass along. Or, this clinician note is from a sick patient, not a healthy patient, and this patient needs more resources. There are all of these things that you can do when you have that robust numeric representation of text data that actually become really helpful things.
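To make the encoder-plus-classifier idea concrete, here is a minimal sketch in Python. The specific tools (the sentence-transformers library, the all-MiniLM-L6-v2 encoder, scikit-learn's logistic regression) are illustrative assumptions, not anything named in the episode; the point is just the pattern of turning text into a numeric representation and fitting an ordinary classifier on top of it.

from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

# Encoder: maps text to a fixed-length numeric representation (an embedding).
encoder = SentenceTransformer("all-MiniLM-L6-v2")

# Tiny toy training set (1 = phishing-like, 0 = benign).
texts = [
    "Your account is locked, confirm your bank details at this link",
    "Hi, I saw your profile, click here to verify so we can chat",
    "Meeting moved to 3pm tomorrow, same room",
    "Here are the notes from yesterday's review",
]
labels = [1, 1, 0, 0]

# Embed the messages, then fit a plain classifier on the embeddings.
X = encoder.encode(texts)
clf = LogisticRegression(max_iter=1000).fit(X, labels)

# Score a new message: probability that it looks phishing-like.
new_message = ["Dear customer, send your password to unlock your account"]
print(clf.predict_proba(encoder.encode(new_message))[:, 1])

The same embeddings could feed any downstream model, a phishing filter, a triage flag on clinician notes, and so on, which is the point being made here: the encoder's representation, not the text generation, carries much of the practical value.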

But they don't get the limelight because the decoder, the generative component is so much sexier. So I think we're going to see an arms race in that space. Or maybe we won't, right? A lot of these arms races kind of happen behind the scenes. So I think an arms race like that is going to happen. Now I think you're right that, hey, we don't know necessarily what the biggest vectors of attack could be for a persuasive AI.

The thing that lets me kind of sleep at night with all of this is, again, I feel like I'm harping on the same point, but there's a lot of smart people in the world and there's a lot of bad people in the world. And so luckily, if there's a way to do bad stuff, it's probably been attempted. And so we've probably learned from that. And point well taken that, hey, at various times in the past, we've come close to the brink, so to speak, of nuclear war or whatever else.

And the important point is we didn't. We had humans in the loop that made good judgments. And we continue to have humans in the loop. And we have to empower them and have the right tools in place and the right information. As our information environment degrades, that becomes all the more important. If persuasive AIs become common, that's a known thing.

The question people are going to ask themselves whenever they receive some information, if they're making some really important decision based off of it, is there some possibility that this is from some persuasive AI that's kind of spoofing this information? So I'm relatively optimistic about our ability as a society to adapt to these things, both through improvements in our tools and through our adaptability as humans to change as our information environment changes.

I'm sure you've seen at least somebody reporting on the Facebook groups that are all just AI generated imagery that boomers on Facebook cannot tell. They just see a photograph. And my mother is 85 and she's like that. She can't tell an AI generated image from a photograph. And most people have never interacted with a large language model. They have no idea how lifelike they can be, how seemingly intelligent and responsive they are.

So I think the idea that part of the arms race is just in our ability to be discerning and to detect what's incoming. Yeah, there'll be a portion of the population that keeps up for a while, but lots of people are already left behind and they're not going to catch up. My mother is never going to develop the visual sensitivity to look for six fingers on a hand. And even the distortions that were common in AI generated imagery a year ago are much less common now.

And the staggered sort of morphing weird video is improving rapidly. So some people have already been left behind and won't catch up. And I'm pretty sure that within a few years, my ability to determine what is real and what is not will be unreliable. I'll spot stuff from time to time, but I won't know how much stuff I'm not spotting. I'll stop there. So I think that's a good argument to keep your mom away from the nuclear codes. And I think that's right.

That, hey, there's going to be maybe some kind of bifurcation of people's ability to navigate this information environment, and it's already happened. You point out not just AI generated stuff on Facebook, but just random misinformation or stuff that's right, but kind of has a big slant. Those aspects of our information environment have always existed. There have been studies.

I think if we look at present day as a snapshot of what kinds of things we might be worried about, misinformation, maybe that'll get worse. It's already so bad. I think the problem of misinformation is vastly overstated. I think that studies of misinformation have found it's actually a very small, limited amount of exposure that people have to it. Much more common is information presented with some kind of leaning. That's something that we've had forever.

Everybody's received their information diet digested through some news anchor that has their personal biases. You're learning about it from the local news press or the editor back in the days when you had a much more limited selection of where you got your information from. Your local paper is the main source of information, and the editor is a racist or something. You end up with slants to the information that you consume. We've been dealing with that for a long time.

There's going to be additional wrinkles being introduced to that as we change the culture and as technology evolves over time. I don't think it's fundamentally new. I think it has always been the case that people that consume from a wider set of sources and who have a better ability to identify not just experts but the right experts for the right kinds of information and are able to integrate all of that and evaluate information, that's always going to be an important skill.

But I don't think it's a particularly new one. Even if we're not able to personally identify AI images, I assume that at some point AI image generation is going to get good enough that I won't be able to tell, no one will be able to tell, but we have very good image manipulation techniques already. What you have to do is rely on the experts. When an image hits the news networks that it turns out was doctored, journalists catch it.

We have experts at looking at these kinds of things and trying to track down where they have come from that these institutions can help protect us from that information. So I think those institutions have always been important and are just going to continue to be so. And the skills of navigating these environments might change a little bit, but I think the overall game is staying the same. Sorry, I was looking for the name of a particular company. Are you familiar with Annie Jacobson?

No, I'm not. She wrote a book called The Pentagon's Brain. She's written another book since then, but she's done a lot of research and had access and talked to high-ranking generals and whatnot. And regardless of what Silicon Valley wants and regardless of what the general populace wants and regardless of what any politician might say on the topic of autonomous weapons systems, the Pentagon wants them.

And there's a company, and this is a very unfortunate trend, but this company is called Anduril, which is a Tolkien name. It's the name that Aragorn's sword was given, you know, was bestowed with after it was reforged. And this is a company that is dedicated to building autonomous weapons systems, and they have just signed a big, you know, multi-billion dollar deal, and they're building a huge facility to create autonomous weapons.

And it doesn't matter what the state of California, you know, writes into law. This is the Pentagon. They will get what they want to some extent. Now the Pentagon wants autonomous ships. They want lots and lots of small autonomous ships, but Congress doesn't like that. Congress likes, you know, aircraft carriers. Congress likes big, expensive things that are built in particular places, you know, that satisfy particular financial and political interests.

So the Pentagon doesn't get everything they want, but there is no congressional bulwark against autonomous weapons systems. And we're seeing in Ukraine, you know, very innovative uses of drones, which are very, very cheap compared to the tanks, you know, and the other types of traditional military hardware, which they are, to put it in very sterile terms, neutralizing. So I think that the general point that I'm pushing back against is that we have experts. The experts know what's going on.

They will anticipate what's happening, and they have no conflict of interest in terms of providing what it is we need as individuals and members of this society to live safe and happy lives. You know, I'm not appealing to a general sort of mistrust of government, which, you know, my 25-year-old libertarian self certainly would have pressed on.

I'm just saying that there are—it's a complicated situation, and a lot of times we're just not going to know where resources need to be deployed and in what manner until things have gone catastrophically wrong for somebody. You know, we never had a nuclear war, but people living in Hiroshima and Nagasaki, you know, they experienced nuclear Armageddon. For some of them, you know, the last thing they ever saw was that bright flash of light.

You know, some doomsday scenario—I mean, some doomsday scenario will befall all of us at some point, but it's individual. It doesn't come all at the same time.

But you know, in limited regions, you know, maybe not in the U.S. because we have these two big oceans on either side of us and, you know, weak states to the north and south, but somewhere, pretty much every, like, horrible catastrophic scenario that anybody has imagined is likely to unfold for somebody, you know, for some local population somewhere. So this will be my last, you know, volley in the things aren't necessarily rosy because we have experts looking out for us.

And in fact, you know, I like to quote—what's his name? Israeli, you know, cabinet member in the Nixon administration won a Nobel Prize, even though lots of people say he's—wow, who am I thinking of? Do you know who I'm thinking of? Sorry, man. This is before my time. This is killing me. I mean, this is a name that should—this is like—this should be on my tongue like, you know, Ronald Reagan. Anyway.

Yeah. I'm also not American, so it's kind of—I'm Canadian, so I get a pass on, like, all things American history. Oh, you know this name. If I were to say it, you'd say, oh, yeah. Anyway, his definition of an expert is somebody who articulates the needs of power. You know, so you are not concerned particularly about misinformation, and neither am I. You know, exposure to a false narrative doesn't necessarily mean somebody's going to adopt it.

People adopt false narratives because they answer to psychological needs that they have, and they will seek out the information, you know, or the misinformation, which validates their positions, which they hold, you know, for other reasons. But I'll just stop there. I'll stop there. That's the beginning. I see that I'm pointing myself down a road that doesn't resolve quickly, so go ahead. No problem.

And, you know, this might be kind of annoying, but I kind of want to say—yeah, I think we're kind of saying the same thing. We might be kind of taking a slightly, like, you know, the optimist versus the pessimist version of it, but I think at the end of the day, like, yeah, I don't believe, hey, we have experts in place, so we should all just kind of, like, sit back. Nothing bad could ever happen, because at the end of the day, yeah, we need to—there is nuance.

There are things that we need to be worried about. There are issues that are going to come up that are unforeseen. And so we do need to take a robust, nuanced understanding of this as citizens who vote, as people who can voice opinions about different legislation, and as people who might have some interaction with experts who do, you know, speak to power, and so have some level of influence.

You know, each of us can be playing a small role, and so I think we kind of have a duty in that sense to try to understand things and understand the nuance. What I'm pushing back against is the sort of blanket, something definitely is going to go wrong, and instead pushing for, hey, like, we actually have a lot of, like, important protections in place. Let's understand those protections.

And yeah, if something comes up and it's ringing alarm bells, of course, we shouldn't just assume, yeah, it'll be fine. Don't worry about it. Yeah, there's every possibility of misaligned incentives between people making decisions about autonomous weapons, which isn't something I know a lot about, and the, you know, the well-being of the population of the United States or abroad. Will something catastrophic necessarily happen? I'm not so sure. Maybe it depends on your definition of catastrophic.

We'll probably make some mistakes. Will people die? Hopefully not. I don't think it's a given that, like, yeah, definitely when mistakes are made, a bunch of people will die. With weapons, yeah, this is high-stakes stuff. Scary stuff is high-stakes stuff. When there's a gun malfunction, people die. So if that happens with autonomous weapons, sure, there might be something really bad that happens. That's pretty scary, right? Weapons are really scary.

I don't like, like, this isn't an area I know much about, but yeah, it's not something that gives me the warm fuzzies thinking about all the new tech and new weapons. New weapons systems happen all the time, though. And they're all scary, frankly. And we wouldn't want something to go wrong with any of them. Or at least we wouldn't want something to go wrong when we're on this side of them.

So I think you're right that we shouldn't just trust everything with a blanket, but we also shouldn't have a distrust of everything with a blanket.

We should look at each issue individually and try to better understand what it is that policies are actually proposing, what are they actually supposed to protect us against, what are the actual risks of certain developing technologies, and what are the things that we, as citizens, can kind of vote for and advocate for to try to push things in a direction that we are more comfortable with. Well, first, I'm going to watch your face as I say this name, Henry Kissinger.

Yes. Second, I am a curmudgeon about electoral democracy. Like, I live in Arkansas. It does not matter who I vote for. Trump is going to take Arkansas' six electoral votes, period. It does not matter at all whether I vote or not or who I vote for. Doesn't matter. It doesn't matter who I talk to. It doesn't matter what I say in public. Most of the people listening to me are not in Arkansas.

And most of the people that I talk to in daily life, I don't get into political conversations with because they're not likely to go well. I want to set that aside. I mean, just getting into politics is probably not the most productive use of our time. I would much rather talk about other aspects of artificial intelligence and how it's likely to impact society. And I think the thing that most people can relate to is threats to the ability to make your livelihood.

If an AI does something that you do faster, cheaper, more reliably, and never has any non-work-related drama come into the workplace because of them, there's going to be a strong temptation. So the question is, what sorts of jobs are susceptible to replacement? And I think one of the things that a lot of AI experts say is that it isn't jobs that are susceptible to being automated. Tasks. Tasks are susceptible to being automated. And jobs typically involve a lot of different tasks.

So the more generalized and the more varied your job is, the safer you probably are. I used to think that the more physical your job is, the safer you would be, because it is mostly routine, repetitive, intellectual tasks which are easily automated via AI. But now humanoid robots are coming on strong after seemingly decades of being stagnant. Do you remember the ASIMO robot from around 2001, 2002? It's a Japanese robot made by Honda. It was a very impressive little guy. And it had all the...

Wait, I think I have a picture in my head now. Is it like a little white robot? Yeah. Okay. Like round head. Yeah. Very cute. Yeah. And there's some very impressive videos, but there's also, you know, blooper rolls of these things falling over and they can't get up. But if you look at the new, you know, the replacement for the Atlas robot from Boston Dynamics, the most famous video of it, it's lying flat on its back and it gets up very smoothly, you know, with just its legs.

Like it twists its legs around so that its feet are planted by its hips and then it stands up in a way that no human ever could. So I think humanoid robots that, you know, can do what most humans do are much closer than we had previously thought. So this brings up the topic of universal basic income. And I have opinions which I will keep to myself for now and just throw the, you know, the topic down. UBI, what do you think?

Well, let me address the premises here first before talking about UBI specifically. So I think it is easy to overestimate the impact on employment of AI, robotics, et cetera. We have been through this before, right? Look at the 1800s. It's something like 80% of the population was farmers, right? How many people work in agriculture now? Way smaller percentage. Does that mean we lost all of those jobs? Well, no. We just like learned how to do it more efficiently with fewer people.

We got better at it. And that means we could go on and move on to do other things. That's been the history of technology, right? Where does the term, oh, shoot, I'm blanking on the term. Luddite. Where does the term Luddite come from? These are people that were against, was it the loom? Some very basic fiber technology, weaving technology. I should say fiber technology sounds like I'm talking about fiber optics or something. So they were against that. Why?

Well, because it was going to put people out of jobs and they were against that just like we're against AI today. So the rate at which industry adopts new technology tends to be very slow. So I think what we'll see is a very gradual transition. You already see some of this. So for example, like tons of companies have already put together, you know, we had crappy chat bots before. Hey, LLMs make our chat bots a little bit better. Let's build new systems based off of those.

So our chat bots have gotten a little better. The internal stats I've seen from the company I've worked at, these haven't really cut down the number of questions that go along to support members, but hey, maybe we'll get better at it. And hey, maybe we won't need to have so many live humans responding to questions anymore. That would be great, right? Like, those people can then go on to do other things that are more productive.

We need fewer people to do the same amount of output, right? And that's how we grow the economy. That's how we all become better off. So unemployment is low now, and we have these AI technologies. I don't see a reason to think that we're going to see a sudden sharp shift.

I think we'll see a slow adoption of AI technologies into different things that make people more productive and that that will free up resources, grow the economy as those people move on to do other things that AI is not capable of doing. Maybe I'm wrong.

Maybe you're right, but like, yeah, actually, if we look 10, 20 years down the line, there's going to be massive unemployment because anything useful that we can think of the vast majority of humans doing, we can just do with some combination of robots and AI. That would be great, right? Like you said, at that point, yeah, UBI becomes the obvious option.

If you are not able to contribute to an economy because you just don't have skills or abilities that are over and above what can be done much more cheaply by a robot or AI, yeah, let's open things up so that you can do things that are maybe not economically valuable, but are valuable in other ways. And that expands the things that you're capable of. Maybe it's you go into art and become more involved in your local community doing art, right?

I recognize that it's kind of funny to talk about that given the, like, oh, yeah, AI can do art now. Like, why do it? Well, no, it's a form of expression, right? If we take away the economic component of it and you're just doing it as a form of expression, suddenly there's more reason to do art and you can connect with your fellow humans on pursuing that form of art.

And so if that's kind of the world we live in and if we have the same level or even greater economic output than we otherwise would have, we can afford to spread around that wealth more, right? If we're doing the same amount of, we're building the same amount of stuff as we do now, but only using 25% of the population, hey, let's spread around that stuff to the full population using a tool like universal basic income or another like equivalent policy. So this is a rote conversation.

You've said something that I've heard many times before, so I'm going to give the standard response to it. I'm going to try to keep it as short as possible though, because it's not terribly interesting to me to have the 10th or 15th iteration of this exchange. But the Industrial Revolution is a bad example. One, there was something called the enclosure movement in the UK. It wasn't the UK at the time, it was England.

But there used to be a lot of people who lived agricultural lives and they depended on common grazing areas, and this is directly related to textiles. Sheep would be grazed, but the sheep belonged to individual families and small communities, and they shared grazing lands. And then the turning of the wool into fiber that could be made into cloth and other textiles was a skilled occupation.

And then more importantly, the weaving of those fibers into cloth was also a skilled occupation. And the looms that the Luddites were objecting to were taking a skilled job that commanded a high price and turning it into a job that was not skilled and which could be done by children, which in fact was often done by children. Children were preferred employees in early factories because they could not stand up for their rights.

You didn't really even have to pay them, you could just bully them into doing the work. And that transition from agricultural people depending and making use of shared common lands and then using that to produce a resource that they could then turn into a value-added product that had a high premium because of the skill involved in it. The people got forced off the land by the Enclosure Acts. They couldn't live and work where they had lived and worked for generations.

They had to go into the cities where they weren't established, they didn't have any status there, forced into slums. And then their jobs got de-skilled and it didn't all work out for them. We 100 years, 200 years later can look back and say, look, it all worked out fine, but it didn't work out fine for them. They were fucked forever. They didn't get programming jobs. They didn't get bus driving jobs. They lived and died in poverty, in squalor with no representation.

And the Luddites were absolutely correct in their time that they were getting fucked and they did get fucked. And it wasn't okay. It's okay to us now looking back where we can abstract their suffering, but it's not okay. In terms of the transition later from mostly manual farming to mechanized farming, pre-mechanized farming sucks. The reason we've had slavery for thousands of years is because nobody wants to do it.

So you have to take certain people and say, okay, you go out in the field and you do that hard shitty work because I don't want to do it. That's the basis of slavery. So the change to mechanized farming is a godsend. At the same time, literally at the same time, we were transitioning from horse-drawn transportation to mechanized transportation.

And there's a line of reasoning that says, look, don't look at the drivers to say what our fate will be as AI takes over more and more of the job tasks in the society. Look at the horses. The number of horses dropped dramatically after the introduction of the automobile because we didn't need them. And so you can say human beings have value in and of themselves. Their rights should be respected. Their welfare should be considered.

Should be, but in general, that isn't how things have played out historically. There have been portions of the population that needed a workforce and so had a reason to provide for at least basic provisioning for their workforce. But when that workforce becomes unnecessary, and this is something Bret Weinstein said on a recent podcast, if you decide, well, we're going to let the robots do the work and we're going to take care of people.

And one way to do that is just by giving everybody money so that they can buy the things that they need. That'll be a very short lived period because the people who perceive themselves as the ones creating the value will be resentful to see their value get redistributed to the useless eaters, to the people who aren't doing anything. And I guess it comes down to a basic question of where you sit emotionally in terms of the manipulation of the masses by an elite.

And I think if you look back in time, you always see people justifying the suffering of others and invoking noble sounding principles in order to do it. Now in the past, these were typically religious principles. And in the present, they're going to sound compassionate, they're going to sound noble, they are going to sound humane when they're coming out of the mouths of the experts. And remember, the expert is somebody who articulates the needs of power.

But they're going to basically be justifying the fact that we have increasing concentrations of wealth, power, opportunity, and more and more people who don't really, they're not needed. And just because they're human beings doesn't mean we're actually as a society going to provide for their welfare or the maximization of their potential. Just look at the streets of San Francisco, Portland, Oregon, Los Angeles. Like I was in Los Angeles in 2019, I guess was the last time I was there.

I was shocked at the number of tents just everywhere. Like I drove through Skid Row, I saw the worst of it, but everywhere, everywhere in LA you go there's tents on the sidewalk, people just living on the street. And I can't say that, you know, I can't look at that and say, yeah, we're going in the right direction, that this is a positive development that, you know, clearly these people have been liberated from the need to engage in, you know, coerced labor. Good for them. Not so good for them.

So, yeah, I'll stop. Obviously this is a rehearsed rant and I said I'd keep it short and I failed. Yeah, just to quickly react to some of that. So, mechanization of agriculture versus the Luddites and the automation of weaving. Two examples. The thing they have in common is, hey, we were able to increase productivity with less labor.

The thing they don't have in common is, in one case we seem to have an increase in quality of life, and in the other a decrease, for those in that particular historical period that had that transition. Yeah, I think it's totally legitimate to worry about, hey, even if we have some long term plan that makes things look good because we'll be more productive as a society, what does that transition look like? I think that's a good thing to ask.

Luckily, we have things like lots of rights of workers now. We don't have quite the same loosey goosey, you can force children to work in factories kind of stuff that we had back when the Luddites were relevant.

But you do still have this worry that, yeah, what about the people that potentially are going to be losing their livelihoods, something like a skill that they have developed through their whole life, they have their identity tied to this job and they might learn that's just not relevant anymore. You're not creating value.

I think that's a serious risk of psychological harm and I think that's something that we should, if we get to that point, which again, I do not think is a given, I don't think it's at all a given that developments in AI inevitably are going to lead to mass unemployment. I think that's a massive leap. But if we do get to that, yeah, we have to do some thinking as a society, how do we set people up to be supported through that transition? How do we help them deal with that?

Either by, I mean, one thing I'm generally a big advocate for is just investing in human capital. So how can we help people retrain for other skills? Alternatively, how do we help them adapt to some other life, like where they're kind of redefining their identity to not be something that is economically productive in the same way that they've conceptualized through their lives, but is useful in some other manner.

That's all very speculative future-facing stuff. With regards to, well, horses became a lot less common as we mechanized agriculture, I'm not sure that the analogy there holds up super well, because the reasons that we create more humans are not the same reasons that we create more horses. People decide to make babies for reasons that I think are completely divorced from, oh, yeah, this is going to be a new worker for our economy.

Point well taken that there are larger societal forces that can influence the cultural discourse and therefore influence those discussions happening at the family level. But I'm not so pessimistic about the result of those being so extreme that, yeah, we're just going to stop breeding the masses in this hypothetical future. But is UBI kind of a sustainable long-term policy? I think that depends a lot on the dynamics of exactly how we set it up, exactly how we evolve towards it.

And there's enough speculation there that my only claim is, yeah, if somehow we end up in this future scenario where we have mass unemployment and people just are not being economically valuable, not because they, well, because AI can do everything, which I'm not even sure that that's a really plausible scenario. But if it did somehow happen, yeah, UBI makes sense. I'm not kind of advocating for it being a likely way for us to go.

Because I think there's a lot of thinking about people's psychology around politics in general. I think it is an unlikely policy to ever be adopted in a widespread way. And I think there's a lot of societal implications that we don't really understand that would take place if we had a large even majority of the population on something like UBI.

So I'm just saying from a kind of naive perspective, yeah, of course, like, hey, if we have a lot of stuff and we have a lot of people that aren't able to engage in the economy, it makes sense to spread that stuff around. All right. That was Tommy Blanchard. We went on to talk for another hour after that. That will be behind the paywall. So if you're listening on Patreon or Substack, you should be able to find the paywall portion pretty easily.

If you're listening on YouTube, well, you'll either need to go to my Substack. There'll be a link in the description of the video or to my Patreon. I would suggest Substack, but either way is fine. Anyway, that was Tommy Blanchard. If you want to hear more from him, let me know and I'll get him back and we can talk about science fiction. All right. That's all for this episode. I'm...sometimes I will go on a rant and monologue here at the end. I'm not going to do that this time.

So thanks for listening. Talk to you again soon.
