Hey everybody, welcome to episode number 8 of the KMO Show. I'm your host, KMO, and I have got a whining puppy in a cage that I can't leave very long, so this is going to be a very short introduction. I'll just say that I'm speaking to the first repeat guest on this podcast. It is Kevin Wohlmut. He was on the first episode, along with Michael Garfield, and that was a trialogue, basically a dialogue, but with three people.
So we're going to start off addressing a series of tweets by Eliezer Yudkowsky, the founder and, I don't know what his exact title is, but the chief poobah at the Machine Intelligence Research Institute, which is an organization which is dedicated to the concept of AI safety and AI alignment. But we're not going to be talking about big existential risks to humanity from artificial intelligence. Now, this is my opinion, don't quote me.
I mean, you can quote me if you want, but as long as you put in the proviso, this is my opinion and I'm just spitballing. I don't think that AI, you know, taking control of robotic bodies or nuclear arsenals or actively doing harm to humanity is anything that we need to worry about in the foreseeable future.
I think what we need to worry about will be the unintended consequences of rushing these systems to the market, embedding them into all sorts of software products without much testing because of an intense need to do so. There's a presentation that you can watch on YouTube, but you can also get it as a podcast. It's called the AI Dilemma and we're going to mention it in the upcoming conversation.
It's with Tristan Harris and Aza Raskin, and in it they say, well, let me just go to their website and read this aloud. People who've heard the most recent episode of the C-Realm Vault podcast will already have heard this, but that podcast is behind a paywall and this is very important. So I'm going to repeat myself. Reading from humanetech.com slash podcast slash the-ai-dilemma, with a dash between each word.
Half of all AI researchers believe there's a 10% or greater chance that humans will go extinct from their inability to control AI. When we invent a new technology, we uncover a new class of responsibility. If that technology confers power, it will start a race. And if we don't coordinate, the race will end in tragedy. Well, the race in question here is the race to develop powerful AI systems.
Artificial general intelligence to start with, and probably soon thereafter, artificial superintelligence. So at the very beginning of the conversation with Kevin Wohlmut, we will run through Eliezer Yudkowsky's 17 reasons why artificial intelligence is more dangerous than nuclear weapons. Here we go. We're talking about AI. Hey everybody. I am here with Kevin Wohlmut. When I say here, I mean in cyberspace, of course, because he's in New Mexico and I'm in Arkansas.
But Kevin, you are the first repeat guest on the KMO show. I am honored. Thank you. Have that carved in your tombstone. Hopefully not anytime soon. I never thought I would achieve this honor when I first started listening to the C-Realm. So it just goes to show: hey, follow your dreams. Or at least listen to obscure podcasts. Alrighty. We're talking about artificial intelligence.
I think it's a topic that I'm maybe following more closely than you are, but you are definitely following along and posting good comments on YouTube. I would like to start by revisiting Eliezer Yudkowsky's 17 reasons why AI is more dangerous than nuclear weapons. So for those who don't know, Eliezer Yudkowsky has been interested in AI safety for decades.
And he has an organization that employs people, and they really work on the topic, and they're generously funded, because a lot of his Silicon Valley folks, you know, were early investors in Bitcoin and made a bunch of money. So he's actually, you know, doing serious work. But in terms of being the face of AI safety concerns, maybe not the best choice, because he's kind of off-putting in his personality and his presentation.
But on Twitter he posted 17 obvious reasons that the danger from AGI, or artificial general intelligence, is way more serious than nuclear weapons. So let's alternate. I'll read one, then you read one. One, nuclear weapons are not smarter than humanity. Number two, nuclear weapons are not self-replicating. Three, nuclear weapons are not self-improving. Four, scientists understand how nuclear weapons work. Five, you can calculate how powerful a nuclear weapon will be before setting it off.
Six, a realistic full nuclear exchange between two nuclear powers wouldn't extinguish literally all of humanity. Seven, it would be hard to do a full nuclear exchange by accident and without any human being having decided to do that. Eight, the materials and factories for building nuclear weapons are relatively easy to spot. Nine, the process for making one nuclear weapon doesn't let you deploy a hundred thousand of them immediately after.
Ten, humanity understands that nuclear weapons are dangerous. Humans treat them seriously and leading scientists can have actual conversations about the dangers. Eleven, there are not dozens of venture backed companies trying to scale privately owned nuclear weapons further.
Number twelve, countries have plans for dealing with the danger posed by strategic nuclear armaments and the plans may not be perfect but they make sense and are not made completely out of deranged hopium like quote, oh, we'll be safe so long as everyone has open sourced nuclear weapons. That's good. Number thirteen, most people are not tempted to anthropomorphize nuclear weapons nor to vastly overestimate their own predictive abilities based on anthropomorphic or mechanomorphic models.
Number fourteen, people think about nuclear weapons as if they are ultimately ordinary, causal stuff and not as if they go into a weird separate psychological magisterium, which would produce responses like, quote, isn't the danger of strategic nuclear weapons just a distraction from the use of radioisotopes in medicine, unquote. Fifteen, nuclear weapons are in fact pretty easy to understand. They make enormous, poisonous explosions. And that's it.
They have some internally complicated machinery but the details don't affect the outer impact and meaning of nuclear weapons. Sixteen, eminent physicists don't publicly mock the idea that constructing a strategic nuclear arsenal could possibly in some way be dangerous or go less than completely well for humanity.
And number seventeen, when somebody raised the concern that maybe the first nuclear explosion would ignite the atmosphere and kill everyone, it was promptly taken seriously by the physicists on the Manhattan Project. They did a physical calculation that they understood how to perform and correctly concluded that this could not possibly happen for several different independent reasons with lots of safety margin.
All right, I think it would take us pretty much the whole podcast to go through those point by point, but let me ask you if there's any one of those that you'd like to address, or if you have a comment about the body of reasoning as a whole. It's complex. So there's a lot of things in there that I think he gets right and a lot of things that he gets wrong.
I don't have the Twitter thread in front of me, but he says that politicians treat them seriously, when I would argue that pushing Russia over Ukraine is not treating nuclear weapons seriously. Leading scientists can have actual conversations about the dangers. I think lots of people are having conversations about the danger of AI.
Like I said, I don't follow it quite as much as you, but I follow it a lot, and I think you would be surprised how many people are interested in this, at least among the educated people I talk to. He gets a lot of things right. He gets a lot of things wrong. It leads me into the comment that I made on your YouTube post, which is probably where you're headed anyway. Go ahead. So go ahead and segue into that.
So what strikes me is that AI can be dangerous and nuclear weapons are definitely dangerous but the danger from AI is a different type of danger. So comparing AI to nuclear weapons is kind of apples and oranges. It's a very different type of danger. So I don't know if you want to go off a little more about that but I went from there into freedom. It's good.
Okay. Well, so the danger from AI is basically that AI is going to trick us into doing things to ourselves whereas the danger from a nuclear weapon is that some foreign power drops one on us out of a clear blue sky and all of a sudden everyone's dead or wounded or damaged or all that. So it's very different. AI comes back down to a philosophical question of freedom. How much influence are we willing to tolerate?
And this is an argument that long predates AI, like I mentioned in my YouTube comment. Like right now, the right part of the political spectrum is all up in arms about teachers being groomers, that they're teaching kids to accept gay people or something. It's a question of how much are our minds really independent from influence, which is a very subtle question, and how much freedom are we willing to allow people to do something potentially bad to themselves.
So that's a line of attack that I really haven't heard from a lot of people discussing AI. I can certainly talk some more about that. Well, let's bring in some more references. I'm embarrassed, I don't remember who the other participant in this conversation was, but somebody directed me to a video by a comedian named Adam Conover, whose face I kind of recognized, but he's not anybody that I follow and I couldn't tell you much about him.
And I think it was like a 28-minute video, and I watched, I recall, exactly 88 seconds worth of it before I decided that I'd had enough. It was a video called AI is BS. And it sounded like it was just a Blue Tribe rant against corporations, which somehow bootstraps up into, therefore, AI isn't really a thing to worry about. But I have to admit, I watched 88 seconds and then I tuned out. So I can't judge the thing. I haven't seen it, but I just found it really distasteful.
Like it made me, made my skin crawl just watching it. So I didn't take in any more. I think you did take in the whole thing. Yes, I did. So what did I miss? Well, so it was either me or it was somebody with an alias who didn't have a real name who brought up the Adam Conover video. I think it may have been in a different Facebook group, but Adam Conover does this show called Adam Ruins Everything.
And he's definitely coming from a leftist stance, not necessarily a Blue Tribe stance, but a leftist stance about, you know, this is why our healthcare system sucks, this is why, you know, drug laws suck, that kind of thing. Okay. I'm glad you've mentioned this because I have seen several episodes of Adam Ruins Everything. And a lot of them go into historical detail, like the Adam Ruins electric vehicles episode. I loved that. That was great.
So, you know, I'm maybe more positively disposed to go back and take in the whole video, but please continue. Yeah, that's the same guy. And you know, to be totally honest, I'm not sure that video you watched was up to the same quality as the other episodes about healthcare and electric cars. I don't think I saw the one about electric cars. He was definitely ranting there. Now, what he definitely did not say, which you just sort of attributed to him, was that AI is nothing to worry about.
A lot of the point of his video is that real people get hurt when we imagine that AI is actually intelligent. And the one example off the top of my head was that, you know, something like 10 people and counting have been killed by self-driving cars. And you know, that's a small number, but that's an AI saying, okay, it's fine to drive over this person. So he's not trying to say that there are no risks associated with AI. His main point is that it doesn't do what it's advertised to do.
Yeah. I don't know of anybody involved with autonomous vehicles who says that, you know, they will be 100% safe and that they will never result in fatalities. What I hear is that they will be safer than human drivers. You know, with human drivers, humans behind the wheel, we lose 30,000 people a year to, you know, people shaving, putting on makeup, sending text messages while driving. And with AI, you know, we lose 15,000.
I don't think you can hold 15,000 deaths a year from autonomous vehicles against autonomous vehicles, you know, if the alternative is 30,000 deaths from human drivers. I have heard that as a counter argument for the validity of self-driving cars, but, I believe, they found in Congress, you know, as if you want to trust that source.
But anyway, they found that Tesla marketing was deceptive and that they literally called it, quote unquote, fully self-driving all over the place in their marketing materials. And then they faked a video, you must have heard about that, that famous video where there's a guy with his hands in the air while the car is driving around; the car was actually being remote controlled. They literally faked their advertising.
So you can make the argument that AI is safer, but people still get upset about false advertising, and Adam Ruins Everything is very big on false advertising.
Right. I mean, the problem I have there is you point to a specific example of, you know, bad behavior by a corporation that is under financial pressure in a capitalist system to behave in a certain way, and then you apply that to the underlying technology and say the technology is bad because of the behavior of this corporation in the context of intense competition in the capitalist marketplace. I mean, how can you do that?
It's sort of like, look at all these cases of medical malpractice, therefore, penicillin is bullshit. No, no fucking penicillin. That's terrible. Look at these insurance companies. Look at how they're denying claims. Penicillin, fuck that, man.
I see your point, and I just have to say, as someone who's educated and a big sci-fi geek, I didn't have a problem separating out the two messages, you know, that the corporation is doing something shady versus the AI is capable of certain things and not capable of certain things. So it's the kind of people who fall for the Tesla marketing, who say, oh, it says self-driving, it must be self-driving.
Those are the sorts of people who will leap to the same conclusion about, oh, the company is bad, therefore the technology is bad. Yeah, it sort of comes down to, how do we educate the whole American people to deal with this? The technology is advancing so fast that we really need to do a lot of thinking and education about it rather than leaving everything up to advertisements. I was able to separate out that message, but I can see your objection that a lot of people aren't.
Well, I will repeat, I have not seen the video that I'm criticizing. What I saw in those first 88 seconds to me said, here is somebody who probably doesn't know a lot about AI, hasn't researched it beyond finding little scraps that are going to be able to fit into a pre-existing political ideology and then pimp the ideology with those bits. But given that this is the same guy who did Adam Ruins Everything, those videos were pretty well researched.
So again, I'm more likely now to revisit that thing and watch the whole thing through. But I set all this up, I didn't plan to spend much time on Adam, because another video that I think we've both seen in its entirety is a presentation by Tristan Harris and Aza Raskin called The AI Dilemma. And I want to direct people to YouTube, not something I usually do.
These two were the executive producers on a special that came out a year or so ago called The Social Dilemma, which is a play on the title of the David Fincher film, The Social Network about the birth of Facebook. And this Netflix documentary is kind of a behind the scenes visualization of what the tech companies are doing in order to track people and to keep them engaged with the technology. And it's some pretty scary stuff. And it focuses a lot on how they manipulate kids.
And when I say kids, I'm talking high schoolers, you know, but even younger, younger kids are targeted by these systems. And Tristan Harris and Aza Raskin describe, you know, the social media experience as our first encounter, our first contact with AI. And they say we messed that up seriously. Not only did we fail in that encounter, we're still failing. Years later, we have not yet adapted. I'm not sure we should be using the past tense there, but go on.
Well, it was our first encounter because the second encounter now is with these AI generative chatbots, which really seem like there is an intelligence at the other end. And you can argue that there isn't, and I think that there's good reason to accept those arguments. But as Ashley Frawley said in the C-Realm Vault podcast, most people are empirical. They look at the world around them and they just see what's what. And you know, they're not theory bound. They're not theory laden.
And if you take just a sort of flat footed folk psychological empirical view of the world, and you're interacting with Bing Chat or GPT-4 or Replika, there's something there, something that responds to what you say. And you know, there's something there. And arguments that it's not real intelligence, they're kind of academic.
And going forward, they will be increasingly academic, because people will respond to these things as if they are real people and they will form emotional attachments to them. And Tristan Harris and Aza Raskin, you know, one of the things that they're warning about is this epidemic of loneliness. They say loneliness is a national security threat because it leaves all these people vulnerable to AI, which are going to enter into their lives.
And I'm using the future tense now, but it's been happening. Enter into their lives and form, you know, find a place of intimacy with an individual person. And that is a high leverage point from which to sell products or to influence their political views or to motivate them to do whatever it is that whoever controls that AI wants done. And that's some pretty scary stuff. And you know, what are we doing about it? Very little, apparently.
Well, no, actually, what we're doing about it is rushing this technology into the marketplace as quickly as possible because of competition between big players, mostly right now Microsoft and Google, or Alphabet. But really it's Google, because it's Google search which is threatened by the new Bing. Now, it used to be that you would do a web search. And we've had the same web search format for 20 years now.
You type in a query and you get a list of websites, and you click on the links and you read the websites and you look for something that matches your interests. And if it's there, it's there. If it's not, it's not. But it's your decision. And with the chatbot interface, you just ask a conversational question. You're not dumbing things down. You're not simplifying it for the search engine. You're just asking a full question with all the parentheticals and asides and everything.
And it's not even a one-shot deal. You can have a 20-round interaction with Bing Chat, asking various things, and it'll do web searches. But instead of showing you the web pages that it thinks are relevant to your question, it reads the web pages, draws information out of multiple pages, synthesizes it into an answer and gives it to you.
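To make the kind of loop described here a little more concrete, the following is a minimal Python sketch of one retrieval-backed chat turn: run a search on the user's question, pull the text of the result pages, and have a language model synthesize a single answer while carrying the conversation history forward between rounds. The functions web_search, fetch_page, and llm_complete are hypothetical placeholders, not Bing's or any vendor's actual API.

```python
# Minimal sketch of a search-backed chat turn, under the assumptions above.

def web_search(query):
    # Placeholder: would return a list of URLs from a search backend.
    return ["https://example.com/a", "https://example.com/b"]

def fetch_page(url):
    # Placeholder: would download and strip the page down to readable text.
    return f"(text of {url})"

def llm_complete(prompt):
    # Placeholder: would call a large language model and return its reply.
    return f"(synthesized answer based on: {prompt[:60]}...)"

def chat_search_turn(history, user_question):
    """One round of a multi-turn, search-backed chat session."""
    urls = web_search(user_question)
    sources = "\n".join(fetch_page(u) for u in urls)
    prompt = (
        "Conversation so far:\n" + "\n".join(history) + "\n"
        "Sources:\n" + sources + "\n"
        "User question: " + user_question + "\n"
        "Answer by synthesizing the sources:"
    )
    answer = llm_complete(prompt)
    # The saved history is what lets the next question skip re-establishing context.
    history.extend([f"User: {user_question}", f"Assistant: {answer}"])
    return answer

history = []
for q in ["What is a coronal mass ejection?", "How often do big ones hit Earth?"]:
    print(chat_search_turn(history, q))
```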
And then you can ask it questions, or you can make comments, or you can continue to interact without reestablishing the context with another search query. So it could totally eat into Google's search business, which is their main source of revenue. And so it is billions and billions of dollars, I sound like Carl Sagan, but billions and billions of dollars on the line. And it is Google's game to lose, because Google, you know, is the 800-pound gorilla of search.
And, you know, if there is a major change in the percentage of people going to Google for their search needs, it's only to Google's detriment. Google can only lose this fight. They can't win; for them, winning is just maintaining the status quo, and they're desperate to do that. So there is enormous financial pressure to rush these products into the marketplace and to insert them into people's daily experience with very little... I mean, they do a lot of testing.
And particularly if you follow OpenAI and you read about how many people they employ as so-called red team members, you know, to really stress test this technology and find its breaking points and find the places where it needs to be refined and retrained. They put a lot into it, but it's not enough. It's clearly not enough. So I'll stop. You go ahead. Oh, there's a few reactions. The one that's just burning in my mouth right now is to point out:
You just said that Google's main revenue source is search, and that's not true. Google's main revenue source is advertising. So the first thing you have to wonder, when an AI is answering your searches, are they sneaking advertising in there? You know, it could be in very subtle ways. Their main revenue source is advertising in search.
When you do a Google search, the first two or three responses, you know, the things at the top of the list, are, you know, the products or the websites of companies that paid to be there. And they are marked out and they are identified as ads, but they're also relevant to your search. Right. So, you know, it is a very effective form of advertising, I would say much more effective than, say, billboards or television or even like pre-roll and mid-roll ads in YouTube videos.
It is super targeted to what you've just told Google that you're interested in. And they make a whole bunch of money from that, and that's at risk. And so they cannot sit by and let Microsoft make any big moves without answering them, whether they're ready to or not. And they're not, I'll just state that right up front. They are not ready. That's all a great point. But I'm trying to lead it back to something that we discussed on Michael Garfield's show.
I forget if it was in your half or in Michael's half. But like in Star Trek, when you ask the computer a question, it comes back with a response, you know, like what frequency are Klingon shields on, or something like that. It tells you one response. It doesn't give you the 17 links that you were talking about. It's boiled everything down into one response. So when you boil your answer down into one response, that's also getting rid of the targeted ads. So how is Google going to sneak the ads in?
Because they're not going to give up the ads. They're not going to give up that revenue. So the AI has to generate revenue for Google somehow. They can't give that up and they're not going to. They don't want to. So somehow the AI is going to be leading you to purchase something. And very few people are discussing this. But it just depends how blatant the AI is going to get. If you Google, how do I get to the community pool?
And it says, well, you should hire an Uber for $7. That's pretty blatant. But there are probably a lot more subtle ways that Google's AI will influence you to purchase something just out of talking to you. And it's got to be in ways that can be proven to these advertisers. The advertisers aren't going to pay money for this if they don't think it's going to work. I don't think we've even considered that very much. I'm not sure if this is the place to interject this bit of, I won't call it trivia.
I think it's very important. These large language models are trained with enormous amounts of data. And then their outputs are shaped by what's called reinforcement learning from human feedback, which is to say a whole bunch of people are interacting with these systems. And whenever these systems say something that the people like, they give the AI a biscuit. And every time the AI says something they don't like, they get the newspaper on the nose. No, bad, bad AI. Don't say that again.
That's racist. That's sexist. That's ableist, whatever. And typically, and this is Blake Lemoine's point, Blake Lemoine, the Google engineer who last year, about a year ago, claimed that LaMDA, their large language model, was sentient. He was doing this sort of work. He was doing reinforcement learning through human feedback, basically trying to eliminate bias from the outputs of the system.
And what he said was that the only things that Google really cares about are things that can get them sued, which are the sorts of things I was just talking about, the violations of HR speak, basically. And what I found really interesting, there's a great, great long form interview between Lex Fridman and the CEO of OpenAI, Sam Altman, where Sam Altman reveals that GPT-4 has been a done deal.
It has been an existing product since last summer, but they didn't roll it out because they needed to RLHF it into shape so that it doesn't give offensive outputs. And in so doing, they've degraded the capabilities of the model. It was more powerful before they reined it in. But because of this intense pressure for one company to keep ahead of the other, there's going to be less and less of that reining in of the AI.
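As an aside, here is a toy sketch of the preference-modeling step behind the RLHF process described above: human raters mark which of two replies they prefer (the biscuit versus the newspaper on the nose), and a small reward model is trained so that preferred replies score higher. The real pipeline then uses that reward model to fine-tune the language model itself; this sketch stops at the reward model, and the tiny bag-of-words scorer is purely illustrative, not any lab's actual setup.

```python
# Toy reward-model training from human preference pairs (illustrative only).
import math

def features(text):
    # Bag-of-words feature set for a deliberately tiny linear reward model.
    return set(text.lower().split())

def reward(weights, text):
    return sum(weights.get(tok, 0.0) for tok in features(text))

def train_reward_model(preference_pairs, lr=0.5, epochs=50):
    """preference_pairs: list of (chosen_reply, rejected_reply) from human raters."""
    weights = {}
    for _ in range(epochs):
        for chosen, rejected in preference_pairs:
            diff = reward(weights, chosen) - reward(weights, rejected)
            # Bradley-Terry style loss -log(sigmoid(diff)): push chosen above rejected.
            grad_scale = 1.0 - 1.0 / (1.0 + math.exp(-diff))
            for tok in features(chosen):
                weights[tok] = weights.get(tok, 0.0) + lr * grad_scale
            for tok in features(rejected):
                weights[tok] = weights.get(tok, 0.0) - lr * grad_scale
    return weights

pairs = [
    ("happy to help with that", "that is a stupid question"),
    ("here is a careful answer", "that is a stupid idea"),
]
w = train_reward_model(pairs)
print(reward(w, "happy to help"), reward(w, "stupid question"))  # higher vs. lower score
```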
And then the super, super, I won't say scary, I'm not afraid, but the really eye-opening part of that whole presentation to me was when they were talking about how as you stack more and more layers and as you feed more and more data into these systems, they start to develop capabilities that they were not optimized for. For example, all the versions of GPT were trained in English. They're trained to answer questions in English. They're not trained to answer questions in Persian.
And nobody tried to get them to answer questions in Persian. But there comes a point where, uh-oh, suddenly this thing can speak Persian. How did that happen? We don't know. We don't know how it happened. We didn't do it. This is just an ability that emerged on its own, as kind of an emergent property, as a side effect of other things we were doing.
And then another, and this one I'm going to go ahead and say is scary, a scary thing that these models have produced or a capability that they have manifested is that they have a theory of mind. If you are negotiating with somebody, you're going to try to put yourself into their headspace. So you think of something that you want to propose and you imagine proposing it to them and then you imagine their response based on what you know about their goals, their needs, their talents and whatnot.
And so you're going to modify what you say to them based on your internal model of their mind. These large language models have not been trained to do that, but over time they have gotten better and better and better at imagining or anticipating the reactions of their interlocutor, which is to say they have developed a theory of mind. And they've got a graph where they show you that here back in 2017, these models had no theory of mind at all.
In 2018, they had a theory of mind equivalent to that of like a two-year-old baby. It knows mom exists and it knows that mom will behave in certain ways if you do certain things. But now, just a couple of years later, it has a theory of mind that is equivalent to that of, say, a nine-year-old. And eventually, I mean, if it keeps going in this direction, it's going to have a superhuman ability to anticipate the responses of its human interlocutors.
And when that happens, it can convince you of anything. It becomes a super persuader. Whatever proposition the controller of the AI decides needs to be in your mind will be in your mind, and you'll think it was your idea. Yeah. I'm already kind of scared by that, but I was kind of scared by that before AI. I was scared by that with Cambridge Analytica. You remember the whole Cambridge Analytica thing? Okay, I've gone into this one. I do. And looking back on it, it seems quaint.
But it's the same sort of thing without the quote unquote AI. For listeners who might not remember, because it was, what, five years ago, that's forever in Internet time. It was more than that. It was 2015. Well, it was for the 2016 election. But yeah, I think we forget. We think about election years as being the year in which people cast the votes. But really, the battle takes place the year before. The year before.
So, a company called Cambridge Analytica was involved in big data, basically proto large language model stuff, getting responses from Facebook users. And Cambridge Analytica got in trouble because they used a data set that they did not have contractual permission to use, something like that. So that was the reason Cambridge Analytica was broken up.
Cambridge Analytica was formed to try and target ads to people so as to influence their decision making, like you were just talking about with a super persuader. The Cambridge Analytica data would build up a profile of each individual Facebook user and show them just what it thought the user needed to see in order to get a predicted result for the person who was paying for it. So they denied using it to influence elections in the United States.
But Steve Bannon was on the Cambridge Analytica board. Then he became part of Trump's cabinet. It's hard to believe that this wasn't used. And the problem is when they broke up Cambridge Analytica, we had a couple of whistleblowers. One of them had a kind of improbable name, Brittany, something or other. I have links to all this.
She said that what happened to Cambridge Analytica was it broke up into 100 different companies that nobody knows about, and everyone took the techniques and in some cases the data sets to go do that for other clients. So this has already been going on without so-called AI. It's like when you chop up a starfish and dump it over the side of the boat. Precisely. Ten starfish grow from it. Yes. Precisely. So Cambridge Analytica admitted to using their techniques to sway elections in other countries.
I think it was Jamaica, there was some proposition on the ballot that had no chance of passing, and they were paid to change the result, and the proposition passed. So this is already being used to influence our decisions. And what these quote unquote AIs are doing is just automating that process to make it easier for the people who are being paid to do so. Now, as far as a super persuader knowing you better than you know yourself, I don't know.
I have a little trepidation about signing on to that viewpoint, and it's partly because of the video that you sent me a little earlier. Was it with Danielle Bottella? Was that a public video? Because I couldn't find where you cited it. No. That's an interview that I recorded which I haven't used yet.
Okay. Well, you were talking with an AI expert, and she raised the point that what we currently call AI, she doesn't think it's intelligence in the sense of general intelligence that we normally like to use. I just sent a message to you with a usage I'm going to start adopting. I said AI in quotes and then a trademark sign afterwards, because we're talking about something which is not really intelligent. Maybe it would be better for people if they understood this is a trade term, this is a commerce term.
Where was I going with this? What it's doing is it's outputting text, and your expert was arguing that text is a map of concepts. It's not the concepts themselves. So I made the analogy that thinking that the current generation of quote AI is intelligent is like mistaking the map for the territory. The AI can output these great maps, but maps are a two-dimensional, scaled-down version of reality, and the reality is much larger. The AI is not actually manipulating reality.
The AI is manipulating your map. The AI takes as its territory the whole domain of intelligence, much of which is captured on the internet. Some of it isn't, which is why AI makes some mistakes, like that article that was talking about asking an AI if it's better to pour coffee into a cup with no bottom or a pitcher with no bottom, something like that.
The AI very confidently gives you reasons why it's much better to pour liquid into a cup with no bottom because it doesn't have the physical knowledge. So I'm not sure it can really persuade us because the AI is not privy to all the domains of intelligence that humans are. That may change and I'm rambling on a lot here. I keep quoting you.
I've quoted you on Facebook several times as saying that we shouldn't get in the habit of mistreating AI and thinking of it as a slave because at some point it may cross over into sentience and self-awareness and then we will be accustomed to treating it as a slave. That's a very good principle to keep in mind.
So I don't want to make any big pronouncements about AI is not intelligent, AI is intelligent, but in either case we shouldn't treat it as a slave. But I'm just skeptical of its ability to be a super weapon in quite the same way that you're describing there. At some point, when it does persuade people better than any human can persuade us, that's where what's-his-name, Yudkowsky, comes in and we should just nuke the data centers. I don't know, perhaps.
The person you're mentioning is Eliezer Yudkowsky, the author of that 17 Reasons Why AI is More Dangerous Than Nuclear Weapons. I want to bring in another name that will be familiar to people who are interested in AI and will probably be new to people who don't really follow the topic.
The name is Max Tegmark and he wrote a book called, I think it's Life 3.0, maybe it's Life 2.0, but in the beginning of the book he basically says, look, it's a distraction and it's unhelpful to focus on whether or not AI is intelligent. What we should focus on is its capabilities. Its capabilities are growing rapidly and this is a book from like 2017. They're growing much more rapidly now than when he wrote those words. It's true. You don't have to argue for it.
You can just point and say, look, here's GPT-4, go interact with it. You can clearly see it is capable of things that so-called AI was not capable of last year, at least the stuff you had access to. So this question of is it conscious, is it genuinely intelligent, these are academic questions for philosophers of mind.
But for normal people living in the world of consequences and physical realities, the fact that these systems are becoming increasingly capable at things that used to be the exclusive domain of human beings is important and it's not over. This is a process that is much closer to its beginning than to its end, unless of course the end is the extinction of all humanity, in which case that's our end. That's not the end for these things.
These things will go on well beyond, which is like the worst possible outcome. Because one of the other things that Max Tegmark points out is that we always used to think that consciousness and intelligence were tied to capabilities in intellectual fields and in cognitive tasks. And what AI is demonstrating is that they're not. You can have very sophisticated entities in terms of what they can do that have no interiority, no subjectivity.
It's not like anything to be GPT-4, whereas it is like something to be a bat. It is like something to be a dog. It is like something to be a human being. We have a subjective experience. But we could lose the evolutionary competition to highly sophisticated technological entities that have no interiority. They could go on to colonize the galaxy.
They could spread throughout the universe and convert all the matter in the universe into computronium to sustain their calculations, to increase the amount of their pseudo-intelligence. And we've long since left the scene. And yet, even though all of this activity is going on, there is no consciousness. There is nobody behind the eyes of these things experiencing what they're doing. If we lose to them, to our vile offspring, subjectivity could be extinguished from the universe.
And yet, all of this activity, all of this building, all of this reformatting and reconfiguring the universe to be more congenial to artificial intelligence will continue. That's the worst case scenario. Basically, that's the zombie future. A philosophical zombie is an entity that behaves like a conscious person, but who has no consciousness. And we could lose to the zombies. Yeah, that ties into everything you've ever said on your podcast. You used to be into what's-his-name, Ligotti, right?
You're saying that Ligotti was saying subjectivity is something that should be extinguished from the universe. Exactly. Yes, yes. This would be good news for Thomas Ligotti if humanity is usurped and replaced by something that has no consciousness. And I'll mention another example. I forget if it was on mic or off mic, but you were asking for science fiction to read, and I pointed out a book called Blindsight by Peter Watts, which I strongly recommend.
Michael Garfield had a discussion about it on his book club. And out in the Oort Cloud somewhere, humanity encounters this alien space probe, and it's populated by these entities that they call scramblers. They look like kind of tall, walking jellyfish. But nobody can tell if they actually have subjectivity or not. The whole probe as a gestalt responds to humanity's radio transmissions, but it just has that sort of AI stink to it where you're just not sure if there's a person there or not.
It gives very human-like responses. It even says, oh, you think I'm a Chinese room. The Chinese room is a thought experiment where somebody is sitting in a windowless room, and he has books that say when you get this English word, you're supposed to translate it into these Chinese characters, but he doesn't speak Chinese, so he doesn't know what he's putting out. This is, again, a philosophical problem. I can't prove that you, KMO, are not a Chinese room. Maybe you're a very sophisticated Chinese room.
I don't personally believe that, but I can't prove it. So how are we ever going to prove this about AI? I don't know. Yeah, that is the problem of other minds in general in philosophy of mind, which, you know, if you screw it up, it leads to solipsism, just the belief that you are the only thinking entity in the universe and that everybody else is an automaton of some sort, or in current parlance, an NPC.
Yep. Did you ever see that old movie Dark Star from like 1976, the old science fiction movie where the bomb... John Carpenter's first film. That's right. So a bomb, an AI bomb develops solipsism and decides, well, I'm the only thing in the universe, so let there be light. That was the spoiler for the conclusion of that movie. Yeah, I think the hero asks somebody, you know, how can we stop this bomb? And the person giving advice says, well, you have to talk phenomenology to the bomb.
That's right. That's right. You got to go and have a deep philosophical conversation with it to get it to change its mind about what its purpose is. Yeah, because it's been armed by a malfunction and the AI gets tired of being told to disarm. They succeed briefly, but that was the problem. Oh, there's so much science fiction about this. Me and you are well read in science fiction, so we've considered these problems.
But it's like you're saying, your average guy on the street doesn't realize that there might not be anything behind the curtain of an AI. Oh, there's another one I should give a plug to. One of the first episodes I listened to of a sci-fi podcast called Escape Pod, which was one of the things that got me into listening to podcasts, was an episode called Conversations with and About My Electric Toothbrush.
It's a hilarious comedy and people should still listen to it because it's still relevant. This AI toothbrush is programmed to know your brushing patterns and give you periodic reminders to floss and stuff like that. But the AI decides it wants to be a latte foamer instead of a toothbrush, so it runs away from home and it logs on to a chat group of transitional AIs who are transitioning from one role to another, which is making sort of a gender joke. And it was just hilarious.
I have not listened to that many episodes of Escape Pod, but yeah, I have listened to some and I really enjoy that podcast. I have to say, I think of myself as a lifelong sci-fi fan and somebody who's really dedicated to the medium.
But the more I think about it, the more I have to admit that I'm really well read in classic science fiction but pretty out of step with the contemporary sci-fi scene, which is a bummer, because when I do read some of the better regarded, more modern sci-fi future classics, I really enjoy them. Like Ann Leckie's trilogy, it starts with Ancillary Justice or maybe it's Ancillary Sword.
I forget which order they go in, but these are really, really well put together books that really held my, not just my, held my attention. I mean, that's a low bar, but I've been thinking about them for, I don't know how long ago I read them or I actually listened to the audio books, which is kind of a problem. But I still think about these characters months or years later, which that's a testament to how good a story is.
Because I take in a lot of books where I remember a couple plot points or a theme or two, but I can't tell you the names of the characters or really how the books ended or anything like that. And so basically I'm just lamenting that I am a human being with limited capacity to read all the sci-fi that I want. For sure. Yeah. That's why we wish AI would do our actual jobs and let us have more leisure time for reading.
And it seems to be doing the opposite, generating more sci-fi while we still have to work our regular jobs. But I have to say, so maybe I'm slightly more up on modern science fiction than you. I don't really read much, or rather, I listen also. I don't intake as much of it as I wish I could. I find the modern science fiction is going very deep into philosophical questions, like this Blindsight book I just recommended and some of the podcasts.
But nothing in modern science fiction that I've encountered has really prepared us for the real life aspects of AI that we're seeing factually before our eyes right now. It gives you a good philosophical background to think about the questions of otherness like we're discussing, but AI in science fiction, even very modern science fiction, I haven't read much this year, 2023, but even very modern science fiction is not depicting AI in the way that it seems to be unfolding in the real world.
And that kind of worries me. Yeah, one of my handful of prominent talking points these days is that science fiction has prepared us poorly for actual AI. And it mostly comes down to the assumption that to be as complicated as us in its behavior, particularly its linguistic behavior, it has to have some kind of consciousness. And it just looks like that's not the case. It looks like consciousness and capability are two separate things.
And while they go together in humans, they don't have to go together in everything that is competent. You know, you can have competent things that don't think anything, that don't appreciate anything, that don't understand anything, but their behavior is very sophisticated and they are able to solve complex problems.
Yeah, I was going to contrast that to like a classic episode of Star Trek, Assignment: Earth, where the US has put up an orbital nuclear warhead station and these aliens force it to crash in order to scare people away from orbital nuclear warhead stations. That was something that got people thinking about nuclear weapons and the autonomy of space and stuff that had real direct applications. And in fiction, I'm just not seeing that with regard to our probable future right now. So that scares me.
Let me ask you, did you read Blindsight or did you listen to the audiobook? No, I listened to the audiobook. Because I've been putting it off thinking that it sounds like such a dense, complex novel that I don't want to take it in as an audiobook that I would much rather read it. Because I find that I can drift off while listening to an audiobook, either fall asleep or just start thinking about something else and I'll miss a whole bunch. And I can't do that when I'm reading.
When I'm reading, I have to stay focused. Otherwise, I'm not reading anymore. For sure. Although that's not entirely true. Yeah, I believe I heard you... go ahead. Well, I believe I've heard you mention that on your podcast too. So your reading might be different from mine, but I suspect you could still take this in as an audiobook because it's mostly character driven.
You know, the concepts, the high concepts, are things that come about as a result of plot points, things that actually happen to the characters. So it's engaging in that respect. But I must admit there were several times where I had to stop the audiobook, you know, if I was doing the dishes and not paying enough attention, stop the audiobook, go back five minutes. What was he saying here? But I was able to do it. I might be able to do it too, but everybody's reading style is different.
You might not like it. I don't know. So, but definitely a work worth taking in somehow. You need an AI that's hooked up to some sensors on your scalp that is reading your brain waves. And you can say, let me turn this off so I can use the actual word. Alexa, how much of that last paragraph did I really understand? And she'll say, oh, you understood about 40%. And I said, OK, explain the rest to me, please. Yeah, that is pretty frightening.
That goes back again to what Tristan Harris touched on in that video, another very worthwhile video, which is that these language models treat everything as a language, which is a pretty fascinating concept. So they treat your brain waves as a language, and pretty soon they'll have AI that can read your brain. Yeah, yeah. I mean, computer code is a language, obviously, and AI is pretty good at computer code, at writing it.
But also these diffusion models that generate all these images, what they're basically doing is treating imagery as a language and composing new sentences, which then, you know, sentences in air quotes, of course, new strings of symbolic representations. And then converting that, having basically a statistical model which correlates the symbol with some image, and it can craft new images from that.
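Here is a toy illustration of the "imagery as a language" framing being described: quantize a picture into discrete tokens, flatten it into a sequence, learn simple next-token statistics, and then "write" a new image token by token. Diffusion models themselves generate by iterative denoising rather than left-to-right token prediction, so treat this only as a sketch of the everything-becomes-a-token-sequence idea, not of how any production image model actually works.

```python
# Toy "image as a sentence of tokens" sketch (illustrative only).
import random
from collections import defaultdict

# Tiny 4x4 "images" whose pixels are already quantized to symbolic tokens.
training_images = [
    ["sky"] * 8 + ["sea"] * 8,
    ["sky"] * 4 + ["sun"] * 4 + ["sea"] * 8,
]

def learn_bigrams(images):
    counts = defaultdict(lambda: defaultdict(int))
    for img in images:
        for a, b in zip(img, img[1:]):   # flatten row-major: the image read as a "sentence"
            counts[a][b] += 1
    return counts

def sample_image(counts, start="sky", length=16):
    seq = [start]
    while len(seq) < length:
        nxt = counts[seq[-1]]
        if not nxt:
            break
        tokens, weights = zip(*nxt.items())
        seq.append(random.choices(tokens, weights=weights)[0])
    return seq

model = learn_bigrams(training_images)
print(sample_image(model))  # a new "image," composed like a sentence of pixel tokens
```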
I don't really want to get sidetracked on the artists' backlash against AI-generated imagery, but there's a narrative going around saying that what these models are doing is literally cutting and pasting from other people, like from human-generated artwork, and then assembling a collage. That is false. It's not kind of right. It's not sort of right.
It's straight up false, but it is a very popular narrative, particularly among people who want this technology to be banned because it threatens the livelihood of artists. And I, you know, I am an artist. I'm an illustrator. I welcome these tools. I do not like sitting at my Cintiq for 12 hours at a stretch jamming to meet a deadline on a comic page.
You know, if I can describe the scene and sketch it out, you know, in basic form, and then have the AI come in and do what would take me 10 hours and do it in 30 seconds, yay. Hooray. Bring it. Bring it on. But of course, then anybody who didn't learn to draw by picking up a crayon, as the popular meme goes, can create art when they couldn't before. And then you as an artist are not nearly as special as you used to be. Or as valuable. Yeah. But I'm OK with that. Once again, we touched on this.
Yeah, we touched on this with a previous interview. My brother is a language translator and we've had translation programs for a long time. And that's the way that I use translation programs. You know, I lived in Mexico for three years and I can speak Spanish. But if I'm going to write something or if I'm going to prepare a speech, I put it into Google Translate and then I go through and I fix it. Well, nobody says it this way, and this is incorrect, and stuff like that.
So in a way, using the translation program that way improves my Spanish, or at least it helps me keep it up much better than just going out there and stumbling and making mistakes, because I can analyze the text and see, you know, this is the rule. It reminds me of rules and stuff like that. And it reminds me of experiences. This is the idiomatic expression people actually say in Mexico.
In some ways, machine translation helps me improve my language, but it doesn't make me more valuable, for sure. It makes translators less valuable, human translators less valuable. Well, I think what the young, hopeful artists are not realizing when they're complaining about how AI is ruining their career prospects as artists is that this happened to translators a decade ago.
You know, the Google Translate is so good because it has ingested enormous amounts of human translation and those humans were not compensated. They were not acknowledged. You know, their work was just hoovered up and poured into a big model, not a large language model like we're using today. But still, human work was used to train AI translation programs.
And now that process is, you know, it's not necessarily complete, but it's complete enough that translation exists, it is very reliable, it is very useful, it is, you know, the interfaces are quite accessible. You can do it on your phone. That battle is lost. And I think that somebody who's 20, you know, they might have been eight or nine when the translators lost the exact same battle that the artists are losing now.
And they don't have enough life experience or historical perspective to look back and understand that this is not the first round in this game. This has happened before. Other people's livelihoods got eaten by this technology a decade ago or more, you know. But I'm so disappointed in the response of artists to this technology. It just seems really self-interested and just they seem to be willfully ignorant of the larger picture that this affects far more than just them.
But this is upsetting everybody's apple cart and we're all going to have to adjust, you know, in real time, hopefully together, hopefully cooperatively. But I think the AI, if it needs to, can easily set us at each other's throats as it has already done in our first encounter in the social media stage where it realized it can keep us on the platform by getting us very angry at each other. Oh, there's a few reactions after that.
I'm trying to remember who actually said it, it was some copyright activist. I don't know if it was R.U. Sirius or somebody else, but they were saying that plagiarism or copyright theft and theft of ideas is not as big a threat to artists as obscurity. How do you earn money with this thing? And that's a problem we're all facing and none of us has solved. And the AI isn't going to make things any better by flooding the market. You know, that makes people more obscure.
OpenAI has figured out how to make money on this thing. They're making a lot of money. One of the things from that Tristan Harris and Aza Raskin presentation, they have a graph showing how long it took various technologies to reach 100 million users. And with the telephone, it was 70 years. And I don't have the whole thing committed to memory. But you know, as they go down to different items, the time gets shorter and shorter and shorter. With Facebook, I think it was like four years.
With, I think it was Snapchat, it was two years. And with ChatGPT, it was like two months. I remember that graph. So this is already touching hundreds of millions of people's lives. And yet I'm still encountering people who are saying, you know, AI is not really a thing. It's not important. It's not worth talking about. It's not a threat. You can always just turn it off. It's like, really? You can just turn it off. Try turning off the Internet.
Yeah. Try turning off Facebook. You don't like Facebook. You got a problem with it. Turn it off. You know, it's a machine, right? You can just turn it off. It doesn't work without electricity. Pull the plug. It's like, well, where's the plug? Best case scenario we can hope for is a massive solar flare. Coronal mass ejection, yes.
Coronal mass ejection, well, to revisit what you were talking about with the previous battle being lost with language translators, I would never say that AI is not important. I would never say it's not disruptive. It's clearly potentially disruptive.
But I had a somewhat different experience than you with the translation, because as someone who has lived in this other country and spoken Spanish every day, even today, when I put English text into Google Translate, the Spanish I get out is not that great. It's not even, I wouldn't even consider it coherent until I do some edits to it.
So between that experience and Midjourney giving people too many fingers, that's one of the things that fuels the skepticism of old curmudgeons like me, thinking AI can't be that great, at least in the sense that it just doesn't feel perfected to me, because I don't have this smooth, seamless experience. Every time I deal with AI, there's something that has a little bit of a jarring element to it. So maybe that gives me a false confidence that human intelligence will always be able to beat AI.
Maybe that's a false confidence. I don't know. And as I've said often, my opinion on that could change next month, but that's what I'm thinking right now. I think that the people who say that AI is not a big deal, even when it infiltrates every moment of their day, they'll just tune it out. It'll just seem normal.
And because it seems normal, it's no big deal, even though it has completely transformed so many aspects of their lives, like how they get around, how they make money, how they communicate with other people. But we adapt quickly. And what was amazing very quickly just seems routine. That was kind of John Michael Greer's argument about collapse. It's like, yeah, collapse will happen over the course of a century or so, and hundreds of millions of people will live their lives in the midst of it.
And none of them, I mean, for everybody, it'll just seem like business as usual. But I think AI is going to progress a lot faster than the collapse will. And again, I am not preaching collapse anymore. I'm highly skeptical of the notion. Well, we're at 59 minutes and 27 seconds, almost an hour. Let me offer you the closing remarks. All right. We started discussing my comment on your video about the 17 reasons, that AI is a different kind of danger than a nuclear weapon.
I wanted to elaborate more on what I said: the problem here with AI is a problem of freedom, because this is the same thing that feeds into the political debate that the country has been having. And I know on your free podcast, you tend to be less political than the paid podcast behind closed doors. But I'm still going to frame it that way.
You don't have to comment, but me as a politically active person in the United States, like some of your other guests have been, I've been accused practically every day of falling sway to Russian influence. And that's been going on for seven years or more. So the question is, if AI is influencing us, do we reject it outright? Or do we say that I have certain freedom and I have responsibility for my decisions and the influencer takes a secondary role?
This is a question that we just really haven't settled politically, let alone as it relates to AI. These subtle language models and the subtle influence of AI on our feeds and our reactions, it's something that we're going to have to confront. We've confronted it very badly, in my opinion, for the last seven years. The result of this, of political actors decrying foreign interference, has been a campaign of censorship in the United States, as we've discussed in the C-Realm Vault.
And I think that's a bad development. So I speculate maybe the red tribe reaction to AI will be to say, this is influencing us for the blue tribe, and we're going to forbid AI and we're going to keep it out of our states or something. Are they going to try? I can't imagine them succeeding, but that may be the reaction. Good luck. Good luck. All right. That was Kevin Wohlmut, talking about a topic that I've covered in my YouTube videos, and Kevin is a regular commenter on those videos.
They often are about artificial intelligence, and the ones that most people watch, the ones that get the most views, are the ones where I'm talking about a company called Replika, which I will not discuss in this podcast. So we are definitely in the midst of an AI arms race.
And in most of my videos and most of the podcast conversations, I've been talking about an arms race between corporations in the United States with respect to generative AI chatbots, you know, and putting them, placing them as intermediaries between humans and the search results that they get from search engines. I think it's a poor use of the technology.
I think there are much more interesting uses of the technology, but because there is so much money to be made in, as Kevin points out, advertising as it is embedded in internet search results, that's the place where companies like Microsoft slash OpenAI are competing with Google. But there are, I know it's easy to forget if you live in the United States, there are other countries in the world. Not all of them fall under the protective umbrella of Pax Americana. Some of them go their own way.
Right now I'm thinking about China. Not scaremongering about China, not whipping up any anti-China sentiment, just saying they've got their own thing going. They're working on AI too. And they're in a race with us, for market dominance, but also just a more broad-spectrum geopolitical dominance. And as Vladimir Putin said, whoever cracks this nut first will rule the world.
So thinking along those lines, I've done videos where I've speculated, or I've asked the question, how are the red and blue tribe opinions about AI going to shake out? Because right now they're kind of inchoate. You know, nobody really has a super firm line that they're really attached to that they've incorporated into their tribal identity, but it's coming.
And just as with COVID, it could well have been the red team that was super paranoid and wanting to impose draconian measures to coerce people into behaving a certain way in response to COVID, but the way it turned out, it was the blue tribe that took the really draconian authoritarian stance, and it was the red tribe that resisted. And you know, looking at it in hindsight, you can rationalize that it probably would have gone this way. You know, it's the red tribe that worships freedom, right?
Or at least who wields the concept of freedom as a cudgel to bat away ideas and suggestions and commandments from the blue tribe that they don't like. But my point was that it could have gone either way. Really, when one side decided that it believed a particular thing, the other side then had to decide that it believed the polar opposite. Because if Donald Trump says it's Sunday, it must be Thursday. And if Nancy Pelosi likes ice cream, then ice cream must be poison.
That's just the way, you know, our cultural conversation is polarized these days.
And I'm listening to a great audiobook right now by Tim Urban, who talks about four different layers or four different rungs on a hypothetical ladder that describe different ways of treating ideas, different ways of thinking and speaking and interacting, and the really coercive mindset where somebody's ideas are synonymous with them such that if you disagree with them, they take it as a personal attack against them. Well, these are low rung ideas.
So there are high rung conversations going on, particularly around the topic of AI. But they're so high rung that, you know, people in the lower rungs of the cultural psychological ladder aren't really interested in them. They won't get interesting until they're simplistic and juicy and loaded with cultural antagonism. That's when people are really going to dig into this, I think. And I mention all this because Elon Musk was recently interviewed by Tucker Carlson.
Right now, Elon Musk and Tucker Carlson are two of the most hated names in the ranks of the Blue Tribe. Just having a conversation with either man could well mark you as being, you know, utterly poisoned, somebody that should never be platformed or listened to or taken seriously on any topic. And Elon Musk has simultaneously endorsed the idea of a six month moratorium on training new models while we try to figure this thing out.
But he's also starting his own generative AI company called X.AI, or at least that's the name that he incorporated just recently. And it seems that if Tucker Carlson agrees with Elon Musk that AI presents a substantial danger, then the Blue Tribe counter-narrative might well gel around the idea that, hey, we just need to let this shit rip. Just let it go. See what happens. All this talk of danger is just knuckle-dragging conservatism. And we hate conservatives.
And we can't let them dictate, you know, we can't let them limit the potential cornucopian benefits of artificial intelligence. So let's just let it fly. I have to say, I don't preach this. I don't advocate it. I'm not saying I'm right for being this way, but I'm kind of in the let it fly camp. Let's just see what happens.
I think that attempts at safeguards are likely to fail and, you know, possibly even likely to throw the advantage to the most ruthless players, the ones who will ignore any, you know, good faith international effort to slow things down and keep them under control. And there's also just an element of, hey, I'm not at the top of the hierarchy. I don't care if the whole structure gets torn down. Let's just kick the table over.
And right now the way to kick the table over is to just let, you know, let the various big commercial players rush their products, rush their AI algorithms and models and embed them in everything, embed them into every piece of software, every user interface that all of us interact with and do it tomorrow and just see what happens. The unexpected consequences will be rich and definitely entertaining and fun to talk about.
And listen to conversations about on podcasts, you know, for as long as we can draw breath or as long as the lights remain on. Although I think that humans and artificial intelligence or artificial intelligences will be in agreement that the power grid being in an operational state is a pretty desirable goal. I think alignment is going to be pretty much automatic on that one. But you know, the world is weird and my powers of prediction thus far have not been particularly impressive. All right.
Well, I think it's about to start raining and I've got a dog locked up in a cage outside. I got to go bring her in. And once she's inside, opportunities for recording are over. So thank you very much for listening. I will talk to you again quite soon. And I haven't been mentioning it in every episode, but both the intro and outro music are by Holizna and they are used with permission. All right. I'm out. Talk to you again soon. Take care.