AI and Neural Learning - Best of Coast to Coast AM - 7/22/23


Jul 23, 2023 · 18 min

Episode description

Guest Host Ian Punnett and Dr. Bart Kosko discuss A.I. and machine learning, Russia being able to hack our satellites, digital currency and the possibility of digitally backing up our brains.  

See omnystudio.com/listener for privacy information.

Transcript

Speaker 1

Now here's a highlight from Coast to Coast AM on iHeartRadio.

Speaker 2

All right, so before we move on with the doomsday question, I thought I'd share something that came in on Twitter. @DeaconPunnett is my Twitter handle, and this was an interesting comment. They said, you know, I think this is all being overblown. A lot of people will say this, right? They say that people are, quote, "majorly overreacting to AI. The stupid thing makes arms coming out of people's heads and seven misshapen fingers."

Speaker 3

I don't know.

Speaker 2

Exactly what that means, but here's that line from The Usual Suspects: "The greatest trick the devil ever pulled was convincing the world he didn't exist." And I know we're not talking about a sentient AI, like it was sitting around thinking about how it was going to evade capture. But I do think that there's a parallel here, at the very least in that it's wise that we think about AI, because, as you said earlier, this is going to become increasingly evident as something we're going to have to wrestle with in our lives for years to come.

Speaker 3

Yes, you asked me about disaster scenarios. In this vein, I have something I call digital delirium. It started when I was thinking about the Russian invasion of Ukraine last year; I did some writing for the USC news service on this. My argument, to start at the end, would be that we need to back things up more, so we'll have something to compare against in the future. Because if we get taken over, big or small, I don't think it'll be in an obvious way. It'll be a subtle way you won't really notice. One way to detect that is to check against the past, some kind of system restore point. And I give you our GPS system: the thirty-one satellites that orbit and control the timing of this broadcast, of almost all local area networks, of all banking, airlines, and everything. Now, there's a real risk that Russia, which we know has penetrated our software systems, our grid, and our GPS, could manipulate that. And people may not know it, believe it or not, but we don't have a backup of GPS. Russia built its own GPS system, and it's backed up. China has its own system, and it's backed up. So in an ongoing escalation with Russia, the scenario I'm about to tell you could happen. Or say the Chinese invade Taiwan, that kind of scenario, which a lot of people are wargaming right now. And so a

digital delirium would go like this. Each day, those thirty-one satellites orbiting the Earth are going so fast, and are so far away, that they feel two different effects due to Einstein's relativity, special relativity and general relativity. General relativity, as you recall, says that mass bends or curves space and time. These satellites are about twelve thousand plus miles from the surface of the Earth, and they feel only about one fourth the gravity that you and I feel right now. What that means is their clocks run faster; think about it, if you got closer to a black hole, time would slow down. How much faster? About forty-five millionths of a second a day. And this has to be constantly corrected, right now, as we broadcast, it's happening. So the onboard computer clocks, the atomic clocks, are speeding up relative to what we have here on Earth because of general relativity. But these satellites are also going so fast, about two and a half miles per second, that they feel the effects of Einstein's special theory of relativity, which says, as in the twin paradox, that time slows down. So they slow down by about seven millionths of a second a day. At all times, each day, we're having a net speed-up of our clocks of about thirty-eight millionths of a second. And if that isn't corrected within minutes, utter

chaos breaks loose. So a digital delirium would be just fiddling with that a little bit, setting it off by a few decimal points. That would be the way I would do it. The Russians are great at probability and chess, so I don't think they would just shut down GPS or blind it. It'd be a lot better to infiltrate it and do something smart, and we may not detect that for a very long time. I think the only way to detect it is a backup and a comparison. I give you the example of the HAL computer in the great Stanley Kubrick and Arthur C. Clarke movie 2001. Remember, at the key scene there's a mismatch between the onboard HAL 9000 series and its land-based twin; they didn't match up, and that was the only real signal the astronauts had that their computer was in error. Without that benchmark we are untethered. We've got to keep it, and we just don't think about it.
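The clock offsets quoted above can be checked with a back-of-the-envelope calculation. This sketch uses the standard first-order relativistic formulas and textbook constants; the code itself is illustrative, not anything from the broadcast:

```python
import math

# Physical constants (SI units)
GM_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
C = 299_792_458.0           # speed of light, m/s
R_EARTH = 6.371e6           # mean Earth radius, m
R_ORBIT = 2.6571e7          # GPS orbital radius (~20,200 km altitude), m
SECONDS_PER_DAY = 86_400

# General relativity: a clock higher in the gravity well runs FASTER.
# First-order fractional rate difference relative to a surface clock.
gr_shift = (GM_EARTH / C**2) * (1.0 / R_EARTH - 1.0 / R_ORBIT)

# Special relativity: the moving satellite clock runs SLOWER.
v_orbit = math.sqrt(GM_EARTH / R_ORBIT)     # ~3.87 km/s circular velocity
sr_shift = -(v_orbit**2) / (2.0 * C**2)

net_us_per_day = (gr_shift + sr_shift) * SECONDS_PER_DAY * 1e6
print(f"GR gain : {gr_shift * SECONDS_PER_DAY * 1e6:+.1f} us/day")  # ~ +45.7
print(f"SR loss : {sr_shift * SECONDS_PER_DAY * 1e6:+.1f} us/day")  # ~ -7.2
print(f"Net     : {net_us_per_day:+.1f} us/day")                    # ~ +38.5
```

The net result, roughly thirty-eight microseconds of clock speed-up per day, matches the figure cited in the conversation.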

Speaker 1

I mean, think about it.
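The restore-point idea, detecting subtle tampering only by comparison against an independent benchmark, can be sketched minimally. Every name, number, and threshold below is an illustrative assumption, not anything from the broadcast:

```python
# Hypothetical sketch of the "system restore point" idea: keep an independent
# baseline of expected clock corrections and flag any subtle, sustained
# disagreement between the live system and that baseline.

def drift_alarm(live_us, baseline_us, tol_us=0.5, window=5):
    """Return True if live readings disagree with the trusted baseline
    by more than tol_us microseconds for `window` consecutive samples."""
    run = 0
    for live, ref in zip(live_us, baseline_us):
        run = run + 1 if abs(live - ref) > tol_us else 0
        if run >= window:
            return True
    return False

baseline = [38.0] * 10                       # expected net correction, us/day
healthy  = [38.0 + 0.1 * (i % 2) for i in range(10)]
tampered = [38.0 + 0.9] * 10                 # a subtle, sustained offset

print(drift_alarm(healthy, baseline))        # False: noise within tolerance
print(drift_alarm(tampered, baseline))       # True: sustained mismatch flagged
```

The point of the window is the same as the HAL 9000 ground-twin comparison: a single noisy sample proves nothing, but a persistent mismatch against an untouched benchmark is the one signal an infiltrated system cannot easily hide.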

Speaker 3

How much would it cost to back up? Apparently only a couple, three billion dollars, and we still haven't done it. Stunning. Well, that's at that level. You and I are vulnerable at the personal

Speaker 2

Level, right. But I mean, even just to go with what you mentioned there, it would require us to back it up on an analog system, wouldn't it? A localized analog system, so as not to replicate the problem by giving digital access to somebody with sophisticated hackery like the Russians.

Speaker 3

That's true, but one way around that is redundancy. So if we did, with enough effort, back up enough of it, it might be that we erase it after a period of time. But I don't see how you could correct a large-scale AI corruption without some benchmark, without some ground truth, and that ground truth is changing very fast. That also means you're going to have to build some of these very large, billion- or trillion-parameter systems, if nothing else, just to keep track of it. And we're racing ahead without thinking about any of this stuff. When we put up GPS, I remember going up to UC Berkeley in nineteen ninety-one, Communism was about to fall, and we thought, well, this will really open up a system for smart cars, that we would have smart-car platoons on the freeway. By the way, I discussed this in the book; we did some testing of this on I-15, and we were thinking about that, but people weren't thinking about opponents not just undermining it, but undermining it in a very subtle way, like putting us in a kind of delirium. But it's a very tough problem even if things weren't changing as fast as they are, and they're changing at a truly unprecedented

Speaker 2

Clip. So, but again, an analog local network, at least some of it could be digitized and duplicated. But there is something to be said for old systems, including old satellites. If we're going to use a satellite, it would behoove us not to use a modern digital-based satellite, and we should be able to access older satellites, which would be impervious to digital hacking.

Speaker 3

You know, it would be impervious to some sorts of hacking, but you could still interfere with its analog processing. Really, I don't think that would buy you much, and the satellites are pretty discrete entities out there, quite exposed. Plus, for example, the Chinese seem to have perfected kinetic weapons and others that, in a conflict, just wouldn't care about the programming aspects. But I'm talking about just land-based systems, our own and hostile actors', and no one really knows who or what is out here. The potential for manipulation of an AI system in a subtle way, which I think could be more probable than some gross way. Look, we just had a system failure of a fiber-optic cable, as you said, probably just through noise effects and the raw Maxwell's-

equations physics. So that's difficult enough to deal with. But it's the change in these systems, and it comes back to the fact, again, that this is not AI. This is black-box AI, an important subset, but a subset of AI, and as it gets bigger, the black-box problem only gets worse. And you say, well, how do we solve it? Well, suppose you try to ask it all possible questions and store the answers. First off, it's too many question-answer pairs. But if you could, you wouldn't need the system. And you can't do that anyway. It's learning new things, so it's forgetting old things, but we don't know what it's forgotten when we teach it something new. It's an inherent problem, a black-box problem, and it's just going to get a lot worse as we grow our own black boxes. There are other ways potentially to corrupt the training of our systems, by the introduction of different kinds of data. So the sinister aspects of this, besides Skynet-type takeovers or whatever happens in the new Mission Impossible, are really unlimited here. I mean, every time we have a new digital device, we open up what's called the threat frontier for hackers and a bunch of other bad actors. But with these systems it's a whole different game. Think about it: if you can manipulate the programming, like we can introduce what are called hints, h-i-n-t-s, into the programming, somebody could do that to, as we say, rotate your axes, or get you to think a little more this way, toward my political opinion, or away from the other. A very subtle effect. Extremely difficult to detect something like that.
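A tiny systematic nudge to training data, invisible example-by-example but enough to rotate the learned decision, can be illustrated with a toy one-dimensional classifier. The setup and numbers are entirely my own illustration of the general point, not the specific hint methods discussed above:

```python
import random

# Toy illustration of a subtle "hint" injected into training data: each
# poisoned example looks like ordinary noise, yet the learned decision
# threshold shifts by a consistent, exploitable amount.

random.seed(1)

def train_threshold(samples_a, samples_b):
    """Learn a 1-D decision threshold as the midpoint of the class means."""
    mean_a = sum(samples_a) / len(samples_a)
    mean_b = sum(samples_b) / len(samples_b)
    return (mean_a + mean_b) / 2

clean_a = [random.gauss(0.0, 1.0) for _ in range(10_000)]
clean_b = [random.gauss(4.0, 1.0) for _ in range(10_000)]

hint = 0.05                                  # far smaller than the noise
poisoned_a = [x + hint for x in clean_a]     # every example nudged slightly

t_clean = train_threshold(clean_a, clean_b)
t_poisoned = train_threshold(poisoned_a, clean_b)

# No single poisoned example stands out against sigma = 1.0 noise, but the
# threshold moves by exactly hint / 2.
print(round(t_poisoned - t_clean, 3))        # 0.025
```

The per-example nudge is a twentieth of the noise level, so inspecting individual data points reveals nothing; only a comparison against the clean baseline exposes the shift.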

Speaker 2

So, and let's hope that you're right that it stays extremely difficult. That black-box feature was part of the new Mission Impossible movie, and that's the whole MacGuffin they're pursuing: they're trying to find out where that black box is, because to have the black box is to have control of the world. I'm not giving anything away for people who haven't seen the movie yet; I strongly encourage them to, it's a lot of fun. Okay. So, but then, everything that we talked about there, does that adequately address your concerns? What are your worst-case scenarios, a sort of doomsday scenario, for AI and, for that matter, any type of self-teaching system?

Speaker 3

Not at all. I mean, not in the least. There are so many problems when you combine this with nano issues and warfare, and something else I talked about in the books I've mentioned before: when you start backing up the human brain, which we're going to be doing, likely in our children's lifetime, possibly in ours. I explore this in the novel Nanotime, where the very name refers to the speed-up in time when you run on a chip, chip time, at a billion times per second. The protagonist finds out that his brain has been circumscribed by those he thinks are his friends, to various ends, and you just wouldn't know it had happened. And if you are in a digital system, you can be shut down. This is the legitimate concern I hear about moving to a digital currency in our current political atmosphere: it's just too easy to say, okay, Ian, we don't think you should be purchasing these kinds of things today, or, based on what you've done, a kind of social credit system like the Chinese are already implementing. Anything digital has that risk, and you can do something the Orwellian gods could only have dreamt of: shut somebody completely off, put them in a kind of digital prison. And we're just not thinking that through, let alone the privacy invasions that we've had and that we've talked about. For example, since I last talked to you, a federal court revealed that the FBI has abused the Foreign Intelligence Surveillance Act almost two hundred and seventy thousand times in the last four years.

Speaker 1

Wow.

Speaker 3

And that's up, by the way, Section 702, for renewal at the end of the year. We'll see if that provision gets renewed. But just that kind of thing, our own government and the rapidly expanding surveillance state. And one other point there: once we have a surveillance state set up, we have it. It was greatly expanded by the Patriot Act after the nine-eleven attacks, and the advantage goes completely to the surveillance state, because it gets to take advantage of all these rises in technology without passing any new laws or asking any judges to interpret old laws, just updating the software, updating the hardware, plugging in new modules. That's very difficult to deal with. And you know, we're seeing a lot of testimony now about interactions between the FBI and Twitter, the Twitter Files if you followed it, which have also broken, and that's our own side doing that. Now imagine hostile actors, or something that could evolve on its own, and a Skynet kind of scenario is possible too. A lot of the algorithms in use, including some I myself have patented, use noise to inject a form of creativity into these systems, to help them with what's called hill climbing and other things. Well, it's entirely possible that you could get a system to break off on its own. It may not be consciously aware in a sense that would satisfy psychologists, but enough to continue doing its own actions and be very hard to fight. It could make copies of itself, billions of copies, in a couple of seconds, and I don't see an easy way to deal with that. I do have to credit the big social media companies and AI companies this week for working with the Biden administration to at least put forth a kind of aspirational four-page document, I just read it yesterday, about how we're going to try to make these systems safer. There's no mention, by the way, of the effects of hallucination, but at least it acknowledges the problem. My understanding is that the President is preparing an executive order on that, just to draw attention to it. But it's going way too fast.
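The noise-injection technique mentioned above, using randomness to keep a hill-climbing search from getting stuck on a local peak, is the idea behind simulated annealing. This toy sketch is my own illustration of that general technique, not the patented algorithms referred to in the conversation:

```python
import math
import random

# Noise-boosted hill climbing (simulated-annealing style): random kicks let
# the search escape local maxima; the "temperature" anneals the noise away.

def f(x):
    # A bumpy objective with many local maxima and a global peak near x = 0.
    return math.cos(3 * x) - 0.1 * x * x

def noisy_hill_climb(x=4.0, steps=20_000, temp=2.0, cooling=0.9995):
    random.seed(0)
    best_x, best_val = x, f(x)
    for _ in range(steps):
        cand = x + random.gauss(0.0, 0.3)           # noisy proposal
        delta = f(cand) - f(x)
        # Always accept improvements; accept some downhill moves while hot.
        if delta > 0 or random.random() < math.exp(delta / temp):
            x = cand
        temp *= cooling                             # cool: less noise over time
        if f(x) > best_val:
            best_x, best_val = x, f(x)
    return best_x, best_val

x_star, v_star = noisy_hill_climb()
print(round(x_star, 2), round(v_star, 2))
```

A pure hill climber started at x = 4 would stall on the nearest small bump; the early high-temperature noise lets the search wander across valleys before settling, which is exactly the "creativity" role noise plays here.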

Speaker 2

But that idea of hallucination, that again goes back to a couple of the things that we've already talked about. And you mentioned, and I think this is, you know, a fair point. Are you of the belief, though, that there will be a kind of divine status given to these systems once they go online, or once they develop enough sophistication that people will turn to them and get a direct, audible and/or physical answer, something which often eludes people after prayer, where they will pray for something and don't have any better sense afterward of what they were supposed to do? Do you think that's what this could be?

Speaker 3

Easily. There was an early version of that in the Spielberg-Kubrick movie called A.I.

Speaker 2

Yeah.

Speaker 3

If you remember, the kid is asking about his mother and he gets an answer, and he takes that. But it reminds me of a question a colleague of mine asked years ago. We were talking about the singularity, where machines get so smart we can't see beyond a certain black-hole-like threshold. And he said, what if you speed up a dog's brain? A dog has well over a billion neurons, a chimp has almost ten billion, I guess, and we have about one hundred billion neurons. In comparison, a cockroach has a million neurons and a good owl has a billion. But if we speed up a dog's brain and ask it a question, would we really care about what the dog has to say?

Speaker 1

I doubt it.

Speaker 3

Contrariwise, I think we are the dog, which is what happens in this kind of comparison on just neuron capacity. So I don't think the AI system would care a lot about us; it would have heard our stories too many times to get much new out of it. But I think a time will come when the error rates fall, and right now they're still pretty high. Like I said, we already have some of these large language models getting ninety percent on the uniform bar exam. I mean, that's not an easy thing to do. And when it starts getting ninety-nine percent all the time, at some point courts will begin to allow this to enter; the rules of evidence will, I think, eventually be modified. And there will be challenges, you know, in order to introduce that in court. With any piece of evidence there's a big fight that takes place, you usually don't see it on TV. But I think it'd be like getting an expert admitted to court. There'll be back-and-forth questioning, and they'll admit it in, and a lot of questions of fact will initially come in an advisory way from the computer system, the decision-support system, as we currently call it. But as a practical matter, clearly ten, twenty, fifty, one hundred years from now, if we continue at anything like the current rate, we're likely to take that as effective digital gospel.

Speaker 2

And this concept of bread and circuses, too: if a system really began to understand what it is that motivates human beings, it might anticipate that and give certain areas of the country a tax break, give other areas something else, change the laws, whether it's Republican or Democrat. It could become very troublesome when it became very likable. And that's what it doesn't have right now. The idea of AI seems very cold. There is not a friendly face on it. It doesn't smile, it doesn't have a dog, you know, it doesn't wear a hat. It's just whatever it is. And most people, when they don't understand what something is, merely push away from the table and say, well, I'll never understand that. Wake me when you get it into bite-sized chunks, and maybe I'll

Speaker 3

Take some of it. Let's call it digital sugar. Sugar reprograms our brains, rotting our teeth and causing all kinds of problems.

Speaker 1

Listen to more Coast to Coast AM every weeknight at one a.m. Eastern, and go to coasttocoastam dot com for more.

Transcript source: Provided by creator in RSS feed.