
The Future of Artificial Intelligence - Best of Coast to Coast AM - 7/25/24

Jul 26, 2024 · 18 min

Episode description

George Noory and journalist Jeremy Kahn discuss the risks and potential rewards of A.I. in the future. 

See omnystudio.com/listener for privacy information.

Transcript

Speaker 1

Now here's a highlight from Coast to Coast AM on iHeartRadio. Jeremy.

Speaker 2

We already have language interpreter devices, but with AI, don't you think it'll be even better?

Speaker 3

Absolutely. I mean, one of the things you can do with these systems is translate very easily between languages, and there are already people talking about, you know, putting it into simple wearable devices. It might be possible in the future that you can have a conversation with somebody in two languages where neither of you actually speaks the other's language, and you will get simultaneous translation in those languages, you know, directly in your ear, and you won't have to worry about learning those languages, although, you know, there are obviously drawbacks to that if people are not actually learning languages.
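
For listeners curious what such a wearable would actually be doing under the hood, here is a minimal sketch of the pipeline Jeremy describes: speech recognition, then translation, then speech synthesis into the listener's ear. The function names and stubbed results below are hypothetical placeholders, not any particular product's API.

```python
# A minimal sketch of the "translation in your ear" pipeline described above:
# speech recognition -> machine translation -> speech synthesis.
# Every stage here is a hypothetical stub, not a specific product's API.

from dataclasses import dataclass

@dataclass
class AudioChunk:
    samples: bytes      # raw audio from the wearable's microphone
    language: str       # language the speaker is using, e.g. "es"

def transcribe(chunk: AudioChunk) -> str:
    """Speech-to-text stage (a real device would call an ASR model here)."""
    return "hola, ¿cómo estás?"  # stubbed result for illustration

def translate(text: str, source: str, target: str) -> str:
    """Machine-translation stage (a real device would call a translation model)."""
    return "hello, how are you?"  # stubbed result for illustration

def synthesize(text: str, language: str) -> bytes:
    """Text-to-speech stage (a real device would generate audio for the earpiece)."""
    return text.encode("utf-8")  # stand-in for synthesized audio

def simultaneous_translation(chunk: AudioChunk, listener_language: str) -> bytes:
    """Run one chunk of incoming speech through the full pipeline."""
    heard = transcribe(chunk)
    translated = translate(heard, source=chunk.language, target=listener_language)
    return synthesize(translated, language=listener_language)

if __name__ == "__main__":
    incoming = AudioChunk(samples=b"...", language="es")
    audio_out = simultaneous_translation(incoming, listener_language="en")
    print(audio_out)
```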

Speaker 2

Just incredible technology, it really is. What do you think of driverless vehicles? That's artificial intelligence, isn't it?

Speaker 1

Yeah?

Speaker 3

Yes, driverless vehicles use various kinds of AI, and obviously we have driverless vehicles in a few cities in the US. In San Francisco there are these Waymo taxis. In Phoenix you can get taxis as well, I think from Waymo and Cruise, which is a self-driving car company owned by General Motors. But it's been in fairly limited areas, and I think part of the problem is that driving in the real world is very complex, and lots of things can happen out on the road that are very hard to anticipate. Unfortunately, the way we've had to create driverless cars so far, they've needed very specific information about where they are, very detailed mapping, and we've tried very hard to train them for all different kinds of scenarios. But ultimately it's very hard to have enough data from enough places to train them to work well in all kinds of environments, particularly in all kinds of weather, so they don't do as well in snow, for instance. It's a really tricky one for driverless cars, and as a result, they haven't really been deployed that widely yet.

I think that's going to change, though, over the next ten years. I think we're going to see more and more cities where driverless cars will be available. And I do think that some of these newer AI techniques, the sort of techniques that I talk about in the book, will enable us to create self-driving cars that are more capable, because they will start to actually understand, and I use "understand" somewhat loosely as a term, but I think they will start to understand more about how the world actually works, and therefore they'll be able to cope. They'll have essentially something closer to common sense. They'll know that if a tree falls across the road, you have to stop, even if that had never happened in any scenario they'd seen before.

Speaker 2

How about people who have been killed by driverless cars?

Speaker 3

Yeah, I mean, I think that's terrible, and I think that's a problem with the technology we have so far. There haven't been too many incidents of people being killed by one of these robotaxis or full self-driving cars. There was the one incident in Phoenix, and there was another incident where somebody had already been hit by a car and then was dragged by a driverless car.

Speaker 2

The woman in Phoenix was hit by one, right?

Speaker 3

Yeah, the woman in Phoenix was hit by one, and that is a case where, again, it was a sensor issue. It did not recognize her as a person. I think again that's why we need careful testing of these systems. We have to remember that, you know, fifty thousand Americans are killed every year by other humans on the road, and I think we can do better than that.

You know, that would be a good thing. And I think it's one of the things we should get used to when we think about these systems: what are we comparing against, and what standard do we require? We have lots of humans on the roads who are not particularly safe drivers every day, and if we can do better than that, should that be the standard, or should the standard be one hundred percent, you know, accuracy and no deaths?

Speaker 4

Ever.

Speaker 3

I think there is this interesting dynamic where we're willing to countenance certain kinds of human error, including fatal human errors, and think that's okay, or just the price of life. And yet we're not willing to put automated systems into place that would actually be less dangerous than having humans do it. I think part of it has to do with the fact that we understand the way errors occur when other humans make mistakes. We kind of have a good sense of what the things are that cause humans to make mistakes, what the human errors are that can cause, you know, fatal accidents. We don't have a good sense of exactly what all the errors are that can crop up in these automated AI systems, and I think that frightens us a bit more, and we're a little bit more hesitant to put these systems into places of high consequence. And, you know, that's good. I think we should be careful, but I think we should also have a standard. What are we comparing this against? What do we want these systems to do?

Speaker 2

I'm opposed to driverless trucks, Jeremy, maybe because I don't want to see truck drivers displaced from jobs.

Speaker 3

Yeah, well, look, I mean, I think that it is potentially an issue, but there are lots of other jobs people can do. I don't think we have to worry so much about mass unemployment, that everyone is going to lose their jobs. But in certain industries we may see people lose their jobs. In the case of trucking, there is really a shortage of truck drivers. So the question is, you know, does this mean that all truckers lose their jobs, or does it just mean that some trucks are automated and there's still a role for human drivers? I think that depends on what the cost is of these systems and how they're deployed exactly. It's one of the things we're going to have to wrestle with going forward.

Speaker 2

Let's go to the phones. Let's go to Thomas in La Jolla, California to start. Hey, Tom, welcome. Hi George, thank you very much. I'm getting a lot of static on my line, so I will try to make this short. I want to extend the conversation on self-driving trucks. In my opinion, it will not displace jobs. Okay, if we have a truck that is automated, it's going to be like the Apollo Moon capsule, you know, with AI running the truck. But there will always be a teamster there in the truck, always, and they will be like astronauts. You know, they're not going to be using a steering wheel or a stick shift, or a brake pedal or gas pedal, or a rear-view mirror or any of that. You know, it's all going to be handled by the AI. But, you know, occasionally the AI will say, sir or madam, we need to make a course correction, and so our teamster puts down his Fortune magazine and participates in the course correction. So I was wondering, it just seems to me we should have this technology available now. We had it available in the Apollo Moon capsule fifty-five years ago, and the Apollo Moon capsule only used a computer that was two megabytes, two megabytes, and of course our smartphones have hundreds of gigabytes. So I'm wondering if we could extend the conversation. I don't think we'll lose jobs. I think that our truck drivers will be trained like astronauts.

Speaker 4

What do you think, sir?

Speaker 3

I mean, yeah, I think that's a possible scenario. People have also talked about whether you would still have the truckers, but they would operate in some sort of remote center, and you would have AI essentially driving the trucks most of the time. Then if there was a problem, it would alert the driver, but the driver wouldn't necessarily be in the truck themselves. They might be in a remote center where they could take over remote operation of the truck using cameras. They'd be able to see what's going on, and they would then drive the truck remotely, or if there's an error, at least get it to the side of the road, and then you could have a recovery vehicle come and move the truck. Or, yeah, it's possible, as you say, that we will want the teamster, the truck driver, to actually be in the cab, but they'll just have to sit there and wait until there's a problem or something that needs human intervention or human decision making. That is one possible scenario. I think that could happen. Some of the companies that are pushing self-driving trucks, though, really do see it as a system where there is no driver there, at least for large stretches of the journey.

Speaker 2

They're doing it, they're doing it for money, aren't they?

Speaker 4

I'm sorry, say that again, George.

Speaker 2

They're doing it to save money. There won't be a truck driver.

Speaker 3

Well, yeah, that is right, they want to save money. But in the case of trucking, there's also the fact that there aren't enough truck drivers to actually haul all the things that need to be hauled, and there's a real labor shortage. Not enough people are going into trucking. So this is a case where, you know, there aren't enough people doing these jobs, and I think AI could actually help, because you could have some self-driving trucks. You could have them in a convoy, where you have one human driver in front and all the others essentially follow and are driven automatically. I think those scenarios, you know, could help us with the shortage of truck drivers that we have.

Speaker 2

I may have been the last reporter to talk to Jimmy Hoffa back in nineteen seventy-five, Jeremy. He would be going ballistic right now. Yeah, driverless trucks.

Speaker 3

I think that's true, and in a lot of other industries, you know, people have gone ballistic. Unions have been very upset about displacement of workers, but in the end, I think people have found other jobs.

Speaker 2

There.

Speaker 3

Again, I don't think we're going to have mass unemployment. There will be jobs for people to do, and I think ultimately, you know, this technology is coming along, and you have to figure out how you are going to work alongside it, and are there ways to shape it. And I think, you know, one possibility is, as the caller suggested, that you have truck drivers in the cab, but they are doing less of the actual driving. They're kind of supervising the automated system.

In the book, I talk a lot about these problems. I actually use the example of NASA. NASA has done a ton of research on this, because astronauts are very often in this role of supervising automated systems, and I think that's increasingly going to be all of us. We're going to be in our jobs doing a lot more supervising of AI than we are maybe doing the task ourselves. The problem is that humans are not very good at being that vigilant over long periods of time. If the system is pretty good and it doesn't make mistakes very often, it's very hard to maintain that vigilance, and often we fail to catch the errors when they happen. And there's this human cognitive bias that I talk about in the book called automation bias, where we tend to assume that the automated system is right, even in the face of contradictory data that should alert us to the fact that the system is making a mistake. Very often people think, oh no, it's a computer, it must know best, even when what it's suggesting kind of defies common sense. And I think that's a danger we're going to have to guard against.

The other thing that happens with people when they're placed in that role of kind of being the overseer of what an automated system does is, when a system does go wrong, if they do recognize the error, they're often quite surprised by it. They have a lot of trouble figuring out what's going wrong exactly and taking corrective action. That's increasingly been a problem in aviation, where we have, again, a lot of automated systems, a lot of autopilots. Most of the time they work perfectly well. When they do go wrong, pilots often struggle because they're surprised that there's an error. They often kind of panic and will make mistakes in the process of trying to correct and work back to a manual process. And the key, as NASA has found and as we've found in aviation, is very often just to drill people in simulation with lots of potential error scenarios, so they know how to respond and they've kind of practiced how to respond. And I think companies are going to have to start doing this as well as we bring more of these AI copilots into how we work, and maybe we'll all have to train. If you're a salesperson and you often rely on the AI copilot to write your sales pitch for you, companies may have to say, once a month, do a drill where you have to write the sales pitch on your own, just so you keep those human skills and you know how to recover if the AI system isn't available.

Speaker 2

Let's go to Mike in Denver. Michael, go ahead.

Speaker 4

George, thank you for taking my call. So great to be with you again. And Jeremy, thank you for this incredible presentation tonight. What I was going to ask you about: there's an interesting kind of headline here, GE Appliances is using Google Cloud AI to make recipes from what's already inside the fridge. And you know, this is something that's relatively new, something I don't think AI could do at its conception. And that leads me to my question, which is, what are some misconceptions when it comes to, say, what businesses, for example, think AI can do as far as AI's capabilities, versus what AI actually can do?

Speaker 3

Yeah, so this is a great question, and it's a great example. I mean, I think it's one of the things that the AI systems that have come online in the last two years can do. You could just take a picture of your fridge now, and it will recognize what those items are in your fridge, and then, you know, it will suggest recipes based on those items, which is pretty amazing. And then I think the next step that we're about to see, which I talked about at the beginning of the program, are these AI agents. In that case, it would actually recognize what was in your fridge, it could suggest recipes for tonight, but it would also know, oh, you're running low on milk, or you're out of, you know, this particular ingredient, and it would go online and order those things for you. It might do your kind of weekly shop for you online without you having to tell it very much. It might learn your preferences and just be able to do this automatically. I think that's coming.

In terms of what AI can do right now, there is sometimes a disconnect, particularly in large companies, I think, between what the CEOs or the board think AI can do and what the actual engineers who have to create this technology see; they're often sort of frustrated by what the capabilities are. This often comes down to reliability. A lot of AI systems right now can do lots of things some of the time very well, and then some of the time, on what seems like a very similar task, or the same task where you've given the instruction slightly differently, the system will fail completely. And I think that's frustrating for people. This is something all the tech companies that are working on AI systems are working very hard on: increasing the reliability of these systems. There's some debate in the field about how easy it is to do this, or whether it will be possible at all. Some people think the underlying large language models that we use right now to create AI software have a kind of fundamental problem, where they will never get to the kind of accuracy we need for them to really be reliable, and that we're going to need something else, some other kind of change in the architecture, to improve these systems. One approach would be, as I talked about, trying to do more training of AI systems in simulation, so you put them in that kind of safe environment and let them just experiment and learn through trial and error. That does tend to result in more reliable AI systems, and I think it's one approach we may see companies pursue going forward.

There are some specific things I think they're very good at right now: any sort of scenario which involves composing something that is then going to be checked by a human. So particularly any kind of text they're writing, writing letters, writing documents, as long as there's kind of a human in the loop in that process. And it's one of the reasons we may not see much job loss from AI. I think you're still going to need this human in the loop anytime in the near term to check what the AI is doing.

Speaker 4

And in those...

Speaker 3

Scenarios, it's probably safest, because a human checks on what's being put out, and ultimately it's just kind of a document; it's not directly taking action. So those are safer scenarios. Situations where you would see AI, you know, actually taking action, these kinds of AI agents that I think are coming along, those I'm a little bit more worried about given the current reliability of the systems, because they are error-prone and it will be harder to have a human check on what they're doing, I think. And that's the case where I think businesses, you know, have to be careful not to run ahead of themselves and think, oh, we can have an AI system that will automatically do our purchasing for us, or will automatically do a particular process within the company. Maybe it's moving data from, like, one type of software to another; often, you know, people want to use AI to kind of transfer data between different programs. There again, you're going to have to be very careful that it's not making mistakes in the process.
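
A rough picture of the "fridge agent" scenario from this exchange, sketched under stated assumptions: a vision model recognizes the items, a language model suggests recipes, and an agent step flags staples to reorder. Every function here is a hypothetical stub, not GE Appliances' or Google Cloud's actual API, and the ordering step is gated on a human confirmation, in line with the reliability caution Jeremy raises.

```python
# A minimal sketch of the fridge-agent loop described above: recognize items in a
# photo, suggest recipes, and flag staples to reorder. All stages are hypothetical
# stubs; the action step keeps a human in the loop before anything is purchased.

from typing import List

def recognize_items(photo: bytes) -> List[str]:
    """Vision stage: a real system would run an image-recognition model here."""
    return ["milk", "eggs", "spinach"]  # stubbed result for illustration

def suggest_recipes(items: List[str]) -> List[str]:
    """Language-model stage: a real system would ask an LLM for recipe ideas."""
    return [f"Omelette with {', '.join(items)}"]  # stubbed result

def low_staples(items: List[str], staples: List[str]) -> List[str]:
    """Agent step: compare what's in the fridge against a household staples list."""
    return [s for s in staples if s not in items]

def place_order(shopping_list: List[str], confirmed_by_human: bool) -> None:
    """Action step: only order once a person has reviewed the list."""
    if not confirmed_by_human:
        print("Order held for review:", shopping_list)
        return
    print("Ordering:", shopping_list)  # a real agent would call a grocery service here

if __name__ == "__main__":
    photo = b"..."  # placeholder for a fridge photo
    items = recognize_items(photo)
    print("Recipes:", suggest_recipes(items))
    to_buy = low_staples(items, staples=["milk", "butter", "bread"])
    place_order(to_buy, confirmed_by_human=False)
```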

Speaker 2

Jeremy, if you asked AI, what is God, would it give you an answer?

Speaker 3

Yeah, absolutely, it would give you an answer. But you have to remember that right now, these systems will just make up an answer, right? So it would give you an answer based on, you know, the data it has been trained on, which is mostly human-written data about what humans in the past have said about what God is. It would probably give you whatever the kind of majority viewpoint expressed in that data is, within some sort of range, and it might give you several different opinions. It might say, oh, some people think God is this, and some people think God is that, and some people think God doesn't exist. It would definitely give you an answer, but that doesn't mean it's the correct answer. You know, AI systems right now, they're not omniscient. They don't know. Even the ones we're really impressed with actually don't know anything that we as a human species don't already know and haven't already thought of and written down, because that's all the information they've been trained on, all this human-written information. So it doesn't have a way of answering beyond its training data.

Speaker 1

Listen to more Coast to Coast AM every weeknight at one a.m. Eastern, and go to coasttocoastam dot com for more.

Transcript source: Provided by creator in RSS feed.