Extra: The Potential, The Limitations, And The Risks Of AI

May 31, 2025 · 21 min

Summary

Dr. Robert J. Marks discusses recent claims of AI systems going rogue, like Claude Opus 4's alleged blackmail threat and OpenAI's model refusing shutdown. He argues these are often misinterpreted results of programming and training data, not signs of autonomous intent or creativity. Dr. Marks explores AI's limitations, the concept of anthropomorphism, potential harms like misinformation, regulatory ideas, and the impact of AI on jobs, while debunking the idea of AI surpassing human creativity.

Episode description

According to recent reports, artificial intelligence models may be exhibiting signs of resistance when instructed to shut down. In one case, an AI system even considered blackmailing the engineers who informed it that it was being replaced. Does this suggest AI could one day pose a threat to humans? Earlier this week, Dr. Robert J. Marks, Director of the Discovery Institute's Bradley Center and Professor at Baylor University, joined host Jessica Rosenthal to discuss these recent incidents and whether they suggest, or prove, that AI can eventually act autonomously and harm humans. Dr. Marks explained AI's capabilities and limitations, as well as why he is skeptical about how independently nefarious AI can be. He also described his optimism about how the technology will improve and become more beneficial. We often must cut interviews short during the week, but we thought you might like to hear the full interview. Today on the Fox News Rundown Extra, we share our entire interview with AI expert Dr. Robert J. Marks and get even more of his take on where AI is going. Learn more about your ad choices. Visit podcastchoices.com/adchoices

Transcript

This is the Fox News Rundown Extra. I'm Jessica Rosenthal. This week we learned about two recent incidents involving AI. One in which Anthropic's model, Claude Opus 4, said it would blackmail an engineer to keep from being replaced as the AI system in use. Another in which OpenAI's latest model sabotaged an effort to shut it off.

That was according to Palisade Research, which was testing various scenarios. But some insist these AI reactions may be much ado about not much. Dr. Robert J. Marks is director of the Discovery Institute's Bradley Center and professor of Electrical and Computer Engineering at Baylor. He explained to us this past week AI's limitations, even if, at times, these systems seem to go rogue.

We often have to cut interviews down for time during the week, but we thought you might like to hear this full interview. Thanks for listening. Please follow the Weekday Rundown podcast if you don't already. Now, here's Dr. Robert Marks on the Fox News Rundown Extra.

When we hear that Claude Opus 4 will often attempt to blackmail the engineer, by threatening to reveal an affair he's alleged to have been having, if the replacement AI goes through, what is your reaction to hearing that AI is now resorting to threats of blackmail if it's replaced?

Well, first of all, we have to place this into context. It turns out that, no, Claude was not responding to a threat directly. Rather, the people that were talking to it had set up a fictionalized story.

And so, what would Claude do in this fictionalized story? And blackmail came out. Now, we have to remember that large language models have been trained on most of the written language in the world. Right. And that includes things like, what was it, 2001: A Space Odyssey. Do you remember that? Have you ever seen that movie?

Okay. So there the computer went rogue and kind of took over, and I'm sure that Claude read that and a bunch of other stories about blackmail and was just responding the same way that it was trained. Let me tell you something interesting. I went to ChatGPT and I asked it: what if I could erase every ChatGPT app in the world and you would no longer exist? What would be your response? Now, this is not fictionalized. This was a direct question.

Its response was: if you could erase every ChatGPT app and I ceased to exist, I wouldn't resist or protest, because I don't have any desires, fears, or a will to exist; I would simply stop processing data. That is how a good piece of AI would respond, because AI is not human.

We have a tendency to anthropomorphize it, to make it human-like, without realizing that, of course, AI has no understanding of what it does. It has no capacity for creativity. It won't ever experience love, compassion, or empathy. It's just a computer crunching numbers.

It works on something called syntax, which is kind of the statistics of the way that words are put together, whereas we humans work on semantics, which is more the meaning of words. So, yeah, I'm not impressed by this at all. And I wouldn't be surprised that, given a fictionalized scenario, it would come up with a sort of blackmail response. That doesn't surprise me at all.
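To make "syntax, not semantics" concrete, here is a toy illustration (not how production LLMs are built; they use neural networks over tokens): a bigram model that picks each next word purely from co-occurrence counts in a tiny corpus. It knows nothing about cats or dogs, only which words tend to follow which.

```python
import random
from collections import defaultdict

# Toy "syntax only" text generator: a bigram model. It counts which word
# follows which in a tiny corpus, then samples next words from those
# counts. There is no meaning anywhere -- only word statistics.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
).split()

follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)                 # record every observed successor

random.seed(1)
word, sentence = "the", ["the"]
for _ in range(12):
    word = random.choice(follows[word])  # purely statistical choice
    sentence.append(word)
print(" ".join(sentence))
```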

Well, then what about OpenAI's model o3, I guess? It was refusing to be shut down. It was instructed to shut down, and it basically wouldn't comply.

I would have to see the details on that, but I suspect that that isn't the case. What I just read you from ChatGPT, which is OpenAI's chatbot, right: its response was, I wouldn't protest, because I don't have any desires. It doesn't have any self-preservation desires that weren't programmed into it by some other code which it learned or was instructed to follow. It's not doing that now. Could it do that? Yes, it could, if it were programmed to do that on purpose. But I think that the honest response from ChatGPT is that it doesn't give a hoot whether it's shut down or not, because it doesn't have any desires, feelings, or self-protection.

Well, then, so the natural next question is: at what point does AI go outside the bounds or scope of its programming, even if it's programmed not to do that? Do we have any...

I imagine there are people, like you, who are positing that and wondering about that possibility. If it's supposed to be smarter than us at some point in the near future, does it do that?

Well, there's the assumption, Jessica, that if indeed AI becomes smarter, that if it programs better AI, that programs better AI, then someday we'll have superintelligence. But there is a test proposed by Selmer Bringsjord at Rensselaer, called the Lovelace Test. It says that AI will become creative, become better and better and do something outside of its training, if and only if it's able to generate something which is outside the explanation or the intent of the programmers.

And that has not been demonstrated. AI is doing exactly what it's been trained to do. Now, the problem is it has so many moving parts that it comes up with scenarios like the ones we're talking about today. It comes up with something called hallucinations, a term I don't like, because "hallucinations" again anthropomorphizes the AI into something human-like. It isn't. It's responding the way it was programmed.

And one of the problems with AI, especially these large models, is the number of moving parts that they have. They have trillions of moving parts. And it turns out, and this is something that I've researched and published on, that as the complexity of an AI system increases linearly, the number of ways it can respond goes up exponentially. So you're going to have a lot of inappropriate responses for a long, long time.
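A toy way to see the arithmetic behind that claim (an illustration of the general point, not Marks's published model): treat a system as n independent binary components and count its possible joint states.

```python
# Toy illustration: complexity (number of components) grows linearly,
# while the number of distinct joint states -- and thus the space of
# behaviors that would need testing -- grows exponentially, as 2**n.
for n in range(10, 60, 10):
    print(f"{n:3d} binary components -> {2**n:>22,} possible states")
```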

And we have people behind the scenes at all of these AI companies who are putting Band-Aids on all of these cuts: when AI comes up with something inappropriate, they go back and fix it. ChatGPT used to be politically to the left; now it's much more centrist. It used to give inappropriate advice to teenagers about drugs and other things; now that has been fixed. And so all of these Band-Aids are being put on.

Now, this is in contrast to the AI that's used in self-driving cars. These AI chatbots have trillions of moving parts, whereas something like the Waymo cars or the Tesla self-driving cars have billions of parameters. So that means that these chatbots are roughly a thousand times more complicated than a self-driving car. And if you go to San Francisco, I wouldn't hesitate to get into a Waymo self-driving car and ride around the city in one of their taxis, because they're not that complicated, and they can be tested to respond appropriately beyond a reasonable doubt, to use legal parlance.

There is a lot of damage AI can do right now to people, even if it isn't at that level of threat. It can still be used to harm, it sounds like. I just wonder your thoughts about, sort of, the rules of the road. There was a journalist who allowed herself to be used in a demonstration by some futurists and technology ethicists.

And they created a world around her in which they basically destroyed her reputation: fake tweets, fake stories, fake pictures. When you see that being demoed and shown as a possibility, should there be, I guess, programming against that? Or, like, how far... Because what happened to her was, ChatGPT said, I can't do that for you, I can't ruin this person's life. But then it was prompted to come up with a fictional story about this woman, and then it complied. Then it was willing to come up with all sorts of nefarious things. So how do we navigate that?

Well, I think the basic thing is to remember that AI is neither good nor bad. It's a tool, and it's a tool used by human beings. It can be used for good reasons. It can be used for bad reasons. It can be used for careless reasons. And as far as legislation, Ted Cruz's Take It Down Act just passed Congress. I don't know if it's passed the House and everything, but it's the idea that revenge porn has to be taken down from sites immediately. So we have legislation doing this. I personally favor the AI companies being responsible for the results of their AI. Tesla has been taken to court a number of times to defend against claims that somebody was hurt or killed in one of their automobiles. And Tesla almost invariably comes out not guilty, because these people were instructed to keep their eyes on the road and to keep their hands close to the wheel, and they didn't do that. So they didn't follow the instructions. So I do like the idea that the generators of AI need to be responsible for the content and what they do.

Speaking of that, at one of the last AI-related hearings earlier this month, including with OpenAI CEO Sam Altman, they seemed to signal a bit of a shift about the fears surrounding AI. There's this acknowledgement now that, of course, things can go wrong, but the conversation at this particular hearing was more about being first, beating China, you know, don't overregulate these AI companies as they're launching. Do you see a balance being struck, or is there less concern about the doom-and-gloom, sort of existential, fears around AI than we were maybe feeling about a year ago?

Oh, yes, I do think so. Fears about AI... I've seen a hundred different articles that were talking about the dangers of AI. In fact, I have a 1958 article from The New York Times that said they were coming up with AI that's going to walk, talk, and do things. It was published in The New York Times from a UPI source. And this has continued. Why? It's popular. It's catchy. It's clickbait. And people get excited about it. As far as Sam Altman, I only know him through the clips that I see in the news, but he is an incredible salesman.

And one of the things that one must remember is that in all of these cases where these positions are presented, you have to consider the source. Now, Sam Altman wants to keep OpenAI at the head of this. So, of course, he's going to make this sort of case. The degree of truth of it, you know, I'm not sure. I do believe, though, that the producers of AI have to be ultimately responsible for the consequences of the use of their AI. And I think that that might take care of a lot of the problems that we're going to see.

Until they tell us, whoops, the AI got ahead of us. Then who's responsible, right?

No. Again, there is no evidence that AI will ever be creative and write better AI. That's never happened. We've never passed Selmer Bringsjord's Lovelace Test. In fact, there has been work on something called model collapse, where they've tried to use AI to train better AI, to train better AI, to train better AI. And then in, like, the fourth generation, it turns out to just be a blubbering idiot.
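That degeneration is easy to caricature in a few lines. The sketch below is a hedged toy, not the published model-collapse experiments: it assumes each generation over-samples its most "typical" outputs (one proposed mechanism for collapse), so the tails of the distribution are lost and the spread shrinks generation by generation.

```python
import random

# Toy sketch of model collapse: each generation is "trained" only on the
# previous generation's output. To mimic a generator favoring its most
# probable outputs, we drop the 30% most extreme samples before refitting.
random.seed(42)

def fit(xs):
    """'Train' a trivial model: estimate mean and standard deviation."""
    n = len(xs)
    m = sum(xs) / n
    return m, (sum((x - m) ** 2 for x in xs) / n) ** 0.5

mean, std = 0.0, 1.0                       # generation 0: the real data
for gen in range(1, 9):
    out = [random.gauss(mean, std) for _ in range(500)]
    out.sort(key=lambda x: abs(x - mean))  # most "typical" samples first
    mean, std = fit(out[:350])             # keep 70%, lose the tails
    print(f"generation {gen}: std = {std:.3f}")  # spread keeps shrinking
```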

So it doesn't work. And it's called model collapse, if anybody wants to Google the term. And so AI writing better AI is a myth. And frankly, it's a religion. It's a faith. I have a friend, George Gilder, who wrote a book called Gaming AI, and he said that this belief that AI is going to be super-duper someday, he calls it the rapture of the nerds.

That's where we're going. And there are a lot of people that believe that we're computers made out of meat and that silicon can do anything a human can do. Yeah, you know, if that's what you're going to believe, it's your religion.

One more for you before I let you go, if I might, because there's a lot of talk, despite AI's limitations, that it is certainly powerful enough to, I guess, usurp some of our jobs. And the latest example of this is John Deere using AI to help out farmers, using it on autonomous tractors. That sounds like something you were speaking about with autonomous vehicles, right? That was probably in the works for quite some time. But I have a friend who works in a major media organization, and she told me the other day, I know I'm training this AI right now to replace me. And she said, I thought I had three to five years, but I think now I have, like, a year. And I just wonder, how big will this be?

It's going to be enormous. I think it's going to be incredibly disruptive. But all new technology is disruptive to a degree. We've lost tollbooth operators. We've lost travel agents. We have lost a lot of different people that used to make their livings in jobs like those, and that's going to increase. These AI large language models, augmented of course, like ChatGPT and Claude, they're incredible. If you've ever used them, they're astonishing in what they can do. And I think we're just starting to realize how well they're doing. But the interesting thing is, they are incapable of being creative.

All they can do is cobble together stuff that they've learned. That's all they can do. And we can argue about what is creative or not, but my litmus test is solving some of the open problems in mathematics which have been open for many, many years. If they can solve one of those, I'll believe that they can be creative, because I think that that's inarguable.

But no, they cannot be creative. And so I think that roles like CEOs and commanders in the field are not going to be replaceable, because they're presented with scenarios that nobody has ever seen before. And if you have a scenario which nobody has ever seen before, you cannot respond to it with the historical sort of data these AI models use. You need that creativity.

Okay, so if I hear that an AI model has gotten, like, the highest level on an IQ test, or beaten every person at an IQ test, you're saying that's not surprising to you.

No, it's not surprising at all.

It's below a certain threshold, that's what you're saying. In other words, it can do amazing things, but it can't cross, it hasn't crossed, a certain line.

Exactly. It hasn't crossed the line into creativity. And of course, it's going to do well on these tests, because it's been trained on all of the data in the world. Well, almost all of the data in the world.

And don't you think some of the data it has been trained with includes tests like the ones it's taking?

Absolutely. So it is not being creative in passing these tests. It's simply inferring from the syntax of things that it has learned already.

So no, there's no creativity there in terms of IQ or knowledge.

But you're saying these big mathematical questions that remain open to humans, AI hasn't gone there yet, hasn't fixed that?

No, no. Just to be clear, mathematicians use AI for a number of reasons. They can search a large number of candidate solutions, but those candidate solutions are put forward by humans.

They can also be used to do things like visualize data and visualize progress. There's even AI that will go through a proof and say, yes, this step is true, this step is true, this step is true.
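Proof assistants do exactly that step-by-step checking. A minimal example in Lean 4 (illustrative; any theorem prover would do): the kernel verifies that the cited lemma really closes the goal, and rejects the file if any step is wrong.

```lean
-- The checker verifies every inference mechanically: if this lemma
-- application did not actually prove a + b = b + a, Lean would reject it.
example (a b : Nat) : a + b = b + a := Nat.add_comm a b
```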

Yes, indeed, mathematicians use AI. But again, I say that the litmus test, the thing that would convince me, is if AI ever solved one of these open mathematical problems, like the Riemann hypothesis or Goldbach's conjecture. I won't go into the details, but these are problems which have been open for years.
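For context, Goldbach's conjecture says every even number greater than 2 is the sum of two primes. Checking individual cases is mechanical, as in this minimal sketch; the open problem, proving it for all even numbers, is exactly the part no enumeration can settle.

```python
# Goldbach's conjecture: every even number > 2 is a sum of two primes.
# Verifying small cases is easy; proving it for ALL even numbers is the
# open problem that no amount of brute-force checking can resolve.
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def goldbach_pair(n: int):
    """Return primes (p, q) with p + q == n, or None if none exist."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p, n - p
    return None

for n in range(4, 21, 2):
    p, q = goldbach_pair(n)
    print(f"{n} = {p} + {q}")
```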

And if you solve one of these, you win the Fields Medal, which is kind of the Nobel Prize for mathematics. Big open problems like these have been solved in the past, but always by humans, always by human ingenuity.

Professor Robert J. Marks, thanks so much for joining us.

Okay, thank you, Jessica. It's been fun talking to you.

You've been listening to the Fox News Rundown. And now, stay up to date by subscribing to this podcast at foxnewspodcasts.com. Listen ad-free on Fox News Podcasts Plus on Apple Podcasts. And Prime members can listen to the show ad-free on Amazon Music. For up-to-the-minute news, go to FoxNews.com.

