
What Artificial General Intelligence Could Mean For Our Future

Apr 09, 2025 · 29 min · Ep. 1004

Summary

This episode of Science Friday explores the concept of Artificial General Intelligence (AGI), its definition, and potential impacts. Guests discuss the challenges in defining AGI, the economic motivations driving its development, and whether AI can be ethical. They also address concerns about job displacement, environmental impact, and potential doomsday scenarios, while highlighting the positive advancements AI has already enabled in fields like medicine and weather prediction.

Episode description

What happens when AI moves beyond convincing chatbots and custom image generators to something that matches—or outperforms—humans?

Each week, tech companies trumpet yet another advance in artificial intelligence, from better chat services to image and video generators that spend less time in the uncanny valley. But the holy grail for AI companies is known as AGI, or artificial general intelligence—a technology that can meet or outperform human capabilities on any number of tasks, not just chat or images.

The roadmap and schedule for getting to AGI depend on who you talk to and their precise definition of AGI. Some say it's just around the corner, while other experts point to a few years down the road. In fact, it's not entirely clear whether current approaches to AI tech will be the ones that yield a true artificial general intelligence.

Hosts Ira Flatow and Flora Lichtman talk with Will Douglas Heaven, who reports on AI for MIT Technology Review; and Dr. Rumman Chowdhury, who specializes in ethical, explainable and transparent AI, about the path to AGI and its potential impacts on society.

Transcripts for each segment will be available after the show airs on sciencefriday.com.

Subscribe to this podcast. Plus, to stay updated on all things science, sign up for Science Friday's newsletters.

Transcript

Listener supported. WNYC Studios. This is Science Friday. I'm Flora Lichtman, here with Ira Flatow. Today on the podcast: are we ready for super-powerful artificial general intelligence? And will we ever get there? How do you really test that? And how would you know when we've gotten there? It's not clear at all that we'll get there on the current path.

Every week, you know, it feels like there's another AI advance. Some company produces a system that will write proposals better than you can, or make more lifelike pictures or videos, or wrangle data in a new way. Most of these systems are still limited to a few specialized tricks in what they can do. But how close are companies to creating something that can virtually think on its own, or outperform humans on any task?

It's what researchers are calling AGI: artificial general intelligence. And that's what we are talking about this hour. All right, let's get into this. Let me introduce our guests. Will Douglas Heaven is a senior editor for AI coverage at MIT Technology Review. He's based in the UK. Welcome back.

Hi, it's good to be back. Nice to have you. And Dr. Rumman Chowdhury, founder and CEO of Parity Consulting and the Responsible AI Fellow at the Berkman Klein Center at Harvard. You know where that is, in Cambridge, Massachusetts. She's also a visiting researcher at the NYU Tandon School of Engineering, and she's with us in our New York studios. Welcome back. Thank you. Nice to have you.

Okay, everybody's heard of AGI. Will, what do we mean when we say AGI? What does that mean? How is it defined? I have no idea. And that's the problem. Thank you, folks. And we're done. The end. No, seriously, that is a fascinating question. My whole problem with it is that it means so many different things to different people, and it's changed its meaning over the last few years.

For the sake of getting the conversation going: what it seems to mean to people now, the companies putting out their blog posts and their manifestos about what they're building, is an AI system that can do a wide range of cognitive tasks as well as a human can. That's about as good a definition as you're going to find. But my problem is that

there are so many words in that definition that themselves need defining. Like, what is a cognitive task? What does it mean to do it as well as a human? How many cognitive tasks does a system need to do before we call it an AGI? We'll get onto this, I'm sure. That's sort of what we're talking about. Yeah, so that's the question of the hour, the day, the year. And it's interesting, because the vagueness is on purpose, right? It leaves room for us to

fill in the narrative ourselves and scare ourselves. But actually, if you look at what OpenAI has defined AGI as, it's the automation of tasks of economic value. And this is what happens when corporations get to define what intelligence means: they pin it to things that are economically productive. I think that is a very important distinction from simply saying cognitive tasks. And Will's right. Yesterday, DeepMind had a blog post

where they pretty much defined it as the automation of most human cognitive tasks. And I agree with Will: who knows what that means? Does that mean self-awareness? It absolutely does not mean self-awareness. Intelligence and sentience are two totally different things.

If you think AGI is a muddy question, then sentience... we'll be here for ages and not get anywhere. Okay, so far I've mentioned things like chatbots, things like questions that get answered, things that make images or video clips. But how do you make the leap from something that's good at doing these sorts of things, Rumman, to something that's good at doing all sorts of things?

How do you make the technical leap? What is that leap? How is that done? Is it teaching computers to do different things? Yeah. I mean, if we're going to pin this to defining AGI, I think the goal would be that it's able to do these tasks without us explicitly teaching the model to do so. So what's captured the imagination with

generative artificial intelligence is that it seems as if we're just handing over a pile of random-looking information, and these models are putting together patterns. And that is actually an impressive feat. What it is not is, you know, alive, or replacing humans, et cetera. What these things are in the real world is very, very different from just

capability performance. So one of the interesting things to think about is, when these new models come out, like you said, they seem to come out like three a week, and they say it's performing better than X, Y, and Z, the important thing to ask is: what is the measurement by which we're saying it's so impressive? And that's publicly out there.

Right. I want to bring AI into this conversation. Literally an AI? Yeah, literally. Literally an AI. I asked Google's AI assistant Gemini what we should ask you two. And we workshopped it a few times. But here's where we got. Let's see what you think.

If I could only ask one question to AI experts about AGI and humanity's preparation, it would be: considering the inherent uncertainties surrounding the development and capabilities of AGI, what is the single most proactive and universally beneficial step humanity should take now to prepare for its potential arrival, regardless of the specific form AGI might take? It goes on and on and on. Okay, what do you think? I'm going to punt that one to Will.

What is the most proactive and universally beneficial step humanity should take to prepare for AGI's arrival? I mean... I don't think AGI is coming anytime soon. And I'm not really sure what that would be when it came.

So, just a little side note there: I think at some point, probably soon, because so many companies have said they're building it and it is around the corner, someone will just make up a definition and say, we're calling this thing we've just made AGI. So the question Gemini is asking, what do we need to do to prepare for that, kind of depends on what that is. But, more constructively,

I would like us to get off this obsession with AGI and focus on the specific technical advances that we are seeing, that are coming along really fast. It's so easy to, you know...

Dismissing the idea that AGI is around the corner is not to dismiss how amazing the advances in video generation and chatbots have been over the last few years. I'm constantly wowed. It's wonderful doing my job, seeing the latest things come out and talking to the people who are making them. I'm constantly awed by how good this tech has gotten. I'd like to just sit with the capabilities that we have and think about

what impacts those are going to have on the world. There's enough to deal with just with the AI we have today without spending so many hours and words on preparing for AGI. Let me go to Samuel in Rochester, New York, who may have some words about that. Samuel, welcome to Science Friday. Hi, can you hear me okay? Yes, go right ahead. Hi. I wanted to say that I agree with what's been said: we have a lot of very good image generators and chatbots.

But those are pretty far away from something that can reason cognitively and generate new ideas. It's always kind of... we're making amalgamations of things that are already on the Internet. And the jump from that, from kind of summarizing, to

generating something new, right, something that hasn't been done or said before, that's a leap that I think hasn't been made yet. And the trend I see is that tech companies can slap AI-powered on anything now, and it makes investors happy, but the results, the profitability, the advancement... it's hard to know what the scale of that impact will actually be. Good point. Rumman? Yeah, there was a report last year, I believe, or two years ago,

that pretty much dug into all the companies in the UK claiming that their products were AI-powered, and it found about 60% of them had no AI under the hood. First of all, we have a very slippery definition of AI itself. And now that's translated into AGI. And again, to Will's point, the analogy I give is that we've

gone down the same slippery slope as self-driving cars. Remember the earliest self-driving cars? What we imagined is that we'd get into this pod and take a nap and it would whisk us off to where we're going. The Jetsons. Right. But now, according to Elon Musk, we have self-driving cars in which we still have to sit there in traffic with our hands at 10 and 2 and our foot on the brake, and this car is quote-unquote driving, but if it got into an accident, we are liable.

So you're still effectively absorbing all the stress of driving, with none of the self-driving. But let's also go right to the main point as I see it: the reason all AI exists and AGI is being developed, it's about the money, isn't it? 100%. It is. Well, and actually, OpenAI and Microsoft have defined what AGI is with a monetary value. They have said it is when they have earned $200 billion of revenue. Then they will slap on a sticker and say, we have AGI.

So do you agree, Will, it's about the money here? Yeah, I do. And I'm happy that we were reminded of that definition. I think that's probably the best definition of AGI we have. At least it's precise and clear. But yeah, absolutely. It's so hard to talk about

these AI advances and really get into the details of what these systems can and can't do, because the tech is being developed by companies, and they're doing it for profit. Obviously they're going to make as big a claim for the new tech as they can. Again, genuinely, when you get a lot of these demos in your hands, they are truly impressive. But they're never going to be as impressive as the companies selling them claim.

Well, and the story within the story is that for many years, companies have poached the brightest scientists and minds from academic institutions. In fact, they've poached them straight out of their PhD programs. If you go visit the University of Cambridge, Oxford, MIT, Stanford, there's a very close tie to every single major model developer.

And that's on purpose. So there is something also to be said here about the lack of independent researchers who are able to do this work without getting funding from, or just explicitly being hired by, these companies. After the break: your hopes and fears for AI. We don't have to make it so that it has any power over us at all. Stay with us.

And let's go to the phones, to Chris in Scottsdale, Arizona. Hi, Chris. Hey, Ira, can you hear me? I sure can. Go ahead.

Excellent. Well, I was just going to mention that I use AI quite a bit for nutrition analysis. It helps me come up with plans for what I'm going to eat during the day, and I love it. It helps me with recipes too. One question I have about that is: do they think the memory on these things is going to get better, or will we have personalized AI that can, you know, remember what we ate a month ago? Because what I find is, I have ChatGPT and I've tried Grok,

and both of them sort of forget. You know, if you go back after a week, now you're having a brand new conversation. So one thing would be about the memory. And I had just a second question, about what will happen with AI. Do you think, or do your experts think, it's more likely that we'll have a situation where we replace the jobs lost to AI with universal basic income, something like that?

Or do you think it would be something like an assisted situation, where all of our jobs are assisted? You know, we tell AI what to do and it does the job for us. Okay, those are my two questions. Two meaty questions, Chris. Thanks for calling. I'm going to divide it up. Will, you want to take the first half of that? Sure. Yeah. Memory is a feature that a bunch of these companies making chatbots have

either already added or are talking about adding to the chatbot. I think it's an option that you can turn off or on in ChatGPT, and probably in the others, like Gemini and Grok. I don't know if the... I mean, possibly. I'm not an expert on the different paid tiers and free versions of these chatbots, but it's certainly something

that exists in some of them. And if it doesn't already, then I know that's what people are aiming for. This idea that this will be like your personal little buddy that knows more about you than anyone else and can recommend stuff, that is the vision. And Rumman, what about jobs? Is it taking our jobs? Yeah, I can also chime in on the first one. I use Perplexity to help me do research. They actually have something called Threads.

And then a thread can be a particular topic, and you can kind of go back to it. Not to promote any particular AI; it just happens to be the one that I use for that reason.

Future of work. I have many thoughts on the future of work. Well, first of all, I want to start by saying there is no finite amount of work we do as humans. I think one of the fallacies of this "there will be no jobs" conversation is a core assumption that is wrong: that there is a finite amount of work that we do.

Any sort of technological advancement has actually not given us less work but more work, right? How much more available are we now that we can be found on these little devices, our phones, 24/7? We used to, like, leave work at 5. Very few of us remember that time anyway. So, you know, email and the internet did not give us less work. They actually gave us more things to do. And there's some empirical evidence to back this up. There are three

studies I like to talk about. The very first one came out last year. It's by the labor economist Dr. Daron Acemoglu out of MIT, a brilliant labor economist. He did a macroeconomic measure of the impact of AI over the next 10 years and found that of total factor productivity, so all of the stuff we produce in the world, sub-1% will be automated by AI. But that's not nothing. I mean, sub-1% of what the entire world produces is still

something. And what he talks about that's kind of interesting, I think this is what captures the imagination: most automation tends to get rid of blue-collar jobs or rote tasks. So email automated sending mail. But what is interesting and captures our minds about AI is that it automates knowledge tasks, which we've never had before. So he talks about how the distribution between blue- and white-collar jobs is actually fairly even,

maybe even leaning a bit more towards lower-tier knowledge jobs. The second paper I like to talk about is called "GPTs are GPTs." It came out in 2023 and was actually by some researchers at OpenAI, as well as some economists, talking about different sectors: what may be automated, what threat is faced by each sector. So it's going from big picture to industry level. And the rough takeaway would be that

80% of jobs will see about 20% automated away, and 20% of jobs will see about 80% automated away. And they were talking about jobs like paralegal, et cetera. So, you know, research-type jobs or knowledge jobs. It's interesting. The third one just came out last week. Really interesting. And this is getting super nitty-gritty about the future of work.

Harvard Business School and some other folks worked with Procter & Gamble, and they did this study across over 900 employees. They did kind of a competition, where it was individual humans, individual humans plus AI, teams, and teams plus AI. And they looked at things like

quality of work, time to completion, how well it augmented people who already had a skill set, and how well it augmented people without a skill set on a particular topic. You know, there are lots of details, but pretty much the takeaway is that human plus AI is better than human alone, which is better than AI alone. So it's one of those things where it is a productivity booster.

And what that means is probably what it has always meant for us when we've gotten new productivity technology, which is that we will just kind of have more stuff to do. You know, when we think about AI, we talk about it as a reflection. Right. That it learns from us. It learns from our data. Can we teach AI to be better than us? Oh, that's a good question.

I think AI is capable of evaluating data at a scale that is hard for humans to do. That's why the output of these models can be so impressive. So the short answer is yes. The longer, more complicated answer is: what do you mean by better? I mean, and I mean it specifically, like, doesn't cheat, is more ethical. You know, when people think about these sort of doomsday scenarios with AI, they're like, oh, AI is going to scheme and take down humanity. Can you teach AI ethics?

The short answer is yes. And actually, in a lot of these scenarios where AI, quote-unquote, cheats, it has no normative judgment. It doesn't understand good and bad. So, you know, even predating generative AI, I remember some of the earliest models coming out of DeepMind and some of the research bodies: they would play video games, and the AI would do things like race a car backwards, or it would

shoot everybody else in the game and then pick up all the goodies. But that is not the AI being evil. We have decided that is evil, because we made the rules. And we implicitly know, if I'm playing a game with other people, what I should not do is get rid of everybody else so I can slowly pick up all the goodies. The AI is simply optimizing

for what you have told it to do, in this very blunt way. If you are of a particular age, and I'm of a particular age, and you read Amelia Bedelia as a kid, think of it as Amelia Bedelia. You quite literally say, you know, make me a cake, and it will just quite literally... Very literal. Yeah. Yes. And a lot of these issues of AI gone awry can actually be boiled down to a misspecified objective function.

You are telling it to do something. You actually have to think through all the ways in which you are making assumptions, because you have been socialized to do things a certain way. And, like, how would Amelia Bedelia understand this? That's the new way that I interact with GPT. All right, let's go to the phones, to Anton in Phoenix. Hi, Anton. Welcome to Science Friday. Hey there, thank you. Can you hear me okay? Sure.

Okay, yeah, so I just wanted to address the earlier question that you asked Gemini, which is, if there's one thing that you would want to focus humanity on, what would it be? And I'm kind of thinking about, like somebody said, the doomsday scenario, right? And oftentimes when we talk about doomsday scenarios, we're thinking about the technology getting smarter than us and then deciding that we're expendable,

and all that. But I think that's kind of misguided. It makes for a good science fiction novel, but if that were the problem, it would be a technology problem. That would be easy. The problem I see is a people problem. You know, the NRA says guns don't kill people, people kill people. And I think we really need to focus on maybe two things. One is, who's controlling the AI,

both in terms of training it as well as using it for inference, to actually do things. But I think the bigger thing is really, if you think about artificial general intelligence or artificial superintelligence or whatever godlike intelligence: AI is not necessarily going to see humans as a threat unless humans are competing for the same resources with the AI, right? So that could be jobs. It could be electricity. It could be any number of things.

And the question that I think about is, how do we arrive at a place where AI isn't being manipulated by humans for human ends, right? And one example... Okay, all right, that's a good question. Let me get an answer to it, because, I mean, if the point of AI is to make money, it's going to be manipulated to make money, right? It already is, right? If we think about where money is being spent to build AI capabilities,

companies have conveniently found the alignment where things people are willing to spend money on cross with things that are also important to us. It's not surprising that healthcare has been one of the primary applications. There's so much money being made in healthcare, but also we want to lead better lives. The other one people talk about quite a bit is education, right? But no one is talking about things that are maybe less profitable

but also good for humanity. And I appreciate the statement about, let's think about the access, or the people behind the wheel. A lot of these doomsday scenarios are very fantastical. What if AI sets off nuclear weapons? Well, why the hell did you give AI access to be able to set them off? You can just not do that, you know. But those are people who worry about the singularity, right? I mean, when AI is smarter than us and takes over, and we become

subservient. I mean, I think most AI is smarter than me from, like, an ability-to-answer-Jeopardy-questions perspective, right? Like, I probably couldn't beat the average AI system. Can I jump in on that? Yes, please. What gets me about all these doomsday scenarios is this weird sense of inevitability, that this technology is just going to appear and squash us puny humans.

We don't have to make this. We don't have to give it the nuclear codes, as you said. We don't have to make it so that it has any power over us at all. But we also already have some advances in medicine, you know, where doctors are doing things doctors couldn't do. Aren't there already positive results, Will, of using AI? Oh, yeah, many, many. And yeah, medicine is a great example. I mean, just the everyday conveniences that we're already seeing from chatbots, I think, are great.

In these conversations, when we go straight to AGI, I tend to come across as, you know, a sort of naysaying crank, which is not a good professional look for someone who is very much, and has been for more than a decade, a champion of this technology, which I think is amazing. It's just that a lot of the interesting, brilliant things

that we could talk about get derailed when we talk about doomsday scenarios. Well, what are they? What are the interesting things that we should be talking about? I can chime in on some of that. I mean, we are likely to cure many cancers in our lifetime because of the advanced protein-folding, AI-driven technologies that have been created. Like, this is a fact. We have advances in genomics and medicine because of the models that have been made.

We have better weather prediction models. I live part-time in Texas, and hurricanes are a very big deal; we have better weather prediction models that can tell us weeks in advance that a hurricane may be coming, because of AI. And the thing is, this just won't capture the imagination the way a quote-unquote talking humanoid bot idea will. But all of that is AI. And, as Will is saying, it's a disservice,

driven by multiple narratives, companies included, to push us to look at AGI when we can actually celebrate a lot of the great stuff that AI is being used for today. Let's go to the phones, to Marlena in Washington State. Hi there. Welcome to Science Friday. Hi, can you hear me? Yes, I can. Okay, my question is: what is AI going to do to stop sucking up all the electricity in our environment? I live in a small rural town, and a small rural town really close to ours

has made a deal, and they've built a big data center, and we all know this is how AI generates all its juice. It made a deal with this town waving the employment flag, and now the residents of this small town are experiencing rolling blackouts. I call this predatory behavior. And I would like to know what these billionaire owners of AI are going to do to be protective of people and save more of the energy in our environment. I mean, come on. This is global warming.

Sounds like Marlena is mad as hell and isn't going to take it anymore. Will, what do you say to that? Good point. Yeah, I think that's a really good point, especially if you had this affecting your neighborhood. I think we are going to see more of that: these massive data centers get set up, and they suck the power out of the local grid. So, there are lots of things that could be done.

And let that hang in the air a minute. There's a lot of work being done to reduce the size of models, and a smaller model can do many of the things that a larger model can do, for less power. There are things that could be done around the way these models are trained: train them more efficiently, rather than just throwing

every single bit of data you can scrape up at them. Maybe curate that data and show them the sort of data that's actually going to be more useful. So the training steps could be fewer, again using less electricity. That's all on the side of actually building the models. The data centers, of course, are then used to run the models. You know, we're all invoking ChatGPT for our recipes and everything else, and every time we do that,

it's sucking up a lot of power. So, I mean, we could be making more efficient chips, we could be running on renewable sources of energy and finding ways to store that energy in the data centers with batteries, et cetera. All of which is just to say that solutions are

available. Will they happen? That is a completely different question, because right now this is a race to the bottom. All these companies, having invested everything they have into this race, need to come out on top, you know, with the punchiest, most powerful AI model. And I think sustainability needs are going to be an afterthought. 30 seconds to go, Rumman. Well, Microsoft is

rebooting Three Mile Island, for those who are local and know what that is. I remember it well. Yes. And, you know, when pressed on this, Sam Altman was sort of hand-waving that we should have fusion technology in our lifetimes, and then everything will be fine. So it seems like they, too, are kind of banking on scientific advancements to do the work for them. All right, we've run out of time. I'd like to thank my guest Will Douglas Heaven, senior editor for AI coverage at MIT Technology Review,

and Dr. Rumman Chowdhury, founder and CEO of Parity Consulting and the Responsible AI Fellow at the Berkman Klein Center at Harvard. Thank you both for taking time to be with us today. And that is about all we have time for. Lots of folks helped make this show happen, including Shoshannah Buxbaum, Kathleen Davis, Diana Plasker, and Beth Rami. I'm Flora Lichtman. Thanks for listening.

This transcript was generated by Metacast using AI and may contain inaccuracies.