
The Key to Making AI a Benefit, Not a Hazard

Jun 01, 2023 · 34 min · Season 9 · Ep. 5

Episode description

The idea that artificial intelligence would someday replace humans in certain jobs is nothing new. Now, as some companies make plans for this new reality, it's still an open question as to whether AI should be feared--or embraced as a technology that will make the world a better place.

On this episode, Daron Acemoglu, an economics professor at the Massachusetts Institute of Technology, tells Stephanie that while it may be right to be concerned, people shouldn't be scared. They discuss a new book co-authored by Acemoglu, Power and Progress, and whether AI will yield benefits similar to those conferred by other technological and scientific advancements throughout history.

The key to making AI work in the long run, Acemoglu says, is for workers to maintain a role and a voice through protections like unions and government regulation. Without those guardrails, he warns, AI may indeed sideline more humans from the workforce.

See omnystudio.com/listener for privacy information.

Transcript

Speaker 1

Hello, and welcome to Stephanomics, the podcast that brings you the global economy. And we're dedicating this week's episode to a powerful book on a subject most of us have been thinking a lot about recently: how new technology can change the world. ChatGPT came out at the end of last year, had a million users in the space of five days, and that was pretty cool. Then in March, GPT-4 came out. Now that was a whole lot better, but also alarming. Journalists, lawyers, accountants, teachers could all see how it could not only help them do their jobs but make them redundant. Not sometime in the future, but next month. Should we worry then about where this will lead? Well, the standard version of economic history says not really. It tells a story where again and again people fear technology will make the world worse, think of those Luddites smashing up machines. But in the end it's better. Yeah, there are adjustment costs, a bunch of people lose their jobs, maybe, but overall, the majority of people get new opportunities, more rewarding jobs, better lives. That's the version of history I was taught in graduate school and have heard from fellow economists many, many times since, and we've heard it again often in response to AI. To be scared of this new technology, to believe it will hurt workers, they say, you have to believe this technological revolution will be different from all those that have gone before. Now you might think that's true. This time is different. ChatGPT feels different, but it sets a high bar for being scared. After all, the horseless carriage also felt pretty different at the time.

But two very distinguished economists have taken a fresh look at that history and decided the basic reassuring story about the past impact of technology on jobs and the quality of life of working people is wrong, or seriously incomplete. So if their argument is right, we should be worried about the way AI will transform our economy and society, because we are not now in a position to get the best out of it. Quite the opposite. That book is Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity, and the authors are Daron Acemoglu and Simon Johnson, both professors at the Massachusetts Institute of Technology. Simon's a former chief economist at the International Monetary Fund, and Daron won the John Bates Clark Medal for the best US economist under forty a few years ago. He also wrote possibly the most widely read book of economic history in recent times, Why Nations Fail. And he's here with me now. Daron, thank you very much. I'm really pleased we can have this conversation on Stephanomics.

Speaker 2

I am so happy to be here, and you gave such a wonderful introduction. I don't think I have much to add. No, no.

Speaker 1

I think we have plenty to discuss, and I'm sorry about the long introduction, but I wanted to make clear why I thought the book was so important, and we do have I think a lot to unpack about the past and the present and what you hope might be a better future. But maybe we can start at that endpoint with what worries you right now about the impact that AI and similar technologies might have on the world.

Speaker 2

Well, I think you put it so well. We need to be concerned but not scared. I think this is a turning point. There are many transformative choices we have to make about the future of work, the future of inequality, the future of democracy. And the two worst positions we can take are to say everything's going to be fine, just let the experts worry about it, or to be scared about killer robots.

Both of them pacify us and push us not to take up our responsibility of trying to shape the technology, trying to get involved in decisions about the future of this technology.

Speaker 1

And I guess there is a lot there, but one piece of it I was struck by, because I feel like it was definitely the underlying premise of many, many of my economic history lessons. And you talk about the productivity bandwagon, so maybe you should explain that.

Speaker 2

Let me actually take a step back. Simon and I are definitely not arguing that we haven't tremendously benefited from industrial technology and the scientific advances. Today we are so much more comfortable, so much more prosperous, so much healthier than people who lived three hundred years ago, and that's thanks to industrial technology and the use of scientific knowledge in further improving technology in every aspect of our lives. What we are questioning is whether that was an automatic process. So the techno-optimism that you so eloquently described at the beginning is that we don't need to take any drastic actions or become involved in shaping the future of technology, because there is a very powerful, inexorable, automatic process that's going to bring all sorts of good things to people. And at the center of it is what we call the productivity bandwagon. Most of us earn our livings by supplying our labor, so the process via which technology improves and brings shared prosperity has to go through the labor market, which means wages have to improve. And the productivity bandwagon says that if productivity grows, if technological capabilities improve, that's going to create a very powerful force towards employers wanting more labor, and that raises wages and employment. If the productivity bandwagon breaks down, or doesn't have many people on it, then shared prosperity becomes a dream. And the real fear is that that's exactly what could happen with AI, and we've seen some of that happen with digital technologies over the last forty years anyway. History says the productivity bandwagon can work, but only if we create the right institutions and the right direction of technology.

Speaker 1

So what matters most for making an outcome better or worse for workers?

Speaker 2

Perfect. That's exactly the right question. And what Simon and I argue is that there are two pillars to it, and you can see them very clearly in most historical episodes. First, you need a direction of technology that doesn't just automate work. Automation is always going to be with us, but the technology shouldn't just automate work; it should at the same time create new tasks, new capabilities, new things in which human labor can be productively used. And second, you need an institutional framework in which there are forces, such as worker voice and worker power, that induce employers to share some of the gains with workers. If either of those two things breaks down, then we're in trouble. If both of them break down, that's really damaging. And that's the age we are living in. There is no worker voice. AI is being used to sideline workers even more in the production process, and there isn't a democratic process that's actually contributing to a sharing of prosperity, or to reshaping the direction of technology.

Speaker 1

You're talking about where we are now, which I think we should definitely get to. But for those who sort of feel like, oh, this sounds like people who are just seeing this new technology and fearing the worst, I think at least we should show how it's rooted in that understanding of history that you mentioned, and maybe draw some contrasts when you're looking back, for example, at the nineteenth century. I mean, there's a lot of the book which is looking at what happened in the UK in the Industrial Revolution. So maybe say a little bit about the sort of contrasting impact of the different technologies that came in there.

Speaker 2

Thank you for bringing that up, absolutely. I mentioned already that we are so fortunate to have had the industrial technological improvement that started somewhere in the UK in the middle of the eighteenth century. We are so fortunate, but the people who lived through it weren't. The first eighty, ninety years of the British Industrial Revolution were dreadful for people. Incomes stagnated, working hours expanded, working conditions worsened. Work in factories came with much greater discipline, much less autonomy. People were packed into unhealthy cities in which their life expectancy dropped, and there was no worker voice, no democratic process. The whole thing was just a very difficult time for most working people. But it didn't remain that way. In the second half of the nineteenth century, you already see higher wages, much greater use of technology for improving conditions, both in public infrastructure, where health improves, and in factories. And why? Why did that happen? Was it automatic? Again, our reading of history, with a lot of evidence, says no, that wasn't automatic. There was a complete transformation of British institutions, with democracy, public-sector involvement in cleaning up the cities, education and other public infrastructure, and, very transformatively, trade unions. You know, being a unionist was illegal in the United Kingdom, and that started changing in the second half of the nineteenth century, and that worker voice, worker negotiation were critical. As part of that process, the direction of technology changed significantly. What brought part of that misery was the automation focus and the very high-discipline modern factory system. All of that started improving. No longer could you allow child labor or, you know, twelve-hour days in mine shafts for children as young as five. All of these things were institutional in nature as well as technological.

Speaker 1

And actually I was struck by the example. I mean, growing up in Britain, you feel like you hear about the Industrial Revolution all the time, and indeed about child labor and some of the worst aspects. But in your book you sort of bring home what a deterioration in circumstances it was: six- and seven-year-olds hadn't been doing twelve hours' work before, certainly not in the dark underground, and so there was this period where people's lives were actively worse, which I think is worth reminding people of.

Speaker 2

Absolutely, absolutely, and people were very exercised about it. I mean, at some point it reached such alarming proportions that middle-class Brits, you know, said this cannot go on. But all the wishful thinking would not have done anything unless we changed the institutions and the direction of technology, and that's what Britain managed in the second half of the nineteenth century.

Speaker 1

So there's quite a few things there, because there's the nature of the technology and whether it tends to just replace workers or actually also produce more demand, more other kinds of jobs for workers. There's also the institutions surrounding it. But part of that is about the companies that are producing the technology, that are driving technology, and how powerful they are relative to other parts of society. And I guess the example of that is the Gilded Age in the US. So how does the sort of market power of companies feed into it?

Speaker 2

I think market power is one of the very important elements as well, because new technologies, especially those that make better use of labor, come out of the competitive process. A more diverse approach to innovation is an important part of it. Now, large companies have always been with us. I don't think we're going to be able to reverse that, and I don't think we should. Ford Motor Company started small but became one of the most important employers in the United States, and it was at the forefront of automating work. But it was also at the forefront of creating new tasks, much better working conditions for workers with higher wages, and accommodating workers into the production process so that it could actually reduce turnover. But all of that becomes much more likely when we have countervailing powers, and countervailing powers have to have several sources. For large companies, you need competition. If they become so secure that nobody can replace them, that's not going to be good. You need countervailing powers in the form of worker voice, worker involvement. Trade unions provided that, the labor movement provided that in the past. What will provide it in the future? That remains to be seen. And you need government regulation in there. You know, if companies can do whatever they want to their customers, to the environment, to workers, that's not going to lead to good outcomes. So a regulatory framework is also quite critical. Overall, I think a good way of thinking about this is democratic control of technology. Technology is something that affects us all. To say that one or two geniuses in Silicon Valley have to be responsible for the future of technology, that we all have to take whatever is dished out to us, that's not the right perspective. And the democratic control comes from companies being at the forefront of technological progress, but those companies being threatened by rivals, accountable to their workers, and accountable to society through democratic means.

Speaker 1

Because we do tend to think of invention and technology as being outside the system: that people are sitting around in their labs, or wherever you picture them, or in their garages, coming up with their ideas, and it's only after a certain point that they're interacting with the broader world; that there is this sort of natural process of invention that happens. It doesn't feel like that process has ever been really organized or run by government, or with any kind of democratic input. So can you make it democratic?

Speaker 2

Interesting. You're right to some degree, but there is a broader ecosystem. First of all, a lot of innovation is coordinated by large companies today. If you look at the United States, most R&D is done by large, publicly traded companies. But second, even the innovation that takes place in universities, in people's garages, in small companies is influenced by the market system, by where people think profits are, and it's influenced by what we call a vision: what is the best use of our scientific knowledge? And I think we have created an incorrect direction of technology for both reasons. We have provided the wrong market incentives to digital technologies and we've provided the wrong vision. And they've both met in saying digital technologies should be designed by geniuses to be imposed on people, and they should be used for automation, for surveillance, for data collection, for reducing labour's involvement in the production process, for creating some sort of amorphous autonomous machine intelligence. All of those are related, and they're the wrong direction. What we call for in the book is that we should strive for machine usefulness, not machine intelligence. Machines are valuable to us because they enable us to do useful things. The calculator, Wikipedia: those are amazing inventions because they expand what we can do. The amorphous notion of an AI so good that it can do everything humans do works, in practice, not so well. But that vision, which guides a lot of research, is the wrong one.

Speaker 1

And actually, when you talk about vision, you had a fascinating phrase, which for me was quite resonant in a kind of broader way: the vision oligarchy. Tell us a bit more about that.

Speaker 2

You know, at the end of the day, I described a vision, which is this machine intelligence created by a few very smart engineers and scientists that's going to transform everybody's lives. That is a very powerful vision. But where did it come from? Well, it came from Turing to some degree, but in a very different context. And it got operationalized by a number of very like-minded people in Silicon Valley who've pushed this vision and achieved some degree of commercial success early on, and who are now influencing the rest of society through their oversized role in the media, in all public debates, in policy, and of course their amazing wealth. And that's what we mean by the vision oligarchy: a small group of people who have captured the vision of what we can do with technology and what we should do with technology.

Speaker 1

And I guess moving on to how we think about the more recent waves of technology, you're quite damning about the impact that recent automation, that AI, has had. I guess some would say it's just too soon to tell how some of these technologies are really going to affect the nature of jobs and the workplace.

Speaker 2

That's right, you're right, it's too soon to tell. But there is a problem in there: if it's too soon to tell, why are we rushing to automate work so quickly? What's the rush? I have no doubt that automation will be part of our future. It has to be part of our future. There will be things that machines can do better than us, and there's nothing wrong with that. But we should do that only when they are truly better than humans, and in a humane way, rearranging work in a manner that's consistent with human priorities. And second, we should at the same time create better jobs, better tasks, for humans' unique skills. That's the problem: we are rushing to automate work even when it's not so productive. Customer service is done by AI in many places, and nobody's happy with it. We've displaced workers, and we are faced with these menus that are supposed to be smart. They never work. Productivity gains from that are minimal, perhaps negative. But we're rushing to do it. And at the same time we're not creating any new tasks, new jobs, new capabilities. And we can do that: AI, or large language models, they have the capacity to help us. As you said in your introduction, we're not doing that.

Speaker 1

I think what you've just said describes a lot of the technologies that we've lived with over the last few years. But it does feel like ChatGPT and that much more interactive technology is different to interact with, and certainly seems to be learning faster than many of these other technologies. People who are making an effort to make it part of their lives, whether it's professors or lawyers or people working in human resources, are finding very quickly that it can change the way they do their work. So is there something a bit different about generative AI?

Speaker 2

Well, I would say first, it is impressive, but part of the reason why it's impressive is because that's how it's been marketed. What people are impressed by with ChatGPT is that it gives authoritative answers. It can write sonnets and poetry. It feels like these are things machines shouldn't be able to do, and that's what we're impressed by.

Don't get me wrong. I do completely believe that large language models and generative AI can be used in ways that are very positive for humans, and some people have found ways of doing that with ChatGPT. But ChatGPT's architecture is not optimal for that. What we want, if we believe my pitch for machine usefulness, is that these programs should make us better in our jobs, in our lives, in our cognition. It doesn't work if ChatGPT gives you an authoritative answer without explaining to you why you should believe it: you either believe it, which is not good, or you completely dismiss it. If I were to give you an argument, you would ask me, why are you saying that? What's your evidence? Where does that come from? Give me the provenance of that. You can't do that with ChatGPT. It's not designed that way. If you ask it to provide references, it will make up some. It never really processes the reliability of information. It is not designed so that it can interact with you in a way that filters the vast amount of information that you have available but don't know which of it is reliable. So there are many ways we could design these machines, or these models, differently that could be more useful to us, but that's not the direction the industry is going. Many employers are excited by it not because it makes their labor more productive, but because they want to eliminate labor.

Speaker 1

And it's interesting, because I guess a lot of the commentators, I was reading something by Ethan Mollick the other day, you know, who are excited about it have tended to be the ones who are finding ways to make it more valuable for them. And it doesn't feel like a big leap, you know. It comes down to the way we interact with it, whether we trust its answers, whether we come back. So that doesn't seem like a big change. What you've described, changing the whole environment in which these technologies are implemented, the whole public attitude towards them, that's a pretty big change.

Speaker 2

Absolutely, absolutely: the power, the attitude, the regulation. I think these are big changes. And you're absolutely right, there are people who are using ChatGPT in a productive way; there are some companies that have already used it in productive ways. But I think the modal attitude of the corporate world is not the healthy one. And that's partly because of the corporate world, but a lot also because of the way that the technology is structured right now and is marketed right now.

Speaker 1

And we should get into that, because you describe certain things about it: the way it's been driven towards automation, the way that it's led down a path of being used for surveillance of individuals, and the impact that that's had on democracy. So tell us about that, how you think the technology itself has been pushed in a certain direction by its origins.

Speaker 2

Well, let's start with surveillance. The current field of AI is intimately intermingled with data collection, and it is hungry for data. Employers are hungry for getting more information about their workers; governments, especially authoritarian governments, are hungry for getting more information about dissident activities. So there is a confluence of factors that is intensifying the monitoring and surveillance of both citizens and workers. I think that's one of the things that we have to worry about, and generative AI is going to push more in that direction. Automation is related but quite a distinct phenomenon. US corporations are under pressure, competitively, because of their shareholders, because of the vision of their managers, to reduce labor costs. Nobody in the schooling system is, for example, talking about: let's hire more teachers, give them better tools, make them more skilled, pay them higher wages so that they can do a better job of creating the human capital of the next generation. But that's what we need. We need much more individualized teaching in the education system. In the United States and the United Kingdom, a lot of children from low socioeconomic backgrounds are having trouble getting the right type of education, the right type of skills, from the schooling system. More individualized education targeted to their strengths and weaknesses could be a great boost. We can use AI for doing that, but nobody is doing that, because that means actually hiring more teachers. What schools are interested in is hiring fewer teachers. What companies are interested in is: let's eliminate more of the blue-collar tasks, let's get rid of some of the clerical tasks. So that mindset needs to change.

Speaker 1

And of course the response to that has often been: well, those individual companies, particularly in the US, will make their decisions about how many workers they want, but the increased productivity will create more jobs in other parts of the economy. You think it just won't operate this time, or has it not always?

Speaker 2

It will, it will. If it really increased productivity by a tremendous amount, it would. The question is, can we get, for example, let's say three percent productivity growth in real terms every year by automating? I think that's very difficult. You're automating a few tasks at any given point in time; even if automation is accelerating, you're going to be automating perhaps three or four or five percent of the tasks that humans do. To get that kind of huge productivity growth from automation is very difficult. That means that machines need to be ten times as productive as humans. We haven't done that in the past. In the past, we've gotten very rapid productivity growth when we made humans more productive, and I think it's therefore no surprise that today we are in a productivity slump around the world. We have five to six times as many patents in the United States as we did forty years ago. We have new widgets every day, amazing algorithmic breakthroughs in AI, and aggregate productivity is very, very anemic; in the United Kingdom, it's stagnant. I think that's a cause for alarm, and it says that we're not using these technologies the right way.
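To make the arithmetic in that answer concrete, here is a minimal back-of-envelope sketch, assuming a Hulten-style approximation (aggregate productivity gain from automation is roughly the share of tasks automated times the cost savings per automated task). The numbers are illustrative assumptions drawn from the figures Acemoglu mentions, not calculations from the episode:

```python
# Back-of-envelope sketch of the automation arithmetic above, assuming a
# Hulten-style approximation: aggregate productivity growth from automation
# is roughly (share of tasks automated) * (cost savings per automated task).
# Inputs are illustrative assumptions, not figures computed in the episode.

task_share = 0.05      # assume ~5% of tasks automated (top of the 3-5% range above)
target_growth = 0.03   # the hypothetical 3% annual aggregate productivity growth

required_savings = target_growth / task_share
print(f"Required cost savings per automated task: {required_savings:.0%}")
# -> 60%: machines would have to perform those tasks at well under half the
# human cost, which is why automation alone rarely yields growth on that scale.
```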

Speaker 1

And it's so interesting, because of course people look at the fact that there's been a productivity slump, particularly in the UK, and often that's used as a reason why we should be accelerating our introduction of these technologies.

Speaker 2

Yes, so that's the question to me: are you going to get out of that productivity slump by doing more AI-driven customer service and self-checkout kiosks? Is that the way to double UK productivity? I mean, you know, sure, if we did self-checkout kiosks together with better things, perhaps it could contribute.

Speaker 1

Look, and as you point out in that example, that's just labour shifting, because we now do the work, not the cashier. In banking...

Speaker 2

ATMs were introduced, but at the same time, people who used to be bank tellers became analysts and customer service reps, started doing other back-office tasks. So actually banking productivity increased during that period. We're not doing that latter part. We're doing ATMs on overdrive.

Speaker 1

Okay, so what do we do? How do we fix this?

Speaker 2

Well, first, we need to change the narrative. This is part of it. We need to stay away from blind techno-optimism. We need to stay away from a focus on killer robots, great in Hollywood movies, but that's not what we should be worried about. We should be concerned about the direction of technology, and we need to center the discussion on how we can use these technologies better for democracy, better for workers, better for inequality. Then we need to start building institutions. This is not going to happen automatically, which means that we need countervailing powers. How do we have a better regulatory system? We have lost the regulatory muscle in the West. We used to regulate public utilities well; we used to be able to regulate banking and financial services. Those have become harder, and we have not even tried to regulate digital technology. We need to build better democracy. Democracy has been in decline. The labor movement: we need some sort of labor voice, and the old model of trade unions is probably not the one for the future. How do we build an organic labor movement? And then we need to talk about specific policies. Are we using the right tax tools? Are we creating the right support for private R&D, backed up by public R&D, for the right direction of technology? Is the current business model of the tech world, for example, centered on data collection and individualized digital ads, the right one? Or should we actually tax digital ads? Are the likes of Google, Microsoft, Facebook too big? Should we think of breaking them up? There are many policy levers that we should be talking about. I don't claim I have the answers on those, but we make some suggestions in the book. Our purpose is not to say we know these policies will work. We need an ensemble of policies, and it needs to be the result of a democratic process and expertise that brings us to the right solutions.

Speaker 1

Are there any reasons to be hopeful, looking around, that government or broader society will be able to grapple with the kind of things you just talked about? I mean, we currently have in the US a Biden administration which is, by recent standards, pretty activist in these directions, talks about reducing the monopoly control of the big tech companies. It's quite pro-worker in the way that it's designed a lot of these big investments in it and in green technology, and yet it's realistically going to achieve a fraction of what you've just talked about.

Speaker 2

I think the Biden administration has done very well, and indeed, as you said, it's probably the most pro-worker government the United States has had since FDR, and I applaud them on passing two major policies that many people would have thought would have been impossible, the CHIPS Act and the IRA. But despite those high ambitions, I think they're not sufficiently focused on the direction of technology and creating the right technological environment for generating jobs for all kinds of skills. So, yes, there are many reasons to be concerned. There's only one reason for a very cautious optimism. I believe in the unique skills of humans, and that's why I think automating work and surveillance are not the right direction. I think there is a diverse set of capabilities that humans have that can be very well utilized in a new work environment. And I also believe that human ingenuity can be the best way of furthering our productivity growth. So I hope that those great opportunities are used.

Speaker 1

Final question. You're an economic historian, looking back at that history that you describe with Simon Johnson in the book. One thing that is clear is that although the right kind of institutions to make technology work for people did appear, they came much more slowly than the technology itself, and that technology was not coming nearly as fast as what we're seeing now. When you look realistically at the history, does it not take an enormous amount of negative impact for there to be a response from society to make this work better? Aren't we going to have to live through quite a lot of bad things?

Speaker 2

Great question, and you know, that's something that worries me. We don't actually discuss it in the book, because it has gelled in my mind more recently. The early Industrial Revolution created a lot of misery, as we talked about, and there was nothing to be belittled about that. But when reforms and policy responses and the labor movement's reactions came, it wasn't too late, and things could be reorganized. Things are happening much faster today. Are we going to be too late, if not today, then in the next month or next year? I don't know the answer to that, but I was worried enough that I was one of the early signatories of the letter that asked for a six-month pause on the training of large language models. Not because I agreed with the text; there was a lot of stuff in there about superintelligent AI that I definitely don't worry about. That's not the top of my agenda. But I thought it was worth building a broad coalition of academic and entrepreneurial voices to say, let's just take some time. The loss to humanity if we are six months late in implementing some AI technology is trivial. The damage we can do by irreversibly destroying democracy or cementing an approach that's not the right one could be much, much larger. So take some time. There's no rush here.

Speaker 1

But I guess the other lesson of history is there are some things that are unstoppable.

Speaker 2

I don't think technology's direction is preordained. And sure, technology should not be stopped, and in some sense advances are unstoppable. But we can choose its direction, and doing so deliberately, building the right institutions, is feasible.

Speaker 1

Daron Acemoglu, thank you so much.

Speaker 2

Thank you, it was a true pleasure to be here.

Speaker 1

That's it for this special episode of Stephanomics. We'll be back next week. In the meantime, you can, as always, get a lot more economic insight and news from the Bloomberg Terminal, website, or app. This episode was produced by Summer Saadi, with special thanks to Daron Acemoglu and Ruth Kick. The executive producer of Stephanomics is Molly Smith, and the head of Bloomberg Podcasts is Sage Bauman.
