Bloomberg Audio Studios, Podcasts, Radio News.
Hello and welcome to another episode of the Odd Lots Podcast.
I'm Joe Weisenthal and I'm Tracy Alloway.
Tracy, I feel like AI is a great thing for anyone who wants to have an opinion on anything. It's like this blank canvas out there on which you can hang any idea you have. It's just a great moment for pontificators in general.
Well, not only can you hang a bunch of different opinions on it, but it can generate those opinions for you.
Yeah, you can, that's right.
Like you can just go to ChatGPT and say which jobs are going to be lost thanks to you, and it'll like spew some answer forward based on the collective wisdom of trillions of words that people have typed over the years.
I do think, though, if we're talking about one opinion, in particular, the dominant opinion at this point in time, it does feel like there's a lot of nervousness about this new technology and what exactly it means for the economy,
what specifically it means for jobs. And so you see all these headlines that AI is going to lead to a bunch of job losses, that it's going to basically be a new technological revolution that plays out very similarly to the computer revolution that led to the destruction of a bunch of sort of middle office jobs, or the industrial revolution that led to a loss of skilled artisan jobs. And we've seen some hints of that, to be fair,
So I'm thinking back to last year. I think it was in the summer, maybe in June, and the Challenger Jobs Report came out and for the first time ever, they included a line about job losses stemming from AI.
Yeah, although I'm just going to say right here then, I think when a company lays off workers and says it's due to AI, I still have this assumption that it's like, we're doing badly, so we're gonna put a positive spin on it by making it seem as though our layoffs are the result of some internal productivity breakthrough that we're getting from a chatbot. So, like, I don't quite believe it, but I think.
That's totally fair. That's totally fair. But I think clearly this is something people are paying attention to. You are starting to see some of the economic reports sort of break this down. At least the Challenger report if not like the BLS and things like that. So there is this idea hovering over the economy at the moment, which is, Okay, maybe AI will be great for productivity, we'll get that boost, but what does it mean for jobs? Right?
Basically everyone in every realm loses their job and is on the UBI drip, and Sam Altman is the last person who is employed, is it? But like, I don't know, I get freaked out. It's pretty good. I use AI chatbots all the time in my work, and it's like, well, maybe it could one day be a better host than myself for a podcast. It seems possible to me. I am anxious. Of course, people also like to project onto their perceived
ideological enemies. It's like, oh, all you English majors are going to lose your jobs, hahaha. And then the English majors all go, all you coders are going to lose your jobs, and you're going to need English majors then.
It's just an endless thing.
And actually I think I tune most of it out, because it's so ambiguous in my view where this technology is going that there are very few people I even want to hear from on the topic, because there's just so much extreme uncertainty.
Still extreme uncertainty. As you mentioned, people kind of harness it to further their own biases or arguments. But you're right, there are people who are good on this topic, and we're about to speak to one of them.
That's exactly right.
So last month, there was this really interesting headline that I saw in Noema Magazine, and it sort of felt like this provocative, maybe clickbaity type headline that said AI could actually help rebuild the middle class, which is very counterintuitive, very much the opposite of what we're
talking about. But then I noticed who the author of the piece was, and it's someone whose work is very strongly associated with forces in the past, and forces in technology, that have been destructive to the middle class and
have caused great labor market upheaval. And so if someone who has sort of been watching this exact topic, the intersection of labor market upheaval and technological change, is saying, actually this could be good, and this person has a track record in this area, I'm like, okay, this is an argument maybe I'll pay more attention to than the random person doing a Twitter thread.
I'm into it. As you mentioned, we're speaking to someone who is an expert on this particular topic and specifically has written a lot and researched a lot about previous labor market shocks, including the China Shock. So, competition from China in the realm of manufacturing in the sort of nineteen nineties, early two thousands. So I'm very excited for this conversation. I am interested to hear an argument that's not just AI is terrible and it's going to take all of our jobs.
Absolutely well, I'm really excited. We do, in fact, have the perfect guest. We are going to be speaking with David Autor. He's a professor of economics at MIT and co-director of the MIT Shaping the Future of Work Initiative, and he's really known for his work on the China Shock and the devastating impact that China's boom in tradeable goods, particularly after its accession to the WTO, had on various communities within the United States that were sort of dependent
on regional manufacturing. So, David, thank you so much for coming on Odd Lots.
Thank you so much. Joe and Tracy for inviting me. I'll try not to be clickbaity.
That's okay. It's okay.
It's okay to be clickbait if it delivers. And the other thing about this article, by the way, is that it wasn't like a paragraph thought piece. This is clearly some serious work, which we obviously appreciated and which made me take it seriously. But before we get into this, or even the China Shock or AI in general, tell us, what has been the thrust of your career over time?
Like, what is sort of the main interest of yours that spans from the effects of globalization to now AI, et cetera?
My focus has always been on forces that shape opportunity, particularly for workers without four-year college degrees, the majority of workers in the United States and of course elsewhere, who have been so buffeted by computerization, by globalization, by changes in institutions, including de-unionization and the decline of the minimum wage in the United States. And so that is the common focus of my work, and that has included, you know, a lot of work on technological change, computerization, the China trade shock, and many other angles to that. But that kind of unifies it. You know, I think the labor market is the most important thing in the world.
I think that's where people derive most of their income, spend most of their time, derive identity from, and so things that affect the quality of jobs, the opportunities that people have, are just quintessentially important and are going to shape the structure of their lives, you know, more than the quality of entertainment, more than the ease of transportation, more than, you know, what fashion is available. This is really a big deal.
So in the spirit of this discussion, I asked ChatGPT to poke intellectual and logical holes in this article. So let's just start there. Number one. No, I'm joking. I did actually do that, and some of them, some of them are quite good, and I will get to them later. But maybe just to begin with, could you talk about the current discourse on AI and why there seems to be this distrust of new technology. What is
that predicated on. I mean I kind of referred to it in the intro, but there is past history, obviously with major technological advances and booms that have led to certain outcomes in the labor market. How does that inform the current discussion.
Sure, so people are understandably very concerned about all of these technological forces, because they are disruptive and they create winners and losers. There's no sense in which everyone is better off because of a new technology. So you mentioned the industrial era, and the Luddites rose up against the introduction of power looms and smashed them, and they're often derided historically,
but they were right. The Industrial Revolution, the mechanization of weaving, wiped out the careers of artisans and made their work untenable, and wages didn't rise for decades into the Industrial Revolution. So that was very displacing. Ultimately it raised living standards, but it took a long time and the beneficiaries were not workers. The computer revolution has raised productivity,
but it's been very unequal and polarizing. It's automated a lot of middle-skill, middle-class work in factories and in offices. It's been great for professionals, but for many other people it's just meant that, because they can no longer do those middle-skill jobs, they're often found in food service, cleaning, security, entertainment, recreation, and those are valuable,
laudable activities, but they don't pay well, because they don't use specialized expertise or training, so most people can do that work almost right away, so it tends to be low paid. So I think there's many reasons to take this very, very seriously and think carefully about what the implications are.
Before we even get to AI, talk to us more about the computer revolution. Because, like I said, I saw your piece and I first thought of the China Shock and your work on that, and it's like, okay, this is interesting. But actually, I feel like there has not been a lot of general conversation about the sort of unequalizing effects of the computer revolution,
like, how did that happen? What does the research show about the timing of the introduction of computers, and then this sort of, I don't know, maybe barbelling or fragmentation of what happened to workers?
So you know, this really begins in the nineteen eighties and it continues over at least thirty-five years. And you know, a very simple way to boil it down is to say, look, what are computers useful for? They're useful for following rules and procedures, right. They don't think, they're not creative, they're not problem solvers, they don't improvise. They follow codified rules and procedures. But that describes a lot of middle-skill work. Right, whether you're in an
office or you're doing repetitive assembly work, the ability to accurately carry out codified procedures is a valuable skill. It often requires literacy and numeracy and training, and so the ability to automate that was a really big deal, and that had the effect of displacing many people who were doing what I would call these mass expertise jobs,
where they were following codified procedures. Right, it takes education to be a typist or a bookkeeper or someone who does filing in an organization, keeps track of accounts, and so a lot of that work takes real skill. To do high-quality work on an assembly line, you have to understand the tools, you have to understand the product and so on. So the fact that that work could be automated was not unambiguously good. It was good for, you know, it was good for productivity, it was good
for consumers, it was good for firms. But for the workers who had invested their careers in those activities, that was definitely a negative. And on the other hand, if you were a professional or, you know, a manager or a designer, researcher, doctor, having access to information and quick calculation, that's not your main job. Those are just inputs into
your decision making. So computers were very complementary to people who are decision makers, which is really the bulk of the professions, making high stakes decisions about, you know, important one-off cases. You know, how to care for a cancer patient, or how to design a building, or how to do a marketing plan. Right, computerization is extremely helpful for that. It doesn't displace your main job, it just
makes you more efficient at it. But for people who did not have the opportunity to get degrees and move upward into that work, what remained was a lot of work that's very hard to automate. You mentioned a lot of these hands-on manual jobs, so, you know, food service and cleaning, it could be transportation. And many of those jobs, not all of those jobs, are open to many, many people. They don't require much training or experience, and you don't get a great deal better at them
over time. And so because of that, because they're non-expert work, they tend to be low paid in all industrialized countries. Now, I want to be clear that not all hands-on work is low paid or low skilled in any sense. Right, if you're a plumber or an electrician, you're working in the skilled trades, right, if you do skilled repair.
There are many, many skilled hands on jobs, but the ones that have grown so much as the middle has hollowed out, have been much more of these personal service occupations that have low training and expertise requirements.
The thing this reminds me of, and I cannot, for the life of me, remember which guest this was, but a previous Odd Lots guest described this as, remember the scene from The Producers where Matthew Broderick is like an actuary or something, working in an office, and they're all toiling away.
That was Stewart Butterfield.
That's right, and then all of those people eventually get replaced by an Excel spreadsheet, right, Like that's the function that became Excel. So, David, I want to kind of press you on this point because I think it's a really interesting one, and I think it's essential to understanding your overall argument. But you make the distinction between information and decision making. So the idea that people can have
access to a lot of information. In fact, plenty of people would argue that people are drowning in information at the moment.
Information.
Yeah, but they're not necessarily using that to make the best decisions. Decision making is sort of a separate skill. Can you talk a little bit more about that aspect of your argument?
Absolutely. So I want to draw a sharp line between AI and traditional computing, which is what we've been discussing, because they're quite different. But before I do that, let me make a kind of meta argument that I think is useful for our discussion. So the concern we should be having is not about the quantity of jobs. We are not running out of jobs.
And in fact, you know, all of the Western world right now is at full or over-employment, and even during the whole computer revolution and so on, we didn't run out of jobs. It's not the quantity that matters. In fact, we're all facing a demographic crunch. It's the quality. Right, a world in which everyone's waiting tables is very different from a world in which everyone is doing medical care. And so what matters is not simply whether there is work, but
whether it's expert work that requires real skills. If it's non-expert work, work that anyone can do with no training or certification, unfortunately it will be low paid. On the other hand, if it's work that requires specialized knowledge and that is made more productive by the use of tools, and computers are a tool and AI is a tool, then that's good for labor, that's good for earnings, that's good for the quality of careers. And so we should be
thinking about expertise. Just to give you, like, a very stylized example, you know, think of the job of crossing guard and air traffic controller. These are basically the same job. The job is to prevent things from crashing into other things, right, airplanes from crashing into airplanes, cars from crashing into children on their way to school. But air traffic controllers in the United States are paid, you know, four and a half times as much as crossing guards, and the reason is expertise.
Almost anyone can become a crossing guard in the United States with no training or certification, whereas to become an air traffic controller requires years of school and thousands of hours of practice. And so even though those jobs do the same thing, because of the difference in skill requirements, they pay very different wage levels, and so we want to have jobs where expertise is valuable, not just where physical presence is the primary requirement. So that's what we should
be thinking about. Having said that, let me talk about how AI relates to that. So, you know, traditional computerization, as we've been talking about, is really about automating well understood procedures and rules, right, what we call formal knowledge. You know, how to do math, how to reproduce a document or check for spelling errors. And it's very limited because it cannot do what people do fairly effortlessly, which
is learn from kind of tacit knowledge. Tacit knowledge is all the things that you implicitly understand, that you infer from your environment, but you never formalize. Right. So, you know how to ride a bicycle, but you couldn't explain how it's done. Right. You couldn't sit down and explain, you know, the gyroscopic physics of a bicycle. You know how to make a funny joke, but you don't
know the rules for making a funny joke. You know how to recognize the face of someone after you haven't seen them for thirty years, right. But that's actually a hard problem, and we do it, but we do it based on some tacit understanding. And this has always been a barrier to computerization, because we couldn't code up the things that we understood only tacitly. We had to understand them explicitly and formally. So AI overcomes that barrier. AI essentially
infers tacit information from large bodies of data. It learns the associations between, you know, words and phrases and sentences, between pictures and words. It can look at a scan of a patient's lungs and make predictions or, you know, guesses about whether that patient has an edema or other medical disorders. It does that not because someone has written a program that says, these things tell you whether you have,
you know, a lung issue. It's because it learns the patterns from the data it's trained on, and so that gives it a really different set of capabilities. It gives it the ability to do what a lot of us do, or at least to supplement a lot of what we do, which is to sort of make decisions based on lots and lots of inputs and educated guesses. Right, so let's say, you know, you're a medical doctor, right. When you see a patient, you're not simply essentially reading from the
textbook in your mind about what to do. You understand bodily systems, you understand the biology and so on, but then you've had lots and lots of experience. So when you see an individual patient, you're going to make a decision based on a kind of translation from this formal body of knowledge plus all the experience you've had, to make a good judgment. And the stakes are really high, because obviously if there was just a simple rule book
for it, you wouldn't need a doctor. You need a person who can make a judgment about how to care for this patient and their individual needs.
It's so funny.
I was just talking to Tracy in a different context about the TV show House, which I'm really into. And like, you know, even though it's probably hyper-dramatized, there's this idea of how, still today, doctors don't really know a lot and they have to debate, well, what's actually going on here? And of course the show has some very entertaining depictions of those debates among doctors over what's really going wrong with the patient and what's the
proper treatment. So I guess you can go from there and just say, well, House was like the most brilliant and he had seen thousands of patients over the course of several seasons of that show, and so he had the best like intuitions. But basically it sounds like, thanks to AI, someone can harness those same intuitions without having seen thousands of patients before, like doctor House did.
I think that's a nice way to put it, is that what AI can do is provide kind of guidance and guardrails for decision making. So what do I mean by guidance and guardrails. By guidance, I mean, you know, had you considered this set of possibilities, these potential diagnoses, guardrails were like, you know, don't prescribe these two drugs together.
They negatively interact. And in decision-making work, having that kind of access to support, to a form of expertise, not that you should one hundred percent rely upon it, but that you can supplement your own judgment, is potentially very useful. So, you know, let me give you a concrete example, sticking with medicine. The job of nurse practitioner is pretty prominent right now. There are several hundred thousand in
the United States. They make quite a good living, about one hundred and thirty-two thousand dollars a year at the median. And they barely existed twenty years ago. And nurse practitioners are nurses with an additional master's degree who can do diagnosing, prescribing, treating, things that were only done
by medical doctors decades earlier. And this new occupation has come into existence, and it's terrific for patients in that it saves them time, it saves the healthcare system money, it creates a good job, and it
does a very important task. Now, this is not a technological creation. It's a social one, the result of nurses recognizing they were underused, fighting for a larger role, developing a training and certification program, and eventually, over the dead body of the American Medical Association, effectively carving out this new role. So it's not because of technology. However, at this point,
nurse practitioners are heavily supported by technology. Right, so electronic medical records provide all, or at least some, of the information you would need for good decision making, as do extensive diagnostic tests, as does software that looks for drug interactions, among other things. And it's easy to imagine that as we roll the clock forward, the set of tools that will support decision
making by nurse practitioners will improve dramatically. And as it does so, it will allow them to do more of the tasks that are currently kind of controlled by more
expensive professionals. And why is that a good thing? You might say, well, it's not a good thing if you're a doctor, necessarily. But we live in a world in which a lot of the bottlenecks are expensive decision makers, people who are the MBAs and the lawyers and the medical doctors and the architects and the engineers, and they all do valid work and they deserve what they earn, and I'm not disputing that. But it would be great to be able to create more people who could do that
work without it being quite so expensive. And the advantage of that is, if AI can enable more people to do good decision-making work, it actually can open up opportunity for people who are not the elite. Right, we have tons and tons of healthcare that needs to be done. It doesn't all need to be done by medical doctors. Or we have lots of software coding that needs to be done. It doesn't all need to be done by people from top universities with Bachelor of
Science degrees in computer science. We have tons of design that needs to be done, tons of care, tons of legal work. Right, so the potential of AI is to enable people who have training and judgment to go further with those skills. So it's not to make them unnecessary, but simply to extend their range by supporting decision making. So, just to give you another super concrete analogy, take YouTube. Right.
So YouTube is used all the time by people in the trades, among other groups, to try to figure out how to do a specific repair or diagnose a problem they haven't seen before. Now, who is YouTube really for? Well, it's not for the frontier experts, they already know how to do these things. Nor is it necessarily for the rank amateur. Right, you don't want to go to YouTube and say, well, how do I install and wire in a brand new central
house air conditioning system? I've never done anything like that before. Right, if you went to YouTube for that, you would quickly get yourself into trouble, because if you don't have some foundational skills, that could be a problem. On the other hand, if you were handy and you had some experience with electrical work, some experience with plumbing, some experience with carpentry, but you've never done an AC installation before, well, now you could go to YouTube and that would get you further.
So you could think of YouTube as kind of like a mini AI that provides guidance and guardrails.
I feel like Tracy has watched many YouTube videos in the last year to fix her Connecticut house.
This example hits home so hard, and I'll give you a specific anecdote, which is, my husband and I are currently building a shed and we're trying to put a roof on it. And we thought, like, okay, we put the plywood on the roof, and then we get some joist tape. We put the joist tape down over the edges, and then we put on the shingles. And we watched many,
many YouTube videos on how to do this. It turns out that you can't use joist tape when it's less than fifty degrees Fahrenheit outside, which it was, which, of course, none of the YouTube videos that are filmed down in Florida or wherever actually mention. And then secondly, it turns out that the ability of the joist tape to actually adhere to the plywood varies enormously depending on what plywood
you're using. So there are all these subtleties and nuances that you don't necessarily get from a ten minute YouTube video. Maybe that's not that surprising. But on this note, so you mentioned training, and you've spoken a lot at this point about the idea of AI being able to provide guardrails and context around decision making that maybe yeah, can resolve the bottleneck of expensive decision makers, as you put it, by creating more of them or allowing more people to
tap that function. I guess my big question is how much of this is just going to be Well, we add a new layer of training that people have to do so you can use AI, but you still have to know how to use AI. You still have to understand the result that it's spitting out and interpret that. You still have to know how to actually apply and use that result. Are we basically just replacing one skill set with another.
It's a good question. We want it to require skills, right? If everyone is expert, no one is expert, right, that's important. The question is whether it can aid the acquisition of expertise or whether it just gets in the way, another thing you have to certify on. We now have, you know, a bunch of evidence on AI in specific applications, and where it works well and where it doesn't.
So for example, you know, some students of mine, Shakked Noy and Whitney Zhang, published a paper in Science last year where they gave ChatGPT 3.5 to people who were doing advertising writing and marketing plans. And these were people who were college graduates who do this for a living. And one group just used the
standard tools, so basically the Internet and word processors. Another one actually used the chatbot, and this was early enough that most people didn't already have it. And there were a couple of really nice results. So, first thing, it saved everybody time. It cut the time it took people to do this work from about thirty minutes to about eighteen.
The second is it improved the quality on average. So the output of the people using this tool was judged, by other college graduates who were not confederates in the experiment, to be more precise, more concise, and more accurate. So it improved the quality of work and saved time. But then the most exciting result was, if you looked at the quality range of the work people did, the least capable writers using ChatGPT were about as good as the median writers not using it. So
it kind of leveled up the bottom. And we've seen this in other places as well, folks doing customer support. The example I'm thinking of is a kind of enterprise software product, and the customers chat in through a chat window, and then the company installed a tool that suggests responses to the customers' chats. You don't have to use them, and it will suggest not just technical responses, but polite responses and so on, to keep the customer from
getting overheated. And the result is that it speeds the rate at which people learn. So it used to take people ten months to reach peak capacity. Now it takes them about three months. They're somewhat faster when that's done, so it's not that it eliminates the training or learning. Everyone starts off bad at this job, but they get faster. They converge towards expert level more quickly with this tool.
And also, really interestingly, people quit a lot less. And the reason is, you know, customer service work is actually really difficult. It's very heavy on emotional labor, and you have to take a lot of incoming abuse from customers. It's hard to keep your cool. And the sentiment analysis of the chats that occurred through this tool is that it basically reduced the level of hostility from customers to workers and from workers to customers. So it
actually did a lot of the emotional labor. So it didn't eliminate the need for skills in doing this work, but it enabled people to become more efficient, more rapidly, with less stress. And so that's the good scenario. There's a lot of work that needs to be done. And right now, what are the most expensive things? The things that are growing more and more costly all the time are education, healthcare, legal services. Why is that? Why are
those things getting so expensive? Well, during the industrial era, we got really efficient at manufacturing goods. Right, so TVs, automobiles, coffeemakers, right, mobile phones, these things are actually remarkably good and relatively cheap. Why? Well, we've automated them and the labor content is relatively low.
On the other hand, healthcare, education, law, right, we've not gotten any more efficient at those things, and they require people who've gotten more and more expensive over time, because as we've automated the other work, the people who are the degreed professionals have become the bottleneck. So that slows the growth of productivity. It makes the cost of living higher for the typical person. Right, the typical person is not a lawyer, is not a professor, is not a doctor.
But they're paying for all those things. So if we could enable more people without as much training, and I don't mean no training, I mean some training. If we could allow paralegals to do more legal work, if we could allow nurse practitioners to do a larger range of medical tasks, if we could enable people who are
working as contractors to also do more design, right, if we're enabling people who don't have computer science degrees to do more software development, not only would that reduce the cost of these expensive services, it would improve the quality of work that people could do, allow them to take some expertise and make it go further. So that's the good scenario.
Joe, I like the idea of using AI to reduce emotional labor. I wonder if I can start automating some responses on Twitter to toxic Bitcoin maximalists. That's interesting, Tracy.
The block button is right there. So there's so many different questions now that I have in my mind. But you know, look, we're only near the beginning. I mean, ChatGPT, which is sort of what's brought this all into our consciousness, was unveiled to the public in late twenty twenty-two, so we're not even two years into that, and the sort of breakthrough that enabled it is just a few
years older than that. The concern would be, well, yes, at this point some training plus AI enables many people to become much more productive and have the sort of output that was previously associated with people with years of experience. Like, the fear would be that multiple generations down the road, you don't even need that initial training first.
I fully agree. We're just at the beginning. The tools are only so good, they're going to get much better. Our understanding of how to use them is also very primitive. We often don't know how to interact well with AI. In fact, I could give you examples of cases where it goes pretty badly, even though the tool is good. So I think there are sort of two concerns built into what you said. One is that, basically, for now it's a helper, and then eventually it's just your replacement.
And the other is that even if it just makes everyone more efficient, eventually, well, we just saturate the world with whatever that thing is, and then it's super cheap. Right. So, there's only so many PowerPoint presentations the world can tolerate, and if you get really fast at making them, eventually people will pay you to stop.
Yes, we're there now, maybe, but anyway, keep going.
So I think that that will occur in some cases. There's no question that in some cases the tool will initially be a supplement and eventually be a replacement. Right. So maybe air traffic controllers would be an example like that, right where eventually almost all the air traffic control will be done by machines. But I don't think every job is like that. I don't think that's the case in medicine. Medicine will be a hands on occupation for a very long time. So will law, where there's a lot of
high stakes decision making, so will design. So I don't think that we're going to automate everything away. I know people think that, and I think it's a valid concern. I don't think that's the most likely scenario. But I also want to stress something that's said too little in these discussions, which is, when you think about what you can do with a new tool, most people think, well, what can I automate? What is the thing that I'm doing now that I could now have the machine do
for me? And that's important, and we do a lot of automation, but automation is not the primary source of how innovation improves our lives. Right. Many of the things that we do with new tools is create new capabilities that we didn't previously have. Right. So, airplanes did not automate the way we used to fly. We just didn't fly before we had airplanes. Right. The scanning electron microscope didn't automate the way we used to look at subatomic particles.
We simply couldn't see them without that microscope. Right. So think of the thought experiment of automating everything in ancient Greece, you know, two thousand years ago. Even if you automated everything in ancient Greece, it wouldn't be modern America, right. It wouldn't have electricity, it wouldn't have computers, it wouldn't have airplanes, it wouldn't have penicillin, it wouldn't have a
million tools and technologies that we take for granted. So the most important applications of technology are to enable capabilities that didn't previously exist, and I think AI will do that as well. So, you know, we couldn't be having this conversation were it not for our computers. Right. If someone took my computer away from me, I couldn't even do my job, right. It's just my job wouldn't exist in
its current form. And so what we do with new technology is create new capabilities, and then human expertise is often needed to support those capabilities. Right, we didn't have pilots before we had airplanes, and we didn't have pediatric oncologists before we had all kinds of tools and knowledge to treat cancer in children. And so as we instantiate these new capabilities, we often require new human skills
and expertise that are valuable. And so much of what we do with these tools is to change our lives by pushing out the possibility set, rather than simply just automating the things that we already do, and I think AI will also be really important for that.
One thing I wanted to ask you is, you are very, very clear in your piece that this is more of an informed thesis than an actual forecast. And here I am actually leaning on ChatGPT: when I asked it to poke holes in your argument, one of the ones it spat out had to do with this exact question.
Are there specific measures or policies that we could be doing right now to make the probability of this outcome better, rather than the sort of destructive AI doomerism outcome that everyone is worried about?
Yeah, so I appreciate your saying that. The future should not be treated as a forecasting or prediction exercise. It should be treated as a design problem. Because the future is not like the weather, where we just wait and
see what happens. Right, we're making our own weather. We have enormous control over the future in which we live, and it depends on the investments and structures that we create today, whether that's democracies, whether that's, you know, education, whether that's how we use tools and science, whether we use fissionable material to make bombs or to make energy. Right,
we have lots and lots of agency here. So in terms of using AI well, first of all, let me say what would be a metric, how would we
know we were using AI well? Because it's not like carbon dioxide, where, you know, we say, oh, we know we're reducing carbon dioxide, you can just measure it. Right, how would we know we're using AI well? I would say we know we're using it well when we see people who don't have four-year college degrees doing work that we would think of as expert decision-making work, whether that's coding, whether that's, you know, medical vocational work,
whether that's design and contracting, or even whether it allows skilled repair people to work on a broader range of products or tools or engines or whatever. So that's my metric of success, that it opens up new job opportunities to people who are not at the absolute elite of a field. How do we get there? So I think that's a super central question. And I think most thoughts about you know, policies about AI are about regulating, controlling, and some of that has to happen, and I feel
reasonably confident that it will. This is much more about investing. Right, so, look, you know, in the United States, for example, about twenty percent of GDP, two in ten dollars, goes to education and healthcare. More than half that money is public money, so in fact, we have a lot of
control over how education and healthcare are delivered. So healthcare would be the best place to start, to say, all right, let's redesign the tools, or invest in the tools, in a way that enables more people to deliver this work. And not only would that make better jobs, it would also improve access to healthcare and potentially lower those costs. We
could do the same in education. How can we make education, you know, make better use of teachers, provide better services to students, and also make education more accessible, immersive, engaging for adults? Right, we have lots of adults who need to learn, and traditional classrooms are really not the best place to do that. So I do think you have to think about these moonshots, and governments can invest in them.
That doesn't mean the government has to run them, but you know, governments often fund basic science, governments fund education. Most health innovation in the United States is paid for by the National Institutes of Health, which is much, much larger than the National Science Foundation, for example. So I think the biggest chance is to look for those opportunities and then design with the intention of creating a more effective way to structure work that uses the
tools and uses human skills better. And let me say, you might say, well, you know, why doesn't this apply equally well to the last era? So first of all, we didn't design, and probably we should have done more. But essentially, computers are good at following rules, and so they could replicate a lot of work that was just that, but they weren't good at supplementing skills, at enabling people to do these high stakes decision-making tasks. So importantly,
AI is almost the inverse of traditional computing. Right if I told you I have the most advanced technology in the world, but you know, it really can't do math and it's not reliable with facts and figures, you would say, well, what kind of technology is that? And I would say, well, that's artificial intelligence. It is really quite the opposite. So I think it has quite different capabilities. And in some sense you could say traditional computing was really complementary to
you know, the most elite professionals. And it's quite possible that AI will enable more people to compete with them, and that's a really good thing, because that improves the quality of services and improves the quality of jobs for people who were not at that leading edge.
This is, I think, the key thing, because in your piece and in other testimony you've given, you've talked about this idea of collective decision making. And when I think about modern American society, or modern society in general, I don't necessarily think that collective decision making is something we're particularly strong on. So if the future depends on making good collective decisions, then that makes me anxious. But you know, you talk about investment, but it sounds like the other element here. And you mentioned that the rise of the nurse practitioner had to happen over the kicking and screaming of the American Medical Association, which represents the top strata of healthcare professionals, the elite doctors. How much of this is going to be a political fight, ultimately, in which the doctors and the lawyers and the podcasters collectively resist other people who are using these tools to do our jobs? And how much is that really, like, where the collective fight is going to happen?
Yeah, if we have to take on the podcasters.
I think we're doomed, but yeah, yeah, we're going to fight this kicking and screaming for sure.
The AMA is one thing, yeah, but the podcasters, that's a whole different army. Some of that will absolutely be turf warfare. Right, the professions, we think, oh, you know, oil companies and so on don't like competition and they're always trying to rig the market. But in fact, the professions rig the markets as well. Right, what a profession is, actually, what it means, is an occupation that gets to certify its own members and decide who's
in and who's out, right. And so it's the medical profession that creates training standards and certification standards. It's universities that decide what skills enable you to have a PhD and therefore become a professor. So it absolutely is going to be a challenge. Lawyers will try very hard to say, well, that can't be a legal document unless a lawyer has signed it, someone with a JD who has passed the bar. So that will be a source of
resistance for sure. On the other hand, if there's a really good competing alternative, if you can say, look, these nurse practitioners can do a lot of this diagnostic work. You know, they work well with doctors, but they can do some things that doctors would be more expensive doing, and you can make that case. Or a paralegal using the software can create a lot of routine documents, or a software developer using GitHub copilot can go pretty far.
Then that creates a lot of economic pressure that tends, over long periods of time, to erode these guilds. So I think they will not go quietly into that dark night. But if the models are successful, it does create a strong incentive for that eventually to become adopted.
I think part of the concern around AI has to do also with how any productivity gains are actually distributed and whether or not people are compensated for doing more. And I asked ChatGPT, obviously, to provide a summary of Das Kapital before I came on here. No, I do think there is this concern about, okay, in an ideal scenario, we're all more efficient in terms of our labor, and maybe some types of work are even better to perform.
Maybe we reduce that emotional labor. But aside from that particular benefit, how do we distribute the additional productivity gains? Is there any evidence or any reason to believe that these benefits are going to go to labor, to actual workers and individuals, versus to companies and capital?
Yeah. Good. So let me give you two answers to that question. One is that it really does depend on institutions, not just on decentralized labor markets.
Right.
So if you compare the US versus Germany versus Scandinavia, right, we have so much in common. We have the same technologies, we have the same aging population, we have the same rising education levels, we have the same China as a competitor, we have lots of immigration. And yet these countries bake very different cakes with the same ingredients. Right, the US is kind of cowboy capitalism, very high levels of inequality and disparity and not so much sharing with workers. And if
you look at Scandinavia or Germany, it's much more cuddly capitalism. Right, it's not nearly as unequal. And that's really a question of tax regulation, it's a question of the role of labor unions and labor voice, and it's a question of social norms. And so I guess we should not take it as inevitable that the outcomes we have are the only ones the market could tolerate. But at the same time, we should recognize that without those sorts of countervailing forces,
the outcomes can look pretty bad. So I do think, you know, I'm happy about the rise of collective bargaining again in the United States, although it's from a very low level. I'm happy that more states are passing minimum wage regulations. I'm happy that the Biden administration is trying to sort of beef up the Occupational Safety and Health Administration and the Equal Employment Opportunity Commission and so on.
So I think those things matter a great deal. So one should not take it for granted that just because productivity rises, workers benefit. In many countries that's true, but not so much in the United States.
But I want to press you right here on this point, because why doesn't this undermine much of the argument? If these different countries, whether it's Sweden or Germany or the US, can have very different sort of distributional outcomes with the same cake ingredients, with roughly similar technology and labor markets, why then take the assumption that it's the technology that has the distributional impact rather than just those policies themselves?
Okay, this is an excellent question. So I think the technology provides headwinds and tailwinds with which policy can work. So all of these countries I mentioned have become more unequal.
Okay.
All of these countries have seen a decline in middle-skill work. All of these countries have seen the mean wage rise relative to the median, meaning the upper wages have risen more than the center. But the degree to which countries have pushed back against that is a function of their institutions.
In the prior era, prior to computerization, all of these countries saw their middle classes grow together along with the upper class and lower class, and so the industrial era, prior to computerization, was sort of intrinsically very friendly towards the middle class. The computer era was much, much less so, and then policy helped ameliorate those impacts, and much less so in the United States. So I do think that
technology plays a role. I just think we should simultaneously believe that these underlying forces of technology and globalization create strong pressures in one way or another, and then policy can shape how those pressures play out. It won't undo them, but it can channel them more or less effectively. So you're asking both the right questions, and I think the answer is both are true, but we should think it's
not one or the other. And in some periods those forces are very favorable and policy has to do less hard work, and in other periods they're relatively unfavorable and policy, if it's working well, has to do more work. The other point I want to make, and this is why I'm so focused on expertise, is that expert work is intrinsically well paid.
It's scarce, and it's necessary. And that's why, if we live in a world where all the work can be done by machines, we're completely dependent upon redistribution, right, on the people who own the machines sharing with everyone else. And I'm not so optimistic about people's excitement about sharing with everyone else. And even when people say, oh, we'll have universal basic income, they really mean universal basic income within the borders of the United States. They don't mean
universal basic income for the rest of the world. Right, so people's notion of sharing is very limited. So I do think it's extremely important that labor remains valuable, and that's actually an achievement of the industrialized world, that so many people can make a reasonably good standard of living based on their skills. And so technologies and tools that make human expertise more valuable by allowing it to go further are
really favorable towards income distribution. Technologies that just automate away work, even though they raise productivity, are not favorable to income distribution, because it means the gains go to ownership of capital, and ownership of capital is intrinsically more concentrated than ownership of labor, because in a country that doesn't have slavery and doesn't have labor coercion, everyone owns one worker, themselves, and so that inherently creates some tendency towards equality when
labor is valuable.
The efforts of the Biden administration to reindustrialize the US and sort of counter some of the effects of the last twenty years that you wrote about, do you have any optimism that those trends can be reversed? I know this is a very simple, straightforward question that you're going to answer in about thirty seconds, so good luck.
I don't think they can be completely reversed, but you can stem the tide, right. So it's not that the US has stabilized. The US continues to lose industrial capacity, right, whether it's in semiconductors, whether it's automobiles, whether it's aircraft, and so on. So I think reinvesting can help solidify those sectors, and I think it's very important to do so, because now they're not just a question
of jobs. It really is about leadership of the key profit and idea generating activities in the modern world, and we don't want to lose a leadership place in those activities.
Good, concise answer to what probably could be multiple future episodes. David Autor, thank you so much for coming on Odd Lots. That really was a fascinating conversation. We probably could get multiple episodes out of this conversation with you, but really appreciate your time.
Thank you very much. Nice to speak with both of you.
Have a good day.
Tracy, I'm convinced. I think everything will be fine.
I'm no longer worried.
Well, first of all, I would say it was nice to hear a slightly more optimistic argument from David. There were a lot of quotable sentences in there. So I like the idea that everyone's their own individual capitalist, in the sense that we each have one worker to direct and get the most money out of. So that's how I'm going to start thinking. And cuddly capitalism, which, as our producer Kale observes, is a much more appealing name than
the Swedish model. I like that. What I would say is again putting on my cynical journalist hat, and I guess I don't have an opinion because I am a journalist whose expertise is about to be automated away. But my non consensus take here, or my sort of hot take here, is that I agree with David that we are going to get more jobs out of AI, and
probably more than a lot of people currently anticipate. I guess I'm less convinced about how useful those jobs are going to be, So going back to his point about how do we measure how well we're using AI, I have a feeling that a lot of it is going to end up basically creating a whole new layer of BS jobs that don't actually do much. So there's going to be all these decision making bodies attached to AI. There's going to be big discussions about how you implement AI, fairness,
litigating its results, and things like that. I guess I'm a little bit pessimistic about the ability of AI to generate additional bureaucracy in addition to additional productivity.
The other term that was great was when you said the future is not like the weather. Yeah, but also like I am worried about any notion that to achieve the good outcome, the good equilibrium, we have to make good, correct collective decisions because I have almost zero confidence in whether it's just the US specifically or globally to make
collective decisions. I do think, like, going after these guilds like the American Medical Association, which, for all of the rise of nurse practitioners, it doesn't seem like we're doing great on, like, bending the cost of healthcare or really having enough healthcare capacity, that's going to be really tough. And those fights are going to be really intense, whether it's with lawyers, whether it's with doctors, whether it's with teachers, whether it's with podcasters, whether it's
professional architects, et cetera. Like, those fights are going to be extremely intense. But like the basic intuition sounds very compelling to me. The other thing is like you know this idea of like oh, yeah, some training plus AI, like I am worried, like maybe you just won't need the training, and maybe it's just AI from the start.
So I don't know.
Well, in some respects, I think that would almost be a better outcome in terms of democratizing AI. But yeah, there are so many questions, uncertainty, as you mentioned in the intro, lots of different takes at the moment. I guess we'll see how it plays out and whether or not you and I have jobs in ten years time.
We'll see, Well, we'll have David back when we're just like.
When we're automated voices. Yeah, exactly, all right, shall we leave it there for now?
Let's leave it there.
Okay. This has been another episode of the Odd Lots podcast. I'm Tracy Alloway. You can follow me at Tracy Alloway.
And I'm Joe Weisenthal. You can follow me at The Stalwart. Follow our guest David Autor, he's at David Autor. Follow our producers Carmen Rodriguez at Carman Arman, Dashiell Bennett at Dashbot, and Kale Brooks at Kale Brooks. And thank you
to our producer Moses Ondam. For more Odd Lots content, go to Bloomberg dot com slash odd lots, where we have a blog, we post transcripts, and we have a weekly newsletter, and you can chat with fellow listeners twenty-four seven in the Discord, discord dot gg slash odd lots. There's even an AI room in there where people are talking about all these things. So I imagine there will be some conversation about this there.
And if you enjoy Odd Lots, if you like it when we do deep dives into AI, how it works, what it means for the economy and society, then please leave us a positive review on your favorite podcast platform. And remember, if you are a Bloomberg subscriber, you can listen to all of our episodes absolutely ad-free. All you need to do is connect your Bloomberg subscription to Apple Podcasts. Thanks for listening.