
Kahneman, A Rational Appreciation | Libertarian: Richard Epstein | Hoover Institution

Apr 03, 2024 | 30 min | Ep. 755

Episode description

Richard Epstein remembers the late psychologist Daniel Kahneman and discusses the long-running debate between Kahneman's research on behavioral economics and the rational-choice models popularized by Gary Becker and Ronald Coase.

Transcript

[MUSIC]

>> Tom Church: This is the Libertarian Podcast from the Hoover Institution. I am your host, Tom Church, and I'm joined, as always, by the libertarian professor Richard Epstein. Richard is the Peter and Kirsten Bedford Senior Fellow here at the Hoover Institution. He's the Laurence A. Tisch Professor of Law at NYU, and he's a senior lecturer at the University of Chicago.

Richard, I think we've got a little bit of a different show today, because I'd like to hear more about Daniel Kahneman, the Nobel Prize-winning psychologist and author of Thinking, Fast and Slow, who died last week. And I'll admit something to you, Richard. I'm a bit of an econ sympathizer, meaning when I start my analysis, I go from the point of view of rational actors working according to utility functions, ordering things in an ordinal fashion so I can make the math work out.

In other words, I like Becker, I like Coase, that's my mindset. Now, is it right to say that that's not how Kahneman saw things? >> Richard Epstein: It certainly is correct to say that. And look at the claims made by him and, more importantly, the even more extravagant claims made by many of his supporters and defenders, including Richard Thaler and Cass Sunstein.

On that view, what he did was work a complete revolution in the way in which we ought to think about human behavior, showing that the anomalies of classical liberalism turn out to be the dominant tropes when you try to put the overall system together. I was exposed to this very early on, and let me tell you about the two strands that came together.

The first one was sociobiology. I was having dinner with my friends the Landeses, and Lisa Landes, sitting on the couch, starts telling me about this odd field of sociobiology and this man named Edward O. Wilson. And I had always liked evolutionary theory when I was in high school and in college, cuz it had a certain kind of elegance about it, with natural selection and the like.

And you start listening to this stuff, and what you realize is that somebody is trying to systematize how it's going to work. They're gonna use rational choice principles. But the unit of action is not the individual, it's the gene. And what happens is they treat human beings as carriers of these genes. And there was a theory developed some years before by a man named W. D. Hamilton called inclusive fitness.

It says, in effect, that what parents do is economize on, and rationalize, the probability that their genes will manifest themselves at the same point in the evolutionary cycle one generation away. And the basic equation is that, if you're a parent, your child has one half of your genes. So if you can make an expenditure of one unit of work and get more than two units of benefit for your child, you will do it, because the child is not unrelated to you.

And when you discount that benefit by 50 percent, anything over two units is still greater than the one unit it cost. And then you have to figure out, what about the parents, because they have to coordinate.
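A compact way to see the arithmetic Epstein has just sketched is Hamilton's rule, which says a helping act is favored in inclusive-fitness terms when relatedness times benefit exceeds cost, that is, when r times B is greater than C. The little sketch below is mine, not anything from the conversation, and the sample numbers are purely illustrative.

```python
# Hamilton's rule: helping is favored when relatedness * benefit > cost (r * B > C).
# For a parent and child, relatedness is 1/2, so one unit of parental effort
# pays off only if the child gets more than two units of benefit from it.

def worth_helping(relatedness: float, benefit: float, cost: float) -> bool:
    """True if the act is favored under inclusive fitness (r * B > C)."""
    return relatedness * benefit > cost

print(worth_helping(relatedness=0.5, benefit=2.5, cost=1.0))  # True: 1.25 > 1
print(worth_helping(relatedness=0.5, benefit=1.5, cost=1.0))  # False: 0.75 < 1
```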

And you start having theories of non-economic relationships having to do with natural love and affection, to use the Roman term. But essentially, you need to have a huge degree of cohesion between spouses if you're going to raise children, more so in people than in animals, because it turns out that in humans both parents need to engage in the care function. It's not a question of insemination and then disappearing, as it is with some species. And to get that cooperation, you have to find ways to bond the parents. So this was the model that I looked to. And if you go back and look at the kind of legal materials you see on family relationships, they talk in pretty much the same way. These are not arm's length transactions,

transactions where you use your arm to keep somebody at a distance. These are situations where you bring people close, situations in which market exchanges are not dominant, because the family is kind of like a little society in which redistribution is a necessary truth for its operation. So when I first came out here, I was not at Hoover but at the Center for Advanced Study in the Behavioral Sciences.

I was absolutely thrilled to learn that David Barash, who had just written a pioneering book on sociobiology, was running a seminar on that subject with guest speakers from around the world. As was the case with behavioral economics, the field was in its infancy, so there was a lot of movement and a lot of important papers very early on. It was like getting in right at the crest of the wave.

Well, the exact same thing was true with respect to behavioral economics in terms of its influence. Kahneman and Tversky had written, in 1973, their paper on heuristics and biases, which got a huge play. And they were both at the Center all the time. Kahneman was one of our fellows, and his wife, the very gifted psychologist Anne Treisman, was there. And Amos Tversky was down in the Department of Psychology on the main Stanford campus.

And so we managed to get together and debate this stuff and cross swords all the time, I starting from the more evolutionary, rational choice position, and they starting from the behavioral side. And we basically had nonstop arguments and dialogues for the entire period. And I could even go into, if you're curious, some of the particular exchanges we had and some of the traps into which I fell when we started to do these debates.

But it was, in fact, one of the most bracing intellectual experiences of my life. And as things turned out, the tension between these two approaches, the evolutionary, rational choice theory and the anomaly theory, is something that I continue to write about to this very day, because it's essential for trying to figure out your major premises.

So I came down, as you did, on the theory that anomalies are just that, anomalies. If the only way you can find out how something works is to run a series of very contrived experiments, chances are you're not finding out something that's going to work outside the laboratory, in nature, that is, in ordinary social interactions.

And so for the last 40-odd years, I've always been on the opposite side of this debate, having written a number of articles, most of which are deeply critical of the way in which that framework works, both as an abstract matter and as it gets translated into various settings, like market transactions in various goods, stock market transactions, and the like.

So there's a lot of difference going on between us, and I'm extremely grateful, as I wrote in my column for Defining Ideas, that Danny was there at one of the most formative times of my life, with a very formidable intellect, pushing and pushing and pushing. And I found that I learned an enormous amount by pushing back, even though, it turns out, on virtually every major point the differences between us not only started in 1977-78 but continued until the time of his death.

>> Tom Church: Richard, I do wanna hear about some of the debates that you had, but I wanna push back a little, because there has to be something more to behavioral economics, which is the marrying of psychology with observed behavior, behavior that from the outside often looks like people acting irrationally rather than following economic logic. There has to be more to it than just contrived experiments, right?

I mean, you point out the endowment effect coming from Kahneman. That's real, that makes sense to me, people absolutely do that. I mean, I internalize it as: this goes in as one more part of the utility function, which covers everything. But there has to be more. So give me something that you think Kahneman really did help put out there, something that people should understand and incorporate.

>> Richard Epstein: Well, I think one of the things they were able to do is point out anomalies that aren't fully understood. They did a lot of work, for example, in trying to figure out what the rational strategies for retirement are, how you put money aside, and so forth. And when you start looking at those systems, you can make perfectly good arguments that people tend to undersave, and I find it plausible to make out arguments like that.

And on the endowment effect, the real question is how to deal with these things. Do you wanna have an entirely different frame in which people have a wholly different set of preferences? Or do you wanna find something within the standard theory that, if you vary and tweak it a little bit, will lead you to the same results?

So let me give you one example from exactly the time I was doing all this work with evolutionary theory and with behavioral economics. At that point, I was also concerned with the problems associated with medical malpractice, because the mid-70s were a period in which those insurance markets tended to dry up. And you couldn't find any sort of obvious behavioral explanation as to why in 1976 they went dark, whereas in 1950 they were perfectly stable.

And so you try to figure out, first of all, what's the underlying reason why these markets are special, and then what are the changes in doctrine that start to move the market. Well, the thing that you have to understand is when you're dealing with standard rational choice theory and these experiments, you're dealing with people for whom competence is not a serious issue. There are obviously differences in level.

And in some of these experiments, if you run them on UCLA graduate students, they don't get to the right answer as fast as the kids who go to Caltech, if there's math involved, so there are obviously differences. But when you're doing medical stuff, the whole issue of how you deal with incompetence becomes absolutely major, because you can't adopt the strategy that many businesses do.

So if you wanna talk about the stock market, one of the things that Danny said quite elegantly is that if you go to ordinary people and ask them how they're supposed to invest, they're gonna make a gobbledygook mess out of it. And so the way the exchanges work is that you have to pass a licensing test, which is basically like a speed test. So you can't go on these markets and engage in rapid trades unless you meet all the standards.

So what you do is you keep the incompetence out of the system. And you have to do that because, when you want to resolve trades and other difficulties, you have to be able to give everybody a clean set of books that evening, so they can go into the market the next day and know where their position is. And so you have very strong rules as to who can play. You can't tell people, you wanna go to the hospital? We only take in people who have very high levels of competence.

The whole business is dealing with people who are not competent. So then what you try to do is figure out what methods you can use to take that into account. And you're not worried about behavioral anomalies, risk aversion, loss aversion, or whatever it is; you're worried about the fact that people can't make up their own minds and don't understand their instructions. So what do you do?

You have public oversight of this, people get guardians to do things, they write wills and advance directives, and you have to figure out how all those things go together. And the problem with behavioral economics is that once you put the problem that way and ask what you want to do in order to change the system, which of these devices you want to use and which not, it doesn't give you a lot of guidance.

Because it really depends upon just what the rate of progression is with various kinds of diseases, what kinds of effective help and assistance you can give people, and so forth. And so it doesn't work there. To give you another illustration where it kind of doesn't work: there's a lot of work which says, look at businesses and consumers; consumers are sometimes driven by impulse purchases and things like that, which is commonly known.

But then when you turn to the central question of our time, which is, do we think that monopoly structures are going to be needed to deal with these anomalies, or do we believe in the power of competitive markets? Danny was always willing to think about various kinds of regulations that you could put into place. And I always came back with the alternative that if you could basically make the market more competitive, a lot of this stuff would flush itself out. Why is that?

Because if you make a mistake in doing business with one dealer, somebody else will come along and give you an offer that is a little bit better. So the way you solve your personal defect is to have two people giving you quotes on an identical product, and you take the lower quote on the standardized commodity. And you don't have to worry about behavioral anomalies, because you're relying on somebody else with greater skill to solve those problems.

So one of the differences between a psychologist and a lawyer or an institutionalist is that all of those experiments tend to be run on lone individuals in very isolated and sparse settings, while all of the transactions that take place in the world are done in very rich and complicated institutional structures where everybody knows this: call it behavioral economics or just the inability to count,

I'm very sure that people are gonna make fundamental mistakes that will cost them very dearly, and so you put in place institutional structures that are designed to stop that. So, to give the simplest illustration, you worry about the endowment effect. What you do is you hire an agent, and the agent then sells the goods, which aren't his own, so he doesn't have an endowment effect. And then you give him incentives: you say, the higher the price you get, the more you're gonna make on it.

So if the endowment effect says that willingness to accept is higher than willingness to pay, it's not gonna show up in organized markets where inventories are sold. And knowing all that, what you then do is spend your time worrying about what the optimal incentive and commission structure is for dealing with agents, and that's a very hard question. But behavioral economics really doesn't solve it, except to remind you of the fact that these people are frail.
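As a toy illustration of that commission point (the numbers are entirely my own, not anything from the conversation), a percentage commission makes the agent's payoff rise with the sale price, so his incentive points the same way as the seller's interest even though he feels no attachment to the goods.

```python
# An agent paid a percentage of the sale price earns more when the price is higher,
# so the owner's endowment effect drops out of the transaction entirely.

def agent_payoff(sale_price: float, commission_rate: float) -> float:
    """Commission earned on a sale at the given rate."""
    return sale_price * commission_rate

for price in (90.0, 100.0, 110.0):
    print(f"sale at ${price:.0f} earns commission ${agent_payoff(price, 0.05):.2f}")
# sale at $90 earns commission $4.50, and so on: the payoff rises with the price.
```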

What you then do is you engage in arrangements in which some people have more discretion, and we call them residual claimants, and other people have less discretion, and we give them essentially routines to follow. So if you go to the Federal Express office, their agents won't be allowed to waive certain terms, because the company knows they can't do that reliably. But back at headquarters, somebody has to give people a list of options of what they can and cannot do on these things.

And some of them are waivable, some of them are not. So what you do is you put the expertise up high. And so the institutional objection to behavioral anomalies is that people, in fact, organize their dealings and their groupings because they know, at some level, what they don't know. And they put the greatest discretion in the people who take the greatest risk, and they compensate them accordingly. And so, for example, when you try to get individuals to figure out how to trade on their own, they can do it.

But if, on the other hand, you go to a mutual fund, what you're doing is trading your own amateur instincts in for a group of heavyweight professionals. And then they can tell you, we're gonna do an index fund, we're gonna specialize in these commodities, we're gonna take a short-term strategy, and so forth. So if you look at Vanguard's funds, they have a very sensible approach. They ask, well, when do you plan to retire, or pick some other target date?

We will give you a portfolio that will optimize your rate of return for, say, the year 2035. And how will we do it? Well, we know there are general tendencies: as people get older and approach retirement, they want to shift from equity to debt. So we'll follow a uniform table that will allow you to do that. And you don't have to know how it's calculated, but you can look at it and say, this is comfortable for me.
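To make the idea of a uniform table concrete, here is a hypothetical glide path of the kind a target-date fund might follow. The thresholds and equity shares are invented for illustration; they are not Vanguard's actual allocations.

```python
# A hypothetical target-date glide path: as retirement approaches, the equity
# share steps down according to a fixed table. All numbers are illustrative.

GLIDE_PATH = [       # (at least this many years to retirement, equity share)
    (40, 0.90),
    (30, 0.85),
    (20, 0.75),
    (10, 0.60),
    (0, 0.40),
]

def equity_share(years_to_retirement: int) -> float:
    """Return the equity share for the highest rung the saver still clears."""
    for min_years, share in GLIDE_PATH:
        if years_to_retirement >= min_years:
            return share
    return GLIDE_PATH[-1][1]  # past the target date: keep the final allocation

for years in (40, 25, 10, 0):
    print(years, equity_share(years))  # 0.9, 0.75, 0.6, 0.4
```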

If you're somebody who's not gonna retire, you're not gonna want that scaled-down allocation, because essentially a stable salary is like a debt instrument; it's a stable thing, so you're willing to take more risk elsewhere. And when they do all the analysis on the retirement plan, what they tend to do is ignore other sources of wealth, which may, for certain individuals, lead them to do things that would look odd if you treated their retirement plan as their sole wealth.

But it's not gonna be so odd if you realize that many people have a home, which they can mortgage a second time, or they can move, downsize, and so forth. And when you take all that into account, it turns out that the anomalies tend to disappear. And so, if you adjust for competence and if you adjust for institutional settings, I don't believe that the behavioral insights have a huge influence on the way in which ordinary human behavior works.

If you're working within families, where there are no market constraints, it's gonna be much trickier to figure out how that works, but there are responses. People hire family coaches, they hire various kinds of intermediaries to tell them what happens with a spouse when they can't agree with respect to money. In fact, when I started teaching estate planning, you spent an enormous amount of time asking the question: you have four children and a surviving spouse.

Do you want one trust for which you contribute the money with some degree of discretion, or do you want five different trusts in which there are fewer levels of discretion? And a lot of that depended upon the level of affection and cooperation amongst the various family members. And sometimes you have to follow Tolstoy's maxim, if you're a normal family, nobody wants to write about you, including your estate planning lawyer.

But if you're not so fortunate, half of what an estate planning lawyer does today is try to figure out how to take into account those individuals who deviate from rational principles, and to make sure they don't get separated from their money too quickly, with spendthrift trusts or a whole host of other devices. So I'm stressing one of two things: either the continuity of self-interest grounded in the biology, or the institutional arrangements that are put in place to deal with perceived weaknesses.

And you think of the world as being less discontinuous and less disorganized than you do if you start with the other premise. And then you also have pretty clear programs of what you want to do when you identify some situation that turns out to be a big failure. Well, why did the credit markets crash in 2008? It turned out there was a diversification problem with respect to interest rates that was not captured in the standard diversification models, and so on.

>> Tom Church: Richard, last one for you here. When it comes to Kahneman and his approach, I wonder, is part of your worry coming from a libertarian perspective? That if Kahneman identified places where people act irrationally, the government would then step in to prevent them from acting irrationally? Does that make sense?

I mean, is this one of those divides where, if you're like me, you really like courts' approaches to conflicts, as opposed to figuring it out on the government side? >> Richard Epstein: Yeah, well, let me give you one of the battles that we had at the time, I'll give you a couple of them. At that particular time, it turns out that there was a genuine crisis in liability markets, not only for medical malpractice, but also with respect to product liability.

And these were cases in which Kahneman and Tversky were generally in favor of fairly strong interventions in individual cases to right what they thought were irrational responses. And I took the very opposite approach, and this leads me to the famous story about the Babcock mobile. Charlie Babcock was at that time one of the people in charge of the liability defenses at General Motors. He was a former military guy.

And he constantly said, you cannot use ad hoc judgments in jury trials to correct the individual mistakes and judgments that are made by ordinary people. Kahneman and Tversky recognized the anomalies; they thought that many people buying and selling cars would make all sorts of mistakes and wouldn't understand the safety risks and so forth. So they wanted the government to come in and deal with the issue.

But that's half the problem. The other half of the problem is, do you come in by ad hoc adjustments made through jury determinations, or do you require a consistent set of standards to be put forward by the regulator, which will then have to take into account future accidents behind a veil of ignorance, without knowing which one is more likely to occur?

And I was very much in favor of the system that says you want that kind of fixity, and they were very much on the other side. So why was Charlie relevant to this particular question? Well, Charlie had lost a lot of cases in which all of the determinations were made under a kind of risk-utility formulation,

where you looked at the various defects, the various cures you could put in place for them, their cost of replacement, and so forth, a laundry list of factors. So what Charlie did is he said, well, we lost this case because the gasoline tank had this problem, we lost that one for another reason, and so forth, and he put the Babcock mobile together incorporating all of those fixes. And this is what he discovered: the thing weighed about 1,000 to 1,500 pounds more than a real car, and it couldn't move.

Because what happened is, it's a cognitive bias; I mean, it's exactly the problem they worried about. You give a jury a situation in which there's a defect alleged in the car that causes an incident, and what the plaintiff's lawyer is going to argue is that this is the thing that caused the injury. The defendant's lawyers will come in and say, but what about all these other risks? But those aren't instantiated in front of the jury.

And so juries will, in fact, overweight the importance of the particular flaw relative to the full pot of risks. If you require everybody to do this from the ex ante perspective, that will never happen. So I said to them two things. One is, if you're worried about these biases, you don't want individual determinations; you want fixed rules put together behind a veil of ignorance, because that would be the better way.

So it wasn't that I was oblivious to these things; I just thought they had the wrong cure for the problem they had identified. And in general, I still hold that view: there are very few cases where, in the first instance, a simple rule will not be the correct solution for a complex world. You've heard that phrase from me before, right?

And a lot of the stuff that I developed in this area was developed in response to their view that, somehow or other, taking into account cognitive biases meant that you had to have individual solutions rather than collective solutions. So that's one problem. The other problem was the question about collectability and so forth. So I'm gonna tell you how they trapped me. They were very good at setting traps for people, and this was the particular situation that they gave.

They said, you are in charge of the air force, and you know that if you send pilots up, there are two ways they can be killed. One way is they can be killed by fire, and the other is they can be hit by flak. And any jacket you can put on them is a perfect defense against the one but has zero effect against the other. You know that two-thirds of the time it's flak that's gonna come at you, and one-third of the time it's fire. What do you do?

So, sitting there in the audience, having done about 0.4 seconds of thinking, I replied, two-thirds, one-third, and they said wrong, and they were right. What's the correct way to do it? Why would you ever want to protect against the one-third chance when you could protect against the two-thirds chance? You should always use the dominant solution and never engage in proportionality, and proportionality is one of the things a cognitive bias might lead you into.
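Worked out in numbers (my arithmetic, under the stylized assumption that the two-thirds/one-third mix is fixed), the dominant strategy protects more pilots than the proportional one:

```python
# Flak accounts for 2/3 of lethal threats, fire for 1/3, and a jacket protects
# perfectly against one threat and not at all against the other.

p_flak, p_fire = 2 / 3, 1 / 3

# Always issue the flak jacket: protected whenever the threat is flak.
always_flak = p_flak                                  # 2/3, about 0.667

# Proportional mix: flak jackets 2/3 of the time, fire jackets 1/3 of the time.
proportional = p_flak * p_flak + p_fire * p_fire      # 4/9 + 1/9 = 5/9, about 0.556

print(round(always_flak, 3), round(proportional, 3))  # 0.667 0.556
```

As Epstein goes on to say, the deeper objection is that a real adversary would not hold that two-thirds/one-third mix fixed in the first place.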

Well, this was done in 1977, and it's now 2024; I'm not gonna make that mistake again, believe me, because having heard it once, I know what the trick is, and I'm not gonna get it wrong. And they gave another illustration: you have to sell a rifle and ammunition, the two together cost a dollar ten, and you know precisely that the rifle costs $1 more than the ammunition. What do you do?

And everybody gets the wrong answer until you sit down and solve the equation, where it turns out it's $0.05 for the ammunition and $1.05 for the rifle, so you get the separation of $1. What happens is that every cognitive bias, once it's understood by anybody in a business, will be corrected by everybody in that business. So it turns out these biases have no systematic purchase on you.
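For the record, the algebra behind that answer:

```python
# rifle + ammo = 1.10 and rifle = ammo + 1.00
# => (ammo + 1.00) + ammo = 1.10  =>  ammo = 0.05 and rifle = 1.05

total, difference = 1.10, 1.00
ammo = (total - difference) / 2
rifle = ammo + difference
print(f"ammunition: ${ammo:.2f}, rifle: ${rifle:.2f}")  # ammunition: $0.05, rifle: $1.05
```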

And you can fool me out of context, but the moment you put me back into an institutional context, these things disappear. But there's a much more serious problem. Forget about the simple algebra problem; go back to the first one and ask yourself, what's the likelihood that the scenario they set up is realistic? The answer is zero, because somebody who is basically on the other side of this is going to mix strategies.

They're gonna use flak, they're gonna use fire-type attacks, they're gonna switch the relative frequencies, they're gonna try to find ways to combine weaponry that taxes both of these defenses, at least in part. You're then gonna have to develop a series of strategies to switch back and forth to deal with that uncertainty.

And there you have a real game of cat and mouse, for which, it turns out, the best solutions will be essentially rational choice solutions. You're gonna get it wrong a lot of the time, but at least you know what it is you're trying to do. So the difficulty with cognitive biases at the institutional level is that, once corrected, they disappear.

Whereas the ones that they're talking about are supposed to be much more durable in terms of how long they last. And there are certain things like that which are very durable, prejudices and so forth. But again, if somebody else in the institution understands the danger, they're gonna change it. Does anyone build a building without having somebody double-check the math to see that the center of gravity of the structure sits within its base?

If cognitive biases ruled, you would expect a very high rate of institutional failure in building dams, houses, bridges, anything you want to talk about. But since everybody in the world knows this, the phrase checklist, or double-check, means a lot. So there's a famous article, and I'll stop on this note, by Atul Gawande from about 15 years ago in the New Yorker. I recommend that anybody who's sentient read it.

It's one of these transformative pieces, and it starts off with the story of the arrival of a new bomber prepared under General Billy Mitchell, I think it may have been the B-17. One of the best pilots in the world gets on the runway and starts to try to take off, and what he does is garble the sequence of the particular maneuvers he's supposed to perform, and he crashes the plane.

And so Billy Mitchell was faced with the prospect that the next-generation bomber might be doomed, because it turned out people had cognitive limitations that they couldn't overcome in trying to deal with this stuff. So what they did is they prepared something known as a checklist. You wrote it down, and the pilot and the copilot would essentially go through the sequence in which the steps had to be taken.

And you avoided 99 percent of the problems, because 99 percent of the time the protocol worked. That's the way in which you dealt with cognitive limitations. And what he then said is, you move into the medical setting and it's exactly the same thing. So, in absolutely astonishing language, he tells you the kinds of steps it takes to correct things when there's a screw-up in giving the right anesthesia in the right quantity at the right time.

And then he says, well, how do we solve this? We went back to Billy Mitchell's solution: we decided to have checklists, and we told the nurses inside the operating rooms that they're in charge of making sure that everything is done in accordance with the protocol. And that's the kind of systematic technique that's used.
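A schematic of the checklist idea, with invented steps rather than any real pre-flight or surgical protocol: the list is fixed in advance, and a second person confirms each step in order, so no single memory lapse can sink the procedure.

```python
# The checklist pattern: the protocol is written down once, and a second person
# (copilot, nurse) confirms each step in sequence. Steps are invented placeholders.

PREFLIGHT = ["controls unlocked", "instruments checked", "fuel valves set", "flaps set"]

def run_checklist(steps, confirm) -> bool:
    """Walk the steps in order; stop the moment any step is not confirmed."""
    for step in steps:
        if not confirm(step):
            print(f"ABORT: step not confirmed: {step}")
            return False
        print(f"confirmed: {step}")
    return True

# The 'confirm' callback stands in for the second checker working off the list.
run_checklist(PREFLIGHT, confirm=lambda step: True)
```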

So to understand this, the first thing you have to assume, for whatever reason, is the simplest one: people just aren't perfectly smart. Rational choice theory kind of understands that and says, you do the best with what you have. And then you develop institutions to try to limit those error rates. And that's the much more productive line of inquiry for doing these things.

And that's the way in which it in fact works. Indeed, one of the ironies is that there was a burst of cognitive and non-cognitive biases that Kahneman and Tversky identified through about 1980, but in the years since then the list has gotten so long that you don't know what to do with any individual item on it, and the underlying theory hasn't grown.

What really has to grow is an awareness of institutional responses that take into account all this stuff, all the difficulties with cognition, and that also understand something else about the future: if people are evolutionary creatures and you decide you like your children in the morning and don't like them in the evening, those changes in preferences, those instabilities, are essentially the death of the species, so you have to make the opposite assumption.

>> Tom Church: You've been listening to the Libertarian Podcast with Richard Epstein. If you'd like to learn more, please make sure to read Richard's column, The Libertarian, which we publish on Defining Ideas at hoover.org. If you found this conversation thought-provoking, please share it with your friends and rate the show on Apple Podcasts or wherever you're tuning in. For Richard Epstein, I'm Tom Church, we'll talk to you next time. [MUSIC]

>> Jenn Henry: This podcast is a production of the Hoover Institution, where we generate and promote ideas advancing freedom. For more information about our work, to hear more of our podcasts or view our video content, please visit hoover.org. [MUSIC]
