This is Masters in Business with Barry Ritholtz on Bloomberg Radio. This week on the podcast, I have a fascinating guest. If you're at all interested in statistical analysis of sports, behavioral finance, data analysis, understanding streakiness, understanding the Monty Hall problem, and then extrapolating that towards things like the hot hand in basketball, you're gonna find this to be absolutely fascinating. Joshua Benjamin Miller comes from California, where he basically racked up all the degrees he could at some of the UC schools before getting his PhD in economics in Minnesota. Josh and his co-author have taken apart some of the more interesting statistical assumptions made in the original hot hand study by Tom Gilovich and Amos Tversky, and they found something really unusual by looking at the data from a slightly different perspective. I approached their paper with a tremendous amount of skepticism. I thought the randomness of the hot hand was fairly well proven by the study that Tversky and Gilovich did, but when you look at the data and you look at how they analyzed it, it's hard not to reach the conclusion that there is some sort of a hot hand. It's quite sophisticated mathematics, but Josh does a very nice job reducing it to some very easily understandable probability. No real math is required; you just have to know the difference between a head and a tail when you're flipping a coin. If you're at all interested in anything probability, sports related, or statistical, you're gonna find this to be a fascinatingly wonky and tremendously interesting conversation. So, with no further ado, my conversation with the economist and statistician Josh Miller.

My special guest today is Joshua Benjamin Miller. He is the co-author, along with Adam Sanjurjo, of a fascinating paper that challenges the myth of the myth of the hot hand. He comes to us with a BA in economics and an MA in mathematical statistics from UC Santa Barbara. He has his PhD from the University of Minnesota, and he is currently a professor in the economics department at the University of Alicante in Spain, where he focuses his research on behavioral economics, judgment and decision making, game theory, and statistical and experimental methods. Josh Miller, welcome to Bloomberg.

Thanks for having me, Barry.

So a little background: we kind of met after I interviewed Thomas Gilovich, who I was mostly interested in due to all of his work on behavioral finance. But he also co-authored a fascinating paper,
with Amos Tversky by the way, that basically argued that the hot hand was really a myth, and that we were all just being fooled by randomness. How did that paper come to your attention, and what fascinated you about it?

So that paper came to the attention of my co-author Adam, who's also at the University of Alicante. Pretty much everyone who takes a behavioral economics class, and even earlier, gets exposed to that paper. It's one of the prime examples of a bias, because it's so apparently powerful.

It's part of the canon of: oh, look how easily we're all fooled.

Exactly. And in the beginning of any kind of behavioral economics class, you have to show the real world implications first, to motivate students. And here is this one that professionals fall victim to, and they're so resistant to it. I mean, they were shown that this hot hand, and we haven't defined the hot hand yet, but the hot hand is this idea that you're in the zone, that success breeds success. If you look at basketball players and coaches, they all believe in this thing. And so when they discovered that there was no pattern there and revealed the output of their research, it was difficult to convince the professionals.

Oh, there was tremendous pushback. There's a famous quote that's been referenced, from Red Auerbach of the Boston Celtics: I don't care what this professor says. So they did a study. Who cares?

Right. The stubbornness that came out of the practitioners was really dramatic, because typically you can convince someone who is motivated to get things right if you can demonstrate that they'll benefit from it, and they just discounted it. And so there's this famous quote from Amos Tversky, after all the stubbornness they encountered repeatedly, of people not even looking at the evidence they were showing them. He said: I've been in a thousand arguments, won them all, but convinced no one.

And he was very famous for being not only quite brilliant, but a little hardheaded and a little aggressive when it came to debating people, at least according to Michael Lewis's book The Undoing Project. Between Kahneman and Tversky, they were two very distinct personality types. So let's get back to you before we spend a lot of time on the hot hand. You're not what I would think of as a traditional economist.
What sort of work do you focus on?

Both my co-author and I focus on individual decision making.

Is that individual decision making within a group, within an institution, or just as a lone wolf?

So there are the psychological factors, like, my co-author works on search and attention and things like this, but there are also factors of the institution, the design, like how information is presented to you. And these things, while they may be important to an individual, also bubble up in terms of how they affect decision making in groups, and how they affect financial markets. So in the end it does impact policy and economic outcomes.

It has real world effects. In other words, these aren't just ivory tower abstract discussions. There's real world application for how decisions are made and how information is presented. That's really quite interesting. You mentioned one of your research areas is behavioral finance. Has all the low hanging fruit in this space been picked, or are there still lots and lots of things to be discovered?

There's still lots of fruit. Whether it's low hanging, I think you always have to work for it, right? To get the fruit, you have to think a lot about how to measure things, have a theoretical grounding in what you're trying to get at. You can't just rely on existing data and existing things that have been counted. You have to go out and measure things yourself a bit, and do some work to collect that data. So a lot of the modern work you'll see is going beyond just the choices that people make, like when you're paying them to make decisions and looking at their choices. You can learn a lot more about what people want and what they believe by looking at other things: reaction times, how they search for things, what they're paying attention to.

In other words, you're not just bringing in a bunch of undergrads, sticking them in a room, giving them twenty bucks for the night, and saying, we're gonna put you through a series of things. You're looking at a very different data set that's measuring very different things.

Yeah, I mean you can improve even with the undergrads. But I think a lot of the innovative work goes and collects unique data from unique subjects. I have a friend, Alex Imas, who just presented this very interesting paper on this topic in finance. They went and looked at institutional investors, tons of data, people with big positions, and found that they're actually quite skilled at buying stocks, but they aren't so skilled at selling them. They seem to be distinct skills.

Very distinct skills, because buying is the easy part. Selling's where the money gets made. Those are not equal levels of difficulty. I'm absolutely not surprised to hear that selling demonstrates less skill than buying. Is that basically what you found?

So the finding is that they unloaded extreme winners too quickly, before they really exploited the information advantage that they had. So they did a good job choosing, but they sold too soon.

A classic mistake. Let's talk a little bit about the original hot hand paper, which, as we discussed earlier, became canon in the world of behavioral finance. When did you first start to get an inkling that the original thesis might not have been all it seemed to be?

So the original inkling was that people sometimes overreact.

The thesis that it's a myth, that the thing doesn't exist: why does that generate such a strong intuitive pushback from people?
I mean, I have my thesis; I'm curious as to yours.

I think everyone has some experience in their own athletic performance where they have moments where they're particularly locked in, and then they realize that even outside of athletic performance you have these moments where you're in the zone, that's the best word for it. You're firing on all cylinders, and you would expect that you would see that somehow in basketball data as well.

So, my personal experience: I used to play hoops as a kid, but as I've gotten older I've become a tennis player, and I know from personal experience it takes a good twenty minutes for me to calibrate my forehand so that I am consistently hitting the ball more or less towards where I want it, more or less with the right amount of spin, more or less with the right height. It's not something where I can just grab a racket and swing and, oh, there it is. It takes a while. Too fast, too much whip, loosen your wrist, bring it around, make sure you're dropping the head: I'm running through a series of steps in my head. Hey, you're too close; watch your footwork. One after another. And I am now good enough to know I suck. I'm in that Dunning-Kruger drop where, oh, I used to think I was good; now I'm good enough to know how good I actually am not. But it takes a while to calibrate that. I imagine a basketball player in the midst of a game has to go through some sort of fine tuning of their shooting. You can warm up all you want when you're just shooting by yourself before the game, but when people are on you and you're running, it has to be a very different set of circumstances. Or am I overstating this?

Well, that's the strongest intuition, based on this calibration thing. There are probably other elements; we'll get to that later. But if you're sitting on the bench for ten minutes and then you come off, that's very different. In the NFL you see field goal kickers warming up on the sideline. You don't see that so much in the NBA; they don't have an extra hoop on the side.

That's right. So I'd imagine that's an important element there. What else is so intuitively attractive about the idea of the hot hand? Is it simply just the zone? Is it the adrenaline and the endorphins? Why do we think, hey, suddenly I'm on a streak? Why do we believe that streak is going to continue? And I'm not talking about blackjack or roulette or games of chance. These are games of skill played at the highest level. So why do we believe it?

I would imagine that sometimes when we believe it, it's not really there. And part of the feeling is the feedback, right? You see that you're successful, and it gives you some confidence. So it's not always simply this zone that emerges. Sometimes you get a few successes in a row and it gives you more confidence in your training. You don't overthink it; you return to and trust your training. You're essentially unconscious. Whereas if you miss a few in a row, you lose your confidence. You start making adjustments, and if you're making adjustments, you're not going to have much consistency.

So let's go back to the original research. Tom Gilovich, one of the co-authors, said about the work that you and Adam did: unlike a lot of stuff that's come down the pike since, this is truly interesting. How encouraging was that, coming from one of the original authors who ostensibly disproved the hot hand?
It's always nice when somebody appreciates your work, especially someone of Tom Gilovich's stature. At the time he said that, our paper, while it had gone through the public peer review process, hadn't gone through the formal one. And just last week our paper was finally published, online, not in the print edition yet, in Econometrica, which is a top journal in economics. There's a top five, and they're kind of all equal. So now it's been formally taken in, so I think Tom Gilovich might have an even stronger opinion now that it's gone through this process.

So the paper is ready to be published, or was just published?

Yeah, the November issue of Econometrica. It came out.

That's got to be very exciting.

Oh, very exciting. Yeah, it's something.

So what's the takeaway from the original research? What was it that was wrong in the structure of the original myth-of-the-hot-hand paper?

Right. So in the original hot hand paper, they're interested in seeing whether people do better after recent success than after recent failure. That was the most important measure: does your probability of success increase when you've hit a few in a row versus when you've missed a few in a row? Well, we don't know what someone's probability is. It seems like our best guess would be to just look at the percentage of times they make it, right? And so they look at all the events where you've had a streak of recent successes, and all the events where you've had a streak of recent failures, and see what the change in your shooting percentage is between those two conditions. That's very natural, and it's very intuitive to expect that would be your best guess. They do that, and they don't find any difference. And so that's how the problem was set up.

So before we get to your solution, the immediate pushback is: hey, after a shooter gets on a bit of a streak, the defense collapses on them. They're forced to either pass the ball more or take more difficult shots. At the time, there was no way to account for that difference. However, in the intervening years, every shot gets marked. You describe this in one of your recent publications. Explain the degree of difficulty that is
now tracked on every single basketball shot that's taken.

Right. So there's a new company now, I don't remember the name, but SportVU was the first company that did this, where they have optical tracking. The precision isn't super high, but it gets in the general area, and you can control for a lot more factors than you could before. In the original study, they had the Seventy-Sixers, and they're just looking at the play-by-play. And even in that data, what you'd find is, yes, there's evidence of the defense adjusting to what they believed to be a hot hand, making it more difficult for the player. But the player still shoots from time to time, to keep the defense honest. So the important thing isn't so much the player doing better in the context of the game, but that it helps the team if they're hot, because it opens things up for their teammates.

Makes sense.

Yeah. So on the innovations that have happened: I think the first innovation actually was Justin Rao, who's the head economist at HomeAway. He was the first one to actually come out and measure how many defenders are around the player, and try to control for these things in a different way, by using the videos. He showed that, yes, there's a lot of evidence of this defense factor, and that if you control for a few of these things, the effects that had been found in the previous study went away.

So in other words, what looks like it's random is you're shooting the same percentage, but with a whole lot more defensive activity on you. Therefore, it's a continuation of the streak.

Yes. Well, he didn't necessarily find evidence of the streak there, because he controlled for a subset of factors. As you add more controls, it looks like there might be some evidence there. But these are very difficult things to measure in the context of the game. The original study had this critical test, and it's been repeated with other teams, where they take players and pay them to shoot the basketball.

So in other words, you're not playing during a live game. You're just doing foul shooting, or three point shooting, or whatever.

Exactly. Or you look at the NBA three point shooting contest. In those studies you can get rid of the defense and zero in on your question a bit more.

Let's talk a little bit about the surprising math of coin flips.
My best guess, and my understanding of statistics, has always been: if you take a true coin and flip it, the odds of a head or a tail are fifty-fifty, regardless of what came before it. Coins have no memory. But you found something surprising in the data set. After you flip a coin a hundred times, if you go back and pick a specific series, the odds are somewhat different. Explain that.

Yeah. So my co-author Adam Sanjurjo and I, after having watched the NBA three point shooting contest, we had a particular player, Craig Hodges, who was obviously hot. We went and used the original analysis on his data, and it said that he wasn't. That was puzzling, and so we had to go and see. Well, we don't really know how Craig Hodges generated his shots; it's kind of a black box. But let's create an environment where we have the ground truth, where we know what's happening. Coin flips are a world like this. You can actually go and flip a coin many, many times, or do it on a computer, and see what you get when you analyze it. We're interested in: is the probability of heads after a few heads different from the probability of heads after a few tails? We know it's the same; we have the ground truth. But now let's go out and generate that data and make our best guess from that data. Is the percentage of heads you get after a few heads in a row the same as the percentage of heads you get after a few tails in a row? Analyzing it in the way they analyzed it, we found that no, it's different. The percentage of tails after a few heads in a row is higher.

Which is so counterintuitive, because prospectively, and so people understand this before they lose their minds and start sending emails, what we're not talking about is looking forward in a live situation. With a true coin, you could have a thousand heads in a row. Highly improbable, but not mathematically impossible. The odds on the next flip are still gonna be fifty-fifty. That's not what we're saying. We're saying: flip a coin a hundred times, look at the data set, and then go back and randomly pick any head in that sequence. What are the odds that the next flip is a head or a tail? It turns out it's not fifty-fifty. Explain that, because it blows people's minds. You've been told over and over again, hey, coins have no memory. But that's not what this is. This is an existing data set. When we randomly pull any of those flips, what are the probabilities for the outcome of the next flip, after it's already been done?

So the complete explanation would take some time, but we can get at an intuition. If you flip a coin a hundred times, there's going to be a certain number of heads and tails there when you're done. No guarantee what the split is,
but it's gonna be some number. Now, if you just choose any flip, say I choose flip forty-two, my best guess for heads is fifty percent. But that's different from choosing flip forty-two because flip forty-one is a head. If I choose one of the flips where the previous flip is a head, just choose a flip that follows a head and see what the next flip is, now there's something else going on. Because the flip you chose was chosen on the basis of the previous flip being a head, you're using information about the outcomes of adjacent flips, and that information kind of gets contained within your flip. This gets a little complicated, but one way to think about it is: you've taken a head away from the finite number of heads that you have, and you can't see it again. You've reduced that data set, and the remaining sample should now be slightly tilted towards tails. And there's another element, having to do with the way the streaks are arranged, that makes the effect a bit stronger.
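To make that concrete, here is a minimal simulation sketch of the selection effect just described. This code is illustrative, not from the conversation or the paper: for each hundred-flip sequence, it computes the share of heads among flips that immediately follow three heads in a row, then averages that share across many sequences.

```python
import random

def prop_heads_after_streak(n=100, k=3, trials=100_000):
    """Average, across sequences, of the share of heads among flips
    that immediately follow k heads in a row."""
    total, count = 0.0, 0
    for _ in range(trials):
        flips = [random.random() < 0.5 for _ in range(n)]  # True = heads
        # Flips whose previous k flips were all heads.
        sel = [flips[i] for i in range(k, n) if all(flips[i - k:i])]
        if sel:  # skip sequences that never show k heads in a row
            total += sum(sel) / len(sel)
            count += 1
    return total / count

print(prop_heads_after_streak())  # comes out near 0.46, not 0.50
```

Each individual flip is still fifty-fifty in real time; it is the within-sequence averaging over a selected subset of flips that pushes the measured proportion below one half.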
That smells to me slightly like the Monty Hall problem, where you go from choosing one in three to, once you see the statistics, you can't unsee it: you should always make the change, because switching turns a one-in-three chance into two-in-three. Is there a tiny element of that in this?

More than tiny. My co-author Adam Sanjurjo and I also wrote another paper connecting this to the Monty Hall problem, and explaining it via a principle from bridge, the principle of restricted choice, which is essentially the intuition of Bayes' rule. So the way to think about it: in the Monty Hall problem, you're on this game show and there are three doors. Well, let's make our problem exactly the same as Monty Hall. It doesn't have to be three doors, but we can make it three doors. So you're on a game show, and usually you have a car and two goats, and you've got to find the car behind one of the doors. You've got to guess. Well, let's get rid of the goats and cars. Now let's flip a coin behind each door, so behind each door it's fifty-fifty. You're the contestant. The host knows what the outcomes of the flips are; you don't. You want to guess: hey, where's the heads? Let's say you want to find the heads, so you guess, say, door three. Now, if you guess door three, the host looks behind the doors you didn't guess, doors one and two, and he's going to reveal a heads if he can. So let's say the host opens door one and shows you a heads. Do you want to switch, or do you want to stay? If you're looking for the heads, you want to stay. If you're looking for the tails, you want to switch. The intuition is not going to be clear immediately. But think about it: the host, looking at doors one and two, used information about both doors to determine which door to open for you. If both doors were heads, the host could have opened door two. But if it was heads-tails, the host had to open door one, because the host is going to show you a heads if he can. He doesn't want to show you a tails.

Because that's the goat.

Yeah. Now, we don't know which world we're in: the one where the first door is heads and the second is tails, or the one where the first is heads and the second is heads. But the world where it's heads-tails is the world where the host is more restricted; the host has to open door one. And so you should avoid door two in those circumstances. If you're hunting for the heads, you should avoid door two, because tails is more likely there: in the world of heads-tails, the host had to open door one. So it's a
higher probability of tails. So door three, even though it's a coin flipped independently of the other two, when you're dealing with that data set, you're better off staying with three, because of the circumstances that led the host to pick door one and not door two. That makes some rational degree of sense. Once you get the Monty Hall aspect of this, it makes a whole lot more sense. It's quite fascinating.
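The coins-behind-doors variant is easy to check by simulation. The sketch below is illustrative only; the door indexing and the host's tie-breaking rule (choosing at random when both inspected doors show heads) are assumptions consistent with the description above. It conditions on the host opening door one to reveal a heads, then asks how often door two, and your door three, hold heads.

```python
import random

def trial():
    doors = [random.random() < 0.5 for _ in range(3)]  # True = heads
    # You pick door 3 (index 2). The host inspects doors 1 and 2 and
    # reveals a heads if he can, choosing at random if both are heads.
    revealable = [i for i in (0, 1) if doors[i]]
    if not revealable:
        return None          # no heads to reveal; drop this trial
    if random.choice(revealable) != 0:
        return None          # keep only trials where door 1 was opened
    return doors[1], doors[2]  # (unopened door 2, your door 3)

results = [r for r in (trial() for _ in range(200_000)) if r is not None]
print(sum(r[0] for r in results) / len(results))  # door 2 is heads ~1/3 of the time
print(sum(r[1] for r in results) / len(results))  # your door is heads ~1/2 of the time
```

The restricted-choice logic shows up in the numbers: conditioning on what the host was able to reveal shifts door two toward tails while leaving your independent door at fifty-fifty.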
We were discussing the coin flip issue and the hot hand scenario. Let's circle back to the hot hand and the original research. The original research said that if there's a streak of three hits in basketball, or three misses, the odds of the next shot going in or not are whatever the shooter's historical shooting percentage is, which sort of seems to say there's no hot hand. But that presumes that after a streak, their next shot should land dead center on their percentage. You found it should be worse than that. Explain.

Exactly. So that's the counterintuitive thing. If you go out and watch a player shoot a basketball, and you look at their shooting percentage after a streak of hits and compare it to their shooting percentage after a streak of misses, and you find that it's the same, the intuitive thing is to say, oh, they just have the same rate. But actually you would expect them to do worse.

Explain that, because that's the most fascinating part of it. Someone is on a shooting streak. We take a data set of a whole run of shots. What do you find after the streak, and why? You said you find their percentage actually goes down after a streak.

In a world where there's no hot hand, where they're a consistent shooter, their percentage will go down after a streak in the data. Not in reality; their probability is always the same. But we don't observe the probability. We calculate the percentage, and that's where the biases come in. The original authors found that the shooting percentage was around the same, and that's correct. We went and checked, and they were right; they did the calculations correctly in that sense. The mistake is in understanding the benchmark. You have to go out and say, okay, now let's look at the world where we know, where we can control it. On a computer we can generate coin flips, so we can make a player that has no hot hand, and then look at how that player does when we analyze the data. And we realize: oh, they should do worse after a few in a row. So once you adjust for that bias, you find that if they're doing the same, that's indicative that they're actually doing about ten percentage points or more better after hitting a few in a row than after missing a few in a row. And that's huge. That's like the difference between the median and the best NBA three point shooter.

Thereby confirming the hot hand. So I have to challenge the data set, because everything about this, each step along the way, is so counterintuitive. Why would we expect a shooter who's on a streak, who's in the zone, who has the hot hand, whatever we want to call it, to have a lower shooting percentage after hitting several shots in a row?
For a real human, for anybody, for a professional: when you look at a data set of all the NBA shooters, what does the data show? After a streak, their shooting percentage actually becomes lower?

So if you're talking about live action games, we have those issues we spoke about; the defense will adjust, so that becomes a little more complicated. So let's talk about three point contests. If the hot hand didn't exist in a world like that, we would expect players to shoot worse after making a few in a row, in the data.

So simply mean reversion? Is that all?

It's not mean reversion. It's the same thing we talked about with the coin flips. As a researcher, you're taking the data after it's already been generated, and you're picking through it, looking only at the events that you're interested in. You're looking at their probability of success given recent success, so you're picking out those events where they had recent success, let's say where they just made three in a row. So you're changing the data set; now there are three fewer hits available.

So if someone shoots three in a row, and we're looking at the data set, let's say they've shot twenty shots, and we ask how they do after three in a row: well, guess what, you've pulled three hits out of the set, meaning there's a disproportionate number of misses left.
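Here is a sketch of that benchmark exercise. It is illustrative, not the authors' actual code: it simulates a coin-flip shooter with a fixed fifty percent hit probability over a hundred shots, and computes the measure from the original study, the hit rate after three hits in a row minus the hit rate after three misses in a row, averaged over many simulated players.

```python
import random

def streak_difference(n=100, k=3, p=0.5, trials=50_000):
    """GVT-style measure on an i.i.d. shooter: share of hits after k hits
    in a row minus share of hits after k misses in a row, averaged over
    sequences where both quantities are defined."""
    total, count = 0.0, 0
    for _ in range(trials):
        s = [random.random() < p for _ in range(n)]
        after_hits = [s[i] for i in range(k, n) if all(s[i - k:i])]
        after_miss = [s[i] for i in range(k, n) if not any(s[i - k:i])]
        if after_hits and after_miss:
            total += sum(after_hits) / len(after_hits) - sum(after_miss) / len(after_miss)
            count += 1
    return total / count

print(streak_difference())  # roughly -0.08, not zero
```

The benchmark for "no hot hand" is around minus eight percentage points rather than zero, which is why a real shooter who merely breaks even on this measure is, on Miller and Sanjurjo's reading, genuinely better after hits.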
That's part of the bias. And there's this other element I didn't quite get into: you essentially have a stopping rule. As you collect the data, the moment they miss, you're not interested anymore; you're going to wait for a streak of hits again. So you're biased towards stopping at a miss. You might get a miss right away, and then everything you've collected in that set of events is misses, because you collected just one shot and it's a miss. So you're biasing yourself towards collecting misses.

That's quite fascinating. So what other areas like this are you studying? Are there other sports myths that you're looking at that have a probabilistic element that's very counterintuitive, or is this pretty much the biggest one out there?

This is the biggest one that we're studying.

A lot of what you're doing is statistical and probability work at a level that the average sports fan is really not familiar with. Forget the live game. When you explain it relative to the three point shooting contest, it's really not so much about the streakiness of the shooter, but the mathematics of the data set. And I think that is really counterintuitive, but it doesn't seem anyone's been able to disprove what you and your co-author found.

So there have been a lot of challenges to that original study, legitimate challenges. There are issues with what they call statistical power. We have a friend and colleague, Daniel Stone, who made this nice point that you have this thing called measurement error. We want to know how you do after hitting a few in a row; that's what we actually look at. But what we're really interested in is how you do when you're hot. When you've hit a few in a row, you're not always hot, and so you can underestimate how hot someone is if you use only the data you can observe, which is zeros and ones. The econometrician, the statistician, has kind of a weak measure of that.

So this kind of evidence is just the mathematical evidence. Do you ever do interviews of players? Do you ever say to them, hey, were you in the zone? How did you feel? How do you find that data set?

So the original study looked at data like that. They spoke to the Sixers and asked them qualitative questions: do you get in the zone, do you feel hot? And they all do, right? But it's hard to work with that. That's just looking at whether they believe in it or not. Getting a sense of whether they believe in it too much or not, that gets a bit harder, because you have to somehow measure it; they have to decide when they're hot. So you really need a lot more cooperation from, say, a coach or a player, to sit there and maybe watch the games with you or something like that. That would maybe be a better way of testing their beliefs.

So when Tversky and Gilovich's original study came out, and I'm forgetting the third person in GVT, there was a tremendous amount of pushback from coaches around the league. We mentioned Red Auerbach. Your study comes out, and you basically say: no, you professional coaches, you were right. There is a hot hand; there is a streak. What sort of feedback have you gotten from players and coaches
about your research?

Well, we're not entirely sure whether players and coaches were ever really swayed by the original study. So as for validating their beliefs for them, yeah, they kind of never believed that result to begin with. We haven't gone and sought the opinion of players and coaches, because it's not so clear how far that original conclusion reached into that world. It did reach it to some extent; you can see announcers mentioning it.

Yeah. So what about some of the outlier players? If you look at a Michael Jordan or a Steph Curry, guys who literally become just unconscious, and Reggie Miller is another one, and the most improbable shots start to drop on a consistent basis: when you look at players like that, do different players seem to have different degrees of streakiness, a different hot hand? Can you calibrate how much of a hot hand different players have?

Using game data, that's a bit more of a challenge. My co-author Adam and I looked at Spanish semi-pro players. We could collect a lot more data and we had more of their cooperation, and there seemed to be a clear difference across players. The obvious one is that centers and forwards, people who don't shoot that often, find it hard to get on a roll, because you have to be consistent, and they're kind of not that consistent when they don't touch the ball all that much. Those are the people you'd expect maybe can't really sustain a streak, and that's what we find. So there are some players who can, and some players who seem like they can't. If we go to real NBA players, that's a bit of a challenge, so we've looked at the three point shooting contest, and we have a paper on that. The issue with the three point shooting contest is that a lot of the players don't have much more than, say, a hundred shots total in the contest; maybe some have a few more. You have a Craig Hodges, who has over five hundred in our data, and we find evidence there. What we can say is that among all the three point contest contestants, there were way more who did better after making a few in a row than after missing a few in a row than you'd expect. But you don't really know which of them are really hot. You just know there are more of them than you'd expect; you need more data to be really confident when you pick out an individual.

So at this point in the state of research on the hot hand, do you have any doubt that the hot hand exists?

I don't have any doubt that the hot hand exists. What you mean by the hot hand is where the doubts come in, because there are many different mechanisms that can lead to evidence in the data that your probability of success after recent success is higher than after recent failure.

So the confidence factor, the endorphin factor, the extra pressure that the other team is placing: all those things add up. You ask a player, and they're gonna say, yeah, of course you get hot. But now when you ask the statistician, the data supports it as well.

Right. Quite fascinating. We have been speaking to Joshua Miller. He is an economics professor and researcher
at the University of Alicante in Spain. If you enjoyed this conversation, be sure to come back and check out our podcast extras, where we keep the tape rolling and continue discussing all things statistical, sports, and behavioral. You can find that at iTunes, Overcast, and Bloomberg dot com, wherever your finer podcasts are sold. We love your comments, feedback, and suggestions. Write to us at MIB podcast at Bloomberg dot net. You can follow me on Twitter at Ritholtz, or check out my daily column at Bloomberg dot com slash Opinion. I'm Barry Ritholtz. You're listening to Masters in Business on Bloomberg Radio.

Welcome to the podcast. So, Josh, I have to tell you, I was very much a skeptic. A little background. First, I've been a fan of Gilovich for a long time. When I started in this business a hundred years ago as a trader, it was the bad old days, before behavioral economics had made its way to Wall Street, and I found a book by Gilovich, How We Know What Isn't So. It was the first mass book, or more popular book, not that it was all that popular, but it was the first book for a popular audience that had an enormous behavioral finance component to it. So I found him absolutely intriguing. He led me down the rabbit hole of behavioral finance, and it's been an enormous influence on my professional career, because very often, when I couldn't figure out what the hell was going on according to what the head trader was saying, behavioral finance gave a much better answer. And the same is true when you're looking at markets, or the economy, or what people get wrong. So my bias was to say: Tversky, Gilovich, these are two legends; of course they're right. But I have to tell you, having gotten through as much of your paper as I could until the formulas started to show up, it's a compelling argument: when we look at the data set, players on a streak, from within that data set, should have a lower shooting percentage following three in a row than you would intuitively expect, and when they don't shoot worse, that in and of itself is evidence of the hot hand. It's such an elegant and unexpected way to do the analysis of the hot hand. I have to ask, how did you guys come upon that? I'm not a statistician, but I would never have thought, because so much of it is so intuitive, to look at what the expected shot is. With coins it should be fifty-fifty. Why would you expect it to be anything less following three in a row?
How did you work your way towards that research?

So, both my co-author Adam Sanjurjo and I, we didn't see any problem in that respect with the original paper. We didn't say, oh, they're clearly making a mistake here. No one did. Since we discovered this thing, we've gone and asked statisticians, people who are very good. They look at that test and they say, oh, maybe it's underpowered, or they might have some little quibbles, but they don't have any expectation that you would shoot worse after a few in a row. To see it, you actually have to go out and simulate, or sit down and really calculate, and so it doesn't strike you in any way. So we discovered it by a bit of a stroke of luck. We were looking at the NBA three point contest data, and we had to analyze it very quickly, using a method different from the one we'd been using. So we just used the method of the original study, which was much quicker to run. We ran that, and we found this player who we knew was hot, whom I mentioned earlier, Craig Hodges, and it said he shot no better after making a few in a row. And that just didn't make sense.

Was that a brute-force, quick, down-and-dirty pass? And so you moved to something a little more sophisticated, or whatever the better word for it is?

So the sophistication came later. We took the test used in the original study, and that measure wasn't showing anything, and that didn't agree with our perception of what we saw in those videos, and with some of the elementary things he did. Like, he hit nineteen in a row at one point, never missed more than five, and he was around a fifty percent shooter, which you'd never expect from a fifty percent shooter.

Nineteen in a row is astonishing.

Yeah, it's incredible. So then we went and said: well, what if he were a coin? What if Craig Hodges were a coin? Let's generate his shots as if he were a coin, and repeat this, imagine we did this many, many times, and look at what we'd expect overall. And when we run it many times, we see: oh, you'd actually shoot worse after making a few in a row. That seemed very counterintuitive. We were struck: this doesn't seem like it's right, but this is what the analysis is giving us. We have to understand this. This is what the data is saying.

Two things we've discussed. One is: after you have a streak of six in a row, and you have a finite number of shots, well, now there are six fewer heads in the remaining group, so there's a higher probability of tails after that. That makes perfect sense, because you're just changing the remaining data set by what you're looking at, given a fixed
number of coins, a fixed number of shots. And then of course mean reversion assumes that after a long streak of heads you should start to see more tails, which is like the gambler's fallacy a little bit. So let's go into that. Explain that.

Yes. So the gambler's fallacy is this idea that comes out of the casino, and it's been known for hundreds of years: if you see, say, five or six blacks in a row at the roulette table, it feels like a red must be more likely, right? And so people get drawn into this and they start betting more.

Maybe, but it's still about forty-seven percent either way, plus the green.

Yeah, it's near fifty-fifty regardless, right. In reality the probabilities haven't changed.

But when you look at a fixed data set that you expect to be fifty-fifty, not prospectively at the roulette table in real time, but where we know that, hey, there are a hundred coin flips, and we're gonna assume half of them are tails and half are heads: after you've had a long streak of heads, the assumption is that out of that full data set, there should be more tails coming up. In real time, that's truly the gambler's fallacy. But when you're looking retrospectively at the data set, it's basically just a variation of: hey, you've already exhausted a lot of heads, therefore there are more tails out there.

Yes, exactly. And as we mentioned before, there's an extra wrinkle on top, which depends on how the streaks are ordered. When you pick up a flip because the previous three were heads, the flip you pick up is either a heads or a tails, but it's more likely to be a tails: one, because of the heads that were removed, and two, because if it were a tails, you've interrupted the streak, and you have to wait for a new streak of heads to begin before you can select again. So you're pulling a big chunk of the possible selections out, and what's left is more likely to be a tails. Which is fascinating.

So you guys are doing this research. At what point do you say, holy cow, this is really a fascinating discovery? It's not just a tiny effect.
Ten percent is a huge number in this sort of data series. When did you guys look at each other and say, hey, this is something really important?

We knew it was a big deal the moment we saw it. Really. We were on the phone.

You didn't say to yourselves, this has to be wrong? Ten percent! How did nobody pick this up? In thirty years, nobody had seen this?

So this was about two years after we had begun the project, well, maybe not that long, but almost two years, and we had read every paper in the literature, so we knew nobody had seen this. No one had said it. So we knew it was a big deal for that literature. The only question we had was how new it was. I mean, we knew we could trust the computer, right? Of course, you have to make sure you didn't make an error in your code. You have to sit down and do the simple example to make sure you didn't make a calculation error. Once we did that: okay, this is clearly a true thing. Now the only question is, did anyone know this about coin flips before? Is this a new discovery about coin flips? And yes, there are some mathematical things that are somewhat related, but no, it was even new in that dimension. So we knew we had something really big, and that was exciting, because you have this moment where you're the only person in the world who knows something. It's an exciting moment.

I feel that way every day I wake up; I have that sensation, so I can appreciate it, though probably not as solidly based as yours, at least according to my wife. So that's amazing. You guys come up with this incredible breakthrough. Nobody had found this. It's been decades, and it's been widely accepted. It's become part of the canon. But it's classic confirmation bias, which is so reflexive and meta: there is a study that says people are fooled by randomness and think there are streaks, which turns out perhaps to be confirmation bias by behaviorists who are warning people against being fooled by randomness and seeing what they want to see. It's got a little bit of Mandelbrot reflexivity built into it. It's quite amazing.

Yes. In a sense, that mistake proves the spirit of the general point about misinterpreting randomness. Even the best of us, the best researchers out there, still make these mistakes due to randomness; while saying others are making the mistake, you're making the mistake yourself.

So they accidentally proved their point, which is: it's very easy to be fooled by a random data set into thinking there's a broader conclusion there, until subsequent research discovers that, hey, this isn't quite as random as you think it is. There's a ten percent gap between true randomness and the remaining data set. That's quite fascinating. So you guys look at each other and say, hey, we're onto something real.
How did it progress from there? What year was this?

This was February. We found this, and we knew it was important. So we presented our work, and when you see the eyes light up, you realize it's even bigger than you thought it was. And then you realize: hey, wait a minute, we don't have the paper yet, and now other people know about it.

Who did you present it to originally?

At Oxford University. That was the first reveal, and you see the eyes light up in the room.

Were you genuinely concerned at that moment: oh, someone's gonna try and beat us to publication?

So we put everything aside and we went to the grind. Within two months we had the paper.

And no one was going to catch you at that point. You had enough of a head start, and you were the original people who found this. So two months later the preliminary paper comes out. You posted it online, NBER and everywhere else, wherever, just to get that time stamp, wherever finer white papers are sold. So that was April, and the paper went online in June. What was the response to that?

The response was big. There's a statistician at Columbia University, Andrew Gelman, who has this blog, and

everybody's heard of Andrew Gelman. Or let me rephrase that: anybody who's interested in statistics knows who Gelman at Columbia is.

Fair statement. He's at the crossroads of pretty much all the social sciences when it comes to data and statistics, right? And so getting attention from Andrew Gelman

is huge, it's huge.

And that was high fives all around. Yeah, but it's also scary when you get attention from Andrew Gelman, because if you made a mistake, it's open peer review season. They're getting in there in the comments. He'll get you. They're just having fun; they love talking about data, and they're not gonna worry about how you feel about it, because they're only interested in the main points, like: what do the statistics say? And you're sitting there sweating bullets, hoping you didn't make a mistake somewhere.

It's at that level. This isn't Twitter fights and ad hominem attacks. It's, hey, let's get into the math. Let's see if they're crunching their numbers correctly. Let's see if we can find an error in their modeling. What did Gelman discover?

So Gelman went and did the work himself, and what he found agreed with what we found. And so he said: hey, guess what, there is a hot hand. That was his post, and then it kind of snowballed from there.

That's it. So then there's a Wall Street Journal piece on it, and then there was an ESPN or a Sports Illustrated piece, one of the sports outlets.

Yeah, it was ridiculous. We were like, okay, when's the fifteen minutes gonna end? But I guess the news world is so balkanized by this point that it just kept rotating from subject to subject.

And you guys published a number of popular pieces. I have to say, I think you undersold the math on this. It's not that you dumbed it down; it's that you were so circumspect. Maybe modest is the right word. If I were a different person than you, I would have written something that said: dude, just listen up. The whole no-hot-hand thing, let us show you why that's not true. Here's the math. It's ten percent. It's a giant impact, and here's why. I thought you guys were very circumspect in, what was it, The Conversation? Yeah, the Australian site The Conversation. That was a fairly modest discussion. I would have been like: hey, pay attention to this. We're changing the understanding of sports streakiness. This is a big deal. What other applications are there of this finding, of both the flips of coins and the streakiness of shooters? Where else can this be applied? Are there other uses of this mathematical,
or I should call it statistical, observation?

Yes. So the bias that we found can manifest itself in many areas. It's not just about time. We were looking at how you did recently and whether that affects how you do next, and we found some biases there. But it's not fundamentally about time; it's essentially about space, because you're looking at data, and we represent time with space. We have period one, period two, period three, all next to each other, so you have this one-dimensional spatial thing, a line. And it can go in either direction. It's not time's arrow that's determining it.

Right. If I hit three in a row, the chance that the flip that immediately preceded that streak is a heads is actually lower too, for the exact same reason, which means the actual streakiness of the player isn't relevant to the prior one, even though we would expect it to be relevant to the subsequent one. It's all the same statistical data set: fewer heads in the remaining pool, et cetera.
So you can extend this beyond time and talk about space. If you're interested in, say: if I'm surrounded by red people, am I more likely to be blue? You might go and look at the data set. This is the ping-pong-balls-in-a-vase kind of statistical problem. People study segregation and clustering, where people live and things like this, and you might go into a data set and use this intuitive measure: let's see if I'm more likely to be blue if I'm surrounded by reds. You have the same issue here. If I were a blue, I've kind of excluded other possibilities of blues being surrounded by reds elsewhere, and that actually makes blue more likely at the selected spot, for some of the same reasons we have this bias when we're talking about time. So there are potentially many other areas where a bias similar to this could manifest itself.
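Here is one toy version of that spatial measure. This is a construction of my own for illustration, not from their work: cells on a line are independently red or blue, and we average, within each line, the share of blue among cells whose two immediate neighbors are both red.

```python
import random

def prop_blue_between_reds(n=10, trials=200_000):
    """Cells on a line are independently red or blue (50/50). Among cells
    whose two immediate neighbors are both red, what share is blue,
    averaged line by line?"""
    total, count = 0.0, 0
    for _ in range(trials):
        red = [random.random() < 0.5 for _ in range(n)]
        sel = [not red[i] for i in range(1, n - 1) if red[i - 1] and red[i + 1]]
        if sel:  # skip lines with no red-flanked interior cell
            total += sum(sel) / len(sel)
            count += 1
    return total / count

print(prop_blue_between_reds())  # above 0.5 for short lines; shrinks as n grows
```

Any single cell is blue with probability one half regardless of its neighbors; it is the line-by-line averaging over a neighbor-selected subset that tilts the measured share, just as in the time-series case.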
I'm reminded of, a couple of years ago, the cancer clusters around power lines, where a lot of statisticians came out and said: well, no, this is just the heads-and-tails problem again. You have all these non-clusters around other power lines. If it's a causal element, why is it causing it here but not a half mile down the same power line? It's just a random aggregation of data, and you're seeing something in it. Of course you're gonna get ten heads in a row if you flip a coin a million times. That's all you're seeing. Do you have an application to those sorts of cognitive issues?

So we haven't found that specific application; to be honest, we haven't scoured that literature. We have found papers that have measures of clustering, like, how likely am I to live next to someone who's like me versus not like me, depending on who's around, and there are some measures that are biased for a similar reason to why we have this bias. Now, the cancer cluster one, that's a little bit different. That's because of what you might call the blade of grass fallacy. There are lots of blades of grass; you shoot a speck of water, it hits a blade of grass, and the blade of grass says, oh look, I'm so lucky, it was coming for me. It had to hit some blade of grass, right?

Someone's gonna win the lottery.

Someone's got to win it. The chance that somebody wins the lottery is super high; the chance that it's you, not so much.

Yeah. So that's interesting. Before I get to my favorite questions, which I ask all my guests, I have to ask: what else are you guys working on? What's the research coming from the minds that brought us proof that the hot hand exists?

Well, in our world, it's very tempting to move on to the next thing before finishing what you started, and we have not exhausted everything out of this one piece of data. So we have a lot of i's to dot and t's to cross, but a little bit more than that. You want to finish and get the message out, but also share the other insights that you have, because they come out of the same work. This is, say, the main insight, but there are other very subtle and interesting insights, because when you master something and come back after working on it for a while, there's a lot to share.

So tell us: what other insights can be derived from the hot
hand papers?

Yeah, so there's another result in that Gilovich and Tversky study, which Gilovich mentioned in the book you talked about earlier. They realized: okay, we measured the hot hand in a certain way, but maybe we're not capturing everything that people mean by the hot hand, and maybe some players are seeing something that we, the statisticians, the econometricians, aren't measuring. So they went and had people predict and bet on outcomes, and they found that the bets don't really correlate with the outcomes. And that's kind of evidence: okay, even if we're not measuring everything, if the players were seeing something, you'd think they would bet successfully. You could take that as evidence that there is no hot hand there; well, at least it's evidence that they're somehow not exploiting it
in a profitable way. But there was actually a mistake in that analysis as well, which is: even if someone were perfect at detecting the hot hand, imagine Ann and Bob. Bob's a shooter, and Ann is his predictor; she's observing Bob. She knows when Bob's hot, and whenever Bob's hot, she's going to predict that Bob will make the shot. Now, you would expect that if she's good, her bets, her predictions, are going to correlate really well with Bob's outcomes. But actually you wouldn't expect that. And that's another counterintuitive thing: while she's perfect at detecting his state, the outcome given the state is noisy. You're just getting one draw from Bob's urn. Even if Bob's moving from a seventy percent to an eighty percent probability shooter, if you only take one draw from that urn, you're not getting a very good signal on Bob's state. You need a lot of draws. And even if you're getting many predictions from Ann on Bob, you're still only getting one draw for each one. So the evidence they had there is actually consistent with Ann being very good at detecting it. Indeed, in the data you find that the shooter shoots around seven percentage points better when the predictor says he's going to make it; a simple correlation test would miss that.
paid basketball players are betting on each other's shots. And that's the evidence that we find that's quite interesting. That sounds that betting on the outcome of a shot sounds very much like fund managers selecting stocks for a portfolio. Have you applied any of the hot hands? Two? How do fund managers do when they're on a hot streak
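To put numbers on that signal-to-noise point, here is a minimal simulation sketch. The names Ann and Bob and the seventy-to-eighty percent jump come from the conversation itself; the hot-half-the-time state, the sample size, and everything else are illustrative assumptions, not the study's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n_shots = 10_000

hot = rng.random(n_shots) < 0.5          # Bob's hidden state (assumed: hot half the time)
p_make = np.where(hot, 0.80, 0.70)       # hot Bob shoots 80%, normal Bob 70%
outcome = rng.random(n_shots) < p_make   # one noisy draw from Bob's "urn" per shot
prediction = hot                         # Ann detects Bob's state perfectly

print("hit rate when Ann predicts a make:", outcome[prediction].mean())   # ~0.80
print("hit rate when she doesn't:        ", outcome[~prediction].mean())  # ~0.70
# Even with perfect state detection, the shot-by-shot correlation between
# prediction and outcome is small (~0.12), because each shot is one draw.
print("prediction-outcome correlation:   ", np.corrcoef(prediction, outcome)[0, 1])
```

Even a ten-point gap in make probability leaves the shot-by-shot correlation weak, which is the counterintuitive point being made here.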
That's quite interesting. Betting on the outcome of a shot sounds very much like fund managers selecting stocks for a portfolio. Have you applied any of the hot hand work to how fund managers do when they're on a hot streak or a cold streak? There's a ton of mean reversion in that data series, right? So we haven't gone and analyzed that, and the mechanisms for being hot in the financial world are going to be quite different than in the basketball world, right? One way of thinking about it: you look for SEC indictments, you look for... no, I'm just kidding. For sure, performance becomes so affected by such large macro things that it's hard to assign credit or not. Or, you know, your model of the world happens to uniquely fit the current situation, and you recognize that, but that may or may not be temporary; that edge would probably expire. But that's a very different mechanism than, say, how it would emerge in a basketball game. So I interrupted you: what else do you see as an application of this elsewhere, an application of what you've discovered, to the world of finance? To the world of finance. So the immediate applications, maybe not so much. But if you think about people picking stocks, let's say not so much investing, but someone wants to prove that they're good at predicting when a stock is going to go up or down.
You have to pay attention not to how often they're right, but how much money they make when they're right and when they're wrong, because it's very easy to game these things. So let's say it's fifty-fifty and I want to prove that I'm good at predicting coin flips. Every month a coin is flipped each day; the stock goes up or down. But I only bet when there are three heads in a row: when the stock goes up three times in a row, I bet it's gonna go down. In any given month, I'm gonna tend to be right more often than I'm wrong. And so I can game it: if you bracket it to the month level, I'm gonna be right more often in most months, and it's gonna look like I'm doing well. But the thing you haven't paid attention to is how often I was right, and how much I lost when I was wrong, in the months that I did poorly. If I'm always betting that it's gonna go down when there are a few ups in a row, there are gonna be those months when it keeps going up on me. There will be a few of those months, and there will be many months when I did well but didn't predict very many times. You're not controlling for how often I predicted, and so it looks like I do really well. But if I were actually betting, I wouldn't be making any money, because I'd be losing a lot of money in the months where I didn't predict well and only winning a little bit of money in the months that I predicted well.
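Here is a small sketch of that gaming trick as a simulation. The bet-down-after-three-ups rule and the monthly bracketing come from the example above; the twenty-one-day month and the one-dollar stake are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
monthly_win_rates, monthly_profits = [], []

for month in range(50_000):
    up = rng.integers(0, 2, size=21)              # ~21 trading days, fair coin
    three_up = (up[:-3] == 1) & (up[1:-2] == 1) & (up[2:-1] == 1)
    bets = np.flatnonzero(three_up) + 3           # days where we bet "down"
    if len(bets) == 0:
        continue                                  # no three-up run, no bets this month
    wins = int((up[bets] == 0).sum())
    monthly_win_rates.append(wins / len(bets))    # how often I was right this month
    monthly_profits.append(2 * wins - len(bets))  # +$1 per win, -$1 per loss

print("average monthly win rate:", np.mean(monthly_win_rates))  # comes out above 0.50
print("average monthly profit:  ", np.mean(monthly_profits))    # comes out around 0
```

The within-month win rate looks better than a coin flip while the money made is nothing, exactly because the win rate doesn't control for how often, and how profitably, the bets were placed.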
The interesting thing is, if you talk to active traders who have been successful, they're not aiming for fifty-fifty. They're aiming for those opportunities where a trade becomes a big winner, and they don't sell too early. So it's not your batting average, it's how far the ball goes when you actually hit it. That's right. Meaning you could have a losing record in terms of percentage of winning trades, but in terms of dollars won and lost, those twenty percent of winners more than make up for the remaining eighty. And I always find a lot of new traders don't understand that. They think they're hitting for percentage, but they're not; they're hitting for distance, to bring in a different sports metaphor. Anything else you want to share about the research, or what you and your co-author might have coming out in the near future? Off the top of my head... actually, there's one thing. So it's not simply that we found that the original analysis was flawed and the
original conclusions were invalid. If you go back and you reanalyze that data, you find that players shoot a lot better after streaks. But it's not simply in that data set. We've gone and collected many other data sets, from studies that replicated their original design and got the same conclusion because they had the same bias the original study had. And when you go back and you fix that, you find evidence everywhere. We have a paper that we're finishing that shows how robust our conclusions are. So I know there are all sorts of interesting awards for mathematical and statistical research. Are you looking at applying for any of these? How does that work? Can you self-nominate? Does the institution have to nominate you? How does
that process work? Have you guys thought about this? It's not something we've thought about. Oh well, let me plant that seed. If this is significant enough, you should apply for either a grant or a mathematical award, although for most of these you have to be nominated by other people. But how hard is it to have your department chair nominate you? That's easy enough. I have to ask you, I didn't ask this earlier: you grew up in California, you went to Santa Barbara. How did you end up in Spain? Oh, so I think you remember two thousand eight, two thousand nine a little bit. That academic job market was an interesting one. I was going on the market in late two thousand eight, two thousand nine, and a lot of academic appointments, announcements, advertisements for positions, were disappearing because of, you know, the crisis. You would think academia, with large endowments and what have you, is somewhat insulated from the vagaries of the stock market and even the broader economy. But apparently not. Yeah. And so my advisor came to me and said, look, this year everyone's applying everywhere, so you need to apply to Europe even if you weren't thinking about it.
So at that time I applied everywhere, and it was great because it opened my mind to the great opportunities that are out there. So I moved to Italy in two thousand nine. That was my first stop. Where were you in Italy? Bocconi University in Milan. There are worse places in the world to ride out a recession. Yeah, no, it was a good time. I can imagine. And then from Milan, how did you end up in Spain? So I wanted to join my co-author and finish our work. Is that where he was located? Yeah, and he still is. So we're both at the University of Alicante, on the lovely Mediterranean sea. There again, that whole Mediterranean coast is just spectacular, isn't it? So you don't miss California too much? I get back a couple of times a year. Quite interesting. All right, before we jump to our favorite questions: I can't believe you guys never thought of saying, hey, maybe we should apply for some of these grants and some of these awards, right? We think of applying for grants, but the award thing, I don't know. Yeah, all right. I always thought academics had to do stuff like that in order to maintain their academic standing. Grants, yes, I apply for money, that's for sure. But the recognition... it's a good idea. Yeah, not a bad idea. Remember me when you give your acceptance speech. Yeah, definitely.
All right, so let's jump into my favorite questions, which I'll modify slightly because I don't know you personally. So let me ask the question: what's the most important thing that your friends and family don't know about you? Friends and family? So you had mentioned this question earlier, and I was going to say, the most important thing we've actually already revealed, which is, it's what most people didn't know, we haven't shared this much: this research might be the first joint eureka moment. Right? Usually someone discovers something alone. When they say simultaneous discovery, someone discovers something at one point in time and somebody else at another, but neither knew about the other. The electric light bulb is a classic example; radio is another. But my co-author and I were in this together, on the phone at the same time, and we both had the, you know,
"the gold is there" moment. And how often does that happen? So, who were some of your mentors in your early career? In my early career, I would say, and my co-author would say the same: my advisor, Aldo Rustichini, who is a professor at the University of Minnesota. He's a neuroscientist, he's a mathematician, he's an economist. He's all about the science, right? I'm thinking a renaissance person. Oh, he's a renaissance person. And when you see someone that's just zeroed in on that, and you see how they work, you kind of absorb what they do through osmosis. And my co-author would say the same thing about his advisor, Vince Crawford, at Oxford University. A very deep guy, very brilliant; both very brilliant. And those are formative years when
you're in grad school, for sure. For sure. So what other behaviorists and statisticians influenced your approach to thinking about the mathiness of things like shooting streaks? So of the statisticians that have influenced me, there is one, right: Andrew Gelman. Reading his blog has been eye-opening for so many people. I mean, he just introduces how to think about data in a way that most people don't get in their formal training, because he's dealing with real, practical examples all the time. So I would say he's been one of the biggest influences. Interesting. What about on the behavioral side? On the behavioral side, there are just
so many great ones, you know. I mean, there was this vanguard, the folks that came in in the eighties, who really had to fight through the review process and all the skepticism: why is psychology relevant to economics? Why these other social science disciplines, what do they have to say about it? People really had to fight a lot of skepticism. So give us some names; I'm putting you on the spot. Okay. So the people that had to fight through that. I mean, Amos Tversky and Daniel Kahneman were very influential, but you know, they were within psychology; as psychologists they were fine. So the people that had to deal with this
kind of pushback. I mean, say, Richard Thaler. As much as he's been a bit skeptical of our work, you have to respect both the insights he has into human behavior and also just what he had to fight through to get listened to. So he was a guest, and my favorite quote from him was: early on he decided he would never convince his peers. So he thought, I'm going to bypass them and just try and convince the grad students, and we'll just wait it out; after enough funerals, we will have won. And it's really turned out to be quite true. If you are influencing the next generation, that's more impactful than what Tversky described: winning all these arguments and convincing nobody.
It turned out to be very clever. Anybody else you want to mention from that group? Oh no, I mean, I wouldn't want to single out any one person. You just look at the people that really did a lot of the fighting, that pushed the ideas through. But in terms of the idea level, there are so many, right? You know, even when we were presenting our work, you have somebody, say, like Colin Camerer at Caltech. I actually just met him at a conference, and he's really a fascinating dude. Oh yes. And there are a lot of similarities that I recognize in him, kind of similar to my advisor, Aldo Rustichini at the University of Minnesota. He's all about the science. I mean, he came to one of our talks and he brought up footnote seventy-two, which was the weakest point, the point
he really wanted to probe, and he found it. He found it. We had a nice discussion about it, and he saw our perspective after we had the talk. It's like, wow, he's really taking this seriously, and it's just nice to see that. That's got to be so delightful. I believe we have him tied up for the spring as a guest. Yeah. I love the work he does with virtual reality, showing people in incredible detail what they're gonna look like when they're older, and how it affects their decision-making dramatically in terms of planning. Not just a computer-generated picture that's been aged, but when you have this immersive VR experience of, here's your life when you're eighty, it leads to all sorts of amazing changes when you're forty. It's quite astonishing. I'm gonna have to look into that. So I'm glad you brought that up. So I'm gonna put down Gelman as one of those people who influenced your approach to
statistics. Let's talk about books. What are some of your favorite books? So I'll pick out a book. You know, we could pick out a lot of nonfiction books, and with books like that, it hits you at the right time; if I were to look at it now, it might feel trivial or obvious. You never know if the book is targeted at the right person. So I'll bring up a book that both my co-author and I were very much influenced by, but it's literature. There's this book called The Alexandria Quartet by Lawrence Durrell. We both read it in our university days, and it's kind of like the blind men and the elephant, but for human relationships. It has this really novel idea: it's four books. The first three books are three different perspectives on a series of relationships and events that happened at a particular time in Egypt before World War Two, told from different people's points of view. And so that's the space, kind of; it was inspired a bit by Einstein's relativity. So there are three different perspectives on space, and then the fourth goes forward in time and reflects back on those relationships. It gives you this kind of humility: you see how small your perspective is, how much missing information you have about what's happening. And it's
a nice read, at least it was when I was in my twenties. That sounds quite fascinating. This question is the one people ask me about more than any other, because they want a book recommendation from somebody who's accomplished something, done something interesting, has some experience. And when someone says, oh, and by the way, this book is worth reading, it's the greatest endorsement anybody can ever get. So I'm going to press you and say: give us one or two more books, even if you think they may have been very time-specific to you. So Duncan Watts has this book called Everything Is Obvious. Beautiful book. It's so interesting. It's all about hindsight bias and how you see things after the fact. You're the first person who's brought that book up, and I love the cover, with the wheel; I think that's a triangle instead of a circle. It's really a very fascinating book. You have this curse of knowledge, right? Once you know something, it's obvious to you, and you can't imagine how not-obvious it would be to someone else. And the references in that book... I mean, he's an academic, so he's really given you the road map: if you want to go beyond that book, all the references
are there in that book. It's great. And then I would say another book along the same lines, only because I read it in the last year: Superforecasting by Philip Tetlock. Another prior guest; delightful. Yeah, I mean, it gives you the humility to realize that if you want to start projecting three years, five years out, you're wasting your time. But there's a lot more to it than that, right? A disciplined approach. Right. So it's not simply... one mistake you can make, a mistake I made, say, around the financial crisis time: I really was convinced Citibank would be bailed out, that they wouldn't let Citibank completely crash. Well, you were not wrong. They did bail it out eventually; it was touch and go at the time, but still, they were bailed out. Had you said the same about Lehman Brothers,
how you end up with super forecasters. That's a fascinating arc over I don't know, third twenty years separating the two books. Yes, and and the thing I really got from that book is not getting fixed at it on your your one insight and putting all your cards on that. You know. So, so you know, these these super forecasters are right in the long run. You know they're using the law of large numbers. It's not that they're saying, oh, I have this one idea, I'm gonna fixate city Bank
has to be bailed out. No, no, take all your ideas and spread your bets across all your ideas, just like the forecasters, and you'll do well eventually. But don't get fixated on that one. And I think that's a nice feature, but quite fascinating any of the books before we move on, I think that's good. Those three, those are three three good ones. Um, so what are you excited about right now? What what do you jazzed about in the world of academic research. Well, and I think
mastery is addictive, right? There's a lot of drive-by research out there, and we're all a little guilty of it because there's this pressure to publish. But when you've gone and really dug into something, you've really mastered it, you've mined as much of the gold as you can, you also have this feeling of mastery. It just feels so great that you want to do it again. So the exciting thing is to take that understanding of how good it feels, how fun it is to master something, and take it to the next subject, while, of course, still finishing what you started. So that's the exciting thing: what's next. Really interesting. There have been all sorts of criticisms of the lack of reproducibility in academic research.
What changes are you looking forward to? Do you think that increased big data and AI are ever going to help us with this reproducibility problem we're running into in academic research and in other research? Why aren't we seeing academic research being replicated, and even corporate research being replicated? I think there's a lot of cherry-picking that happens. So when you go out and analyze something and you measure ten different things, and you just pick out the things that worked, you're not acknowledging what didn't work, and so you have this kind of winner's curse, in a sense. The Winner's Curse, that's Thaler's, one of his early books. Yeah, we don't have time to explain that, I realize.
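To make that winner's-curse point concrete, here is a small simulation sketch. The measure-ten-things setup comes from the conversation; the fifty observations per measure, the zero-effect world, and the five percent cutoff are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_studies, n_measures, n_obs = 20_000, 10, 50
significant, best_effects = 0, []

for _ in range(n_studies):
    # ten measured effects whose true value is exactly zero
    effects = rng.normal(0.0, 1.0, size=(n_measures, n_obs)).mean(axis=1)
    z = effects * np.sqrt(n_obs)       # z-scores under the null
    best = np.argmax(np.abs(z))        # cherry-pick the best-looking measure
    best_effects.append(abs(effects[best]))
    if abs(z[best]) > 1.96:            # call it "significant" at the 5% level
        significant += 1

print("share of studies with a 'significant' pick:", significant / n_studies)  # ~0.40
print("typical size of the reported effect:       ", np.mean(best_effects))    # inflated above 0
```

Roughly forty percent of these all-noise studies still hand you something that looks publishable if you only report the best of ten measures.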
But let me re-ask the question. What are you looking forward to? What changes do you think are going to affect your world, the academic world? So I think the important change that's going to make this better, fix it, maybe not, but make it better, is the idea of preregistration, meaning you pre-register what you're going to analyze. You register your predictions, and so your hands are tied. Therefore you're going to go out and actually test what you're claiming, as opposed to, oh, look at this anomaly, let's talk about that, even though it could be random or cherry-picked or what have you. And even then, whatever your conclusion is, don't take it as truth; we have to make sure it also replicates. Because even then, what if you don't find the effect and decide not to write it up? Right, isn't that a big issue, that people don't publish negative findings? Because there is value in saying, hey, we analyzed this, couldn't find anything. It's a huge issue, because then you get this kind of implicit cherry-picking: I don't want to spend my time writing up this paper because it's not a big finding. Well, then no one's seeing that. So the papers we see are the ones that are implicitly selected, and you have the same kind of degrees of freedom happening, but at the social level. What do you call it, survivorship bias about things that don't work out? So I guess it is just straight-up survivorship bias, right? In other words, what's published is positively selected. The research ideas that don't make it to the publication stage have died, and so the published ones are the ones that are kind of randomly better, but not necessarily truly better. Interesting. Tell us about a time you
failed and what you learned from the experience. So, one failure; it was a success and a failure. A friend of mine from graduate school, Patrick Flannagan, and I set up this garage-band hedge fund, we called it. We were loaning money on the internet, and we thought we had this great idea, and it weathered the crisis. We didn't lose money; we made like five percent. It wasn't big money, maybe a hundred thousand or something; we were students. But this was peer-to-peer lending. The thing we didn't anticipate was the legal uncertainty of the enterprise. We weren't lawyers. We had real confidence in our model, we had our automated bidding algorithm; it was great, we were doing well. But the thing we didn't get is that the SEC could potentially crack down on this and ruin our business model, and then they changed the rules completely in a way that made it impractical. So we just left, because instead of having a direct connection to the person you're loaning to, it was now mediated through the company, and you had to somehow price in the risk of the company itself rather than the loan. And so... interesting. Yeah, quite interesting. What do you do for fun when you're not crunching numbers? Well, I mean, this is pretty fun
right here; I'm in New York. You get to travel around, you get to meet with your co-authors, finish your papers in nice locations, and meet interesting people. I would say just seeing family: because I get to travel so much, I get to see my family and friends in different cities, and that's a great thing. What sort of advice would you give to a millennial or recent college grad who was interested in a career in behavioral finance or statistics or any of the sort of work that you do, economics, whatever? Don't be too impatient to have life figured out. It's not too late. I've seen people in their twenties and their thirties change, go back to school. Maybe they have to start at a lower-ranked school than they wanted to begin with, but you can get funding at those schools, and if you work hard, you can transfer, you can apply to another school, you can move up. A lot of people get this false notion that, oh, if I wasn't a serious student in high school or in college, I'm too far behind. It's like, no: if you're really motivated and you're capable, and there are plenty of people who are, you can catch up. You just have to be patient, take a few years off.
It's never too late to get serious. Yeah, never too late to get serious. And our final question: what do you know about the world of statistics and data analytics today that you wish you knew a decade or so ago? Fake-data simulation. Basically, you can't just go out and analyze data and show that however I analyze it, I still get the same result. No, you have to sit down and generate fake data. What if the world looked like this, how would my analysis behave? What if the world looked like that, how does the analysis behave? You have to do the hard work of building models of the world and then seeing what your analytical approach tells you under those different assumptions about the model of the world. And to do that you need fake data. When you say fake data, I think of that as a counterfactual. How do you think of it? I mean, I guess everything is a counterfactual, because all models are wrong, right? But some are useful. But some are useful. And you want to know, under different assumptions, if the world looked different than you think it looks, is your analysis still gonna say something meaningful or not? You need to actually go out and check that, and a lot of
people don't do that. And that's what happened in this hot hand example, right? I mean, what would this analysis give you if there were no hot hand? Quite interesting.
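That question lends itself to exactly the fake-data check Josh describes. A minimal sketch, assuming a one-hundred-shot sequence and a fifty percent shooter with no hot hand at all, asks what a measure like the one discussed earlier, the hit rate right after three straight hits, reports:

```python
import numpy as np

rng = np.random.default_rng(3)
props = []

for _ in range(20_000):
    shots = rng.integers(0, 2, size=100)   # 100 shots, truly no hot hand (always 50%)
    after = [shots[i + 3] for i in range(97)
             if shots[i] and shots[i + 1] and shots[i + 2]]
    if after:
        props.append(np.mean(after))       # hit rate right after three straight hits

# The true probability is 0.50 on every shot, yet the average of this
# within-sequence proportion comes out noticeably below 0.50 (around 0.46):
# the measure itself is biased, which is what the fake data reveals.
print("average P(hit | three hits in a row):", np.mean(props))
```

A measure that reads well below fifty percent on a shooter with no hot hand will understate a real hot hand by the same amount, which is the reanalysis point in a nutshell.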
We have been speaking to Josh Miller of the University of Alicante. If you enjoyed this conversation, be sure to look up an inch or down an inch on Apple iTunes, where you can see the two hundred and fifty or so previous conversations we've had. We love your comments, feedback, and suggestions; write to us at MIB podcast at Bloomberg dot net. I would be remiss if I did not thank the crack staff that helps put together these conversations each week: Medina Parwana is our producer, Michael Batnick is my head of research, Taylor Riggs is our booker slash producer, Atika Valbrun is our project manager, and Tim Harrow is our audio engineer. I'm Barry Ritholtz. You've been listening to Masters in Business on Bloomberg Radio.