Pushkin from Pushkin Industries. This is Deep Background, the show where we explore the stories behind the stories in the news. I'm Noah Feldman. These days, it seems we live and die by the model. You can't turn on your phone, or open the newspaper or watch the television without being hit in the face by some sort of model or graph or chart that purports to show you how fast the coronavirus is likely to spread, when it will peak, whether it will plateau, how many people will die, you
name it. These graphs and charts and models are constantly changing, and sometimes they are in conflict with one another. So how should we be making sense of all of this? Here to help is Carl Bergstrom. He's a computational biologist at the University of Washington who's got a deep background both in epidemiology and in model building and model analysis.
He's also an expert on the spread of misinformation. With his co-author Jevin D. West, he's written a forthcoming book, Calling Bullshit: The Art of Skepticism in a Data-Driven World. I spoke to Carl on Monday afternoon. Carl, you are, by profession, some combination of a modeler and an explainer of models and a debunker of bad models, and we now live in model world twenty-four seven. So my first question, before I ask you to do all three of those things simultaneously, is how weird has this time
been for you? That's a good question. I think it's hard when you're in the middle of it. You don't even really pause to think about whether it's weird or normal. It just is. The thing that strikes me as the strangest, in a sense, is that I spent many years doing infectious disease epidemiology, and then, by various convoluted paths, ended up over the last few years studying the spread
of misinformation on social networks. And to have those two things come together the way that they're coming together right now has been really striking. Let's start with a model that you've spoken in favor of, in fact strongly in favor of, namely the simple model that produced the expression which has now entered the lingo of flattening the curve. Say a word about why that basic curve with the
flattening was so effective. Well, I think it's the notion of a very simple idea made concrete with a nice picture. And so, you know, the most important thing about that picture was that it was worth a thousand words, and those thousand words were worth thousands of lives, I believe, because they showed people at the time why one argument that was floating around was not a good argument. What had been floating around at the time was, look, people were saying, we're all going to get this, or a large fraction of us are going to get this, and there's nothing we can do about it, so let's get this over with as quick as possible. So there was talk about taking it on the chin, talk about, you know, why would you
prolong the economic disruption and so forth. And one thing that people hadn't been thinking about was the sort of way that if we didn't do anything to control the pandemic,
then we would badly exceed hospital capacity. And so what this one little picture did was stress that you have to worry not only about the area under the epidemic curve, but also about the height of that epidemic curve at any given time, because we have a limited hospital capacity, and for this particular disease, ICU care saves lives. So it just shifted the framework of people's thinking in a very simple way. And I think, you know, people definitely started rallying to this notion of flattening the curve, and people now take that for granted. It's time to start thinking about new models and new pictures now that
this is something we all take for granted. But I think certainly some of the reason we haven't exceeded healthcare capacity any worse than we have is that people kind of took this message to heart and realized that there is an important public health role to be played by trying to slow down the spread of this early on. Carl, I want to ask you a sort of philosophical question about models that's been really very much front of mind for me, and that is, broadly speaking, the relationship between a model that's meant to tell you what you should do in life, a kind of normative model, like the flattening-the-curve model you were talking about, and a descriptive model, a model that's meant to describe the world as it is, or as you think there's some probability of it being. This is a very hard
line to draw. But I think it's a distinction that really matters because we're in this delicate moment now where people are observing that models that were meant to say, hey, if you don't do anything, you'll have results X and Y and Z are now giving way to new models that say, well, we've updated the models in light of the social distancing that we've done. And that's leading some part of the public, including even the somewhat educated public, I think, to say, well, wait a minute, maybe those
initial models were grossly overstated. And it seems to me that part of the distinction here is the difference between a model that is meant to say, take action, and a model that's meant to say, here's the description of the world, updating for what we have done. Could you say a word about that distinction? Yeah, I mean, it's not actually a distinction I had been thinking a lot about.
I mean, I would have thought, you know, more about the fact that the models have a feedback, in that the models influence our behavior, which influences the data that go into the models, as you go through this loop. And so what's happening, of course, is that early models, including the flatten-the-curve picture, just a conceptual model, influenced people's behavior, which then changed the situation that we're in. And so now people can say, oh, well, you know, that was stupid. Why did we flatten the curve? Hospitals haven't been overrun. It's like, well, yeah, hospitals haven't been overrun precisely because we locked down places and flattened the curve, and that's why. You know, except for New York City, essentially, most places are managing pretty reasonably
in terms of hospital capacity. Rather than saying there are models that are normative and models that are descriptive, I would say the models can be used in both of those directions. And so you can come up with a model, like the flatten-the-curve model, that sort of describes two different things that could happen, and then you have to draw your own normative conclusions from that. So, you know, we could have this big fast peak, and we'd be done with it by the middle
of the summer. We'd all be back, you know, those of us who are still alive, to our normal lives. But we would have had to live through this period where the hospitals are massively over capacity. We would have lost friends and relatives and neighbors and so on. Or we can try this other approach, which, we don't even know exactly how we're going to manage to keep this thing down in the long run, but we can at least get ourselves onto that track so that we can
solve the problem. And then if we do that, we're probably going to be looking at a more protracted period of life being different, but we're going to avoid these really catastrophic periods of exceeding health capacity. So I guess that would be sort of my distinction, which is perhaps a little different than the one you're driving at. So if I understand you correctly, what you're saying is the model itself is at least arguably neutral, and then it can be used either to make a descriptive point about how the world is or a normative point. And I guess my pushback on that, if I'm understanding you correctly, would be that, because, as you say, you're modeling either taking into account probable effects on the world or not taking those things into account, it's tricky, maybe not impossible, but tricky, to separate out a model that says, hey, this is the way the world is, take it or leave it, this is the way the world would be if you do nothing, versus, here's a model based on what we think will happen if you do take these steps. Yeah, I think that's right, and I agree with you about that. And I would also
with respect, push back on my own claim that they're absolutely neutral, because models are tools. Models are designed for purposes, and so people are making each of these models for some particular reason. You know, the flatten-the-curve picture was made to try to help people see that it's not just the total number of cases, it's also the timing of those cases, whether they all happen at the same time or not.
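Carl's flatten-the-curve point can be sketched numerically. Here is a minimal toy SIR simulation; the `peak_infected` function, its parameter values, and the two scenarios are illustrative assumptions of mine, not anything from the conversation or from any published model:

```python
# Toy SIR model: lowering the effective R0 (e.g., via distancing)
# lowers the epidemic's peak, which is what hospital capacity cares about.
def peak_infected(r0, gamma=0.1, days=500):
    """Euler-stepped SIR model; returns the peak infected fraction."""
    beta = r0 * gamma          # transmission rate implied by R0 and recovery rate gamma
    s, i = 0.999, 0.001        # susceptible and infected fractions at day 0
    peak = i
    for _ in range(days):
        new_infections = beta * s * i
        recoveries = gamma * i
        s -= new_infections
        i += new_infections - recoveries
        peak = max(peak, i)
    return peak

unmitigated = peak_infected(r0=2.5)  # hypothetical "do nothing" scenario
mitigated = peak_infected(r0=1.3)    # hypothetical distancing scenario
assert mitigated < unmitigated       # same virus, flatter and lower peak
```

The point of the sketch is exactly the one Carl makes: interventions change the height of the curve at any given time, not just the total area under it.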
It had a purpose. I think you can kind of see them that way. When I look at some of the models that people are doing now, for example, there's this IHME model that's being used a lot, that predicts hospital needs by state and also death rates by state and so on. And so that's a model that's been designed to be used precisely for thinking about, you know, what kind of equipment, how many beds do we need, and so on. So, you know, even though you can say, well, it's just a model of what happens, it's still designed to a purpose. And then once that purpose comes in, then, you know, that purpose is usually to make a decision, and so the whole normative aspect of modeling flows in through that channel. I think if we're going to try to reopen the economy before we have the capacity, assuming we ever get the capacity, to do millions and millions of tests, then there are going to have to be, as it were,
many spikes in the curve. We just want those spikes to be below the point where they overwhelm the hospitals. What's a model that sort of pictures that? I mean, is it a kind of sine curve model where the top of the curve is just beneath the number of people who would flood the hospitals? Is that what we're realistically talking about until such time as we have either
a vaccine or very extensive testing? Well, I'm not very optimistic about that approach, because you don't get something like a nice smooth sine curve. What you usually get is a pretty rapid ramp up to being at hospital capacity and a fairly slow drop off, and then a rapid ramp up and a fairly slow drop off, so you actually spend most of your time with social distancing still on, and only these little gaps with social distancing off. That's what came out of the Imperial College model in that scenario. And there have been other plans along these lines, a two-day work week plan that other groups have proposed, and so on, and they have this general form. So that's one problem. The other problem is you're dealing in these cases with exponential growth and imperfect measurement, and so when you try to manage exponential growth to sort of top it out right below hospital capacity, that's a very difficult call to make given the amount of
information you have. And so what's going to happen is you're going to miss, and you're going to miss by a lot, because it's exponential growth. So, you know, there was a nice paper that looked at sort of the optimal management of a pandemic like this, and keeping it under hospital capacity, but the conclusion, informally stated, was essentially, don't try to get cute with epidemic growth. And I think that's kind of a key message that comes out of the modeling there. As soon as you start thinking about the uncertainties that are in place, you recognize that actually keeping it just under capacity is an almost impossible challenge. You've also been making a really powerful point about the likelihood of best-case and worst-case scenarios, which I think is quirky and totally non-obvious to the laypeople, of whom I am a good example. So we often think, well, there are two models, there's a best-case and there's a worst-case scenario, and in life things will probably be somewhere in the middle. That's how we often think. It's a kind of Goldilocks principle as we go through life. And you have been arguing that that's exactly wrong in thinking about an epidemic. Say more for us. Well, epidemics are an interesting kind of dynamical process. There are sort
of attractors in an epidemic. You can have a dynamic where you relatively quickly suppress an epidemic and you go to a relatively small number of cases, or you can have a dynamic where it sort of blows up and races through the population. And whether you have the former case, where you suppress it, or the latter case, where you fail, depends basically on this number, R naught, that everyone's talking about, the basic reproductive number. It's how many cases does an initial infected case generate downstream. And of course that number depends not only on the properties of the virus, but on the amount of social distancing we're doing and so forth. And so as we've gone into this lockdown period and the social distancing, we've tried to knock our R naught down pretty low, and we've got it down around one. But we don't know exactly where it is. And if it's a little bit below one, then each case generates fewer than one additional case, and we get this exponential fall off in the number of cases, and we might end up with, say, about three percent of the US having been infected by the virus after the first wave.
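The knife-edge around one that Carl describes here can be illustrated with a toy branching calculation; this is a sketch under assumed numbers of mine, not fitted to any real outbreak:

```python
# If each case infects r others on average, the expected case count per
# transmission generation follows initial * r**generations:
# exponential decay just below one, exponential growth just above it.
def cases_after(r, generations, initial=1000.0):
    """Expected case count after the given number of transmission generations."""
    cases = initial
    for _ in range(generations):
        cases *= r
    return cases

# A little below one: the outbreak fizzles exponentially.
assert cases_after(r=0.9, generations=30) < 50       # ~42 cases left
# A little above one: the outbreak grows exponentially instead.
assert cases_after(r=1.1, generations=30) > 15000    # ~17,449 cases
```

Tiny differences in r on either side of one produce qualitatively different trajectories, which is why Carl compares it to a ball rolling off one side of a ridge or the other.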
If we're just a little bit above one, then each case generates more than one new case, and it grows and grows, and it grows exponentially, and so it races through the whole population. And if that happens, you're going to end up with somewhere in the range of thirty to seventy percent of the population infected. In the absence of continued intervention, by people changing their behavior so that this R zero, R naught, is adjusted over time, you really end up on one of these two trajectories. It's like rolling a ball along a ridge, or a fence. I mean, the ball may stay on the ridge line for a while, but sooner or later it's going to drop off of one side or the other, just because your trajectory is just slightly in one direction or the other. And so sort of what happens
in the pandemic is a bit like that. What's really interesting is something that people have suggested to me, that we may have this element of what physicists call self-organized criticality, where actually, even though you've got this unstable point where R naught is one, where you're kind of rolling along the ridge line, the pandemic starts to take off and really accelerate, then people get scared and they do more social distancing, and they push it back down. If the pandemic starts to get shut down, then people start to relax and they do less. And so, on one hand, you've got the fundamental dynamics of the pandemic. If everyone's behavior were held constant, we'd either have a great big one or we'd shut it down pretty quickly. But people are modulating their behavior, and they may actually be pushing us along this ridge. Along the ridge, yeah. Or on the ridge, exactly. So that's an interesting sort of second
layer on all of this. I'm glad you raised that issue, because that's exactly what I was going to ask about. I mean, in a dynamic picture where we're always reacting to whatever we see happening out there, you could imagine a government at least trying very hard to get its people to titrate going to work and then coming home from work, going back and forth, and then getting something in the middle. I think, in that context, of the Swedish example. I mean, Sweden is sort of functioning as a kind of experiment, where they're doing a fair amount of social distancing, but in other ways they aren't. It'll be really interesting to see, maybe they will turn out somewhere in the middle. Yeah,
I mean, it's certainly possible. I think even there, you're kind of unlikely to, because not only do you have this system with, you know, exponential growth, but then you've got long delays. There are big delays between when people are infected and when we start to really observe the consequences of those infections. You know, with full-on testing, it would be five days to a week. If we're waiting for people to end up in the hospital, we're looking at more like ten to fifteen days. And so whenever you're trying to do control on a system that has exponential growth, amplification, and major delays in place, that control becomes extremely hard, and you're overshooting, you're fishtailing and overcorrecting, and you're going to end up off the road. We'll be back in just a moment. Can I ask you to put on your misinformation scientist hat?
What are the most egregious examples that you've seen thus far in this pandemic of misinformation with observable real-world consequences? You know, unfortunately, I think some of the most egregious misinformation, in terms of the magnitude of the consequences, has been coming out of the White House. There was the protracted period where there was a serious effort to downplay the magnitude of what was going on, and that delayed the national response in a whole bunch of ways.
And I think it also has the consequence of really hemorrhaging the trust that people need in order to comply over the long term with public health measures, which are more or less our only way of controlling the pandemic
right now. So if you have a story that is changing all the time, where you've got different agencies presenting different versions of the story, you're hearing from some that this is going to go away on its own, that it's on the way down, it's going to disappear in April, or whatever the case is, and then the same government has other agencies telling you this is a serious threat, there's going to be protracted human-to-human spread in the United States, et cetera, people start to not
be sure who they can trust. And when that happens, then it becomes harder and harder to manifest the political will that you need to take unpopular measures like closing schools,
closing non-essential businesses, and the like. So I think, in terms of the measurable consequences, one of the worst things that's happened in terms of misinformation has been this initial attempt to downplay the seriousness of the pandemic in order to, I think, prop up the stock market. You have a book coming out soon with the wonderful title Calling Bullshit. Presumably you were writing the book before the pandemic broke out.
What were the things that at the time you wanted to call bullshit on? And then maybe from there we'll go to how that's relevant in the present moment. So when we wrote the book, we were really thinking about the ways that quantitative information is used to mislead people. So what we thought was very, very important was to teach people that you are not at the mercy of the person who's bringing data and statistical analyses and machine
learning or whatever it is to the table. You don't have to just accept those because you are not a PhD statistician, or you're not a computer scientist, or whatever. The basic way that we think about this is that, with any of these systems, when people are drawing conclusions based on data, they collect a bunch of data. That data then gets put into some kind of machinery, which you can think of as a black box, and out the other end come some results, and then people draw conclusions from the results. And the bullshit is rarely in the black box, we find. The bullshit is rarely an artifact of the technical construction of the model. Of course, there are cases where that is what happens, but that's a small minority of the bullshit that we see out there. Almost all of it is because people have data that are not necessarily representative or appropriate for the question that they're asking. They're comparing apples and oranges, they've got a biased sample, there are all these kinds of things that can go wrong. Or people get results out and then they draw unjustified conclusions from the results. They overgeneralize, they infer causality where they've only got an observational study and only know about correlation, these kinds of things. And so what we wanted to really stress with the book was that you don't have to be sort of held hostage by people that have the numbers, because you can use the basic critical thinking skills that anybody has in order to see through this sort of stuff. You can say, well, are these data appropriate for answering that question? Do these conclusions actually follow from those results? I love the idea that careful critical thinking and analyzing the premises can really help you identify bullshit when it
is being dealt to you. I want to ask you, though, about a variant on that, which I think may be pretty different, that we've been seeing throughout the current pandemic, which is that it's not just that non-experts in fields are calling bullshit on models, they also think that they can build their own models better. And at first I thought to myself, this problem is mostly out there on Medium, or it's, you know, the co-worker in the office. And I actually wrote a column for Bloomberg saying, well, amateur epidemiologists, please just sit down and shut up. This phenomenon seems to be kind of everywhere right now, and I'm wondering if there's, like, is there a cognitive thing going on here? If people think they can find a problem, which is fair on your view, they also think that they can then devise a better model, and at least as far as I can tell, that's not true. They usually cannot devise a better model. Well, I think, you know, on one hand, we do need input from a lot of sources, and so I think that it's really important to recognize that there are going to be good ideas coming from outside of the epidemiology community. At the same time, one wants to be aware of the Dunning-Kruger effect.
You know, the basic idea of the Dunning-Kruger effect is you don't know enough to know you're wrong, essentially, and so you think that you have a very good understanding. And so the Dunning-Kruger effect typically has this sort of, you know, non-monotone distribution of people's confidence. If they don't know much, they're quite confident. If they know some, they aren't confident. And if they know enough, they start to get relatively confident again. And so we're seeing a lot from both of those confident peaks, professional epidemiologists and otherwise. There are some really good ideas coming from people that are not professional epidemiologists. But I think one of the things that's hard is when things are presented as fact instead of suggestion, and particularly in a context of, these stupid epidemiologists, they don't even know about such and such. So this is sort of the
not helpful side of things. I'm personally collaborating with a number of people who are outside of the epidemiology community, not only economists on the economics side, but, you know, for example, I've been consulting with a group of baseball analytics people that wanted to find a small, tractable problem that they could contribute to, because they're really, really good at figuring out things from numbers. And, you know, they called me, and they understood that they needed some background in order to be able to do something useful. They couldn't just sit down and make their own model, and so we spent a lot of time talking about what's sort of an unsolved problem that they could take a crack at. And, you know, I think that kind of thing is really constructive. So I guess, yeah, it's sort of a double-edged sword. Well, if you're helping sabermetrics to save the world, that'll be a further contribution that you're making. Yeah, well, it's kind of fun. You know, epidemiologists are trying to figure out how to save baseball, and these guys are trying to figure out how to do epidemiology. So it's a nice combination. A mutual assistance society in a moment when we very much need that. Yeah. Carl, what am I not asking you that you think I should be asking you? What are salient problems, on any of your dimensions of expertise, that you're observing
that you think people should know about. I think at this point, the really big salient problem is to think about how do we emerge from the situation that we're in now. We all need to do that. We can't wait around for twelve to eighteen months, and so what are the possible solutions. And there's so many different disciplines
that are involved in finding these solutions. You know, how do we need to restructure the economic system so that we can in healthy ways help businesses weather this shutdown?
As you're asking those questions, at the same time you're trying to figure out things that are really technical immunology problems, like how long does immunity last, what fraction of people generate immunity, is it even safe for people who've had the disease to go back to work, et cetera. So somehow we need to find very good ways for us, as an extremely broad community, to sit down and talk across these boundaries, without epidemiologists saying that economists don't know anything about the economy, or vice versa. So that's really important, finding that way to communicate. I think, you know, one of the other things that has been a really pressing challenge, that none of us saw coming, really, was, we've been planning for a crisis like this for a long time in the epidemiology community, and thinking about what it would look like and how we might react, and hoping it would never come. And now we're here, of course, and the thing we weren't really thinking about was the way that all of this was going to be so heavily politicized, that we feel like we're fighting a battle on two fronts. We're fighting a battle against the virus, but then we're fighting a battle against misinformation around the virus that's being promulgated by people up to and including the White House. And so that's a challenge and a threat that we just hadn't been thinking about seriously. And I think that was a missed opportunity on our part, to not be thinking about the information side of this. Do you think that's partly the result of a kind of
unexpressed but mistaken sense of cultural superiority? I know epidemiologists who have worked extensively in sub-Saharan Africa on Ebola, for example, and who were acutely attuned to and have written extensively about how you deal with public misinformation, but also with governmental distortions, you know, regimes that aren't willing to play ball, or that distort facts and circumstances under some conditions. And that was, I think, pretty commonplace in the epidemiological community, in the public health community as well. But there was somehow this unspoken thought that that can't happen here, not that a pandemic can't happen here, but that in the United States or in some Western European country, we would all have Angela Merkels, you know, we would have rational, calm political leadership. And of course that's
totally false, and it was knowably false, I would have said in advance. I think that's a very sharp observation, and I completely agree. At least I fell into that trap, I don't want to speak for everybody. You're completely right that there are people working in other parts of the world that are well aware of these challenges, and I just don't think we were thinking as seriously about it as we should have been. You know, Obama was in some ways, in a lot of ways, close to that Merkel mold, and we weren't really thinking about that. I mean, I remember, going back to the Bush administration, in the discussions with the Bush administration, there were concerns about the degree to which the government should step in and be involved in something like pandemic planning. Why can't it be privatized, and so on. So we had these sort of ideological disagreements about the best way to handle something like pandemic planning. But we very much had the sense that if the thing actually broke out, we'd all be on the same page, and we'd all acknowledge that it was happening, and we'd try to do the best we could to get rid of it, and there wouldn't be this period where we were pretending that it
wasn't happening at all. Carl, thank you for an extremely illuminating conversation and for your work, and please keep calling bullshit on the things that need bullshit called, and clarifying things for us. I really appreciate your time. Thanks. Talking to Carl really brought home to me just how dependent we are on models for making sense of what's going on here. It's not only that they're everywhere, it's that they have the capacity to fundamentally shape our
ideas about what we ought to do. Indeed, the very phrase flatten the curve, which has been a kind of motto for all of us in this early period of the pandemic, is itself language directly taken from modeling, and I can't think of the last time that a modeling term became our guide for how we should be behaving
at the most fundamental level of our ordinary lives. Yet at the same time that Carl shows us the importance of models, he's also very attuned to the idea that models can be deceptive, and indeed that they can lie. And they can lie, he says, if we fail to take into account, using our critical faculties, what's going into them. The problem, he says, is not that the models themselves are fallacious; it's that if the premises are wrong, we can
be led to very, very bad conclusions. Carl is also closely focused on the question that we've been thinking about here at Deep Background, and that all of us are going to continue thinking about going forward, namely, how do we come out from behind our social distancing and slowly and carefully begin the process of reopening the economy? I promise you we'll be talking more about that in the episodes ahead. Until the next time I speak to you, be careful, be safe, and be well. Deep Background is
brought to you by Pushkin Industries. Our producer is Lydia Jean Kott, with research help from Zoe Wynn. Mastering is by Jason Gambrell and Martin Gonzalez. Our showrunner is Sophie McKibben. Our theme music is composed by Luis Guerra. Special thanks to the Pushkin brass: Malcolm Gladwell, Jacob Weisberg, and Mia Lobel. I'm Noah Feldman. I also write a regular column for Bloomberg Opinion, which you can find at Bloomberg dot com slash Feldman. To discover Bloomberg's original slate of podcasts,
go to Bloomberg dot com slash Podcasts. You can follow me on Twitter at Noah R. Feldman. This is Deep Background.