Welcome to the Psychology Podcast, where we give you insights into the mind, brain, behavior and creativity. I'm Doctor Scott Barry Kaufman, and in each episode I have a conversation with a guest who will stimulate your mind and give you a greater understanding of yourself, others, and the world we live in. Hopefully we'll also provide a glimpse into human possibility. Thanks for listening and enjoy the podcast. Today
we have Brian Nosek on the podcast. Brian is co-founder and executive director of the Center for Open Science, which operates the Open Science Framework. The Center for Open Science is enabling open and reproducible research practices worldwide. Brian is also a professor in the Department of Psychology at the University of Virginia. He received his PhD from Yale
University in two thousand and two. He co-founded Project Implicit, a multi-university collaboration for research and education investigating implicit cognition: thoughts and feelings that occur outside of awareness or control. Brian investigates the gap between values and practices, such as when behavior is influenced by factors other than
one's intentions and goals. Research applications of this interest include implicit bias, decision making, attitudes, ideology, morality, innovation, and barriers to change. Nosek applies this interest to improve the alignment between personal and organizational values and practices. In twenty fifteen, he was named one of Nature's Ten and to the Chronicle of Higher Education Influence List. Brian, so glad to chat with you today. Yeah, thanks for having me on.
Quite a lengthy bio there. We can keep going. I've got all kinds of history with my family. Maybe we should leave that out of it. Well, we'll get to that if you want me to psychoanalyze you. But you stay busy, don't you? I mean, do you ever have existential crises? I feel like you don't have enough time to think about yourself. I certainly have a full life, and I enjoy, time to time, looking up and trying to think about what it is we're doing
and why we're trying to do it. But you're right that that doesn't happen as frequently as I would like. What, spending time with yourself doesn't happen as frequently? Yeah, that's right. Yeah, gotcha, gotcha. Well, that's a good thing for the world. I don't know if it's good for you. You probably could use some more downtime. But, you know, I want to do what I like to do with my guests, which is go back. Let's go back. So you got your PhD at Yale in two thousand and two. I got my PhD at
Yale in two thousand and nine. So who did you study with there? Mahzarin Banaji was my primary advisor. Right, so that was when she was at Yale, and then, that's right, she went to Harvard. And did you move to Harvard as well? Yeah. So in my last year of grad school, she relocated to Harvard, and it so happened that my spouse got her clinical internship in Boston at that same time. So I in fact moved before Mahzarin did and helped set up the lab that she was going to be
moving to. And so I spent my last year sort of on a postdoc at Harvard instead, but it was really just my last year of grad school. Gotcha. Are you like officially a co-creator of the implicit
association test? Yeah, Tony Greenwald is the inventor. Okay. And then he and Mahzarin had just started working on it as part of their collaboration, and then I got very involved in it right away and built the website that became Project Implicit. And so the three of us are co-founders of Project Implicit as a nonprofit organization, and then I ran that for the first eleven years
of my faculty career. Oh, wonderful. Yeah. I remember about seven years ago or so, I was teaching cognitive psychology at NYU, and I showed my students every semester a video of the IAT, and you're the one who talks in this video that they show, a younger-looking version of you. No offense. What are you talking about? No offense about age? I hate to break it to you.
Oh yeah, yeah, I don't look in mirrors, just to be safe. But yeah, I mean, that's an irrelevant detail, but it's kind of surreal to now be talking to you, interacting with the real you. I showed that video to my students every semester, and at that time I talked about the IAT and about how, you know, there's still a lot of research that needs to be conducted on
it in the future. I wrote an article for Psychology Today around that time with some research starting to cast some doubt on how strong the correlation is with explicit racism, for instance. And I remember telling my students, you know, we should really look at this with an open mind. Okay, let me ask this question first. Were you ever
a zealot of the IAT? Are you like a reformer now? Or is it more like you always liked the IAT, but you had some of your own criticisms from the beginning, and then you spent many years refining and adding nuance to it? How would you describe the difference between where your state of mind was back then, when you were creating this thing and talking about it in public, and your state of mind now?
How discrepant are those two states of mind? Yeah, well, I hope I've learned a lot from all of the research that we and others have done on it over the years. But in the big picture, I have the same view that I've had since we did the first set of studies to just sort of get a handle on what this tool is at all, and that is
that it's a great tool for research purposes. We've been able to learn a lot about the tool itself, and about human behavior and interaction with the tool, and a lot about the psychology of things that occur with less control and less awareness than just asking people how they feel about topics. So it has been, and continues to be, a very productive research area for trying
to understand better how humans work. And then the main concern that we had at the onset, and this was actually a lot of the discussion in even creating the website, anticipated some of the concerns and overuses that have happened with the IAT in the present. And that is the natural, I don't know if natural is the right word, but the common desire that people have
for simple solutions, and thinking, well, a measure is a direct indicator of something that we care about, and it shouldn't have any error in measurement, and it should be applicable to lots and lots and lots of situations. And there's lots of potential misuse of the IAT despite it being a very productive research tool and education tool. I like the experience of doing it and delivering it to an audience, and the discussion that that provokes about what it is that it means and what does it mean
about me? What does it mean about the world? Those are really productive intellectual discussions and debates. But the risky parts are the over-application of the IAT to selection processes: we should use this for deciding who gets a job or not, we should use this to decide who's on a jury or not. Those are the kinds of real-world applications of the IAT as a measure that go
far beyond its validity. And so this isn't exactly answering your question, because even at the very beginning, when we launched the website, we said it should not be used for these purposes, and I still believe that to be true. But what has changed over time is the refinement of our understanding of the evidence base on some of the major questions. And what's amazing about it is there's been so much research, and we still don't have a great handle on really big questions
relating to the IAT and measures like it. So that just illustrates, for this field, how hard it is to actually make progress on studying human behavior. Oh yeah, and we'll get to your more recent open science initiative and how that relates to that. But let's stay in this. Let's stay in two thousand and two to two thousand and twelve for a second. Sure. This time period of your life. Now, let's talk shop for
a second. So my dissertation at Yale, a couple of years after yours, was looking at the question: is there such a thing as individual differences in implicit cognition? And the idea was to ask this question because, you know, from a trait perspective, I felt
like that was a huge gap in the literature. There's so much research on the reliability and validity of IQ tests, for instance. But I wanted to ask: if we adapt some of these implicit cognition measures from the social psychological and experimental literature for an individual differences paradigm, you know, are there reliable and stable differences? And I have a whole appendix of failed experiments, by the way. You should tell me how to publish that
sometime, but we'll get to that in a second. But so much of my dissertation, and I guess, you know, I'm putting failed in quotes because, I mean... What do you mean? Wasn't that useful? I thought that was useful information. Like, wow, the majority of these implicit... Well, so I also worked on implicit learning, so not just the kind of
implicit measures I'm talking about. I looked at, like, you know, implicit learning tasks, artificial grammar learning, and serial reaction time, and all this stuff. And I found that for virtually all of them, it was almost impossible to capture reliable individual differences that cohered over time. But I did find one that did, and I published that;
it was the serial reaction time task. But anyway, before we completely lose my audience, which is a general audience, I just want to say I'm trying to link this because, I feel like, for me, one of the things that I am most wary about with the IAT is, and this might be more of a
feature than a bug, but it may be capturing, you know, at this given moment in time when the person is taking that test, a lot of what the societal norms and influences are on that person's associations, but not capturing so much an intrinsic, stable individual differences variable. So I just wanted to throw that
out there and see what your current thoughts on that are. Yeah, well, it's clear it is not trait-like in the same way that a measure like the Big Five for personality is trait-like. It does show stability over time, but much more weakly than that. So across a variety of different topics, you might see a test-retest correlation for the IAT, trying to measure the same construct, of around point five. The curiosity for this is, well, I
guess it's a few curiosities. One is, does that mean that the IAT measures some degree of trait variance? It seems to do so, because there is some stability over time. Then what's the rest? Is the rest error? Or is it state variance in some way? Right, variation that is meaningful, variation that is sensitive to the context of measurement. And surely it's some of both, but we don't know how much, and there isn't yet real good insight on where the predictive components of the IAT are
in how it anticipates behavior. Right, if we could separate in a really reliable way the trait part, the state part, and the error part, then we should be able to uniquely predict different kinds of things from the trait and state components. Another twist, which is very interesting and totally understudied in my view, is that the degree to which it's state versus trait-like seems to vary by the topic that
you're investigating. So when you do a Democrat-Republican IAT, where you see to what extent people favor one or the other among US respondents, the correlation with self-report is very strong, and the stability over time is stronger than when you measure Black-White or some of
the other types of topics. So there is also something about the attitude construct itself that you're assessing, not so much the measurement, that is interacting with the measure and may anticipate the extent to which it's trait- or state-like. So these are all interesting things that, if I had time to study them, those would be the problems I would continue to be studying. But I've had to leave that aside. Oh, there's a
million interesting questions. Thank you for bringing that up, relating to the point I was trying to make about cultural influences on the IAT. I mean, it seems to me like it's a testable hypothesis. Okay, for instance, let's say you, you know, you
look at average global IAT differences around the world. Has that study been done, to show that, you know, in more gender-equal societies there's less bias in certain ways? I feel like that's an interesting and open question. Yeah. There are a number, now that we've made the Project Implicit data publicly accessible, there are a number of investigations looking at regional differences.
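As a concrete aside on the test-retest point from a moment ago: a correlation of around point five is what you would expect if roughly half of a score's variance were stable trait and the rest were occasion-specific state plus measurement error. A minimal simulation sketch (the variance split here is purely illustrative, not an estimate from IAT data):

```python
import random

def simulate_test_retest(trait_var=0.5, state_var=0.3, error_var=0.2,
                         n=100_000, seed=0):
    """Simulate two measurement occasions where each score is a stable
    trait plus occasion-specific state plus noise. With independent
    state and error across occasions, the expected test-retest
    correlation is trait_var / (trait_var + state_var + error_var)."""
    rng = random.Random(seed)
    time1, time2 = [], []
    for _ in range(n):
        trait = rng.gauss(0, trait_var ** 0.5)  # shared across occasions
        time1.append(trait + rng.gauss(0, state_var ** 0.5)
                     + rng.gauss(0, error_var ** 0.5))
        time2.append(trait + rng.gauss(0, state_var ** 0.5)
                     + rng.gauss(0, error_var ** 0.5))
    # Pearson correlation between the two occasions
    m1, m2 = sum(time1) / n, sum(time2) / n
    cov = sum((a - m1) * (b - m2) for a, b in zip(time1, time2)) / n
    v1 = sum((a - m1) ** 2 for a in time1) / n
    v2 = sum((b - m2) ** 2 for b in time2) / n
    return cov / (v1 * v2) ** 0.5

print(round(simulate_test_retest(), 2))  # approximately 0.5
```

Separating how much of the non-trait variance is meaningful state versus error is exactly the open measurement problem described above; a single test-retest correlation cannot distinguish them.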
One that we published in two thousand and nine, I think it was, was an examination of the strength of implicit gender stereotypes about science versus humanities, right, the extent to which people associate male with science or male with math. And we looked at that by nation, and by nation,
based on who came to the website, the strength of that stereotype is related to the sex gap in performance among eighth graders in that country. So in places where the stereotype is stronger, the sex gap of men doing better than women, or boys versus girls in this case, in science and math standardized tests is larger. And we can't say anything about the causal directions there, but at least it shows that there is a covariation with things
that are happening at a cultural scale, differences in engagement and performance in these domains, and these implicit stereotypes as they're measured indirectly. That is really, really cool. It was in PNAS, I think, in two thousand and nine. Well, that's really interesting. So, yeah, there are a lot of unanswered questions. In what ways do you think the IAT has been misapplied? I mean, you hinted at one, which
is personnel selection. Yeah. So, in what other ways? Can you riff a little bit about that? I mean, that's really important when you think of the audience you have right now. You have a very large general audience, so this is an opportunity to really, you know, explain a little bit about ways in which you think society maybe has taken this
a little too far beyond the science. Yeah. So the other main thing that I spend a lot of time trying to correct the record on, as it were, in the public discussion is on what is the likely impact of implicit bias training. So this isn't the IAT per se, but certainly the IAT research has led to this sort of broad embrace of wanting to educate and train people
about implicit bias. It's great for education. I'm all for education. But then there's this idea that if you go through an implicit bias training in your organization and learn about these biases, then you will no longer be a biased person, and the evidence is not good for that being an effective method for actually changing behavior. I give education like this, and I think it's useful to educate
organizations and individuals about implicit bias. But what I think the limits are, and this is based on some evidence that we've gathered, is that the training is really just education. It's like learning about a topic and knowing what the state of the science is. And at most it will provide a basis for increasing motivation to do something about addressing unwanted biases within an organizational setting, but the training
itself is not giving skills that will reduce it. This is sort of a very hopeful but not very behaviorally informed approach to trying to address biases, thinking that just teaching someone about the existence of bias will be sufficient to get rid of it. And that, I think, has in some cases been counterproductive to actually addressing some of the real disparities that are occurring in society, because it's an oversimplified view of what really is a
complex and structural problem of how it is that organizations define how hiring gets done, create decision processes, promotion processes, succession planning, all of the different aspects of effective organizational management. Really, the solutions are in how those decision processes are structured, rather than in trying to get people to have better intentions. Yeah, it seems like that's very consistent with a recent study at Wharton by Adam Grant and colleagues.
I don't know if you saw that study that came out recently, trying to assess the effectiveness of, well, diversity training. Okay, maybe it's a little bit different, but it's probably a broader brush. But surely implicit bias is featured in a lot of diversity training these days. Yeah. And you touched on a really interesting point about, like, well, how would you measure the outcome
of this, like, two-day or week-long training thing? I mean, it seems like it would not be a very good thing to then go back to the IAT and look for a difference between IAT pre and IAT post. That doesn't seem like the best sort of outcome you would want. I mean, yeah, you could just start focusing on, let's change the IAT and have that be the end of it. But yeah, of course, if that doesn't actually shift behavior, then what
was the point? Yeah, what was the point? Yeah. So dealing with such a systemic problem is dealing with a system of things, you know, of interacting parts, and it does seem too simplistic to just treat one part of the system as the cure-all. So great, I'm glad you made that point. What about the association here between this kind of research and the literature on microaggressions, for instance?
Tell me if this is too far afield, but it intuitively feels to me like it's related, the same genus of things, where you have this assumption that some sort of implicit or unconscious motivations are going to creep out, I guess, into external behavior, a manifestation in behavior. Yeah, yeah, I think they are conceptually related. I haven't seen a
lot of good research showing a functional relationship. But the notion that things leak out in ways that are unintended or automatic or hard to detect is thematic in the microaggressions discussion. And certainly that is consistent, in a very global sense, with this notion that we have thoughts and feelings that are either outside of our awareness or outside of our control that may be different than our conscious values, what we're trying to do in
the moment. And we can be genuinely saying, I'm not trying to be biased, I am trying to be fair, I'm trying to engage this person as a person, and nevertheless have other factors influencing my behavior. And the detectability of those may differ by the perceiver and by the actor. And that's, I think, a key part in the microaggressions literature. Well, what is the latest state of the science on, not just microaggressions as such, but all
of this genus of things? Because you're at the forefront of this. You know, to what extent do you think we are really making advances in showing that there are these implicit influences on our explicit behavior that operate outside of our value system? Where are we at right now? Yeah, it's a good question. I can't really characterize the microaggressions literature per se. I don't follow that as a distinct literature,
but on the general point, I think the big-picture story is pretty clear with evidence, which is: we do things automatically. We do things that are counter to our interests all the time, and sometimes we recognize that we're doing it, sometimes we don't, but a lot of times it's not controllable. But that's a very big-picture, very global, very nonspecific point. By the way, if that wasn't true, the mindfulness field would not be... people wouldn't be making so much money.
Oh, so you're saying we can only make money if things are true? Well, we've got some other conversations to have here. But it wouldn't have been so hot and popular if people didn't need it. Yeah. So, yeah, many parts of that are sort of obviously true. You're right in the sense that, yeah, we recognize we're doing things counter to our interests all the time. Right, we want to lose weight and we can't. We want
to exercise more and we can't. We recognize that we're not interacting with our loved ones in the way that we want to, and yet we still do it. So it's easy to recognize that there are factors that are outside of our own control, that are inside our own minds, that are influencing our behavior. And that's a very important insight. Where the science does not have a great handle yet is clarifying when that happens, and precisely how that happens, and then what to do about
it when it happens. And there are some places where it's a lot better. For example, the treatment of specific phobias, right? Cognitive behavioral therapies for specific phobias are really effective. Even though that's a problem that people can't easily overcome just by sheer will, there are treatment approaches that address it in a very effective and efficient way. But at scale, across the range of human behaviors that we have, we still have a lot to learn about how to address that.
That's for sure. So give me something that you've discovered, that psychologists have discovered, that is not obvious. Like, you know what I mean? It's like, why are we doing all this research, you know, just to show that the unconscious has influences on behavior? Let's talk about some specific studies. I'll pick out one, because I was going down your Google Scholar page yesterday, new studies, preprints that have
come out. Here's an interesting one: how ideology impairs sound reasoning. And this is research that was led by Anup Gampa. Yeah. And colleagues, and you're on this list. So this seems very relevant to our current political landscape. Do you think the research is suggesting that our ideology creeps into what we think is a pure reasoning process? Yeah, yeah, exactly.
So it's easy for us to recognize that if I am a conservative and you're a liberal, and you hear me make an argument about some issue where there's a conservative-liberal difference, it's easy to understand that you would disagree with my claims, and perhaps even the assumptions underlying my claims. Right, I just don't agree that, you know, this is the way that society should
work or the economy actually works. That's not surprising. What we found in this research is that that disagreement, your dislike of my conservative position on something, even makes it harder for you to recognize the logical reasoning in it. You'll be better at detecting logical errors in arguments you disagree with, and you'll be worse at recognizing sound reasoning in them, regardless of whether you believe the content or not. Right.
So there's a difference between whether the conclusions follow from the premises and whether the premises and the conclusions are themselves true. And ideology is so strong that it impacts even our ability to see whether the arguments have logical coherence to them when we disagree with the claims in them. So that, to me, just shows how deep and difficult it is to actually have productive debates when there are ideological differences. Yeah, it's
a great example. It seems to me like it's becoming increasingly more difficult in this society, or this political landscape, to be ideology-free, even if you want to be. Do you know what I mean? It's almost like you're penalized these days for trying your best to be logical, yeah, and to see things from other perspectives.
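The distinction at work in that study, whether a conclusion follows from the premises versus whether the premises are true, can be made concrete with a tiny brute-force checker. This is purely an illustration of the logic, not anything from the study's materials; it tests two classic categorical forms against every small set model:

```python
from itertools import product

def entails(premises, conclusion, n=3):
    """Return True if the conclusion holds in every small model (sets A and B
    over an n-element universe, plus a designated element x) in which the
    premises hold. A size-3 universe is enough to expose the invalid form."""
    universe = range(n)
    for bits in product([False, True], repeat=2 * n):
        A = {i for i in universe if bits[i]}
        B = {i for i in universe if bits[n + i]}
        for x in universe:
            if premises(A, B, x) and not conclusion(A, B, x):
                return False  # found a countermodel
    return True

# Valid form: "All A are B; x is an A; therefore x is a B."
valid = entails(lambda A, B, x: A <= B and x in A,
                lambda A, B, x: x in B)

# Invalid form: "All A are B; x is a B; therefore x is an A."
invalid = entails(lambda A, B, x: A <= B and x in B,
                  lambda A, B, x: x in A)

print(valid, invalid)  # True False
```

Notice that whether the premises are actually true of the world never enters the computation. The study's point is that for human reasoners, belief in the content leaks into exactly this content-free check.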
It's like, yeah, take a stand, man. Yeah, right. So we even have a paper from a few years ago, Carlee Hawkins was the lead author, that we called Motivated Independence. And what we did was identify people who said that they were independent, right, they do not subscribe to the political left or the political right, and we gave them an IAT, and the IAT measured their implicit preference for
Democrats versus Republicans. And even among people who declare that they are independent, there is wide variation in whether they were pro-Democrat or pro-Republican. Some were close to the middle, some were very pro-left, some were very
pro-right. But that's not where it stopped. What we then did was present them with a paper where two different policies were proposed, and we randomized whether policy A was proposed by the Republican or policy B was proposed by the Republican, with the Democrat proposing the other one. And then we just asked them which policy they preferred, A or B. And it turns out that among these independents, their implicit preference for Democrats versus Republicans predicted
which policy they supported. If they were implicitly pro-left, they tended to select whichever policy it was that the Democrat supported. So these ideologies creep in everywhere, and even when we're trying to be unbiased, as it were, or independent, if we have any of that in our minds, it may yet shape how it is we make our decisions. Wow, that is a truly cool study, by the way. Really neat. A good example of how this is relevant to the
current landscape. But I'm really trying to wrap my head around this idea of an unconscious association being, like you said, pro-Republican or pro-Democrat, as if our unconscious has its own value system that may be separate from our explicit values. So are there implicit values and explicit values? Has the field distinguished between those? That's a good question, and that may imply more richness than what is justified in how it
is these are represented in the mind. When I say pro-Democrat or pro-Republican, all I mean is that what we do with these measures is a relative assessment: how much more easily do you associate goodness with Democrats versus Republicans? And if it's easier for you to put good with
Democrat than good with Republican, then you're pro-Democrat. So it may be at a very base level, a simple associative or affective relationship, or it could be that a lot of this stuff is more rich, like you're describing, and actually has some representation of value systems and things
that we normally would associate with much more deliberate thinking. Yeah, because you could imagine someone living in a society where, because of the people around them, let's say everyone around you is racist, explicitly racist, and you're not explicitly racist, and that is very much against your value system, and the test is picking it up. You take this test, and it just shows it's easier for you to categorize, you know, African Americans as bad,
I guess, or whites as good. Yeah. I mean, you wouldn't want to say that that means that person is pro-white, you know what I mean? Using the same analogy as saying pro-Republican or pro-Democrat. I mean, would you say that about someone just because it's easier for them to categorize something? Yeah. So you're raising a good question, which is, what do we call these things when there are associations in the mind? Usually when we talk about things like beliefs or
attitudes or related concepts, we're thinking in terms of endorsement. Right, this is what that person says they believe, and so they do believe it. And when we're talking about implicit measures, we're not talking about endorsement. People don't have an opportunity to say I agree or disagree with this, and they may genuinely disagree when confronted with those associations. Right, So I have these implicit biases in my own mind. I would reject them, the ones that are different than my
conscious values. I would reject them out of hand, and yet they are still in my mind. We have embraced the notion of calling the association between a racial categories
or political categories and evowness or badness. We've called those implicit attitudes deliberately to try to make the argument that an attitude isn't just what we consciously believe, that the same sorts of behavioral consequences could occur with that kind of association of a group with goodness or badness, even if you don't endorse it, just because it exists in the mind and that is different than their conscious beliefs. But it is still attitudinal in the sense of its consequences.
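Mechanically, the relative assessment Nosek describes comes from response latencies: people tend to respond faster when the pairing of categories matches their associations. A toy sketch of a difference score on hypothetical data (the published IAT scoring algorithm of Greenwald, Nosek, and Banaji, 2003, adds error penalties, latency trimming, and block-wise computation that are omitted here):

```python
from statistics import mean, stdev

def iat_d_like_score(compatible_ms, incompatible_ms):
    """Simplified D-like score: mean latency difference between the
    'incompatible' and 'compatible' pairing blocks, scaled by the
    pooled standard deviation of all trials. Positive values mean the
    compatible pairing was easier (faster responses)."""
    pooled_sd = stdev(compatible_ms + incompatible_ms)
    return (mean(incompatible_ms) - mean(compatible_ms)) / pooled_sd

# Hypothetical reaction times in milliseconds
fast_when_compatible = [600, 620, 640, 610, 630]
slow_when_incompatible = [800, 820, 840, 810, 830]
print(round(iat_d_like_score(fast_when_compatible, slow_when_incompatible), 2))
# a large positive value: the compatible block was much faster
```

The sign and magnitude of a score like this is all that "pro-Democrat" or "pro-white" means at the level of the measure; endorsement never enters the computation.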
And so this is a really interesting part of how we debate and think about terminology, and then what its implications are for how you understand both the concepts and their implications. Oh, for sure. I'm not fully understanding why attitudes is the right word there. Like, even that seems like a subjective call. Why not just call it implicit habits or implicit patterns? Like, implicit associations is calling it what it is, versus attitudes. It seems like
you're imparting some sort of subjective label on what it is. Yeah, well, I think ultimately every label is a subjective label in that sense. So the question is what is the most
psychologically appropriate for what the terms mean. And Mahzarin Banaji and Tony Greenwald, my advisors and collaborators on this work, published a great paper in nineteen ninety five where they made the case for implicit social cognitions being attitudes and stereotypes, with the qualifier implicit to make very clear:
not endorsed. What they did in their development of that argument, in part, was go reference the definitions of attitudes that we had been working with as a field for many years and sort of unpack them, and say, look at these definitions. None of them imply what a lot of our measurement approaches have done to try to assess attitudes. None of them imply, for example, that the person believes it, or endorses it, or says it. They all talk about attitudes in a much more functional way, or whatever each
of the definitions were. So these are consequential. But what I think is important for all of our psychological theories and terminologies is that we imbue them with meaning, and it is important to represent as clearly as possible what that meaning is, and the intention of that meaning, because words have ambiguity and people use them in different ways. Yeah, absolutely. It would be easy for me to say, okay, let's move on now to open science.
But I think this is a great opportunity to talk to you about this, because it's been on my mind a while. Like, you know, the book Blindspot, it's called Blindspot, subtitled Hidden Biases of Good People, and that was written by
both of your advisors, right? Banaji and Greenwald. Now, when you read that, and obviously I'm coming from a place of immense respect and love for them and their research, I mean, do you think they exaggerated things at all in that book? I don't know if I would say that or not. I'd have to go and look at specific claims to decide whether I think the calibration
on the evidence is off the mark. In general, my recollection of reading it, it's been a while now, was that I thought it was a good popular summary of the state of the research literature at that point in time. And I don't think a whole lot has changed in
the research literature in terms of the broad-brush conclusions. Okay, yeah. But, you know, like I said before, there's lots that we're learning, and there have been lots of things that each of us has assumed along the way, of, oh, I bet this is going to happen, where we're all confronted with evidence of, oh no, that's not actually how it works. Mahzarin has this great chapter
from, like, the mid two thousands — I'm forgetting when exactly it was published — but it's a chapter that's all about how her grad students showed she was wrong on different things, and one of the examples is on the malleability of implicit evaluations. She went into the work saying, no, no, no, these things are fixed. They're not going to change at all. They're not going to change easily. This is not going
to happen. And then she had a series of grad students who said, I don't know about that, and did some studies to show — like we were just talking about — that they're not so trait-based, that they are state-influenced. And she was blown away by that: oh, geez, okay, I guess that isn't how it is. And of
course even there the pendulum has swung back some. So we've had more recent studies where we have found that, yeah, they may be variable by the context, but they're actually really hard to durably change with any of the interventions that we've tried. So even in just one area of evidence about implicit biases, we sort of think, okay, they're not just stable, they're variable. Oh, in fact, wait a second, they're not totally variable. They're varying, but
they're not changing. Wait a second — okay, so what do I mean by changing, and what does it mean if they're not changing but they're variable? So as we dig into the problem, it just gets more and more complicated, which, you know, is great in many ways — it keeps us very busy and learning and everything else — but it just shows, like we were talking about before, how hard it is to make real progress on real behaviors, what people do. Yeah,
for sure, absolutely. You know, I'm really trying to think this through in terms of, like, at the unconscious level of associations, won't there always be bias in one direction? Like, what would equality of implicit associations look like? You know, let's say we reverse it and we say, you know, whites bad, Blacks good. Is that the society we want to live in either? You know, I'm just trying —
I'm trying to think, what is equality? You're never going to have an Implicit Association Test where it's like everything is good and nothing is bad. I'm just thinking, like, our implicit associations are always going to be
biased in some direction. Bias is not always bad, right? Yeah, I think that's a critical point, right? Societally, we've sort of attached a pejorative judgment: if you're biased, that means something's wrong with you. And that's really unfortunate, because a lot of these biases come from very ordinary operations of the mind, for one, and are in fact things that
we endorse. Right? The same mental systems that lead me to want to call my doctor rather than my mechanic when my back hurts are the same things that lead me to have biases of kinds that I would disagree with. And that is: you hear things in the world, you see things related to other things, and your brain stores that information, and sometimes it's information that you would agree with and you want to use. And it's still a bias in the very functional sense that it leads
you to do one thing over another thing. But it's not an unwanted bias. It's not even a bias I'd mind — I'd be perfectly happy to say, yes, of course I prefer my doctor to my mechanic when my back hurts, because I have a bias, a belief, grounded in whatever it's grounded in, that I think that person is going to do a better job than that person.
The challenge, of course, is that we don't always get purely accurate information exposed to us, and we don't necessarily agree with the information that we do get exposed to, but nevertheless it gets in our heads and it has the potential to influence our actions. Yeah. Absolutely. I've seen some instances where some people will say things like, oh, it came out I'm biased against males — whew, I'm safe, you know — and
then, you know, everyone will laugh at that. But then if you say you're biased against females, everyone will come — like, you'd get mobbed on Twitter. So I guess the point is, like, I'm a humanist, you know, a humanistic psychologist. I want to think through what it would mean to live in a world where, you know, we didn't say it was great to be biased against anyone — except Nazis. Well, we all can be biased against Nazis.
But do you see where I'm going with this? Do you see what I'm saying? Yeah, totally, yeah. And I think one sort of thing where I resonate with what you're saying is that the goal of the project — if there is a social agenda linked to the scientific agenda of just understanding how these things work — is not very usefully thought of as: let's get rid of bias. Because that's sort of like saying, let's get rid of perception, right? You know, the mind does that, and
it's going to do that. If there is a social justice project, it's: let's make sure that our values are the things that are driving our behaviors, to the extent that we can, right? And when they're misaligned, that's an opportunity to figure out how it is that we can
improve our decision-making process. The identification of those potential biases is one intervention, but it really should be about matching our values with our behavior, rather than saying, well, we just have to be bias-free in some — I don't even know what it would mean, as you were saying, to be bias-free and still functional. Great, perfect. Let's end this topic of the conversation on that note. So let's now move into the
Reproducibility Project. Sure. Now, why did you start that? What year did you start it? What was going on in the culture of psychology at that time that was the impetus for this? So we launched the Reproducibility Project in twenty eleven, and it came up in the context of a discussion that had been going on at a low hum in psychology and other fields for decades, which is: we're not so sure that the research literature is as credible as we assume it is.
Published findings, we presume, have some degree of likelihood of being true, of being credible evidence. But there are things that happen in research practice that may undermine that credibility. Like, for example, positive results — finding a relationship between things, or finding that an intervention works — are more likely to be reported than not finding anything, finding a negative relationship, or finding this thing doesn't work, and so that might be
biasing the literature. People might be making decisions about how they analyze their data and only reporting the ones that make the findings look as publishable or as credible as possible, undermining the actual credibility of the evidence. So there's been this discussion for decades about these things that seem to be happening and might be undermining the credibility of the literature — and we're kind of worried about it — but there hadn't been much change.
But in twenty eleven, a few different things happened that, at least within psychology, seemed to help foment a much broader cultural discussion. One was a major fraud scandal — Diederik Stapel, in the Netherlands. Something like fifty papers ended up getting retracted, based on made-up data. People had seen the papers, but no one recognized for many years that these papers were based on no evidence at all.
People hadn't tried to replicate them in order to evaluate whether they were reproducible evidence. So there was a worry: how is it that we as a field never even recognized this fraudulent evidence getting into the literature?
What's wrong with our practices? The other major event in twenty eleven was that Daryl Bem, who is a prominent social psychologist and has done lots of excellent work for many years, published a paper in the most prestigious social psychology journal showing evidence for ESP. And people were shocked: how is it that this journal, a premier peer-reviewed journal, would publish evidence for something that we're pretty sure isn't true? And the answer was that the
paper was beautiful. It was beautiful in the sense that it followed all of the rules for what one does to get evidence and to publish that evidence in psychology. And so the editors said, well, look, it's just like every other paper that we would end up accepting, because it did all of the things we expect it to do. It just comes to a conclusion that we don't believe. And so the debate that followed was: this followed all the rules. No one's suggesting he did something wrong
or different from what everyone else does. If he followed all the rules, then either we now need to believe in ESP, or we need to question the rules — how it is that we end up doing and reporting on the evidence for our research findings. So that was the global context: these sort of stunning events, along with this long history of concern about the credibility of
the literature. So we started the Reproducibility Project with the goal of saying: boy, everybody's worrying about the credibility of research findings. The normal way that researchers evaluate the credibility of findings is to replicate them. Right? You see a finding, you say, that's interesting, I'm not so sure — or, oh, I want to use that and extend it in some way. I'll run a replication, see if I can use the
same methodology and get a similar result. But people don't tend to do that, because it's not rewarded very much. So we said, let's organize a project with a bunch of us, and we'll try to replicate a sample of findings and see if we can reproduce the results that
are in the literature. And so, yeah, we just started it sort of informally, made an announcement on some discussion group online — social media somewhere — and hundreds of people joined the project. The final paper, which was published in twenty fifteen, had two hundred and seventy co-authors, and another eighty-something people helped in some way just short
of authorship. So it was this massive crowdsourced effort where one hundred replications were done from a sample of papers in leading journals in psychology. And, you know, the short summary is: we were able to successfully replicate less than half — about forty percent — of the findings that we tried to reproduce. And that just spawned, or at least helped foment, what is now a very robust discussion about the challenges for reproducibility in psychology and beyond psychology.
This is now an issue that is prominent across all research domains. I would say less so in my field, which is personality psychology. Oh, you would, would you? All right — those are kind of fighting words to say to Brian Nosek. But there was a study done recently to see whether or not the same sort of replicability crisis is operating in personality psychology, and it was, like, much higher — more than fifty percent of the things are replicating in that field. Yeah, although it was a
very specific subset of findings. So while I do believe — and we even have evidence in our own studies — that for some of the personality findings the rate might be a bit higher than others, it is not a challenge that has been avoided across disciplines writ large, including personality research. For sure. I don't want to be hubristic, I don't, but
I'm stating a datum. You know, they did this — but maybe, we'll see if that study replicates. That's like a meta-replication. Yeah, for sure. Yeah — does it replicate that a lot of the things do replicate? Will yours replicate? Will your Science paper replicate? Will it come out
that it's actually higher than forty percent? Yeah, well, in fact, we are doing a replication of a subset of that study. So, you know, we had those one hundred, and one of the criticisms that was levied — lots of people raised criticisms, but one that I think is a really important limitation — is that maybe the replication teams screwed up. Right? There's a very plausible reason. It's like, well, geez, who said that
we did a good job of it? Right? Maybe the reason we failed to replicate is because we stink at
doing it. And there are ten of the papers — ten of the replications in the Reproducibility Project — where the original authors had raised some concerns prior to running the study. So, you know, they're presumably pretty expert, right? They did the original research, and they had identified something where they said, I'm not so sure about this — ranging from moderate concern to potentially major. But the replication team said, well, we think we can still do a fair replication despite
that concern. Nine of those ten failed to replicate in the Reproducibility Project. So those ten are a perfect test of this hypothesis: if we actually can meet those concerns that they raised — revise the studies to try to do it in the way the original authors said you need to do it in order to get the effect — then we should be able to find evidence of this
impact of expertise in improving replicability. So we're running an experiment right now — actually, the data collection is done; we're getting close to the analysis phase — where those ten studies have now been replicated multiple times, with both the process that we used in the Reproducibility Project that they had raised concerns about, and a revised one that meets the concerns that were raised, to try to maximize the quality of the replication. And we'll see: is this plausible reason for
some failures to replicate an actual reason in these cases? Oh, that's really great. That's really great, Brian. What are some, like, old chestnuts that have not replicated? You know, some things that are taught as staples in introductory psychology textbooks —
you know, just name a couple. Yeah, well, there are areas that have had particular attention for being challenging to replicate, and of course that doesn't mean that they're wrong, but it does mean that we don't yet understand what's needed to make these findings reproducible,
if they are reproducible at all. And two of the biggest ones in the public discussion are, first, ego depletion — the idea that our ego is like a muscle, right? You use it a lot and you get tired, and so once you've been working hard to regulate yourself in some way, then you may be more likely after that to blurt out things that you don't want to say, or to do things that are less effective in self-regulation and management. Right, right. Yeah, well, that's something else that you, as a personality psychologist, should be able to tell us about. So that's one where there's lots and lots of research about it and lots of exciting findings, but attempts to replicate some of those central findings have not been successful, or at least not very successful, and so now there's a lot of debate about what in that literature is credible and what isn't. And that's not resolved.
And then the other one that's gotten a ton of attention is very generally called social priming — which isn't a super informative label — but it's the idea that very subtle cues may influence our behavior in surprising ways. So the classic demonstration is: by having people get exposed incidentally to words
relating to oldness — Florida, cane, other things — they later will do things that make it look like they are older, like walk slower down a hallway when they leave the experiment, compared to if they weren't primed with these words meaning oldness. And so there are a lot of interesting demonstrations of these ways that subtle primes may influence our behavior. And that's been a very hot — both heat and light — area of debate about replicability, with some high-profile failures to replicate.
So the current state of the literature right now is not one where there are clear answers — oh, here, okay, here are all the findings in our literature; these are the ones that are replicable, and these are the ones that aren't. Mostly it is a morass of debates: oh, this one's been challenging to replicate, and people say, well, I think it's because you did this, or I think it's because of this reason, or I think it works
here and not there. And to me, as long as it stays focused on the evidence, that's a very productive kind of discussion to be having. We work on complicated stuff, and so some attention to trying to figure out the conditions under which those core findings actually are observable is very effective and useful for improving our theories, for understanding how it is those things work. Absolutely. What
about the fundamental attribution error — or bias? Is that still fundamental? Well, the fundamental portion of that has actually evolved a lot over time. We changed the label. Yeah. So the term that is common now for a portion of it is the correspondence bias. We don't need to get into the details of why, or the distinction between that and the fundamental attribution error. And there have been interesting debates
on whether it's an error or not, or a bias. Right. Yeah, you know, exactly. So this has come full circle, right? Right — and there is a coming full circle that is unproductive, which is you just keep spinning in the same place, and there is a cyclical nature of scientific advancement that's very productive: revisiting any area of research to sort of check in and say, hang on a second.
You know, we thought about this this way. Let's look at it from this new perspective now that we have new evidence, and see if we can revive an old claim, or see if we can refine the way that we understand it. So, by and large, I find what has been happening within our literature to be very exciting recently, because I think we are questioning assumptions in a quite productive way. Yeah, a lot of people are excited
about it. There are a couple of people who are not excited about it, and those are the ones whose life's work is shown to not replicate. So, I mean, hopefully — I say that in a cheeky way, but having some compassion as well for the scientists. Yeah, yeah, for sure. Scientists, by and large, are human, and we
get attached to our findings. Yeah. Most, yeah — we won't talk about the ones who aren't. A clue is they all have a last name that starts with Z. But the challenge, of course, is that we treat those findings like possessions, right? They become the
basis of our identity in our profession. And so it's very understandable, if we're thinking of these things as possessions, that we would get defensive about them, and that there would be implications for our reputation when they get challenged. And some challenges are fair and some are not fair, and so there is a lot of the personal in these
debates, rather than them just being purely scientific. And if we don't recognize that in how we conduct the debates, then there will be more unfortunate interpersonal consequences than there need to be. There will inevitably be consequences for people's reputations, because science is a reputation-based discipline, and some work is more robust than other work. But you're right, the compassion part of that is a key part of
how one walks into this kind of work. Yeah, you know, and it's so easy just to say, oh, the problem is — well, just don't hitch your identity to it. I actually think there's a deep issue here, a deep existential issue. If you spend thirty years of your career making sacrifices in your life — we have a short life span — and I didn't marry this person, or I didn't take this job, or I didn't have this child,
because I did this work... I guess I just really feel — I almost feel like it's a little bit too dismissive to say, well, we just have to be these Spocks and rise above hitching our identity to our science. No — we have made these sacrifices in our lives for what we're going to commit our lives to studying.
So perhaps, in the way we have these debates, we could come from a place of, like, well, you know, look, hey, this work that you spent thirty years studying added a lot of value, because it showed us better boundary conditions or better ways forward for methodologies. Without you having done that work, we couldn't have used it as a stepping stone. But instead, I see a lot of these discussions devolving in a different direction — it doesn't lead from
that place. It almost ignores, like, the fundamental sacredness of a human existence. There are scientists who will just make fun of things, you know, and mock them, and that's got to hurt, because you really have contributed — even if it didn't all replicate, your life still had value. Do you know what I mean? So it gets deep. But yeah, no, I think you're right that this is much deeper than a very simple, come on, don't be so attached to your findings. If anything,
it's psychologically naive to think that we could do that, right? Come on — we're psychologists; we know that's not going to happen. So let's at least recognize that we are going to feel that, and in whatever position we are in in any given debate, we should be able to at least empathize with the other positions: if my work were being attacked or challenged — or whatever productive or unproductive term you want to use — how would
I respond? And there's a lot that we can understand about how we are tied to our findings and our claims and the work that we've done in the past, and simultaneously we do need to go through that, right? And that's the real challenge. Because, you know, the flip of what you're describing is also equally problematic, which is to say, oh, we don't want to challenge
people because it makes them feel bad. No, no, no — we really have to. Science is about skepticism, and really it is our commitment to what it is we are doing in the first place to make it so that our claims are as robust and evidence-based
as possible. And sometimes that's going to mean — in fact, most of the time that's going to mean — being wrong about pretty serious claims that we've made. And so, at base, I think cultivating an intellectual humility is a real key part for everybody in the game, right? Whether you're criticizing
or whether you are defending a particular position, the intellectual humility of uncertainty — of knowing that I might be wrong but I still think this is a debate worth having — is hard to do, easy to say, but super important. Oh, I love that. I love that point you just made. So it seems to me like there's a shift going on that is healthy in sort of the spirit upon
which we're doing psychological science. I think for too long in the field of psychological science, the spirit in which you did it was sort of like, oh, so I can own this theory — you know, like, it's Zimbardo's model, do you know what I mean? Just picking a name out there for a second — but, you know, maybe subconsciously it works. There you go, there you go. But I think there's a lesson we can learn from this that I'm personally trying to learn too. And it's like — it's like science —
it's fun, though. Like, it's fun if you change your mindset about this to: we're all explorers, we're exploring uncharted territory. The uncharted territory can change, you know. Like, the problem is clearly when you attach your ego to a result or a finding. But psychological science, the field, doesn't have to be that way. It just seems like it's more fun when it's like, oh, let's do this study, and, you know, who knows what we'll find. Well, it is
good sometimes to pre-register predictions — now you can't be totally exploratory — but at least, like, being open to, you know, what's going to happen. Like, we're in this sort of sailboat or something, all together — all together, though, you know. So, yeah — does this make sense? There's sort of a shift to a healthier spirit in the field. Yeah,
I think there is. I think that positive shift is underway, and the way that we've phrased it in some papers is really cultivating, instead of the desire to be right, the desire to get it right. And just that shift would open us up more to the challenge being an opportunity, right? In open-source software development, when another software developer points out a bug in your code and says, here's a fix for your bug, people don't say, you jerk, why are you
pointing out this bug in my code? The reaction is, oh, thank you — now my code works better. That's fantastic. And we could do more of that. But an important part of what makes that work, I think, is that the solution is often part of that discussion. Right? It isn't just, you are wrong — it's, you're wrong, and here's some evidence that we can use to get less wrong. And that can make it much more productive
in terms of a scientific dialogue. Absolutely. So, open science is this broader framework. You're, like, the world's authority on open science — explain to me what open science is supposed to be. So open science is the idea that one of the ways we can be more effective in accelerating discovery, finding solutions, and developing knowledge is by showing the basis of
evidence for the claims that we make. So it's not just enough for me to tell you, here's my paper and this is what I found, but rather I should show you: here's the process that I went through, here are the initial plans that I had and what I was going to do with the study and how I was going to analyze the data, so you can compare that against
what I ended up saying. Here are the data, and here are the materials and methods that I used, so that if you want to interrogate the data with alternative specifications, you could do that, or you could take the materials and run a replication more easily yourself. And also, open science means that anybody who has the wherewithal, the interest, the skills, the time, the resources to contribute to the scientific process should have avenues to contribute, either as a
producer or a consumer of that research. So it has a strong inclusivity aspect — we have thought of the scientific process as isolated to elite groups in elite places doing the science, but we can distribute that a bit better. There are ways to get more people involved, and more avenues to get involved, in scientific research. So both of these parts of open science, I think, are critical for making the science itself more robust and for making it easier to leverage the wide range of
skills and interests and talents that people have for contributing. Well, thank you so much for advancing the field so much in that direction. What is this expression I saw come across — hashtag bropenscience? What is bropenscience? Yeah, bropenscience is a term that's come up on social media. It's referencing how the open science movement, just like any movement, has the potential of going in directions that reinforce some of the status or inequality hierarchies and activities that can occur in
everyday life. And I think in every community and every movement, those risks are an ongoing and present risk, right? And social media in particular can foment the worst in social communication, because it can amplify the more extreme voices, the more hostile voices, and you can lose sight of the fact that most people are just trying to be decent, trying to do a good job, and just trying to get the work done and talk about it, when
the real hostility becomes so amplified. So I think that's partly the origin of that term, and an ongoing part of the discussion is: yeah, we have values for how we're trying to improve science; we also need to be attending to the values of who should be able to contribute, and how they're contributing, to the science. Beautiful. Yeah, I like this tweet of yours. You wrote: Twitter is the perfect communication medium if the goal is to escalate conflict
as rapidly as possible. Yeah. I thought that was great. That was great, right? Yeah, well, it's true, right? Because what can you do? You react, and then suddenly it just gets worse and worse. So again, I want to thank you for coming on the show and for talking to me about all your work, and I hope that we can work together to balance getting it right with being kind to others. So thank you so much for being on the podcast today, Brian. Yeah, my pleasure. Thanks for
doing it. Thanks for listening to the Psychology Podcast. I hope you enjoyed this episode. If you'd like to react in some way to something you heard, I encourage you to join in the discussion at the Psychology podcast dot com. That's the Psychology podcast dot com. Also, please add a rating and review of the Psychology Podcast on iTunes. Thanks for being such a great supporter of the podcast, and tune in next time for more on the mind, brain, behavior, and creativity.