Self Help For Smart People - How You Can Spot Bad Science & Decode Scientific Studies with Dr. Brian Nosek
Jul 05, 2018 • 56 min
In this episode, we show how you can decode scientific studies and spot bad science by digging deep into the tools and skills you need to be an educated consumer of scientific information. Are you tired of seeing seemingly outrageous studies published in the news, only to see the exact opposite published a week later? What makes scientific research useful and valid? How can you, as a non-scientist, read and understand scientific information in a simple and straightforward way that helps you get closer to the truth - and apply those lessons to your life? We discuss this and much more with Dr. Brian Nosek.
Dr. Brian Nosek is the co-founder and Executive Director of the Center for Open Science and a professor of psychology at the University of Virginia. Brian led the Reproducibility Project, in which some 270 of his peers attempted to replicate 100 published psychology studies to see whether the results held up. This work shed light on publication bias in psychology and much more.

- Does the science show that extrasensory perception is real?
- Is there something wrong with the rules of science, or with the way we conduct it?
- What makes academic research publishable is not the same as what makes it accurate
- Publication is the currency of advancement in science: novel, positive, clean results
- What does "null hypothesis significance testing" / a p-value less than .05 even mean? Less than 5% of the time would you observe evidence this extreme if there were no real relationship
- The incentives of scientific publishing often skew, even without conscious intent by scientists, toward publishing only studies that support the researchers' hypotheses and conclusions
- The conclusions of many scientific studies may not be reproducible and may, in fact, be wrong
- How the reasoning challenges and biases of human thinking - such as confirmation bias and outcome bias - skew scientific results and create false conclusions
- The Reproducibility Project in psychology: across a sample of 100 studies, the evidence could be reproduced only 40% of the time, and effect sizes averaged about 50% of the originals
- What the Reproducibility Project spawned was not a conclusion, but a QUESTION
- How do we as lay consumers determine whether something is scientifically valid?
- The basic keys to understanding, reading, and consuming scientific studies as a non-scientist, and how to judge the quality of evidence:
  - Watch out for any DEFINITIVE conclusions
  - Sample size is very important: the larger, the better
  - Aggregation of evidence is better - "hundreds of studies show"
  - Meta-studies / meta-analyses are important and typically more credible
  - Look up the original paper
  - Is doubt expressed in the story or report about the data (how could the evidence be wrong, what needs to be proven next, etc.)?
  - Valid scientific research often isn't newsworthy - reaching valid scientific conclusions takes a lot of time
- It's not just about the OUTCOME of a scientific study - confidence in those outcomes depends on the PROCESS
- Where do we go from here, as both individuals and scientists? How can we do better?
  - Transparency is key
  - Preregistration: commit to a design in advance
  - The powerful tool of preregistration, and how you can use it to improve your own thinking and decision-making
- Homework: deliberately seek out people who disagree with you, and build a "team of rivals"
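The p-value and publication-bias points above can be made concrete with a small simulation. This is a sketch, not material from the episode: the group size (50), effect size (0.2), and number of trials are illustrative assumptions. It shows two things: under a true null, "significant" (p < .05) results appear about 5% of the time by chance alone; and if only significant results get published, the average published effect overstates the true effect.

```python
import random

random.seed(1)

def observed_effect(true_diff, n=50):
    """Difference in sample means between two groups of size n.

    Group A is drawn from N(true_diff, 1), group B from N(0, 1),
    so true_diff is the real underlying effect.
    """
    a = [random.gauss(true_diff, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    return sum(a) / n - sum(b) / n

N = 50
SE = (2 / N) ** 0.5       # standard error of the difference in means
CRIT = 1.96 * SE          # |difference| needed for two-sided p < .05

trials = 10_000

# 1) With NO real effect, "significant" results still occur ~5% of the time.
nulls = [observed_effect(0.0) for _ in range(trials)]
false_pos = sum(abs(d) > CRIT for d in nulls) / trials

# 2) With a small real effect (0.2), keeping ONLY the significant results
#    (a crude model of publication bias) inflates the average reported effect.
effects = [observed_effect(0.2) for _ in range(trials)]
published = [d for d in effects if abs(d) > CRIT]
mean_published = sum(published) / len(published)

print(f"false-positive rate under the null: {false_pos:.3f}")       # ~ 0.05
print(f"true effect: 0.20, mean published effect: {mean_published:.2f}")
```

The second result mirrors the episode's replication finding: if journals filter on significance, replications of published studies should be expected to find smaller effects, because the published estimates were selected partly for being lucky overestimates.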