Given the rash of high-profile journal retractions, revelations of systematic peer-review fraud, and journals publishing deliberately bogus papers (e.g. "Get Me Off Your Fucking Mailing List"), are we experiencing a crisis in science?
An outstanding piece of interactive journalism from FiveThirtyEight concludes that there isn't a crisis; rather, this is just science doing what it should: self-correcting. The scientific method is built on the assumption that assertions of truth come from people prone to corruption and self-delusion, and it mounts a deep, layered defense to root out bad conclusions and discard them.
But it's not that simple. For one thing, the media's tendency to report interesting preliminary findings as once-in-a-century breakthroughs and the scientific establishment's longstanding embarrassment about reporting retractions and wrongdoing have fed into one another, making it progressively harder to issue and publicize retractions, which should be seen as part of the normal course of scientific inquiry.
Author Christie Aschwanden and programmer Ritchie King created an excellent interactive toy that shows how easy it is to make spurious statistical correlations look like certainties confirming the experimenter's biases, and they propose remedies both for this problem and for the broader problem of how science is communicated.
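The mechanism their toy demonstrates is often called p-hacking: test enough arbitrary variable combinations and some will cross the conventional p < 0.05 significance threshold by chance alone. Here's a minimal, self-contained sketch of that effect in Python; the sample sizes, predictor count, seed, and variable names are all illustrative assumptions, not taken from the 538 interactive.

```python
# A minimal p-hacking sketch (not the 538 tool itself): run enough
# tests on pure noise and some will look "significant" by chance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)          # fixed seed, illustrative only
n_observations = 50                     # sample size per variable
n_predictors = 100                      # arbitrary candidate variables

outcome = rng.normal(size=n_observations)   # pure-noise "outcome" series

false_positives = []
for i in range(n_predictors):
    predictor = rng.normal(size=n_observations)  # random, unrelated series
    r, p = stats.pearsonr(predictor, outcome)    # correlation and p-value
    if p < 0.05:                        # the conventional threshold
        false_positives.append((i, r, p))

# With 100 independent tests at alpha = 0.05, roughly 5 spurious
# "findings" are expected even though every correlation is noise.
print(f"{len(false_positives)} of {n_predictors} noise predictors hit p < 0.05:")
for i, r, p in false_positives:
    print(f"  predictor {i:3d}: r = {r:+.2f}, p = {p:.3f}")
```

Reporting only the handful of hits, while the dozens of null results stay in the drawer, is exactly the tilt toward extreme results that Ioannidis describes in the passage below.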
“By default, we’re biased to try and find extreme results,” Ioannidis, the Stanford meta-science researcher, told me. People want to prove something, and a negative result doesn’t satisfy that craving. Ioannidis’s seminal study is just one that has identified ways that scientists consciously or unconsciously tip the scales in favor of the result they’re seeking, but the methodological flaws that he and other researchers have identified explain only how researchers arrive at false results. To get to the bottom of the problem, we have to understand why we’re so prone to holding on to wrong ideas. And that requires examining something more fundamental: the biased ways that the human mind forms beliefs.

Some of these biases are helpful, at least to a point. Take, for instance, naive realism — the idea that whatever belief you hold, you believe it because it’s true. This mindset is almost essential for doing science, quantum mechanics researcher Seth Lloyd of MIT told me. “You have to believe that whatever you’re working on right now is the solution to give you the energy and passion you need to work.” But hypotheses are usually incorrect, and when results overturn a beloved idea, a researcher must learn from the experience and keep, as Lloyd described it, “the hopeful notion that, ‘OK, maybe that idea wasn’t right, but this next one will be.’”
“Science is great, but it’s low-yield,” Fang told me. “Most experiments fail. That doesn’t mean the challenge isn’t worth it, but we can’t expect every dollar to turn a positive result. Most of the things you try don’t work out — that’s just the nature of the process.” Rather than merely avoiding failure, we need to court truth.
Science Isn’t Broken [Christie Aschwanden and Ritchie King/538]