Here’s an issue we don’t talk about enough. Every year, peer-reviewed research journals publish hundreds of thousands of scientific papers. But every year, several hundred of those are retracted — essentially, unpublished. There are a number of reasons retractions happen. Sometimes, the researchers (or another group of scientists) will notice honest mistakes. Sometimes, other people will prove that the paper’s results were totally wrong. And sometimes, scientists misbehave, plagiarizing their own work, plagiarizing others, or engaging in outright fraud. Officially, fraud accounts for only a small proportion of all retractions. But the number of annual retractions is growing, fast. And there’s good reason to think that fraud plays a bigger role in science than we like to think. In fact, a study published a couple of weeks ago found evidence of misconduct in three-quarters of all retracted papers. Meanwhile, previous research has shown that, while only about 0.02% of all papers are retracted, 1–2% of scientists admit to having invented, fudged, or manipulated data at least once in their careers.
The trouble is that dealing with this isn’t as simple as uncovering a shadowy conspiracy or two. That’s not really the kind of misconduct we’re talking about here.
Even when scientists are caught, red-handed, inventing results out of whole cloth, it’s not usually a political or ideological agenda driving the fraud. Instead, this misconduct tends to be caused by biases and motivations that are a lot harder to spot and root out. It’s about the personal and economic pressure to be successful and to publish regularly (and, especially, the pressure to publish something really important). It’s about having spent years on a project and really, really, really not wanting to believe that time was wasted. It’s about doing things by the book, even when you know the book is wrong. It’s about laziness. Or jealousy. Or trying to please a demanding boss. It’s about convincing yourself that you can cheat a little, just this one time, because your particular circumstances justify it.
In other words, the problems with science are problems that exist in every human industry. As in any other job, most people are honest most of the time. Unlike other jobs, however, the culture of science has long operated on the assumption that everybody is honest all of the time — and that we always catch them when they aren’t.
That’s why I’m actually excited to see more people talking about the problems and misconduct that do happen in science.
It means more people within science are acknowledging that those problems are real and are starting to think about how to deal with them. It also means that you, the general public, are more aware of the ways in which science isn’t perfect. That’s important.
I talk a lot here about how hard it is to know what to think about the new studies you read about in the paper. Do they represent absolute Truth? Why are they so often contradicted by other studies? To really know why individual studies matter and what they mean for your life, you need context — context about the specific subject, and context about how science works as a whole. Knowing that scientists are human — and that the scientific process has flaws — is part of that.
And if what you’re thinking right now is, “But this means that I can’t totally trust any individual scientist, and it means that I can’t think the newest research paper actually tells me much on its own, and it means that even honest researchers could be producing bad results, and it means I can’t go around assuming that the facts I know are always going to be right,” well … congratulations. You’re learning.
Some Suggested Reading/Listening:
• The New York Times on the recent research suggesting that fraud and misconduct are bigger issues in science than we like to think.
• Nature on retractions and why there are more retractions today than there used to be.
• Neuroskeptic on bad people breaking good rules, and the even bigger problem of good people following bad rules.
• Ed Yong’s speech on the very human flaws in the scientific process.
• Ivan Oransky’s Retraction Watch blog, where every day is “Time to Give the Scientific Process the Side-eye” Day
• The Half-life of Facts: Why Everything We Know Has an Expiration Date — a new book by Samuel Arbesman, which I am currently reading with deep interest.
• Wrong — a book by David Freedman that everybody should read.
Vintage advertising image via X-Ray Delta One