
How scientists trick themselves (and how they can prevent it)

A smashing editorial in Nature catalogs the many ways in which scientists end up tricking themselves into seeing evidence that isn’t there, resulting in published false positives. Many of these are familiar to people who follow behavioral economics (and readers of Predictably Irrational). But, significantly, the article advocates a series of evidence-supported techniques (some very simple, others a little trickier) to counter them.

The reproducibility problems in science are now understood to be grave, and the scientific establishment is on the lookout for ways to improve the quality of published results. Insisting on some or all of these methods as a condition of publication would significantly advance the field.

One debiasing procedure has a solid history in physics but is little known in other fields: blind data analysis (see page 187). The idea is that researchers who do not know how close they are to desired results will be less likely to find what they are unconsciously looking for.

One way to do this is to write a program that creates alternative data sets by, for example, adding random noise or a hidden offset, moving participants to different experimental groups or hiding demographic categories. Researchers handle the fake data set as usual — cleaning the data, handling outliers, running analyses — while the computer faithfully applies all of their actions to the real data. They might even write up the results. But at no point do the researchers know whether their results are scientific treasures or detritus. Only at the end do they lift the blind and see their true results — after which, any further fiddling with the analysis would be obvious cheating.
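To make the idea concrete, here is a minimal sketch of what such a blinding harness might look like, using the hidden-offset variant described above. Everything here is illustrative: the function names (`blind`, `analysis_pipeline`), the offset size, and the placeholder statistics are assumptions, not the procedure from the Nature article.

```python
import numpy as np

rng = np.random.default_rng()

def blind(real_data):
    """Return a blinded copy of the data plus the secret used to blind it.

    Here the blind is a hidden constant offset added to every measurement;
    the article also mentions added noise, shuffled group assignments, or
    hidden demographic categories as alternatives.
    """
    offset = rng.normal(loc=0.0, scale=5.0)  # never shown to the analysts
    return real_data + offset, offset

def analysis_pipeline(data):
    """Whatever cleaning, outlier handling, and statistics the team settles on.

    This placeholder trims extreme values and reports a mean; the point is
    that the identical pipeline runs on blinded and real data.
    """
    trimmed = data[np.abs(data - np.median(data)) < 3 * np.std(data)]
    return trimmed.mean()

# Hypothetical measurements standing in for a real experiment.
real_data = rng.normal(loc=10.0, scale=2.0, size=200)

blinded_data, secret_offset = blind(real_data)

# The analysts tune and freeze their pipeline while seeing only blinded numbers.
print("blinded estimate:", analysis_pipeline(blinded_data))

# Only after the analysis is frozen is the blind lifted and the same
# pipeline applied to the real data; further fiddling now would be obvious.
print("unblinded estimate:", analysis_pipeline(real_data))
```

The design point is that all judgment calls (cleaning, outlier rules, model choice) are made before anyone sees whether the result is a treasure or detritus.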

How scientists fool themselves – and how they can stop
[Regina Nuzzo/Nature]

(via Mathbabe)

