Our job as scientists is to discover truths about the world. We generate hypotheses, collect data, and examine whether or not the data are consistent with those hypotheses. Although we aspire to always be accurate, errors are inevitable.
Perhaps the most costly error is a false positive, the incorrect rejection of a null hypothesis. First, once they appear in the literature, false positives are particularly persistent. Because null results have many possible causes, failures to replicate previous findings are never conclusive. Furthermore, because it is uncommon for prestigious journals to publish null findings or exact replications, researchers have little incentive to even attempt them. Second, false positives waste resources: They inspire investment in fruitless research programs and can lead to ineffective policy changes. Finally, a field known for publishing false positives risks losing its credibility.
In this article, we show that despite the nominal endorsement of a maximum false-positive rate of 5% (i.e., p ≤ .05), current standards for disclosing details of data collection and analyses make false positives vastly more likely. In fact, it is unacceptably easy to publish “statistically significant” evidence consistent with any hypothesis.
Why does this phenomenon occur?
The culprit is a construct we refer to as researcher degrees of freedom. In the course of collecting and analyzing data, researchers have many decisions to make: Should more data be collected? Should some observations be excluded? Which conditions should be combined and which ones compared? Which control variables should be considered? Should specific measures be combined or transformed or both?
It is rare, and sometimes impractical, for researchers to make all these decisions beforehand. Rather, it is common (and accepted practice) for researchers to explore various analytic alternatives, to search for a combination that yields “statistical significance,” and to then report only what “worked.” The problem, of course, is that the likelihood of at least one (of many) analyses producing a falsely positive finding at the 5% level is necessarily greater than 5%.
This exploratory behavior is not the by-product of malicious intent, but rather the result of two factors: (a) ambiguity in how best to make these decisions and (b) the researcher’s desire to find a statistically significant result. A large literature documents that people are self-serving in their interpretation of ambiguous information and remarkably adept at reaching justifiable conclusions that mesh with their desires (Babcock & Loewenstein, 1997; Dawson, Gilovich, & Regan, 2002; Gilovich, 1983; Hastorf & Cantril, 1954; Kunda, 1990; Zuckerman, 1979). This literature suggests that when we as researchers face ambiguous analytic decisions, we will tend to conclude, with convincing self-justification, that the appropriate decisions are those that result in statistical significance (p ≤ .05).
Ambiguity is rampant in empirical research.
The problem of "false positive science" is of course not limited to the discipline of psychology or even the social sciences. Simmons et al. provide several excellent empirical examples of how ambiguity in the research process leads to false positives and offer some advice for how the research community might begin to deal with the problem.
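To see the arithmetic behind their argument, note that with k independent tests of a true null hypothesis, the chance of at least one p ≤ .05 is 1 − 0.95^k -- about 14% for k = 3 and 23% for k = 5. The short Python sketch below makes the same point for the kind of correlated choices a researcher actually faces. It is my own illustration, not the authors' simulation: the sample sizes, the correlation between the two outcome measures, and the particular degrees of freedom (two measures, their average, and the option to add ten more observations per condition) are arbitrary assumptions.

```python
# A rough illustration (not the authors' simulation): a "researcher" studies a
# true null effect but keeps several analysis options open -- two correlated
# outcome measures, their average, and the choice to add 10 more subjects per
# condition and re-test. Any p <= .05 counts as a publishable "finding".
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def draw(n, r=0.5):
    """n subjects measured on two outcome variables correlated at r."""
    return rng.multivariate_normal([0.0, 0.0], [[1.0, r], [r, 1.0]], size=n)

def one_flexible_study(n_initial=20, n_extra=10):
    """Return True if any of the analyses yields p <= .05 (a false positive)."""
    a, b = draw(n_initial), draw(n_initial)   # condition A, condition B
    for extra in (0, n_extra):                # analyze, then add subjects and re-analyze
        if extra:
            a = np.vstack([a, draw(extra)])
            b = np.vstack([b, draw(extra)])
        comparisons = [(a[:, 0], b[:, 0]),                    # outcome measure 1
                       (a[:, 1], b[:, 1]),                    # outcome measure 2
                       (a.mean(axis=1), b.mean(axis=1))]      # their average
        if min(stats.ttest_ind(x, y).pvalue for x, y in comparisons) <= .05:
            return True                       # report whichever analysis "worked"
    return False

n_sims = 5000
rate = np.mean([one_flexible_study() for _ in range(n_sims)])
print(f"False-positive rate with flexible analysis: {rate:.1%}")  # well above 5%
```

Each individual t-test here is a perfectly valid 5%-level test; it is only the freedom to pick among them, and to continue data collection after peeking at the results, that pushes the overall false-positive rate well above 5%.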
Writing at The Chronicle of Higher Education, Geoffrey Pullum says that a gullible and compliant media makes things worse:
Compounding this problem with psychological science is the pathetic state of science reporting: the problem of how unacceptably easy it is to publish total fictions about science, and falsely claim relevance to real everyday life.
Pullum provides a nice example of the dynamics discussed here in the recent case of the so-called "QWERTY effect," which is also dissected here. On this blog I've occasionally pointed to silly science and silly reporting, as well as good science and good reporting -- which on any given topic is all mixed up together.
When prominent members of the media take on an activist bent, the challenge is further compounded. Of course, members of the media are not alone in their activism through science. The combination of ambiguity, researchers' interest in finding significant results, and research used as a tool of activism makes sorting through the thicket of knowledge a challenge in the best of cases, and sometimes just impossible.
The practical conclusion to draw from Simmons et al. is that much of what we think we know based on conventional statistical studies published in the academic literature stands a good chance of just not being so -- certainly a greater chance than the 5% error rate implied by the conventional threshold for statistical significance. Absent solid research, we simply can't distinguish empirically between false and true positives, meaning that we fall back on other criteria, like political expediency. Knowing what to know turns out to be quite a challenge.