09 May 2012

Sarewitz on Bias in Science

In the current issue of Nature, Dan Sarewitz has a column about the threat posed by bias to scientific research. (The image above is a screenshot of a paper cited by Sarewitz: J. Ioannidis, 2005, "Why Most Published Research Findings Are False.")

Sarewitz explains the systemic findings of bias in clinical trials as follows:
Like a magnetic field that pulls iron filings into alignment, a powerful cultural belief is aligning multiple sources of scientific bias in the same direction. The belief is that progress in science means the continual production of positive findings. All involved benefit from positive results, and from the appearance of progress. Scientists are rewarded both intellectually and professionally, science administrators are empowered and the public desire for a better world is answered. The lack of incentives to report negative results, replicate experiments or recognize inconsistencies, ambiguities and uncertainties is widely appreciated — but the necessary cultural change is incredibly difficult to achieve.
Why are such findings of bias turning up in clinical trials?
A biased scientific result is no different from a useless one. Neither can be turned into a real-world application. So it is not surprising that the cracks in the edifice are showing up first in the biomedical realm, because research results are constantly put to the practical test of improving human health. Nor is it surprising, even if it is painfully ironic, that some of the most troubling research to document these problems has come from industry, precisely because industry’s profits depend on the results of basic biomedical science to help guide drug-development choices.
Is the problem of bias limited to clinical studies?
It would therefore be naive to believe that systematic error is a problem for biomedicine alone. It is likely to be prevalent in any field that seeks to predict the behaviour of complex systems — economics, ecology, environmental science, epidemiology and so on. The cracks will be there, they are just harder to spot because it is harder to test research results through direct technological applications (such as drugs) and straightforward indicators of desired outcomes (such as reduced morbidity and mortality).
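The Ioannidis paper cited above makes the stakes of positive-results bias concrete with simple arithmetic: if only a minority of tested hypotheses are true, and some fraction of would-be negative results are massaged into positives, then most published positive findings can be false. A minimal sketch of that calculation (the function name and the illustrative parameter values are my own, not figures from the paper):

```python
def positive_predictive_value(prior, power, alpha, bias=0.0):
    """Share of published 'positive' findings that are actually true.

    Follows the structure of Ioannidis (2005): `prior` is the fraction
    of tested hypotheses that are genuinely true; `power` is the chance
    a true effect reaches significance; `alpha` is the chance a null
    effect does; `bias` is the fraction of would-be negative results
    reported as positive anyway (selective reporting, p-hacking).
    """
    true_pos = prior * (power + bias * (1 - power))
    false_pos = (1 - prior) * (alpha + bias * (1 - alpha))
    return true_pos / (true_pos + false_pos)

# No bias: with 1 in 10 hypotheses true, 80% power, alpha = 0.05,
# about 64% of published positives are real.
print(round(positive_predictive_value(0.1, 0.8, 0.05), 2))  # 0.64

# Modest bias (20% of negatives reported as positive) pushes the
# share of true positives below half.
print(round(positive_predictive_value(0.1, 0.8, 0.05, bias=0.2), 2))  # 0.28
```

Note that no individual fraud is required for this outcome; the aggregate incentive to publish positives is enough.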
Read the whole thing here.


Stan said...

The bad results are harder to spot in the other areas because no one replicates anyone else's studies. They got spotted by Amgen, Bayer and venture capitalists because those folks tried to replicate the work.

The scientists who produced the studies which were exposed by industry knew or should have known that their work would be replicated. This gave them extra incentive to check and double check. In academic areas where the authors have no reason to expect replication, we should expect that studies are flawed in even greater percentages.

Sean said...

Regarding this statement: "Nor is it surprising, even if it is painfully ironic, that some of the most troubling research to document these problems has come from industry, precisely because industry’s profits depend on the results of basic biomedical science to help guide drug-development choices."
As an applied scientist working in product development, I note that the statement mentions profit (a reward) but does not mention risk, which can take the form of added costs for pursuing a dead-end path or liabilities for unexpected consequences (side effects for drugs) that are not fully understood. Fear in business is a powerful motivational tool that never goes away. Is it possible that academia is heavy on the carrots but light on the sticks, particularly as researchers advance in their careers?

n.n said...

Competing interests are both the problem and the solution. When they are optimally independent, and capable of exerting equivalent leverage, they keep the honest people honest and hold others accountable. Science, like every other human enterprise, needs to follow the market model. We need to remove preference from the authority-and-consensus model, which tends to engender monopolies or monopolistic behavior. In its most fundamental conception, the market consists of individuals competing with each other for finitely accessible resources, both natural and human.

It would also be prudent to disabuse people of the false notion that instant and equal gratification is possible in this world.

This is a cultural failure. It begins at home and is reinforced throughout society.

bernie said...

Private industry has no corner on valid, replicable research. I have worked with many high-tech companies, including pharmaceutical research labs. They also had their share of flawed research, dry holes, flawed products, marketing flops, and serious financial missteps. We are all capable of weak methods, confirmation bias, and self-delusion. Openness, candor, and continuous dialogue are antidotes, but they are part of a dynamic process that has to be continuously monitored, reinforced, and verified.

sykes.1 said...

It is not merely that academic research is heavy on carrots; the bigger problem is that academic researchers are not subject to any kind of review or discipline. Literally anything goes. As a result, perhaps as much as one-third of academic researchers engage in some degree of misconduct. Occasionally it is egregious, as in the famous plagiarism/theft by Robert Gallo, but more commonly it is mere sloppiness in practice.

grey said...

really sykes? up to 33% of academics are engaging in some kind of misconduct? I suppose you have a source for that, right?
