11 February 2010

Unpublished Paper on Problems in Scientific Assessments

Last week I alluded to a paper that I had submitted for publication and which was ultimately rejected. The journal was MTS Journal, and the occasion was that a colleague was putting together a special issue and had asked me to contribute something. Apparently I was not too bothered by the rejection, as at the time I was getting ready to move to Oxford for a sabbatical. In light of recent events regarding the IPCC, the paper now appears a bit more significant than it did back then. Below are links to (a) the original submission, (b) the reviewer comments and my responses, and (c) the revised resubmission, which was rejected for publication.

If nothing else, the review comments indicate how criticisms of the IPCC were received circa late 2006. Comments welcomed.

(a) Original submission (PDF)
(b) Reviewer comments and response (PDF)
(c) Revised submission (PDF)
Pielke Jr., R. A. 2010 (2006). Effective Science Arbitration: Some Lessons from Recent Scientific Assessments. Unpublished manuscript, December 2006.
For those not interested in all the details, here are the three lessons that I draw:
The three cases discussed here were not selected through some random procedure, but happened to be instances in which I observed problems in the assessment process while doing research. Thus it is difficult to assess how widespread the issues discussed here might be in the assessment literature. However broad the problem is, as the IPCC prepares to publish its fourth assessment report, and scientists and policy makers continue to emphasize the importance of assessments, it seems critical to carefully evaluate procedures for accuracy, and for users of assessments to understand the strengths and limits of assessments. . .

Each of the three cases discussed in this paper reinforces the continuing importance of the conventional peer-reviewed literature. . . While assessments can serve as a useful “shortcut” for researchers, particularly in areas outside their direct expertise, it is appropriate for researchers to continue to rely on the original literature in their scientific work, rather than to simply depend on assessments as accurate means to convey scientific findings. Inevitably, assessments must simplify, in the process losing much of the nuance and uncertainty that characterize any complex scientific study. . .

Asking an assessment to distill the potential relevance for action, or at a minimum to specify criteria of policy relevance, would not necessarily require abandoning a focus on positive questions. An assessment built upon questions provided by policymakers would create a close tie between the information demanded by decision makers and that produced in assessments . . . In each of the three cases discussed here, shortfalls in credibility have the potential to threaten the assessment's legitimacy. And both credibility and legitimacy could be enhanced through a more explicit focus on assessment salience, which was lacking in all three instances.