28 September 2012

Thanks to John Holdren

We had a great day yesterday celebrating our Center's 10th anniversary. The day was capped off by a keynote lecture by President Obama's science advisor, John Holdren, to a big crowd at Old Main.

Dr. Holdren is pictured above after the lecture with one of my STS students, Adrianne Kroepsch, who asked him to pose for a photo with a message to some middle school girls Adrianne is going to speak to about science. Dr. Holdren showed good humor and readily agreed.

Thanks much to Dr. Holdren for spending the day in Boulder!

27 September 2012

Is US Economic Growth Over?

Here is how it starts:
Over the past month there has been much discussion of a new paper by Robert Gordon, a prominent economist at Northwestern University, which carried the provocative title: “Is US Economic Growth Over? Faltering Innovation Confronts the Six Headwinds.” (See for instance, Annie Lowrey here, the Economist here and David Keohane here).

In what Tim Harford called “the summer’s most talked about working paper in economics,” Gordon argues that the economic growth of the past century may represent an aberration from the normal state of society, which experiences little economic growth. Look far enough back in time, Gordon says, and the world had minimal, if any, economic growth; looking ahead, we may be returning to that dismal state. Gordon explains that he is raising “the audacious idea that economic growth was a one-time-only event.”

Over the past month I have taken a close look at Gordon’s paper, the data he relies on and the papers that he cites. My conclusion is that Gordon’s analysis is deeply flawed and tells us essentially nothing about the potential for future economic growth. It does help to reveal a big gap in the discipline of economics: the utter lack of an explicit theory of growth and the mechanisms by which it actually takes place. What Gordon has provided, in his own words, is a “provocative fantasy,” one that tells us much about the discipline of economics but little about the state of the world.
To read the rest head on over. Comments welcomed.

24 September 2012

Drought and Climate Change

The amount of nonsense in public debate on extreme events and climate change remains at a high level. This is great news for me because it provides plenty of opportunities to discuss what the actual science says and how we think we know what we know. As I have long argued, accurate representation of the state of the science of extremes matters far more for scientific integrity than for the hyper-politicized debate over climate change. Put another way, it is highly unlikely that misrepresentations of the state of the science will do much to move action on energy policies, but they could damage the integrity of leading institutions of science.

This is a short post about drought, which simply summarizes the bottom-line conclusions of two of the most recent major scientific assessments of extreme events and climate change, one by the US government, released in 2008 under the Bush administration (PDF, and then reaffirmed in the CCSP Unified Synthesis under the Obama Administration, here) and the second from the IPCC.

First, from the US government's assessment of extreme events in the US, here is what it concluded about drought (here in PDF, at p. 42):
The most widespread and severe drought conditions occurred in the 1930s and 1950s (Andreadis et al., 2005). The early 2000s were also characterized by severe droughts in some areas, notably in the western United States. When averaged across the entire United States (Figure 2.6), there is no clear tendency for a trend based on the PDSI. Similarly, long-term trends (1925-2003) of hydrologic droughts based on model derived soil moisture and runoff show that droughts have, for the most part, become shorter, less frequent, and cover a smaller portion of the U. S. over the last century (Andreadis and Lettenmaier, 2006). The main exception is the Southwest and parts of the interior of the West, where increased temperature has led to rising drought trends (Groisman et al., 2004; Andreadis and Lettenmaier, 2006). The trends averaged over all of North America since 1950 (Figure 2.6) are similar to U.S. trends for the same period, indicating no overall trend.
Got that? Over climate time scales, "droughts have, for the most part, become shorter, less frequent, and cover a smaller portion of the U. S. over the last century." At the top of this post is Figure 2.6 from that report, showing drought incidence in the US (red) and North America (blue) since 1900.
The IPCC in 2012 conducted a survey of drought globally, and concluded with the following (here in PDF, at p. 171):
There is not enough evidence at present to suggest high confidence in observed trends in dryness due to lack of direct observations, some geographical inconsistencies in the trends, and some dependencies of inferred trends on the index choice. There is medium confidence that since the 1950s some regions of the world have experienced more intense and longer droughts (e.g., southern Europe, west Africa) but also opposite trends exist in other regions (e.g., central North America, northwestern Australia).
Got that? Some places have become drier, others wetter, and there is not much confidence in asserting the presence of any trends at the global scale.
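For readers curious about the mechanics behind phrases like "no clear tendency for a trend," the basic calculation is a fit of the index against time. Here is a minimal sketch in Python (the numbers are synthetic, not the actual PDSI record, and `ols_slope` is just an illustrative helper):

```python
# Minimal sketch of estimating a linear trend in an index time series
# via an ordinary least-squares slope. Synthetic numbers only.

def ols_slope(years, values):
    """Least-squares slope of values regressed on years."""
    n = len(years)
    ybar = sum(years) / n
    vbar = sum(values) / n
    num = sum((y - ybar) * (v - vbar) for y, v in zip(years, values))
    den = sum((y - ybar) ** 2 for y in years)
    return num / den

# A synthetic series that rises 0.5 units per year recovers that slope:
years = list(range(1900, 1910))
values = [0.5 * (y - 1900) for y in years]
print(ols_slope(years, values))  # 0.5
```

A real analysis would of course also assess statistical significance and sensitivity to the chosen index, start year and end year -- which is exactly why index choice and period of record figure so prominently in the assessment language quoted above.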

Now it is of course true that the recent assessments of the US government and the IPCC are not the last word on these subjects. They represent attempts by governments to have scientists systematically arbitrate questions that can be resolved empirically, such as "has drought increased or decreased?"

Anyone who shows up in public debate (or the scientific literature) with an alternative view (e.g., droughts have become worse in the US or there is more certainty at the global level) had better come with some strong evidence. The need for strong evidence comes not simply from the authority of these assessments, but because they represent a condensation of a large amount of literature. To overturn the conventional wisdom expressed in the assessments requires overturning the arguments found in the literature on which the assessment is based. The same logic goes for people who would challenge other findings from such assessments, such as in making a claim that the globe is not warming.

The findings of assessments are not, however, received truth. As we have seen, those leading and participating in such assessments can err egregiously -- where an error means a failure to accurately reflect in the assessment the underlying scientific literature. Here overturning the assessment is relatively easy, as it does not require overturning the literature, but simply representing it accurately. This is why the IPCC SREX came to different conclusions on extremes than did the AR4.

Unfortunately, there are no shortcuts here to reliable knowledge -- just because the IPCC erred on one subject does not mean that it necessarily erred on others. Each knowledge claim must be evaluated on its merits. Shortfalls in an assessment process may cause you (or me) to lose confidence in the integrity of that process, but they do not eliminate the need to evaluate knowledge claims on their merits using the methods of science.

The good news is that there is a well-developed scientific literature on extreme events and climate change, and it does not always jibe with what you might read in the news or hear from experts. Given the politicization of the climate debate, and the penetration of that politicization into academia, I don't expect this to change anytime soon, if ever.

As always -- Caveat Emptor!

20 September 2012

Big Fun

Clear your schedule. Cancel meetings. Wow, this is good.

Check out the detailed agenda for our 10th anniversary celebration next week on the CU Campus in Old Main:
Cooperative Institute for Research in Environmental Sciences
Center for Science and Technology Policy Research
10th Anniversary Agenda
September 27, 2012
Old Main Auditorium
University of Colorado-Boulder

Hotel Information for out-of-town guests

9:00-10:30 am “Extremes: Nature, Society and Policy” panel discussion


Shali Mohleji (American Meteorological Society Policy Program)
Roger Pielke, Jr. (CSTPR)
Kevin Vranes (E Source)

Moderator: Bill Travis (CSTPR)

10:30-11:00 am Break

11:00-12:30 pm “Public Engagement in Science and Technology: When the Stakes are High and Debates are Lively” panel discussion


Jason Delborne (Colorado School of Mines)
Rebecca Safran (CU)
Suzanne Tegen (NREL)

Moderators: Lisa Dilling and Max Boykoff (CSTPR)

12:30-1:30 pm Lunch on your own

1:30-3:00 pm "Usable Science: The Promise and Peril of Directing Research Toward Practical Outcomes" panel discussion


Susan Avery (WHOI)
Shannon McNeeley (NCAR)
Carl Mitcham (Colorado School of Mines)

Moderator: Elizabeth McNie (Purdue)

3:00-3:30 pm Break

3:30-4:30 pm “Life after CSTPR” panel discussion


Shali Mohleji (American Meteorological Society Policy Program)
Genevieve Maricle (USAID)
Shep Ryen (former Hill staffer)

Moderator: Roger Pielke, Jr. (CSTPR)

7:30-9:00 pm Keynote address: Dr. John P. Holdren, Assistant to the President for Science and Technology, Director of the White House Office of Science and Technology Policy, Co-chair of the President's Council of Advisors on Science and Technology

***All events will take place in Old Main on the University of Colorado-Boulder campus and are free and open to the public.***

18 September 2012

Book Review: The Signal and the Noise

In my column over at the Breakthrough Institute I have a review of Nate Silver's (@fivethirtyeight) new book, The Signal and the Noise. Here is how it starts out:
Nate Silver is a Wunderkind. Not yet 35, Silver has already developed the leading statistical tool for assessing prospective professional baseball players, been consulted by the 2008 Obama presidential campaign to help assess the implications of opinion polls for the election’s outcome, and written a widely read New York Times blog; in 2009 he was named by Time magazine as one of the 100 most influential people in the world. Along the way, and apparently in his spare time, Silver made more than $400,000 playing professional poker (before losing about a third of it and moving on). Silver’s initial foray into professional poker turned a $100 initial stake into $15,000, which was apparently enough to convince Silver to quit his day job, which has been to the benefit of the rest of us. Silver adds to this impressive list of accomplishments with the publication next week of his first book The Signal and the Noise: Why Most Predictions Fail – But Some Don't (Penguin USA).

This generally well-researched and ambitious book covers a lot of ground. It describes Silver’s evolution as a Fox (to use Philip Tetlock’s terminology favored by Silver) who is “tolerant of nuance, uncertainty, complexity, and dissenting opinion.” Silver discusses prediction in sports, geosciences, economics, politics and health. He clearly has done lots of reading in the academic literature. In most of these chapters he has tracked down and spoken to the characters that show up in the book -- a nice touch for a work that combines analysis with a bit of reporting. I’m not one of those characters, but I did engage in a long phone conversation with Silver while he was researching the book to discuss various aspects of prediction. In our discussion, I was impressed by his thoroughness and attention to detail, and have been looking forward to the final product.
To read the rest head on over. Feel free to comment here or there.

13 September 2012

The IPCC Sinks to a New Low

Back in May, Chris Field, co-chair of IPCC Working Group II emailed me with a request:
As per your request, the IPCC is prepared to take another look at the AR4 text on disaster trends. The spirit of these "second looks" is to evaluate whether the assessment should have said something else, based on the literature cited and on the information that was available within the window for AR4 literature.  The error protocol does not allow a new assessment based on literature published since the AR4 literature cutoff, and it is not intended as a broad reinterpretation of the information assessed by the authors.

To clarify your request, can you send a specific statement of the alleged error or errors that you would like to see addressed?
As my request to Field was informal, and not one I ever expected to see action on, this initiation of contact seemed to me a sign that the IPCC was turning a corner and taking seriously scientific accuracy on disasters and climate change. So I prepared a concise and specific reply to Field's request. Today I heard back from the IPCC. The response is laughable, and indicates that the IPCC is more interested in playing games than in scientific accuracy. Nothing below is complicated or nuanced.

Here are the details from the response that the IPCC sent to me today, annotated with my comments. In the material below, the four passages under "Text from Roger Pielke, Jr." are those which I provided to Chris Field in May in response to his email request. Under each of those I have blockquoted the IPCC response to my claims, which is titled "CLA Finding." Below that I highlight my comments today in response to each of the four responses.

With that, let's have a look . . .

Alleged errors in the treatment of disaster trends in Chapter 1, WGII, AR4
CLA response from Cynthia Rosenzweig and Gino Casassa
August 23, 2012

Alleged Error #1

Text from Roger Pielke, Jr.

Error #1: IPCC p. 110: “These previous national U.S. assessments, as well as those for normalised Cuban hurricane losses (Pielke et al., 2003), did not show any significant upward trend in losses over time, but this was before the remarkable hurricane losses of 2004 and 2005.”

FACTUALLY INCORRECT: Figure 5 in the following paper, in press prior to the IPCC AR4 WGII publication deadline, clearly shows that the addition of 2004 and 2005 losses does not alter the long-term trend in hurricane losses:

Pielke, Jr., R. A. (2006), Disasters, Death, and Destruction: Making Sense of Recent Calamities. Oceanography 19:138-147.

This same information was also in the report of the 2006 Hohenkammer Workshop on Climate Change and Disaster Losses, which was cited by the AR4 WGII: http://cstpr.colorado.edu/sparc/research/projects/extreme_events/munich_workshop/pielke.pdf

RECOMMENDED CORRECTION: “These previous national U.S. assessments, as well as those for normalised Cuban hurricane losses (Pielke et al., 2003), did not show any significant upward trend in losses over time, and this remains the case following the remarkable hurricane losses of 2004 and 2005.”
CLA Finding

There is no error in the statement. No correction is needed and the text can stand as is.

The clause about the published analyses being before the 2004 and 2005 hurricane seasons is a statement of fact about the time line, and it is not a statement that the results were different after including 2004 and 2005. The statement does not infer that the overall pattern of losses would be different; instead it suggests that 2004 and 2005 were remarkable years in terms of hurricane losses, which they were.
PIELKE RESPONSE SEPTEMBER 13: This boggles the mind. The time line was such that published analyses (I provided two!) that were available to the IPCC when drafting the AR4 included 2004 and 2005. The IPCC is saying that up is down, and with a straight face. Did they not even read what I wrote?

UPDATE: Here is how the IPCC handled this issue in the 2007 AR4 review process.

Alleged Error #2

Text from Roger Pielke, Jr.

Error #2: IPCC pp. 110-111: “Global losses reveal rapidly rising costs due to extreme weather-related events since the 1970s. One study has found that while the dominant signal remains that of the significant increases in the values of exposure at risk, once losses are normalised for exposure, there still remains an underlying rising trend.”

That “one study” is Muir-Wood et al. (2006), a white paper prepared as input to a workshop that I organized.

FACTUALLY INCORRECT: (a) The first sentence should say “1950s” not “1970s,” which is the starting point of the Munich Re dataset being referred to. (b) Several normalization studies (not “one study”) available at the time of the AR4 had noted that a dataset that begins in 1970 and ends in 2005 will show an annual rate of increase, including papers parallel to the Muir-Wood et al. (2006) presented at the Hohenkammer workshop as well as the IPCC TAR (2001) and Munich Re (2000). All such studies find no evidence of “an underlying rising trend” over longer time periods.

RECOMMENDED CORRECTION: “Global losses reveal rapidly rising costs due to extreme weather-related events since at least the 1950s. Multiple analyses have found an increase in normalized losses since 1970, due entirely to US hurricanes, but such studies are in agreement that no such trend can be found over longer time periods (dating to 1950 globally and 1900 for US hurricanes).” In addition, the full text of this section should be clarified along these lines.
CLA Finding

There is no error in the statement. No correction is needed and the text can stand as is.

The year 1970 was used in the analysis of Muir-Wood 2006. They used a data set with information going back to 1950, but they decided to omit the period prior to 1970 because they are limited in completeness. The “correction” proposed by Pielke acknowledges that analyses for the period starting in 1970 observe an increasing trend, which is all that is reported in the text. The qualifier about “one study” documenting a trend indicates the limited foundation for the conclusion, in the context of other studies showing the large importance of trends in exposure.
PIELKE RESPONSE SEPTEMBER 13: Again, the IPCC is saying that up is down. Multiple studies showed disaster losses increasing from 1970 (I pointed them to several, including the IPCC TAR). The reason for this was simple -- low hurricane damages in the 1970s and 1980s. Not new or interesting. To highlight one study, and now to suggest that this was done to indicate that the conclusion was tenuous, makes the IPCC look utterly clueless. The IPCC cherry-picked a date to make a suggestive claim, and ignored contrary data. Not good. The cover-up is worse than the crime.
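For readers unfamiliar with the term, "normalization" simply means adjusting a past year's nominal disaster loss to a base year's societal conditions, so that trends reflect climate rather than growth in exposure. Here is a minimal sketch in Python (the function and the numbers are purely illustrative, not drawn from any of the studies discussed):

```python
# Illustrative sketch of disaster-loss "normalization": a past year's
# nominal loss is scaled by ratios of base-year to event-year price
# level, wealth per capita, and exposed population. Hypothetical
# factors and values only.

def normalize_loss(nominal_loss, inflation_ratio, wealth_ratio, pop_ratio):
    """Scale a nominal loss to base-year conditions using ratios of
    base-year to event-year price level, wealth per capita, and
    exposed population."""
    return nominal_loss * inflation_ratio * wealth_ratio * pop_ratio

# A $1B loss in a year when prices, wealth per capita, and population
# were each half their base-year values normalizes to $8B:
print(normalize_loss(1.0e9, 2.0, 2.0, 2.0))  # 8000000000.0
```

The point of the exercise is that the choice of start year can dominate the result: a record beginning in a quiet period (such as the 1970s for US hurricanes) will show an upward trend that disappears over the longer record.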

Alleged Error #3

Text from Roger Pielke, Jr.

Error #3: IPCC p. SM.1-4: Figure SM-1.1 and caption: “An example from the literature of one study analysing rising costs of normalised weather-related catastrophes compared with global temperatures. Data smoothed over ±4 years = 9 years until 2001 (Muir Wood et al., 2006).”

FACTUALLY INCORRECT: Neither the figure nor the underlying data appear in the scientific literature (peer reviewed or grey) at any time. The figure was created by Robert Muir-Wood and included in the report with an intentional mis-citation to circumvent the IPCC publication deadline, according to Muir-Wood himself (an audio recording of his admission is available on the website of the Royal Institution in London, and linked from my blog here:
http://rogerpielkejr.blogspot.com/2010/02/ipcc-mystery-graph-solved.html ).
He apparently believed that the figure would appear in a future paper – it did not, and that future paper, eventually published in 2008, found no relationship between temperatures and disaster losses.

RECOMMENDED CORRECTION: The figure and caption should be removed, as well as the reference to it in the text.
CLA Finding

Post erratum to the caption of figure SM-1.1 clarifying the source of the data in the figure.


The figure is a replotting of the data used for the analysis of Muir Wood et al. (2006). These data combined temperature data from CRU (Climatic Research Unit 2006),(University of East Anglia, Norwich) with the disaster loss database compiled by and described in Muir Wood et al. (2006). Since Muir Wood et al. (2006) did not cite CRU, the figure caption would be more accurate if modified to read:
Figure SM-1.1 Costs over time of normalized weather-related catastrophes compared with global temperatures. Data smoothed over ±4 years = 9 years until 2001. Based on the dataset used in Muir Wood et al.(2006) and temperature data from CRU, 2006*.
(Climatic Research Unit (CRU) (2006).University of East Anglia, Norwich.
www.cru.uea.ac.uk/cru/data/temperature/#sciref .)
Pielke Response September 13: This one takes the cake. Utterly remarkable. The graph was conjured up by Robert Muir-Wood based on data in his possession, he intentionally miscited the analysis, and there is no scientific basis for plotting damages against temperature. None. The IPCC's response on this issue -- that their falsified graph was not fully cited but was otherwise OK -- is an insult to scientists everywhere and a mockery of the IPCC process. I cannot express this strongly enough. The IPCC has demonstrated that it is utterly incapable of correcting even the most egregious violation of its standards.

Alleged Error #4

Text from Roger Pielke, Jr.

Error #4: Erroneous IPCC Press release of 25 January 2010 (a) “one study detected an increase in economic losses, corrected for values at risk, but that other studies have not detected such a trend,” and (b) “In writing, reviewing, and editing this section, IPCC procedures were carefully followed to produce the policy-relevant assessment that is the IPCC mandate.”

FACTUALLY INCORRECT: (a) as documented above in Error #2, multiple studies had identified an increase in economic losses since 1970 (but not from earlier starting points), and (b) as documented above in Error #3, IPCC procedures were not carefully followed, but violated.

RECOMMENDED CORRECTION: The IPCC should withdraw its 25 January 2010 press release and issue a press release noting the inaccuracies in both the report and the release.
CLA Finding and Rationale

The January 25, 2010 IPCC statement is not part of an IPCC report, and the error correction protocol is therefore not relevant
PIELKE RESPONSE SEPTEMBER 13: Not that the IPCC has shown a commitment to accuracy, but here it relies on a bureaucratic dodge to ignore the false information it put out via press release. Not good.

Let Them Eat Refrigerated Foods

Note: This is a guest post by Jack Stilgoe, a lecturer in the department of Science and Technology Studies at University College London, where he teaches science policy. Between 2008 and 2011 he was a senior policy adviser at the Royal Society. His blog is jackstilgoe.wordpress.com. He is on twitter @jackstilgoe

I used to work there, so I should watch my tongue. But I can’t help drawing attention to this press release from the Royal Society. The Royal Society is the UK’s national academy of sciences. It is the world’s longest-running national academy and remains one of our most esteemed scientific institutions. There is much that is amazing about the place and much that is utterly bizarre.

Following recent pronouncements on policy issues ranging from climate change, human population, geoengineering and food security to the value of science, the Society has seen fit to pronounce that refrigeration is the best innovation in the history of food.

The list that the committee of fellows (“including a Nobel Prize Winner”!) came up with seems rather odd. It mixes innovations and inventions in a rather slapdash way. One can argue about the relative positioning of, say, the microwave and plough, or the absence of the Spork. But that’s not the point.

When I was working in the Royal Society’s policy outfit, we were acutely aware of the attention (some of it deserved) that the institution’s pronouncements would attract. We would occasionally smirk at the efforts of learned societies of one sort or another to draw attention to themselves. The Royal Society of Chemistry in particular was fond of stunts such as honouring dead, fictional detectives, identifying the nation’s cheapest lunch or inventing topically-convenient molecules. Those last two stories come from the Daily Mail, a mid-market right-wing British tabloid that would conventionally be uninterested in the musings of our scientific elite. And the Mail have swallowed the Royal Society’s latest offering as well. The press department will today be congratulating themselves on having spread the word to a normally disengaged audience.

So why does this sort of thing matter? Surely it’s fair game for any institution to draw attention to itself from time to time, even if the reason is to nakedly market its own events facilities? My argument, and the reason I agreed to write in public about this, is that it matters a very great deal when organisations that make claims to some sort of authority use that authority in opaque ways for shady purposes. When people or institutions speak in the name of science when there is no scientific foundation, we should be wary. PR companies are well aware that a scientific-looking conclusion or equation can grab the attention of lazy journalists in the less-discerning news organs.

The Royal Society does not have a democratic mandate. Its legitimacy (threatened as it is by these sorts of stunts) derives from claiming to represent the best that science has to offer. So when the Society states that a committee of its fellows (“including a Nobel Prize Winner”!) has decreed something, it expects the world’s attention. This sort of press release gives the impression that, like the Wizard of Oz, while there is a big booming voice, there may not be much going on behind the curtain.

The Power of Scientific Knowledge: A Guest Post

NOTE: This is a guest post by Reiner Grundmann and Nico Stehr on their new book, The Power of Scientific Knowledge (Cambridge University Press, 2012).

In this book we examine the relation between knowledge and decision making -- more precisely, the practical effectiveness of scientific knowledge in political contexts. We do this through the study of three cases: Keynesian economics, race science, and climate science. In each case, policy-relevant knowledge has been taken up by policy makers to a greater or lesser degree.

We use the conceptual distinction between knowledge for practice (politically relevant knowledge) and practical knowledge (knowledge which can make a difference in practice). This simple distinction allows us to emphasize several points which all too often get neglected in studies of a similar kind. First, some knowledge (perhaps much knowledge) is not intended to be practical, or does not lend itself to practical applications. Second, knowledge that has practical implications is not always practical, because it does not recognize, let alone specify, the levers for action. Practical knowledge is knowledge that contains realistic assumptions about its own implementation within specific socio-political contexts. And third, whenever knowledge is produced which has practical implications, the knowledge producers are drawn into the policy making process. If they fail to specify the tools for implementation, their knowledge will be ineffective.

These are necessary, not sufficient conditions for knowledge to become practical. Policy proposals that are based on such knowledge still need to find support by political coalitions, stakeholders or social movements. We show in the book that the cultural and political resonance of knowledge claims and ideas matters very much.

Our findings in the book may surprise. Keynes, who developed his theory while advising the British government, made several practical policy suggestions but did not prevail at the time. His policy was taken up later in different countries (arguably through processes of diffusion and changing perceptions). He famously stated that what matters for practical knowledge is not a theory which mimics complex reality but a theory which contains those elements which can be manipulated by decision makers.

Race science developed tools for the classification of races and individuals who were then identified for specific "treatment." This knowledge was keenly taken up by several governments. In Germany these policies were taken to the extreme with the extermination policies in the Holocaust. In this sense race science was extremely powerful in practice. Due to this history, it is quite common for commentators to deny the scientific status that eugenics and race science held in their day. In our view this is little more than wishful thinking.

Climate science has made the case for anthropogenic global warming and spent much energy on providing proof in this respect. The debate has been immensely politicized but with little practical effect. GHG emissions are not falling in line with the scientific recommendations. In this sense, climate science has proven ineffective for policy making. It failed to identify the levers for action which could make a difference in practice.

There is an eerie similarity between race science and climate science in that both see their services as essential for solving pressing social problems.

The use of science by the Nazi regime has been described as follows:
‘The Nazis took major problems of the day—problems of race, gender, crime, and poverty—and transformed them into medical or biological problems. Nazi philosophers argued that Germany was teetering on the brink of racial collapse, and that racial hygiene was needed to save Germany from ‘racial suicide.’ Racial hygiene thus combined a philosophy of biological determinism with a belief that science might provide a technical fix for social problems. Harnessed to a political party mandated to root out all forms of racial, social, or spiritual ‘disease,’ the ideology of biological determinism helped drive the kinds of programs for which the Nazis have become infamous.’ (Robert Proctor, Racial Hygiene).
This could serve as a cautionary tale when considering other, more benign cases of public policy, such as climate change. Here, too, some climate scientists and their supporters have defended specific courses of action with reference to scientific truth, disregarding the fact that it is largely a public choice to identify and implement sound and just policies.

Let us quote from our book here:
What follows for climate policy? Let us tackle this question indirectly, starting with the conclusion just reached, and asking: Can policy-makers appeal to a body of scientific knowledge and authority? And what non-scientific principles could be used to reach sensible policies? While there is a robust consensus among scientific authorities regarding detection (and perhaps also attribution) of anthropogenic global warming, this does not tell us what to do. To be sure, some scientific authorities demand cutting CO2 emissions radically within the next decades. But this may be an ‘impractical’ strategy, so to speak. In the short term, levers for action in this regard have to be seen with pessimism, as argued above. What is more, if society were prepared to take preventative and/or adaptive measures with regard to climate change, it would not need to wait for scientific studies to deliver the foundations for this belief. And if climate change were seen as a risk we want to avoid, we should try to reduce our vulnerability and take adaptive measures (coastline protection, increasing agricultural and infrastructure resilience). As in other policy fields, we face the prospect of acting on the basis of limited information, where Lindblom’s principle of incrementalism should be followed. Social and economic policies are prime examples of areas where we do this all the time, mostly without waiting for yet another improved report on the state of knowledge (bearing in mind that such reports, if available, will in all likelihood be used as ‘trump cards’ if they fit the proposed policy option—otherwise they will be ignored).

About 100 years ago, the great Austrian philosopher Otto Neurath expressed the hope to unite society through science, to solve political conflicts through rational scientific argumentation. He said: ‘Metaphysical terms divide; scientific terms unite.’ If we interpret the ‘metaphysical’ to comprise ideological and political aspects, we arrive at a truly astonishing conclusion. In the example of Keynes, the practical policies and agreements that were implemented were first and foremost based on politics. Keynes himself may have subscribed to a technocratic (and elitist) world view, but he practiced differently. His economic policy proposals were based on the recognition of the ‘stickiness of wages’ and a respect of trade union power. He developed his policy in opposition to neoclassical market equilibrium models. With regard to racism, we realize that the Nazis used an ideology to unite a majority of the German people and enrolled race science to bolster additional support for the extermination programs. So what happens in the climate discourse? Again, we see a widespread political agreement about the serious implications of climate change. But when it comes to action, ironically, science has been used to divide, not to unite. Because of a misunderstanding of the relation between authoritative knowledge and political power, a battle has been waged for the ‘correct’ understanding of the climate system as a precondition for action. This has alienated many citizens who would be prepared to discuss preventative policies (as expressed in opinion polls). Many perceive that they are expected to subscribe to a platform that is either beyond their grasp or suspicious. It is perhaps the ultimate irony that the science war about climate change has all the overtones of a religious war.

We do not claim to provide a general theory of how knowledge becomes practical and effective in policy making. Even if knowledge has the hallmarks of being practical, this does not mean that it will be implemented and thus automatically become effective in practice. Too many factors influence policy making and the unpredictable nature of historical processes inevitably thwarts any kind of determinism. The many different possible combinations of policy streams, windows of opportunity and active policy entrepreneurs lead to different policy outcomes, over time and in different jurisdictions.

But even where practical knowledge is developed, this does not create the conditions for its own practical success. Political and cultural forces are usually far more important for decision making. So does our study on the Power of Scientific Knowledge end with the conclusion that it does not yield any? It certainly does not in the literal sense of the term. However, scientists can act as agenda setters, influencing the belief systems of decision makers and providing legitimation for decisions taken. A science push model is highly improbable. As the cases of Eugenics and Keynesianism show, even the most practical knowledge does not create the conditions for its own implementation. And the case of climate change provides another lesson: the futility of trying to influence policy making without practical knowledge.

10 September 2012

New Paper: Basic Research as a Political Symbol

I have a new paper just out in Minerva as part of its special issue celebrating its 50th year as a journal. The paper is titled "Basic Research as a Political Symbol." The special issue has a range of interesting papers by leading scholars of science and technology policy.

Here is the full introduction to my paper:

R. A. Pielke, Jr. 2012. Basic Research as a Political Symbol, Minerva, doi: 10.1007/s11024-012-9207-5


Since the end of the Cold War in the early 1990s, observers of science and technology policies have studied in depth the on-going changes that have characterized thinking and practice (e.g., Sarewitz 1996, Gibbons et al. 1994, Guston 2000, Jasanoff 2004, Lentsch and Weingart 2011). Much of the post-War discussion and debate of science in society and politics orients itself with respect to an overarching narrative that characterizes post-World War II science and technology policy, with less attention focused on the establishment of that narrative (Kline 1995). At the core of that narrative is a concept – “basic research.” This paper documents the rise of “basic research” as a political symbol in the early to mid-20th century, thus setting the stage for its well-documented fall and the changing nature of science and technology policies now underway.

The phrase “basic research” has been fundamental to discussions of science policy for almost 70 years. It represents both a concept and a relationship. As we will see, the phrase carries with it multiple meanings which support the expectation that public investments in science and technology are the wellspring of economic growth and prosperity. Although “basic research” entered the English lexicon around 1920, it only became a term broadly used in science policy discourse upon the 1945 publication of Vannevar Bush’s seminal report, Science – The Endless Frontier.

“Basic research” first appeared as a phrase in a US Congressional hearing in 1919, on the pages of the New York Times in 1922, in the Congressional Record of a floor debate among members of Congress in 1923, and in Science and Nature in 1924 and 1928, respectively. The fact that the phrase appeared in political discourse prior to appearing in the leading journals of science is symptomatic of the role of the phrase as an important political symbol with a meaning that has evolved over time.

From its initial years, Minerva was home to early discussions and debates over “basic research.” Toulmin (1966) presented the “warrants for basic research.” In the journal’s very first issue, Michael Polanyi’s (1962) “Republic of Science” offered what remains today an oft-cited defence of the logic of support for basic research, which at the time was part of his long-standing debate countering J. Desmond Bernal’s emphasis on mission-oriented research (Brown 2007). Calvert (2006) explored the continuing use of “basic research” as a key concept in science policy. All told, almost 100 contributions to Minerva since 1962 have used the phrase “basic research” (according to a search of the journal’s website). As the journal looks to its next 50 years, this paper contributes an analysis of some neglected aspects of the past.

Political symbols, according to Edelman (1967), “bring out in concentrated form those particular meanings and emotions which the members of a group create and reinforce in each other.” More precisely, as a political symbol “basic research” represents the identifications, expectations and demands of scientists, policymakers and the public about the role of science in society (Lasswell et al. 1952). Symbols play an important function in political debate as “there is nothing about any symbol that requires that it stands for only one thing” (Edelman 1967). Thus, the public can rally around a national flag or members of a legislature around the issue of, say, “tax reform,” even though in each case individuals may ascribe vastly different meanings to the idea of the nation represented by the flag or the specific meaning of “tax reform.”

This paper argues that in discussions of science policy “basic research” has played an important symbolic role in maintaining a political consensus on the role of science in society that has continued largely unbroken for almost 70 years. The political consensus has undergirded an approach to science policy that has been found wanting by analysts and scholars for decades (e.g., Sarewitz 1996). However, to date, despite the perceived weaknesses in contemporary science policy identified by academics and other observers, a new political consensus has yet to emerge to replace “basic research” as a central, organizing symbol.

To explore the symbolic functions of “basic research” in political discourse in science policy over the past century, this paper documents through a straightforward content analysis the rise (and subsequent fall) of “basic research” as a political symbol, and how the phrase has evolved and changed as the politics of science policy have changed over time. The paper proceeds in four parts. Part one provides an overview of symbols in political discourse and the methodologies of content analysis used to track symbols over time. Part two conducts a simple content analysis of the presence of “basic research” in the United States in the elite media, Congress and the leading scientific journals Science (published in the US) and Nature (published in the UK). Part three explores in more detail the historical context which gave rise to “basic research” as a political symbol. Part four discusses some of the reasons that “basic research” has endured as a political symbol: despite various proposed alternatives – such as transformational research or use-inspired research – the concept has yet to be replaced in science policy discourse. This paper helps to explain why. . .

To read the rest of the paper, please visit Minerva. Anyone wanting a copy who does not have access to Minerva can just send me an email. Comments welcomed!
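As an aside, the kind of simple phrase-frequency content analysis the paper describes can be sketched in a few lines. The corpus below is invented for illustration; it is not the paper's data, and the counting approach is only an assumption about how such a tally might be done.

```python
# A minimal sketch of phrase-frequency content analysis: counting yearly
# occurrences of a phrase across a corpus. The documents here are
# invented for illustration, not the paper's actual sources.
from collections import Counter

def phrase_counts_by_year(docs, phrase):
    """docs: iterable of (year, text) pairs; returns a Counter of year -> count."""
    phrase = phrase.lower()
    counts = Counter()
    for year, text in docs:
        counts[year] += text.lower().count(phrase)
    return counts

# Hypothetical mini-corpus.
corpus = [
    (1945, "Science - The Endless Frontier argued that basic research ..."),
    (1945, "Support for basic research underpins applied work."),
    (1950, "Applied research and development dominated the hearings."),
]

print(phrase_counts_by_year(corpus, "basic research"))
# → Counter({1945: 2, 1950: 0})
```

Run against a real archive (newspaper text, the Congressional Record, journal back issues), the same tally would trace the rise and fall of the phrase over time.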

06 September 2012

Upcoming Lecture on the "Green Revolution" at the University of Missouri

Next Tuesday I'll be giving a seminar at the University of Missouri on some of my work in progress related to innovation, with a focus on the so-called "green revolution." If you are in the vicinity please stop by and say hello.

Here are the details:

“The Mythology of the Green Revolution,” a seminar by Dr. Roger A. Pielke, Jr., Tuesday, September 11, 1:30pm in the Benton Bingham Ballroom, Memorial Union

Dr. Pielke is a Professor in the Environmental Studies Program and a Fellow of the Cooperative Institute for Research in Environmental Sciences at the University of Colorado at Boulder. He has authored The Honest Broker: Making Sense of Science in Policy and Politics and The Climate Fix: What Scientists and Politicians Won’t Tell you About Global Warming. There will be time for questions. Seating is limited and refreshments will be available. This event is free and open to the public.


The "green revolution" has been studied from many angles. In this talk I discuss work in progress on how the "green revolution" became characterized in terms of science and innovation. This particular mythology of the green revolution emerged more than 40 years ago from a complex stew of factors, among them the political and scientific interests of neo-Malthusians, American Cold War policies, scientists seeking influence in high level political debates and the vagaries of nature.  The mythology often repeated today emphasizes "revolution" as a burst of scientific innovation and its successful, essentially technocratic, application to solve an emerging crisis of famine. But when the historical context is examined, this mythology suffers from several empirical shortfalls. A more useful view of the "green revolution" -- a different mythology, perhaps -- may be summarized as the "green evolution."  Science and innovation as related to agricultural productivity have been consciously implemented for more than 250 years, at least in Europe (perhaps longer elsewhere), characterized by incremental progress punctuated by occasional major scientific breakthroughs. This evolution has at times been transformative and has never been dissociated from its social and political consequences. The mythology of the "green revolution" may thus distract more than enlighten.


Usain Bolt, Tiger Woods, Roger Federer, Lance Armstrong, Serena Williams, Oscar Pistorius ... and Bill Clinton. He is unarguably the most talented US politician of his generation.

Whether or not you agree with his arguments or his history, you would be hard pressed to find a more purely talented politician from the Baby Boom generation.

Here is Clinton at the 2012 DNC. Enjoy!

04 September 2012

Where Did the Jobs Go?

I have a new column up over at the Breakthrough Institute in which I discuss two recent studies on US unemployment. In the column I try to answer the question, where did the jobs go?

Here is how it starts out:
US unemployment rates remain high and are a major issue in the US presidential campaign. This post seeks to explain with simple math where the jobs went, as a first step in understanding how unemployment might be reduced. The math is simple, and it helps to show how most debates over the economy and employment miss what actually matters most – and that is innovation policy.
To read the rest, please head over there and feel free to comment/critique here or there!

Romney vs. Obama in the ScienceDebate

A group called ScienceDebate.org -- sponsored by the AAAS, NAS and Council on Competitiveness -- asked both US presidential candidates a range of questions on scientific topics, and the candidates have now posted their replies.

A lot of the replies are just campaign boilerplate, but there are a few items of note. Most significant is the fact that Mitt Romney's reply on climate change is far more substantive than that from President Obama -- call me surprised.

Economic growth and technological innovation, not economy-suppressing regulation, is the key to environmental protection in the long run. So I believe we should pursue what I call a “No Regrets” policy — steps that will lead to lower emissions, but that will benefit America regardless of whether the risks of global warming materialize and regardless of whether other nations take effective action. 
Somebody on that campaign is channeling The Climate Fix ;-)

Since I took office, the U.S. is importing an average of 3 million fewer barrels of oil every day, and our dependence on foreign oil is at a 20-year low. We are also showing international leadership on climate change, reaching historic agreements to set emission limits in unison with all major developed and developing nations. There is still more to be done to address this global problem.
Seriously? "Reaching historic agreements"? Historically inconsequential maybe. For those single issue voters focused on all things climate, the Obama campaign's response says: "You''ll vote for me no matter what pablum I give to the ScienceDebate." Slap!

The election certainly won't swing on issues related to climate, much less science. One reason for this of course is that there isn't really much difference between the candidates on most issues of science -- at least not as expressed in the answers provided to ScienceDebate.org -- both candidates love science, innovation, education, healthy food, clean water, science free from politics and an open internet.

I'd be surprised if a single US citizen changes their vote based on the replies. They are interesting nonetheless. Have a look at their answers here.

Postscript: And in case anyone is curious, I am voting for the incumbent.

01 September 2012

Five Commandments of Decision Making Under Uncertainty

In a paper presented yesterday at the Jackson Hole Economic Policy Symposium (a history of the symposium is here in PDF), Andrew Haldane and Vasileios Madouros recommend "Five Commandments" of decision making under uncertainty. The paper is titled "The Dog and the Frisbee" and in it they describe these "five commandments":
These are “Five Commandments” of decision-making under uncertainty. That description is apt. Like disease detection, frisbee catching, sports prediction and stock-picking, living a moral life is a complex task. The Ten Commandments are heuristics to help guide people through that moral maze, the ultimate simple rules. They have proven remarkably robust through the millennia. Less has been more.
The "commandments" are summarized below, based on my distillation of the text of their paper, and frequent readers of this blog are going to find much in them that is familiar:

1. "Complex environments often instead call for simple decision rules"
The simplest explanation is that collecting and processing the information necessary for complex decisionmaking is costly, perhaps punitively so. Fully defining future states of the world, and probability-weighting them, is beyond anyone’s cognitive limits. Even in relatively simple games, such as chess, cognitive limits are quickly breached. Chess grandmasters are unable to evaluate fully more than 5 chess moves ahead. The largest super-computers cannot fully compute much beyond 10 moves ahead (Gigerenzer (2007)).

Most real-world decision-making is far more complex than chess – more moving pieces with larger numbers of opponents evaluated many more moves ahead. Simon coined the terms “bounded rationality” and “satisficing” to explain cost-induced deviations from rational decision-making (Simon (1956)). A generation on, these are the self-same justifications being used by behavioural economists today. For both, less may be more because more information comes at too high a price.
2. "Ignorance can be bliss"
Too great a focus on information gathered from the past may retard effective decision-making about the future. Knowing too much can clog up the cognitive inbox, overload the neurological hard disk. One of the main purposes of sleep – doing less – is to unclog the cognitive inbox (Wang et al (2011)). That is why, when making a big decision, we often “sleep on it”.

“Sleeping on it” has a direct parallel in statistical theory. In econometrics, a model seeking to infer behaviour from the past, based on too short a sample, may lead to “over-fitting”. Noise is then mistaken as signal, blips parameterised as trends. A model which is “over-fitted” sways with the smallest statistical breeze. For that reason, it may yield rather fragile predictions about the future.

Experimental evidence bears this out. Take sports prediction. . .
3. "Probabilistic weights from the past may be a fragile guide to the future"
John von Neumann and Oskar Morgenstern established that optimal decision-making involved probabilistically-weighting all possible future outcomes (von Neumann and Morgenstern (1944)). Multiple regression techniques are the statistical analogue of von Neumann-Morgenstern optimisation, with behaviour inferred by probabilistically-weighting explanatory factors.

In an uncertain environment, where statistical probabilities are unknown, however, these approaches to decision-making may no longer be suitable. Probabilistic weights from the past may be a fragile guide to the future. Weighting may be in vain. Strategies that simplify, or perhaps even ignore, statistical weights may be preferable. The simplest imaginable such scheme would be equal-weighting or “tallying”.
4.  "Other things equal, the smaller the sample, the greater the model uncertainty and the better the performance of simple, heuristic strategies"
The choice of optimal decision-making strategy depends importantly on the degree of uncertainty about the environment – in statistical terms, model uncertainty. A key factor determining that uncertainty is the length of the sample over which the model is estimated. Other things equal, the smaller the sample, the greater the model uncertainty and the better the performance of simple, heuristic strategies.

Small samples increase the sensitivity of parameter estimates. They increase the chances of inaccurately over-fitting historical data. This risk becomes more acute, the larger the parameter space being estimated. Complex models are more likely to be over-fitted. And the parametric sensitivity induced by over-fitting makes for unreliable predictions about the future. Simple models suffer fewer of these parametric excess-sensitivity problems, especially when samples are short.
5.  "Complex rules may cause people to manage to the rules, for fear of falling foul of them"
There is a final, related but distinct, rationale for simple over complex rules. Complex rules may cause people to manage to the rules, for fear of falling foul of them. They may induce people to act defensively, focussing on the small print at the expense of the bigger picture.

Studies of the behaviour of doctors illustrate this pattern (Gigerenzer and Kurzenhäuser (2005)). Fearing misdiagnosis, perhaps litigation, doctors are prone to tick the boxes. That may mean over-diagnosing drugs or over-submitting patients to hospital. Both are defensive actions, reducing risks to the doctor. But both are a potential health hazard to the patient. For example, submitting patients to hospital increases significantly their risk of secondary infection. Hospitals are, after all, full of sick people.

Doctors unencumbered by a complex rulebook will have fewer incentives to act defensively. They may also be better able to form their own independent judgements when diagnosing medical problems, using their accumulated experience. That ought to more closely align a doctor’s risk incentives with their patient’s. The same is likely to be true of other professions, from lawyers to policemen to bank supervisors.
A focus on simple vs. complex analyses and decisions that are based on heuristics rather than optimization runs counter to the grain of conventional wisdom across many areas, from financial regulation to environmental protection.
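The over-fitting point behind commandment 4 is easy to demonstrate. The toy example below is mine, not the authors': on a small synthetic sample drawn from a simple linear process, a complex model (a high-degree polynomial that fits every noisy observation exactly) predicts out of sample far worse than a plain straight-line fit.

```python
# A toy illustration of over-fitting on a small sample (my example, not
# from the Haldane-Madouros paper). The data are synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Small training sample from a simple linear truth, y = 2x, plus noise.
x_train = np.linspace(0.0, 1.0, 8)
y_train = 2.0 * x_train + rng.normal(0.0, 0.2, size=x_train.size)

# Complex model: degree-7 polynomial, which interpolates every noisy point.
# Simple model: a straight line.
complex_fit = np.polyfit(x_train, y_train, 7)
simple_fit = np.polyfit(x_train, y_train, 1)

# Evaluate out of sample, including a little beyond the training range --
# the statistical analogue of predicting the future from a short past.
x_test = np.linspace(0.0, 1.5, 50)
y_true = 2.0 * x_test

def mse(coeffs):
    """Mean squared prediction error against the noise-free truth."""
    return float(np.mean((np.polyval(coeffs, x_test) - y_true) ** 2))

print(f"simple-model test MSE:  {mse(simple_fit):.3f}")
print(f"complex-model test MSE: {mse(complex_fit):.3f}")
```

The complex fit "sways with the smallest statistical breeze": having parameterised the noise as signal, its out-of-sample error is orders of magnitude worse than the simple rule's.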

One important point to note is that their paper uses two conflicting definitions of "uncertainty." One definition of uncertainty is equivalent to "risk" or the odds of a particular outcome from a known distribution of outcomes. If I bet $1,000 that the next roll of a die will turn up 6, I am taking a risk on an uncertain outcome. A second definition of uncertainty ("Knightian uncertainty") is equivalent to what I typically call "ignorance" following from the work of John Maynard Keynes, as discussed in The Honest Broker. These two definitions are obviously not referring to the same concept, and thus are subject to confusion unless care is taken in their interpretation. (I discuss uncertainty-as-ignorance at length in this recent paper in PDF).
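The difference between the two definitions is concrete. Under uncertainty-as-risk the outcome distribution is known, so the die bet's expected value is computable; the even-money payout below is my assumption, since the example above does not specify the bet's odds.

```python
# Expected value of the die bet under uncertainty-as-risk (known
# distribution). The even-money payout is an assumption for illustration.
from fractions import Fraction

stake = 1000
p_win = Fraction(1, 6)  # known probability that a fair die shows 6

# Even-money bet: win the stake with p = 1/6, lose it with p = 5/6.
expected_value = p_win * stake - (1 - p_win) * stake
print(expected_value)  # → -2000/3, i.e. about -$666.67 per bet

# Under Knightian uncertainty ("ignorance"), no such distribution is
# known, so this calculation cannot even be set up.
```

The contrast is the whole point: a calculation like this exists only when the distribution is known, which is precisely what uncertainty-as-ignorance denies.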

Academics and policy makers typically like to focus on uncertainty-as-risk rather than uncertainty-as-ignorance as the former is more readily subject to easy quantification and manipulation. This focus reinforces the values of academia (where physics-envy runs rampant through the social sciences) and the desire of politicians to make concrete-looking claims backed by authoritative-sounding expertise. The result can be to create a zone of ignorance surrounding our decisions. Not surprisingly, bad decisions can result.

Haldane and Madouros apply their analysis to financial regulation, but the heuristics that they introduce have a much broader applicability. The entire paper just scratches the surface of this important topic, but it is a readable and valuable contribution. Have a look (PDF).