31 March 2013

Fixing the Marcott Mess in Climate Science

In 1991 the National Research Council proposed what has come to be a widely accepted definition of misconduct in science:
Misconduct in science is defined as fabrication, falsification, or plagiarism, in proposing, performing, or reporting research. Misconduct in science does not include errors of judgment; errors in the recording, selection, or analysis of data; differences in opinions involving the interpretation of data; or misconduct unrelated to the research process.
Arguments over data and methods are the lifeblood of science, and are not instances of misconduct.

However, here I document the gross misrepresentation of the findings of a recent scientific paper via press release which appears to skirt awfully close to crossing the line into research misconduct, as defined by the NRC. I recommend steps to fix this mess, saving face for all involved, and a chance for this small part of the climate community to take a step back toward unambiguous scientific integrity.

The paper I refer to is by Marcott et al. 2013, published recently in Science. A press release issued by the National Science Foundation, which funded the research, explains the core methodology and key conclusion of the paper as follows (emphasis added):
Peter Clark, an OSU paleoclimatologist and co-author of the Science paper, says that many previous temperature reconstructions were regional and not placed in a global context.

"When you just look at one part of the world, temperature history can be affected by regional climate processes like El Niño or monsoon variations," says Clark.

"But when you combine data from sites around the world, you can average out those regional anomalies and get a clear sense of the Earth's global temperature history."

What that history shows, the researchers say, is that during the last 5,000 years, the Earth on average cooled about 1.3 degrees Fahrenheit--until the last 100 years, when it warmed about 1.3 degrees F.
The press release clearly explains that the paper (a) combines data from many sites around the world to create a "temperature reconstruction" which gives a "sense of the Earth's temperature history," and (b) "that history shows" a cooling over the past 5000 years, until the last 100 years when all of that cooling was reversed.

The conclusions of the press release were faithfully reported by a wide range of media outlets, and below I survey several of them to illustrate that the content of the press release was accurately reflected in media coverage and, at times, amplified by scientists both involved and not involved with the study.

Examples of Media Coverage

Here is Justin Gillis at the New York Times, with emphasis added to this excerpt and also those further below:
The modern rise that has recreated the temperatures of 5,000 years ago is occurring at an exceedingly rapid clip on a geological time scale, appearing in graphs in the new paper as a sharp vertical spike.
Similarly, at the NY Times Andy Revkin reported much the same in a post titled, "Scientists Find an Abrupt Warm Jog After a Very Long Cooling." Revkin included the following graph from the paper along with a caption explaining what the graph shows:
Revkin's caption:  A new Science paper includes this graph of data providing clues to past global temperature. It shows the warming as the last ice age ended (left), a period when temperatures were warmer than today, a cooling starting 5,000 years ago and an abrupt warming in the last 100 years.
Revkin concluded: "the work reveals a fresh, and very long, climate “hockey stick.”" For those unfamiliar, a hockey stick has a shaft and a blade.
Any association with the so-called "hockey stick" is sure to capture interest in the highly politicized context of the climate debate, in which the iconic figure is like catnip to partisans on both sides. Here is Michael Lemonick at Climate Central:
The study... confirms the now famous “hockey stick” graph that Michael Mann published more than a decade ago. That study showed a sharp upward temperature trend over the past century after more than a thousand years of relatively flat temperatures. . .

“What’s striking,” said lead author Shaun Marcott of Oregon State University in an interview, “is that the records we use are completely independent, and produce the same result.”
Here is Grist.org, which refers in the passage below to the same figure shown above:
A study published in Science reconstructs global temperatures further back than ever before — a full 11,300 years. The new analysis finds that the only problem with Mann’s hockey stick was that its handle was about 9,000 years too short. The rate of warming over the last 100 years hasn’t been seen for as far back as the advent of agriculture.

To be clear, the study finds that temperatures in about a fifth of this historical period were higher than they are today. But the key, said lead author Shaun Marcott of Oregon State University, is that temperatures are shooting through the roof faster than we’ve ever seen.

“What we found is that temperatures increased in the last 100 years as much as they had cooled in the last 6,000 or 7,000,” he said. “In other words, the rate of change is much greater than anything we’ve seen in the whole Holocene,” referring to the current geologic time period, which began around 11,500 years ago.
Back to more mainstream outlets, here is how Nature characterized the study, offering a substantially similar but somewhat more technical description of the curve shown in the figure above:
Marcott and his colleagues set about reconstructing global climate trends all the way back to 11,300 years ago, when the Northern Hemisphere was emerging from the most recent ice age. To do so, they collected and analysed data gathered by other teams. The 73 overlapping climate records that they considered included sediment cores drilled from lake bottoms and sea floors around the world, along with a handful of ice cores collected in Antarctica and Greenland.

Each of these chronicles spanned at least 6,500 years, and each included a millennium-long baseline period beginning in the middle of the post-ice-age period at 3550 BC.

For some records, the researchers inferred past temperatures from the ratio of magnesium and calcium ions in the shells of microscopic creatures that had died and dropped to the ocean floor; for others, they measured the lengths of long-chain organic molecules called alkenones that were trapped in the sediments.

After the ice age, they found, global average temperatures rose until they reached a plateau between 7550 and 3550 BC. Then a long-term cooling trend set in, reaching its lowest temperature extreme between AD 1450 and 1850. Since then, temperatures have been increasing at a dramatic clip: from the first decade of the twentieth century to now, global average temperatures rose from near their coldest point since the ice age to nearly their warmest, Marcott and his team report today in Science.
And here is New Scientist, making reference to the exact same graph:
Shaun Marcott of Oregon State University in Corvallis and colleagues have compiled 73 such proxies from around the world, all of which reach back to the end of the last glacial period, 11,300 years ago. During this period, known as the Holocene, the climate has been relatively warm – and civilisation has flourished.

"Most global temperature reconstructions have only spanned the past 2000 years," says Marcott.

Marcott's graph shows temperatures rising slowly after the ice age, until they peaked 9500 years ago. The total rise over that period was about 0.6 °C. They then held steady until around 5500 years ago, when they began slowly falling again until around 1850. The drop was 0.7 °C, roughly reversing the previous rise.

Then, in the late 19th century, the graph shows temperatures shooting up, driven by humanity's greenhouse gas emissions.

The rate of warming in the last 150 years is unlike anything that happened in at least 11,000 years, says Michael Mann of the Pennsylvania State University in University Park, who was not involved in Marcott's study. It was Mann who created the original hockey stick graph (see upper graph here), which showed the change in global temperatures over the last 1000 years.

Over the Holocene, temperatures rose and fell less than 1 °C, and they did so over thousands of years, says Marcott. "It took 8000 years to go from warm to cold." Agriculture, communal life and forms of government all arose during this relatively stable period, he adds. Then in 100 years, global temperatures suddenly shot up again to very close to the previous maximum.
It seems clear that even as various media took different angles on the story and covered it in varying degrees of technical detail, the articles listed above accurately conveyed the conclusions of the NSF press release, and specifically the "hockey stick"-like character of the new temperature reconstruction. Unfortunately, all of this is just wrong, as I explain below. (If you'd like to explore media coverage further, here is a link to more stories. My colleague Tom Yulsman got punked too.)

The Problem with the NSF Press Release and the Subsequent Reporting

There is a big problem with the media reporting of the new paper. The coverage contains a fundamental error which (apparently) originates in the NSF press release and which was amplified by the public comments of scientists.

In a belatedly posted FAQ on the paper, which appeared on Real Climate earlier today, Marcott et al. make this startling admission:
Q: What do paleotemperature reconstructions show about the temperature of the last 100 years?

A: Our global paleotemperature reconstruction includes a so-called “uptick” in temperatures during the 20th-century. However, in the paper we make the point that this particular feature is of shorter duration than the inherent smoothing in our statistical averaging procedure, and that it is based on only a few available paleo-reconstructions of the type we used. Thus, the 20th century portion of our paleotemperature stack is not statistically robust, cannot be considered representative of global temperature changes, and therefore is not the basis of any of our conclusions.
Got that?

In case you missed it, I repeat:
. . . the 20th century portion of our paleotemperature stack is not statistically robust, cannot be considered representative of global temperature changes . . .
What that means is that this paper actually has nothing to do with a "hockey stick," as it does not have the ability to reproduce 20th century temperatures in a manner that is "statistically robust." The new "hockey stick" is no such thing: the Marcott et al. reconstruction has no blade. (To be absolutely clear, I am not making a point about temperatures of the 20th century, but about what can be concluded from the paper about temperatures of the 20th century.)
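
The underlying statistical point is worth a concrete illustration: a feature shorter than the smoothing scale of an averaging procedure simply cannot be resolved by it. Below is a minimal sketch of my own (not the authors' method; the 2,000-year series, the 1-degree spike and the 300-year window are all illustrative assumptions):

# Illustration only (not the Marcott et al. procedure): a 1.0-degree
# "uptick" lasting 100 years is added to an otherwise flat 2,000-year
# series, then smoothed with a 300-year moving average standing in for
# the "inherent smoothing" of a proxy reconstruction.
import numpy as np

temps = np.zeros(2000)   # 2,000 years of annual anomalies, flat baseline
temps[-100:] = 1.0       # a 1.0-degree spike in the final century

window = 300
smoothed = np.convolve(temps, np.ones(window) / window, mode="valid")

print(f"true spike:             {temps.max():.2f} deg")     # 1.00 deg
print(f"after 300-yr smoothing: {smoothed.max():.2f} deg")  # ~0.33 deg

A century-long signal survives such smoothing only as a muted bump one-third its true size, which is why a reconstruction of this kind cannot speak to the magnitude of 20th century changes one way or the other.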

Yet, you might recall that the NSF press release said something quite different:
What that [temperature reconstruction] history shows, the researchers say, is that during the last 5,000 years, the Earth on average cooled about 1.3 degrees Fahrenheit--until the last 100 years, when it warmed about 1.3 degrees F.
So what the paper actually shows is the following, after I have removed from the graph the 20th century period that is "not statistically robust" (this is also the figure that appears at the top of this post):
Surely there is great value in such an analysis of pre-20th century temperatures. And there can be no doubt there will be continuing debates and discussions about the paper's methods and conclusions. But one point that any observer should be able to clearly conclude is that the public representation of the paper was grossly in error. The temperature reconstruction does not allow any conclusions to be made about the period after 1900.

Does the public misrepresentation amount to scientific misconduct? I'm not sure, but it is far too close to that line for comfort. Saying so typically leads to a torrent of angry ad hominem and defensive attacks, and evokes little in the way of actual concern for the integrity of this highly politicized area of science. Looking past the predictable responses, this mess can be fixed in a relatively straightforward manner with everyone's reputation intact.

How to Fix This

Here are the steps that I recommend should be taken:

1) Science should issue a correction to the paper, and specifically do the following:

(a) retract and replot all figures in the paper and SI, eliminating from the graphs all data/results that fail to meet the paper's criteria for "statistical robustness";
(b) include in the correction the explicit and unambiguous statement offered in the FAQ released today that the analysis is not "statistically robust" post-1900.

2) NSF should issue a correction to its press release, clarifying and correcting the statements of Peter Clark (a co-author, quoted above) and Candace Major, NSF program director, who says in the release:
"The last century stands out as the anomaly in this record of global temperature since the end of the last ice age," says Candace Major, program director in the National Science Foundation's (NSF) Division of Ocean Sciences.
3) The New York Times (Gillis and Revkin, in particular), Nature and New Scientist, as outlets that pride themselves on accurate reporting of science, should update their stories with corrections. Grist and Climate Central should consider the same.

[UPDATE: Andy Revkin at DotEarth has updated his posts here and here to reference the "lost blade" from the hockey stick and link to this post. That was quick and easy. Others take note.]

Let me be perfectly clear -- I am accusing no one of scientific misconduct. The errors documented here could have been the product of group dynamics, institutional dysfunction, miscommunication, sloppiness or laziness (do note that misconduct can result absent explicit intent). However, what matters most now is how the relevant parties respond to the identification of a clear misrepresentation of a scientific paper by those who should not make such errors.

That response will say a lot about how this small but visible part of the climate community views the importance of scientific integrity.

22 March 2013

Spring Break

Spring break! I'll be back in April . . .

21 March 2013

Climate Attribution Alchemy

NOTE: This post has been updated, based on suggestions made by HowardW in the comments, and to correct an error on my part.

In a paper just out in the journal Climate Dynamics, Holland and Bruyère (HB13) claim to have found a signal of greenhouse gas emissions in global tropical cyclone behavior. They use data from Weinkle et al. 2012 -- our recent paper on global landfalls -- as part of this argument. They write:
Weinkle et al. (2012) examined the global number of hurricanes that actually make landfall in each of the Saffir–Simpson categories. The proportion of Cat 4–5 at landfall to all landfall hurricanes in their data set has increased with ACCI [Anthropogenic Climate Change Index -- a measure of global temperature increase] at a rate of  ~21 % per C (p < 0.01).
Wow! 21% per degree sounds like a lot.

Setting aside issues of data uncertainty in the classification of storms, I was skeptical of their finding based on our data because -- as you can see in the graph above from our dataset -- there is no obvious signal in the proportion of global Category 4 and 5 landfalls from 1970 to 2010. The presence of any multi-decadal trend in this metric (significant or not) is very sensitive to start and end dates. Urged on by a persistent but ultimately helpful commenter on this blog, I can report that I am able to replicate this finding and it is . . . interesting. (Note that the following discussion focuses only on the claims of HB13 related to Weinkle et al. 2012, but that is plenty enough for me.)

HB13 explain their methodology:
For the global hurricane analysis we choose to focus on the slightly shorter period from 1975 to 2010, as this is a reasonably homogeneous period of global satellite data . . .

[P]revious studies have reported a marked upward trend in intense hurricanes (Webster et al. 2005; Emanuel 2005, 2007; Elsner et al. 2008), one that is closely related to increasing SST (Hoyos et al. 2010). This trend in intense hurricanes is the focus of the remainder of our analysis.

We bin all hurricanes into the five Saffir–Simpson categories and take annual proportions of each relative to the total number of hurricanes. These are smoothed with a 5-year running mean to remove short-term variability. . .
So I followed this procedure and produced the following graph of the smoothed data for the period 1975 to 2010, with the trend taken as described in the paper and shown in red along with the linear regression equation.
Applying the linear regression shown in the graph above over the 35 years of data, the proportion of Category 4 and 5 hurricanes increases at a rate of 0.33% per year (from the regression equation).

The metric that HB13 report is the percentage increase in proportion per degree Celsius of global temperature change. Global temperatures increased by approximately 0.5-0.6 degrees C from the late 1970s to the late 2000s (I do not have the precise values of their ACCI, but this value is consistent with HB13 Figures 2a and 6a and also with this dataset).

So the increase (the slope of the regression line) in the proportion of Category 4 and 5 tropical cyclones in the Weinkle et al. 2012 dataset (using the trend off of the smoothed data) is 0.0033/(0.55/35), or an increase of ~21% per degree Celsius -- exactly what they report. QED.
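
For those who want to check the arithmetic, here is a minimal sketch of the procedure as I understand it from HB13. The annual counts below are hypothetical stand-ins (I am not reproducing the Weinkle et al. data here), and the 0.55 degree warming figure is the assumption discussed above; with the actual data the fitted slope comes out to ~0.0033 per year, which is what yields ~21% per degree:

# Sketch of the HB13-style calculation (hypothetical inputs, not the
# Weinkle et al. data): annual proportion of Cat 4-5 landfalls, smoothed
# with a 5-year running mean, fitted with a linear trend, then converted
# to percent change per degree C of warming.
import numpy as np

rng = np.random.default_rng(42)
total = rng.integers(10, 21, size=36).astype(float)  # all landfalls, 1975-2010
cat45 = rng.integers(1, 7, size=36).astype(float)    # Cat 4-5 landfalls

prop = cat45 / total                                 # annual proportions
smooth = np.convolve(prop, np.ones(5) / 5, mode="valid")  # 5-yr running mean

# linear trend on the smoothed series, in proportion per year
slope = np.polyfit(np.arange(smooth.size), smooth, 1)[0]

warming_per_year = 0.55 / 35                         # assumed deg C per year
print(f"trend: {100 * slope / warming_per_year:+.0f}% per degree C")

Note how sensitive the result is to the assumed warming: dividing a small decadal-scale slope by roughly 0.016 degrees per year multiplies it by a factor of about 64.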

I will leave it as an exercise for the reader to offer a critique of methods or interpretation of the results. For my part, I do not find the analysis compelling. But it is interesting!

20 March 2013

If I Taught a Statistics Course

Today I am guest lecturing in a graduate seminar here on Quantitative Methods of Policy Analysis, being taught by Jason Vogel. The subject of today's class is statistics. In preparing for the class I rounded up a set of books that I have found to be particularly useful and I thought I'd share them here, just in case I ever teach a stats class down the road.

These four books would be at the top of my required reading list:

S. Stigler, 2002. Statistics on the Table: A History of Statistical Concepts and Methods, Harvard University Press.

S. Senn, 2003. Dicing with Death: Chance, Risk and Health, Cambridge University Press.

W. Briggs, 2008. Breaking the Law of Averages: Real Life Probability and Statistics in Plain English, LuLu Marketplace (and here, free!).

M. Mauboussin, 2012. The Success Equation: Untangling Skill and Luck in Business, Sports, and Investing, Harvard Business Review Press.

Cases we will discuss today include the NCAA tournament (and Nate Silver's skill), hurricane trends (of course), and a few puzzlers from the books above. It'll be fun. The possible cases for exploring statistical questions and methods are of course infinite, and run up against important questions of research design, epistemology and philosophy of science, among other topics.

What other books or readings would you recommend?

I Remain "Roughly" 18 Feet Tall

UPDATE: Grinsted engages in the comments, do have a look!

UPDATE2: I see that Grinsted is changing his "rebuttal" in real time and without acknowledgment of the changes (not good blog etiquette;-). So please do note that the post below is a response to the rebuttal as it stood on March 19, which has since evolved. In the latest version, Grinsted grants my major points (great!), so I think this debate has come to a close. Readers are invited to judge for themselves whether the Grinsted et al. surge index should be preferred in any way to existing datasets on US hurricane landfall frequencies and intensities as a measure of past hurricane incidence.

Aslak Grinsted, the lead author of a new paper in PNAS this week which predicts a Hurricane Katrina every other year in the decades to come, has just responded to my earlier critique of their methods. The new paper depends on that earlier work and, I am afraid, suffers the same faults. My earlier critique can be found here and Grinsted's response is here. I welcome the exchange. Here I explain why Grinsted's response is off base and repeats the problematic analysis found in the original paper.

In his response to my critique, Grinsted claims to see a marked increase in the number of damaging storms from our normalized loss dataset. This "surprising" discovery apparently supports the conclusion of ever worsening hurricanes, and it has been missed all this time. Grinsted shows the following graph.
Grinsted explains the graph as follows (emphasis added):
I find it interesting to plot the frequency of extreme normalized damage events. I have chosen to define extreme damage so that it corresponds to roughly the same number of events as UScat1-5. Surprisingly there is a strong trend. The same clear trend is not in total normalized damage per year. It clearly shows that the distribution is not stationary.
Once again we see how the notion of "roughly" introduces some problems for the analysis. Let me explain -- precisely.

On the graph (at the top) Grinsted says that he has included the top 205 most damaging events from our dataset for 1900 to 2005 (our paper and dataset can be found here). Our dataset has 217 total events, which includes landfalling storms of tropical storm strength (those which are named storms but at less than hurricane strength) as well as landfalling Category 1 to 5 hurricanes.  Grinsted's black curve shows Category 1-5 landfalling hurricanes and the red curve shows what he claims to be "roughly the same number of events as UScat1-5." This claim is wrong.

Using the top 205 damaging events means that Grinsted dropped the bottom 12 least-damaging events. The bottom 12 events include 8 Tropical Storms: Fay (2002), Beryl (1988), Chris (1982), Isidore (1984), Allison (1995), Chris (1988), Dean (1995) and Gustav (2005), plus 4 Category 1 hurricanes: Bonnie (1986), Alex (2004), Florence (1988) and Floyd (1987).

Grinsted missed 50 other storms of less-than-hurricane strength. Put another way, there are only 155 damaging events of Category 1-5 strength from 1900 to 2005, yet Grinsted graphed 205 events. So 205 is "roughly" 155, and I once again am "roughly" 18 feet tall.
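
The mismatch is easy to demonstrate. Here is a minimal sketch, assuming the 217-event normalized-loss table is available as a CSV; the filename and column names are hypothetical:

# Sketch of the selection mismatch (hypothetical file and column names;
# assumes tropical storms carry category 0 and hurricanes categories 1-5).
import pandas as pd

events = pd.read_csv("normalized_losses_1900_2005.csv")  # 217 events

top205 = events.nlargest(205, "normalized_loss")  # Grinsted's damage-ranked cut
cat15 = events[events["category"] >= 1]           # actual Cat 1-5 landfalls

print(len(top205))                      # 205 events by damage rank
print(len(cat15))                       # 155 events of Cat 1-5 strength
print((top205["category"] == 0).sum())  # dozens of tropical storms mixed in

A damage-ranked cut of 205 events necessarily sweeps in dozens of sub-hurricane-strength storms, which is exactly the apples-to-oranges comparison at issue.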

Why does this matter?
From 1945 to present there were 57 tropical storms which caused damage. Before 1945 there was just 1. The graph above shows how these storms show up in our dataset over time. Is global warming causing more damaging storms with winds less than 74 mph?

I am sorry to disappoint. The actual reason for the increasing number of damaging tropical storms has to do with the reporting of damages. Typically, such storms have very low damages and simply were included less frequently in the official records of the National Hurricane Center. Today, every storm comes with a damage estimate -- small or large.

The neglect of past tropical storms in the NHC dataset does introduce a very small bias in our results -- from 1900 to 2005 the normalized losses (in 2005 dollars) from all landfalling storms of tropical storm (i.e., less than Category 1) strength are about 2% of the total losses. Of note, there were also 8 storms of hurricane strength which made landfall prior to 1940 but had no loss estimates (and thus these also do not appear in our dataset or in Grinsted's graph). Adding in past storms with missing loss estimates would have the effect of making the damage estimates of the distant past as much as several percent higher annually from 1900-1940. That wouldn't change our results in any meaningful way (and works against those laboring to find a trend in our so-far trend-free dataset).

What the small bias will do instead is perhaps confuse someone who looks at our dataset without understanding it, much less someone who treats it "roughly." Obviously this error also confounds Grinsted's efforts to create correlations between our dataset and others.

When you correctly compare the historical record of US hurricane landfalls to our damage record you will find a perfect match (assuming that the 8 hurricanes prior to 1940 with zero damage would cause damage today), as every landfalling Category 1-5 storm since 1940 has a damage estimate. Tellingly, in his rebuttal, Grinsted has committed the exact same type of error that was committed in his original paper -- he has looked at data and seen in it something which it does not hold.

The bottom line here is clear. If you want to look at trends in hurricanes, there is absolutely no need to construct abstract indices as there is actually good data on hurricanes themselves. Look for yourself:

16 March 2013

The Advocate's Dilemma

In today's FT John McDermott has lunch with Noam Chomsky. If you don't know who Chomsky is, McDermott explains:
Chomsky is arguably the world’s most prominent political activist. To his opponents, he is a crank who sees evil as made in America. To his supporters, he is a brave truth-teller and unrelenting humanist; a latter-day Bertrand Russell. . . Some of Chomsky’s critics have accused him of going easy on the faults of autocrats so long as they are enemies of the US.
Chomsky pushes back on this characterization and in the process points to a fundamental dilemma faced by the advocate -- a role distinguished by its focus on reducing the scope of choice available to a decision maker, typically to a single preferred outcome.

Chomsky explains the dilemma as follows:
“Suppose I criticise Iran. What impact does that have? The only impact it has is in fortifying those who want to carry out policies I don’t agree with, like bombing.” He argues that any criticisms about, say, Chávez, will invariably get into the mainstream media, whereas those he makes about the US will go unreported. This unfair treatment is the dissident’s lot, according to Chomsky.
What we have here is the old "ends-justify-the-means" challenge. For scientists and other experts this dilemma is particularly acute, because the authority of the expert lies in their claim to integrity and credibility. It is one thing for a political commentator to cherry-pick or otherwise make arguments selectively, as their authority does not necessarily rest on the fidelity of their claims. It is quite another thing for an expert to engage in the same sort of sly tactics, because they risk the very basis of the expert's claim to political authority. It may not seem fair, but that is how it is.
Chomsky explains that experts often serve political power rather obediently:
Intellectuals like to think of themselves as iconoclasts, he says. “But you take a look through history and it’s the exact opposite. The respected intellectuals are those who conform and serve power interests."
If you think that political battles are mostly about competing power interests, then you will probably have little concern about experts who decide to enlist their authority in service of advancing those interests. In power politics, the ends, of course, justify the means. However, if you think that the substance of alternative courses of action matter as much or even more than power interests, then you will view politicized expertise as not just of concern, but deeply pathological.

Back in 2005, when I was working on the first edition of The Honest Broker I explained on Prometheus how this dynamic was playing out in the climate debate:
[I]f the climate science community were to simply ignore such misuse of their authority for purposes of advocacy, it raises legitimate questions about the role that climate scientists wish to play in the political debate. Context matters here as many climate scientists have shown little reluctance to speak out in response to certain commentators (compare, e.g., reaction to Michael Crichton). . . Under this scenario, letting misstatements stand while selectively correcting others contributes to the conflation of climate science and climate politics.

These dynamics help to illustrate how an observer of the political debate on climate might come to (or even seek) the conclusion that climate science and politics are one and the same. From this vantage point, climate scientists become issue advocates whether they like it or not. For some climate scientists this outcome may be perfectly acceptable (see earlier reference to Madisonian democracy), but if climate policy needs consideration of new and innovative options (see earlier reference to Schattschneiderian democracy) then the climate community's collective actions may limit its future contributions to the climate debate to simply a tool of marketing for agendas now on the table. For issue advocates this may be a desirable outcome, but the question that I have for scientists is - is this the direction that you really want science to go?
The risk of advocacy within the expert community is not so much the consequences for the individual -- after all experts are people too, and each of us has to decide what role we wish to play. In a democracy advocacy is not just fundamental, but a noble calling. There are of course consequences for the individual expert of deciding to become an overt advocate, but taking such a course of action is not problematic. This is one reason that I have long supported Jim Hansen's overt advocacy on climate change -- more power to him. Same goes for Noam Chomsky -- his advocacy is welcomed in policy debates, whether I agree with his politics or not. No one would likely confuse Hansen or Chomsky with an honest broker institution or a reliable arbiter of technical questions.

A risk of advocacy to expertise is when it becomes systemic to the degree that alternative perspectives beyond advocacy are not welcomed or even denigrated -- if you are not with us then you must be with the enemy. When an expert community becomes dominated by advocacy you may find yourself playing power politics in the absence of policy substance. At that point the battles may be intense and symbolic, but they won't mean much in terms of achieving the advocate's policy goals. (Ring any bells?) In other words, it wouldn't be desirable if all of our experts followed the path chosen by Noam Chomsky.

The advocate's dilemma is thus not simply about whether to be an advocate or not -- the dilemma is whether to respect and include independent expertise, or to denigrate or even try to silence it when you find it politically inconvenient. Such a situation becomes problematic when policy debate needs a constant influx of new and creative options for action, which can serve both political and policy objectives, or alternatively, when the credibility of expertise actually matters.

The Bush administration found out that sacrificing credibility and options may help achieve short-term political successes, but ultimately saw that strategy fail in the longer run (both in terms of policy success and political credibility; see my discussion in The Honest Broker). For advocates (experts or otherwise) who disapprove of independent, uncompliant experts, it is a lesson worth learning.

14 March 2013

Thou Shall Not Critique the Australian Climate Commission

UPDATE: John McAneney responds to the Climate Commission press release here.

Today brings another case study in the self-destructive intolerance of the climate movement, and the challenges of expertise in highly politicized debates. Writing at The Conversation, a widely-read commentary site in Australia, Ryan Crompton and John McAneney of Macquarie University provide an update on their database of normalized insured disaster losses for Australia. (Note: I am affiliated with the group headed by McAneney and have collaborated with both.)

The occasion for their update is a recent report by the Australian Climate Commission, a government body established to advocate for action on climate change. The report -- The Angry Summer -- provides a tabulation of various weather records broken Down Under during the remarkable summer that has just ended. The report makes a number of very strong claims, including this one:
Australia’s Angry Summer shows that climate change is already adversely affecting Australians. The significant impacts of extreme weather on people, property, communities and the environment highlight the serious consequences of failing to adequately address climate change.
As experts on damage to property caused by extreme events, Crompton and McAneney subsequently wrote their piece at The Conversation to put one metric of the impacts experienced in Australia this past summer into a bit of historical perspective. They explicitly refer to the claim made by the Climate Commission, noting that it was just one of several claims in the report:
The report refers to, amongst other things, how the significant impacts of extreme weather on property highlights the serious consequences of failing to adequately address climate change.

So has property damage during 2012-2013 been higher than normal?

The answer, in terms of insured losses from weather-related disasters, is no.
You can see the losses for 2012/2013 in the figure at the top of this post (note: the data run June to May, so several months remain before the current year is complete). The data come from peer-reviewed research that I have discussed occasionally. Crompton and McAneney conclude:
The long-term average annual normalised insured loss from weather-related disasters is around $1.1 billion. To date, insured losses during the 2012-13 financial year from bushfires in Tasmania and Coonabarabran and flooding in Queensland and New South Wales currently total almost $1 billion. This loss is certainly not “angry”.
Lest they be misinterpreted, the authors conclude their piece by emphasizing the importance of responding to climate change and maintaining scientific credibility in that effort:
Climate change is an important concern, and deserves policy attention. However, making supportable scientific claims is important as well. Those who point to increasing disaster losses as a signal of human-caused climate change are doing no favours for those working to address growing losses and accumulating greenhouse gases in the atmosphere.
So far, so normal -- at least as far as the climate debate goes. The urge to associate climate change with property damage is tempting and common, of course, and when such claims are made responsible researchers place them into context by pointing to what the peer-reviewed research actually says. In the annals of the discussions of climate change and property damage, where debates can get intense, Crompton and McAneney are respectful and stick to the science (unlike this jerk).

So it is bizarre that at this point this story goes off the rails.

Upon publication of the piece at The Conversation, the Australian Climate Commission issued a rambling and vicious press release attacking Crompton and McAneney. Apparently, the sin they committed was not in being wrong in their scientific claims, but in daring to offer a critique of the Commission in the first place.

The Commission press release states:
Today in the Conversation Ryan Crompton and John McAneney badly misrepresent the Climate Commission’s recent report, The Angry Summer.

Commissioner Professor Will Steffen has released the following statement responding to the article. He commented that the article is “opportunistic and unbecoming of a research institution”.

Crompton and McAneney assert that, from an insurance loss perspective, this summer was not the worst. They compare the costs of extreme weather events this summer to other extreme weather events and assert that the Commission wrongly used insurance losses in the Angry Summer report.
Badly misrepresent? At no point do Crompton and McAneney ever "assert that the Commission wrongly used insurance losses in the Angry Summer report." Sorry, but this is a bald-faced lie from the Commission. Crompton and McAneney accurately state that the report refers to the "significant impacts of extreme weather on property" during the "angry summer." You can see a tweet below from the Commission which refers uncritically to another recent piece at The Conversation, one that is chock full of scientific inaccuracies. Apparently being completely wrong did not merit a press release, but I digress.

The statement by Will Steffen, a scientist speaking for the government in his official capacity on the Climate Commission, beggars belief -- "opportunistic and unbecoming of a research institution." The comment, which explicitly makes reference to the university as a whole, brings to mind the time that Richard Nixon demanded that all funding for MIT be cut off because he didn't like its politics.
The Commission press release rambles on about how much the Commission actually agrees with Crompton and McAneney, and explains that in the "Angry Summer" report when they said impacts to property they were actually referring to unspecified and unquantified impacts to property other than those related to economic costs. Please. (And really, so what? Regardless of what the Commission really meant, Crompton/McAneney's piece adds valuable context not given by the Commission and should be welcomed.)

The press release says:
Crompton and McAneney correctly note that “making supportable scientific claims is important”. They would be well advised to take their own advice.
The Climate Commission offers not a single criticism of any claim made by Crompton and McAneney.

There is actually nothing more becoming of a research institution than researchers willing to engage the public with the results of their scientific research. In democracies, it is OK to critique government -- in fact, governance is improved through such critique. There is nothing more offensive than a government that attacks researchers for the temerity to offer legitimate critique. In the US, we saw how the Bush Administration learned that lesson the hard way. It looks like the Climate Commission hasn't taken note.

11 March 2013

Learning from China: Coal and its Nukes

Over the past few years I've given the NYT's Justin Gillis a (deserved) hard time for some of his reporting. Today I'm happy to give him some well-earned praise on the occasion of his first monthly column on climate change at the NY Times. Gillis wisely chose to write his first column on energy innovation, with a focus on nuclear power and China:
We have to supply power and transportation to an eventual population of 10 billion people who deserve decent lives, and we have to do it while limiting the emissions that threaten our collective future.

Yet we have already poured so much carbon dioxide, the main greenhouse gas, into the atmosphere that huge, threatening changes to the world’s climate appear to be inevitable. And instead of slowing down, emissions are speeding up as billions of once-destitute people claw their way out of poverty, powered by fossil fuels.

Many environmentalists believe that wind and solar power can be scaled to meet the rising demand, especially if coupled with aggressive efforts to cut waste. But a lot of energy analysts have crunched the numbers and concluded that today’s renewables, important as they are, cannot get us even halfway there.

“We need energy miracles,” Mr. Gates said in a speech three years ago introducing his approach, embodied in a company called TerraPower.

A variety of new technologies might help. Bright young folks in American universities are working on better ways to store electricity, which could solve many of the problems associated with renewable power. Work has even begun on futuristic technologies that might cheaply pull carbon dioxide out of the air.

But because of the pressing need for thousands of large generating stations that emit no carbon dioxide while providing electricity day and night, many technologists keep returning to potential improvements in nuclear power.
The conclusion reached by Gillis is a logical consequence of doing the math on energy and carbon dioxide. He concludes, quite rightly:
In effect, our national policy now is to sit on our hands hoping for energy miracles, without doing much to call them forth. 
While we dawdle, maybe the Chinese will develop a nice business selling us thorium reactors based on our old designs.
Of course, "we" are doing a lot more than just sitting on our hands -- we are fighting over a largely symbolic piece of pipe going across the Canadian border, we are waging battles over the arcana of climate science, we are blaming every disaster on carbon dioxide and we are using the climate issue to demonize our opponents (whatever their views). So Gillis is right that we could be spending our efforts much more productively.

The Chinese are certainly not sitting on their hands. At the Washington Post, Brad Plumer has an excellent post on a new Deutsche Bank report (here in PDF) on China's growing pollution problem. Plumer writes:
Earlier this year, when Beijing was choking on record levels of smog, observers wondered whether China would ever get its pollution problem under control. It’s an insanely difficult question, with huge implications for everything from climate change to the global economy.

So here’s one stab at an answer, in the form of a big recent analysis (pdf) from three Deutsche Bank economists. The bad news: Most of China’s current attempts to curb pollution are failing badly — the country is on pace for ever-higher levels of smog that could throttle the nation’s economy and trigger out-of-control protests.

But there are reasons for optimism here, too: It’s still technically possible for China to get a handle on its smog problem without abandoning economic growth. The country will just have to revamp its energy and transportation policies entirely. Starting… now.
Plumer's analysis is worth reading in full, as is that of Kate Mackenzie at FT Alphaville. I agree with Mackenzie's skepticism about the pace at which China's energy intensity of GDP can be reduced. Nonetheless, the comparison by Deutsche Bank with the experience of the UK over the past 60 years is illustrative. The figure above, from the report, shows the evolution of the UK energy mix from 1948 to 2008, moving from almost total dependence on coal to an energy mix with gas as the leading source of energy.

There are reasons why China is not a good analogue to the UK. For instance, Deutsche Bank projects that China will increase its installed nuclear capacity by 10 times by 2030. That would mean something like 170 new nuclear power plants (China currently has 17). For its part, the Chinese government projects 255 new plants. Either way (I'll take the over, thank you), the rate of build works out to about 1 new plant per month from now until 2030. Sounds fantastic? I thought so too until I learned that China built 11 new nuclear plants in 2011. Apparently not content with its domestic build-out, China is planning to export its nuclear technologies in the near-term. (Perhaps the US Congress will take note as related to US competitiveness, but I digress.)
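
A quick back-of-envelope check of that build rate (my arithmetic, not Deutsche Bank's, assuming roughly 17 years between now and 2030):

# Back-of-envelope check of the nuclear build rate quoted above.
months_to_2030 = (2030 - 2013) * 12  # ~204 months, writing in early 2013

for source, new_plants in [("Deutsche Bank", 170), ("Chinese government", 255)]:
    rate = new_plants / months_to_2030
    print(f"{source}: {new_plants} plants / {months_to_2030} months "
          f"= {rate:.2f} plants per month")

Either projection lands in the neighborhood of one new plant per month, so the figure checks out.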

China can be thought of as a microcosm of the global economy. As China becomes richer and sees more of its energy-intensive activities shift offshore, its pollution problems will migrate to the next generation of developing economies, perhaps elsewhere in Asia and eventually in Africa. As I have argued, the energy demands of the future are likely to be massive, and to meet this demand coal (and other dirty energy technologies) is just not going to work -- as the image at the top of this post shows -- despite its dominance in the energy mixes of China and India today.

The bottom line from these excellent reports and analyses should be abundantly clear: Looking to the energy future, one is necessarily either pro-nuclear and pro-gas (fracking) OR one is pro-carbon dioxide and pro-pollution. Which are you?

08 March 2013

No Superpowers for the EU Science Adviser


Writing in Science today, Kai Kupferschmidt has an excellent profile of Anne Glover, Europe's chief scientific adviser. The profile follows the typical arc of such discussions -- a longing for power, success in institutionalization, followed by disappointment in the realpolitik of the position. At the top of this post you can see Glover giving a keynote talk last month at the STEPS science policy conference in Sussex, UK.

An anecdote related by Kupferschmidt tells the tale:
[In] a magazine interview in July, Glover argued that eating genetically modified food was no riskier than eating conventionally farmed food—a stance at odds with the beliefs of many Europeans.

She says she wanted to give evidence a voice. “By all means, people can say, for ethical reasons, for philosophical reasons, for economical reasons, for political reasons, I am not keen on that,” she says. “But you cannot say it is dangerous, when it isn’t.”

The interview sparked a debate in the European Parliament and an official request by one of its members asking whether the commission agreed with Glover’s stance. The reply was telling. The chief science adviser, the commission wrote in its answer, “has a purely advisory function and no role in defining Commission policies. Therefore, her views do not necessarily represent the views of the Commission.”
Here is what José Manuel Barroso, president of the European Commission and to whom Glover reports, had to say in full (here as a DOC) about the controversy, from which Kupferschmidt quotes:
The Commission wishes to use the opportunity to clarify to the Honourable Member the role of the Chief Scientific Adviser (CSA). The CSA reports directly to the President of the Commission and has the task to provide independent expert advice to the President on any aspect of science, technology and innovation and the potential opportunities and threats to the EU stemming from new scientific and technological developments. Likewise, the CSA has a role in enhancing public confidence in science and technology and to promote the European culture of science. In this context, the CSA has a role in stimulating societal debate on new technologies and to communicate the existing scientific evidence about such technologies. The CSA has a purely advisory function and no role in defining Commission policies. Therefore, her views do not necessarily represent the views of the Commission.
The science adviser has a role much like one would expect in the real world of politics. However, the image of the science adviser held by many, including some scientists, remains tied to the mythology of the position based on unrealistic expectations of truth speaking to power.

That mythology shows up in Kupferschmidt's profile as well.
Contrast Glover’s access with that of John Holdren, the latest in a long list of éminences grises tapped to advise U.S. presidents. At the annual meeting of AAAS (Science’s publisher) in Boston last month, Glover says that Holdren told her that he was in and out of Barack Obama’s office up to four times a day in the run-up to important decisions.
Four times a day!? Based on John Holdren's recent talk in Boulder, in which he described his role in the Obama Administration, and on the history of the position (here in PDF), this claim is best viewed as a bit of science-adviser-to-science-adviser braggadocio. The science adviser in the US system sits outside the circle of close presidential advisers, not least because the position is congressionally mandated and thus not subject to executive privilege. When important decisions are being made the science adviser is asked to leave the room. No science adviser that we interviewed (going back to JFK) met with the president up to four times per day -- once a month might be more realistic.

Rather than taking from this a sense of access-envy, Glover should take some comfort in knowing that her role as science adviser is actually not so different from that experienced by advisers in the US:
European Commission President José Manuel Barroso, the driving force behind the creation of her position, hasn’t bestowed superpowers on Glover, however. On the contrary, after years of discussions in Brussels, the science adviser’s office became a “casualty” of austerity measures, Glover says. She has no budget of her own and just five staff members—one-half of the size of her team in Scotland.
While there is a big difference between having superpowers and serving as a mid-tier bureaucrat, the key to the success of the science adviser role is to recognize the realities of the position and take full advantage of what is nonetheless a unique role in government.

05 March 2013

Graph of the Day: Global Weather Disasters and GDP

Data from Munich Re and the United Nations. The graph shows the ratio of global weather-related disaster losses to global GDP, expressed in 2011 dollars and calculated at market exchange rates. For a peer-reviewed analysis which goes into some depth on this subject, see this Munich Re-funded study:

E. Neumayer and F. Barthel. 2011. Normalizing Economic Loss from Natural Disasters: A Global Analysis, Global Environmental Change, 21:13-24 (here in PDF).

They conclude: "there is no evidence so far that climate change has increased the normalized economic loss from natural disasters."
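
The metric in the graph at the top is straightforward to reproduce given the two series. A minimal sketch with made-up numbers (not the actual Munich Re or UN data):

# Illustrative sketch of the plotted metric: global weather-related
# disaster losses divided by global GDP, both in constant 2011 US dollars
# at market exchange rates. All numbers below are made up.
losses_2011usd = {1990: 40e9, 2000: 65e9, 2010: 130e9}
gdp_2011usd    = {1990: 40e12, 2000: 52e12, 2010: 68e12}

for year in sorted(losses_2011usd):
    ratio = losses_2011usd[year] / gdp_2011usd[year]
    print(f"{year}: losses = {100 * ratio:.2f}% of global GDP")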