Given the great interest in my earlier post on the recent PNAS paper by Rahmstorf and Coumou (RC11), I have decided to summarize various issues for those who are interested and would like to continue the discussion.
NOAA has posted some very interesting graphs of temperature change in Russia. These are extremely useful for documenting the extreme cherry picking found in RC11, which claimed as a top-line result an 80% probability that the Russian heat wave was caused by a general warming trend.
The graphs above show linear trends (top) and statistical significance (bottom) for any combination of start and end date 1880 to 2010 for the GISS dataset. At the NOAA website linked above you'll find similar graphs for the other available datasets. These graphs are the opposite of cherry picking.
These graphs also help us to clearly identify the various cherries that fill the bowl that is RC11. Here is a quick summary:
1. Linear trend cherry pick. In western Russia (a large region that includes Moscow, defined as 50-60N and 35-55E) there is no statistically significant warming trend from 1880 to the present (look at the huge area of white in the lower right-hand part of the bottom graph), and in fact none starting from any year before the 1930s.
RC11 are able to argue for a long-term linear trend by beginning their analysis of the Russian heat wave in 1910, which you can see from the top graph is one of the select start dates that yields a positive trend. They explain that 1910 was chosen because it reflects 100 years, a nice round number.
2. Station cherry pick. But even that linear trend, though positive, is not significant in the region. So RC11 perform another act of selectivity by focusing attention on one station -- Moscow -- rather than the broader, partial-continent-sized area that was the focus of the paper RC11 seeks to refute. In a blog post (but not in the paper) Rahmstorf appears to want to discount the entire lack of warming over western Russia based on a claim of a single improper station adjustment in a single dataset. It is not a coincidence that this analysis did not appear in the paper, as it is a stretch, even for PNAS.
3. Data set cherry pick. RC11 look only at the NASA GISS dataset and its adjustments, even though there are multiple other datasets that use different adjustments. Do they really want to imply that, based on claims about one station's adjustments in the GISS data, all datasets for the entire region are flawed? Ironically enough, they may find some sympathy for such arguments in Anthony Watts' Surface Stations work! ;-) By contrast, the information provided by NOAA shows that the lack of long-term warming can be seen in western Russia across the various temperature records, which use distinct adjustment procedures.
4. Non-linear trend cherry pick. With linear trends on a shaky foundation, RC11 adopt in their analysis an unconventional "non-linear trend" unique to the climate attribution literature. The "non-linear trend" is really just a highfalutin smoothing procedure that makes history irrelevant -- anything more than about 15 years before or after the year in question gets ignored. The effect of this smoothing is essentially equivalent to using linear trends over a much shorter period (i.e., those that appear in the top right corner of the top graph above), where there has been strong and significant warming.
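The start-date sensitivity in point 1 is easy to see for yourself. Here is a minimal sketch with a purely synthetic series (NOT real Russian data) that is flat until 1975 and then warms -- the fitted slope and its significance depend heavily on where you start the fit:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1880, 2011)
# Synthetic illustration only: no warming until 1975, then 0.04 C/yr,
# plus year-to-year noise.
temps = np.where(years < 1975, 0.0, 0.04 * (years - 1975.0))
temps = temps + rng.normal(0.0, 0.4, years.size)

def trend_and_t(x, y):
    """OLS slope and its t-statistic (|t| > ~2 is roughly significant)."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    se = np.sqrt(resid @ resid / (len(x) - 2)) / np.sqrt(np.sum((x - x.mean()) ** 2))
    return slope, slope / se

results = {}
for start in (1880, 1910, 1980):
    m = years >= start
    results[start] = trend_and_t(years[m].astype(float), temps[m])
    print(f"start {start}: slope {results[start][0]:+.4f} C/yr, t = {results[start][1]:+.2f}")
```

In this toy series the slope fitted from 1980 is several times larger than the slope fitted from 1880, even though the underlying data are identical -- the headline trend is a function of the chosen start date.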
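The equivalence claimed in point 4 can also be sketched. Below, a simple Gaussian kernel smoother with a ~15-year half-width stands in for RC11's "non-linear trend" (this is an assumed stand-in, not their exact procedure), applied to the same kind of synthetic flat-then-warming series. The rise implied by the smooth over the last 30 years comes out close to what a plain 30-year linear fit gives:

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1880, 2011)
# Synthetic illustration only: flat until 1975, then 0.04 C/yr plus noise.
temps = np.where(years < 1975, 0.0, 0.04 * (years - 1975.0))
temps = temps + rng.normal(0.0, 0.2, years.size)

def smooth(y, x, halfwidth=15.0):
    """Gaussian kernel smoother: each value is a weighted mean that
    effectively ignores data more than ~15 years away."""
    out = np.empty_like(y)
    for i, xi in enumerate(x):
        w = np.exp(-0.5 * ((x - xi) / (halfwidth / 2.0)) ** 2)
        out[i] = np.sum(w * y) / np.sum(w)
    return out

s = smooth(temps, years.astype(float))
smooth_rise = s[-1] - s[-31]                  # rise in the smooth, 1980-2010
lin_rise = 30 * np.polyfit(years[-31:].astype(float), temps[-31:], 1)[0]
print(f"smoothed rise: {smooth_rise:.2f} C, 30-yr linear rise: {lin_rise:.2f} C")
```

Because the kernel discards everything outside roughly 15 years, the smooth at the endpoint is governed by the recent short-term trend, not by the full record.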
These various selective methodological choices lead RC11 to conclude:
We conclude that the 2010 Moscow heat record is, with 80% probability, due to the long-term climatic warming trend.

As explained above, there is no long-term warming trend. There is a short-term warming trend, which may not even reach climate time scales of 30 years or longer.
What RC11 is, in a nutshell, is an analysis that does the following:
RC11 takes a short-term trend and an estimate of variability, and calculates the probability that particular thresholds will be exceeded over a 10-year time frame. That is it -- this is probability-textbook ball-and-urn stuff, padded with a lot of faux complexity.
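To see how simple the underlying calculation is, here is a minimal Monte Carlo sketch of that style of exercise (illustrative parameters of my own choosing, not RC11's numbers or method): given a trend and Gaussian interannual variability, what fraction of simulated series break their 100-year record within the next 10 years?

```python
import numpy as np

rng = np.random.default_rng(42)

def prob_new_record(slope, sigma, n_history=100, n_future=10, trials=20000):
    """Fraction of simulated series in which the n_history-year record
    is broken within the next n_future years, for a linear trend plus
    Gaussian interannual noise (illustrative parameters only)."""
    t = np.arange(n_history + n_future)
    y = slope * t + rng.normal(0.0, sigma, (trials, t.size))
    return np.mean(y[:, n_history:].max(axis=1) > y[:, :n_history].max(axis=1))

p_trend = prob_new_record(slope=0.05, sigma=1.0)  # modest warming trend
p_flat = prob_new_record(slope=0.0, sigma=1.0)    # stationary climate
print(f"record probability with trend: {p_trend:.2f}, without: {p_flat:.2f}")
```

In the stationary case the answer needs no simulation at all: for 110 exchangeable years, the chance the overall maximum falls in the last 10 is exactly 10/110. Adding a trend raises that probability -- which is the entire substance of the calculation.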
That some climate scientists are playing games in their research, perhaps to get media attention in the larger battle over climate politics, is no longer a surprise. But when they use such games to try to discredit serious research, then the climate science community has a much, much deeper problem.
Postscript: For those new here, I believe that the human influence merits our concern and we should be taking various actions. This post should be read in the context of issues of climate science policy, not climate policy per se.