I have a new column up at The Breakthrough Institute which asks, "How Much Energy Does the World Need?" In it I apply a methodology of "backcasting" to calculate how much energy consumption is implied by postulating that in 2035 global per capita energy consumption equals that of Bulgaria, Germany and the US in 2010. The results imply a big gap between energy access as you or I might understand it and (a) projections of where the world is headed and (b) official definitions of what constitutes "energy access" (see figure above).
Lots of numbers crunched and three bottom-line conclusions reached -- comments/critique welcomed. See it here and please feel free to return and comment.
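For readers who want a feel for the arithmetic behind the backcasting exercise, here is a minimal sketch in Python. The per capita consumption figures and the 2035 population below are round placeholder numbers of my own choosing for illustration, not the values used in the column:

```python
# Sketch of the "backcasting" arithmetic: assume everyone in 2035 consumes
# energy at the 2010 per-capita rate of a benchmark country, then multiply by
# projected 2035 population to get the implied global demand.
# NOTE: all values below are illustrative placeholders, not the column's figures.

TOE_PER_CAPITA_2010 = {      # tonnes of oil equivalent per person (illustrative)
    "Bulgaria": 2.4,
    "Germany": 3.9,
    "United States": 7.0,
}
POPULATION_2035 = 8.7e9      # illustrative population projection
MTOE_WORLD_2010 = 12_700     # illustrative 2010 world primary energy demand, Mtoe

for country, per_capita in TOE_PER_CAPITA_2010.items():
    implied_mtoe = per_capita * POPULATION_2035 / 1e6   # toe -> Mtoe
    ratio = implied_mtoe / MTOE_WORLD_2010
    print(f"{country:>13}: implied 2035 demand ~{implied_mtoe:,.0f} Mtoe "
          f"(~{ratio:.1f}x 2010 world demand)")
```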
27 February 2013
26 February 2013
Promises and Paradoxes of Scientific Authority
Earlier this month I participated in an excellent symposium on the global politics of scientific advice organized by the STEPS Centre at the University of Sussex. They have now put videos from all of the sessions online. Above is my lecture.
You can see the rest (highly recommended), here.
24 February 2013
The Russian Meteor and Global Warming
I get a lot of email from readers of this blog. Feedback, comments, critique and pointers are always welcomed. So too are fun items like this which came yesterday, from an anonymous but clearly brilliant reader (Thanks;-).
Enjoy!
Scientists indicate that the recent meteor strike in Russia may have been connected to global warming.
In comments sent to reporters, NCAR Distinguished Senior Scientist and Nobel Prize winner Kevin Trenberth stated that the 4% increase in water vapor in the atmosphere can lead to up to a 10% increase in precipitation, and probably even more if it all converges on one spot. "If this is not massive precipitation, then I don't know what is" said Trenberth. When pressed by reporters on the magnitude of the effect, Trenberth was reluctant to give an answer, noting only that it undoubtedly made the impact worse.
NCAR Senior Scientist Gerald Meehl added that the strike was like throwing a rock into a pond "on steroids." Sales and illicit use of steroids were reported to have surged following Meehl's remark.
Trenberth and Meehl's comments were quickly reported by blogger Joe Romm of "Climate Progress", who headlined his piece "Bombshell on Russian Meteor". The headline prompted Russian and U.S. forces to go to a higher state of military alert.
Not far behind Trenberth and Meehl, Jennifer Francis of Rutgers University stated to reporters that the meteor strike was precisely the kind of extreme event expected from climate change. Noting that the meteor strike was directly associated with a rapid decline of (lake) ice, Francis concluded that the ice loss had caused the meteor.
Jeff Masters of the Weather Underground provided an extensive summary of the event in a blog post. Masters emphasized the higher energy content of the atmosphere connected to global warming as a likely cause of this extreme event. While acknowledging that a much larger meteor strike had occurred over 100 years earlier, Masters stated "this recent occurrence shows a clear upward trend since the middle of the last century". He added that the fact that this upward trend coincides with the global warming trend - and particularly that it occurs at the end of the record - provides clear evidence of the role of human-caused climate change.
At a hastily called conference for reporters, Susan Hassol and colleagues at Climate Communication also emphasized the importance of climate warming, noting the possibility of species extinction as a result. This appears to be confirmed by statements from those living in the area near the meteor impact, who report that fish in the lake where the meteor impacted were "well poached". The consequences are not entirely negative - local residents have come together for a previously unplanned "fish fry" dinner.
Providing an alternative viewpoint, Roger Pielke, Jr. of the University of Colorado stated that the recent IPCC Special Report on Extreme Events makes no connection between human-induced climate change and meteor strikes. He also emphasized that there was no existing peer-reviewed literature supporting such a connection. Pielke Jr.'s statements were quickly denounced as being from a known skeptic and denier. Many noted that Pielke Jr. was not a climate scientist and therefore was totally unqualified on the subject. Qualifications of many of those denouncing Pielke, Jr. included watching the Al Gore movie "An Inconvenient Truth" at least five times and faithfully reading blogs by Joe Romm and Bill McKibben.
MIT scientist Kerry Emanuel acknowledged that Pielke, Jr. was correct on the absence of any evidence linking the meteor strike with global warming. However, Emanuel noted, "absence of evidence is not evidence of absence." There was general agreement on this point. Politicians quickly hailed this statement as eliminating any need for supporting evidence in speeches and policies pertaining to science.
Overall, there appears to be an overwhelming consensus that the meteor strike was associated with climate change. A survey conducted by the Yale University Climate Change Communication project found that 97% of climate scientists and 4 out of 5 dentists agreed that the Russian meteor strike is consistent with climate change.
23 February 2013
Interesting Update on the Sandy "Hurricane Deductible"
Further update 21 May 2013: The NWS Service Assessment is out (here in PDF) and it turns out that Sandy was a hurricane by NHC criteria in New Jersey as well.
Yesterday the NHC released an update to its hurricane reanalysis page which lists Sandy as a hurricane in the state of NY, but not NJ. The new update is consistent with the recent Sandy assessment, but far more black and white -- a storm is or is not on the list as a hurricane.
In New York following Sandy this happened (emphasis added, and for more background go here):
The New York State Department of Financial Services and Governor Cuomo informed the insurance industry that hurricane deductibles should not be applied. The decision was based on the fact that the storm did not sustain hurricane-force winds while over New York.
The State and Governor were incorrect in that judgment, according to the NHC. What that implies is that a bunch of homeowners (technically/legally) may owe insurers a refund (if they have already been paid for losses) or are owed less in payments under their insurance policies (if they have not been paid).
In practice, there is unlikely to be a large public demand for enforcement of the hurricane deductible, and insurance companies may decide that it is not worth their effort, as most had their losses covered by reinsurers. Will any reinsurers take a closer look? I'd guess it is unlikely, and if they did we might never hear about it anyway.
As a case study in the application of "evidence-based policy" you won't find a better one than the hurricane deductible. Sometimes "evidence" matters, sometimes it does not. Sometimes we care about whether it matters, sometimes we do not.
21 February 2013
How Fast Does Energy Access Occur?
The figure above was shown by Morgan Bazilian, Deputy Director of the Joint Institute for Strategic Energy Analysis, in his talk at our Center today (details plus a webcast of the talk here). It comes from Chapter 19 of the 2012 Global Energy Assessment by IIASA (here in PDF).
The graph shows for 10 countries the historical growth in energy access. In 1920, only 35% of Americans had energy access (here shown as "electricity access" defined as "household electrification"). This total reached 100% by the mid-1950s. In contrast, Mexico was at about 35% access in 1930, and has yet to get all the way to the 100% mark. China went from 35% in 1970 to 100% by about 2005, a very similar rate to that of the United States. You can see India is following a much shallower trajectory, going from about 25% in 1980 to 65% in 2010.
How fast can energy access occur? These data provide some insight into that question.
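The implied pace of electrification is easy to put a number on. The start and end points below are my approximate readings of the chart, so treat the rates as rough:

```python
# Rough rates of electrification implied by the figure, in percentage points of
# household access per year. Start/end values are approximate readings of the chart.
episodes = {
    "United States": (1920, 35, 1955, 100),
    "China":         (1970, 35, 2005, 100),
    "India":         (1980, 25, 2010, 65),
}

for country, (y0, a0, y1, a1) in episodes.items():
    rate = (a1 - a0) / (y1 - y0)
    print(f"{country:>13}: {a0}% -> {a1}% over {y1 - y0} years "
          f"= {rate:.1f} percentage points per year")
```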
20 February 2013
What is Basic Research?
Basic research is a political symbol. What is a "political symbol"? I explain the notion of a political symbol in my recent paper on basic research (here in PDF):
Elder and Cobb (1983) define a symbol as: ‘‘any object used by human beings to index meanings that are not inherent in, nor discernible from, the object itself.’’ They continue:
Literally anything can be a symbol: a word or a phrase, a gesture or an event, a person, a place, or a thing. An object becomes a symbol when people endow it with meaning, value or significance.
In his classic essay, Sapir (1934) distinguishes two types of symbols, referential and condensational. Referential symbols are ‘‘economical devices for purposes of reference.’’ So each of the following is an example of a referential symbol - @, ?, &, Z, WORD, ;-). A second type of symbol distinguished by Sapir is one that carries with it ‘‘emotional tension in conscious or unconscious form.’’ Examples of such symbols would include 9/11, a swastika, the American flag, and your name. Sapir asserts that ‘‘society is peculiarly subject to the influence of symbols in such emotionally charged fields as religion and politics.’’
Lasswell et al. (1952) define ‘‘key political symbols’’ as those which occur ‘‘in the flow of political statements,’’ and distinguish three types: symbols of identification (referring to people and groups), symbols of demand (referring to preferences and volitions) and symbols of expectation (referring to assumptions of fact).
Basic research serves as a symbol of identification, demand and expectation (see the paper for discussion, but I am sure that you can think of many examples). Its plasticity means that it fulfills multiple roles simultaneously, as shown in the following table (from this essay in PDF).
So basic research is no one thing, and as such it can be fruitless to argue over its exact meaning. Similarly, other political symbols -- 9/11, the American flag, a swastika -- carry with them many meanings, some contradictory and irreconcilable.
In the politics of science "basic research" -- and its variants in use today such as fundamental research and transformative research -- allow scientists to claim to be following their curiosity wherever it may lead, while policy makers can claim something quite different, namely that funding such work is "basic" or "fundamental" to economic growth and other societal benefits. The agreement to carry contradictory definitions at once is a key factor in the stability of post-World War II science policy (and how we got the linear model). However, as budgets get tighter and the public demands accountability and results that stability has shown signs of strain.
The popularization of "basic research" via Vannevar Bush's seminal 1945 report Science--The Endless Frontier was no accident. Again quoting from my paper:
Bush explained that he made an explicit decision to use the phrase ‘‘basic research’’ because of its malleability in political discourse. Bush wrote in his memoirs of this explicit decision (Bush 1970):
To persuade the Congress of the pragmatically inclined United States to establish a strong organization to support fundamental research would seem to be one of the minor miracles… When talking matters over with some of these [people on Capitol Hill], it was well to avoid the word fundamental and to use basic instead.
Bush’s semiotic innovation is well grounded in political theory. Lasswell (1969) observes ‘‘ambiguity is an aid to concerted action.’’ He continues, ‘‘A high degree of generality is essential to popular appeal; symbols must be sufficiently vague to enable the individual to transfer his private loves and hates and hopes and fears to the slogans and catchwords of the movement.’’
So what is "basic research"?
It is motherhood, apple pie and all that is good. It is, as Brian Cox would say, axiomatically a good thing. As such, "basic research" as a political symbol has proven to be a big obstacle to science policy research. After all, why question something that is axiomatically good? What are you, anti-science? Further, the inherent virtue of basic research means that precise knowledge of mechanisms of that goodness are not needed.
If decisions about science and utilizing science are to be systematically evaluated and refined, it will be necessary to move beyond political symbols. But political symbols, obviously, are powerful things.
Interesting Timing to be Removed from GEC Editorial Board
Five days ago I critiqued a shoddy paper by Brysse et al. 2013 which appeared in the journal Global Environmental Change. Today I received notice from the GEC editor-in-chief and executive editor that I have been asked to "step down from the Editorial Board." They say that it is to "give other scientists the chance to gain experience of editorial duties."
Over the past 20 years I have served on the editorial boards of about a dozen or so academic journals. I have rolled off some when my term was up, and continued for many years with others. I have never received a mid-term request to step down from any journal. My 6 years with the GEC editorial board is not long in academia, and certainly much shorter than many other serving members.
Are my critique and the request to step down related? I can't say. It is interesting timing to be sure. Perhaps it is an odd coincidence. Perhaps not. I did reply by accepting their request and asking the following two questions which might help to clarify the terms of my release:
Could you tell me which other members of the editorial board are being asked to step down at this time? And also, could you tell how many others have served on the board 6 years or longer and remain on the board?
If I get a reply I will update this post.
UPDATE: I just checked the GEC editorial board from 2005, the year before I was invited to join ($ here). There are 13 members of the 2005 board who continue through 2013 ($ here). If those 13 members (of 38 total in 2013) have not all been asked to "step down" at this time, then yes, I am getting "special" treatment.
UPDATE 2: Neil Adger, editor of GEC, replies to explain, contrary to the earlier email, that I have been removed from the editorial board due to a perception of my "waning interest in the journal" citing my declining of 3 reviews last year (I'd guess overall that I declined 50 or more requests to review last year and took on about 12, welcome to academia;-). Of course, he could have asked about my interest before removing me from the Board. He did not comment on my critical blog post. I take his response to mean that I am indeed the only one who has been removed at this time. So there you have it, another climate ink blot. Coincidence? You be the judge.
UPDATE 3: Neil Adger has written a second email to me which he has asked me to post in the comments here. My response to him is here. And here is the original email from GEC dropping me from the Board. All info is in sight, people can make up their own minds about this academic tempest in a teapot.
I am of course happy to make way for other scientists to "gain the experience of editorial duties." However, if my critique of a GEC paper is in any way related to my removal from the editorial board, then the message being sent to those other scientists is pretty chilling. For my part, I value my academic freedom to offer critique as I see things far more than being allowed into certain clubs.
16 February 2013
Faith Based Science Policy
Brian Cox is a physicist and, largely due to his ubiquitous presence on the BBC, the Generation X face of British science. He also has more than a million Twitter followers. Today he tweeted this:
It’d be wonderful if Russian meteorite were enough to make Nobel Prize winner Andre Geim’s Davos dream come true ft.com/cms/s/0/ad8e9d…
— Brian Cox (@ProfBrianCox) February 16, 2013
Readers here may recall my critique of Geim's piece, which centered on his wish for an asteroid to be on a collision course with the Earth in order to motivate policy makers to open their wallets for scientific research. Apparently someone must have called Cox's attention to my critique, because Cox responded to me today with a couple of tweets:
@rogerpielkejr your Geim critique says to me that social scientists can’t agree on analysis, not that R&D spend isn’t vital to growth.
— Brian Cox (@ProfBrianCox) February 16, 2013
@rogerpielkejr I think we should simply state that basic research is good as an axiom, and one day you’ll work out how to measure impact ;)
— Brian Cox (@ProfBrianCox) February 16, 2013
Despite the smiley, Cox's tweets betray two conceits of the scientific establishment that hold on both sides of the Atlantic. One is that "basic research" is desirable -- not as a matter of evidence but, in Cox's words, as an axiom. The second is that social scientists, and science policy scholars in particular, exist to generate evidence in support of that axiom in order to keep public funds flowing. Both conceits are problematic in science policy.
In a paper published in Minerva last year I explored the origins and symbolic significance of the phrase "basic research" (read it here in PDF). In that paper I argued that the phrase originated about 1920 in the context of the US Department of Agriculture, where "research was the basic work" of the agency. The phrase was shortened to "basic research" which ironically enough meant what we today call "applied research."
Over time the phrase became part of the linear model of innovation, shown in the figure at the top of this post. The model is faith based, meaning that the relationship of basic research funding to societal benefits is taken as an "axiom" which often finds its expression in a misreading of economics. Scientists often demand a privileged place for science in government budgets based on claims that in "basic research" lies the key to growth and prosperity for all.
Unfortunately, the relationship of so-called "basic research" and outcomes like economic growth and other societal benefits remains poorly understood. For instance, in 2007, Leo Sveikauskas of the Bureau of Economic Analysis surveyed the economy-wide returns on R&D (here in PDF) and found a complex picture at odds with the elegance of the linear model:
Returns to many forms of publicly financed R&D are near zero . . . Many elements of university and government research have very low returns, overwhelmingly contribute to economic growth only indirectly, if at all, and do not belong in investment.
The exceptions that he cites include federal R&D in health, agriculture and defense -- all instances of mission-oriented applied research. The issue is further complicated by the fact that economists don't really understand where economic growth comes from.
There is of course a parochial political dimension at work here as well, which limits a broader discussion of how to better relate research with societal objectives. Benoît Godin, the innovation scholar at the Institut National de la Recherche Scientifique in Montreal, explains (here in PDF):
The problem is that the academic lobby has successfully claimed a monopoly on the creation of new knowledge, and that policy-makers have been persuaded to confuse the necessary with the sufficient condition that investment in basic research would by itself necessarily lead to successful applications.
The politics help to explain why public debates over science policy tend to devolve into simplistic appeals for more "basic research" funding for scientists, rather than a more sophisticated discussion of trade-offs within science, or even how it is that we expect that R&D funding will contribute to the promised societal benefits. Once you take the importance of basic research as an axiom, the need for science policy research on the role of science in society disappears, except as handmaiden to the science lobby.
Cox is certainly not alone in making a faith-based appeal for science funding. Of course the great irony here is that scientists who appeal to the importance of evidence in the making of policy tend to forget that good advice when it comes to the public support of their work.
15 February 2013
Science is the Shortcut
UPDATE 20 Feb 2013: Five days after writing this critique I was asked to step down from the GEC editorial board.
There is a new paper out by Brysse et al. in Global Environmental Change (here $) which includes as co-authors Naomi Oreskes (author of Merchants of Doubt) and Michael Oppenheimer (long-time IPCC contributor and a contributing lead author for the AR5). The authors report a remarkable finding -- they have identified a shortcut to divining the true state of knowledge of the science of climate change.
As the authors explain:
Evidence from recent analyses suggests that scientists, particularly acting in the context of large assessments, may have underestimated the magnitude and rate of expected impacts of anthropogenic climate change. We suggest that this underestimation reflects a systematic bias, which we label “erring on the side of least drama (ESLD)”.
What ESLD therefore means is that when scientists make a claim about climate change, particularly via the IPCC and other assessments, the presence of a systematic bias indicates that the odds are that things are really much, much worse. ESLD therefore offers a short cut to anticipating where climate science is headed.
An important reason for this bias, the authors assert, is of course none other than those evil skeptics:
[O]ne possible reason why scientists may have underestimated the threat of anthropogenic warming is the fear that if they don’t, they will be accused by contrarians (as was Schneider) of being alarmist fear-mongers. That is to say, pressure from skeptics and contrarians and the risk of being accused of alarmism may have caused scientists to understate their results.
Not only is the accusation of a systematic bias an insult to the integrity of practicing scientists, but the entire paper is built on an empirical foundation that does not touch the ground.
Let's take a closer look at the data claimed to support the ESLD hypothesis. The paper examines literature on IPCC predictions for temperature and sea level, hurricanes, and the role of greenhouse gases from permafrost melting in climate models.
- It finds that sea level is running well below the 1990 IPCC prediction and above the 1995 and 2001 predictions (if anyone can make sense of 2007 IPCC predictions then you get a bonus point).
- It finds that observed temperature increases are consistent with the predictions of all 4 IPCC assessments.
- It finds that the IPCC accurately reflects the community understanding on hurricanes.
- It finds that permafrost melting is not included in climate models, representing a "potentially profound bias in the climate projections—not toward overestimation of climate change, but toward its underestimation."
- It also cites Arctic sea ice and some science on rainfall.
Consider also the figure below on global temperatures, courtesy of Ed Hawkins (@ed_hawkins) at the University of Reading (for discussion see his post here). The figure shows that for the CMIP5 climate models (i.e., those to be used in IPCC AR5) observed temperatures are running at the low end of the ensemble of predictions, reflecting the recent "standstill" in global temperature (the figure also calls into question the Brysse et al. conclusion that global temperatures are consistent with all 4 previous IPCC reports, but I digress).
Least drama? Hardly.
What can be said from all this? Well pretty much nothing.
Consider that the 2007 IPCC report alone had 2,744 findings, almost all of which were reported in probabilistic fashion. Evaluating the accuracy of those findings comprehensively against the evolution of the climate system would be difficult if not impossible, both empirically and epistemologically. Further, one could easily pick out a few findings from the report which tell a different story: drought, methane emissions, flooding, disaster costs, Himalayan glacier melt and so on. In 2010, Robert Watson, a former Chair of the IPCC, noted of the errors discovered in the AR4 report: "The mistakes all appear to have gone in the direction of making it seem like climate change is more serious by overstating the impact. That is worrying." A Dutch assessment of the IPCC AR4 found much the same.
For some reason Brysse et al. neglected to consider a 2010 paper (co-authored by Michael Oppenheimer) which warned of the threat of a dramatic increase in poor Mexicans migrating north into the United States due to climate change. Talk about drama! However, an unnoticed 2012 paper in the same journal found that the original migration paper contained some serious methodological flaws, so never mind. This apparently was a case of Erring on the Side of Too-Much Drama (ESTMD - I can make up scientific-sounding concepts too). We could go on like this all day, and it would not provide any enlightenment.
Is there any evidence that climate scientists exhibit a systematic bias in their published work and assessments due to outside pressure or other factors? No.
As the ESLD paper shows, ironically enough, sometimes work that is badly off base gets into the literature and even scientific assessments are far from perfect. Nonetheless, science is the best route we have to gaining an understanding of the world that we live in. I'm sorry to say that there are no shortcuts. Or perhaps put another way, science is the shortcut.
A Case for Playing it Straight
I have just participated in a lengthy Twitter exchange with Marshall Shepherd (@DrShepherd2013), a professor at the University of Georgia and President of the American Meteorological Society. The occasion for the exchange was Dr. Shepherd's presentation yesterday at a Congressional Briefing sponsored by the Senate Committee on Environment and Public Works (his prepared remarks can be found here in PDF). The briefing focused on "the latest trends and scientific evidence related to the growing impacts associated with climate change."
Three other scientists testified at the briefing, but I am not interested in what they had to say. Shepherd's remarks are of interest because he is the President of a major scientific society. He was not at the briefing to present his personal opinions, but rather in his role as a leader and representative of the scientific community. Thus, in my view of the obligations of such a role, he had a duty to play it straight.
Unfortunately, as is so often the case when leaders in the climate science community find themselves before an audience of policy makers, on extreme events they go rogue, saying all sorts of things with little or no scientific basis. Even if the scientist includes many accurate statements in his/her remarks (such as the reality of significant risks of human-caused climate change), the presence of horsemeat ruins the lasagne.
Let's take a step back. The science on climate change, extreme events and disaster costs is clear and unambiguous. You don't need to take my word for it, you can find the science well summarized in the IPCC SREX. And if you don't like the IPCC you can find an array of peer-reviewed literature. I am happy to debate this topic with all comers as the data and analyses overwhelmingly support the claims below.
In a nutshell here is the state of the science (here I focus on the US as Shepherd did):
- US floods have not increased over a century or longer (same globally).
- US hurricane landfall frequency and intensity have not increased over a century or longer.
- US intense hurricane landfalls are currently in the longest drought (7 years+) ever documented.
- US tornadoes, especially the strongest ones, have not increased since at least 1950.
- US drought has decreased since the middle of the past century.
- US East Coast Winter Storms show no trends (here also).
- Disaster losses normalized for societal changes show no residual trends (US, other regions or globally).
- Trends in the costs of disasters are not a proxy for trends in climate phenomena.
What did Shepherd say in his briefing?
- He said his spouse can see the impacts of recent extremes
- Weather is now on steroids, like baseball players
- Weather is like your mood, climate is like your personality
- He cites two scientists quoted in The Guardian saying that all weather is affected by climate change
- The impact of climate change can be seen in the price of Cheerios
- The recent Northeast blizzard is related to climate change
- Suggests that climate change is "loading the dice for extremes"
As President of the AMS Shepherd does not have the luxury of using that platform to share his personal opinions on climate science that may diverge from that of the community which he represents, much less stretch or misrepresent broader findings. Leaders of important institutions of science -- like the AMS -- speak for more than themselves when presenting science in public fora. They also represent the credibility of their institution and climate science more generally.
In formal settings such as the briefing yesterday where experts meet politicians, I fully expect Democrats and Republicans to cherrypick experts convenient to the arguments they wish to see made. That is politics as usual. Leading scientific institutions play that same game with some considerable risk to their credibility.
My advice? On extreme events, please, just play it straight.
14 February 2013
Graph of the Day: Life Expectancy vs. Energy Use
The graph above shows energy use (expressed as kilograms of oil equivalent per capita) versus life expectancy at birth (expressed in years) for 151 countries in the World Bank Development indicators database that had data for both variables in 2010.
For the countries in the bottom half of the life expectancy distribution, the average life expectancy is 66 years and per capita energy use is 74% of the global average. For the countries in the top half, life expectancy at birth is 78 years and per capita energy use is 212% of the global average.
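For anyone who wants to reproduce this sort of split, here is a minimal sketch. It assumes a CSV exported from the World Development Indicators with hypothetical column names, and it uses the simple (unweighted) country mean as the "global average," so exact numbers will depend on the data and weighting used:

```python
import pandas as pd

# Minimal sketch of the split described above. Assumes a CSV with one row per
# country and the (hypothetical) columns: country, life_expectancy, energy_use_kgoe.
df = pd.read_csv("wdi_2010.csv")

world_avg_energy = df["energy_use_kgoe"].mean()   # simple country mean, unweighted
median_le = df["life_expectancy"].median()

bottom = df[df["life_expectancy"] <= median_le]
top = df[df["life_expectancy"] > median_le]

for label, group in [("Bottom half", bottom), ("Top half", top)]:
    le = group["life_expectancy"].mean()
    energy_share = group["energy_use_kgoe"].mean() / world_avg_energy
    print(f"{label}: mean life expectancy {le:.0f} years, "
          f"energy use {energy_share:.0%} of the global average")
```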
There are no doubt some distortions introduced into an analysis of this type due to the use of countries as the unit of analysis. Each country of course has a distribution of life expectancy and energy use within its borders. It would be ideal to have a dataset with the individual as the unit of analysis, but I suspect that such data is not soon forthcoming.
Nonetheless these data carry a powerful message -- Energy poverty is not the only factor which contributes to below-average life expectancies, but it is clearly a very important factor.
12 February 2013
The NHC Sandy Report
The National Hurricane Center has issued its final report on Hurricane Sandy (here in PDF). Kudos to the scientists at NHC for their fast and comprehensive work.
As I guessed, Sandy will remain categorized as a "post-tropical cyclone" upon making landfall in the US. This determination had already been established in a New Jersey executive order and widely implemented in insurance payouts. To determine otherwise would have wreaked havoc, regardless of the scientific justifications for the decision.
Interestingly however, future Sandy-like storms may be classified differently, at least in terms of how watches and warnings are issued:
[T]he NWS is exploring two proposals that, if adopted, would result in some changes to NWS products and warnings. The first proposal originates from the unique situation posed by Hurricane Sandy; it would give the NHC the option to continue issuing formal advisories on post-tropical cyclones as long as those systems pose a significant threat to life and property, and it would give the NWS the option to keep hurricane and tropical storm watches and warnings in place for those systems.
The second proposal focuses on developing an explicit warning product for storm surges.
If NWS adopts the first proposal, then it would mean that a future storm identical to Sandy in every respect would trigger the so-called "hurricane deductible" for insurance policies that are based on the issuance of a hurricane warning in the affected state. So from the perspective of insurance, Sandy was not a hurricane but a future Sandy might well be. Homeowners and insurance companies take note.
The other important news in the report is the preliminary US damage estimate, at $50 billion, which is the apples to apples number to use in comparing Sandy to the losses from our normalized loss database. It is also, coincidentally, the value I used in the figure below updating the normalized loss dataset through 2012:
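For readers unfamiliar with what "normalized" means in that dataset: the basic idea, in the spirit of Pielke et al. (2008), is to ask what a past storm would cost if it struck today's society, by scaling the historical loss for inflation and for growth in wealth and population in the affected area. A schematic sketch, with made-up numbers purely for illustration:

```python
# Schematic of loss normalization in the spirit of Pielke et al. (2008): scale a
# historical loss by inflation, real wealth per capita and population growth in
# the affected area. The storm and adjustment factors below are made up.

def normalize_loss(loss_at_event, inflation_adj, wealth_adj, population_adj):
    """Express a historical loss in present-day societal terms."""
    return loss_at_event * inflation_adj * wealth_adj * population_adj

# Hypothetical example: a $1 billion loss decades ago, with prices up 4x, real
# wealth per capita up 2.5x and coastal population up 3x since then.
normalized = normalize_loss(1.0e9, inflation_adj=4.0, wealth_adj=2.5, population_adj=3.0)
print(f"Normalized loss: ${normalized / 1e9:.0f} billion in today's terms")
```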
10 February 2013
The Horsemeat in Your Lasagna
I just returned from the UK, where last week a scandal erupted involving horsemeat and lasagne. It turns out that packages of Findus lasagne labelled as beef turned out to contain from 60% to 100% horsemeat. As a result the product was pulled and tough questions are being asked about food safety. The scandal is spreading to other food products and other countries.
Some might say, so what? Meat is meat, right?
Well, there are two objections. One is a safety issue. It turns out that horsemeat may contain chemicals - such as phenylbutazone used to treat inflammation -- that are harmful to humans. Unlike with cattle, horse medication and diet are not regulated with an eye on human consumption.
A second objection might seem a bit more obvious -- it is just wrong to label a product as containing beef when it does not. Who can argue with that, right?
So what does this have to do with climate change, I am sure you are asking. The horsemeat scandal came to mind this morning when I was reading David Leonhardt's opinion piece in the New York Times. There is a lot to like in the piece about innovation and clean energy. But there is also horsemeat. (I'm picking on Leonhardt's piece, but it is by no means unique or the most egregious example.)
Leonhardt opens by citing the increasing costs of extreme weather as one of the reasons "for a major government response to climate change" and concludes with:
In the end, the strongest economic argument for an aggressive response to climate change is not the much trumpeted windfall of green jobs. It’s the fact that the economy won’t function very well in a world full of droughts, hurricanes and heat waves.
The extreme weather meme has taken off -- there is no doubt -- but advocates for action take some risk by arguing that the most important reason for action is future extreme weather. The reasons for this have been detailed at length on this blog and in The Climate Fix.
A common reaction to my critique of this argument is to invoke an ends-justify-the-means sort of logic. For instance, right after I commented on Leonhardt's piece on Twitter, ASU professor and colleague Clark Miller responded via a tweet:
@rogerpielkejr @dleonhardt Climate events have people thinking. Now maybe think mitigation. Social not natural causality. Whats not to like?
— Clark Miller (@clarkamiller) February 10, 2013
This means/end debate has occurred too many times to count on this blog (and its predecessor), and my usual response is to be careful -- Dick Cheney used similar logic when linking 9/11 to Saddam Hussein. What did it matter, the argument went, if people wrongly associated 9/11 with Saddam? He was a bad guy, and if people supported getting rid of him for the wrong reasons, so what?
Climate campaigners often adopt a similar logic. What does it matter if people wrongly associate recent extreme events and disaster costs with climate change? Responding to it is a good thing, and if people support mitigation action for the wrong reasons, so what?
There are three objections here.
First, an argument that mitigation of greenhouse gases makes sense in terms of decreasing the future costs of extreme events is not a strong one: Even under the assumptions of IPCC, Stern Review, etc. the future costs of extreme events under the most aggressive scenarios of climate change actually decrease as a proportion of GDP (a simple illustration of this arithmetic appears at the end of this post).
The second objection is that the discovery of a little horsemeat in lasagne ruins the entire product. You might cite the tasty (and safe) noodles and tomato sauce, but the presence of horsemeat in the product defeats your argument. The science is just not there to connect increasing costs of disasters to climate change, much less individual phenomena like drought, floods and storms. It is horsemeat -- and don't put it into your product lest you compromise the whole package.
The third reason should be obvious but often appears to escape the calculus of many campaigners and journalists. Telling people that their lasagne contains beef, when it actually contains horsemeat is just wrong.
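To illustrate the first objection above with some toy arithmetic: if the economy grows faster than disaster losses, the share of GDP lost to disasters falls even as absolute losses rise. The growth rates below are purely illustrative and are not taken from the IPCC or the Stern Review:

```python
# Toy arithmetic: losses start at 1% of GDP and grow 2% per year while GDP grows
# 3% per year (both rates illustrative). The loss share of GDP falls over time.
GDP_GROWTH = 0.03
LOSS_GROWTH = 0.02

for year in (0, 25, 50, 75, 100):
    gdp = 100.0 * (1 + GDP_GROWTH) ** year
    losses = 1.0 * (1 + LOSS_GROWTH) ** year
    print(f"Year {year:3d}: losses = {losses / gdp:.2%} of GDP")
```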
06 February 2013
How Not to Argue for Increased R&D Funding
In a remarkably tone-deaf and obtuse op-ed in the FT, physicist and Nobel prize winner Andre Geim writes the following about science investments by government:
Until 40 years ago, the threat of hot and cold wars forced countries to look for long-term advantages over potential enemies. At the risk of being simplistic, let me say this threat fed back into investment in science, which allowed the industrial revolution to continue. Today, the threat from global warming, overpopulation and scarce natural resources seems insufficiently scary.
Western governments have been folding their blue-sky programmes in response to financial restraints and voters’ negative view of academic research. Davos persuades me there is little hope of change. This would require a change in human nature. I fear economists might be right this time, and Japanese-style stagnation will look like a best-case scenario. But I have a dream. The industrial revolution and economic growth continue. This is because astrophysicists find a huge cosmic rock on course to hit Earth in 50 years. This should be scary enough. The world can surely deflect this threat but will need to develop new knowledge and technologies.
I can't imagine that science will win additional support from governments by wishing for the possibility of global apocalypse -- even with brazen certainty that Bruce Willis can rocket up and save us -- as an excuse to boost government science funding. This is the tone-deaf part.
The obtuse part is Geim's treatment of advances in basic research as both necessary and sufficient for economic growth, when neither may be the case. Geim writes:
Without new knowledge, only derivative technologies are possible – and, however important, they are incapable of sustaining the sorts of economic growth rates the world has enjoyed since the coming of the industrial revolution.
In reality the world currently is spending collectively about $1.5 trillion on R&D in the public and private sectors, growing at a rate of 5.2% and 6.5% in the past two years.
More fundamentally, R&D is no doubt important to innovation and economic growth, but the connections are not direct. I recently explained:
The integration of post-war science policy with a misinterpretation of neo-classical economic theory led to the creation of a mythology of innovation that persists today. Benoît Godin, the innovation scholar at the Institut National de la Recherche Scientifique in Montreal, explains (here in PDF) that this mythology has practical consequences:
The problem is that the academic lobby has successfully claimed a monopoly on the creation of new knowledge, and that policy-makers have been persuaded to confuse the necessary with the sufficient condition that investment in basic research would by itself necessarily lead to successful applications. Be that as it may, the framework fed policy analyses by way of taxonomies and classifications of research and, above all, it was the framework most others compared to.
So in our public debates, rather than examining innovation policies and the complexities of securing economic growth, our discussions typically devolve into simplistic appeals for more federal R&D. (The full essay is here.)
Economic scare stories from scientists coupled with dreams of disaster are not a useful approach to science policy.
05 February 2013
Lincoln and the Expert
I am putting the finishing touches on my talk for tomorrow at the STEPS conference on science policy at the University of Sussex. I decided to open with the following anecdote.
In 1863 at the height of the Civil War Abraham Lincoln was approached by a weather forecaster named Francis Capen who said that he could predict the weather days in advance (imagine!). Such predictions, Capen argued, might help prosecute the war. In a note, Lincoln shared his opinion of Mr. Capen:
It seems to me that Mr. Capen knows nothing about the weather, in advance. He told me three days ago that it would not rain again till the 30th of April or 1st of May. It is raining now [April 28th] and has been for ten hours. I cannot spare any more time to Mr. Capen. (Whitnah 1961, pp. 114-15, as cited in this chapter, PDF).
The complex relationship of science and politics has a long history in both Europe and North America. My talk will highlight some of the important tensions that arise when a desire for authority meets demands for legitimacy.
03 February 2013
NOAA Slays the Billion-Dollar Disaster Meme
The National Oceanic and Atmospheric Administration keeps a tally of "billion dollar disasters" which have occurred in the United States. The number of disasters which exceed the billion dollar threshold has increased since 1980. This increase has been often cited as evidence that the climate has become more extreme and is attributable to emissions of carbon dioxide. A new peer-reviewed paper from NOAA pours cold water on both claims.
Adam Smith, of NOAA, and Rick Katz, a statistician and former colleague of mine at NCAR, have a paper just out in the journal Natural Hazards in which they take a close look at the index. What they find reinforces conclusions found in earlier work on disasters and climate change. Specifically, if you are looking for climate signals in extreme events, look first at climate data. If you are looking at loss data, avoid aggregated, non-normalized loss records.
Here is what the new analysis finds:
1. The number of billion dollar disasters has increased by about 5% per year since 1980, but (perhaps surprisingly) the loss per event has not:
[T]here is no apparent time trend in economic loss from individual disasters. In fact, a least squares trend analysis estimates a very slight decreasing trend of about 0.5 % per year.
So they find more events that exceed the billion dollar threshold, but not more severe events. (A toy version of this kind of trend estimate appears at the end of this post.)
2. The dataset is dominated by hurricane, drought and thunderstorm losses which together account for more than two thirds of all events and more than 80% of all losses. As readers here will know well, there are no long-term trends in normalized US hurricane losses, North American drought has decreased and there has been no increase in tornado damage (note that thunderstorm damage also includes hail and straight-line winds). This alone provides a strong caution against using the NOAA index for the purpose of claims about extremes in general.
3. The new paper warns about the use of crop losses, which are included in the NOAA data in the loss categories of tropical cyclone, severe storm, winter storm, wildfire, drought/heat, flooding and crop freeze -- that is to say, in every category. The warning is that:
Given the increasing trends in [crop] yields attributable to technological innovation and given fluctuations in price, it is difficult to attribute any part of the trends in losses to climate variations or change, especially in the case of billion-dollar disasters.
In fact, one of the most important contributions of the paper might be its finding that crop losses have not increased as a proportion of liability (as shown in the figure above from the paper), or as it concludes, "for these three major crops, the trends in losses are comparable in magnitude to the trends in liability."
The billion dollar loss dataset includes no adjustments for changes in crop value, yield or other relevant factors, nor does the widely cited Munich Re data. The implication of course is that some significant but unquantified portion of the increase in billion dollar losses reflects non-climatic trends in crop production and value.
4. Their paper concludes by noting that the billion dollar disaster database is adjusted only for inflation and has not been normalized to account for other changes. They explain that
The magnitude of such increasing trends is greatly diminished when applied to data normalized for exposure (Pielke et al. 2008).
Smith and Katz have properly identified the severe limitations to the NOAA billion dollar loss database. Any bets as to whether NOAA will issue a press release as a corrective to its earlier hyping of the index and promoting its misuse? Even if they don't, the new paper is a valuable contribution.
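As promised above, here is a toy version of the kind of least-squares trend estimate described in point 1: an event count that drifts upward at roughly 5% per year while the loss per event stays flat. The data are synthetic; the real estimates of course require the actual NOAA dataset:

```python
import numpy as np

# Toy illustration of point 1: least-squares trends fit to synthetic annual series,
# an event count growing ~5% per year and a flat per-event loss (in $ billion).
rng = np.random.default_rng(0)
years = np.arange(1980, 2012)
t = years - years[0]

counts = 3 * 1.05 ** t + rng.normal(0, 0.8, t.size)
loss_per_event = 5.0 + rng.normal(0, 1.0, t.size)

count_trend = np.polyfit(t, counts, 1)[0]
loss_trend = np.polyfit(t, loss_per_event, 1)[0]

print(f"Trend in number of events: {count_trend:+.2f} events per year")
print(f"Trend in loss per event:   {loss_trend:+.2f} $billion per year")
```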