I have an essay up at the Bulletin of the Atomic Scientists as part of a roundtable on the subject, "When politicians distort science." My essay begins, "Here we go again . . ."
Please have a look and please come back here with comments, critique or questions. Along with Robert Socolow and Randy Olson we'll be engaging this roundtable discussion for the next month or so, with new pieces coming out every week.
27 October 2011
Germany's Energy Transition: Carbon-Free to Carbon-Full
Der Spiegel asks whether Germany's ambitious energy transition is going according to plan. From the graph above, which is for Bavaria, it looks like Germany had better stock up on carbon offsets, because something has to give and I suspect that it won't be the lights going out.
Bill Clinton on Intelligent Design
The ever-quotable Bill Clinton surveys the recent debates among the field of Republicans vying for the 2012 nomination and renders his verdict:
I believe in God and know what they mean about intelligent design, but looking at those debates I had to wonder.

Masterful.
25 October 2011
The Games Climate Scientists Play
[UPDATE #3 11/2: A follow-up post is here.]
[UPDATE #2: I will be moving on to more (less?) fruitful topics. But let me wrap up this interesting episode by restating that I stand by everything in this post and the discussion in the comments here and elsewhere. The RC11 methodology does not make any use of data prior to 1910 insofar as the results are concerned (despite suggestions to the contrary in the paper). If there is a criticism of this post to be leveled it is, as several professional colleagues have observed in emails to me, that 1911 is not the right cutoff for the cherrypick, but it is more like 1980 (i.e., they argue that no data before 1980 actually matters in the methodology). This is a fair criticism. I'll be using the RC11 paper in my graduate seminar next term as an example of cherry picking in science -- a clearer, more easily understandable case you will not find.]
[UPDATE: At Real Climate Stefan Rahmstorf has a long and laborious post trying to explain not only the 1911 cherry pick, but several others that defy convention in attribution studies. In the comments below I publish Stefan's response to my query -- They used "trends" (using a new definition of that term in climate science) such that the "trend" from 1911 is the same as that from 1880. Look at the graph below and ask yourself how that can be -- Climate science as ink blot.]
Here is another good example why I have come to view parts of the climate science research enterprise with a considerable degree of distrust.
A paper was released yesterday by PNAS, by Stefan Rahmstorf and Dim Coumou, (also freely available here in PDF) which asserts that the 2010 Russian summer heat wave was, with 80% probability, the result of a background warming trend. But if you take a look at the actual paper you see that they made some arbitrary choices (which are at least unexplained from a scientific standpoint) that bias the results in a particular direction.
Look at the annotated figure above, which originally comes from an EGU poster by Dole et al. (programme here in PDF). It shows surface temperature anomalies in Russia dating back to 1880. I added in the green line, which shows the date from which Rahmstorf and Coumou decided to begin their analysis -- 1911, immediately after an extended warm period and at the start of an extended cool period.
Obviously, any statistical analysis depends on which data are included and which are not. Why did Rahmstorf and Coumou start with 1911? A century is a nice round number, but it does not have any privileged scientific meaning. Why did they not report the sensitivity of their results to the choice of start date? There may indeed be very good scientific reasons why starting the analysis in 1911 makes the most sense, and why the paper need not report the sensitivity of results to the start date. But the authors did not share that information with their readers. Hence, the decision looks arbitrary and appears to have influenced the results.
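The start-date sensitivity at issue can be made concrete with a toy example. The sketch below fits a linear trend to a purely synthetic anomaly series (not the actual Moscow record; the signal, oscillation, and noise parameters are all invented for illustration) from several different start years, showing how much the fitted "trend" can depend on where the analysis begins:

```python
import numpy as np

def trend_per_century(years, series, start_year):
    """Least-squares linear trend, expressed per century,
    using only the data from start_year onward."""
    mask = years >= start_year
    return 100.0 * np.polyfit(years[mask], series[mask], 1)[0]

# Synthetic anomaly series for illustration only: a slow warming
# signal plus a multidecadal oscillation and random noise.
rng = np.random.default_rng(42)
years = np.arange(1880, 2011)
series = (0.005 * (years - 1880)
          + 0.4 * np.sin((years - 1880) / 60.0 * 2 * np.pi)
          + rng.normal(0.0, 0.3, years.size))

for start in (1880, 1911, 1950, 1980):
    print(f"start {start}: {trend_per_century(years, series, start):+.2f} C/century")
```

Reporting results for a range of start years, as this loop does, is exactly the kind of sensitivity analysis whose absence is criticized above.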
Climate science -- or at least some parts of it -- seems to have devolved into an effort to generate media coverage and talking points for blogs, at the expense of actually adding to our scientific knowledge of the climate system. The new PNAS paper sure looks like a cherry pick to me. For a scientific exploration of the Russian heat wave that seems far more trustworthy to me, take a look at this paper.
24 October 2011
Lessons from the L'Aquila Lawsuit
In my latest Bridges column I discuss the lessons from the lawsuit involving scientists and the L'Aquila earthquake. Please have a look and if you have questions or comments, feel free to come back and submit them here.
Are US Floods Increasing? The Answer is Still No.
A new paper out today in the Hydrological Sciences Journal shows that flooding has not increased in the United States over records of 85 to 127 years. This adds to a pile of research that shows similar results around the world. This result is of course consistent with our work showing that increasing damage related to weather extremes can be entirely explained by societal changes, such as more property in harm's way. In fact, in the US flood damage has decreased dramatically as a fraction of GDP, which is exactly what you get if GDP goes up and flooding does not.
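The arithmetic behind that last point is simple enough to sketch. Assuming, purely for illustration (the growth rate and initial share below are placeholders, not figures from the paper), constant absolute flood damage and steady real GDP growth, damage as a share of GDP falls mechanically:

```python
GDP_GROWTH = 0.03  # 3% real GDP growth per year (assumed for illustration)

def damage_share(initial_share, years, gdp_growth=GDP_GROWTH):
    """Damage as a share of GDP after `years`, holding absolute
    damage constant while GDP compounds at gdp_growth per year."""
    return initial_share / (1 + gdp_growth) ** years

# An illustrative 0.2% share falls by roughly half in about 23 years:
print(round(damage_share(0.2, 23), 3))  # prints 0.101
```

At 3% growth the share halves roughly every 23 years even with no change in flooding at all, which is why flat flood trends plus rising GDP imply a declining damage fraction.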
I do not expect research to change anyone's views on the topic or alter the debate over climate change and extreme events. The debate has moved well beyond that which can be resolved empirically.
Bipartisan Criticism of NOAA Fishery Policies
"Your testimony cherry picks, you include what's good, but leave out what's not good"

The quote above comes from Representative Barney Frank (D-MA), criticizing the testimony of Jane Lubchenco, the administrator of NOAA, pictured testifying above, on the impacts of federal fishery policies. In fact, a bipartisan group of members of the US Congress has called for Lubchenco's resignation.
At issue are the effects of NOAA policies for management of fisheries on the economic health of the region's fishing economy. Lubchenco has alleged success and the locals say she is wrong. The local media sums up the issue as follows:
But the thumbs-down [Republican Senator] Brown gave Lubchenco here Saturday — before about 100 onlookers — underscored a pre-existing bipartisan judgment against her by Democratic Congressmen John Tierney and Barney Frank, who represent the ports of Gloucester and New Bedford, respectively.

A third congressman who works closely with Tierney and Frank on fisheries issues and shares their view of Lubchenco as a failure is Republican Walter Jones of North Carolina. That trio first asked for her removal from office in the summer of 2010. And a fifth thumbs down to Lubchenco has been delivered by New Bedford Mayor Scott Lang, with the final straw coming out of her appearance before the Senate Commerce Committee, meeting on Oct. 3 in Boston. Lang, along with Brown, Tierney and Frank, described her performance as patronizing and dishonest, after she credited her administration of the fisheries as restorative to the resource and the industry.

Kirk, a Democrat, has chosen a different tack — focusing her disappointment on President Obama and urging him to come to Gloucester to see for himself the harm done by administration fisheries policies.

Facing a cenotaph with the names of the more than 5,000 Gloucester fishermen lost at sea over the centuries, the Man at the Wheel — erected in 1923 to celebrate Gloucester's and the U.S. commercial fishing industry's 300th anniversary — made a fitting backdrop for Brown's call.

"Just a few weeks ago, Administrator Lubchenco told us ... in Boston that the fishing industry is on the rebound," Brown said in explaining his decision. "That incredible statement demonstrated a total lack of understanding of the situation in Gloucester, New Bedford and across New England."

He also pointed to her decision to leave the hearing before the last of the witnesses — distinguished academic, marine scientist and critic of Lubchenco policies Brian Rothschild — began testifying as a sign of her disrespect for fishermen.
Rothschild's testimony can be found here in PDF, in which he provides a scathing evaluation of NOAA's performance:
It appears that fisheries management is being prosecuted at a great cost to the Nation in terms of jobs, food security, and welfare. There have been many suggestions of ways to get the system back on track. But these suggestions have never seen the light of day. We conclude that the agency, when it does respond, reiterates the problems rather than provides solutions . . .

The management of the nation's fisheries by NOAA is an important national policy issue. NOAA's performance deserves a wider examination, especially by experts who are outside the region and thus a bit further from the local politics of the issue.
The issue of NOAA and New England fisheries is obviously bipartisan, and yet the national media and especially those who focus on science and the environment appear to have completely missed this issue (please do correct me if I am mistaken). Is this another instance of willful blindness when it comes to issues of science and politics under the Obama Administration? (One can imagine the froth had these exact events transpired under GWB.) If so, then such blindness merely reinforces a partisan divide rather than opening up these complex issues to a deeper discussion.
I will email some journalist colleagues for their views, and report back.
DISCLAIMER: I am a Fellow of CIRES here at the University of Colorado, a NOAA-affiliated research institute.
Politics and Science: A Coming Essay and Talk Today
These days I'm spending a bit more time on my old beat. Later this week I'll have a piece up at the Bulletin of the Atomic Scientists, as part of their periodic exchanges, focused on responding to the following prompt:
Republican presidential candidate and Texas Gov. Rick Perry recently questioned the science of climate change in ways so unsupported by evidence that Glenn Kessler, the "Fact Checker" columnist at The Washington Post, gave him a rating of "four Pinocchios." Perry's is but one scientific misstatement among many that regularly roil the US political scene. What is the proper scientific response to the political distortion -- or even outright rejection -- of science? In coming weeks, three Bulletin experts will offer authoritative and at times provocative analysis.

I suppose that last bit is supposed to refer to me ;-) My piece starts out, "Here we go again ..." I'll post a link on Thursday.
Also, today I am giving a talk here at CU on this same topic. Here are the details:
Scientists in Policy and Politics
COSI Seminar
Monday, October 24, 2011 - 4:00pm - 5:00pm
Engineering Bldg. - ECCR 1B55
Roger Pielke Jr.
Professor - Center for Science and Technology Policy Research - University of Colorado at Boulder
Scientists, and experts more generally, have choices about the roles that they play in today's political debates on topics such as global warming, genetically modified foods, and food and drug safety, to name a few. This talk is about understanding these choices, their theoretical and empirical bases, what considerations are important when deciding among them, and the consequences for the individual scientist and the broader scientific enterprise.
Roger A. Pielke, Jr. has been on the faculty of the University of Colorado since 2001 and is a Professor in the Environmental Studies Program and a Fellow of the Cooperative Institute for Research in Environmental Sciences (CIRES). At CIRES, Roger served as the Director of the Center for Science and Technology Policy Research from 2001-2007. Roger's research focuses on the intersection of science and technology and decision making. In 2006 Roger received the Eduard Brückner Prize in Munich, Germany for outstanding achievement in interdisciplinary climate research. Before joining the University of Colorado, from 1993-2001 Roger was a Scientist at the National Center for Atmospheric Research. Roger is a Senior Fellow of the Breakthrough Institute. He is also author, co-author or co-editor of seven books, including The Honest Broker: Making Sense of Science in Policy and Politics published by Cambridge University Press in 2007. His most recent book is The Climate Fix: What Scientists and Politicians Won't Tell You About Global Warming.
20 October 2011
See No Evil
Why have a number of areas of US science become so politicized?
One answer to this question is that those concerned about science in politics have ceded discussion of issues of science policy to the most overtly partisan, many of whom see science as nothing more than a convenient tool to extract political advantage. This dynamic manifests itself in the overwhelming selectivity of attention among those who purport to be concerned about science in politics.
Consider a few examples:
Remember when James Hansen was told that his access to the media would be limited and controlled by minders at NASA? Of course you do. It has been a talking point for years.
But what about when the Obama Administration recently muzzled scientists and other officials at the Department of Health and Human Services? If you frequent the science corner of the blogosphere you might have missed it (though if you visit the conservative-o-sphere you may have seen it). Here is what one long-time journalist said about the policy:
The new formal HHS Guidelines on the Provision of Information to the News Media represent, to this 36-year veteran of reporting FDA news, a Soviet-style power-grab. By requiring all HHS employees to arrange their information-sharing with news media through their agency press office, HHS has formalized a creeping information-control mechanism that informally began during the Clinton Administration and was accelerated by the Bush and Obama administrations.

AAAS? Chris Mooney? Crickets.
Remember when the Bush Administration was accused of couching its ideological preferences in the name of science in order to prohibit research on stem cells? Well, of course you do.
But what about the Obama Administration's hiding its decision to close Yucca Mountain behind science? As President Obama's spokesman explained:
"I think what has taken Yucca Mountain off the table in terms of a long-term solution for a repository for our nuclear waste is the science. The science ought to make these decisions."

Of course, the science. The Bulletin of the Atomic Scientists hints there may be more than just science at play:
In 2002 the Energy Secretary issued a formal finding of Yucca Mountain's scientific suitability, but the White House press corps didn't question Gibbs on what "science" he was talking about. Instead, most coverage focused on Obama's ties to Senate Majority Leader Harry Reid of Nevada, an Obama supporter who was heading into a tough re-election campaign in a state where there is widespread public opposition to Yucca Mountain. With the stroke of a pen, the president leveled a mountain of research that had taken 20 years and $10 billion to build.

Defenders of scientific integrity? Silence.
Remember when Congressman Henry Waxman compiled a laundry list of examples where the Bush Administration had violated standards of scientific integrity? Yes, yes, I know you do.
Well just today Senator David Vitter and two colleagues compiled their own list of alleged transgressions of scientific integrity by the Obama Administration and fired it off to John Holdren, the president's science advisor, demanding a response to a long list of questions.
I received a copy of the letter by email. The only media coverage that I am aware of is Fox News, who have their own agenda.
Science bloggers? AGU? Nothing.
Among those in the scientific community and those who like to pal around with the scientific community, the selective ignorance of issues associated with scientific integrity fits politics as usual, but ultimately will only reinforce the pathological politicization of science. Of course, many scientists and scientific organizations are willing to allow science to be used in this instrumental fashion because their own political preferences align with those who are exploiting them.
As I have long argued, the issues raised by the Bush Administration's and now the Obama Administration's ham-handed efforts at the intersection of science and politics do not have a partisan solution. Rather, they involve mundane, messy and complicated issues of bureaucracy, governance, and accountability -- policy rather than politics.
Those who seek to extract partisan advantage from debates involving science are not really friends of science. The politicization of science will not improve until the scientific community itself takes charge of this issue and returns it to the realm of science policy rather than partisan politics.
UK Fuel Poverty Solved!
Last week I noted the projected increase in "fuel poverty" in the United Kingdom and speculated that such a trend might have political consequences. The UK coalition government and their creative policy analysts have come up with a solution for this difficult situation -- they are proposing to redefine "fuel poverty" in a manner that shows it to be decreasing, not increasing (see figure above from the FT).
Voila, problem solved!
19 October 2011
Time to Start Thinking About the Consequences of a Breakthrough on Malaria
The New England Journal of Medicine has released the preliminary results of a field test of a new malaria vaccine by GlaxoSmithKline. An editorial that accompanies the study explains its significance, and the author of that editorial notes the unusual step of publishing results before the study is actually completed:
It's been a long time coming, and indeed we are still not there yet, but it is becoming increasingly clear that we really do have the first effective vaccine against a parasitic disease in humans. . . It is not usual practice to publish the results of trials in pieces, and there does not seem to be a clear scientific reason why this trial has been reported with less than half the efficacy results available.

It appears that the NEJM timed the release to coincide with a major meeting of the Gates Foundation on malaria. The hyping of research results cannot be considered good practice. The study itself expresses concerns about side effects, such as an apparent increase in meningitis and seizures.
With these cautions in mind, the implications are potentially huge -- most significantly for improving the lives of Africans who live under the burden of the disease. Here is what I wrote in 2008:
Why does malaria matter? According to Jeffrey Sachs:

The numbers are staggering: there are 300 to 500 million clinical cases every year, and between one and three million deaths, mostly of children, are attributable to this disease. Every 40 seconds a child dies of malaria, resulting in a daily loss of more than 2,000 young lives worldwide. These estimates render malaria the pre-eminent tropical parasitic disease and one of the top three killers among communicable diseases.

Clearly, a successful malaria vaccine, even at 50% effectiveness, would have profound and far-reaching consequences for Africa and the broader world.

The Economist reported a few weeks ago on efforts to eradicate malaria. The article referenced a study by McKinsey and Co. on the “business case” (PDF) for eradicating malaria. Here are the reported 5-year benefits:
• Save 3.5 million lives
• Prevent 672 million malaria cases
• Free up 427,000 hospital beds in sub-Saharan Africa
• Generate more than $80 billion in increased GDP for Africa
For example, successful deployment of a vaccine would have large implications for analysts whose work depends upon making assumptions about the future. Consider that the assumptions used to project future emissions of greenhouse gases from Africa generally do not factor in the GDP boosting-potential of malaria vaccines (in the sense that GDP growth is assumed to be at recent levels or lower). Again from my posting on this topic back in 2008:
In case you are curious, the IPCC SRES assumes future GDP growth rates in Africa (combined with Latin America) of 3.8% to 5.5% per year to 2050.

So what are the implications of eradicating malaria for future greenhouse gas emissions from Africa?
To answer this question I obtained data on African greenhouse gas emissions from CDIAC, and I subtracted out South Africa, which accounts for a large share of current African emissions. I found that the average annual increase from 1990-2004 was 5.2%, which I will use as a baseline for projecting business-as-usual emissions growth into the future.
The next question is what effect the eradication of malaria might have on African GDP. The McKinsey & Co. report referenced a paper by Gallup and Sachs (2001, link) which speculates (and I think that is a fair characterization) that complete eradication could boost GDP growth by as much as 3% per year. This would take African emissions growth rates to 8.2%, which is still well short of what has been observed in China this decade, and thus not at all unreasonable. I'll use this as an upper bound (not as a prediction, to be clear). If we graph future emissions under my definition of business-as-usual and also the Gallup/Sachs upper bound, we get the following curves to 2050.
The figure shows that by eradicating malaria, it is conceivable that there will be a corresponding increase in annual African emissions of more than 11 GtC above BAU. . .
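The back-of-envelope projection behind those curves can be sketched as simple compound growth. In the sketch below, the 2004 baseline emissions value is an assumed placeholder for illustration, not the CDIAC figure used in the post; the growth rates are the 5.2% baseline and 8.2% upper bound described above:

```python
# Compound-growth projection of African emissions (ex-South Africa).
E0 = 0.4       # GtC/yr in 2004 -- illustrative placeholder, NOT the CDIAC value
BAU = 0.052    # 5.2%/yr, average annual increase observed 1990-2004
UPPER = 0.082  # 8.2%/yr, with the speculative Gallup/Sachs malaria boost

def project(e0, rate, start=2004, end=2050):
    """Emissions after compounding annual growth from start year to end year."""
    return e0 * (1.0 + rate) ** (end - start)

bau_2050 = project(E0, BAU)
upper_2050 = project(E0, UPPER)
print(f"BAU 2050:   {bau_2050:.1f} GtC/yr")
print(f"Upper 2050: {upper_2050:.1f} GtC/yr")
print(f"Gap:        {upper_2050 - bau_2050:.1f} GtC/yr above BAU")
```

Because the gap between the two curves compounds for 46 years, even a modest difference in annual growth rates opens up a very large absolute difference by 2050, which is the point of the figure.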
The implications should be obvious: If a goal of climate policy is simply to “reduce emissions” then this goal clearly conflicts with efforts to eradicate malaria, which will inevitably lead to an increase in emissions. But if the goal is to modernize the global energy system — including the developing the capacity to provide vast quantities of carbon-free energy, then there is no conflict here.
The consequences of the eradication of malaria should be received as great news, but such a discovery has the potential to send scenario authors back to the drawing board, as it could add to existing scenarios as much as 5 Chinas (! in 2011) worth of emissions in 2050. Even viewing this idealized value as an upper bound, the lesson here is that emissions scenarios have disproportionately large errors on the high side (PDF). There are no doubt a wide range of similar assumptions and scenarios about Africa that will require rethinking (e.g., food production, urbanization, conflict, etc.).
Eradication of malaria would be a monumental accomplishment with profound consequences for Africa and the world, beyond the obvious benefits to human health. It is not too early to begin to think through what those might be.
17 October 2011
The Cost of Dread Risk
"Dread risk" was characterized by Paul Slovic in his classic 1987 article in terms of its "perceived lack of control, dread, catastrophic potential, fatal consequences, and the inequitable distribution of risks and benefits." The key term in that description is perceived. Slovic provides evidence that expert judgments of risk sometimes run counter to judgments by lay people, and he offers nuclear power as a canonical example. Most experts find nuclear power to be a relatively low risk technology, whereas the public finds it to be high risk. The difference has to do with "dread risk."
Former Japanese Prime Minister Naoto Kan provided a clear example of "dread risk" this week:
Former prime minister of Japan, Naoto Kan, said he experienced a “spine-chilling” feeling when he thought that Tokyo might have had to be evacuated in the aftermath of the Fukushima nuclear accident.

In an interview published on Tuesday, Kan said he feared Tokyo and its vicinities might have been rendered uninhabitable by the nuclear catastrophe, and that it would have been “impossible” to evacuate the 30-million population living in the area.

“Deserted scenes of Tokyo without a single man around came across my mind,” he was quoted as saying in the interview published by Tokyo Shimbun newspaper. “It really was a spine-chilling thought”.

The former prime minister said that he thought nuclear plants were safe, thanks to Japan’s technology. “I changed my mind” after this spring’s disaster, he added.

If the uninhabitable zone around the Fukushima plant had to spread out to 100 or 200 kilometers, “Japan wouldn’t stand as a country”, Kan said.

His conclusion was that, taking into account the risk, there is no choice but to become independent of nuclear power plants. If an accident that could make half the country uninhabitable is possible, that risk cannot be taken, “even if it was once in a century”, Kan said.

This scenario does not seem to jibe with any expert assessment of risk that I am aware of, and much of the discussion of the Fukushima disaster has focused on trying to accurately characterize the risks associated with the aftermath of the tsunami.
But such a characterization is likely to miss the point -- dread risk is real, and every bit as meaningful as the quantitative risk assessments so often provided by experts. For instance, this study (PDF), conducted in the aftermath of 9/11, looked at the consequences of fear of flying ("dread risk") for automobile fatalities in the three months after September 11, 2001. With fewer people flying and more driving, and because driving is riskier than flying (in terms of mortality rates), the study argues that there were 353 more highway deaths than would otherwise have been the case.
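The study's core arithmetic is simple enough to sketch. The fatality rates and the size of the modal shift below are assumed round numbers for illustration, not the study's actual data:

```python
# Illustrative sketch of the modal-shift logic in the study cited above:
# if fear of flying moves passenger-miles from air to road, expected deaths
# rise because driving is far riskier per mile. The rates and the size of
# the shift are assumptions for illustration, not the study's figures.

DRIVING_DEATHS_PER_BILLION_MILES = 15.0   # assumed, roughly US c. 2001
FLYING_DEATHS_PER_BILLION_MILES = 0.2     # assumed, commercial aviation

def extra_deaths(shifted_billion_miles):
    """Expected additional deaths when miles shift from flying to driving."""
    per_mile_gap = (DRIVING_DEATHS_PER_BILLION_MILES
                    - FLYING_DEATHS_PER_BILLION_MILES)
    return shifted_billion_miles * per_mile_gap

# A shift of ~24 billion passenger-miles over three months (assumed) implies
# a toll on the order of the ~350 extra deaths the study reports.
print(f"{extra_deaths(24):.0f} expected additional deaths")
```

The point is not the precise figure but the asymmetry: because the per-mile risk gap is so large, even a modest shift of travel from air to road produces a death toll comparable to the one the study reports.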
In this case the cost of mitigating "dread risk" was tangible -- 353 lives. Was that trade-off worth it to the public? One answer is yes: because dread risk is real, people will be willing to pay some price to reduce it. Another answer is no: the public never really had a chance to make such a trade-off, at least not explicitly.
Many experts will argue that the answer here would be to better educate the public. But in his 1987 article Paul Slovic suggests caution,
Attempts to "educate" or reassure the public and bring their risk perceptions in line with those of industry experts appear unlikely to succeed because the low probability of serious reactor accidents make empirical demonstrations of safety difficult to achieve. Because nuclear accidents are perceived as unknown and potentially catastrophic, even small accidents will be highly publicized and may produce large ripple effects.

Slovic continues:
Perhaps the most important message from this research is that there is wisdom as well as error in public attitudes and perceptions. Lay people sometimes lack certain information about hazards. However, their basic conceptualization of risk is much richer than that of the experts and reflects legitimate concerns that are typically omitted from expert risk assessments. As a result, risk communication and risk management efforts are destined to fail unless they are structured as a two-way process. Each side, expert and public, has something valid to contribute. Each side must respect the insights and intelligence of the other.

Dread risk is real and has costs. In the case of nuclear power, to the extent that nations such as Japan and Germany eschew nuclear power due to concerns about dread risks, there will inevitably be a price to pay. Such costs will come in the form of greater reliance on fossil fuels and more expensive energy.
14 October 2011
Unstoppable?
Matt Ridley, author of The Rational Optimist, has posted a forthcoming op-ed on UK energy policy. It lays out the choices faced by UK policy makers in stark fashion. In the piece, he quotes Jesse Ausubel:
Jesse Ausubel is a soft-spoken academic ecologist at Rockefeller University in New York, not given to hyperbole. So when I asked him about the future of gas, I was surprised by the strength of his reply. “It’s unstoppable,” he says simply. Gas, he says, will be the world’s dominant fuel for most of the next century. Coal and renewables will have to give way, while oil is used mainly for transport. Even nuclear may have to wait in the wings.

And he is not even talking mainly about shale gas. He reckons a still bigger story is waiting to be told about offshore gas from the so-called cold seeps around the continental margins. Israel has made a huge find and is planning a pipeline to Greece, to the irritation of the Turks. The Brazilians are striking rich. The Gulf of Guinea is hot. Even our own Rockall Bank looks promising. Ausubel thinks that much of this gas is not even “fossil” fuel, but ancient methane from the universe that was trapped deep in the earth’s rocks – like the methane that forms lakes on Titan, one of Saturn’s moons.

Ausubel was profiled in the NYT earlier this year:
He was involved in planning the first Intergovernmental Panel on Climate Change meeting but has viewed the panel’s subsequent reports with reserve. Climate change went from being a small to a major issue. “And then the expected happened,” he said. “Opportunists flowed in. By 1992 I stopped wanting to go to climate meetings.”

Natural gas isn't the only thing Ausubel was 20 years ahead of his time on!
Are we now entering a natural gas renaissance? Many people I come across seem to think so, and few are willing to make the case that we are not. I don't think that this question will be resolved by debate, but by what happens in the marketplace. And there the indications are strong as well.
UPDATE: Link to Mark Perry's (UMich) blog (h/t Les Johnson) and this image:
12 October 2011
The Iron Law in the UK
From the FT earlier this week comes this remarkable graph of energy costs in the UK. Will there be a political consequence? If so -- and it seems inevitable -- when, and what?
Does the US Lack Skilled Workers?
ABC News has this interesting perspective on US issues associated with immigration of skilled workers and the national skill base:
The State Department might sound like a strange place for the President’s Council on Jobs and Competitiveness to hold a listening session with the heads of foreign companies that invest in the United States, but hearing their comments it all made sense.

And on skilled workers (emphasis added):
The business executives said they want to invest more in the United States, but cited ongoing concerns about weak U.S. infrastructure, their difficulties in finding a skilled workforce and resolving issues with visas for their employees.
“The issue that we have is finding skilled workers,” said Christian Turnig from ThyssenKrupp, a German company.
According to Turnig, his company had to send hundreds of employees from its new plant in Alabama to Germany for several months of training. He said his company would have preferred to do the training in the United States, but it was unable to get visas for its German employees to enter the U.S.
Martin Daum, the head of Daimler Trucks North America, told the gathering that he felt he had better skilled workers at his plants in Mexico than in the United States, where some workers have to be taught proper math and writing skills.

He said that America produces highly educated professionals, but knowledge can be lacking when it comes to hiring for vocational jobs. According to Daum, the better skill sets of Mexican workers make it easier to ramp up production at his company’s factories in Mexico than at those in the United States.

“We have to bring in educators,” he said.

Meanwhile, in Florida, Governor Rick Scott seems to have heard these sorts of messages and suggests a remedy:

[Gov.] Scott said Monday that he hopes to shift more funding to science, technology, engineering and math departments, the so-called “STEM” disciplines. The big losers: programs like psychology and anthropology and potentially schools like New College in Sarasota that emphasize a liberal arts curriculum.

“If I’m going to take money from a citizen to put into education then I’m going to take that money to create jobs,” Scott said. “So I want that money to go to degrees where people can get jobs in this state.”

“Is it a vital interest of the state to have more anthropologists? I don’t think so.”

I think that framing this as STEM vs. liberal arts misses the real problem here.
Why is Energy Access Not an MDG?
The UN Millennium Development Goals (pictured above) focus attention on helping improve the lives of poor people around the world. In the words of the UN:
[The eight goals] form a blueprint agreed to by all the world’s countries and all the world’s leading development institutions. They have galvanized unprecedented efforts to meet the needs of the world’s poorest.

Question: Why is energy access not among the goals?
11 October 2011
CATF on Wigley on Gas
The Clean Air Task Force has posted a convincing critique of Tom Wigley's recent paper, which examined the global-average climate effects of a large-scale switch from coal to gas generation. In a nutshell:
Our bottom line conclusion on the Wigley paper, then, is as follows:
- Sulfates, which are mainly produced by burning coal, are certainly cooling the planet; removing them will result in warming. However, sulfate removal has been and will be driven by air quality concerns and not by retiring coal plants and replacing them with natural gas.
- The greenhouse gas footprint of coal is still worse than that of natural gas, even when the harmful methane emissions from natural gas wells and pipelines are factored in.

In any case, a monumental and global dash for gas appears to be coming whatever these various academic studies say.
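The coal-vs-gas claim can be sanity-checked with a back-of-envelope calculation: how much upstream methane leakage would it take for gas-fired electricity to match coal's greenhouse footprint? The emission factors, plant efficiency and global warming potential below are assumed round numbers, not figures from Wigley or CATF:

```python
# Hedged back-of-envelope check: find the methane leak rate at which
# gas-fired electricity is no better than coal, per kWh of CO2-equivalent.
# All constants are assumed round numbers for illustration.

GWP100_CH4 = 30.0                    # assumed 100-year warming potential of CH4
COAL_KG_CO2E_PER_KWH = 1.0           # assumed, typical coal plant
GAS_COMBUSTION_KG_CO2_PER_KWH = 0.4  # assumed, combined-cycle gas plant

# Fuel burned per kWh: 1 kWh = 3.6 MJ(e); at an assumed 50% plant efficiency
# that is 7.2 MJ of gas; methane's heating value is ~50 MJ/kg.
CH4_BURNED_KG_PER_KWH = 3.6 / 0.5 / 50.0

def gas_footprint(leak_fraction):
    """kg CO2e per kWh of gas electricity, given an upstream leak fraction."""
    leaked_kg = CH4_BURNED_KG_PER_KWH * leak_fraction / (1 - leak_fraction)
    return GAS_COMBUSTION_KG_CO2_PER_KWH + leaked_kg * GWP100_CH4

# Scan for the breakeven leak rate where gas matches coal.
breakeven = next(l / 1000 for l in range(1, 1000)
                 if gas_footprint(l / 1000) >= COAL_KG_CO2E_PER_KWH)
print(f"gas matches coal at ~{breakeven:.1%} leakage (under these assumptions)")
```

Under these assumptions gas stays better than coal until leakage reaches roughly 12% of production, well above typical published leakage estimates -- which is consistent with CATF's bottom line.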
Energy Not Yet for All
The IEA has released a preview of its 2011 World Energy Outlook (here in PDF). In it, the agency describes the challenge of providing energy access to people around the world and how current policies are falling well short.
Modern energy services are crucial to human well‐being and to a country’s economic development; and yet globally over 1.3 billion people are without access to electricity and 2.7 billion people are without clean cooking facilities. More than 95% of these people are either in sub‐Saharan Africa or developing Asia and 84% are in rural areas.

In 2009, we estimate that $9.1 billion was invested globally in extending access to modern energy services. In the absence of significant new policies, we project that the investment to this end between 2010 and 2030 will average $14 billion per year, mostly devoted to new on‐grid electricity connections in urban areas. This level of investment will still leave 1.0 billion people without electricity and, despite progress, population growth means that 2.7 billion people will remain without clean cooking facilities in 2030. To provide universal modern energy access by 2030 annual average investment needs to average $48 billion per year, more than five‐times the level of 2009. The majority of this investment is required in sub‐Saharan Africa.

Remarkably, the Millennium Development Goals of the United Nations do not even include energy access among their priorities. Thus, it is no surprise that the IEA places making energy access a political priority at the top of its recommendations:
Adopt a clear and consistent statement that modern energy access is a political priority and that policies and funding will be reoriented accordingly. National governments need to adopt a specific energy access target, allocate funds to its achievement and define their strategy for delivering it.

In case you are curious what "energy access" actually means: the IEA defines it contextually, and it starts here:
The initial threshold level of electricity consumption for rural households is assumed to be 250 kilowatt‐hours (kWh) per year and for urban households it is 500 kWh per year. In rural areas, this level of consumption could, for example, provide for the use of a floor fan, a mobile telephone and two compact fluorescent light bulbs for about five hours per day. In urban areas, consumption might also include an efficient refrigerator, a second mobile telephone per household and another appliance, such as a small television or a computer.

I am sure that readers of this blog would hesitate to call such a level of consumption "energy access." The average US household uses 20-40 times as much energy! Even if US households were to cut their consumption in half (unlikely, even under aggressive assumptions about efficiency), their use would still vastly exceed the initial threshold defined by the IEA.
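It is easy to check that the IEA's rural threshold really does buy only the short appliance list it describes. The wattages below are assumed typical values, not IEA figures:

```python
# Quick arithmetic check on the IEA's 250 kWh/year rural access threshold.
# Appliance wattages and the US comparison figure are assumed typical
# values for illustration, not numbers from the IEA report.

appliances_watts = {
    "floor fan": 45,
    "CFL bulb #1": 15,
    "CFL bulb #2": 15,
    "mobile phone charging": 5,
}

HOURS_PER_DAY = 5  # the IEA's "about five hours per day"
annual_kwh = sum(appliances_watts.values()) * HOURS_PER_DAY * 365 / 1000

RURAL_THRESHOLD_KWH = 250
US_HOUSEHOLD_KWH = 11_000  # assumed typical US household electricity use

print(f"basket ≈ {annual_kwh:.0f} kWh/yr vs {RURAL_THRESHOLD_KWH} kWh/yr threshold")
print(f"US household: ~{US_HOUSEHOLD_KWH / RURAL_THRESHOLD_KWH:.0f}x the rural threshold")
```

The assumed basket comes in under the 250 kWh/year threshold, while a typical US household's electricity use alone is on the order of 40 times that threshold -- which is the gap the paragraph above is pointing to.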
The IEA observes that energy access is actually a process:
Once initial connection to electricity has been achieved, the level of consumption is assumed to rise gradually over time, attaining the average regional consumption level after five years. This definition of electricity access to include an initial period of growing consumption is a deliberate attempt to reflect the fact that eradication of energy poverty is a long‐term endeavour. In our analysis, the average level of electricity consumption per capita across all those households newly connected over the period is 800 kWh in 2030.

If anything, the IEA has underestimated future demand for energy, as 800 kWh per year is just the start. The world needs more energy -- much more energy.
10 October 2011
Sarewitz on Consensus
Writing in Nature this week, Dan Sarewitz reflects on his recent participation on the BPC Geoengineering Climate Remediation task force and why efforts to achieve consensus in science may leave out some of the most important aspects of science. Here is an excerpt:
Yet, as anyone who has served on a consensus committee knows, much of what is most interesting about a subject gets left out of the final report. For months, our geoengineering group argued about almost every issue conceivably related to establishing a research programme. Many ideas failed to make the report — not because they were wrong or unimportant, but because they didn't attract a political constituency in the group that was strong enough to keep them in. The commitment to consensus therefore comes at a high price: the elimination of proposals and alternatives that might be valuable for decision-makers dealing with complex problems.

Some consensus reports do include dissenting views, but these are usually relegated to a section at the back of the report, as if regretfully announcing the marginalized views of one or two malcontents. Science might instead borrow a lesson from the legal system. When the US Supreme Court issues a split decision, it presents dissenting opinions with as much force and rigour as the majority position. Judges vote openly and sign their opinions, so it is clear who believes what, and why — a transparency absent from expert consensus documents. Unlike a pallid consensus, a vigorous disagreement between experts would provide decision-makers with well-reasoned alternatives that inform and enrich discussions as a controversy evolves, keeping ideas in play and options open.

The very idea that science best expresses its authority through consensus statements is at odds with a vibrant scientific enterprise. Consensus is for textbooks; real science depends for its progress on continual challenges to the current state of always-imperfect knowledge. Science would provide better value to politics if it articulated the broadest set of plausible interpretations, options and perspectives, imagined by the best experts, rather than forcing convergence to an allegedly unified voice.

Not surprisingly, Dan and I have come to similar conclusions on this subject. Back in 2001 in Nature I wrote (PDF):

[E]fforts to reduce uncertainty via ‘consensus science’ — such as scientific assessments — are misplaced. Consensus science can provide only an illusion of certainty. When consensus is substituted for a diversity of perspectives, it may in fact unnecessarily constrain decision-makers’ options. Take for example weather forecasters, who are learning that the value to society of their forecasts is enhanced when decision-makers are provided with predictions in probabilistic rather than categorical fashion and decisions are made in full view of uncertainty.

As a general principle, science and technology will contribute more effectively to society’s needs when decision-makers base their expectations on a full distribution of outcomes, and then make choices in the face of the resulting — perhaps considerable — uncertainty.

In addition to leaving behind much of the interesting aspects of science, in my experience the purpose of developing a "consensus" is often to quash dissent and end debate. Is it any wonder that policy discussions in the face of such a perspective are a dialogue of the like-minded? In contrast, as Sarewitz writes, "a vigorous disagreement between experts would provide decision-makers with well-reasoned alternatives that inform and enrich discussions as a controversy evolves, keeping ideas in play and options open."
09 October 2011
Cornucopians vs. Malthusians
Back in 1980, Paul Ehrlich and Julian Simon made their famous bet, recounted by John Tierney at its culmination 10 years later:
In 1980 an ecologist and an economist chose a refreshingly unacademic way to resolve their differences. They bet $1,000. Specifically, the bet was over the future price of five metals, but at stake was much more -- a view of the planet's ultimate limits, a vision of humanity's destiny. It was a bet between the Cassandra and the Dr. Pangloss of our era.

They lead two intellectual schools -- sometimes called the Malthusians and the Cornucopians, sometimes simply the doomsters and the boomsters -- that use the latest in computer-generated graphs and foundation-generated funds to debate whether the world is getting better or going to the dogs. The argument has generally been as fruitless as it is old, since the two sides never seem to be looking at the same part of the world at the same time. Dr. Pangloss sees farm silos brimming with record harvests; Cassandra sees topsoil eroding and pesticide seeping into ground water. Dr. Pangloss sees people living longer; Cassandra sees rain forests being decimated. But in 1980 these opponents managed to agree on one way to chart and test the global future. They promised to abide by the results exactly 10 years later -- in October 1990 -- and to pay up out of their own pockets.

What was the result? Simon won handily.
But recent goings-on with commodity prices have some people asking whether Simon's timing was just lucky and whether Ehrlich's views will ultimately triumph. In August, The Economist reported that had the famous bet extended to 2011, Ehrlich would have won, as shown in the graph below.
In its survey of the global economy from a few weeks ago The Economist presented its entire time series for global commodity prices. On that graph I have superimposed a red line showing the dates of the Ehrlich-Simon bet.
Note that the original bet covered five commodities, and the graph from 1845 covers a much larger set. Of course, the original bet was meant to be representative of such broader trends.
In its survey The Economist explains the recent spike in commodity prices as follows:
The Economist’s index of non-oil commodity prices has trebled in the past decade. The recent surge has reversed a downward trend that had lasted a century. Industrial raw-material prices fell by around 80% in real terms between 1845, when The Economist began collecting data, and their low point in 2002 (see chart 3). But much of the ground lost over 150 years has been recovered in the space of just a decade.

This has raised the incomes of commodity-rich countries such as Brazil and Australia as well as parts of Africa. It has also caused even sober analysts to speak of a “new paradigm” in commodity markets. . .

What accounts for this turnaround? The price spikes over the past century were linked to interruptions in supply, notably during the first world war. But recent price rises have been too broad-based and long-lasting to be adequately explained by frost or bad harvests. Nor is it obvious that producers are hoarding supplies. . .

The demand side has been boosted by industrial development unprecedented in its size, speed and breadth, led by China but not confined to it. Growth in emerging markets is both rapid and resource-intensive. The IMF estimates that in a middle-income country a 1% rise in GDP increases demand for energy by the same percentage. Rich economies are far less energy-hungry: the oil intensity of OECD countries has steadily fallen in recent years.

China’s appetite for raw materials is particularly voracious because of the country’s size and its high investment rate. Though it accounts for only about one-eighth of global output, China uses up between a third and half of the world’s annual production of iron ore, aluminium, lead and other non-precious metals (see chart 5). Most of the energy for Chinese industry comes from coal—a dirty fuel that contributes to China’s poor air quality. Its consumption of oil roughly tallies with the economy’s size but is likely to grow faster than GDP as China gets richer and buys more cars.

OK, brave forecasters, here is your chance. Where are commodity prices headed? Will Simon's optimism continue to win out, with demand creating new supply and again pushing prices down, as it did through the last century? Or will Ehrlich's pessimism finally have its day? Are we entering a new era of scarcity?
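The figures quoted from The Economist imply strikingly different compound annual rates for the long decline and the recent spike; the arithmetic is worth making explicit:

```python
# Compound annual growth rates implied by the figures quoted above from
# The Economist: an 80% real fall from 1845 to 2002, and a trebling of the
# index over the past decade. Pure arithmetic on the quoted numbers.

def annual_rate(total_ratio, years):
    """Compound annual growth rate implied by a total price ratio."""
    return total_ratio ** (1 / years) - 1

# 80% real fall between 1845 and 2002 (price ratio 0.2 over 157 years)
decline = annual_rate(0.2, 2002 - 1845)

# Trebling of the index over the past decade (ratio 3.0 over 10 years)
spike = annual_rate(3.0, 10)

print(f"1845-2002: {decline:+.2%} per year")
print(f"recent decade: {spike:+.2%} per year")
```

A gentle one-percent-a-year real decline sustained over a century and a half, against an eleven-percent-a-year surge over a decade, is exactly why the recent spike looks so anomalous -- and why the Simon-vs-Ehrlich question is live again.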