31 October 2012

Op-Ed in the WSJ

I have an op-ed in tomorrow's WSJ, just put online.  Here is an excerpt:
[W]ith respect to disasters we really do make our own luck. The relatively low number of casualties caused by Sandy is a testament to the success story that is the U.S. National Weather Service and parallel efforts of those who emphasize preparedness and emergency response in the public and private sectors. Everyone in the disaster-management community deserves thanks; the mitigation of the impacts from natural disasters has been a true national success story of the past century.

But continued success isn't guaranteed. The bungled response and tragic consequences associated with Hurricane Katrina tell us what can happen when we let our guard down.

And there are indications that we are setting the stage for making future disasters worse. For instance, a U.S. polar-satellite program crucial to weather forecasting has been described by the administrator of the federal agency that oversees it—the National Oceanic and Atmospheric Administration—as a "dysfunctional program that had become a national embarrassment due to chronic management problems." The lack of effective presidential and congressional oversight of this program over more than a decade can be blamed on both Republicans and Democrats. The program's mishandling may mean a gap in satellite coverage and a possible degradation in forecasts.
The op-ed is not about climate change, though I do mention the topic in 2 paragraphs near the end. I am sure the "climate change people" (Thanks Candy!) will want to make it all about that subject.

More important is the success story that is the NWS and emergency management in the US, and the scary thought that goes with it:
In the proper context, Sandy is less an example of how bad things can get than a reminder that they could be much worse.
There is therefore reason to believe we are living in an extended period of relatively good fortune with respect to disasters.

Comments and questions welcomed.

How Much Sea Level Rise Would be Avoided by Aggressive CO2 Reductions?

UPDATE 11/25: Here is another recent paper, by Schaeffer et al. in Nature Climate Change (PDF), with conclusions along the lines of the others summarized here: "mitigation measures, even an abrupt switch to zero emissions, have practically no effect on sea level over the coming 50 years and only a moderate effect on sea level by 2100."

UPDATE 11/2: At the Washington Post @bradplumer follows up this post with his own, providing a nice summary of the issue. He concludes: "aggressive steps to cut emissions could reduce the amount of sea-level rise by somewhere between 6 and 20 inches in 2100, compared with our current trajectory" -- which is just about exactly where I came out in the discussions with him and several others (thanks JG), 10 inches +/- 10 inches.

UPDATE: Via Twitter @bradplumer points me to a newer paper that suggests perhaps 7 inches is the difference in sea level rise to 2100 between the highest and lowest RCP scenarios. It is not apples to apples with the number presented below, but still a very small number. And another paper here, with perhaps 10 inches between RCP scenarios, a number lower than the projection uncertainties.

One of the more reasonable discussion points to emerge from efforts to link Hurricane Sandy to the need to reduce carbon dioxide emissions focuses on the role that future sea level rise will have on making storm impacts worse. Logically, it would seem that if we can "halt the rise of the seas" then this would reduce future impacts from extreme events like Sandy.

The science of sea level rise, however, tells us that to 2100 (at least) our ability to halt the rise of the seas is extremely limited, even under an (unrealistically) aggressive scenario of emissions reduction. Several years ago, in a GRL paper titled "How much climate change can be avoided by mitigation?" Warren Washington and colleagues asked how much impact aggressive mitigation would have on the climate system. Specifically, they looked at a set of climate model runs assuming stabilization of carbon dioxide at 450 ppm.

Here is what they concluded for sea level rise:
[A]bout 8 cm of the sea level rise that would otherwise occur without mitigation would be averted. However, by the end of the century the sea level rise continues to increase and does not stabilize in both scenarios due to climate change commitment involving the thermal inertia of the oceans ...
Eight cm is about three inches. Three inches. Then sea level rise continues for centuries.

Though it seems logical to call for emissions reductions as a way to arrest sea level rise to reduce the impacts of hurricanes, recent research suggests that our ability to halt the rise of the seas is extremely limited. With respect to hurricanes, we have little option but to adapt, and improved adaptation makes good sense.

Efforts to use future hurricane damages to justify emissions reductions just don't make much sense. Fortunately, there are far better reasons to focus on emissions reductions than hurricanes.

Postscript: This post was inspired by Michael Levi's discussion here. Thanks!

30 October 2012

Does Federal R&D Drive Economic Growth?

Writing in the NYT earlier this week, Neal Lane, former science advisor to President Bill Clinton, had this to say:
If our country is to remain strong and prosperous and a land of rewarding jobs, we need to understand this basic investment principle in America’s future: no science, no growth.
Even discounting for hyperbole, does the formula "no science, no growth" make sense? I have a new column up at The Breakthrough Institute on this exact subject, in which I argue that such claims actually make little sense.

Here is how my column begins:
It is a claim that you hear often in discussions of the role of research and development in the economy: “Federal investments in R&D have fueled half of the nation’s economic growth since World War II.” This particular claim appeared in a recent Washington Post op-ed co-authored by a member of the US Congress and the chief executive of the American Association for the Advancement of Science. It would be remarkable if true. Unfortunately, it is not.
To read the rest head over here, and please feel free to come back and tell me what you think.

Sandy and the Top 20 Normalized US Hurricane Losses

UPDATE #4 13 Feb 2013: Yesterday the NHC issued its final report on Hurricane Sandy, and includes $50 billion as the total US damages from Sandy, which would place it 7th in the table below.

UPDATE #3 11/24: There is still a lot of uncertainty in Sandy damage estimates. PCS, which tabulates actual insured losses, has released a first estimate of $11 billion in insured losses. This is at the low end of the range, but is very likely to rise. How far? We'll have to wait for that. Once better numbers are in, I'll do a new post with apples to apples numbers from Sandy to compare to our normalized tabulation.

UPDATE #2 11/1: Moody's has published an estimate of $30 billion in direct total damages due to Sandy. That would place Sandy at #10 all time in the normalization table below.

UPDATE 11/1: EQECAT has published updated estimates of Sandy's losses today, $10-20 billion insured and $30-50B total. Depending on the amount of flood damage included in the total (the NWS hurricane losses separate out flood damage), the new estimates, if they hold up near the high end of the range, would push Sandy into the top 10 all-time losses in the normalization table below.

Here is a table showing the top 20 hurricane losses 1900 to 2011, normalized to 2012 dollars. In other words, the figures show an estimate of what the losses would be were historical storms to occur in 2012. The numbers come from ICAT based on an extension of Pielke et al. 2008.

Storm Landfall Rank Normalized loss (2012 USD)
Great Miami Sep 18,1926 1 180,220,000,000
Galveston Sep 08,1900 2 105,570,000,000
Galveston Aug 17,1915 3 84,910,000,000
Katrina Aug 29,2005 4 84,620,000,000
Andrew Aug 24,1992 5 64,410,000,000
Storm 11 in 1944 Oct 19,1944 6 53,940,000,000
Donna Sep 10,1960 7 49,810,000,000
New England Sep 21,1938 8 46,840,000,000
Lake Okeechobee Sep 16,1928 9 44,890,000,000
Wilma Oct 24,2005 10 25,960,000,000
Hazel Oct 18,1954 11 24,260,000,000
Diane Aug 19,1955 12 24,110,000,000
Camille Aug 17,1969 13 23,040,000,000
Charley Aug 13,2004 14 20,380,000,000
Ike Sep 13,2008 15 20,370,000,000
Hugo Sep 21,1989 16 20,020,000,000
Carol Aug 31,1954 17 19,290,000,000
Agnes Jun 22,1972 18 19,010,000,000
Ivan Sep 16,2004 19 18,590,000,000
Storm 2 in 1949 Aug 26,1949 20 18,510,000,000

While it will be some time until we have apples to apples estimates from Sandy, the current estimates of $20 billion would place Sandy at #17 all time out of 242 loss-producing storms 1900 to present (in the top 10%). If the damage gets to $30 billion it would crack the top 10 (top 5%). Right now it seems unlikely that Sandy will climb any higher on the table. (Note that inland flood damage is not included in the tabulations above.)
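The normalization behind the table can be sketched in a few lines of code. This is only a minimal illustration of the general approach described above (and in Pielke et al. 2008); the function name and the adjustment-factor values are hypothetical, not actual data:

```python
def normalize_loss(loss_at_time, inflation_factor, wealth_factor, population_factor):
    """Express a historical storm loss in present-day terms by scaling
    the original dollar loss for changes in prices, real wealth per
    capita, and the population of the affected area."""
    return loss_at_time * inflation_factor * wealth_factor * population_factor

# Hypothetical example: a $100 million loss in the 1920s, with
# illustrative adjustment factors (not actual historical values)
normalized = normalize_loss(100e6, inflation_factor=12.0,
                            wealth_factor=5.0, population_factor=30.0)
print(f"${normalized / 1e9:.0f} billion")
```

The point of the calculation is that the same physical storm striking today's coastline, with today's prices, wealth, and population, would produce a far larger dollar loss than the figure recorded at the time.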

In historical context, Sandy sits alongside Carol, Diane and Hazel. One big difference, however -- Carol, Diane and Hazel hit the US Atlantic coast within a single 13-month period in 1954-1955.  Imagine that. 

29 October 2012

Top 10 Damaging Hurricanes Within 50 Miles of Sandy's Landfall

From the ICAT Damage Estimator, here is a table showing the damage from the top 10 historical storms which had a track that passed within the NHC forecast "cone" of Hurricane Sandy's current projected track.

Storm Landfall Rank Normalized loss (2012 USD)
New England Sep 21,1938 8  $  46,840,000,000
Diane Aug 19,1955 12  $  24,110,000,000
Carol Aug 31,1954 17  $  19,290,000,000
Agnes Jun 22,1972 18  $  19,010,000,000
Storm 7 in 1944 Sep 14,1944 31  $  10,600,000,000
Bob Aug 19,1991 66  $    3,620,000,000
Edna Sep 11,1954 67  $    3,230,000,000
Gloria Sep 27,1985 76  $    2,530,000,000
Donna Sep 14,1960 117  $        850,000,000

Some notes and caveats:

1. The estimates include hurricane damage as defined by NOAA. Most importantly, that does not include inland flood damage that may occur when a storm moves inland (that damage is recorded under flood damage). Several of these storms would show much higher damage if inland flood damage were included.

2. When comparing damage estimates that you may see for Sandy to those here, a rough apples-to-apples method is to take estimated insured losses and multiply by two.

3. These data are based on Pielke et al. 2008, updated by ICAT.

4. As you can see in the map at the top of this post, none of the historical storm tracks make for good analogues for Sandy. All of the top 10 were category 3s at landfall except Bob (2), Agnes (1) and Diane (TS). And all were in Aug or Sept except Agnes (June).

5. Large, damaging storms are not unprecedented in the second half of October, as Storm 11 (1944, ~$54 billion), Wilma (2005, $26 billion) and Hazel (1954, $24 billion) demonstrate.
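The rule of thumb in note 2 can be put as a one-line calculation. A sketch only; the function name is illustrative:

```python
def implied_total_damage(insured_loss):
    # Rough rule of thumb: total direct damage (excluding inland
    # flood damage) is about twice the insured loss.
    return 2 * insured_loss

# An illustrative $10 billion insured-loss estimate implies roughly
# $20 billion in total damage, comparable to the mid-table storms above.
print(implied_total_damage(10e9))
```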

26 October 2012

ICAT on Hurricane Sandy Damage Benchmarks

ICAT just sent out this summary (here in PDF) of an analysis using the ICAT Damage Estimator (based on our normalized hurricane loss database):
The ICAT Damage Estimator (www.icatdamageestimator.com) can be used to obtain statistics regarding historic storms that have followed similar paths to Hurricane Sandy’s current forecast path. For this analysis, the Active Storms search feature was used to select all historic storms that have made landfall within the current range of computer model forecasts. This range includes the coastline from near the MD/VA border on the DelMarVa Peninsula to the eastern edge of Long Island, NY. The ICAT Damage Estimator shows that there have been 7 damaging tropical cyclones that have made landfall along this segment of coastline since 1900. The tool shows the storm parameters, the damage at the time of landfall, and the estimated damage if the storms were to make landfall in 2012. The 2012 damage estimations are made by “normalizing” the data by adjusting for population change, inflation, and change in wealth per capita.

The most damaging storm to make landfall within the current range of computer model forecasts was the New England hurricane of 1938, which would cause an estimated ~$47B in damage today. However, this storm was a category 3 hurricane when it made landfall, while Sandy is only expected to have category 1 force winds. Of the 7 storms selected, only two made landfall as category 1 hurricanes. Hurricane Agnes of 1972 made landfall with 85 mph sustained winds near New York City and would cause an estimated $19B in damage today. Agnes initially made landfall over the FL Panhandle, then moved NE and emerged off the NC coast. As it approached New England, the storm strengthened as it underwent extratropical transition, which is also expected to occur with Hurricane Sandy. Hurricane Agnes's impacts were felt across a very wide area of the Northeast. Hurricane Belle of 1976 also had 85 mph winds, but moved much more quickly than Agnes and was weakening as it made landfall. It is estimated that Belle would cause less than $1B in damage today.

As can be seen from the storms selected by the ICAT Damage Estimator, the sample size of category 1 hurricanes making landfall along the Northeast is not very large. While Agnes appears to be the most similar to Sandy, it made landfall near New York City, which explains why the damage estimates are so high. Hurricane Irene of 2011 officially made landfall further south, but impacted a similar area that will be affected by Hurricane Sandy. That storm caused ~$7B in damage, but was not quite as strong as Sandy is expected to be. This data can be used as a benchmark to assess the range of possibilities for Sandy’s impact.
Full report here in PDF.

What is Wrong with Embellishing Science?

embellishing: present participle of em·bel·lish (verb)

  1. Make (something) more attractive by the addition of decorative details or features: "blue silk embellished with golden embroidery".
  2. Make (a statement or story) more interesting or entertaining by adding extra details, esp. ones that are not true.
Yesterday, before heading back to the National Hurricane Center to help deal with Sandy, Chris Landsea gave a great talk here at CU on hurricanes and climate change (we'll have a video up soon). In Chris' talk he explained that he has no doubts that humans affect the climate system through the emission of greenhouse gases, and this influence may affect tropical cyclones. He then proceeded to review theory and data from recent peer-reviewed publications on the magnitude of such an influence. Chris argued that any such influence is expected to be small today, almost certainly undetectable, and that this view is not particularly controversial among tropical cyclone climatologists. He concluded that hurricanes should not be the "poster" representing a human influence on climate.

After his talk someone in the audience asked him what is wrong with making a connection between hurricanes and climate change if it gives the general public reason for concern about climate change. Chris responded that asserting such a connection can be easily shown to be incorrect and thus risks some of the trust that the public has in scientists to play things straight.

This exchange came to mind as I came across the latest exhibit in the climate science freak show, this time in the form of a lawsuit brought by Michael Mann, of Penn State, against the National Review Online and others for calling his work "intellectually bogus" and other mean things (the actual filing can be seen here). I will admit that for a moment I did smile at the idea of a professor suing a critic for lying (Hi Joe!), before my senses took back over and I rejected it as an absurd publicity stunt. But within this little tempest in a teapot is a nice example of how it is that some parts of climate science found themselves off track and routinely in violation of what many people would consider basic scientific norms.

In Mann's lawsuit he characterizes himself as having been "awarded the Nobel Peace Prize." Mann's claim is what might be called an embellishment -- he has, to use the definition found at the top of this post, "made (a statement or story) more interesting or entertaining by adding extra details, esp. ones that are not true." An accurate interpretation is that the Intergovernmental Panel on Climate Change did win the 2007 Nobel Peace Prize, and the IPCC did follow that award by sending to the AR4 authors a certificate noting their contributions to the organization. So instead of being a "Nobel Peace Prize Winner" Mann was one of 2,000 or so scientists who made a contribution to an organization which won the Nobel Peace Prize.

Here we might ask, so what?

I mean really, who cares if a scientist embellishes his credentials a bit? We all know what he means by calling himself a "Nobel Peace Prize Winner," right? And really, what is an organization except for the people that make it up? Besides, climate change is important, why should we worry about such things? Doesn't this just distract from the cause of action on climate change and play right into the hands of the deniers?  Really now, is this a big deal?

Well, maybe it was not a big deal last week, but with the filing of the lawsuit, the embellishment now has potential consequences in a real-world decision process. A journalist contacted the Nobel organization and asked them if it was appropriate for Mann as an IPCC scientist to claim to be "Nobel peace prize winner." Here is what the Nobel organization said in response:
Michael Mann has never been awarded the Nobel Peace Prize.
Mann's embellishment has placed him in a situation where his claims are being countered by the Nobel organization itself. Mann's claim, rather than boosting his credibility, actually risks having the opposite effect, a situation that was entirely avoidable and one which Mann brought upon himself by making the embellishment in the first place. The embellishment is only an issue because Mann has invoked it as a source of authority in a legal dispute. It would seem common sense that having such an embellishment within a complaint predicated on alleged misrepresentations may not sit well with a judge or jury.

This situation provides a nice illustration of what is wrong with some aspects of climate science today -- a few scientists motivated by a desire to influence political debates over climate change have embellished claims, such as those related to disasters, which risks their credibility when the claims are exposed as embellishments. To make matters worse, these politically motivated scientists have fallen in with fellow travelers in the media, activist organizations and in the blogosphere who are willing not only to look past such embellishments, but to amplify them and attack those who push back. These dynamics are reinforcing and have led small but vocal parts of the climate scientific community to deviate significantly from widely-held norms of scientific practice.

Back in 2009, Mann explained why the title of his climate book - Dire Predictions -- was an embellishment, and this explanation helps to explain why a small part of the community thinks that such embellishments are acceptable:
Often, in our communication efforts, scientists are confronted with critical issues of language and framing. A case in point is a book I recently co-authored with Penn State colleague Lee Kump, called Dire Predictions: Understanding Global Warming. The purists among my colleagues would rightly point out that the potential future climate changes we describe, are, technically speaking, projections rather than predictions because the climate models are driven by hypothetical pathways of future fossil fuel burning (i.e. conceivable but not predicted futures). But Dire Projections doesn’t quite roll off the tongue. And it doesn’t convey — in the common vernacular — what the models indicate: Climate change could pose a very real threat to society and the environment. In this case, use of the more technically “correct” term is actually less likely to convey the key implications to a lay audience.
So long as some climate scientists are willing to talk about their work as being "correct" in scare quotes in the context of a desire to shape public opinion, they are going to face credibility problems. Think Dick Cheney linking Al Qaeda to Saddam Hussein, and you'll understand why such efforts are not good for either science or democracy.

The late Stephen Schneider gained some fame for observing that when engaging in public debates scientists face a difficult choice between honesty and effectiveness (as quoted in TCF pp. 202-203):
On the one hand, as scientists we are ethically bound to the scientific method, in effect promising to tell the truth, the whole truth, and nothing but—which means that we must include all the doubts, the caveats, the ifs, ands, and buts. On the other hand, we are not just scientists but human beings as well. And like most people we’d like to see the world a better place, which in this context translates into our working to reduce the risk of potentially disastrous climatic change. To do that we need to get some broad-based support, to capture the public’s imagination. That, of course, entails getting loads of media coverage. So we have to offer up scary scenarios, make simplified, dramatic statements, and make little mention of any doubts we might have. This “double ethical bind” we frequently find ourselves in cannot be solved by any formula. Each of us has to decide what the right balance is between being effective and being honest.
Often overlooked is what Schneider recommended about how to handle this "double ethical bind":
 I hope that means being both [effective and honest].
That is a bit of advice that nowadays a few in the climate science community seem to have forgotten as they make embellishments for a cause. Such actions have consequences for how the whole field is perceived -- a field comprised mostly of hard-working, honest scientists who deserve better.

24 October 2012

Legal Liability for Bad Scientific Forecasts in the United States

The verdict in the lawsuit brought against scientists in L'Aquila has prompted much discussion and debate. The initial poor reporting of the case and its context has been followed by some much better coverage. For instance, the Chronicle of Higher Education, the New Scientist, David Ropeik writing at Scientific American, and some colleagues of mine at Macquarie University writing at The Conversation have all added valuable context and nuance to this discussion.

Yet, some in the US scientific community, especially officials at AAAS and AGU continue to wax hyperbolic on this issue. For example,
"You risk losing valuable information from experts who would rather spend time in the lab than in prison," said Mark S. Frankel, director of the scientific-responsibility program of the American Association for the Advancement of Science.

"We are hearing concerns from our members," added Christine W. McEntee, executive director of the American Geophysical Union. "This isn't just about earthquakes but about hurricanes and tsunamis too." . . .

But even though the decision will be appealed, doling out convictions and prison time will have a chilling effect on scientists, said Mr. Frankel of the AAAS.

"There is so much uncertainty in science that the best advice still might not be the right advice," he said. With a ruling like this, "some scientists may fear participating in any public-policy process. I already hear from scientists who say that policy is so difficult that they'd be better off in the lab."
Such concerns are vastly overblown, and it is curious to see US scientific societies (mis)appropriating the narrative of the L'Aquila saga as part of their own political battles.

In 2002, Bobbie Klein, a lawyer by training and long-time colleague here at the University of Colorado, and I wrote two legal-review style papers on legal liability for forecasts under US law. One paper was focused on the public sector and the second on the private sector.

With respect to the public sector,  here is what we concluded (for the full paper, which includes a discussion of a range of interesting cases, see it here in PDF):
The decisions reviewed above indicate that the [Federal Tort Claims Act] likely would preclude most if not all claims against the federal government based on inaccurate weather forecasts, especially given the Gaubert decision and the cases applying it in lawsuits against the NWS for forecast-related claims. Bergquist, Monzon, and Taylor all recognize that policy factors, such as cost and the desire not to overwarn, enter into NWS forecasting and warning decisions. However, it would be too strong a statement to say that the federal government will never face a liability risk in its forecasting enterprise. In instances where all discretion has been removed, if other FTCA requirements were met, the government’s failure to follow a mandatory statute, regulation, or policy could expose it to liability. Of course, the Supreme Court could alter the Gaubert test to make it more difficult for the government to seek refuge in the discretionary function exception.
In short, do good science and follow the law and there is essentially no risk of legal liability. Precedent indicates that an inaccurate forecast is not a basis under US law for a claim of legal liability.

With respect to the private sector we concluded that the situation is a bit more complicated (full paper here in PDF):
Lawsuits against private sector weather forecasters for forecasts may increase in the coming years as the private sector expands its forecasting activities, especially if companies make inflated or unfounded claims of their ability to forecast the weather accurately. Thus, companies have to be cognizant of the trade-offs between using claims of forecast accuracy as a marketing tool and the exposure to liability that will result from unfounded claims of capabilities. Absent statutory immunity or a valid limitation of liability clause, private sector forecasters who are sued will have to defend lawsuits for inaccurate forecasts on their merits.

The decisions discussed above provide some guidance as to how such lawsuits could be resolved. At one end of the spectrum, forecasters will not be found liable simply because a forecast is erroneous. Courts in the past have recognized that forecasts are fallible and people who rely on such forecasts assume the risk that a particular prediction may not be realized. The issuance of probabilistic forecasts introduces additional technical considerations (e.g., in evaluating forecast “goodness”) but does not appear to contradict these general findings related to liability. At the other end of the spectrum, a forecaster should be found liable if a forecast was based on a deliberate, knowing falsehood or withholding of information.

Most cases likely will fall in a gray area, where it will be alleged that, while the forecast may have been made in good faith, it strayed from established professional standards. Rigorous verification of forecasts would offer valuable information to producers of forecasts about how to market their products honestly, to users of forecasts about how to use predictive information effectively, and to the court system when it is called upon to evaluate whether a forecast or forecast process meets the applicable professional standard.

In summary, our forecast for the legal liability of private sector forecast providers is “partly cloudy.” Private sector forecasters should take steps to ensure that their exposure is limited and insured as much as possible, so that those partly cloudy skies do not develop into powerful storms.
Even with the additional complications, it is highly unlikely that a forecaster who relies on established professional standards in the issuance of a forecast will be found liable for a poor forecast.

Of course, as is always the case in legal matters, history may not be a guide to the future, and future outcomes may deviate from what is represented in past US case law. Even so, in the US scientists would appear to have exceedingly little reason for concern about legal liability for the issuance of forecasts that do not verify. The legal situation will of course differ across countries.

While I certainly understand the concerns of scientists about legal exposure, the more important issues raised by the L'Aquila case involve the roles of scientists in policy and politics; in the Italian case, legal exposure derives from that larger setting, as the more nuanced characterizations of the case have revealed. The L'Aquila lawsuit is to be appealed, so there will continue to be opportunities to discuss the particulars of the case and its broader significance.

Papers cited:

Klein, R and RA Pielke Jr. 2002, Bad weather? Then sue the weatherman! Part I: Legal liability for public sector forecasts. Bull. Amer. Meteorol. Soc. 83:1791-1799. (PDF)

Klein, R and RA Pielke Jr. 2002, Bad weather? Then sue the weatherman! Part II: Legal liability for private sector forecasts. Bull. Amer. Meteorol. Soc. 83:1801-1807. (PDF)

22 October 2012

Mischaracterizations of the L'Aquila Lawsuit Verdict

Today an Italian court convicted six scientists and a government official of manslaughter based on their provision of allegedly faulty information. Nature reports:
At the end of a 13-month trial, six scientists and one government official have been found guilty of manslaughter and sentenced to six years in prison. The verdict was based on how they assessed and communicated risk before the earthquake that hit the city of L'Aquila on 6 April 2009, killing 309 people.
There is a popular misconception in circulation that the guilty verdict was based on the scientists' failure to accurately forecast the devastating earthquake.

For instance, in an article with the headline, Italian court convicts 7 for no quake warning, the AP reports:
Defying assertions that earthquakes cannot be predicted, an Italian court convicted seven scientists and experts of manslaughter Monday for failing to adequately warn residents before a temblor struck central Italy in 2009 and killed more than 300 people.

The court in L'Aquila also sentenced the defendants to six years each in prison. All are members of the national Great Risks Commission, and several are prominent scientists or geological and disaster experts.

Scientists had decried the trial as ridiculous, contending that science has no reliable way of predicting earthquakes. So news of the verdict shook the tightknit community of earthquake experts worldwide.
The Christian Science Monitor went further:
An Italian court sentenced scientists to jail time for not having a functioning crystal ball ahead of the 2009 earthquake in L'Aquila. The arguments of science and reason fell on deaf ears.
Similar interpretations of what the lawsuit was about were published by the New York Times and Fox News.

Based on such characterizations members of the scientific community are offering strong reactions:
"We are deeply concerned. It's not just seismology which has been put on trial but all science," Charlotte Krawczyk, president of the seismology division at the European Geosciences Union (EGU), told AFP. 
The verdict struck at scientists' right to speak honestly and independently, she said in a phone interview from Germany.

"All scientists are really shocked by this," said Krawczyk. "We are trying to organise ourselves and come up with a strong statement that could help so that the scientists do not have to go to jail.

"People are asking, 'Is this really true?' 'What does it mean for us?' And, 'What does it mean for talking in public about risks?'"

"People are stunned," said Mike Bickle, a professor of earth sciences at the University of Cambridge.

Roger Musson at the British Geological Survey (BGS) said the verdict was "unbelievable". He and other seismologists said it was impossible to forecast an earthquake, and scientists pressed to give a black-or-white answer could unleash panic or lose all credibility if nothing happened.
Unfortunately, such characterizations of the lawsuit are simply wrong: the scientists were not on trial for their failure to predict the earthquake.

In an article published October 12, Science explained that the trial is far more complex than that:
The trial in L'Aquila has drawn huge international attention, as well as outrage and protests. In 2010, more than 4000 scientists from Italy and around the world signed an open letter to Italian President Giorgio Napolitano, calling the allegations “unfounded,” because there was no way the commission could reliably have predicted an earthquake. Alan Leshner, CEO of AAAS (the publisher of Science) called the indictments “unfair and naïve” in a 2010 letter to Napolitano.

Yet as the trial unfolded here over the past year, a more complex picture has emerged. Prosecutors didn't charge commission members with failing to predict the earthquake but with conducting a hasty, superficial risk assessment and presenting incomplete, falsely reassuring findings to the public. They have argued in court that the many tremors that L'Aquila experienced in the preceding months did provide at least some clues about a heightened risk.

Meanwhile, a recorded telephone conversation made public halfway through the trial has suggested that the commission was convened with the explicit goal of reassuring the public and raised the question of whether the scientists were used—or allowed themselves to be used—to bring calm to a jittery town.
I discussed some of the dynamics at play in my Bridges column of October, 2011 (here in PDF):
On March 31, 2009, in L’Aquila, six days before a deadly magnitude 6.3 earthquake killed 308 people, Bernardo De Bernardinis, then deputy chief of Italy’s Civil Protection Department, and six scientists who were members of a scientific advisory body to the Department (the Major Risks Committee) participated in an official meeting and press conference in response to public concerns about short-term earthquake risks. The public concerns were the result of at least two factors: One was the recent occurrence of a number of small earthquakes. A second factor was the prediction of a pending large earthquake issued by Gioacchino Giuliani, who was not a seismologist and worked as a technician at Italy’s National Institute of Nuclear Physics.

The deputy chief and scientists held a short one-hour meeting and then a press conference, during which they downplayed the possibility of an earthquake. For instance, De Bernardinis went so far as to claim that the recent tremors actually reduced earthquake risks: "[T]he scientific community continues to confirm to me that in fact it is a favourable situation, that is to say a continuous discharge of energy." When asked directly by the media if the public should sit back and enjoy a glass of wine rather than worry about earthquakes, De Bernardinis acted as sommelier: "Absolutely, absolutely a Montepulciano doc. This seems important." . . .

. . . in L’Aquila, the government and its scientists seemed to be sending a different message to the public than the one that was received. Media reports of the Major Risk Committee meeting and the subsequent press conference seem to focus on countering the views offered by Mr. Giuliani, whom they viewed as unscientific and had been battling in preceding months. Thus, one interpretation of the Major Risks Committee’s statements is that they were not specifically about earthquakes at all, but instead were about which individuals the public should view as legitimate and authoritative and which they should not.

If officials were expressing a view about authority rather than a careful assessment of actual earthquake risks, this would help to explain their sloppy treatment of uncertainties.
The case is likely to be appealed, so the current verdict is not the last word. While the verdict rests on finer points of Italian law and jurisprudence, the issues at play are not accurately characterized as a failure to accurately predict an earthquake, or even more broadly as science vs. anti-science. The public responsibilities of government officials and the scientists that they depend upon are too important to characterize in such cartoonish fashion.

19 October 2012

Responsible Advocacy by Scientists

The InterAcademy Council, a multi-national association of national science academies incorporated in the Netherlands, released a report this week titled "Responsible Conduct in the Research Enterprise." The report seeks to set forth areas of international consensus on what constitutes responsible research, amid changes to the global research enterprise such as its growing scale, expanding collaborations, an apparently increasing incidence of "irresponsible" research, and demands for relevance from the public and politicians.

The report explains of its focus:
This report uses the words science and research very broadly. The guide posits that research encompasses many forms of disciplined human thought, including the natural sciences, the social sciences, and the humanities, along with the archives of that knowledge. These forms of knowledge and the methods used to arrive at this knowledge can be very different. Yet all researchers, whether in the sciences or in other forms of scholarship, are expected to adhere to the fundamental values that underlie good research.
The report highlights the importance of trust -- among researchers and between researchers and the public. Trust is secured through respecting seven core values:
Responsible conduct in research is based on many of the same human values that apply in daily life, but these values have specific implications in the context of research. The discussion in this guide draws on seven overlapping fundamental values:

• Honesty
• Fairness
• Objectivity
• Reliability
• Skepticism
• Accountability
• Openness

In research, being honest implies doing research and communicating about research results and their possible applications fully and without deception, whether of others or oneself.

Being fair means treating others with respect and consideration, whether in citing a colleague’s ideas in a paper or mentoring a student in the proper conduct of research. In research—as in life—scientists and scholars should treat others as they hope and expect to be treated in return.

Objectivity implies that researchers try to look beyond their own preconceptions and biases to the empirical evidence that justifies conclusions. Researchers cannot totally eliminate the influence of their own perspectives from their work, but they can strive to be as objective as possible.

Research communities over many years have developed methods to enhance the reliability of the results they obtain, and researchers have an obligation to adhere to these methods or demonstrate that an alternative approach does not reduce the reliability of research results.

An allegiance to empirical evidence requires that researchers maintain a degree of skepticism toward research results and conclusions so that results and explanations are continually reexamined and improved.

Researchers are accountable to other researchers, to the broader society, and to nature. If challenged, they cannot appeal to authority but must demonstrate that their results or statements are reliable.

Finally, researchers need to be open with others for research to progress. All researchers deserve to work independently as they balance the competing considerations of “what if?” and “what if I am wrong?” But they ultimately need to convey to others their conclusions and the evidence and reasoning on which their conclusions are based so that those conclusions can be examined and extended. This requires careful storage of data and making data available to colleagues whenever possible.

The primacy of these seven values explains why trust is a fundamental characteristic of the research enterprise. Researchers expect that their colleagues will act in accord with these values. When a researcher violates one of the values, that person’s trustworthiness is diminished among other researchers. In addition, the public’s trust in research can be damaged, with harmful effects on the entire research community.
The report includes an interesting passage on scientists as advocates:
The public’s trust in research depends on the honesty, openness, and objectivity of researchers in communicating the results of research to those outside the research community. This responsibility can take time away from research, but public communication is essential given the pervasive influence of research on the broader society.
Researchers have the same rights as all other people in expressing their opinions and seeking to influence public policy. But researchers must be especially careful to distinguish their roles as specialists and as advocates.
Researchers who choose to be advocates have a special responsibility to themselves and to the research community to be very open and honest about the support for the statements they make. Researchers should resist speaking or writing with the authority of science or scholarship on complex, unresolved topics outside their areas of expertise. Researchers can risk their credibility by becoming advocates for public policy issues that can be resolved only with inputs from outside the research community. . .

At the same time, all researchers have information of value that they can convey to policy makers and the public, and researchers are particularly well suited to act as honest brokers to untangle basic facts from economic, social, and political considerations.
The report offers two recommendations on science in policy:
  • Researchers need to communicate the policy implications of their results clearly and comprehensively to policy makers and the public—including a clear assessment of the uncertainties associated with their results—while avoiding advocacy based on their authority as researchers.
  • Scientific policy advice to governments, industry, or nongovernmental organizations should undergo peer review and should not be made from an advocacy perspective.
While the sentiment here is undoubtedly correct, an admonition against advocacy offers little practical guidance to the scientist or scientific organization.  The idea that scientists are in a position to "untangle basic facts from economic, social and political considerations" puts scientists in an impossible position, as on most highly politicized issues, appeals to "just the facts" can actually lead to a further politicization of the science.

Instead, scientists need to develop skills in integrating science with economic, social and political considerations, and appreciate that in doing so, they have choices in how to engage policy and politics. The route to responsible research practices in policy relevant science is through enlightened engagement, not artificial distance.

16 October 2012

New Bridges Column: The Origins of "Basic Research"

My latest column for Bridges is out and it is titled, The Origins of "Basic Research." Here is how the column starts out:
In any discussion of government science policies, it rarely seems to take long for someone to invoke the notion of "basic research." For instance, writing in The Washington Post last month, Alan Leshner, CEO of the American Association for the Advancement of Science, and US Representative Jim Cooper (D-TN) announced the "Golden Goose Awards" to "recognize the often-surprising benefits of science to society."

In their appeal for more funding for scientific research, Leshner and Cooper argued that: "Across society, we don't have to look far for examples of basic research that paid off." They cite the creation of Google as a prime example of such payoffs: "Larry Page and Sergey Brin, then a National Science Foundation [NSF] fellow, did not intend to invent the Google search engine. Originally, they were intrigued by a mathematical challenge ..."

The appealing imagery of a scientist who simply follows his curiosity and then makes a discovery with a large societal payoff is part of the core mythology of post-World War II science policies. The mythology shapes how governments around the world organize, account for, and fund research. A large body of scholarship has critiqued postwar science policies and found that, despite many notable successes, the science policies that may have made sense in the middle of the last century may need updating in the 21st century.

In short, investments in "basic research" are not enough. Benoit Godin has asserted (PDF) that: "The problem is that the academic lobby has successfully claimed a monopoly on the creation of new knowledge, and that policy makers have been persuaded to confuse the necessary with the sufficient condition that investment in basic research would by itself necessarily lead to successful applications." Or as Leshner and Cooper declare in The Washington Post: "Federal investments in R&D have fueled half of the nation's economic growth since World War II."

A closer look at the actual history of Google reveals how history becomes mythology. The 1994 NSF project that funded the scientific work underpinning the search engine that became Google (as we know it today) was conducted from the start with commercialization in mind: "The technology developed in this project will provide the 'glue' that will make this worldwide collection usable as a unified entity, in a scalable and economically viable fashion." In this case, the scientist following his curiosity had at least one eye simultaneously on commercialization.
To read the rest head here.  To read the underlying research paper see it here in PDF. All of my past Bridges columns can be found here. And to read the latest full issue of the always-excellent Bridges, go here.

14 October 2012

Gordon vs. Gordon on Growth

This post is a follow up to the discussion on this blog a few weeks ago of economist Robert Gordon's provocative paper which asked, "Is US economic growth over?". Gordon's paper has been much discussed in the pages and blogs of the New York Times, Washington Post, Financial Times and elsewhere. Remarkably, the wide-ranging discussion took place with no one figuring out that Gordon's claims are in error and based on a selective and non-representative parsing of the data. (The FT did publish a letter of mine, however.)

Let's review. Gordon claims that US per capita economic growth has "steadily" declined from 1950 to 2007 (prior to the recent economic crisis) and uses this history as the basis for projecting a continued decline to 2100. Gordon goes further into the past to suggest that the economic growth of the past century was a "one time event."

As I am in the early stages of working on a book on innovation and growth, upon reading Gordon's analysis I sought to replicate it as the claims being made were counter to data that I have been working with. So I wanted to understand why. Gordon explained in his paper that he arrived at a stair step decline in US per capita GDP growth as follows:
Each of the successive periods after 1950 exhibits a downward step in per-capita real GDP growth, with steps downward marked at 1964, 1972, and 1987.
Gordon confirmed these breakpoints in an email response to a query, explaining that they represented periods of "normal" business cycle utilization. However, when I sought to replicate this analysis I came up with these numbers for per capita GDP growth rates:
1950 to 1964 - 2.1%
1964 to 1972 - 2.9%
1972 to 1987 - 2.1%
1987 to 2006 - 1.9%
Clearly, this was not a stair step decline. In fact, aside from a brief increase in growth rates in the period 1964 to 1972, there was exceedingly little change in per capita growth rates since 1950 (and in fact, since 1870).
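For readers who want to check such numbers themselves, the compound annual growth rate between two breakpoint years is straightforward to compute. A minimal sketch (the per capita GDP levels here are illustrative placeholders, not the actual BEA figures):

```python
# Compound annual growth rate (CAGR) between two years, in percent.
def cagr(start_value, end_value, years):
    return ((end_value / start_value) ** (1.0 / years) - 1.0) * 100.0

# Hypothetical per capita GDP levels (constant dollars) at Gordon's
# breakpoints -- placeholder numbers, NOT the actual BEA data.
gdp = {1950: 15000, 1964: 20100, 1972: 25300, 1987: 34500, 2006: 49000}

for start, end in [(1950, 1964), (1964, 1972), (1972, 1987), (1987, 2006)]:
    print(f"{start} to {end}: {cagr(gdp[start], gdp[end], end - start):.1f}%")
```

Substituting the actual BEA per capita GDP series at each breakpoint year is all it takes to reproduce the rates listed above.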

In a further email exchange, which included Martin Wolf of the FT, Gordon admitted that his paper had an error. There was in fact no breakpoint in 1964. He had instead combined the first two periods in his figure, which, when appended to his pessimistic forecast for 2007-2027, gives the appearance of a stair-step decline, even though the data from 1950 to 2007 shows exceedingly little decline. The error in the paper leads to a replication which counters the claims being made -- unfortunate but true.

After waiting more than a week, I asked Gordon if he was going to correct the error in his paper and he replied that it was not worth making a fuss about. Academics can be prickly about critique, especially from outside the home discipline, but it is standard practice to acknowledge and correct errors, especially in work that is widely discussed. In Gordon's case, correcting the error would likely require an explanation why the decline only exists with the choice of certain breakpoints and not others -- which takes Gordon's analysis down a difficult, perhaps unrecoverable path.

On this issue I will let a very prominent and widely respected economist have the last word. The graph and conclusions below are perfectly in sync with my own critique of Gordon (2012). They come from a 2004 paper titled "Two Centuries of Economic Growth: Europe Chasing the American Frontier." There this respected economist wrote (emphases added):
Plotted on a log scale for selected years that are "neutral" to the business cycle, i.e., excluding the influence of recessions, depressions, and wars, the most striking fact displayed in Figure 1 is the steady and inexorable advance of U.S. real GDP per capita. While the growth rate is slower in the first 50 years, between 1820 and 1870, subsequent to 1870 there is surprisingly little variation around the 1870-2000 average growth rate of 1.81 percent per year, a rate sufficient to double the standard of living every 38 years. The major acceleration above the long-run trend comes, surprisingly, not in the early postwar years but in the decade 1963-73, followed by a retardation back to trend in 1987-94.
That prominent economist? Robert Gordon.

12 October 2012

Op-Ed on Climate Change and Disasters in The Denver Post

After Munich Re announced earlier this week for the eleventy-third time that it had found the elusive "footprint" of climate change in disaster loss data (it had apparently been hiding in the tail section of Amelia Earhart's long-lost plane, who knew?), I was invited by The Denver Post to write an op-ed.

Here is how it starts:
Over the years, the political debate over climate change has been waged on many fronts. At various times at the center of the debate, we've seen green jobs, SUVs, Al Gore and climate "deniers." The latest front in this battle is extreme weather.

Earlier this week, Munich Re, a large German reinsurance company, fueled this debate with a report claiming that it has identified "the first climate change footprint in the data from natural catastrophes" in the damage caused by thunderstorms in the United States since 1980. USA Today put the claim on steroids by announcing on its front page, "Climate change behind rise in weather disasters."

A big problem with the claim by Munich Re and its amplification by the media is that neither squares with the actual science of climate change and disasters.
To read the rest, see it here, and please feel welcome to come back here and comment.

08 October 2012

New Paper: Normalized Tornado Damage in the United States: 1950-2011

I am a co-author on a new paper just accepted for publication in the journal Environmental Hazards titled "Normalized Tornado Damage in the United States: 1950-2011." My co-authors are Kevin Simmons (Austin College) and Daniel Sutter (Troy University). This blog post provides an FAQ on the paper, which the journal's editor tells us is expected to be formally published no later than the first quarter of 2013.

What is the title and abstract of this new paper?
Normalized Tornado Damage in the United States: 1950-2011

in press, Environmental Hazards

Kevin M. Simmons, Daniel Sutter and Roger Pielke, Jr.


In 2011, thunderstorms in the United States resulted in 550 deaths from tornadoes and more than $28 billion in property damage, according to data from the US National Oceanic and Atmospheric Administration, with the vast majority of economic losses resulting from tornadoes. This paper normalizes U.S. tornado damage from 1950 to 2011 using several methods. A normalization provides an estimate of the damage that would occur if past events occurred under a common base year's societal conditions. We normalize for changes in inflation and wealth at the national level and changes in population, income and housing units at the county level. Under several methods, there has been a sharp decline in tornado damage. This decline corresponds with a decline in the reported frequency of the most intense (and thus most damaging) tornadoes since 1950. However, quantification of trends in tornado incidence is made difficult due to discontinuities in the reporting of events over time. The normalized damage results are suggestive that some part of this decline may reflect actual changes in tornado incidence, beyond changes in reporting practices. In historical context, 2011 stands out as one of the most damaging years of the past 61 and provides an indication that maximum damage levels have the potential to increase should societal change lead to increasing exposure of wealth and property.
Why is this paper important?
This is the first paper to comprehensively "normalize" historical economic losses from US tornadoes. Normalization methods have been widely applied to phenomena around the world, including US hurricanes and Australian bushfires. We use damage data collected by the National Oceanic and Atmospheric Administration, a US government agency which houses the Storm Prediction Center. We use three methods to adjust the data to a common base year (2011). The analysis allows us to say something about damage over time, in this case since 1950. The Figure below shows normalized damage 1950-2011 for one of our adjustment methods.
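The basic logic of normalization can be sketched in a few lines. The multiplicative structure below follows the general approach described in the abstract (adjusting for inflation, population, and wealth); the function name and all numbers are illustrative placeholders, not taken from the paper:

```python
# Sketch of loss normalization: rescale a historical loss to a base
# year's societal conditions. Each ratio is (base-year value) /
# (event-year value) for that factor. The structure mirrors the general
# approach described in the abstract; the numbers are made up.
def normalize_loss(loss, inflation_ratio, population_ratio, wealth_ratio):
    return loss * inflation_ratio * population_ratio * wealth_ratio

# A hypothetical $100M loss in 1970, restated under 2011 conditions:
print(normalize_loss(100e6, inflation_ratio=5.8,
                     population_ratio=1.5, wealth_ratio=1.8))
```

The county-level adjustments in the paper are finer-grained than this, but the idea is the same: ask what a past event would cost if it struck today's society.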
What do you find?
  • Overall we find a decrease in damages since 1950.
  • Even so, 2011 was one of the 3 most costly years in our dataset.
  • Our dataset includes 56,457 tornadoes, of which 33,746 caused some recorded damage.
  • Since 1950, in round numbers, tornadoes resulted in about half the normalized damage of hurricanes and twice that of earthquakes.
  • The strongest two categories of tornadoes (called EF4 and EF5) represent about 1% of all reported events but have caused almost 45% of all normalized damage.
  • The most damage per square mile from 1950-2011 has occurred in Massachusetts and Connecticut.
  • The most damage overall from 1950-2011 has occurred in Texas and Alabama.
  • During the calendar year, 80% of damage occurs January-June.
  • The most damaging months are April (31%), May (20%) and June (16%).
What about long-term trends in damage and tornado incidence?
Here is what we say in the paper:

 "The normalized results are also suggestive that the long-term decrease in reported tornado incidence may also have a component related to actual, secular changes in tornado incidence beyond reporting changes. To emphasize, we do not reach any conclusion here stronger than “suggestive” and recommend that this possibility be subject to further research, which goes beyond the scope of this study.

On climate time scales there is no indication of increasing incidence of tornadoes, and the increases documented over the short (sub-climate) period 2000-2011 are strongly influenced by the large number of events documented in 2011. However, the decreased frequency of high damage events in recent decades as compared to previous decades is a notable feature in the time series and provides strong counter-evidence to claims found in the scientific literature that the atmospheric environment that spawns tornadoes has intensified leading to more intense events on climate time scales (e.g., Trenberth, 2012). Such claims are commonly found in the popular media and also in the insurance/reinsurance sector where they influence public opinion and decision making in business and government. The most recent review by the IPCC found no basis for claiming an increase (or decrease) in tornado incidence or intensity (IPCC, 2012)."
How can I get an advance copy?
If you would like a copy of the accepted paper (that is, before the final revisions and pre-proof) you can send an email request to me, rpielkejr@gmail.com.

04 October 2012

A Letter in the FT in Response to Martin Wolf

The FT has published a letter from me responding to Martin Wolf's column earlier this week on economist Robert Gordon's claims of a stair-step decline in per capita US GDP growth rates since 1950 as evidence supporting a thesis of a looming collapse in growth.

Here is an excerpt from my letter (the link to the graph is added here):
From 1950 to 2006, the period covered by Prof Gordon’s analysis, the rate of growth in US per capita GDP changed very little if at all, according to the Bureau of Economic Analysis. Debating the prospects for future economic growth in the context of innovation and productivity is important; however, such arguments should be grounded in accurate data. Prof Gordon describes his thesis as a “provocative fantasy” – a characterisation that I find to be completely accurate.
The full letter is here. Background on this issue can be found here and here and here.

02 October 2012

Nullius in Verba Works for Economics Too

Writing in the Financial Times, Martin Wolf, one of my favorite economists, is the latest to uncritically accept Northwestern economist Robert Gordon's claims that there was no per capita economic growth before 1750 and that US GDP growth rates plummeted from 1950 to 2006. As we've seen, both claims are incorrect.

Wolf writes:
For most of history, next to no measurable growth in output per person occurred. What growth did occur came from rising population.
This is not the case. Per capita economic growth was significant and economically transformative prior to 1750, according to the research cited by Gordon for the United Kingdom (PDF).

More important than what happened before 1750, Wolf republishes Gordon's "stair-step decline" graph, which shows rates of per capita GDP growth dropping from about 2.5% per year in 1950 to about half that a half-century later. As I have shown from multiple datasets, that trend is not supported by the data.
A linear trend on rates of change in per capita GDP from 1950 to 2006 shows very little deviation from a 2.0% growth rate, as indicated in the graph above.
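For the curious, fitting a linear trend to a series of annual growth rates is a one-function exercise. A minimal least-squares sketch (the growth series here is a synthetic flat 2%, purely to illustrate that a trendless series yields a zero slope; it is not the actual BEA data):

```python
# Least-squares slope of annual growth rates on year: a trendless series
# should give a slope of (essentially) zero. Synthetic data only.
def ols_slope(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

years = list(range(1950, 2007))
rates = [2.0 for _ in years]    # a perfectly flat 2% growth series

print(ols_slope(years, rates))  # -> 0.0
```

Running the same function on the real 1950-2006 growth-rate series is how one checks whether the claimed decline shows up as a meaningfully negative slope.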

How might one fool oneself into thinking otherwise?
Consider the graph above which shows change in per capita growth rates for each decade from 1885 (that is, the first point shows per capita GDP growth from 1885-1895, the second point 1895-1905, etc., based on Maddison data cited by Gordon). The data might arguably be used to support a claim of a slowdown in growth rates since 1950.
But hold on a second. Consider the graph immediately above which shows the same Maddison data, except the data begins in 1880 (i.e., the first point shows the growth rate in per capita GDP 1880-1890, the second point 1890-1900). This way of analyzing the data shows what seems to be an increase in per capita GDP growth rates from mid-century.

Which is correct? Well, they both are "correct" from a mathematical perspective. From an economic perspective, what the two graphs tell you is that the data is noisy and start and end points for analysis make a big difference. The different pictures also offer a warning about the perils of taking trends off of smoothed data.
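The start-point sensitivity is easy to demonstrate. The sketch below computes decade-average growth rates from the same annual series using windows beginning in 1880 versus 1885; the data are synthetic noise around a flat 2% trend, not the Maddison series:

```python
import random

# Synthetic annual per capita GDP growth rates, 1880-2006: noise around
# a flat 2% trend. Illustrative data only, NOT the Maddison series.
random.seed(0)
years = list(range(1880, 2007))
annual_growth = [2.0 + random.gauss(0, 1.5) for _ in years]  # percent/year

def decade_means(start_year):
    """Mean growth over successive 10-year windows beginning at start_year."""
    i0 = years.index(start_year)
    return [sum(annual_growth[i:i + 10]) / 10.0
            for i in range(i0, len(years) - 9, 10)]

# Identical underlying data, different window start points: the two
# sequences of decade averages differ, though the true trend is flat.
print(decade_means(1880))
print(decade_means(1885))
```

Smoothing noisy data into decade blocks and then eyeballing a trend lets the arbitrary choice of window start do the analytical work, which is exactly the trap described above.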
Above is all of the data at annual resolution from 1870 to 2006. The data show a remarkable lack of trend (red line) over 136 years. There is a notable change in variability following World War II which presumably reflects the impacts of explicit monetary policies.

But over the long term, is there evidence of a great slowdown in rates of per capita economic growth? The answer is no.

Martin Wolf has promised me that he'd take a look at the data himself, which I appreciate. As the motto of the Royal Society says, nullius in verba. Mistakes can easily occur in any analysis. The important thing is what is done once they are recognized.

01 October 2012

Is US Per Capita GDP Growth in Decline?

UPDATE 10/2: Mislabeled graph fixed at the bottom of this post. Thanks to Mark Bahner in the comments!

In my column last week at The Breakthrough Institute on Robert Gordon's analysis of US per capita economic growth, I identified what I believe to be an error in his calculation of post-1950 growth rates. Such an error matters because Gordon's recent discussion paper has been called “the summer’s most talked about working paper in economics” and argues a data-based case for US decline. My view is that such discussions should at least start with a solid empirical basis, and that re-checking data-based assertions is fair game (not all agree, however).

Gordon has written the following to explain his main graph, which appears just below:
"Each of the successive periods after 1950 exhibits a downward step in per-capita real GDP growth, with steps downward marked at 1964, 1972, and 1987."
I cannot reconcile Gordon's claim with the actual data on the US economy. The data on growth rates (from BEA and US Census, which Gordon has confirmed to me as the correct datasets to use) shows the following rates of change in per capita GDP growth at the break points identified by Gordon:
1928 to 1950 - 2.4%
1950 to 1964 - 2.1%
1964 to 1972 - 2.9%
1972 to 1987 - 2.1%
1987 to 2006 - 1.9%
These numbers do not show successive downward steps in US GDP per capita growth rates from 1950. In fact, the peak is right in the middle of the periods, and otherwise the growth rates are very similar through the period. The calculation of trends on the time series is very sensitive to the start and end point chosen for analysis -- which should raise a red flag for chartist-based arguments.

This discrepancy between our analyses of the data explains why the linear trend I calculate from 1950 to 2006 shows a tiny decrease, essentially no decline, contrary to his sharp decline graphed in red in the figure above. The graph at the top of this post shows data from FRED from 1960 (start of that dataset) on US per capita GDP (plotted on a log scale). Similarly, it shows no evidence of any stair-step decline in growth rates, certainly not the decrease by half illustrated in Gordon's graph.

I have invited Gordon to address the discrepancy in our datasets and help me to identify where I have made a mistake or perhaps if his data is in error. Readers are welcomed to chime in.

UPDATE: In the original graph posted at The Breakthrough Institute I showed ten-year growth rates for the Maddison dataset rather than the annual values as stated in the graph title. The annual values are shown below. There are no implications for the analysis -- the data is the same. Sorry for the error.