30 November 2009

Evaluation of RMS Hurricane Damage Forecast 2006-2010

In the spring of 2006 (and annually since), a company called Risk Management Solutions (RMS) issued a five-year forecast of hurricane activity (for 2006-2010). RMS predicted that U.S. insured hurricane losses would be 40% higher than the historical average. RMS is the global leader in so-called "catastrophe modeling." Its loss models are used by insurance companies to set rates charged to homeowners, by reinsurance companies to set the rates they charge to insurers, by ratings agencies to evaluate risks, and by others.

In 2007 I produced an initial verification of the RMS forecast by comparing actual losses over two hurricane seasons with the prediction, and suggested that the forecast was already off track. With the end of the 2009 North Atlantic hurricane season today, we now have two more years of data (for a total of four years) to use in evaluating the five-year 2006 RMS forecast. The figure below shows the RMS forecast in the context of the historical average (insured) losses and the actual losses, all expressed on an annual basis. All data comes from the ICAT Damage Estimator, and insured losses are calculated as 50% of total losses. (Note that 2009 had essentially no losses.) The figure at the top of this post shows the same data on a cumulative basis.

For the first four years the RMS 2006 forecast has obviously performed poorly when compared to the historical average. If the forecast is to be evaluated on an annual basis -- which is how I interpret the intent of RMS -- then the forecast is a bust regardless of what happens in 2010, as the historical average has already proved superior in three of the five years 2006-2010. If the forecast is instead viewed cumulatively over five years (which I understand is not the intent of RMS), it can still be interpreted as a success if 2010 sees $23 billion or more in insured losses, i.e., a $46 billion season in total losses. There are 7 of 110 years in our dataset that saw this much damage or more, giving about a 6% chance of such an event based on climatology.
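The climatological probability quoted above is simple empirical arithmetic. A minimal sketch in Python, using placeholder numbers that stand in for the 110-year loss record (not the actual ICAT data):

```python
# Sketch: empirical chance that one season reaches a given loss threshold.
# The data below are illustrative placeholders, not the real ICAT record.

def exceedance_probability(annual_losses, threshold):
    """Fraction of seasons with losses at or above `threshold`."""
    hits = sum(1 for loss in annual_losses if loss >= threshold)
    return hits / len(annual_losses)

# If 7 of 110 seasons reached the $46bn (total loss) threshold, the
# empirical probability is 7/110, about 6%.
example = [0.0] * 103 + [50.0] * 7   # stand-in for 110 years of losses ($bn)
print(round(exceedance_probability(example, 46.0), 3))  # → 0.064
```

This is the same counting exercise as "7 of 110 years," just made explicit.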

For further reading:

I participated in the 2008 RMS expert elicitation and provided a critique of it here and the expert elicitor responded (not on behalf of RMS though) here. I argue why it is that 5-year predictions are a poor substitute for the historical record in this peer-reviewed paper:
Pielke, Jr., R.A., 2009. United States hurricane landfalls and damages: Can one- to five-year predictions beat climatology?, Environmental Hazards, Vol. 8, pp. 187-200.


  1. I could have saved the insurance companies a lot of money. For the price of a beer I could have told them that the best predictor of next year's hurricane season is this year's hurricane season. I'm pretty confident that my prediction would be superior to the pseudo-science gobbledygook produced by RMS.

    This is a classic problem of business management. Lower level management wants to use "science" to cover their asses in case things go wrong. So they hire consultants with their color glossy brochures and pseudo-science jargon, and when the company loses money, they can always say that they did their best and relied on the latest "science." It's no different from hiring handwriting experts to help you hire employees - when an employee goes bad, Human Resources can always say they did their best - they relied on "experts."

  2. "I could have saved the insurance companies a lot of money. For the price of a beer I could have told them that the best predictor of next year's hurricane season is this year's hurricane season."

    For free (I don't even need a beer), I could have told them that an even better predictor than the last hurricane year would be the average of all hurricane years in the modern period (i.e., since airplane/satellite data were available) adjusted for inflation and property values, à la the work RP Jr. et al. have done.

    I would have an even better free suggestion--better for society, not necessarily the insurance companies--that they lobby the federal government or state governments bordering the Gulf and East coasts to develop portable hurricane mitigation measures (e.g. storm surge protection system measures, and portable inland flooding prevention involving inflatable walls).

    If one looks at the "actual damage" line in the second graph, the only year with any significant damage is 2008. There is essentially no damage in 2006, 2007, and 2009. One beauty of portable systems is that they do not need to be deployed when they are not needed. This can be contrasted to levees, for example, which must be built and stay in place regardless of whether a storm ever hits the place where the levee is located.

  3. Judith in 2006 advising the US Congress (aka The Money Pit) . . . wonder if she still feels so confident about the AR4 contents.

    “In addition to my own personal research experiences in the Arctic, a series of national and international assessments undertaken by the Intergovernmental Panel on Climate Change (IPCC), the U.S. National Academies, and the U.S. Climate Change Science Program have made it very difficult to maintain a credible position of scientific skepticism regarding the influence of humans on global warming. The past year has seen striking resolutions to two controversies involving the data record of climate change that support anthropogenic greenhouse warming: the synthesis report on the surface temperature reconstructions over the past two millennia (NAS, 2006) and the synthesis and assessment report on temperature trends in the lower atmosphere (CCSP, 2006).
    Further, the draft IPCC 4th Assessment Report presents climate model simulations that are far more sophisticated and accurate than were available in prior assessments, substantially increasing the credibility of such simulations and the associated projections. The cautious conclusions of the large body of scientists contributing to these assessment reports by evaluating a large body of published research are extremely important in providing a balanced overview of the state of knowledge in the scientific research community. Based upon these assessments, our understanding of how the climate system works, while incomplete, is more than sufficiently robust to afford a basis for rational action.”

  4. True to your word as always Roger. Thanks for the write-up. I'd still love to see the inner workings of the RMS models. It still amazes me states are allowing people to be charged via a secret... um, I mean proprietary computer model.

  5. Roger.
    My understanding is that any catastrophe model provides a probability distribution for annual losses. Evaluating one against observation then means inferring a distribution from 5 observations (if the forecast is for annual losses), or from a single observation comprising the 5 years of losses combined. In both cases it is meaningless to provide a forecast (in my opinion, even to call it a forecast) or to compare results with observation over such a short record. Any way I look at it, I see no reason why a probabilistic model should have an average annual loss differing from the historical record. Effort should instead be put into inferring the "correct" loss distribution (assuming it doesn't change with time) and/or providing probability distributions for losses under specific assumptions about how things will change. Anyway, this is a quite personal view and I would appreciate knowing what you think.

  6. -5-Felipe

    Thanks, if I understand your comments correctly you are talking about how such models should be used, as opposed to how they are actually used.

    When RMS came out with an approach distinct from the historical record it was (and still is) quite controversial. See:


  7. Yes, but I'm also talking about how a model should be compared with experience. My opinion is that the only way of correctly evaluating RMS (or any other model) is to - hypothetically, of course - observe 100 years from 2005 (all else the same) and then check whether the observed distribution matches the model. In this view I agree with you that past experience is the best guess we have for the future.

    Evidently, if RMS is calling their prediction a forecast, it should be compared with real data. But a true forecast must use the present state of nature as the initial condition for equations that evolve in time, in which case RMS (or any model that calls itself a forecast) should say in which year the chance of a loss is greater. This amounts to saying what the chance of rain is on November 1st, 2014. From my experience no model works this way, and there is little chance any ever will. The thought that relying on opinion would be a way of doing it makes me feel worse than watching Twilight. But again, this is just my personal opinion.
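    The commenter's small-sample objection can be illustrated with a toy simulation. This is only a sketch with made-up lognormal losses (all numbers hypothetical, not the RMS model or the actual loss record): the mean loss over a 5-season window varies so widely that a window 40% above the long-run mean is entirely consistent with ordinary climatology.

```python
import random

random.seed(1)

def five_year_mean(mean_loss):
    """Mean annual loss ($bn) over a 5-season window, drawn from a toy
    skewed loss model (lognormal, scaled so its long-run mean is mean_loss)."""
    # E[lognormvariate(0, 1)] = exp(0.5) ≈ 1.6487, so divide it out.
    draws = [random.lognormvariate(0, 1) * mean_loss / 1.6487 for _ in range(5)]
    return sum(draws) / 5

# 1000 hypothetical 5-year windows from a "climatology" with a $10bn mean.
means = [five_year_mean(10.0) for _ in range(1000)]
print(f"5-year means ranged from ${min(means):.1f}bn to ${max(means):.1f}bn")
```

    With that much spread across equally plausible 5-year windows, neither issuing nor verifying a 5-year loss forecast against 4-5 seasons of data can distinguish it from climatology, which is the commenter's point.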

  8. It should be obvious, but the vertical axes in the graphs are in billions of 2009 USD.

  9. And the roll continues.

    I have STILL yet to see a forecast that incorporates the "consensus" science on climate change.

    Yet we STILL await any sort of admission of any falsification of any part of the hypothesis (even the subjective measures of certainty).

    Phil Jones' global mean temperature forecast (comparing 2006-2010 with 2001-2005) is another busted flush on the horizon: