In 2007 I produced an initial verification of the RMS forecast, comparing actual losses over two hurricane seasons with the prediction, and suggested that the forecast was already off track. With the end of the 2009 North Atlantic hurricane season today, we now have two more years of data (for a total of four years) to use in evaluating the five-year 2006 RMS forecast. The figure below shows the RMS forecast in the context of the historical average (insured) losses and the actual losses, all expressed on an annual basis. All data come from the ICAT Damage Estimator, and insured losses are calculated as 50% of total losses. (Note that 2009 had essentially no losses.) The figure at the top of this post shows the same data on a cumulative basis.
Over its first four years the RMS 2006 forecast has clearly performed poorly compared with the historical average. If the forecast is evaluated on an annual basis -- which is how I interpret the intent of RMS -- then it is a bust regardless of what happens in 2010, as the historical record has already proved superior in three of the five years 2006-2010. If the forecast is instead viewed cumulatively over five years (which I understand is not the intent of RMS), it can still be judged a success if 2010 sees $23 billion or more in insured losses -- equivalent to a $46 billion season in total losses, given the 50% insured share. Seven of the 110 years in our dataset saw this much damage or more, giving about a 6% chance of such an event based on climatology.
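The arithmetic behind that last figure can be sketched in a few lines. This is only an illustration of the calculation described above, not the actual ICAT dataset or methodology; the function names and the hard-coded counts (7 of 110 years) are taken straight from the text.

```python
INSURED_SHARE = 0.50  # insured losses assumed to be 50% of total losses, per the post

def total_from_insured(insured_billion):
    """Convert an insured-loss figure to total losses under the 50% assumption."""
    return insured_billion / INSURED_SHARE

def exceedance_probability(exceeding_years, total_years):
    """Climatological chance of a season at or above a damage threshold."""
    return exceeding_years / total_years

# $23 billion insured corresponds to a $46 billion total-loss season
threshold_total = total_from_insured(23)

# 7 of the 110 years in the dataset met or exceeded that threshold
p = exceedance_probability(7, 110)

print(f"Total-loss season needed in 2010: ${threshold_total:.0f} billion")
print(f"Climatological probability: {p:.1%}")
```

Running this gives a threshold of $46 billion and a probability of about 6.4%, consistent with the "about a 6% chance" quoted above.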
For further reading:
I participated in the 2008 RMS expert elicitation and provided a critique of it here; the expert elicitor responded (though not on behalf of RMS) here. I argue that five-year predictions are a poor substitute for the historical record in this peer-reviewed paper:
Pielke, Jr., R.A., 2009. United States hurricane landfalls and damages: Can one- to five-year predictions beat climatology?, Environmental Hazards, Vol. 8, pp. 187-200.