14 November 2010

The $82 Billion Prediction

The Sarasota Herald-Tribune has a revealing article today about the creation in 2006 of a "short-term" hurricane risk prediction by a company called Risk Management Solutions.  The Herald-Tribune reports that the prediction was worth $82 billion to the reinsurance industry.  It was created in just four hours by four hurricane experts, none of whom was apparently informed of the purposes to which their expertise would be put.  From the article:
Hurricane Katrina exacted a terrifying toll -- 1,200 dead, a premier American city in ruins, and the nation in shock. Insured losses would ultimately cost the property insurance industry $40 billion.

But Katrina did not tear a hole in the financial structure of America's property insurance system as large as the one carved scarcely six weeks later by a largely unknown company called Risk Management Solutions.

RMS, a multimillion-dollar company that helps insurers estimate hurricane losses and other risks, brought four hand-picked scientists together in a Bermuda hotel room.

There, on a Saturday in October 2005, the company gathered the justification it needed to rewrite hurricane risk. Instead of using 120 years of history to calculate the average number of storms each year, RMS used the scientists' work as the basis for a new crystal ball, a computer model that would estimate storms for the next five years.

The change created an $82 billion gap between the money insurers had and what they needed, a hole they spent the next five years trying to fill with rate increases and policy cancellations.

RMS said the change that drove Florida property insurance bills to record highs was based on "scientific consensus."

The reality was quite different.
Here is the agenda for that 2005 workshop in Bermuda. The Herald-Tribune's description of the meeting that led to the $82 billion financial impact beggars belief:
The daily papers were still blaring news about Katrina when Jim Elsner received an invitation to stay over a day in Bermuda.

The hurricane expert from Florida State University would be on the island in October for an insurance-sponsored conference on climate change. One of the sponsors, a California-based company called RMS, wanted a private discussion with him and three other attendees.

Their task: Reach consensus on how global weather patterns had changed hurricane activity.

The experts pulled aside by RMS were far from representative of the divided field of tropical cyclone science. They belonged to a camp that believed hurricane activity was on the rise and, key to RMS, shared the contested belief that computer models could accurately predict the change.

Elsner's statistical work on hurricanes and climatology included a model to predict hurricane activity six months in advance, a tool for selling catastrophe bonds and other products to investors.

There was also Tom Knutson, the National Oceanic and Atmospheric Administration meteorologist whose research linking rising carbon dioxide levels to potential storm damage had led to censoring by the Bush White House.

Joining them was British climate physicist Mark Saunders, who argued that insurers could use model predictions from his insurance-industry-funded center to increase profits 30 percent.

The rock star in the room was Kerry Emanuel, the oracle of climate change from the Massachusetts Institute of Technology. Just two weeks before Katrina, one of the world's leading scientific journals had published Emanuel's concise but frightening paper claiming humanity had changed the weather and doubled the damage potential of cyclones worldwide.

Elsner said he anticipated a general and scholarly talk.

Instead, RMS asked four questions: How many more hurricanes would form from 2006 to 2010? How many would reach land? How many the Caribbean? And how long would the trend last?

Elsner's discomfort grew as he realized RMS sought numbers to hard-wire into the computer program that helps insurers set rates.

"We're not really in the business of making outlooks. We're in the business of science," he told the Herald-Tribune in a 2009 interview. "Once I realized what they were using it for, then I said, 'Wait a minute.' It's one thing to talk about these things. It's another to quantify it."

Saunders did not respond to questions from the Herald-Tribune. Knutson said if RMS were to ask again, he would provide the same hurricane assessment he gave in 2005.

But Emanuel said he entered the discussion in 2005 "a little mystified" by what RMS was doing.

He now questions the credibility of any five-year prediction of major hurricanes. There is simply too much involved.

"Had I known then what I know now," Emanuel said, "I would have been even more skeptical."

Elsner's own frustration grew when he attempted to interject a fifth question he thought critical to any discussion of short-term activity: Where would the storms go?

The RMS modelers believed Florida would remain the target of most hurricane activity. Elsner's research showed storm activity shifted through time and that it was due to move north toward the Carolinas.

But RMS' facilitator said there was not enough time to debate the matter, Elsner said. There were planes to catch.

In the end, the four scientists came up with four hurricane estimates -- similar only in that they were all above the historic average.

RMS erased that difference with a bit of fifth-grade math. It calculated the average.

Thus, the long-term reality of 0.63 major hurricanes striking the U.S. every year yielded to a prediction of 0.90.

Contrary to Elsner's research, RMS aimed most of that virtual increase at Florida.

On paper, it was a small change from one tiny number to another tiny number.

Plugged into the core of a complex software program used to estimate hurricane losses, the number rewrote property insurance in North America.

Risk was no longer a measure of what had been, but what might be. And for Floridians living along the Atlantic, disaster was 45 percent more likely.

RMS defended its new model by suggesting it had brought scientists together for a formal, structured debate.
Elsner disputes that idea.

"We were just winging it," he said.
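As a quick arithmetic check on the excerpt's numbers: moving the annual rate of major U.S. landfalls from 0.63 to 0.90 is a jump of roughly 43 percent, in line with the article's "45 percent more likely" for Floridians (the Florida-specific allocation presumably accounts for the small difference). A minimal sketch, using only the two rates quoted above:

```python
# Rates quoted in the Herald-Tribune excerpt above:
long_term_rate = 0.63   # major U.S. hurricane landfalls per year (120-year average)
rms_rate = 0.90         # RMS's 2006-2010 "short-term" prediction

increase = (rms_rate - long_term_rate) / long_term_rate
print(f"Relative increase in annual risk: {increase:.0%}")  # roughly 43%
```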
I participated in the 2008 RMS expert elicitation, and at the time I explained that their methodology was biased and its outcome pre-determined.  A group of monkeys would have arrived at the exact same results.  Here is what I wrote then (please see that post for the technical details on the "monkeys" making predictions; the response and discussion with the RMS expert elicitor is here):
I have in the past been somewhat critical of RMS for issuing short-term hurricane predictions (e.g., see here and here and here). I don’t believe that the science has demonstrated that such predictions can be made with any skill, and further, by issuing predictions, RMS creates at least the appearance of a conflict of interest as many of its customers will benefit (or lose) according to how these predictions are made. . . .

The RMS expert elicitation process is based on questionable atmospheric science and plain old bad social science. This alone should lead RMS to get out of the near-term prediction business. Adding in the appearance of a conflict of interest from clients who benefit when forecasts are made to emphasize risk above the historical average makes a stronger case for RMS to abandon this particular practice. RMS is a leading company with an important role in a major financial industry. It should let its users determine what information on possible futures they want to incorporate when using a catastrophe model. RMS should abandon its expert elicitation and its effort to predict future hurricane landfalls for the good of the industry, but also in service of its own reputation.
RMS has since that time apparently shelved its expert elicitation process. My experiences prompted me to write up a paper on near-term predictions such as those employed by RMS, and it was published in the peer-reviewed literature:
Pielke, Jr., R.A. (2009). United States hurricane landfalls and damages: Can one- to five-year predictions beat climatology? Environmental Hazards, 8:187-200, ISSN 1747-7891, doi:10.3763/ehaz.2009.0017

This paper asks whether one- to five-year predictions of United States hurricane landfalls and damages improve upon a baseline expectation derived from the climatological record. The paper argues that the large diversity of available predictions means that some predictions will improve upon climatology, but for decades if not longer it will be impossible to know whether these improvements were due to chance or actual skill. A review of efforts to predict hurricane landfalls and damage on timescales of one to five years does not lend much optimism to such efforts in any case. For decision makers, the recommendation is to use climatology as a baseline expectation and to clearly identify hedges away from this baseline, in order to clearly distinguish empirical from non-empirical justifications for judgements of risk.
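The chance-versus-skill argument in the abstract can be illustrated numerically. As an illustrative simplification (not the method used in the paper), suppose major U.S. landfalls follow a Poisson process; then the count distributions implied by the climatological rate (0.63 per year) and the elevated RMS rate (0.90 per year) over a single five-year window overlap heavily:

```python
import math

def poisson_pmf(k: int, lam: float) -> float:
    """Probability of exactly k events when the expected count is lam."""
    return math.exp(-lam) * lam**k / math.factorial(k)

YEARS = 5
CLIM_RATE = 0.63   # long-term average major U.S. landfalls per year
RMS_RATE = 0.90    # the elevated RMS 2006-2010 rate

# Distribution of landfall counts over one five-year window under each rate
clim = [poisson_pmf(k, CLIM_RATE * YEARS) for k in range(15)]
rms = [poisson_pmf(k, RMS_RATE * YEARS) for k in range(15)]

# Overlap of the two distributions: the probability mass on which the two
# hypotheses are effectively indistinguishable from a single realization
overlap = sum(min(c, r) for c, r in zip(clim, rms))
print(f"Distribution overlap over one 5-year window: {overlap:.0%}")  # roughly 73%
```

With roughly three-quarters of the probability mass shared between the two hypotheses, a single five-year window of landfall counts is consistent with both; separating genuine skill from chance therefore requires many such windows, which is the abstract's "decades if not longer" point.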
At the same time that RMS was rolling out its new model in 2006, an RMS scientist was serving as a lead author for the IPCC AR4.  He inserted a graph (below) into the report suggesting a relationship between the costs of disasters and rising temperatures, when in fact the peer-reviewed literature said the opposite.
RMS earlier this year admitted that the inclusion of that graph was a mistake, as it could have been "misleading."

And, you might ask, how did that five-year "short-term" prediction from RMS, made for 2006-2010, actually pan out?  As you can see below, not so good.


  1. Are things like this not against the law? If they are not, it could be argued that they ought to be.
    Say we had been talking about a drug company or a company producing a prospectus?
    The world is not only more wicked than I imagined, it seems to be more wicked than I CAN imagine.

  2. Sounds like that age old practice known as 'charging what the traffic will bear.'
    The modern twist is to ask a couple of scientists for a wild guess to add a superficial gloss of scientific respectability, which allows a 45% hike in charges.
    Great returns if you can get them, and not fraud, since the marks willingly hand over the money. Ethical? I don't think so.

  3. I don't know why this surprises anyone. If you look at all the emission trading schemes to mitigate carbon, the benefits to the traders are enormous while the benefits to the climate are undetectable. The insurance industry is an important cog in the financial industry, so it's no surprise they worked their angle as well. After the abuse the nation has suffered at the hands of the financial industry over the last 10 years, I am absolutely amazed that the general population and our political leaders are not much more suspicious about Wall Street's motives in the climate change debate. I'm also a little surprised that more climate scientists have not come to the realization that they are pawns in this game.

  4. hmm... "We were just winging it".

    Now where have I heard that before?


    What a rigorous science club these guys run.

  5. Whether or not it's illegal, it is another appalling but potentially helpful example of where the financial incentives all too often lie in AGW theory and its supposed impacts. Many thanks to Roger for spelling it out. We look forward to seeing you in London next week!

    Richard Drake

  6. Now you are messing with the real money. Be careful.

  7. Now, what was that about "Deniers" being nothing but Shills for "Big Oil" etc?

  8. As I've stated before, insurance is a tax avoidance scheme. Money set aside for 'loss reserves' is non-taxable. The more money that has to be set aside, the faster 'tax free' assets grow.

    It's not at all surprising that insurers would be more than happy to encourage marginally scientifically defensible loss scenarios.

    Lots of folks like to talk about how the oil companies control the world, but it's actually the insurance companies that control the oil companies.

  9. Climate fix mentioned by Lomborg in this op-ed piece in The Australian:

  10. Been lurking for a while, but this is the first time I've wanted to comment. Wow. Please relay, and ask your readers to relay, to their members of Congress or other representatives. Malfeasance or misfeasance, this is purely getmorefeesance.

  11. This really does beg the question about who is the paid shill for what.
    By the way, the reason they did not hire chimps to do the study is because the costs of renting, transporting and keeping the chimps is much higher than that of hiring weather experts.

  12. "The paper argues that the large diversity of available predictions means that some predictions will improve upon climatology, but for decades if not longer it will be impossible to know whether these improvements were due to chance or actual skill. "

    The truth hurts the greedy. Bring back the octopus !

  13. Roger, how much revisionist history is being done by a few of these scientists? Judy and Peter don't get off the hook here either -- with their testimony for Jeb Bush...

  14. Is AGW a hoax perpetrated on mankind by Big Insurance? 8-P

  15. So, why do you and other scientists accept such expert elicitation invitations from commercial firms? What did you expect?

  16. Omniclimate asks: "Is AGW a hoax perpetrated on mankind by Big Insurance?" No...AGW is a hoax perpetrated on mankind by environmentalists who believe their noble cause (saving the planet, whatever that means) is so great that they have carte blanche to exaggerate the threat to that noble cause. The insurance companies are just taking advantage of the hoax, as are the carbon traders, wind farmers, environmental groups, politicians and so on.

    For real 'wrong-doing' to persist, it must hide itself under the banner of a noble cause.

  17. Roger,

    I believe you’ve encouraged misunderstanding about the use of catastrophe models as evident in the comments from Sean, Jack, Chuck, Richard, Frontiers of Faith and Science, Harrywr2, Nordlander, and Omniclimate. You state "As we approach the end of the 2010 hurricane season we are close to being able to offer a definitive evaluation of [the RMS 2006-2010] forecast." And your conclusion is "not so good" followed by a graph showing the large difference between the "RMS Prediction" and "Actual Damage".

    I hope your motivation is not to pander to fears about insurance companies overcharging by posting an "Evaluation of RMS Hurricane Damage Forecast 2006-2010". Shouldn't you be an expert in the field of catastrophe modeling before performing the evaluation? And if you are not, aren't you exposing your readers to the issue found by Sarewitz?

    If we assume that the readers of your blog identified above trust you to help them reach conclusions about the use of catastrophe models, then we shouldn't be surprised if they believe that cat models are a tool to rip off policy holders. But it is because they are relying on an utterly misleading argument that you've put forth in the form of this graph.

    I'm sure you understand the practical application of the RMS model and that the output has always been probabilistic. The conditions in the Atlantic that underpin a higher risk near-term view have existed since 1995. During this time, most 5-year periods have shown lower actual damage than the mean estimate from the model. This is to be expected! Catastrophe losses are volatile, and to compare the mean statistic from a probabilistic model with a 5-year realization is a misleading use of statistics.

    The model includes very devastating events with damages from a single occurrence exceeding $700 billion. These potential outcomes are included in the calculation of the mean estimate. In order to compare actual realizations with the model mean, you would need to have an historical record that includes these large events. I'm sure you understand this, but some of your readers may not, and they look to you to form their understanding.

    Why don't you just show the chance that a five-year sample from the RMS model would be below the actual record from 2006-2010? This would at least have some meaning and would not distort perceptions of the model's performance.

  18. -18-Tao Man

    Thanks for your comments, a few reactions:

    1. The RMS 5-year prediction is not a product of their model but of an expert elicitation process. That process, in which I participated in 2008, does not produce a probability distribution but a single number. None of the landfall prediction methods that RMS used as input to the elicitation process was probabilistic, and none generated a 5-year prediction of zero landfalling intense storms.

    2. You are correct that RMS should have approached the prediction probabilistically from the outset. They did not.

    3. Even if they had, there is no scientific basis for claims of skill in 5-year landfall or damage predictions, whether deterministic or probabilistic.

    For further discussion of these issues, see:


    I am quite comfortable labeling the RMS 2006-2010 prediction as "not good."

  19. "Go on take the money and run" - Steve Miller

    If the hurricane experts felt uncomfortable about what RMS was doing, why didn't they use better judgement and walk away?