14 November 2010

The $82 Billion Prediction

The Sarasota Herald-Tribune has a revealing article today about the creation in 2006 of a "short-term" hurricane risk prediction from a company called Risk Management Solutions.  The Herald-Tribune reports that the prediction was worth $82 billion to the reinsurance industry.  It was created in just four hours by four hurricane experts, none of whom was apparently informed of the purposes to which their expertise was to be put.  From the article:
Hurricane Katrina extracted a terrifying toll -- 1,200 dead, a premier American city in ruins, and the nation in shock. Insured losses would ultimately cost the property insurance industry $40 billion.

But Katrina did not tear a hole in the financial structure of America's property insurance system as large as the one carved scarcely six weeks later by a largely unknown company called Risk Management Solutions.

RMS, a multimillion-dollar company that helps insurers estimate hurricane losses and other risks, brought four hand-picked scientists together in a Bermuda hotel room.

There, on a Saturday in October 2005, the company gathered the justification it needed to rewrite hurricane risk. Instead of using 120 years of history to calculate the average number of storms each year, RMS used the scientists' work as the basis for a new crystal ball, a computer model that would estimate storms for the next five years.

The change created an $82 billion gap between the money insurers had and what they needed, a hole they spent the next five years trying to fill with rate increases and policy cancellations.

RMS said the change that drove Florida property insurance bills to record highs was based on "scientific consensus."

The reality was quite different.
Here is the agenda for that 2005 workshop in Bermuda. The Herald-Tribune's description of the meeting that led to the $82 billion financial impact beggars belief:
The daily papers were still blaring news about Katrina when Jim Elsner received an invitation to stay over a day in Bermuda.

The hurricane expert from Florida State University would be on the island in October for an insurance-sponsored conference on climate change. One of the sponsors, a California-based company called RMS, wanted a private discussion with him and three other attendees.

Their task: Reach consensus on how global weather patterns had changed hurricane activity.

The experts pulled aside by RMS were far from representative of the divided field of tropical cyclone science. They belonged to a camp that believed hurricane activity was on the rise and, key to RMS, shared the contested belief that computer models could accurately predict the change.

Elsner's statistical work on hurricanes and climatology included a model to predict hurricane activity six months in advance, a tool for selling catastrophe bonds and other products to investors.

There was also Tom Knutson, the National Oceanic and Atmospheric Administration meteorologist whose research linking rising carbon dioxide levels to potential storm damage had led to censoring by the Bush White House.

Joining them was British climate physicist Mark Saunders, who argued that insurers could use model predictions from his insurance-industry-funded center to increase profits 30 percent.

The rock star in the room was Kerry Emanuel, the oracle of climate change from the Massachusetts Institute of Technology. Just two weeks before Katrina, one of the world's leading scientific journals had published Emanuel's concise but frightening paper claiming humanity had changed the weather and doubled the damage potential of cyclones worldwide.

Elsner said he anticipated a general and scholarly talk.

Instead, RMS asked four questions: How many more hurricanes would form from 2006 to 2010? How many would reach land? How many the Caribbean? And how long would the trend last?

Elsner's discomfort grew as he realized RMS sought numbers to hard-wire into the computer program that helps insurers set rates.

"We're not really in the business of making outlooks. We're in the business of science," he told the Herald-Tribune in a 2009 interview. "Once I realized what they were using it for, then I said, 'Wait a minute.' It's one thing to talk about these things. It's another to quantify it."

Saunders did not respond to questions from the Herald-Tribune. Knutson said if RMS were to ask again, he would provide the same hurricane assessment he gave in 2005.

But Emanuel said he entered the discussion in 2005 "a little mystified" by what RMS was doing.

He now questions the credibility of any five-year prediction of major hurricanes. There is simply too much involved.

"Had I known then what I know now," Emanuel said, "I would have been even more skeptical."

Elsner's own frustration grew when he attempted to interject a fifth question he thought critical to any discussion of short-term activity: Where would the storms go?

The RMS modelers believed Florida would remain the target of most hurricane activity. Elsner's research showed storm activity shifted through time and that it was due to move north toward the Carolinas.

But RMS' facilitator said there was not enough time to debate the matter, Elsner said. There were planes to catch.

In the end, the four scientists came up with four hurricane estimates -- similar only in that they were all above the historic average.

RMS erased that difference with a bit of fifth-grade math. It calculated the average.

Thus, the long-term reality of 0.63 major hurricanes striking the U.S. every year yielded to a prediction of 0.90.

Contrary to Elsner's research, RMS aimed most of that virtual increase at Florida.

On paper, it was a small change from one tiny number to another tiny number.

Plugged into the core of a complex software program used to estimate hurricane losses, the number rewrote property insurance in North America.

Risk was no longer a measure of what had been, but what might be. And for Floridians living along the Atlantic, disaster was 45 percent more likely.

RMS defended its new model by suggesting it had brought scientists together for a formal, structured debate.

Elsner disputes that idea.

"We were just winging it," he said.
I participated in the 2008 RMS expert elicitation, and at the time I explained that its methodology was biased and pre-determined.  A group of monkeys would have arrived at exactly the same results.  Here is what I wrote then (please see that post for the technical details on the "monkeys" making predictions; the response and discussion with the RMS expert elicitor is here):
I have in the past been somewhat critical of RMS for issuing short-term hurricane predictions (e.g., see here and here and here). I don’t believe that the science has demonstrated that such predictions can be made with any skill, and further, by issuing predictions, RMS creates at least the appearance of a conflict of interest as many of its customers will benefit (or lose) according to how these predictions are made. . . .

The RMS expert elicitation process is based on questionable atmospheric science and plain old bad social science. This alone should lead RMS to get out of the near-term prediction business. Adding in the appearance of a conflict of interest from clients who benefit when forecasts are made to emphasize risk above the historical average makes a stronger case for RMS to abandon this particular practice. RMS is a leading company with an important role in a major financial industry. It should let its users determine what information on possible futures they want to incorporate when using a catastrophe model. RMS should abandon its expert elicitation and its effort to predict future hurricane landfalls for the good of the industry, but also in service of its own reputation.
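The "monkeys" point is easy to see in the abstract, quite apart from the details of the RMS procedure (which are in the posts linked above).  If every candidate forecast put in front of a panel already sits above the long-term average, then any weighting of those forecasts, whether chosen by experts or at random, must land above the long-term average.  Here is a toy sketch of that point with hypothetical numbers; it is not a reconstruction of the actual elicitation:

    # Toy illustration, not the actual RMS elicitation: a weighted average of
    # candidate forecasts that all exceed climatology must itself exceed
    # climatology, no matter how the weights are chosen.
    import random

    climatology = 0.63                     # long-term rate reported by the Herald-Tribune
    candidates = [0.70, 0.80, 0.95, 1.05]  # hypothetical candidate forecasts, all above 0.63

    def weighted_average(values, weights):
        total = sum(weights)
        return sum(v * w for v, w in zip(values, weights)) / total

    # "Monkey" panels: fresh random weights each time.
    results = [weighted_average(candidates, [random.random() for _ in candidates])
               for _ in range(10000)]

    print(min(results) > climatology)      # True: every random panel lands above climatology
    print(min(results), max(results))      # answers confined to the range of the candidates

The specific numbers do not matter; the point is that once the menu of inputs excludes the climatological baseline, an above-baseline "consensus" is guaranteed by construction.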
RMS has since that time apparently shelved its expert elicitation process. My experiences prompted me to write up a paper on near-term predictions such as those employed by RMS, and it was published in the peer-reviewed literature:
Pielke, Jr., R.A. (2009). United States hurricane landfalls and damages: Can one- to five-year predictions beat climatology? Environmental Hazards, 8:187-200, ISSN 1747-7891, doi: 10.3763/ehaz.2009.0017

This paper asks whether one- to five-year predictions of United States hurricane landfalls and damages improve upon a baseline expectation derived from the climatological record. The paper argues that the large diversity of available predictions means that some predictions will improve upon climatology, but for decades if not longer it will be impossible to know whether these improvements were due to chance or actual skill. A review of efforts to predict hurricane landfalls and damage on timescales of one to five years does not lend much optimism to such efforts in any case. For decision makers, the recommendation is to use climatology as a baseline expectation and to clearly identify hedges away from this baseline, in order to clearly distinguish empirical from non-empirical justifications for judgements of risk.
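The recommendation to "use climatology as a baseline expectation" amounts to a simple test: a short-term outlook only adds value if, over enough verification periods, it predicts what actually happens better than the long-run average does.  Here is a rough sketch of that comparison using a mean-squared-error skill score; the observed counts below are made up for illustration and are not the actual 2006-2010 record:

    # Illustrative only: hypothetical observations, not actual landfall data.
    # Skill > 0 means the outlook beat the climatological baseline; skill < 0
    # means climatology alone would have been the better forecast.
    observed = [0, 0, 1, 0, 2]     # hypothetical major-hurricane landfalls per year
    climatology_rate = 0.63        # baseline forecast: the long-term average
    outlook_rate = 0.90            # the elevated "short-term" rate

    def mse(forecast, observations):
        return sum((forecast - obs) ** 2 for obs in observations) / len(observations)

    skill = 1 - mse(outlook_rate, observed) / mse(climatology_rate, observed)
    print(skill)                   # negative with these made-up numbers

The catch, as the paper argues, is that a single five-year window tells you almost nothing: with so few landfalls per period, it takes decades of such comparisons to distinguish genuine skill from luck.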
At the same time that RMS was rolling out its new model in 2006, an RMS scientist was serving as a lead author for the IPCC AR4.  He inserted a graph (below) into the report suggesting a relationship between the costs of disasters and rising temperatures, when in fact the peer-reviewed literature said the opposite.
RMS earlier this year admitted that the inclusion of that graph was a mistake, as it could have been "misleading."

And, you might ask, how did that five-year "short-term" prediction that RMS made for 2006-2010 actually pan out?  As you can see below, not so well.