24 January 2012

Cat Model Mayhem

Writing at her blog, The Short Run, my superstar grad student Jessica Weinkle looks at recent catastrophe model filings in the state of Florida, as part of her dissertation research:
In America's deep south, a region not so far away, hides a new foe threatening otherwise intelligent people's ability to decide. The Louisiana Insurance Commissioner, Jim Donelon, has rung the alarm, putting homeowners on alert of "The looming threat of the new cat model, RMS 11". This is the newest addition to the catastrophe model rogues' gallery challenging the gallant efforts of state insurance regulating offices. The kryptonite in their coding is the incredible capacity to produce scientifically supported uncertainty, thereby weakening the ability of politically hopeful insurance commissioners everywhere to control rates. A past episode between dueling regulating powers and risk-predicting machinery demonstrated the societal cost inflicted by these dastardly foes, which create uncertainty whenever plugged into a wall. In 2006, RMS rolled out an arbitrary change to their trusty hurricane catastrophe model in RiskLink 6.0, costing Florida homeowners $82 billion. Stay tuned to state regulating offices for the latest updates on the battle between man and machine...

In the meantime, let's take a closer look at these new trade secret rascals...
Weinkle uncovers some eyebrow-raising factoids, such as the fact that in several models the estimated probability of a Category 5 hurricane hitting Florida has apparently increased by 100% over previous filings. She also shows that across five different models, the estimated cost of a Category 5 storm in Florida ranges from $18 billion to $146 billion.
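For a sense of scale, the two endpoints can be turned into a couple of rough spread statistics (a back-of-the-envelope sketch; only the $18 billion and $146 billion figures come from the filings):

```python
# The two endpoints Weinkle reports for a Category 5 Florida storm:
low, high = 18e9, 146e9   # dollars; lowest and highest of the five models

spread_ratio = high / low                           # highest model is ~8.1x the lowest
relative_range = (high - low) / ((high + low) / 2)  # ~1.56, i.e. 156% of the midpoint
```

An eightfold disagreement on the headline loss number is the kind of spread that makes "what is the risk?" a question about which model you asked.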

Based on these numbers, Weinkle calls the catastrophe models tools that create uncertainties, and makes the non-obvious point that decisions about risk are actually decisions about modeled risk -- which may or may not be the same thing:
Together, these models create a great deal of uncertainty about the risk being insured against.  In the world of insurance, uncertainty about the risk is risk in and of itself.  If uncertainty increases, then the cost will too and vice versa.  So, a reasonable question to ask would be, "Has the modeled risk changed?"
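Her point that uncertainty is itself a cost can be illustrated with a standard-deviation premium loading, a common textbook pricing rule (used here purely as a sketch; the risk-load factor is an assumption, not something taken from any filing):

```python
# Sketch of a standard-deviation premium loading rule (illustrative only;
# the risk-load factor k is assumed, not drawn from any actual filing).

def premium(expected_loss, loss_std, k=0.5):
    """Premium = expected loss + k * standard deviation of loss.

    Two books of business with the same expected loss get different
    premiums if the models disagree more about one of them.
    """
    return expected_loss + k * loss_std

# Same expected annual loss ($1m), different model uncertainty about it:
p_low = premium(1_000_000, 200_000)   # 1,100,000
p_high = premium(1_000_000, 800_000)  # 1,400,000
```

Under any rule of this shape, widening the disagreement among models raises the price of insurance even if the central estimate of the risk never moves -- which is exactly Weinkle's point.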
Not surprisingly, catastrophe models have faced some criticism, such as that found in this recent news article from Louisiana:
Catastrophe models are controversial. Proponents say they bring science to underwriting and synthesize the latest understanding of storms and climate change to insurers. Opponents say they're gee-whiz black boxes that manufacture instant justification for high rates for insurers.
The problem with catastrophe models is not that they lack value (they are actually extremely powerful and potentially useful tools); it is that what that value is turns out to be very hard to assess (e.g., PDF), and their black-box nature makes such assessments more difficult still. The lack of an industry-wide evaluation capability, strong hints of conflict of interest, and the defensive posture of some of the cat modelers make the issue a minefield of bad decisions for businesses and governments alike.


  1. How is uncertainty determined for a system that is incompletely and, as the evidence suggests, insufficiently characterized? One where local and regional variations can occur over short or long periods and over limited or prolonged durations, and where the system is influenced by subterranean, terrestrial, extraterrestrial, and even conscious (i.e., human) factors?

    Why is it appropriate to use statistical tools to model the behavior of a system which is generally believed to exhibit chaotic behavior? One where the characteristics of that behavior have not been established, yet seemingly arbitrary definitions of notable features such as period are made, and where direct observation (i.e., not inferred or proxy) is limited and of inconsistent quality?

    In any case, with respect to private enterprise, it would seem reasonable to base their risk assessments on liberal estimates in order to control their potential losses. The same can be done by governments, but the exploitation would be purely involuntary, and therefore their assessments should be derived from conservative estimates. Both would, ideally, be subject to open critical review. The consensus should only remain legitimate as long as it is capable of responding to contrary evidence and argument. Due to its inherently uncertain nature, its assessments must be adaptive. Maybe that's another behavior which needs to be characterized (e.g., oversight of the overseers).

    I wonder how many levels would be sufficient to ensure an objective and optimal outcome.

  2. If classical liberalism was the free-market liberalism of Adam Smith et al., what makes neoliberalism different from classical liberalism?

  3. Do you have any insight into the relative contributions of the following factors to the overall uncertainty:
    1) Climatology (predicting landfall frequency)
    2) Where the hurricane hits (huge variations in estimated damage if Miami takes a direct hit vs. hitting a less populated area)
    3) Estimating damage for a specific hurricane path (i.e., property inventory and vulnerability to wind/water damage)

    If (1) is a big factor, then the models have little or no skill regarding hurricane frequency, and standardising on climatological (i.e., historical) rates would make intercomparison of models easier with no degradation of their reliability.

    If (2) is a big factor, then I would say the models are accurately describing the fact that we don't know where hurricanes will hit, and they're doing what they should be (i.e., uncertainty is not a problem with the models, and we might expect the models to give a good estimate of the range of uncertainty given that we can't predict hurricane tracks a year or more in advance).

    If (3) is a big factor, then I would think we have a solvable problem, and the uncertainty points to an area where research could realistically be expected to reduce uncertainties and potentially reduce risks (e.g., better understanding of what kinds of structures are more vulnerable and how to write building codes to reduce those vulnerabilities).
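The commenter's three-way decomposition can be sketched as a toy Monte Carlo loss model (all rates, exposures, and damage ratios below are invented for illustration; real cat models are vastly more detailed):

```python
import math
import random

# Toy Monte Carlo sketch of the three uncertainty sources above
# (all numbers invented for illustration, not from any real cat model):
#   (1) frequency: Poisson count of landfalling storms per year
#   (2) location: which exposure zone a given storm hits
#   (3) vulnerability: fraction of the zone's exposed value destroyed

LANDFALL_RATE = 0.5        # (1) assumed mean landfalls per year
ZONES = [(0.2, 500.0),     # (2) (hit probability, exposed value in $bn):
         (0.8, 60.0)]      #     a dense Miami-like coast vs. a sparse one
# (3) damage ratio ~ Beta(2, 18), mean 2/(2+18) = 0.10 (assumed)

def poisson(lam):
    """Draw a Poisson count via Knuth's algorithm (fine for small lam)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def simulate_annual_loss():
    loss = 0.0
    for _ in range(poisson(LANDFALL_RATE)):          # (1) how many storms
        hit_dense = random.random() < ZONES[0][0]    # (2) where it hits
        value = ZONES[0][1] if hit_dense else ZONES[1][1]
        loss += value * random.betavariate(2, 18)    # (3) damage fraction
    return loss

random.seed(0)
losses = [simulate_annual_loss() for _ in range(100_000)]
mean_loss = sum(losses) / len(losses)  # roughly 0.5 * 148 * 0.10 = 7.4 ($bn)
```

Freezing one source at a time -- fixing the storm count at its climatological mean, pinning the landfall zone, or replacing the damage draw with its mean -- and re-running shows how much of the total spread each source contributes, which is exactly the comparison the commenter is asking for.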