Here I argue that our disagreement lies not in different views about the nuts and bolts of probabilistic forecasting, but rather in our views on whether the IPCC is engaged in providing guidance to decision makers about the probable course of the future or instead is engaged in an academic exercise. This would seem to be a natural point of disagreement between an academic involved in modeling and a policy scholar. James does the climate science community no favors by personalizing the debate, so here I'll stick to the issues, which are worth discussing.
Let’s start with an analogy to make this distinction clearer. Please note that I reject James's choice of a die roll as an analogy, because it begins with the presumption that the probabilities of the various future outcomes are in fact known with certainty. While such an assumption is highly complimentary to climate modelers, the fact is that the actual probability distributions for the future are unknown. That is why evaluation of probabilistic statements is necessary.
Let’s say that I make the following statement:
[A] It is very likely that a team from the AFC will win the 2012 Super Bowl.

You, being a Green Bay Packers fan, take issue with my statement and ask me if I want to bet, but first you ask me what I mean by “very likely.” I explain that by “very likely” I mean at least a 90% chance. You then ask if I will give 9 to 1 odds on a $1 bet on the Super Bowl. Since I am confident in my statement (and I believe the bet gives me good odds, as I think the chances could be even higher than 90%), I agree to the bet and we shake hands.
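For readers who want the arithmetic behind the handshake, here is a minimal sketch of the expected-value calculation at 9 to 1 odds; the function name and the probabilities are illustrative only and simply restate the stakes above.

```python
# Minimal sketch: my expected payoff when I give 9 to 1 odds on a $1 bet.
def expected_value(p_afc_win, my_win=1.0, my_loss=9.0):
    """My expected payoff if I believe the AFC wins with probability p_afc_win."""
    return p_afc_win * my_win - (1.0 - p_afc_win) * my_loss

print(expected_value(0.90))  # ~0.0 (up to float rounding): 9 to 1 is exactly fair at 90%
print(expected_value(0.95))  # ~0.5: a good bet for me if I think the chance is even higher
```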
Consider one way that events might play out. Flash forward to the Super Bowl in 2012 and imagine that the Denver Broncos (hey, it is my example ;-) beat the Packers. In that case, I would win the bet and you would pay me $1. We would both agree that the expectation expressed in my statement [A] was correct in a common-sense understanding of the term. Had the Packers won the game, I would have parted with $9 and we would have agreed that my expectation was incorrect.
Now let’s consider an alternative ending to this scenario. Let’s assume that the Packers win the game. I immediately protest that I have not actually lost the bet, at least not yet, because my statement was probabilistic: it referred to the entire set of potential Super Bowl games, of which I judged that the AFC would win 90% or more. It just so happens that this game fell within the 10% of possible outcomes. In order to judge the winner of the bet we would have to replay the Super Bowl many times, and I suggest that we do so on Madden 2012, a computer-game simulation.
You would reply, “Huh? Whatever, you lost the bet, where’s my $9?”
The difference between the two endings lies in whether one views a probabilistic statement as a basis for committing resources for a future that will play out in only one way or as an academic exercise to be explored using simulations of the real world.
On the one hand, in the first case a Packers victory would mean that the judgment expressed in [A] turned out to form the basis for a losing bet. It is important to understand that judgment [A] may have been perfectly sound and defensible at the time it was made. Perhaps it was based on my expertise, or perhaps I actually ran a Madden 2012 simulation 10,000 times under various scenarios to create an ensemble projection space as the basis for my judgment. Perhaps the outcome was just bad luck, meaning that the 10% is realized 10% of the time. Actually, we can never know whether the expectation was sound or not, only that a commitment based on the judgment led to a loss of $9.
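As a hedged illustration of what such an ensemble judgment might look like, here is a short Python sketch; simulate_game is a hypothetical stand-in for one Madden 2012 run, and the 90% win rate is simply assumed for the example.

```python
import random

def simulate_game(seed):
    """Hypothetical stand-in for one Madden 2012 run under some scenario.
    Returns True if the AFC team wins; here we just draw from an assumed 90% rate."""
    return random.Random(seed).random() < 0.90

# Build an "ensemble projection space" from 10,000 simulated Super Bowls.
runs = 10_000
afc_wins = sum(simulate_game(seed) for seed in range(runs))
print(f"Estimated P(AFC win) = {afc_wins / runs:.2f}")
# Even a sound 90% judgment still loses the one real-world bet about 10% of the time.
```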
On the other hand, if I actually meant statement [A] simply as an academic exercise, perhaps in relation to some simulation of the real world, I should not be betting in the real world.
Let’s continue with the analogy and say that I am a sports handicapper and you want to get a sense of my skill in giving odds on outcomes. One way that you might evaluate my performance is to look at all of my Super Bowl predictions over time, but since Super Bowls come along only once a year, that doesn’t offer much of an experiential basis with which to judge probabilistic projections. Another way you might judge my performance is to look across my handicapping of many sports and see how the collective outcomes compare with what I have projected. For instance, if I make 100 projections that I judge to be “very likely,” you will have a lot of confidence in my skill if 90 of those 100 projections are actually realized, even if they span 100 different, incommensurable events. Even then there would be problems in assessing my skill (e.g., what if there are 10,000 sports handicappers and you come to me because my record is better than the others’? Is that because I am good or just lucky? But I digress).
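To make that kind of cross-event check concrete, here is a minimal calibration tally; the record of forecasts is hypothetical and just mirrors the 90-of-100 example above.

```python
# Minimal sketch: calibration of a handicapper's "very likely" (>= 90%) calls.
# `record` is a hypothetical list of (stated_probability, outcome_occurred) pairs.
def hit_rate(record, threshold=0.90):
    """Fraction of forecasts at or above `threshold` whose outcomes actually occurred."""
    calls = [occurred for prob, occurred in record if prob >= threshold]
    return sum(calls) / len(calls) if calls else float("nan")

record = [(0.90, True)] * 90 + [(0.90, False)] * 10  # 90 of 100 "very likely" calls verified
print(hit_rate(record))  # 0.9 -> consistent with the stated probability
```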
Now let’s relate this back to climate science. As an example, the IPCC says the following:
It is very likely that hot extremes, heat waves and heavy precipitation events will continue to become more frequent.

If decision makers commit resources (let’s assume intelligently, fully recognizing and understanding a 90% probability) based on this projection, and it turns out that the actual future experienced on planet Earth is one of constant or declining hot extremes, heat waves and heavy precipitation events, then the common-sense view will no doubt be that the IPCC got this one wrong.
Climate modelers who protest such a common-sense view, on the grounds that probabilistic statements generated from ensembles of model simulations of the real world cannot be verified, will be laughed off as out-of-touch academics, and rightly so. Infallibility doesn't even work for the Pope.