The first of these dynamics is what might be called the ‘guaranteed winner scam’. It works like this: select 65,536 people and tell them that you have developed a methodology that allows for 100 per cent accurate prediction of the winner of next weekend’s big football game. You split the group of 65,536 into equal halves and send one half a guaranteed prediction of victory for one team, and the other half a guaranteed win on the other team. You have ensured that your prediction will be viewed as correct by 32,768 people. Each week you can proceed in this fashion. By the time eight weeks have gone by there will be 256 people anxiously waiting for your next week’s selection because you have demonstrated remarkable predictive capabilities, having provided them with eight perfect picks. Presumably they will now be ready to pay a handsome price for the predictions you offer in week nine.
The second, . . . is the ‘hot hand fallacy’ which was coined to describe how people misinterpret random sequences, based on how they view the tendency of basketball players to be ‘streak shooters’ or have the ‘hot hand’ (Gilovich et al., 1985). The ‘hot hand fallacy’ holds that the probability in a random process of a ‘hit’ (i.e. a made basket or a successful hurricane landfall forecast) is higher after a ‘hit’ than the baseline probability. In other words, people often see patterns in random signals that they then use, incorrectly, to ascribe information about the future.

In the paper I used these dynamics to explain why there is not likely to be convergence on the skill of hurricane landfall forecasts anytime soon. The existence of (essentially) an infinite number of models of hurricane landfalls, coupled with the certainty that unfolding experience will closely approximate a subset of available models, creates a context ripe for seeing spurious relationships and chasing randomness. However, the basic argument has much more general applicability.
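Both dynamics are easy to reproduce. Below is a minimal simulation sketch in Python (my own illustration, not code from either paper): the first function walks through the halving arithmetic of the guaranteed winner scam, and the second checks that for a fair coin the probability of a hit after a hit is no higher than the baseline probability.

```python
import random

def guaranteed_winner_scam(recipients=65_536, weeks=8):
    """Walk through the halving arithmetic of the 'guaranteed winner scam':
    each week, half the remaining recipients are sent a pick that turns out
    correct by construction, and only they are kept."""
    for week in range(1, weeks + 1):
        recipients //= 2  # half the mailings predicted the actual winner
        print(f"Week {week}: {recipients} people have seen only correct picks")
    return recipients

def hot_hand_check(n_flips=1_000_000, seed=0):
    """Estimate P(hit) and P(hit | previous hit) for a fair coin. For an
    i.i.d. process the two are equal; apparent 'streakiness' is noise."""
    rng = random.Random(seed)
    flips = [rng.random() < 0.5 for _ in range(n_flips)]
    baseline = sum(flips) / n_flips
    hits_after_hit = sum(1 for a, b in zip(flips, flips[1:]) if a and b)
    prior_hits = sum(flips[:-1])  # denominator: flips followed by another flip
    print(f"P(hit) ~ {baseline:.4f}, P(hit | prev hit) ~ {hits_after_hit / prior_hits:.4f}")

guaranteed_winner_scam()  # ends with 256 believers after 8 weeks
hot_hand_check()          # both probabilities come out near 0.5
```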
A new paper, just out from Nattavudh Powdthavee and Yohanes E. Riyanto of the Institute for the Study of Labor in Bonn, Germany, provides some empirical support for this argument. The paper, titled "Why Do People Pay for Useless Advice? Implications of Gambler’s and Hot-Hand Fallacies in False-Expert Setting," looks "experimentally whether people can be induced to believe in a non-existent expert, and subsequently pay for what can only be described as transparently useless advice about future chance events."
In the study the authors operationalized the dynamics of "the guaranteed winner scam meets the hot hand fallacy" using coin flips, while going to great lengths to ensure that the participants were aware that the coin being flipped was fair (i.e., the flips were random), even going so far as to have the participants furnish the coin.
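As a rough sketch of that setup (my own simplification in Python, not the authors' protocol or code), one can flip a fair coin each round, issue a random "expert" prediction to every participant, and track how many participants have so far seen nothing but correct predictions:

```python
import random

def simulate_false_expert(participants=1_000, rounds=5, seed=1):
    """Simplified sketch of the experimental setup: each round a fair coin
    is flipped and a random (useless) prediction is issued to every
    participant. Track the share who have seen only correct predictions."""
    rng = random.Random(seed)
    streak_intact = [True] * participants
    for rnd in range(1, rounds + 1):
        outcome = rng.choice("HT")  # the participants' own fair coin
        for i in range(participants):
            prediction = rng.choice("HT")  # transparently useless advice
            if prediction != outcome:
                streak_intact[i] = False
        share = sum(streak_intact) / participants
        print(f"Round {rnd}: {share:.1%} saw a perfect record (expect {0.5 ** rnd:.1%})")

simulate_false_expert()
```

By round 5 only about 3 per cent of participants have seen an unbroken streak of correct predictions, yet by construction those streaks carry no information whatsoever about the next flip.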
They found that upon receiving an accurate "prediction" of the subsequent coin flip, many participants were willing to abandon any assumption of randomness and pay for a prediction of the next toss:
On average, the probability of buying the prediction in Round 2 for people who received a correct prediction in Round 1 was 5 percentage points higher than those who previously received an incorrect prediction in Round 1 (P=0.046). The effect is monotonic and well-defined; probabilities of buying were 15 percentage points (P=0.000), 19 percentage points (P=0.000), and 28 percentage points (P=0.000) higher in Rounds 3, 4, and 5 . . .

The authors identify two interesting results:
The first is that observations of a short streak of successful predictions of a truly random event are sufficient to generate a significant belief in the hot hand of an agent; the perception which also did not revert even in the final round of coin flip. By contrast, the equally unlikely streak of incorrect predictions also generated a relatively weak belief in the existence of an “unlucky” agent whose luck was perceived to be likely to revert before the game finishes; evidence which was reflected in an increase in the subject’s propensity to buy in the final round of coin flip.

The study also looked at whether characteristics of the participants might be related to their behavior, finding: "there is no statistical evidence that some people are systematically more (or less) susceptible to the measured effects."
What does this study mean for how we think about science in decision making?
While the authors focus on "false" experts, the findings have much broader relevance in the context of "true" experts. The simple reason is that the distribution of legitimate scientific findings about many complex subjects covers an enormous range of possible outcomes. Not all of these outcomes can simultaneously be correct, whether they concern the past, causality, or projections of the future.
In the example I use from my paper cited above, I explain how a single scientific paper on hurricane landfalls provides 20 scientifically legitimate predictions of how many hurricanes would hit the US over the subsequent five years:
Consider, for example, Jewson et al. (2009) which presents a suite of 20 different models that lead to predictions of 2007–2012 landfall activity to be from more than 8 per cent below the 1900–2006 mean to 43 per cent above that mean, with 18 values falling in between. Over the next five years it is virtually certain that one or more of these models will have provided a prediction that will be more accurate than the long-term historical baseline (i.e. will be skilful). A broader review of the literature beyond this one paper would show an even wider range of predictions. The user of these predictions has no way of knowing whether the skill was the result of true predictive skill or just chance, given a very wide range of available predictions. And because the scientific community is constantly introducing new methods of prediction the ‘guaranteed winner scam’ can go on forever with little hope for certainty.

Such models are of far more than academic interest: they guide hundreds of billions of dollars in investment and financial decisions related to insurance and reinsurance. What if such decisions rest on an intellectual house of cards? How would we know?
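The selection effect described in that passage can be illustrated in a few lines. The sketch below uses entirely synthetic numbers (standard-normal anomalies and a made-up model spread, not the Jewson et al. models) to estimate how often at least one of 20 skill-free forecasts beats the long-term baseline purely by chance:

```python
import random

def chance_of_spurious_skill(n_models=20, n_trials=100_000, seed=2):
    """Estimate how often at least one of n_models random forecasts lands
    closer to the 'observed' value than the long-term baseline does, when
    no forecast has any real skill. Numbers are illustrative only."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        observed = rng.gauss(0.0, 1.0)  # observed anomaly vs. the baseline
        forecasts = [rng.gauss(0.0, 0.5) for _ in range(n_models)]  # pure noise
        baseline_error = abs(observed)  # the baseline predicts an anomaly of 0
        if any(abs(f - observed) < baseline_error for f in forecasts):
            hits += 1
    print(f"P(at least one 'skilful' model by chance) ~ {hits / n_trials:.1%}")

chance_of_spurious_skill()
```

With 20 draws, some "model" almost always looks skilful after the fact, which is exactly why apparent skill against a baseline cannot, on its own, distinguish insight from chance.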
The general issue is this: a bigger problem than discerning legitimate from illegitimate expertise is figuring out how to use all of the legitimate expertise at our disposal. The dynamics of the "guaranteed winner scam meets the hot hand fallacy" also present a challenge for experts themselves in interpreting the results of research in light of evolving experience. Because experts are people too, they will be subject to the same incentives and obstacles in interpreting information as those documented by Powdthavee and Riyanto.
The dominant strategies in political discourse for dealing with this situation of too much legitimate science are to argue that there is one true perspective (the argument from consensus) or that experts can be judged according to their non-expert characteristics (the argument by association). My experiences over the past decade or so related to extreme events and climate change provide a great example of how such strategies play out in practice, among both experts and activists.
As we have learned, neither strategy is actually a good substitute for evaluating knowledge claims on their merits and recognizing that uncertainty and ignorance are often irreducible, and that decisions must be made accordingly.