The Myth of the Optimism Bias?

This article was originally posted by ‘Neuroskeptic’ on 3 June 2016. It discusses research on the optimism bias carried out by a team of psychological researchers including Birkbeck’s Professor Ulrike Hahn.

Are humans natural, irrational optimists? According to many psychologists, humans show a fundamental optimism bias: a tendency to underestimate our chances of suffering negative events. It’s said that when thinking about harmful events, such as contracting cancer, most people believe that their risk is lower than that of ‘the average person’. So, on average, people rate themselves as safer than average. Moreover, people are also said to show biased belief updating: faced with evidence that the risk of a negative outcome is higher than they believed, they don’t raise their personal risk estimates as much as they should.

But now a group of London-based researchers, led by first author Punit Shah, has criticized the theory of biased belief updating and, by extension, the whole optimism-bias model. Shah et al. say that optimism bias may be a mere statistical artifact, a product of the psychological test paradigms used to assess it. They argue that even perfectly rational, unbiased individuals would seem ‘optimistic’ in these tests. Specifically, the authors say that the apparent optimism is driven by the fact that negative events tend to be uncommon.

The new work builds on a 2011 paper by Adam J. L. Harris and Ulrike Hahn, also authors of the present paper. The 2011 article criticized the claim that people show an optimism bias by rating themselves as safer than the average. The new paper takes aim at biased belief updating. Here’s how Shah et al. describe their argument:

New studies have now claimed that unrealistic optimism emerges as a result of biased belief updating with distinctive neural correlates in the brain. On a behavioral level, these studies suggest that, for negative events, desirable information is incorporated into personal risk estimates to a greater degree than undesirable information (resulting in a more optimistic outlook).


However, using task analyses, simulations and experiments we demonstrate that this pattern of results is a statistical artifact. In contrast with previous work, we examined participants’ use of new information with reference to the normative, Bayesian standard.


Simulations reveal the fundamental difficulties that would need to be overcome by any robust test of optimistic updating. No such test presently exists, so that the best one can presently do is perform analyses with a number of techniques, all of which have important weaknesses. Applying these analyses to five experiments shows no evidence of optimistic updating. These results clarify the difficulties involved in studying human ‘bias’ and cast additional doubt over the status of optimism as a fundamental characteristic of healthy cognition.
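The artifact claim is easy to illustrate with a toy simulation (this is my sketch, not the authors’ own simulations; it assumes agents revise estimates by averaging in log-odds space, one reasonable ‘rational’ rule). For a rare event, initial estimates above the base rate can sit much further from it than estimates below, which are squeezed against 0%, so even unbiased updaters show larger absolute revisions after ‘good news’:

```python
import math
import random

def rational_update(e1, base_rate, weight=0.5):
    """Revise an estimate toward the base rate by averaging in log-odds space."""
    log_odds = lambda p: math.log(p / (1 - p))
    mixed = (1 - weight) * log_odds(e1) + weight * log_odds(base_rate)
    return 1 / (1 + math.exp(-mixed))  # back from log-odds to a probability

random.seed(1)
BASE_RATE = 0.10                       # a rare negative event
good_news, bad_news = [], []
for _ in range(10_000):
    e1 = random.uniform(0.01, 0.50)    # noisy initial self-estimate
    e2 = rational_update(e1, BASE_RATE)
    # 'Good news' trials: the base rate turns out lower than the agent thought.
    (good_news if e1 > BASE_RATE else bad_news).append(abs(e2 - e1))

print(f"mean |update| after good news: {sum(good_news)/len(good_news):.3f}")
print(f"mean |update| after bad news:  {sum(bad_news)/len(bad_news):.3f}")
```

Every simulated agent here uses the same unbiased rule, yet the average absolute update after good news exceeds that after bad news, which is the pattern the update method labels ‘optimistic’.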

I asked Shah and his colleagues to explain the case against the optimism bias in belief updating in a nutshell. They said:

All risk estimates have to fit into a scale between 0% and 100%; you can’t have a chance of getting a heart attack at some point in your life of less than 0% or greater than 100%. The problems for the update method arise from the fact that the same ‘movement’ in percentage terms means different things in different parts of the scale.


Someone whose risk decreases from 45% to 30% has seen their risk cut by a third, whereas someone whose risk increases from 15% to 30% has seen their risk double, a much bigger change. So the same 15-percentage-point difference means something quite different depending on whether you have to revise your beliefs about your individual risk downwards (good news!) or upwards (bad news!) toward the same value. The moment people’s risk estimates are influenced by individual risk factors (a family history of heart attack, for example, increases your personal risk by a factor of about 1.6), people should change their beliefs by different amounts, depending on the direction of the change. The update method falsely equates the 15% in both cases.


If the difference in belief change simply reflects these mathematical properties of risk estimates, then one should see systematic differences between those increasing and those decreasing their risk estimates, regardless of whether they happen to be estimating a negative or a positive event. For negative events this will look like ‘optimism’; for positive events it will look like ‘pessimism’. This is the pattern our experiments find…


The evidence base thus seems far less stable than previously thought. There is, across various paradigms, plenty of evidence for optimism in particular real-world settings, such as sports fans’ predictions and political predictions, but these just show that certain people may be optimistic in certain situations, not that there is the general optimistic tendency across situations that would be required to say people are optimistically biased. It is also important to note that because this belief-updating paradigm has been used in so many neuroscience studies, those neuroscience data are also uninterpretable.
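The risk-factor point in the quote can be made concrete in a few lines (my sketch; the 1.6 figure is the one quoted above, and treating it as an odds ratio applied in odds space is my assumption). The same piece of evidence shifts a probability by different absolute amounts depending on where on the 0–100% scale the prior sits, which is exactly the difference a raw percentage-point update measure treats as equal:

```python
def apply_risk_factor(prior, odds_ratio):
    """Bayesian update: multiply the prior odds by the risk factor's odds ratio."""
    odds = prior / (1 - prior)
    posterior_odds = odds * odds_ratio
    return posterior_odds / (1 + posterior_odds)

# Family history of heart attack: odds multiplied by ~1.6 (figure from the article).
for prior in (0.10, 0.30, 0.45):
    post = apply_risk_factor(prior, 1.6)
    print(f"prior {prior:.0%} -> posterior {post:.1%}  (shift {post - prior:+.1%})")
```

Identical evidence, identical rationality, yet the low-prior agent moves only about five points while the high-prior agent moves more than ten; comparing raw shifts across people with different starting estimates therefore conflates position on the scale with bias.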

Read the original article on

In my view, Shah et al. make a strong case that the evidence for optimism bias needs to be reexamined. Their argument makes a crucial prediction: that people should show a ‘pessimistic’ bias (the counterpart of the optimism bias) when asked to rate their chance of experiencing rare, positive events. In the new paper, the authors report finding such a pessimistic bias in a series of experiments. But perhaps they should team up with proponents of the optimism bias and run an adversarial collaboration to convince the believers.

  • Shah, P., Harris, A. J. L., Bird, G., Catmur, C., & Hahn, U. (2016). A pessimistic view of optimistic belief updating. Cognitive Psychology.
