If we have learned anything from recent years in the behavioral sciences, it is that humans have numerous and systematic psychological biases that steer our judgment and decision-making away from what one would expect if we weighed costs, benefits, and probabilities even-handedly. However, many psychologists and social scientists have been content to document these biases and then, measured against the benchmark of rational choice theory, single them out as the causes of policy failures, disasters, and wars. Homo sapiens, so the argument goes, falters where Homo economicus would prevail.
An evolutionary perspective on psychological biases tells a very different story. We have these psychological biases not by accident but by design. Human cognitive mechanisms evolved to deal with the problems of the past, where we spent 99% of our history, not those of the present. We should, therefore, hardly expect our brains to perform well all the time in modern settings where the social and physical environment is so different. Often, we are fish out of water.
New work argues that there is a significant twist to both of these perspectives. The very mistakes we often attribute to biases may in fact be part of their design. Counterintuitively, psychological biases can improve decision making precisely because they generate a pattern of mistakes that helps us out in the long run. Under conditions of uncertainty (imperfect information) and asymmetric error costs, where a 'false positive' (believing something is true when it is not) and a 'false negative' (believing something is false when it is true) carry different penalties, biases can lead to mistakes in one direction but, in so doing, steer us away from more costly mistakes in the other direction. For example, we sometimes mistake sticks for snakes (which is harmless), but rarely snakes for sticks (which can be deadly).
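The snake-and-stick logic can be made concrete with a little expected-cost arithmetic. The sketch below is purely illustrative (it is not from the paper, and the cost numbers are hypothetical): it shows that when one error is far costlier than the other, the cost-minimizing decision rule is deliberately "biased" toward the cheaper mistake.

```python
# Illustrative sketch of error management: when error costs are
# asymmetric, the rational threshold for acting shifts, producing
# many cheap false positives to avoid rare, costly false negatives.
# The cost values below are hypothetical, chosen for illustration.

COST_FALSE_POSITIVE = 1    # fleeing from a mere stick: wasted effort
COST_FALSE_NEGATIVE = 100  # ignoring a real snake: possibly deadly

def expected_cost(p_snake: float, flee: bool) -> float:
    """Expected cost of a decision, given the probability the object is a snake."""
    if flee:
        # We pay the false-positive cost only if it was really a stick.
        return (1 - p_snake) * COST_FALSE_POSITIVE
    # We pay the false-negative cost only if it was really a snake.
    return p_snake * COST_FALSE_NEGATIVE

def best_action(p_snake: float) -> str:
    """Pick whichever action has the lower expected cost."""
    if expected_cost(p_snake, flee=True) < expected_cost(p_snake, flee=False):
        return "flee"
    return "ignore"

# Break-even probability: fleeing pays off whenever P(snake) exceeds this.
threshold = COST_FALSE_POSITIVE / (COST_FALSE_POSITIVE + COST_FALSE_NEGATIVE)
print(f"flee whenever P(snake) > {threshold:.3f}")  # about 0.010

# Even at only a 5% chance of a snake, fleeing is the cheaper "mistake".
print(best_action(0.05))  # flee
```

With these (made-up) costs, it is worth fleeing whenever the chance of a snake exceeds roughly 1%, so the decision-maker will flee from sticks far more often than it ignores snakes, exactly the pattern of errors the article describes.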
In a new paper by Dominic Johnson, Dan Blumstein, James Fowler, and Martie Haselton, so-called 'error management' biases are shown to have been independently identified by multiple studies from a range of fields spanning economics, psychology, and biology, suggesting the phenomenon is robust across domains, disciplines, and methodologies. Applications range from "engineering" problems such as how organisms allocate repair costs to different parts of the genome, to social and financial problems such as whether or not to gamble on risky ventures, and political issues such as whether and how to act against climate change or against states that may be developing (or using) WMD. All of these are problems faced under uncertainty, where being wrong in different ways carries different costs. Error management theory can help us understand not only how people tackle such problems, but how they could tackle them more effectively.
The phenomenon of error management is so pervasive that it appears to represent a general feature of life, with common sources of variation affecting all of these biases in similar ways. The role of errors in evolution offers an explanation, in error management theory (EMT; a term coined by evolutionary psychologists Martie Haselton and David Buss), for the evolution of cognitive biases as the best way to manage errors under cognitive and evolutionary constraints. If humans were perfect computers with perfect information, we could avoid mistakes altogether. Since we are neither, biases can at least help us make the right mistakes rather than the wrong ones. To err is human, but we should perhaps be grateful for this blessing in disguise.