If we have learned anything from recent years in the behavioral sciences, it is that humans have numerous but systematic psychological biases that steer our judgment and decision-making away from what one might expect if we were even-handed in weighing up costs, benefits and probabilities. However, many psychologists and social scientists have been content to document these biases and then, against the background of vaunted rational choice theory, single them out as the causes of policy failures, disasters, and wars. Homo sapiens, so the argument goes, falters where Homo economicus would prevail.

An evolutionary perspective on psychological biases tells a very different story. We have these psychological biases not by accident but by design. Human cognitive mechanisms evolved to deal with the problems of the past, where we spent 99% of our history, not those of the present. We should, therefore, hardly expect our brains to perform well all the time in modern settings where the social and physical environment is so different. Often, we are fish out of water.

New work argues that there is a significant twist to both of these perspectives. The very mistakes we often attribute to biases may in fact be part of their design. Counterintuitively, psychological biases can improve decision-making precisely because they generate a pattern of mistakes that serves us well in the long run. Under conditions of uncertainty (imperfect information) and asymmetric error costs, biases push our mistakes in one direction but, in so doing, steer us away from more costly mistakes in the other direction. Here a ‘false positive’ means believing something is true when it is not, and a ‘false negative’ means believing something is false when it is. For example, we sometimes mistake sticks for snakes (which is harmless), but rarely mistake snakes for sticks (which can be deadly).
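The snake-versus-stick logic can be made concrete with a little expected-cost arithmetic. The sketch below uses made-up, illustrative numbers (none of them come from the paper); the point is only that when a false negative is far costlier than a false positive, the rational threshold for acting drops far below 50% certainty, so an observer who "over-reacts" to ambiguous stimuli is managing errors well, not badly.

```python
# Illustrative error-management sketch: all costs are assumed, not from the article.
COST_FALSE_POSITIVE = 1.0    # fleeing from a stick: a little wasted effort
COST_FALSE_NEGATIVE = 100.0  # ignoring a real snake: possibly fatal

def expected_cost_of_ignoring(p_snake):
    """Expected cost of treating the object as a stick."""
    return p_snake * COST_FALSE_NEGATIVE

def expected_cost_of_fleeing(p_snake):
    """Expected cost of treating the object as a snake."""
    return (1 - p_snake) * COST_FALSE_PositivE if False else (1 - p_snake) * COST_FALSE_POSITIVE

def should_flee(p_snake):
    """Flee whenever fleeing has the lower expected cost."""
    return expected_cost_of_fleeing(p_snake) < expected_cost_of_ignoring(p_snake)

# Break-even probability: flee whenever P(snake) exceeds this.
threshold = COST_FALSE_POSITIVE / (COST_FALSE_POSITIVE + COST_FALSE_NEGATIVE)
print(f"flee if P(snake) > {threshold:.3f}")  # about 0.010, far below 0.5
```

With these assumed costs, even a 5% chance that the object is a snake makes fleeing the cheaper option on average; the "bias" toward false alarms is exactly what an even-handed cost calculation recommends.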

In a new paper by Dominic Johnson, Dan Blumstein, James Fowler, and Martie Haselton, so-called ‘error management’ biases are shown to have been independently identified by multiple studies from a range of fields spanning economics, psychology, and biology, suggesting the phenomenon is robust across domains, disciplines, and methodologies. Applications range from “engineering” problems, such as how organisms allocate repair costs to different parts of the genome, to social and financial problems, such as whether to gamble on risky ventures, and political issues, such as whether and how to act against climate change or against states that may be developing (or using) WMD. All of these are problems faced under uncertainty, where being wrong in different ways carries different costs. Error management theory can help us understand not only how people tackle such problems, but how they could tackle them more effectively.

The phenomenon of error management is so pervasive that it appears to represent a general feature of life, with common sources of variation that affect all of these biases in similar ways. The role of errors in evolution offers an explanation, via error management theory (EMT; a term coined by evolutionary psychologists Martie Haselton and David Buss), for the evolution of cognitive biases as the best way to manage errors under cognitive and evolutionary constraints. If humans were perfect computers with perfect information, we could avoid mistakes altogether. Since we are not, biases can help us make the right mistakes rather than the wrong ones. To err is human, but we should perhaps be grateful for this blessing in disguise.

Published On: September 11, 2013

Daniel Blumstein


Daniel Blumstein is a Professor and Chair of the Department of Ecology and Evolutionary Biology at UCLA, and a Professor in UCLA’s Institute of the Environment and Sustainability. He received his PhD at UC Davis in animal behavior, and had postdoctoral fellowships at the University of Marburg (Germany), The University of Kansas, and Macquarie University (Australia). He is a behavioral ecologist broadly interested in the evolution of behavior and the application of behavioral and evolutionary principles to policy, health, and defense. He has studied the behavior and ecology of mammals (including humans), birds, fish, lizards, hermit crabs, and sea anemones, and runs the 50+ year project studying the behavioral and evolutionary ecology of yellow-bellied marmots at the Rocky Mountain Biological Laboratory in Gothic, Colorado. He is the author of over 200 scholarly works and five books; his most recent books include “A Primer of Conservation Behavior” (Sinauer Associates, 2010, with Esteban Fernandez-Juricic), “The Failure of Environmental Education (And How We Can Fix It)” (University of California Press, 2011, with Charles Saylan), and “Eating Our Way to Civility: A Dinner Party Guide” (a Kindle and Apple e-book, 2011).


  • Bryan Atkins says:

    Fascinating. Like with science, less wrong is better.

    Kind of reminds me of that great line in Bukowski’s screenplay: Barfly.

    Two guys get in a knife fight, and in their struggle for control, the knife wielder takes the blade to the gut. As he’s wiggling it out, he says: “Dumb luck, mofo.” His rival replies, “Yeah, but that counts too!”

    Regarding error management: Per our species limitations, exponentially accelerating complexity, the pathetic state of our political institutions and their woeful ineptness with regard to writing code for how culture interfaces with reality, what about this? (doubt it’s original):

    We vote on political SOFTWARE PACKAGES.

    Political Parties write a software program for gov’t policy for X years. Computers then run all the different parties’ software over and over per the exact same set of initial conditions and we’re able to see the predicted outcomes of each program.
    It won’t be exact, but may be a set of outcome ranges with various probabilities. The people vote on which mix of outcomes they want, they value. The trade-offs between various policies, laws, etc. will be more explicit, and more measurable than the current verbal, vote-pandering drivel.

    We’ll get better at governance software, and predicting outcomes. Politics becomes open-source software and we’re the code writers per our areas of knowledge, of specialization.

    The software itself could have bias-correction parameters, rapid error detection-correction features, and even a feature that weights moral code over monetary code (an increasingly complexity inadequate coding structure) in decision making.

  • Helen Camakaris says:

    Thank you, I enjoyed this article and found it very useful. I’m interested in how we might work within our limitations to successfully combat climate change and issues of sustainability. I’ve started exploring ideas but would welcome more. See http://newint.org/features/web-exclusive/2015/08/28/evolutionary-shortcomings-and-climate-catastrophe/ and https://theconversation.com/dont-trust-your-stone-age-brain-its-unsustainable-9075. My FB page is Helen Camakaris, Writer and my Twitter handle is helenmcama.
