Steven Pinker’s thoughtful remarks concerning group selection present a useful occasion for clearing up some misconceptions surrounding recent developments in the behavioral sciences concerning our understanding of moral vs. self-interested behavior. Following George C. Williams’ Adaptation and Natural Selection in 1966 and Richard Dawkins’ The Selfish Gene a decade later, evolutionary biologists in the last quarter of the twentieth century came to view humans as fundamentally selfish, contributing to society only when socially imposed rewards and punishments render it in their self-interest to do so. Dawkins, for instance, opines in the opening pages of The Selfish Gene, “We are survival machines—robot vehicles blindly programmed to preserve the selfish molecules known as genes…. a predominant quality to be expected in a successful gene is ruthless selfishness. This gene selfishness will usually give rise to selfishness in individual behavior…. Anything that has evolved by natural selection should be selfish.”

Of course, it does not appear in our daily life that everyone is selfish, and if we introspect, most of us will agree that we try to behave, however successfully or unsuccessfully, as moral beings willing to sacrifice personal amenities in the pursuit of truth, justice, loyalty and compassion. Dawkins’ explanation is that human morality is a cultural facade laid upon our basically selfish human nature. “Be warned,” he states, “that if you wish, as I do, to build a society in which individuals cooperate generously and unselfishly towards a common good, you can expect little help from biological nature. Let us try to teach generosity and altruism, because we are born selfish.”

But why do fundamentally selfish beings, which is what humans are according to the selfish gene theory, accept cultural norms that contradict their natural strivings? Richard Alexander answered this question in 1987 in his The Biology of Moral Systems with his concept of indirect reciprocity, according to which we all continually evaluate others for possible future gainful interactions, and we reject individuals who violate norms of reciprocity. The somewhat more general answer offered by Pinker is that each of us conforms to social norms out of fear of losing our good reputation. What appears to be self-sacrifice is thus simply a superficial veneer covering our selfish natures. “Scratch an altruist,” biologist Michael Ghiselin eloquently wrote in 1974, “and watch a hypocrite bleed.”

Pinker frames the issue in terms of sacrificing personal interests on behalf of the group. “What we don’t expect to see,” he writes, “is the evolution of an innate tendency among individuals to predictably sacrifice their expected interests for the interests of the group.” This is not the correct way to frame the issue. People do not generally “sacrifice on behalf of the group.” Rather, people have moral principles that they strive to uphold, and that compete with their material interests. When I behave honestly in a transaction, I may have no intention whatsoever of sacrificing on behalf of my transaction partners, much less on behalf of my society. I just do what I think is the morally correct thing to do. When I bravely participate in a collective action against a despotic regime, I am upholding my moral principles, not sacrificing on behalf of the group. Indeed, it is no sacrifice at all to behave morally, because we humans care about our moral worth in much the same way as we care about our material circumstances.

The past few decades have seen the massive accumulation of evidence in favor of the view that human beings are inherently moral creatures, and that morality is not a simple cultural veneer. Humans are born with a moral sense, as well as with a predisposition to accept and internalize the moral norms of their society, and often to act on these moral precepts at personal cost. In our book, A Cooperative Species, Samuel Bowles and I summarize a plausible model of human nature in which “people think that cooperating is the right thing to do and enjoy doing it, and that they dislike unfair treatment and enjoy punishing those who violate norms of fairness.” Most individuals include moral as well as material goals in their personal decision-making, and they willingly sacrifice material interests in pursuit of moral goals. It is this view that I will defend in my remarks.

Pinker does not present, and indeed makes light of, the body of research supporting the existence of a basic human moral sense, suggesting that there is only one piece of evidence supporting the view that people behave morally when their reputations are not at stake: “It seems hard to believe,” he says, “that a small effect in one condition of a somewhat contrived psychology experiment would be sufficient reason to revise the modern theory of evolution, and indeed there is no reason to believe it. Subsequent experiments have shown that most of the behavior in these and similar games can be explained by an expectation of reciprocity or a concern with reputation.” Because expectation of reciprocity and concern for reputation are basically selfish and do not involve a fundamental respect for moral values, Pinker is simply reiterating Dawkins’ message of a half-century ago that we are the selfish product of selfish genes.

1. Morality and Human Nature
Today the economics and psychology journals, as well as the most influential natural science journals, Science and Nature, are full of accounts of human moral and prosocial behavior. Pinker dismisses this evidence by asserting that “Any residue of pure altruism” beyond self-interested reciprocity and reputation building “can be explained by the assumption that people’s cooperative intuitions have been shaped in a world in which neither anonymity nor one-shot encounters can be guaranteed.” In other words, what looks like moral behavior is just a form of mental error due to imperfections of the human brain.

The empirical evidence on cooperation in humans does not support Pinker’s view. The social interactions studied in the laboratory and field always involve anonymity, so subjects cannot help or harm their reputations, and they usually are one-shot, meaning that subjects cannot expect to be rewarded in the future for sacrifices they make at a given point in time.

Pinker does cite a few studies that support his position. “Subsequent experiments have shown that most of the behavior in these and similar games can be explained by an expectation of reciprocity or a concern with reputation.” Let us see what these studies in fact say. Reciprocity, says Pinker, “is driven not by infallible knowledge but by probabilistic cues. This means that people may extend favors to other people with whom they will never in fact interact with again, as long as the situation is representative of ones in which they may interact with them again.” The only published paper he cites is by Andrew W. Delton, Max M. Krasnow, Leda Cosmides and John Tooby, “Evolution of Direct Reciprocity Under Uncertainty can Explain Human Generosity in One-shot Encounters.”[1] This paper (and several related papers coming out of the Center for Evolutionary Psychology at Santa Barbara, California) shows that, in the authors’ words, “generosity is the necessary byproduct of selection on decision systems for regulating dyadic reciprocity under conditions of uncertainty. In deciding whether to engage in dyadic reciprocity, these systems must balance (i) the costs of mistaking a one-shot interaction for a repeated interaction (hence, risking a single chance of being exploited) with (ii) the far greater costs of mistaking a repeated interaction for a one-shot interaction (thereby precluding benefits from multiple future cooperative interactions). This asymmetry builds organisms naturally selected to cooperate even when exposed to cues that they are in one-shot interactions.”
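The cost asymmetry the authors describe can be made concrete with a back-of-the-envelope expected-payoff comparison. The sketch below is purely illustrative (the payoff values and the `expected_payoff` function are my own hypothetical choices, not parameters from Delton et al.): it shows why, when an ongoing reciprocal relationship is valuable enough relative to the one-time cost of being exploited, a decision rule that cooperates even under weak cues of repetition can outperform one that defects.

```python
# Hypothetical sketch of the cost asymmetry in dyadic reciprocity under
# uncertainty. Payoff values are illustrative, not taken from the paper.

def expected_payoff(cooperate, p_repeat,
                    cost_exploited=-1.0,   # one-time loss if a one-shot partner exploits you
                    future_benefit=10.0,   # value of an ongoing reciprocal relationship
                    oneshot_gain=1.0):     # small gain from defecting in a true one-shot
    """Expected payoff of cooperating vs. defecting, given the
    probability p_repeat that the interaction is in fact repeated."""
    if cooperate:
        # If repeated: secure the stream of future benefits.
        # If one-shot: risk being exploited once.
        return p_repeat * future_benefit + (1 - p_repeat) * cost_exploited
    else:
        # If repeated: forgo all future cooperation (payoff 0 here).
        # If one-shot: pocket the small defection gain.
        return p_repeat * 0.0 + (1 - p_repeat) * oneshot_gain

# With a weak cue of repetition (p_repeat = 0.1), defection still wins:
#   cooperate: 0.1*10 + 0.9*(-1) = 0.1;  defect: 0.9*1 = 0.9
# But a slightly stronger cue (p_repeat = 0.2) already favors cooperating:
#   cooperate: 0.2*10 + 0.8*(-1) = 1.2;  defect: 0.8*1 = 0.8
for p in (0.1, 0.2):
    print(p, expected_payoff(True, p), expected_payoff(False, p))
```

Because the forgone `future_benefit` dwarfs the one-time `cost_exploited`, the break-even probability of repetition is low; this is the sense in which selection could build organisms that cooperate even under weak cues of repetition.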

This statement is of course not only true, but completely obvious, and does not require sophisticated academic papers to validate its truth. However, it does not explain human generosity. It is elementary logic that to say that P explains Q does not mean that if P is true then Q is true, but rather the converse: whenever Q is true, then P is true as well. In the current context, this means that whenever subject A sacrifices on behalf of stranger B in an experiment, it must be true that A is sufficiently uncertain concerning the probability of meeting B again, and A would incur a sufficiently large cost should A meet B again in the future, that it pays A to sacrifice now. The authors have not even attempted to show that this is the case. Nor is it plausible. The experiments under discussion assume subject anonymity; subjects will never knowingly meet again. Pinker’s supposed counter-evidence is thus invalid. To my knowledge, there is simply no credible counter-evidence.

Read more at Social Evolution Forum

Published On: June 27, 2012

Herbert Gintis

Herbert Gintis (Ph.D. in Economics, Harvard University, 1969) is External Professor, Santa Fe Institute, and Professor of Economics, Central European University. He and Professor Robert Boyd (Anthropology, UCLA) head a multidisciplinary research project that models such behaviors as empathy, reciprocity, insider/outsider behavior, vengefulness, and other observed human behaviors not well handled by the traditional model of the self-regarding agent. His web site, www-unix.oit.umass.edu/~gintis, contains pertinent information. Professor Gintis published Game Theory Evolving (Princeton: Princeton University Press, 2000), and is coeditor, with Joe Henrich, Robert Boyd, Samuel Bowles, Colin Camerer, and Ernst Fehr, of Foundations of Human Sociality: Economic Experiments and Ethnographic Evidence from Fifteen Small-scale Societies (Oxford: Oxford University Press, 2004), and with Samuel Bowles, Robert Boyd and Ernst Fehr, Moral Sentiments and Material Interests: On the Foundations of Cooperation in Economic Life (Cambridge: MIT Press, 2005). He is currently completing a book with Professor Bowles entitled A Cooperative Species: Human Reciprocity and its Evolution.


  • limbic says:

    What if acting prosocially or proself in these anonymous one-shot games just depends on “habit,” and habit depends on how things are done in the social network subjects are part of? It would mean that humans are behaviorally flexible with respect to altruism and cooperation and may switch their behavioral pattern according to context, but with a certain lag due to habit which the game experiments aren’t able to capture. In line with this, Bogaert, Boone and Declerck (2008) demonstrated that trust and goal alignment are important contextual moderators of cooperation: for prosocials, cues signalling trust are necessary to generate positive expectations regarding alters’ behaviour, whereas proselfs need external incentives to align their personal interest with a cooperative goal. They also found that, for instance, economics students often fell into the proself category, demonstrating the role of acquired behavior with respect to altruism.

    I would thus say that humans are behaviorally flexible with regard to moral behavior, ranging from proself to prosocial depending on current and prior context and interactions. The question is whether the capacity to be truly altruistic (without reciprocity or inclusive-fitness benefits) is biologically prepared as a result of ancestral group selection, or whether it is rather an extension of our ability to be loyal to those whom we consider ingroups (which may even include quite genetically unrelated nonhuman animals such as dogs).
