Every earthworm has, at one point, been your mother.

Buddhism has many such thought experiments, ways to expand our notions of morality, to align it with what I’ll call here “universal morality”.

Universal morality is obscured by our evolved morality. Some problems cause disproportionate suffering; there are ways to better optimize the flourishing of humans and other sentient beings. Our moral psychology, however, is designed to punish those who challenge our in-group’s interests, reward those who work in our favor, and maintain our signaled moral identity. This evolved morality not only obscures universal morality but also creates an aversion to improvements to humans that would align our intuitions with actions that promote sentient well-being.

Progress on problems further away from our evolved intuitions, such as in mathematics and physics, has always been faster than progress on understanding human psychology and moral philosophy. The fewer layers of evolved psychology there are to peel away, the faster progress can be made. B.F. Skinner noted that it should be more difficult to send a man to the moon than to implement effective education or to rehabilitate criminals. He lamented the degree to which we can control the inanimate, including weapons, without the wherewithal to solve social problems. Why? Humans anthropomorphize themselves. Our understanding of psychology is clouded by intuitions, especially the strong intuition that humans are not objects of a deterministic universe.


Morality is too close to our eyes for us to see. Compounding the confusion of studying anything as intimate as our own psychology is the self-deception integral to moral psychology. We punish those who transgress while looking for loopholes for ourselves, making self-reported moral reasoning especially suspect. In the trolley problem, participants are more likely to make utilitarian choices the greater their distance from the action that causes one death instead of many (e.g. choosing to save three lives instead of one). There is no real difference between pushing someone onto the tracks and flipping a switch except in terms of plausible deniability. Detecting psychopaths was an important ancestral problem. Thus our conscious moral reasoning is optimized to signal that we are not psychopaths. “True” evolved moral reasoning is insulated from conscious awareness. Consistency, virtue, and the capacity for self-punishment, otherwise known as guilt, are prioritized over aggregate benefit.

Moral debates dance around biting bullets and avoiding fanciful repugnant conclusions. Advocates of moral perspectives confuse morality with the desire to preserve their reputations or to align with the intuitions of their readers. Thought experiments can help us transcend our evolved psychology, but the vast majority of moral reasoning celebrates the output of essentially vestigial moral emotions. There is no wisdom in repugnance. Nor is there any wisdom in beliefs we develop to align our intuitions with those of others, preserve our reputations, or signal that we are not psychopaths.

We have made leaps and bounds in moral achievement compared to the deep history of humanity and to the rest of the natural world. Some consider nonhuman animals their in-group, or at least are signaling their expanded moral circle. There are hacks that stretch our moral circle outside our in-group, like narrative fiction and meditation. The world is a much better place than it was, but an evolutionary perspective does predict limits to this moral expansion. The flourishing of sentient beings is still impeded by moral disgust, outrage, and self-deception. Ironically, moral enhancement such as augmenting intelligence with embryo selection, perhaps the only means by which we may expand our moral circle, is resisted by feelings of disgust, neophobic anxiety, and an attachment to an amorphous, ill-defined concept of the “natural”. One prominent criticism of evolutionary psychology is that it condones immoral behavior by pointing to its natural origins. Far from it: I believe we must acknowledge the shadow of our evolved morality in order to transcend it.

The rapid development of artificial intelligence has both confused and clarified our vision of a universal morality. We don’t want our self-driving cars to choose to kill three pedestrians rather than injure one driver. Most of us would want an artificial superintelligence to work for the aggregate good, to limit suffering and increase well-being rather than signal consistency, disgust sensitivity, guilt or commitment to tradition. Why shouldn’t we modify ourselves to become more moral as well?

A moral artificial intelligence should have very different moral goals than the average human, but how will we be able to build one with our evolved moral intuitions in place?

Dear Seed AI, every living being has, at some point, been your mother. 

This article is from TVOL’s project titled “This View of Morality: Can an Evolutionary Perspective Reveal a Universal Morality?” You can download a PDF of the project [here], comment on this article below, or comment on the project as a whole in the Summary and Overview.

Published On: May 17, 2018

Diana Fleischman


Diana Fleischman is an Associate Professor at the University of Portsmouth. In addition to psychological research, Diana is involved in effective altruism and currently sits on the board of Sentience Institute, a think tank promoting expansion of the moral circle to nonhuman animals. On February 8th Diana will be delivering the annual Darwin Day lecture on the evolution of human morality in London for the British Humanist Society.



  • David Sloan Wilson says:

    Thanks for this very interesting commentary! If you define “universal morality” as “morality extended to all human beings and even beyond”, then most of what you say makes sense to me. Our evolved moral psychology doesn’t even remotely get us there and often gets in the way. However, if we define morality as something that takes place within a specified moral circle, which might be small or large, then different conclusions follow. For one thing, I think we can be less cynical about the self-serving nature of morality. It is possible for someone to be genuinely group oriented, and to hold himself or herself to the same standards as others. This isn’t just an idealistic statement; it is a social strategy that can win the Darwinian contest under specified conditions. Also, if we are flexible in how we define the moral circle (and we are), then we can define it very widely and our “stone age” psychology can work serviceably well. Again, this is not just an idealistic statement but explains why our current moral circles are already so much larger than any group that existed in the deep ancestral past. If I can be persuaded to put America first, I can be persuaded to put the Earth first. Of course, there must be appropriate norms that are monitored, with punishment for breaches and all that, but it’s possible. It’s hard to argue for limitations to the size of our moral circles given how large they have already become!

    • John Lestino says:

      Dear Dr. Wilson,

      I’m so fortunate to have come across this series of articles related to moral psychology. I hope you will share your articles and perspectives with Dr. Alan Fiske of UCLA. Dr. Fiske’s research and scholarship are enlightening given his Relational Models Theory. I suggest sharing with your readers the often-cited article, co-written by Dr. Rai, titled “Moral Psychology is Relationship Regulation….”, as a must-read for those of us reading your series here.

      Thank you,

      John Lestino, MA, LPC

  • Mark Sloan says:


    As you suggest, our evolved morality is complicated and self-contradictory with many examples of despicable behaviors being judged culturally ‘moral’. But what is most despicable and strange about human morality can be a powerful help in evaluating moral theories. Requiring any evolutionary moral theory to explain even the most bizarre and despicable moral norms enables robust confirmations of scientific truth. Once we have a robustly confirmed moral theory, perhaps it can lead us to what is universally moral.

    As you also point out, it is important to remember that our moral intuitions may be unpredictably contrary to what is universally moral. Perhaps a robust theory of morality will help us reliably sort out when and why we can expect dissonant contrariness to appear.
