Are there moral universals? At first glance, this looks like a question of fact. To answer it, we’d have to nominate some candidates for universal moral truth, and check to see whether everyone accepts them. We could ask all 7.6 billion people, for example, whether they think recreational cruelty is wrong. Given human perversity, there’s a good chance that some will answer “no.” In that sense, there are probably no moral universals.
But that’s not the question we really mean to ask, is it? What we want to know is whether any moral strictures are binding on us all. So clarified, the answer flips: Of course there are moral universals. “Recreational cruelty is wrong” is an incontestable example of the type in question. Yes, some nut job might assert otherwise, but why should we listen to him? Either he doesn’t understand the question, or he’s being needlessly perverse. More important, he’s obligated to avoid recreational cruelty whether he knows it or not.
Invariably, clever people come up with counterexamples. ‘What about sadomasochists and the organizers of ultra-marathons: don’t they facilitate recreational cruelty?’ Such counterexamples miss the point. I could just as well have nominated “It’s wrong to visit recreational cruelty on the unconsenting.” Or “Pointless suffering is a bad thing.” Remember, one instance of a moral universal suffices to prove the existence claim.
A single instance, though, doesn’t tell us what we really want to know. We want to know whether anything like a well-functioning value system has universal validity.
For much of the twentieth century, the politically correct answer was ‘No: universally valid value systems don’t exist.’ People worried that an affirmative answer would license political or cultural imperialism: people could get the idea that things really are right and wrong, and this might lead them to impose their values on others. In this way, it became trendy to deny moral universals.
Trendy, but wrong-headed. For one thing, there’s a big gap between “Moral universals exist” and “I have all the answers.” Recognizing moral universals needn’t render one arrogant and ready to impose. Second, our tendency to deny moral universals subverts the search for common moral ground. (Why engage in value inquiry if moral truths don’t exist?) Third, denying that there are common moral truths doesn’t just humble cultural imperialists; it also humbles the compassionate and the tolerant, robbing them of conviction. Cultural relativism robs us of moral courage.
Fortunately, the moral sciences are starting to change all of this. Moral and social psychology, game theory, ethology, primatology, evolutionary psychology: all of these shed light on the origins and functioning of moral sensibilities (also moral intuitions, norms, and rules). We now know that morality evolved to serve a “pro-social” function: in the past, it promoted cooperation and survival. Yes, the first nervous systems prioritized self-care, giving the creatures that bore them a survival advantage, but natural selection has repurposed our nervous systems to also care for kin, friends, and tribesmen. Our brains now deliver a mix of self and other-regarding intuitions.
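To make the game-theoretic point concrete, here is a minimal sketch of the classic iterated prisoner’s dilemma (my own illustration, using standard textbook payoffs; none of the numbers or strategy names come from the article or its sources). It shows one well-known way that reciprocal cooperation can out-earn relentless selfishness once interactions repeat:

```python
# Illustrative sketch only: a bare-bones iterated prisoner's dilemma.
# Payoff values are the conventional textbook ones, not the author's model.

PAYOFFS = {  # (my move, their move) -> my payoff
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # I cooperate, they exploit me
    ("D", "C"): 5,  # I exploit them
    ("D", "D"): 1,  # mutual defection
}

def always_defect(opponent_history):
    """Pure selfishness: defect no matter what."""
    return "D"

def tit_for_tat(opponent_history):
    """Reciprocity: cooperate first, then mirror the partner's last move."""
    return opponent_history[-1] if opponent_history else "C"

def play(strategy_a, strategy_b, rounds=100):
    """Return the two strategies' total payoffs over repeated rounds."""
    hist_a, hist_b = [], []  # each side's record of the *other's* moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_a), strategy_b(hist_b)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

if __name__ == "__main__":
    print(play(tit_for_tat, tit_for_tat))      # (300, 300)
    print(play(always_defect, always_defect))  # (100, 100)
    print(play(tit_for_tat, always_defect))    # (99, 104)
```

Two reciprocators end up far ahead of two defectors (300 points each versus 100), which is the usual toy model of why selection can favor other-regarding instincts in social species.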
On the whole, our instincts are more selfish, short-sighted and tribal than is warranted. To properly promote shared wellbeing, we must deliberately discount some moral intuitions, and deliberately amplify others. Here, moral norms prove useful. Prohibitions against lying, cheating, and stealing, for example. “Be nice” is a good rule of thumb, as is “Respect basic rights.” “Treat others the way you like to be treated” is pretty nifty, too. It’s not hard to extend the list.
Notice that exhortations like these are more than merely subjective. Our preference for kindness over cruelty, for example, isn’t arbitrary. Why? Well, kindness is objectively more conducive to shared wellbeing than cruelty is. The same goes for fairness over unfairness, and honesty over deceit. Given basic facts about animal nervous systems, some things really are better than others.
You don’t need much in the way of normative assumptions to convert these facts into moral principles. Consider the assertion: “All else being equal, more wellbeing is better than less.” Who could object? Anyone worth taking seriously? Surely not: it’s all but definitionally true. This simple idea is an excellent place to begin building ethical common ground.
It’s like a seed crystal: add this idea to a solution of facts, and all kinds of moral truths precipitate out. And the truths you get—such as “Best not to harm conscious critters”—have a strong claim to universal validity. So why not assert the existence of moral universals? By so doing, we affirm our commitment to behaviors that tend to improve our collective lot.
This article is from TVOL’s project titled “This View of Morality: Can an Evolutionary Perspective Reveal a Universal Morality?”
Thanks very much for this commentary, which is close to my own view. When I started to read moral philosophers such as Bernard Williams, I was struck by how morality is axiomatically defined in terms of the good of others and of groups as a whole. In a TVOL interview with the moral philosopher Simon Blackburn, I asked him to define morality as he would in an intro class, without regard to evolution, and what he said made perfect sense from an evolutionary perspective (the subject of the rest of our interview).
A lot of the variability in moral systems comes from how the moral circle is drawn: who it includes and who it excludes. Once we take note of this fact, what takes place within any given moral circle appears a lot more uniform. It’s also important to note that people take part in many groupings and always have: an encampment, a hunting party, a war party, one’s immediate kin. We are sufficiently context-sensitive in our behavior that we can have separate norms for each grouping. We’re drawing our moral circles all the time.
Finally, I like what you say about wellbeing having a straightforward biological interpretation in terms of survival and reproduction. Joseph Conrad said that he liked writing stories about the sea because life on a ship is so morally simple. What’s right and good is to stay afloat. If you want to get people into the mood of a whole-earth morality, just describe the whole earth as a single ship. It’s that easy. Of course, it’s not at all easy to establish the whole apparatus of a moral system at that scale, but it’s not hard to get people to adopt the frame of mind.
Good points all, David. For my part, I don’t define morality for my intro students, or start with an axiomatic definition. Instead, I just start with an urgent and important question: What do we mean when we call some behavior good or right or moral? Or other behavior bad or wrong or immoral? Because different answers have huge implications for how we (should) live, it’s terribly important that we answer the question well.
The idea that moral codes should function to promote shared wellbeing–that is, our collective survival and thriving–is not an axiom for me; it’s the outcome of sustained reflection on how we should think about right and wrong…
Andy,
I agree: it is almost definitionally true that the goal of morality is to increase well-being. I expect many of our contributors would also agree.
But the ultimate goal of morality may not be the only aspect of morality that is universal. There may also be universally moral ‘means’ in addition to this moral ‘end’.
Consider an implication of science defining universally moral ‘means’, such as “Increase the benefits of cooperation without exploiting others”, as I claim it does. This scientific claim includes the vague goal “Increase the benefits of cooperation”. What “benefits of cooperation” are people motivated to pursue, if not increased well-being? Perhaps the science of what universally moral means ‘are’ supports your claim about what the goal of morality ‘is’.
Note there is no mention here of mysterious and troublesome oughts, or of sources of “bindingness”, regarding either universally moral ‘means’ or ‘ends’. With both universally moral means and ends defined (without any troublesome oughts), haven’t we defined the goal and means of a universal morality? Perhaps worth exploring?
Mark,
I agree. Establishing the universality of the end is just the first step towards developing a rich universal morality. My claim is that, with enough care and attention to detail, we can generate universally binding answers to our “means” questions as well. What really works to bring about a shared end? That’s a question of fact, to be settled by scientific inquiry into causal relationships. Of course, the relevant inquiry must keep a steady eye on what matters, and steer clear of unintended consequences. Your “Increase the benefits of cooperation without exploiting others” is a nice example of a rule that is likely to pass muster, because it carefully rules out the dangerous kind of end-justifies-the-means thinking. (I like that it’s a judicious hybrid of consequentialist and Kantian ethics, too.)
Andy,
The challenging task of showing there are “universally binding answers to our ‘means’ questions” may not strictly be required. For example, my suggested moral ‘means’, “increase the benefits of cooperation without exploiting others”, is what shaped the largest part of the biology underlying our moral sense; as a component of that moral sense, I argue, it triggers our emotional experience of durable well-being as a reward for cooperation. If we can show 1) that such an evolutionary moral principle is the one most likely to meet common human needs and preferences, and 2) that it ‘is’ universally moral (but not innately binding) as a matter of science, then it may be enough that such a principle is a clear preference for refining cultural moral codes. No innate bindingness is needed.