An Introduction to Evolutionary Ethics
Scott M. James

With this story firmly in mind, let's turn to morality. As we saw in the previous chapter, the biological value of establishing and preserving cooperative alliances among one's neighbors would have been critical for the survival (and reproductive success) of early humans. While the processes of inclusive fitness would have ensured at least some resources coming one's way, this would be far from optimal. Anthropologists and ethnographers hypothesize that early humans existed in small bands of about thirty-five individuals. And these bands may have coexisted with other bands totaling some 150 people. One might have treated some of these individuals as relatives, but a sizable number would be mere neighbors. An individual who could routinely count on these non-relatives for assistance – in return for giving assistance – would have possessed a pronounced advantage over an individual unable or unwilling to forge such relationships.

As in the case of our discriminating males, however, it's one thing to identify what is biologically advantageous, another thing to design individuals capable of regularly reaching it. We cannot expect that our earliest ancestors calculated the long-term value of establishing cooperative alliances. This is no more plausible than the idea that early males calculated female fertility rates. But the adaptive sub-problem here is even more pronounced than in the case of mate selection. For we have to remember that there would have been persistent pressure to *resist* cooperating. Recall our discussion of the Prisoner's Dilemma in the previous chapter. Cooperating in Prisoner's Dilemma games is hardly the most attractive option: first, it means forgoing the highest payoff (i.e., exploiting the cooperation of another) and, second, it opens one up to being exploited. If early humans had enough sense to know how to reason about what was good for them, then they would have been leery of setting themselves up for a fall. But as Prisoner's Dilemma games so elegantly make clear, when everyone takes that attitude, everyone suffers. So the adaptive problem in need of solution was this: design individuals to establish and preserve cooperative alliances *despite* the temptation not to cooperate.
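
To make the incentive structure concrete, here is a minimal sketch of a one-shot Prisoner's Dilemma in Python. It is not from the text; the payoff values are invented, chosen only to satisfy the standard ordering (temptation > reward > punishment > sucker's payoff):

```python
# Illustrative one-shot Prisoner's Dilemma (hypothetical payoffs).
# Standard ordering: T (temptation) > R (reward) > P (punishment) > S (sucker).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # R, R: both benefit
    ("cooperate", "defect"):    (0, 5),  # S, T: cooperator exploited
    ("defect",    "cooperate"): (5, 0),  # T, S: defector exploits
    ("defect",    "defect"):    (1, 1),  # P, P: everyone suffers
}

def best_reply(opponent_move: str) -> str:
    """Return the move that maximizes my payoff against a fixed opponent move."""
    return max(("cooperate", "defect"),
               key=lambda my_move: PAYOFFS[(my_move, opponent_move)][0])

# Defection is the best reply no matter what the other player does...
assert best_reply("cooperate") == "defect"
assert best_reply("defect") == "defect"
# ...yet mutual defection leaves both worse off than mutual cooperation.
assert PAYOFFS[("defect", "defect")][0] < PAYOFFS[("cooperate", "cooperate")][0]
```

The asserts record the dilemma itself: each player's individually rational move is to defect, yet when everyone reasons that way, everyone suffers.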

The solution (you guessed it) was to design individuals to *think morally*. One of the earliest philosophers to push this specific view was Michael Ruse: “To make us cooperate for our biological ends, evolution has filled us full of thoughts about right and wrong, the need to help our fellows and so on” (1995: 230–1). Cooperating is not merely something to be desired (at least when it is); it's something we regard as *required*. “Morality,” says Ruse, “is that which our biology uses to promote ‘altruism.’” A recent proponent of this view, Richard Joyce, provides the most explicit account of the steps leading up to our moral sense. It's worth pausing over a longer passage:

Suppose there was a realm of action of such recurrent importance that nature did not want practical success to depend on the frail caprice of ordinary human practical intelligence. That realm might, for example, pertain to certain forms of cooperative behavior toward one's fellows. The benefits that may come from cooperation – enhanced reputation, for example – are typically long-term values, and merely to be aware of and desire these long-term advantages does not guarantee that the goal will be effectively pursued, any more than the firm desire to live a long life guarantees that a person will give up fatty foods. The hypothesis, then, is that natural selection opted for a special motivational mechanism for this realm: moral conscience. (Joyce 2006: 111)

If an early human (let's call him Ogg) believed that not performing certain actions (e.g., killing, stealing, breaking promises) was good for him, then although he would routinely avoid these actions, nothing prevents Ogg from occasionally *changing course* in the face of an even more attractive good, for example, his neighbor's unattended stash of fruit. “Not stealing is good, sure, but just look at those ripe papayas – they're *great*!” So Ogg could be counted on as a reliable neighbor – except, well, when he couldn't be.

But in order for cooperative alliances to work, in order for each to truly benefit, there must be a guarantee that each sticks to his commitment, that neither is tempted to back out when more attractive options arise. Recall Farmer A and Farmer B from the last chapter: each needs the other's help, but helping puts each at risk of exploitation. What each needs is the assurance that the other is *committed* to this cooperative arrangement. And what is true at the level of two individuals is true at the level of groups: each person needs assurances that the sacrifices she makes for the group (e.g., defending against invaders; participating in hunts) are not in vain. This is where morality steps in.

The introduction of moral thinking, characterized along the lines discussed in the previous section, provides the missing guarantee. If Ogg believes that stealing his neighbor's (unattended) fruit is not merely undesirable but *prohibited*, and if this belief is strongly tied to Ogg's motivation, then this would be the best guarantee that Ogg will not commit those actions.[5] And by not committing those actions, Ogg would avoid the very kinds of behavior that would threaten cooperative alliances. We have to remember that, in small groups, it's not just what Ogg's actual partners think of Ogg; it's also what potential partners think of Ogg. We call it *reputation*.[6] After all, would *you* trust someone who wouldn't hesitate to deceive or kill another human being?

But here's where the lesson we began with matters: there would be *no need whatsoever* for Ogg to have any knowledge of – let alone concern for – the correlation between what's right and wrong, on the one hand, and cooperative alliances on the other. It is enough that Ogg is convinced that some things *just shouldn't be done* – no matter what. He need not also recognize that attitudes like that have a biological payoff. (In fact, we might insist that success actually depends on the absence of any such recognition, for again the point is to block deliberation and lock in cooperation.) Design humans to think (and feel) that some actions are prohibited, and cooperative success will take care of itself.
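
One way to picture the proposed mechanism is as an override on ordinary payoff deliberation. The following sketch is mine, not the book's; the options, labels, and payoff numbers are invented. A merely prudential agent picks whatever scores highest at the moment, while a “moralized” agent screens out prohibited options before deliberating, whatever their payoff:

```python
# Hypothetical sketch of moral conscience as an override on deliberation.
# Options are (action, immediate_payoff) pairs; the numbers are made up.
OPTIONS = [("keep promise", 2), ("steal unattended fruit", 5)]
PROHIBITED = {"steal unattended fruit"}

def prudential_choice(options):
    """Ogg without a moral sense: take whatever pays best right now."""
    return max(options, key=lambda opt: opt[1])[0]

def moralized_choice(options, prohibited=PROHIBITED):
    """Ogg with a moral sense: prohibited acts never enter deliberation."""
    permitted = [opt for opt in options if opt[0] not in prohibited]
    return max(permitted, key=lambda opt: opt[1])[0]

print(prudential_choice(OPTIONS))  # "steal unattended fruit" (ripe papayas!)
print(moralized_choice(OPTIONS))   # "keep promise": cooperation locked in
```

Note that the moralized agent never even compares the payoffs of prohibited acts; that is the sense in which, on this hypothesis, prohibition “blocks deliberation” rather than merely outweighing temptation.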

Well, almost. There are several wrinkles to iron out here. Perhaps the most pressing concern is this: What prevents clever *a*moral individuals from invading and taking over a population of moral creatures? Won't the strong disposition to refrain from immoral acts dangerously handcuff such individuals? These and other concerns will be addressed in the next chapter. In the remaining part of this chapter, I want to show two things. First, this evolutionary account of morality parallels in interesting ways hypotheses about the evolution of religious belief and ritual. Second, and more important, this initial sketch of morality's evolution nicely explains the surface features of moral thinking outlined in the previous section. As I'll try to show, those features are precisely what we would expect to see if the sketch just rendered is correct.

Recall the lesson of the Prisoner's Dilemma: cooperating with others can deliver real benefits, *so long as* you have some guarantee that others are likely to play along. You need a reason, that is, to trust others. The behavioral ecologist William Irons (2001) has argued that religious rituals, backed by deep religious beliefs, can provide just such a reason. The key, says Irons, is *signaling*. Someone who regularly engages in religious ritual, making repeated costly sacrifices, signals to others her commitment to her faith. Someone who goes to the trouble of wearing heavy garments, or praying, or eating only certain types of food, and so on, demonstrates the kind of fidelity to a group that can provide others with the assurance that this person can be trusted. The anthropologist Richard Sosis summarizes the idea this way: “As a result of increased levels of trust and commitment among group members, religious groups minimize costly monitoring mechanisms that are otherwise necessary to overcome free-rider problems that typically plague communal pursuits” (2005: 168). In other words, members spend less (valuable) time worrying about who among them can be trusted. This hypothesis yields a number of testable predictions. To name just one, the more costly constraints a religious group puts on a member's behavior, the more cohesive the group should be. And one indication of cohesiveness should be the *duration* of the group's existence. Sosis (2005) compared the demands various nineteenth-century American communes placed on their members and how long such communes survived. Indeed, Sosis found that the more demands a commune placed on its members, the longer such a commune remained in existence.
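
To illustrate the shape of that prediction, the sketch below checks for a positive association between costly demands and group longevity. The data points are invented purely for demonstration; they are not Sosis's (2005) figures:

```python
# Illustrative check of the costly-signaling prediction. The numbers are
# invented for demonstration and are NOT Sosis's (2005) data.
from statistics import correlation  # requires Python 3.10+

demands   = [2, 5, 8, 11, 15]    # hypothetical count of costly requirements
lifespans = [7, 15, 30, 42, 60]  # hypothetical years each commune survived

# The hypothesis predicts a positive correlation between the two.
r = correlation(demands, lifespans)
print(f"Pearson r = {r:.2f}")  # a positive r is consistent with the prediction
```

A real test would of course need the historical commune records themselves, plus controls (e.g., religious versus secular communes), but the predicted sign of the relationship is the point here.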

We can't say exactly how these results (if they stand up) bear on the question of the evolution of morality. It may be that they are unrelated. But if they are connected, it would help explain the powerful connection people very commonly draw between religion and morality. How, these people ask, can you have one without the other? As Donald Wuerl, the archbishop of the Roman Catholic diocese of Washington, DC, recently put it in a homily, “ethical considerations cannot be divorced from their religious antecedents.”[7] Perhaps the disposition to feel a connection to a religious group is part of the same disposition to regard actions as right or wrong. Perhaps one triggers the other. At any rate, what we can say is that this area remains almost entirely unexplored. Let me move on to my other closing point.

3.3 Explaining the Nature of Moral Judgments

In the previous section we identified six features of moral thinking in need of explaining. The first thing we noted was that moral thinking requires an understanding of prohibitions. To judge that abortion is wrong is not merely (if at all) to express a desire not to have an abortion; it's to assert that abortion is *prohibited*, that it should not be done. This distinction makes a practical difference. For regarding some acts as prohibited, as *wrong*, has a way of putting an end to the discussion; it's a “conversation-stopper.” If I believe that the act is wrong, then that's it. It shouldn't be done. Moral thinking has a way of overriding my other modes of practical deliberation. It's worth contrasting this with our desires.

We're pretty good at getting ourselves to do things we don't desire, even things we passionately hate (for example, getting ourselves to the dentist or cleaning the bathroom). But getting ourselves to do things we think are *immoral* is a different matter. I would bet that no amount of persuasion will get you to steal your neighbor's car or beat up the elderly couple down the street – even if I could guarantee that you wouldn't get caught. This is not to say we're incapable of such things; tragically, we are. The point is that there appears to be a substantial difference between doing something you (strongly) desire not to do and doing something you sincerely believe is (seriously) immoral. Most would agree that it takes considerably more psychological effort to do what we think is seriously wrong than to do what we strongly desire not to do. Part of it has to do with the psychic “cost” of living with ourselves after committing an immoral act.

Now this difference, according to the evolutionary account, has critical biological consequences. For if we assume that reproductive success in creatures like us depended so critically on forging and preserving our social bonds, then this deep reluctance to do what we regard as prohibited is precisely what we should expect to see. Designing creatures with a psychological mechanism that *overrides* practical deliberation when moral matters arise ensures that an individual will not act in ways that might jeopardize future cooperative exchanges. As Joyce noted above, merely desiring not to perform certain actions allows for too much wriggle room: after all, keeping promises may not seem very desirable once we've already benefited from the initial arrangement (as with Farmer A).
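
The Farmer A/B case has the structure of a sequential trust game. Here is a small sketch of that structure; the roles are schematic and the payoffs are invented, not taken from the book. Without commitment, backward induction predicts the second mover reneges, so a forward-looking first mover never helps at all:

```python
# Hypothetical sequential exchange: Farmer B helps first, Farmer A then
# decides whether to return the help. Payoffs (B, A) are invented.
OUTCOMES = {
    ("help", "reciprocate"): (3, 3),   # both harvests brought in
    ("help", "renege"):      (-1, 5),  # B's effort wasted, A free-rides
    ("no help", None):       (0, 0),   # no exchange at all
}

def a_reply(a_is_committed: bool) -> str:
    """A moves second. Without commitment, A simply takes the higher payoff."""
    if a_is_committed:
        return "reciprocate"  # promise-keeping overrides the payoff comparison
    return max(("reciprocate", "renege"),
               key=lambda move: OUTCOMES[("help", move)][1])

def b_opening_move(a_is_committed: bool) -> str:
    """B moves first, anticipating A's reply (backward induction)."""
    b_payoff_if_help = OUTCOMES[("help", a_reply(a_is_committed))][0]
    return "help" if b_payoff_if_help > 0 else "no help"

print(b_opening_move(a_is_committed=False))  # "no help": exchange unravels
print(b_opening_move(a_is_committed=True))   # "help": commitment restores it
```

The sketch makes vivid why a mere desire to reciprocate is not enough: what B needs is evidence that A's reciprocation does not hinge on how A's payoffs look after A has already benefited.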

This should also explain the sense that prohibited acts remain prohibited *even if one desires to perform them*. We noted earlier that if you judge that no one should have an abortion because abortions are wrong, this judgment remains firm even when applied to someone who positively desires to have an abortion. What this seems to imply is that the truth of a moral judgment does not depend on people's desires, their interests, their moods, and so on. The wrongness of an action is apparently grounded on something more, something transcendent. This fits perfectly with the suggestion above that the recognition of moral wrongness halts further deliberation; it overrides our decision-making. Someone whose moral judgments *did* depend on his desires in this way would run a serious risk of undermining his reputation by acting in antisocial ways whenever his desires overwhelmed him. In general, individuals who could so easily back out of their commitments or steal from their neighbors or murder their enemies – simply because their desires shifted – would have a substantially more difficult time making and keeping cooperative arrangements.[8] (Test yourself: what kind of person would you trust in a Prisoner's Dilemma-style game?)

Built into these observations is the assumption that moral judgments are tightly linked to *motivation*, another feature of morality we discussed. Again, if we assume that evolutionary success (for creatures like us) really depended on preserving social arrangements, then for moral thinking to play its biologically significant role it has to *move* us – even in the face of “internal resistance.” Moral thinking should not be idle. It's not like thinking that the sky is blue or that Ogg is a sloppy eater or even that the red berries are tasty. Moral thinking should very reliably “engage the will.” And this is just what we see. For example, you can pretty much guarantee that if someone sincerely believes that abortion is murder, you won't see her having an (elective) abortion later that day. Whatever else moral thinking is, it's practical. It moves us.

And it can move us to retaliate. We noted in the previous section that moral thinking implies notions of desert. The next chapter is devoted to exploring how this idea relates to punishment, reputation, and feelings of guilt. Some of the more interesting work coming out of behavioral economics and psychology highlights the strategic importance of punishment and reputation. Indeed, my own view has developed partly in response to these findings.
