Three Primates: Kant, Mill, and Aristotle

Ethics is a way of reasoning about certain types of problems. It's a tool, just like math or logic. It starts with certain assumptions, or premises, and works out their logical consequences as they illuminate whatever moral problem we're considering. If we start with different premises we may arrive at different conclusions, and there may be no sensible way by which we can judge some conclusions right and others wrong, unless we can show that there's a problem either with the premises or with the reasoning itself.

This, however, doesn't mean that anything goes. Let's consider first an example by analogy with math. If you say that the sum of the angles of a triangle is 180°, are you right or wrong? It depends. If we are working within the axioms (which is what mathematicians call their assumptions) of Euclidean geometry, then you're correct. But if we are operating within the framework of spherical geometry, then no, you would be wrong. Either way, however, if you claim that the answer is not 180° within a Euclidean space, you are most definitely wrong.
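To make the difference concrete (a worked example added here for illustration, not something drawn from the film): picture a triangle on the surface of the Earth with one corner at the North Pole and the other two on the equator, a quarter of the way around the globe from each other. Each of its three angles is a right angle, so they add up to 90° + 90° + 90° = 270°. Within spherical geometry that's the correct answer; within Euclidean geometry it's impossible. Same question, different axioms, different, and equally defensible, answers.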

Will Rodman decides to test ALZ-112 on his father after his research program at Gen-Sys has been shut down (the drug having in the meantime inadvertently caused permanent enhancements in Caesar). We can look at this decision from the starting assumptions of three standard ethical theories: consequentialism, deontology, and virtue ethics, working our way from those assumptions through the ethical consequences that follow from them.

Consequentialist ethics begins with the assumption that—as the name clearly hints—what matters in moral decision-making is the consequences of one's actions. Nineteenth-century philosopher John Stuart Mill is one of the most influential consequentialists, and for him a good action has the consequence of increasing overall happiness, while a bad action has the consequence of increasing overall pain. So, for Mill it does not really matter what Will's intentions were (they were good, we assume, as he was concerned both with his father's health and with a potential cure for Alzheimer's for all humankind); what matters is what happened as a result of his action. And what happened was a disaster. Not only did his father actually die of the disease, but Will's attempt to solve the problem that led to the failure of his cure will eventually condemn the human race to extinction. That's as bad as consequences can possibly be, I'd say. There is a caveat, however. If the totality of chimp happiness outweighs the pain caused by humanity's extinction, Will may still be vindicated on consequentialist grounds. That, unfortunately, isn't going to help Will or anyone he cared for, except perhaps Caesar.

Deontological ethics is the idea that there are universal rules of conduct that govern our ethical judgments. Religious commandments are an example of a deontological moral system. The most important secular approach to deontology is the one devised by Immanuel Kant in the eighteenth century, and is based on his idea that there is only one fundamental moral rule, which he called the categorical imperative (not only is it an imperative, but no exceptions are allowed!). In one version, the imperative essentially says that we ought never to treat other people solely as means to an end, but always as ends in themselves. In other words, we must respect their integrity as moral agents distinct from but equal to ourselves.

It's not exactly clear how Kant would evaluate Will's actions towards his father. On the one hand, Will attempted the cure on his father because he was genuinely worried about the latter's health, so Will clearly valued his father as an individual for his own sake. On the other hand, if part of Will's goal was to find a general cure for Alzheimer's, then by using his father as an experimental subject, he was using him as a means toward a further end. Moreover, he did so without obtaining his father's explicit consent—indeed, he never even attempted to inform his father about the treatment before or after it was administered. For a deontologist, the consequences aren't what determine the rightness or wrongness of an action at all, so even if Will had succeeded in liberating humanity from Alzheimer's (instead of starting a chain of events that eventually leads to the extinction of the entire species), he would still have done the wrong thing. You can see why Kant was well known for being a bit too strict a moralist.

Finally, we get to virtue ethics, an idea that was common in ancient Greece and was elaborated in particular by Aristotle. Virtue ethicists are not really concerned with determining what's right or wrong, but rather with what kind of life one ought to live in order to flourish. This means that Aristotle would consider neither the consequences of an action per se, nor necessarily the intentions of the moral agent, but would look instead at whether the action was the reflection of a “virtuous” character. “Virtue” here does not mean the standard concept found in the Christian tradition, having to do with purity and love of God. Aristotle was concerned with our character, as manifested in traits like courage, equanimity, kindness, and so on.

Was Will virtuous in the Aristotelian sense of the term? Did he display courage, kindness, a sense of justice, compassion, and so on? It seems to me that the answer is an unequivocal yes. He clearly felt compassion for his father (and for Caesar). He had the courage to act on his convictions, which were themselves informed by compassion for both humans and animals. And he was kind to people around him, beginning with his father and with Caesar, and extending to his girlfriend, among others.

All in all, then, we have three different views about Will and what he did. For a consequentialist, his actions were immoral because they led to horrible outcomes. For a deontologist the verdict is a mixed one, considering that he both did and did not use his father as a means to an end. For a virtue ethicist, Will was undeniably on the right track, despite the fact that things, ahem, didn't exactly work out the way he planned them.

Now, one could reasonably ask: okay, but given that the three major theories of ethics give us different results in the case of Will's decision, is there any way to figure out whether one of these theories is better than the others? That would take us into a separate discussion of what is called meta-ethics, that is, the philosophy of how to justify and ground ethical systems. However, remember the analogy with math: it's perfectly sensible to say that there is no answer as to which system is better, because their starting points (consequences, intentions, character) are all reasonable and cannot necessarily be meaningfully ranked.

Just to come clean here, I lean toward virtue ethics, and I suspect most viewers of the movie do too—whether they realize it or not. If you saw Will as a positive character, felt the compassion he had for his father, and shared his outrage at the way Caesar was being treated, you cannot reasonably fault him for what happened. He tried his best, and Aristotle was well aware of the fact that sometimes our best is just not enough. Life can turn into a tragedy even for the individual endowed with the best character traits we can imagine.

Is It Inevitable?

We've seen that there clearly are a number of ethical issues to consider when we contemplate human genetic enhancement, and that our conclusions about such issues depend on which set of moral axioms we begin with. But is any of the above relevant anyway? When it comes to new technologies like genetic engineering we often hear the argument to end all arguments: technological change, some say, is simply inevitable, so stop worrying about it and get used to it. Françoise Baylis and Jason S. Robert, mentioned earlier, give a number of reasons to believe in what we might call techno-fate. Yet, holding something to be inevitable may be a way to dodge the need for tough ethical decisions, with potentially dire consequences, so it's probably wise to take a closer look.

Baylis and Robert base their “inevitability thesis” on a number of arguments.

To begin with, they claim that capitalism rules our society, and that bio-capitalism is going to be just one more version of the same phenomenon.

Second, they quote Leon Kass as observing that the ethos of modern society is such that there is a “general liberal prejudice that it is wrong to stop people from doing something,” presumably including genetic engineering of human beings.

Third, say Baylis and Robert, humans are naturally inquisitive and just can't resist tinkering with things, so it's going to be impossible to stop people from trying.

Fourth, we have a competitive nature, and we eagerly embrace everything that gives us an edge over others, and that surely would include (at least temporarily, until everyone has access to the same technology) genetic enhancement.

Lastly, it's a distinctive human characteristic to want to shape our own destiny, in this case literally taking the course of evolution into our own hands.

This seems like a powerful case in favor of inevitability, except for two things. First, we do have examples of technologies that we have developed and then abandoned, which makes the point that technological “progress” is a rather fuzzy concept, and that we can, in fact, reverse our march along a particular technological path.

For instance, we have given up commercial supersonic flight (the Concorde) for a variety of reasons, some of which were economic, others environmental. We used to make industrial use of chlorofluorocarbons (in refrigerators and aerosol cans), but we eventually curbed and then banned their production because they were devastating the environment, creating the infamous ozone hole. And we have developed the atomic bomb, but have refrained from using it in a conflict after the devastating effects of Hiroshima and Nagasaki, and indeed are trying to ban nuclear weapons altogether. (Well, okay, according to the 1968 movie we will apparently end up using it again, in the process causing our own extinction and giving the planet to the apes. But hopefully that's a timeline that does not actually intersect our own future. . . .)

The second objection to the “inevitability thesis” is that most of the attitudes described by Baylis and Robert are actually very recent developments in human societies, and are restricted to certain parts of the globe, which means that there is no reason to think that they are an unavoidable part of human nature. Capitalism is a recent invention, and it is actually managed and regulated one way or another everywhere in the world. The “liberal prejudice” is actually found only among the libertarian fringe of the American population and almost nowhere else on the planet.

We may be a naturally inquisitive species, but we are also naturally endowed with a sense of right and wrong, and the history of humanity has been characterized by a balance—admittedly sometimes precarious—between the two. Our alleged competitive “nature” is, again, largely a reflection of a specific American ethos, and is balanced by our instinct for cooperation, which is at least as strong. As for shaping our destiny, we would be doing so whether we did or did not decide to engage in human enhancement, or whether—which is much more likely—we decided to do it, but in a cautious and limited way.

The danger inherent in the sort of techno-inevitability espoused by Baylis and Robert is that it undercuts the need for deliberation about ethical consequences, attempting to substitute allegedly unchangeable and even obvious “facts” for careful ethical reasoning. This sort of capitalism-based hubris is captured in Rise when CEO Steven Jacobs tells our favorite scientist, Will Rodman: “You know everything about the human brain, except the way it works.” Except, of course, that the (lethal) joke is on the ultra-capitalist Jacobs, since he is the one who plunges into the cold waters of San Francisco Bay a few minutes later in the movie.

Whether we are talking about human genetic enhancement (Rise) or the deployment of nuclear weapons (the 1968 Planet of the Apes movie), these are not issues we can simply deputize to scientists or captains of industry. Rather, they're the sort of thing that requires everyone to come to the discussion table, including scientists, technologists, investors, philosophers, politicians, and the public at large. The price of abdicating ethical decision-making is the risk of forging a future like the one that brought Heston's George Taylor to exclaim in desperation: “Oh my God . . . I'm back. I'm home. All the time it was . . . we finally really did it. YOU MANIACS! YOU BLEW IT UP! OH, DAMN YOU! GODDAMN YOU ALL TO HELL!”

6
Who Comes First, Humans or Apes?

Travis Michael Timmerman

              
DR. ZAIUS: Do you believe humans and apes are equal?

ALAN VIRDON: In this world or ours?

APE: In any world.

VIRDON: I don't know about any world. But I believe that all intelligent creatures should learn to live and work with each other as equals.

DR. ZAIUS: Silence!

—“Escape from Tomorrow,” Planet of the Apes, Season 1, Episode 1

In a turning point in the original Planet of the Apes movie, Charlton Heston's character George Taylor successfully steals the notepad of Dr. Zira, an ape psychologist who conducts tests on human subjects. Before being brutally beaten by a gorilla guard, Taylor manages to write “My name is Taylor” on it. Zira immediately demands his release, despite the gorilla guard's protests. From that point on, Zira does all she reasonably can to help Taylor.
