Terminator and Philosophy: I'll Be Back, Therefore I Am

Richard Brown, William Irwin, and Kevin S. Decker

If Terminator machines and Skynet are indeed persons, then for utilitarians, their interests must be taken into consideration, too. From this perspective, it was inconsiderate, to say the least, for humans to attempt to destroy Skynet by “pulling the plug” when it became self-aware. So if we must take the interests of these intelligent machines into consideration, we next have to ask, what kinds of interests do they have? Clearly, not all interests are created equal. Humans, Skynet, and the Terminators all have an interest in self-survival, or in the case of the Terminators, at least species survival. Perhaps we might think that as humans, we have more complex interests than the machines do. We can be interested in beauty, art, philosophy, television shows, and movies. It is precisely these complex interests that differentiate people from animals, and they explain why, under normal circumstances, humans are more valuable than animals. Thus, when Judgment Day occurs, the more pressing consequence is the loss of human life, not the loss of animal life that I mentioned earlier.
 
Can Terminators have complex interests? Terminators are primarily characterized by their single-minded interest in achieving their objectives, but they also exhibit curiosity and interest in novel experiences. The Terminator sent back to protect John expresses complex interests in its curiosity about the nature of humanity, in its examination of a small child at arm’s length, and in its ability to pick up slang quickly. It’s quite possible that Terminators, and even Skynet, have complex interests just as we do.
 
Even if the machines didn’t have interests as complex as those of humans, there may still be a case for choosing to maximize their satisfaction over that of the people who died on Judgment Day. The machines, each sharing the same basic interests, may simply come to outnumber humanity by so much that their aggregate interests outweigh ours.
 
Consider a problem that arises with utilitarianism: satisfying the most interests can be achieved in numerous ways. For example, in T2 John steals money from a bank via an ATM. It’s not clear whether John is stealing money from a specific bank account or somehow hacking into the bank in general. Let’s imagine that it is the latter. If John steals this money, he has a great time at the mall, and the bank and its insurers are slightly injured. The maximization of interests in this case may actually result from John’s stealing the money: one person benefits greatly, and many people lose out only slightly. This example also illustrates that for a utilitarian, nothing can be called universally wrong. The consequences of the act determine the rightness or wrongness of the act, not the act itself. But what if John didn’t steal just a few hundred dollars? What if he stole a few thousand dollars, or tens of thousands of dollars? Even if John stole one dollar from each of tens of thousands of banks, it would eventually result in a loss of utility. The individual dollars eventually add up to a substantial sum. At some point, adding another dollar to the transaction would decrease the collective interests of the bankers and insurers to the point where John’s spending spree at the mall would be wrong. Analogously, if Skynet were to create more and more Terminators, and if the machines greatly outnumber the humans after Judgment Day, eventually the aggregate interests of survival and any other interests of the Terminators and Skynet would simply outweigh the interests of the surviving humans, just as John’s dollar thefts eventually outweigh his fun at the mall.
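
To make the aggregation concrete, here is a minimal sketch in Python. It is my illustration, not the chapter’s: the utility figures and the diminishing-returns assumption are invented purely to show how one large gain can be overtaken by many small losses.

    # Illustrative only: hypothetical utility numbers, not a real moral calculus.
    # John's benefit grows with diminishing returns; the banks' and insurers'
    # losses are tiny per dollar but add up linearly.
    import math

    def johns_benefit(dollars_stolen: int) -> float:
        """John's satisfaction grows logarithmically: the first dollars at
        the mall matter far more to him than the ten-thousandth."""
        return 100 * math.log(1 + dollars_stolen)

    def banks_harm(dollars_stolen: int, harm_per_dollar: float = 0.05) -> float:
        """Each stolen dollar costs the banks a small, constant slice of
        interest satisfaction."""
        return harm_per_dollar * dollars_stolen

    # Find the first theft size at which the act stops maximizing interests.
    for dollars in range(1, 50_001):
        if banks_harm(dollars) > johns_benefit(dollars):
            print(f"Net utility turns negative at ${dollars:,}")
            break

On these made-up numbers the tipping point arrives just under $20,000; the philosophical point is only that, once small harms aggregate, such a tipping point must exist.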
 
Judgment Day Is the Morally Preferable Event
 
So just looking at the potential consequences from the utilitarian’s viewpoint, it may be true that Judgment Day is preferable to stopping Dyson, since it actually maximizes interest satisfaction in the long term. Maybe this shouldn’t surprise us: people often endure pain, hardships, and heavy burdens for a long-term payoff, and it’s even more common for people to inconvenience themselves in order to “do the right thing.” In this case, the stakes are just greater: all of humanity may have to shoulder the burden of a nuclear holocaust in order to do what is morally required of us. Maybe Sarah should simply walk away from the Dyson residence and celebrate her morally superior decision by sharing a beer with John’s Terminator guardian.
 
Or maybe not. Many utilitarians in the past have taken great pains to argue against some of the more unsavory conclusions that critics of utilitarianism have drawn. For example, if Miles Dyson has rights to life and to liberty, then he can claim the right to be free from being attacked without provocation, and society should defend him if he is attacked. A “naive” utilitarian, someone who straightforwardly analyzes each individual act’s consequences and bases his or her moral decisions on that analysis, might argue that nobody has any rights, since rights are guarantees and utilitarianism can guarantee nothing: in some cases, violating a person’s rights could maximize interests. Yet John Stuart Mill tells us, “To have a right, then, is, I conceive, to have something which society ought to defend me in the possession of. If the objector goes on to ask, why it ought? I can give him no other reason than general utility.”6
Mill, adopting a more sophisticated utilitarian view, would argue to the contrary that protecting people’s rights satisfies their deepest interests, so even in particular cases where it may maximize the satisfaction of interests to take another’s life in cold blood, we ought not to do so, because allowing such acts would cause a loss of interest satisfaction overall. He writes, “The interest involved is that of security, to every one’s feelings the most vital of all interests. All other earthly benefits are needed by one person, not needed by another; and many of them can, if necessary, be cheerfully foregone, or replaced by something else; but security no human being can possibly do without.”7
If our rights were not enforced, we could never truly feel secure from tyrannical governments, or even from one another. Like the future resistance soldiers, who have good reason to fear other people (since the others may be Terminators), we too would have good reason to fear other people, since they may simply be stronger than we are.
 
But it’s not over yet. The naive utilitarian might come back to point out that the very principle Mill is using here is the principle of maximizing satisfaction of interests. Really, the naive utilitarian and Mill are applying the same principle, just at different levels: Mill, for example, uses the principle of maximizing satisfaction to justify deviations from general rules forbidding the killing of other people, as in cases of self-defense. Despite our rights to life and liberty, for him there would be circumstances in which it would be perfectly permissible to kill someone in self-defense. So why can’t we make the exception in the Dyson case?
 
Clearly, there’s something about the case that makes it difficult for utilitarians to agree with one another about whether killing Dyson is the morally correct thing to do. In fact, what makes this case compelling is that Sarah knows that Dyson plays a very important role in bringing about Judgment Day. Unlike the rest of us, who lack precognition, Sarah has a relatively accurate idea of what the future will be. By contrast, Mill’s argument for protecting people’s rights works in normal, everyday scenarios precisely because we don’t have information about what will surely happen: generally, we are bad judges of future events. Ironically, in cases where we know for sure what the consequences will be, consequentialism isn’t much help.
 
This idea that we are poor judges of future events is key to moral decision-making. It’s worth pointing out, for example, that while 3 billion people died on Judgment Day, approximately 2.8 billion people survived.8 Future history records that John Connor will lead them to a possible victory over Skynet. In fact, Kyle Reese tells Sarah in the first film, “The defense grid was smashed. We’d taken the mainframes. We’d won.” The last phrase is ambiguous: does it refer to simply winning a major battle, or could it mean that the resistance had won the war? If this latter interpretation is correct, then the “rise of the machines” would be only a short one. If we take this into consideration when calculating the possible satisfaction of interests hanging on future events, it may be preferable that the machines had never existed, and that the human race did not have to go through a harsh trial and the rebuilding of its society.9
But Sarah’s decision in T2 to spare Dyson flies in the face of this, despite Reese’s words. The utilitarian might point out that if her decision had been based on the belief that the total satisfaction of machine interests would outweigh the total satisfaction of human interests served by a victory over the machines, she would have further obligations. First, she would have to ensure that the machines are victorious over the human resistance.10 And to do that, she would have to terminate her son, John Connor.
 
So should Sarah kill John? From a Kantian perspective, the answer is clearly no. John hasn’t done anything to warrant that, and even if we take his future actions into consideration, he ultimately doesn’t do anything wrong. The only Kantian condition for legitimate violence against another—retribution—hasn’t been satisfied in this case. A utilitarian answer is more difficult to give, because we must weigh the potential benefits of a society run entirely by sentient machines (that may or may not enslave surviving human beings) against the rebuilding of civilization after a bloody and possibly lengthy war. Despite Reese’s talk about the future, Sarah simply does not have the information needed to make an informed decision between the two choices. This illustrates the problem I raised earlier: making accurate predictions of the future is inherently difficult. Before, Sarah had relatively good foreknowledge on which to base her moral analysis, but between these two choices, she is in the dark. Throw in the further complication that Reese and the Terminator guardian have changed the past even before Sarah decides to try to kill Dyson, and we have ourselves a very sketchy view of the future. Her guess about which future is morally preferable is as good as yours or mine.
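
Sarah’s predicament can be restated in simple decision-theoretic terms. The sketch below is my own illustration, not the chapter’s: every utility and probability is pure stipulation, and the point is only that the verdict flips when the probabilities Sarah cannot know are nudged.

    # Illustrative only: every number here is a hypothetical stipulation.
    # Option A: ensure a machine victory (kill John).
    # Option B: preserve the human resistance (spare John).

    def expected_utility(outcomes):
        """Sum of probability-weighted utilities over possible outcomes."""
        return sum(p * u for p, u in outcomes)

    machine_victory = [(0.5, 40),    # machines flourish, humans tolerated
                       (0.5, -80)]   # machines flourish, humans enslaved
    human_victory   = [(0.75, 60),   # war won, civilization rebuilt
                       (0.25, -100)] # war lost after all

    print(expected_utility(machine_victory))  # -20.0
    print(expected_utility(human_victory))    # 20.0 -> sparing John wins

    # Nudge one probability Sarah cannot possibly estimate, and it flips:
    human_victory = [(0.25, 60), (0.75, -100)]
    print(expected_utility(human_victory))    # -60.0 -> killing John "wins"

Without reliable probabilities, the calculation is underdetermined, which is exactly Sarah’s position.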
 
Are We Learning Yet?
 
This doesn’t mean that we should just throw our hands into the air and give up without attempting to use good judgment. After all, Sarah still must make a decision. Here is where Mill’s thoughts about rights and security can help guide our choice: when we can’t accurately predict the future, we should rely on what would typically maximize interest satisfaction under normal circumstances. Clearly, under normal circumstances, killing innocent people, especially your own son, doesn’t maximize people’s interests. Only under very odd circumstances would it do so.
 
Happily, it turns out that Judgment Day isn’t the morally preferable outcome after all. So how did we get off on the wrong track? It might be because of the easily overlooked line of Reese’s dialogue in the original Terminator movie: “We’d won.” That line forces us to confront the fact that we lack a great deal of future knowledge. In fact, it is the lack of precognitive powers that makes utilitarianism a difficult doctrine to implement practically. Even minor facts that are overlooked can have huge implications when projected out over the years and over the choices and actions of billions of people. This is not to say that we shouldn’t try, but in ethics, knowing the weaknesses of your theory is just as important as knowing the strengths.
 
Bertrand Russell may have been writing in a tongue-in-cheek fashion when he said philosophy begins with the obvious and ends with the absurd, but philosophy is often preoccupied with such arguments, as we’ve seen in this chapter. Sometimes the absurd is well justified, and sometimes, as in this case, it’s not. Strong reasoning makes good theory in philosophy, and even when we fail to reach the destination, there is plenty to learn along the way.11
 
NOTES
 
1. Bertrand Russell, “The Philosophy of Logical Atomism,” in The Philosophy of Logical Atomism and Other Essays, 1914–19, The Collected Papers of Bertrand Russell, vol. 8, ed. J. Slater (London: Allen & Unwin), 172.
 
2. To be precise, in Terminator 3: Rise of the Machines, Skynet is a computer program that becomes self-aware and takes control of military computers after unleashing a computer virus.
 
3. A more exact wording of the categorical imperative can be found in Immanuel Kant’s Groundwork of the Metaphysics of Morals, ed. and trans. Mary Gregor (Cambridge: Cambridge Univ. Press, 1997): (1) “Act only in accordance with that maxim through which you can at the same time will that it become a universal law.” (2) “Every rational being exists as an end in itself, not merely as a means to be used by this or that will at its discretion; instead he must in all his actions, whether directed to himself or also to other rational beings, always be regarded at the same time as an end.”
 
4. There are many kinds of utilitarianism, differing by whether they charge us to maximize happiness, pleasure, or some other good. It’s a good idea, though, to focus on the satisfaction of interests, because there may be many times when something will make us happy that is not in our interest. For example, hopping on a plane to Hawaii tonight might make me happy, but it would not be in my interest to do so. I might lose my job, and I have a great interest in keeping my job, even if it doesn’t make me as happy as a Hawaiian vacation would. I might also have to do things to satisfy my interests that would cause me some pain, like exercising to satisfy my interest in staying healthy.
