Just After 6:18 PM on Judgment Day, a Phone Rings. Should You Answer It?
So, let’s reject the idea that only John Connor can save the world. This isn’t, of course, the same thing as saying that John should not save the world. It pretty much leaves things up in the air. Since other possibilities remain open, John shouldn’t base his chairotic decision on the idea that the responsibility for the future is his alone. Sure, we must judge things by what our lights show us, limited though they may be; and what we see keeps steering the future back toward John. But we must also understand the limits of what our lights can show us and leave room for the possibility that what’s actually coming our way may be different from what we expect (that’s just what makes sequels interesting). Figuring out that John’s not justified in thinking he’s the only one who can save the world, therefore, doesn’t provide a way of answering his chairotic question. But perhaps there are other ways to approach the issue.
Let’s first consider the possibility that John might actually have good reason to say no—even if by doing so he would be embracing the end of humanity. I know this option sounds bizarre, but look at it this way. In the universe of the Terminator films, human beings are responsible for creating devices that have destroyed most of the world, inflicted immeasurable suffering, and seriously compromised the biomes of most of the other living things on the planet, perhaps the only planet sustaining life. Is there to be no accountability for this? When you knock over a glass of milk, it’s reasonable to excuse the spill as an accident. Bad things happen. But when, after centuries of increasingly destructive conflict, and after decades of warning from thoughtful minds across the world, human beings still undertake actions that kill millions and ecologically devastate most of the world, it’s difficult to make excuses.
Along these lines, in what is perhaps the most absurd moment of the three films so far, the camera finds John Connor in T3 standing atop a pile of debris, rallying people to organize and fight back against the machines, perched next to a tattered, but still waving, U.S. flag.[12] In the Terminators’ eyes’ red glare, with plasma bombs bursting in air, that star-spangled banner does yet wave.[13] This flash forward would have viewers believe that after the U.S. government has made decisions resulting in the end of civilization and the deaths of billions, people would look upon the national flag with anything other than the most profound contempt. From where I sit, though, it would have made more sense to emblazon the U.S. flag on the chest of every Terminator.[14] If there’s one thing that should not symbolize the good guys of the future’s Machine Wars, it’s the flag of the government that produced the holocaust of Judgment Day in the first place.[15] I’m sure the survivors in other parts of the world would agree. But in any case, the presentation of the U.S. flag in this circumstance bears on John’s chairotic choice by raising the question of whether there is a moral imperative to become the savior of foolish miscreants so robotically obedient in their patriotism. It’s hard to see one.
Still, even if it’s true that in some sense humans ought to be held responsible for Judgment Day, it doesn’t necessarily follow that humanity ought to be entirely obliterated. The work of many philosophers, in fact, has called into question the very idea of “collective responsibility,” as well as collective punishment. By “collective responsibility,” I mean the idea that a whole group may be held responsible for the conduct of some fraction of the group, perhaps even for the conduct of a single individual. Should all Germans have been held responsible for the Holocaust? Should all men be held responsible for rape?
Perpetrators defended the 1994 genocide in Rwanda and the 1915 genocide in Armenia on the basis of collective responsibility. And of course, for centuries Jews were held collectively responsible for the death of Jesus. For many, attributions of responsibility like this have seemed unjust. While there are philosophers who have argued in favor of collective responsibility,[16] punishing collectives has been prohibited as a war crime by the Geneva Conventions and condemned by many other philosophers because it can lead, especially in political contexts, to intolerable suffering.[17] More important for our case, civilians aren’t generally supposed to be responsible, even in democratic societies, for the misconduct of their military leaders. In light of all this, then, it seems there’s good reason not to hold U.S. civilians responsible for the U.S. government having unleashed Skynet. And, of course, humanity as a whole should also be off the hook.
John, therefore, shouldn’t say no to the call because humans deserve to perish. But maybe he should say no to save his own skin. After all, the T-101 in T3 tells John that in the future it has succeeded in killing him. It may, of course, be just as likely that if John doesn’t save the world, he and Kate will die anyway at the hands of the machines. We can’t know. But given the possibility that someone else might successfully lead the resistance, and given the reliability of the Terminator’s information so far, refusing the call in the interests of self-preservation does not seem utterly irrational for John.
On the other hand, just as there may be a chance of defeating the machines with another leader, there may be a chance of surviving the Terminator’s assassination attempt, too, especially now that John is aware of it (and even the year when it will occur). The question of self-preservation, then, becomes whether there’s more of a chance of surviving as the leader or as a nonleader. So far as I can tell, it’s impossible for John to calculate the probabilities either way. Strictly speaking, it’s impossible to know whether survival is possible under any circumstances. We’re going to have to look elsewhere, then, to tip the scales in favor of answering the call to leadership in the affirmative.
Social Contracts, Divine Commands, and Utility
First, let’s take a look at social contract theory. Articulated in early modern times by philosophers such as Thomas Hobbes (1588-1679), John Locke (1632-1704), and Jean-Jacques Rousseau (1712-1778), social contract theories root our obligations to other human beings in our voluntary participation in, or consent to, something like a binding contract.[18] Hobbes, Locke, and Rousseau argued that a social contract is justifiable because people would find the conditions of life without it—the “state of nature,” as they called it—unacceptable. The state-of-nature-like conditions following Judgment Day similarly put people’s liberty, possessions, and physical well-being in serious peril. And similarly, in order to transcend this unacceptable condition, post-Judgment Day humanity organizes a new social order with John as its leader. John Connor, then, as a party to the social contract who has in some sense contracted to respect the claims of others, would, like any other member of society, be duty-bound to answer the call to leadership affirmatively.
But we shouldn’t be persuaded by this strategy. It seems to beg the very question at issue here. Scottish philosopher David Hume (1711-1776) criticized social contract theory by pointing out that contracts cannot ground society because the institution of making contracts is possible only after society has been established. In the case of John Connor’s chairotic moment in Crystal Peak, appealing to the social contract begs the question since, in the wake of Judgment Day, the question has become whether the social contract even exists; and if not, whether it would be reasonable or desirable to enter into a new contract.
Arguably, when General Robert Brewster presses the “execute” button that unleashes Skynet, he has effectively, even if unintentionally, nullified the social contract and returned people to the state of nature—perhaps to something even worse than the state of nature. Hobbes tells us that the social contract is nullified when the state tries to kill you. And Locke regards the contract as canceled when the state can no longer secure people’s natural rights or when it directly violates those rights (rights to life, liberty, and property). When Brewster presses that button, he turns the weapons of the state against the people.
Attacking the people, or at the minimum rendering the people and their rights unprotected, terminates the contract; but if Brewster has terminated the contract, then the contract’s obligations no longer bind John. The call that comes to John under Crystal Peak, therefore, can be read as an invitation to form a new contract. So the chairotic question can now be put this way: should John enter a new contract? If you were an alien landing on Earth for the first time during the Machine War, would you think it desirable to enter a binding social contract with the kinds of beings that produced and armed Skynet? Or would you straightaway jump back into your spaceship and flee the solar system as quickly as possible, leaving a “QUARANTINED” marker in orbit around the Earth as you go?[19] We, of course, aren’t extraterrestrial aliens; and, as we’ll see, I think that does matter. But in any case, if we are to find reasons for joining a new social contract, like the alien we’ll have to look beyond the contract itself.
As an alternative to grounding obligations in contracts, some philosophers look to commands from a deity, or “divine command.” The Terminator films are pretty thin on direct references to religious belief (this despite John Connor’s initials and the titles of T2: Judgment Day and T4: Salvation).[20] And there may be good philosophical reasons for this. As far as motivating people goes, religion is effective. So, arguably, it would have made more sense for John Connor to rally survivors around the cross, the crescent, or the Star of David than around a national flag—especially since by that time nation-states will no longer exist. But there are two countervailing reasons that, I think, militate against basing John’s chairotic decision on commands from the divine.
First, although it’s true that appeals to the divine are for many people good motivators, it’s difficult to understand and respect the commands of a divine being that would allow the machines to inflict so much damage in the first place. Perhaps more important, belief in the authority of divine commands requires certain commitments to the existence and nature of a deity (or deities) for which there seems little solid empirical evidence, and which many reasonable philosophers find specious. So, appealing to the divine doesn’t seem to be a path to understanding John’s obligations.
Perhaps, then, we should look instead to utilitarianism, the school of thought that holds that we ought to do what maximizes happiness and minimizes unhappiness, as a basis for John’s chairotic decision.[21] Given the information about the future provided by Reese and the Terminators, it would seem that the best available option for minimizing the world’s suffering and achieving happiness is John Connor’s leadership—that he is, as the T-101 says, the “last best hope” of humanity. So even if it’s true, as we noticed before, that victory might be possible without John’s leadership, it’s reasonable to go for the best available option to secure victory—or at least, so a utilitarian would reason.
This line of reasoning, however, assumes that on balance there will be more happiness than suffering among humans should they survive and win the Machine Wars. It’s a hopeful line of thought, but it’s also one that hasn’t taken a serious look at Judgment Day. Isn’t the lesson of Judgment Day that humanity ultimately brings more suffering and destruction to the world than happiness and flourishing? Isn’t the proper judgment for humanity, in light of Judgment Day, “guilty”?
Utilitarianism, then, doesn’t look like a promising strategy for justifying an affirmative answer to John’s chairotic question. And things look even worse from a utilitarian point of view if we take into consideration the happiness of other animals or other self-aware beings on Earth.[22] Once again the destructive proclivities of human beings as they’re described in the Terminator films lead us into troubled waters. Difficult as it may be, the narrative of the Terminator films compels us to face in a clear-eyed way the question: would the world be a better place without humans? Assuming that it’s meaningful to speak of the happiness of nonhuman animals and to give their interests moral consideration, perhaps putting an end to humanity would actually establish conditions for greater happiness—especially if the machines could be eliminated as well. After all, nonhuman animals outnumber us. And consider how much weaker the utilitarian argument for saving humanity would become if the machines not only became self-aware but also became capable of happiness—something that T2 suggests is possible when the Terminator acknowledges at the end of the film that he now knows why humans cry. Utilitarianism, then, at best offers an ambiguous basis for John’s choice.