Terminators are complex machines that are physically superior to humans. Indeed, the models T-1000 and T-X can do many things that far exceed human capabilities. Terminators can process information and make inferences at a speed impossible for humans. If we took speed and efficiency as the only criteria for intelligence, then Terminators would be more intelligent than humans. But even the most physically developed cybernetic organisms, the T-1000 and T-X, lack the basic elements of human mental life. Although the behavior of Terminators is complex, there are various reasons to think that these machines in general do not have a mental life like ours.
Although the T-101 in The Terminator shows no signs of emotion, we do get a look at its “inner life” in the scene where the machine answers a person knocking on the door of its hotel room. The camera shows us the world through the T-101’s eyes, highlighting how the machine chooses an appropriate linguistic reply from a list displayed in its heads-up “user interface” (which suspiciously resembles the user interface of computers made around 1984, not 2029). But this choosing is a rote mechanical procedure, suggesting that Kyle and Sarah are fighting against a mere machine. In fact, the nature of the procedures by which the T-101 makes its decisions raises a doubt as to whether we should grant it intelligence, even if it could pass Turing’s Test.
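To see how little such a procedure demands, consider a minimal sketch of reply selection by fixed rule. The candidate replies are roughly the options the film shows on screen; the selection mechanism itself is our own invented stand-in, not anything the movie specifies.

```python
# A purely illustrative sketch of rote reply selection. The candidate
# replies echo the T-101's on-screen list; the "rudeness" rule is an
# invented stand-in for whatever procedure the machine actually uses.
CANNED_REPLIES = [
    "YES / NO",
    "OR WHAT?",
    "GO AWAY",
    "PLEASE COME BACK LATER",
    "F*** YOU, A**HOLE",
]

def choose_reply(replies, rudeness):
    """Select a reply by position alone: later entries count as ruder.
    No string is ever interpreted; symbols are only stored and indexed."""
    return replies[min(rudeness, len(replies) - 1)]

# An insistent knock at the door calls for maximum rudeness.
print(choose_reply(CANNED_REPLIES, rudeness=4))
```

Nothing in this procedure requires the machine to grasp what any of these strings mean, which is exactly the worry.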
Contemporary philosopher John Searle would deny that the Terminator has a mental life. In his famous “Chinese room” example, Searle argues that a machine could simulate human linguistic behavior simply by manipulating symbols that are inherently meaningless to it.3 Searle’s point is that a computer has no understanding of what it is doing, and no comprehension of the significance of the words that it uses. Another contemporary philosopher, Ned Block, comes to the same conclusion from a different perspective: imagine a computer programmed with every possible answer to every possible question.4 Such a machine, call it “blockhead,” could give an appropriate answer on each and every occasion, without ever really understanding anything it says. What we know about the “user interface” of the T-101 suggests that it uses a strategy very much like this manipulation of symbols, a strategy that may very well be meaningless to the machine itself.
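Block’s machine amounts to nothing more than a giant lookup table, and a toy version is easy to sketch. The handful of question-and-answer pairs below is a hypothetical stand-in for Block’s astronomically large table, which would have to cover every possible conversational exchange.

```python
# A toy "blockhead": conversation as pure table lookup. These few
# entries stand in for a table covering every possible question.
LOOKUP_TABLE = {
    "are you human?": "Of course I am.",
    "do you feel pain?": "Yes, and I don't like it one bit.",
    "what is pain?": "An unpleasant sensation caused by injury.",
}

def blockhead(question):
    # The machine never interprets the strings: it only normalizes
    # them and retrieves whatever the table happens to pair with them.
    return LOOKUP_TABLE.get(question.strip().lower(), "Hmm.")

print(blockhead("Do you feel pain?"))  # -> "Yes, and I don't like it one bit."
```

By hypothesis the replies are always appropriate, yet nowhere in the machine is there anything that answers to understanding what pain is.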
In T2, clues about the nature of Terminators’ mental lives are revealed more explicitly. Young John Connor is interested in the nature of his T-101 protector, wondering just how humanlike the T-101 really is. The machine doesn’t give him much to go on: it doesn’t understand why it shouldn’t kill humans, or the difference between right and wrong actions. The T-101 explains to John that it does not fear and that it does not have feelings. Simple things like crying, smiling, and swearing, all essential aspects of human life, are completely incomprehensible to the T-101. If Wittgenstein is right, then the Terminator’s emotional limitations are a reason to think that it doesn’t have a mental life. Perhaps a Terminator like the T-101 could deceive a human for a while, but a perceptive human would soon detect that something was wrong. And the human would be right if machines like the T-101 are simply symbol-processing machines, which, according to Searle and Block, don’t understand anything.
But what about more sophisticated models, like the T-1000 or the T-X? Although the T-1000 can physically mimic humans by assuming their appearance, it’s unlikely that its inner life is any more similar to humans’ than the T-101’s. The T-1000’s conduct shows no signs of emotion, and while the machine can simulate other human behaviors, this does not mean that it understands what it is doing. On the other hand, both the T-1000 and the T-X do make, for example, aesthetic evaluations when they say things like “Say, that’s a nice bike” or “I like this car.” We have no reason, however, to believe that these evaluations are accompanied by any inner feelings or sensations. In fact, in these cases the behavior of the Terminators resembles that of deceptive humans who claim to feel emotions without really having them. The T-1000 and the T-X could just as well be the kind of complicated computers imagined by Block, containing all possible replies to all possible questions.
Still, there might be reason to believe that these highly developed models have some kind of understanding of the mental lives of humans. A scene from T2 in which the T-1000 tortures Sarah Connor suggests that the machine understands something about the nature of pain: while twisting a metal spike in Sarah’s shoulder, it comments, “I know this hurts.” But what kind of knowledge could the T-1000 have about pain, or about hurting, if the machine itself does not and cannot feel pain? Thinking along Wittgensteinian lines again, the T-1000 may have been programmed to know that tissue damage in humans is apt to cause an unpleasant sensation called pain. But we could still ask: could a machine that has never felt a sensation really understand what a sensation is? If a machine is never actually hurt, can it understand how others hurt? If the answers to these questions are no, then the T-1000’s statement is meaningless.
So it seems that whether the different Terminator models have mental lives or not is an open question. On the one hand, their behavior is very similar to that of humans, and this could, or perhaps even should, be a reason to think (in the spirit of Wittgenstein) that their inner lives must be similar to ours as well. On the other hand, we’ve seen that there are reasons to think that Terminators do not have mental states like ours at all. Yet I think it’s clear that the model T-101 differs from the other Terminators precisely because it does have a mental life. To see why, let’s turn to a difference in behavior between the T-101 and the other models.
John Connor: The T-101’s Everything
In the real world, the behavior of complex machines is guided by programming, and programmed machines are devoid of mental life. The T-101 seems to be an exception to the rule because it does show signs of mental life. And this is what ultimately explains why we are moved by the scene in which the T-101 is destroyed in the steel mill.
In T2, there’s a crucial scene in which John and Sarah open the head of the T-101 and set the machine to a “learning mode.” Before this switch, the T-101 had been set by Skynet to “read-only mode,” which prevents it from “thinking too much,” as the T-101 itself explains. When the machine is rebooted, it sees the world with “new eyes,” and the change is dramatic. By considering the behavior of the T-101 before and after the switch, we can see the impact of learning on the emergence of the machine’s mental life. When the Terminator is set to “read-only mode,” it cannot smile, make promises, or understand the basics of human mental life. When the T-101 is prevented from learning, it’s incapable of understanding the connections between smiling and joy or between crying and sadness. Simply put, in its initial mode the T-101 can’t gather certain kinds of new information about the world in order to deepen its understanding. While its knowledge increases through experience, it does not come to understand anything in a new way. Once the learning mode is set, the T-101 starts to grasp the connections between things and what those things signify. The first sign of this is the T-101’s ability to use language, in particular slang, that it had never used before, and to combine new expressions in a meaningful way.
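The contrast between the two modes can be pictured as a single switch that gates whether experience is allowed to update the machine’s stock of associations. Here is a minimal sketch along those lines; the two modes come from the film, while the class, its attributes, and the use of “no problemo” (one of the expressions John teaches the machine) are our own illustrative assumptions.

```python
# A minimal sketch of the "read-only" vs. "learning" switch from T2.
# The two modes are from the film; the implementation is invented.
class T101:
    def __init__(self, learning=False):
        self.learning = learning
        self.phrases = {"affirmative"}  # factory-installed vocabulary

    def observe(self, expression):
        """Experience changes the machine only when learning is enabled;
        in read-only mode it passes through without leaving a trace."""
        if self.learning:
            self.phrases.add(expression)

    def knows(self, expression):
        return expression in self.phrases

terminator = T101(learning=False)
terminator.observe("no problemo")        # discarded: read-only mode
print(terminator.knows("no problemo"))   # False

terminator.learning = True               # Sarah and John flip the switch
terminator.observe("no problemo")
print(terminator.knows("no problemo"))   # True
```

Whether flipping such a switch could ever amount to the emergence of a mental life is, of course, exactly what is at issue.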
Ada Byron King, Countess of Lovelace (1815-1852), worked with Charles Babbage to create an early mechanical computer. Considered one of the world’s first computer programmers, she claimed already in the nineteenth century that a machine cannot learn independently and so cannot express originality. But the T-101, precisely because it has acquired the capacity to learn independently of its programming, is capable of truly novel behavior. The T-101 would be able to pass the “Lovelace Test,” which is more challenging than the test later proposed by Turing: a machine passes the Lovelace Test if the machine’s designer cannot explain the novel output the machine generates. In the T-101’s case, Skynet probably could not explain or predict the behavior of its creation after its mode had been changed by Sarah and John. Since the T-101 actively works against Skynet’s ultimate goal of wiping out the human resistance, we could certainly call this novel, even creative, action.
In addition to new forms of language, we see a change in the machine’s ability to choose its behavior instead of simply responding mechanically. From the human perspective, the T-101 now fashions appropriate reactions to the situations it encounters. After the switch, the T-101 finds itself in the position of a child who is beginning to learn the basic aspects of human sociability. In the Terminator’s case, its teacher is John Connor, who explains to the machine what it needs to understand about human nature. Consequently, the T-101 chooses not to kill people because of its promise to John. This shows that the T-101 realizes that there are alternative modes of action. The Terminator acts in one way rather than another because it has a reason for acting in this precise way, and it is the T-101 itself that realizes this. The reason exists as a result of learning.
Human mental life is also a result of learning; a young child develops a mental life like ours only as a result of education. And there is no principled reason why a machine that is capable of learning could not develop a mental life as a result. Wittgenstein reminds us that if the behavior of machines is identical to human behavior in every relevant respect, then we have little reason to believe that the machine has no mental life. If we make such a judgment, we’re simply being inconsistent. Exactly this kind of judgment is made about the philosopher’s favorite creation, the “philosophical zombie.”
The “monstrous” idea of the philosophical zombie revolves around the question: could there be a creature that behaves just like we do but lacks any inner life? David Chalmers, among others, has argued that zombies like this are perfectly possible.5 But notice that from the Wittgensteinian perspective, the idea of such a zombie is nonsense, because once behavior is taken as the sole criterion of mental life, the possibility of separating mental life from behavior is eliminated.6
If we dismiss the zombie objection, the T-101’s ability to learn from humans, instead of routinely following the program set by Skynet, makes it “one of us.” So if the T-101 is “one of us,” does it have rights, as we think we have? Yes: through understanding the value of life and acting upon it, he has earned the right to exist. Ultimately, destroying this feeling machine for the sake of humanity may have been the best solution for the greatest number of people, but it was also a grave violation of its rights.7 No wonder we feel sorry for the T-101 when it is lowered, little by little, into the molten steel, never to reemerge.
NOTES
1. Ludwig Wittgenstein, Philosophical Investigations (Malden, MA: Blackwell, 1963), 281.
2. For more on Turing’s famous test, see Justin Leiber’s chapter in this volume, “Time for the Terminator: Philosophical Themes of the Resistance.”
3. John Searle, “Minds, Brains and Programs,” Behavioral and Brain Sciences 3 (1980): 417-457.
4. Ned Block, “Psychologism and Behaviorism,” Philosophical Review 90 (1981): 5-43.
5. David Chalmers, The Conscious Mind: In Search of a Fundamental Theory (New York: Oxford Univ. Press, 1996).
6. It has to be noted, though, that Wittgenstein was mainly interested in the question of what we can learn from a philosophical study of our actual language. The question about the possibility of zombies would have been completely alien to his thinking.
7. For another perspective on the morality of the T-101’s sacrifice, see “Self-Termination: Suicide, Self-Sacrifice, and the Terminator” by Daniel P. Malloy in this volume.