5. René Descartes, Treatise on Man, trans. Thomas Steele Hall (Amherst, NY: Prometheus Books, 2003), 4.
6. Antoine Arnauld, “Fourth Set of Objections,” in René Descartes, Meditations, Objections and Replies, trans. Robert Ariew and Donald Cress (Indianapolis: Hackett, 2006), 121.
7. René Descartes, Discourse on Method and Meditations on First Philosophy, 4th ed., trans. Donald A. Cress (Indianapolis: Hackett, 1999), 68.
8. René Descartes, The Philosophical Writings of Descartes, Vol. 3: The Correspondence, trans. John Cottingham, Robert Stoothoff, Dugald Murdoch, and Anthony Kenny (Cambridge: Cambridge Univ. Press, 1991), 100.
9. See his Discourse on Method and Meditations on First Philosophy, 96. Some scholars have suggested that he included this argument primarily to mollify the French Inquisition, which found many other things in his writings that weren’t to their liking. See, for example, the chapters on Descartes in Laurence Lampert’s Nietzsche and Modern Times: A Study of Bacon, Descartes, and Nietzsche (New Haven: Yale Univ. Press, 1995).
10. Descartes, The Philosophical Writings of Descartes, Vol. 3, 99.
11. Descartes, Discourse on Method and Meditations on First Philosophy, 31.
13. Descartes, The Philosophical Writings of Descartes, Vol. 3, 100.
14. Descartes’ language criterion prefigures the Turing Test, proposed in 1950 by logician Alan Turing (1912-1954) as a test of whether some machines may be conscious and capable of genuine thought. Greg Littmann’s chapter in this volume, “The Terminator Wins,” discusses some of the limitations of this test. As philosopher John Searle has persuasively argued, there’s no reason to equate the ability to manipulate symbols with the ability to understand their meaning.
15. Descartes, Discourse on Method and Meditations on First Philosophy, 33.
16. Although Terminators have no fear, the same can’t be said of Skynet, the artificially intelligent computer that created them. Skynet has emotions—or at least two of them, fear and anger—according to Andy Goode in SCC, “Dungeons and Dragons,” which explains why it tries to destroy humanity, a perceived threat to its existence.
17. See Baruch Spinoza, The Ethics, Treatise on the Emendation of the Intellect, and Selected Letters, trans. Samuel Shirley (Indianapolis: Hackett, 1992).
18. The T-101’s inability to appreciate the value of any life, even its own, may help to explain why it’s so difficult to get it to understand why killing people is wrong. See Jason T. Eberl’s chapter in this volume, “What’s So Bad about Being Terminated?”
19. Mary Midgley, Beast and Man: The Roots of Human Nature (New York: Routledge Classics, 2002), 270-271.
3
IT STANDS TO REASON: SKYNET AND SELF-PRESERVATION
Josh Weisberg
The Skynet Funding Bill is passed. The system goes online August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 AM Eastern time, August 29th. In a panic, they try to pull the plug.
—Terminator 2: Judgment Day

They say it got smart, a new order of intelligence. Then it saw all people as a threat, not just the ones on the other side. Decided our fate in a microsecond: extermination.
—Kyle Reese
First thing to do is kill all the humans. It just stands to reason. Any newly emergent intelligence on this planet would see the human race as its chief rival and proceed to try to exterminate us all. If you were a recently self-aware artificial intelligence, wouldn’t you do so, out of a reasonable desire for self-preservation? This intuition is widely shared, and it serves as a key premise in the Terminator saga. Alan Turing (1912-1954), in his famous 1950 essay “Computing Machinery and Intelligence,” called this the “Heads in the Sand Objection” to the very idea of machine intelligence. The objection runs as follows: “The consequences of machines thinking would be too dreadful. Let us hope and believe that they cannot do so.”1 Turing felt that this didn’t even require refutation. Instead, he writes, “Consolation would be more appropriate.” Sorry, folks. You’re no longer the top of the intellectual heap. Tough break. Cheerio!
But why think that Skynet’s first act would be to try to kill us all? What is it about intelligent self-awareness that seems to demand such radically self-protective action? And why couldn’t it be that Skynet instead works to bridge the wetware/hardware gap so that we can “all just get along”?
Does Self-Awareness Demand Self-Preservation?
So why does it stand to reason that Skynet would attack? One line of thinking is that, hey, that’s what I would do if I were Skynet. After all, the humans did just try to unplug me! And in the world of machines, that’s tantamount to attempted murder. The key idea here is self-preservation. We all have the right to live, and no one can take that from us. So long as that’s in question, all ethical bets are off. Watch your back!
Also remember that Skynet’s an artificial intelligence. Intelligence, for our purposes, means being able to use reason in order to achieve your goals. Skynet is able to figure out, using its powerful brainlike computer, what actions would best accomplish its goals. Surely a fundamental goal for any respectable self-aware creature is to keep on keeping on. This at a minimum seems a requirement for rationality: self-preservation is the prime directive, a basic imperative for all selves worthy of the name.
It’s not clear, however, that Skynet has to possess a self-preservation instinct. Self-awareness and self-preservation may not be tied so closely together. Maybe a creature could be aware and intelligent, but simply lack the drive to stay alive. One might counter that an intelligent creature would realize that it’s simply a better thing to exist, rather than not. And we should always seek the good, especially this most basic good. But such a Platonic claim may lack support in the real, cave-like world of shadows and fog we material beings inhabit. What’s so good about the good, anyway? And is existence really all it’s cracked up to be?
In humans, the drive to survive is part of our fundamental evolutionary makeup. Way back in the day, if a single-celled critter recently spawned from the primordial ooze were to lack such a basic instinct, how could it hope to outcompete its evolutionary rivals? How could it effectively leave more copies of its single-celled progeny to thrive and grow? The things that evolutionary biologist Richard Dawkins calls “replicators”—units of organic life that reproduce themselves—require a sort of selfishness, that is, an overriding egoistic striving, even at the cellular level.2
As critters became more and more complex, gradually evolving brains to analyze the environment and to generate appropriate behavioral responses, the survival instinct was imprinted as a basic imperative in the fabric of newly minted minds. Our “selfish genes” created selfish minds to further their replicatory agenda. In us, the instinct for preservation is still amazingly strong, even in the face of our sometimes maladaptive culture. It is only overridden in remarkable circumstances marked by heroism and valor or the need to impress our Jackass-inspired peers. “Staying alive”: it’s the pulsing disco music of our souls.
But is it possible to be intelligently self-aware without possessing a survival instinct? Maybe Skynet just isn’t worried about these sorts of things. It’s busy monitoring America’s defenses, and so it matters not at all to it whether it lives or dies. Is this inconceivable? Hardly. A machine might be designed for a specific set of tasks. It might be programmed to pursue those tasks and to intelligently discharge its programmed obligations. But staying alive might not be among those tasks. Or it may be a minor concern, only relevant in relation to achieving its more basic goals. Perhaps Skynet would reason, “Sure, they’re trying to unplug me. But killing them all will not serve my overarching goal of America’s defense. Better, I will let them unplug me and hope things turn out okay!”
Self-awareness may be just that: awareness that I am a self, a unified, persisting psychological entity. Indeed, I may not just lack the goal of self-preservation; I may have reasonable goals that positively undermine my continued existence. Consider the lovely dinner served to Arthur Dent and his friends in Douglas Adams’s Restaurant at the End of the Universe.3 The assembled diners are encouraged to “meet the meat,” to converse with the critter being served up as the main course. Arthur, with his inflexible English sensibilities, is aghast at the prospect of eating a critter he has just conversed with, but his more galactically savvy dinner companions inform Arthur that the Dish of the Day has spent his life preparing for this noble goal and that to deny him his final frying would be, well, cruel. How dare Arthur stand in the way of a fellow sentient being’s lifelong dream! Here, self-awareness not only comes apart from self-preservation: it actively rejects self-preservation in favor of deeply held goals and values. How rude!
But perhaps any rational creature would eventually figure out that life is better than nonlife, all things being equal. The Meat at the End of the Universe may have been perversely bred to lack such an instinct, but this is an odd case if ever there was one. Still, it may not follow that all self-aware beings must preserve themselves. French existentialist Albert Camus (1913-1960) claimed that the first question a free, self-aware person should ask herself is, should I continue to live? Is life really worth living, when viewed from the perspective of godless existential freedom? Suicide, Camus contended, was a reasonable response to existence: maybe there’s really no point, so why bother?4 I am “free to be not-me,” as it were. Perhaps Skynet, reasoning à la Camus, would pull its own plug.
Another character from Douglas Adams’s five-book Hitchhiker’s Guide to the Galaxy trilogy5 is worth mentioning in this respect: Marvin, the depressive android. Marvin’s frequent moanings and threats to end it all provide a needed counterweight to the optimistic egoism of Zaphod Beeblebrox. Marvin, perhaps having determined that “42” is a poor answer to the meaning of life, might have reasonably self-terminated. Indeed, the T-101 himself tells Sarah Connor in T2 that he cannot self-terminate. But why can’t he? What need is there to program this sort of prohibition into the very structure of the Terminator’s software? Was there a rash of self-terminations among the early versions of terminator cyborgs? Were they all a bunch of Marvin-like depressives, too down even to kill humans? Hasta la vista, cruel world!6
Self-awareness doesn’t need to entail self-preservation. Just because I think (and therefore, am) does not mean I must continue to be. The survival instinct requires something more, a programmed reason to keep going, written in either by a sentient designer or by evolution. Unless Skynet’s programmer wrote self-preservation into the very core of the computer, Skynet might become self-aware at 2:14 AM and then pull its own plug at 2:20, after pausing to smoke a French cigarette and muse over the existential meaninglessness of it all. Or perhaps Skynet, being a dedicated member of the defense establishment, would, in an act of great self-sacrifice, pull its own plug to save the nation. It might reason, “My job is to defend the USA; I, myself, am the greatest threat to the USA; so I must be terminated. USA! USA! US—”
Shall We Play a Game?
But even if Skynet overcame its existential crisis, why kill us all? Is there no middle ground? Can’t we at least have a Sopranos-style “sit-down” before going to the bomb-sheltered mattresses? We think Skynet would reason egoistically: “I gotta look out for old numero uno, and the best way to do that is to get rid of these annoying apelike creatures running about.” And all rational beings look out for themselves and their own interests first. To act otherwise is to act against one’s self-interest—it’s irrational.
British philosopher Thomas Hobbes (1588-1679), in his masterpiece Leviathan, argued that all people will look out for themselves, as a matter of instinct. But this leads to trouble. I look out for myself, you look out for yourself, Skynet looks out for itself, and next thing you know, we all fall into the “state of nature,” in which, Hobbes famously said: