After briefing me a bit on the logistics of the competition, he gave me the advice that confederates past had told me to expect: “There’s not much more you need to know, really. You are human, so just be yourself.”
“Just be yourself”—this has been, in effect, the confederate motto since the first Loebner Prize in 1991, but it seems to me like a somewhat naive overconfidence in human instincts—or, at worst, like fixing the fight. The AI programs we go up against are often the result of decades of work—then again, so are we. But the AI research teams have huge databases of test runs of their programs, and they’ve done statistical analysis on these archives: they know how to deftly guide the conversation away from their shortcomings and toward their strengths, which conversational routes lead to deep exchange and which ones fizzle—the average confederate off the street’s instincts aren’t likely to be so good. This is a strange and deeply interesting point, of which the perennial demand in our society for conversation, public speaking, and dating coaches is ample proof. The transcripts from the 2008 contest show the judges being downright apologetic to the human confederates that they can’t make better conversation—“i feel sorry for the [confederates], i reckon they must be getting a bit bored talking about the weather,” one says, and another offers, meekly, “sorry for being so banal”—meanwhile, the computer in the other window is apparently charming the pants off the judge, who in no time at all is gushing lol’s and :P’s. We can do better.
So, I must say, my intention from the start was to be as thoroughly disobedient to the organizers’ advice to “just show up at Brighton in September and ‘be myself’ ” as possible—spending the months leading up to the test gathering as much information, preparation, and experience as possible and coming to Brighton ready to give it everything I had.
Ordinarily, there wouldn’t be very much odd about this notion at all, of course—we train and prepare for tennis competitions, spelling bees, standardized tests, and the like. But given that the Turing test is meant to evaluate how human I am, the implication seems to be that being human (and being oneself) is about more than simply showing up. I contend that it is. What exactly that “more” entails will be a main focus of this book—and the answers found along the way will be applicable to a lot more in life than just the Turing test.
A rather strange, and more than slightly ironic, cautionary tale: Dr. Robert Epstein, UCSD psychologist, editor of the scientific volume Parsing the Turing Test, and co-founder, with Hugh Loebner, of the Loebner Prize, subscribed to an online dating service in the winter of 2007. He began writing long letters to a Russian woman named Ivana, who would respond with long letters of her own, describing her family, her daily life, and her growing feelings for Epstein. Eventually, though, something didn’t feel right; long story short, Epstein ultimately realized that he’d been exchanging lengthy love letters for over four months with—you guessed it—a computer program. Poor guy: it wasn’t enough that web ruffians spam his email box every day; now they have to spam his heart?
On the one hand, I want to simply sit back and laugh at the guy—he founded the Loebner Prize, for Christ’s sake! What a chump! Then again, I’m also sympathetic: the unavoidable presence of spam in the twenty-first century not only clogs the in-boxes and bandwidth of the world (roughly 97 percent of all email messages are spam—we are talking tens of billions a day; you could literally power a small nation with the amount of electricity it takes to process the world’s daily spam), but does something arguably worse—it erodes our sense of trust. I hate that when I get messages from my friends I have to expend at least a modicum of energy, at least for the first few sentences, deciding whether it’s really them writing. We go through digital life, in the twenty-first century, with our guards up. All communication is a Turing test. All communication is suspect.
That’s the pessimistic version, and here’s the optimistic one. I’ll bet that Epstein learned a lesson, and I’ll bet that lesson was a lot more complicated and subtle than “trying to start an online relationship with someone from Nizhny Novgorod was a dumb idea.” I’d like to think, at least, that he’s going to have a lot of thinking to do about why it took him four months to realize that there was no actual exchange occurring between him and “Ivana,” and that in the future he’ll be quicker to the real-human-exchange draw. And that his next girlfriend, who hopefully not only is a bona fide Homo sapiens but also lives fewer than eleven time zones away, may have “Ivana,” in a weird way, to thank.
When Claude Shannon met Betty at Bell Labs in the 1940s, she was indeed a computer. If this sounds odd to us in any way, it’s worth knowing that nothing at all seemed odd about it to them. Nor to their Bell Labs co-workers, to whom their romance seemed a perfectly normal one, typical even. Engineers and computers wooed all the time.
It was Alan Turing’s 1950 paper “Computing Machinery and Intelligence” that launched the field of AI as we know it and ignited the conversation and controversy over the Turing test (or the “Imitation Game,” as Turing initially called it) that has continued to this day—but modern “computers” are nothing like the “computers” of Turing’s time. In the early twentieth century, before a “computer” was one of the digital processing devices that so proliferate in our twenty-first-century lives—in our offices, in our homes, in our cars, and, increasingly, in our pockets—it was something else: a job description.
From the mid-eighteenth century onward, computers, frequently women, were on the payrolls of corporations, engineering firms, and universities, performing calculations and doing numerical analysis, sometimes with the use of a rudimentary calculator. These original, human computers were behind the calculations for everything from the first accurate predictions for the return of Halley’s comet—early proof of Newton’s theory of gravity, which had only been checked against planetary orbits before—to the Manhattan Project, where Nobel laureate physicist Richard Feynman oversaw a group of human computers at Los Alamos.
It’s amazing to look back at some of the earliest papers in computer science, to see the authors attempting to explain, for the first time, what exactly these new contraptions were. Turing’s paper, for instance, describes the unheard-of “digital computer” by making analogies to a human computer: “The idea behind digital computers may be explained by saying that these machines are intended to carry out any operations which could be done by a human computer.” Of course, in the decades that followed, the quotation marks migrated, and now it is the digital computer that is not only the default term, but the literal one. And it is the human “computer” that is relegated to the illegitimacy of the figurative. In the mid-twentieth century, a piece of cutting-edge mathematical gadgetry was “like a computer.” In the twenty-first century, it is the human math whiz that is “like a computer.” An odd twist: we’re like the thing that used to be like us. We imitate our old imitators, one of the strange reversals of fortune in the long saga of human uniqueness.
Harvard psychologist Daniel Gilbert says that every psychologist must, at some point in his or her career, write a version of “The Sentence.” Specifically, The Sentence reads like this: “The human being is the only animal that________.” Indeed, it seems that philosophers, psychologists, and scientists have been writing and rewriting this sentence since the beginning of recorded history. The story of humans’ sense of self is, you might say, the story of failed, debunked versions of The Sentence. Except now it’s not just the animals that we’re worried about.
We once thought humans were unique for having a language with syntactical rules, but this isn’t so; we once thought humans were unique for using tools, but this isn’t so; we once thought humans were unique for being able to do mathematics, and now we can barely imagine being able to do what our calculators can.
There are several components to charting the evolution of The Sentence. One is a historical look at how various developments—in our knowledge of the world as well as our technical capabilities—have altered its formulations over time. From there, we can look at how these different theories have shaped humankind’s sense of its own identity. For instance, are artists more valuable to us than they were before we discovered how difficult art is for computers?
Last, we might ask ourselves: Is it appropriate to allow our definition of our own uniqueness to be, in some sense, reactionary to the advancing front of technology? And why is it that we are so compelled to feel unique in the first place?
“Sometimes it seems,” says Douglas Hofstadter, “as though each new step towards AI, rather than producing something which everyone agrees is real intelligence, merely reveals what real intelligence is not.” While at first this seems a consoling position—one that keeps our unique claim to thought intact—it does bear the uncomfortable appearance of a gradual retreat, the mental image being that of a medieval army withdrawing from the castle to the keep. But the retreat can’t continue indefinitely. Consider: if everything we regarded as a hallmark of “thinking” turns out not to involve it, then … what is thinking? It would seem to reduce to either an epiphenomenon—a kind of “exhaust” thrown off by the brain—or, worse, an illusion. Where is the keep of our selfhood?
The story of the twenty-first century will be, in part, the story of the drawing and redrawing of these battle lines, the story of Homo sapiens trying to stake a claim on shifting ground, flanked on both sides by beast and machine, pinned between meat and math.
And here’s a crucial, related question: Is this retreat a good thing or a bad thing? For instance, does the fact that computers are so good at mathematics in some sense take away an arena of human activity, or does it free us from having to do a nonhuman activity, liberating us into a more human life? The latter view would seem to be the more appealing, but it starts to seem less so if we can imagine a point in the future where the number of “human activities” left to be “liberated” into has grown uncomfortably small. What then?
There are no broader philosophical implications … It doesn’t connect to or illuminate anything.
–NOAM CHOMSKY, IN AN EMAIL TO THE AUTHOR
Alan Turing proposed his test as a way to measure the progress of technology, but it just as easily presents us with a way to measure our own. Oxford philosopher John Lucas says, for instance, that if we fail to prevent the machines from passing the Turing test, it will be “not because machines are so intelligent, but because humans, many of them at least, are so wooden.”
Here’s the thing: beyond its use as a technological benchmark, beyond even the philosophical, biological, and moral questions it poses, the Turing test is, at bottom, about the act of communication. I see its deepest questions as practical ones: How do we connect meaningfully with each other, as meaningfully as possible, within the limits of language and time? How does empathy work? What is the process by which someone comes into our life and comes to mean something to us? These, to me, are the test’s most central questions—the most central questions of being human.
Part of what’s fascinating about studying the programs that have done well at the Turing test is that it is a (frankly, sobering) study of how conversation can work in the total absence of emotional intimacy. A look at the transcripts of Turing tests past is in some sense a tour of the various ways in which we demur, dodge the question, lighten the mood, change the subject, distract, burn time: what shouldn’t pass as real conversation at the Turing test probably shouldn’t be allowed to pass as real human conversation, either.
There are a number of books written about the technical side of the Turing test: for instance, how to cleverly design Turing test programs—called chatterbots, chatbots, or just bots. In fact, almost everything written at a practical level about the Turing test is about how to make good bots, with a small remaining fraction about how to be a good judge. But nowhere do you read how to be a good confederate. I find this odd, since the confederate side, it seems to me, is where the stakes are highest, and where the answers ramify the furthest.
Know thine enemy better than thou knowest thyself, Sun Tzu tells us in The Art of War. In the case of the Turing test, knowing our enemy actually becomes a way of knowing ourselves. So we will, indeed, have a look at how some of these bots are constructed, and at some of the basic principles and most important results in theoretical computer science, but always with our eye to the human side of the equation.
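To make the kind of construction in question a little more concrete, here is a minimal sketch, in Python, of the ELIZA-style template matching from which most classic chatterbots descend: scan the judge’s message for a known pattern, echo a pronoun-reflected fragment of it back inside a canned reply, and fall back on stock deflections when nothing matches. The patterns and responses below are invented purely for illustration; this is a toy sketch, not the code of any actual contest entry.

```python
import random
import re

# A toy, ELIZA-style chatterbot: match the input against regex templates
# and echo a pronoun-reflected fragment of it back inside a canned reply.
# All patterns and replies here are invented for illustration.
RULES = [
    (r"\bI feel (.+)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"\bI am (.+)", ["What makes you say you are {0}?"]),
    (r"\bbecause (.+)", ["Is that really the only reason?"]),
    (r"\?\s*$", ["Why do you ask?", "What do you think?"]),
]

FALLBACKS = ["I see.", "Tell me more.", "Interesting. Go on."]

# Swap first- and second-person words so echoed fragments read naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your",
               "you": "I", "your": "my", "am": "are"}


def reflect(fragment):
    """Strip trailing punctuation and flip pronouns in a matched fragment."""
    words = fragment.strip().rstrip(".!?").lower().split()
    return " ".join(REFLECTIONS.get(word, word) for word in words)


def respond(message):
    """Return the first matching rule's reply, or a stock deflection."""
    for pattern, replies in RULES:
        match = re.search(pattern, message, re.IGNORECASE)
        if match:
            fragments = [reflect(group) for group in match.groups()]
            return random.choice(replies).format(*fragments)
    return random.choice(FALLBACKS)


if __name__ == "__main__":
    print(respond("I feel like the judges expect too much of me."))
    # e.g. "Why do you feel like the judges expect too much of you?"
```

A real contest entry layers far more on top of this skeleton, notably the large databases of logged conversations and canned material described earlier, but the basic move, pattern in, template out, is much the same.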
In a sense, this is a book about artificial intelligence, the story of its history and of my own personal involvement, in my own small way, in that history. But at the core, it’s a book about living life.
We can think of computers, which take an increasingly central role in our lives, as nemeses: a force like Terminator’s Skynet, or The Matrix’s Matrix, bent on our destruction, just as we should be bent on theirs. But I prefer, for a number of reasons, the notion of rivals—who only ostensibly want to win, and who know that competition’s main purpose is to raise the level of the game. All rivals are symbiotes. They need each other. They keep each other honest. They make each other better. The story of the progression of technology doesn’t have to be a dehumanizing or dispiriting one. Quite, as you will see, the contrary.