
The quasi-biblical quest to bequeath unto computers an emotional intelligence—to “promote them” from darkness—has long occupied human imagination and appears to have proceeded down a pair of intertwining roads. Down one road, typified by Mary Shelley’s Frankenstein, the birth of artificial intelligence has harrowing repercussions, however soulful its beginnings. Down the other road, we encounter the robot as savior or selfless aide, as seen in Leonardo da Vinci’s 1495 designs for a mechanical knight. Mostly, though, these two roads have crossed over each other—the artificial being is both savior and villain. As Adam and Eve were corrupted and disappointed their God, so our science-fiction writers presume robot intelligence will start off with noble intentions and lead to an inevitable debasement. This twinned identity of friend and antagonist is evident as far back as 1921, when the Czech dramatist Karel Capek premiered his play R.U.R. The acronym stands for Rossum’s Universal Robots (this was in fact the work that popularized the term robot). In Capek’s play, a line of helpful mechanical servants degrades into hostile rebels that attack their human masters and eventually kill us off.

Shelves and shelves of such dystopian fantasies do not dull, though, the hope that our creations may know something we do not, may ease our human suffering. We turn to that promise of artificial intelligence almost with the instincts of lost children, asking these machines to make sense of our lives or help us to escape the silence of our solitude. It makes sense, then, that the first computer outside of science fiction to speak back with something akin to human concern was an infinitely patient and infinitely permissive therapist called ELIZA.

In the mid-1960s, Joseph Weizenbaum, a computer scientist at MIT, wrote a string of code that allowed a computer to process conversational text and then produce a new piece of text that looked an awful lot like the next step in a dialogue. ELIZA was named for Eliza Doolittle, the Cockney girl in George Bernard Shaw’s play Pygmalion, who must overcome her impoverished upbringing and learn to speak “like a lady.” The program’s efforts to communicate were rudimentary, and her speech was annoyingly reflective in the manner of a Rogerian psychologist. Tell ELIZA, “I don’t like going to school,” and she will respond with, “Tell me why you don’t like going to school.” Inform ELIZA that the room is dark, and she will ask if you really think that is so.
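To make the trick concrete, here is a minimal sketch of ELIZA-style reflection in Python. The handful of rules, the pronoun table, and the fallback line below are invented for illustration; they are not Weizenbaum’s original script, only the general shape of it.

```python
import re

# Toy ELIZA-style rules: each pattern captures part of the input, which is
# echoed back inside a canned, Rogerian-sounding reply. (Illustrative only,
# not Weizenbaum's original DOCTOR script.)
RULES = [
    (re.compile(r"i don't like (.*)", re.I), "Tell me why you don't like {0}."),
    (re.compile(r"i'm (.*)", re.I),          "How long have you been {0}?"),
    (re.compile(r"i am (.*)", re.I),         "How long have you been {0}?"),
    (re.compile(r"the room is (.*)", re.I),  "Do you really think the room is {0}?"),
]

# Pronouns are swapped so "my book" comes back as "your book".
REFLECTIONS = {"my": "your", "your": "my", "i": "you", "me": "you", "you": "I", "am": "are"}

def reflect(fragment: str) -> str:
    """Swap first- and second-person words in the captured fragment."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(text: str) -> str:
    """Return the first matching canned reply, or a neutral fallback."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(reflect(match.group(1)))
    return "Tell me more . . ."

if __name__ == "__main__":
    print(respond("I don't like going to school"))      # Tell me why you don't like going to school.
    print(respond("I'm stressed about writing a book"))  # How long have you been stressed about writing a book?
```

Everything such a program “knows” is visible here: a pattern, a pronoun swap, and a canned template, with “Tell me more . . .” as the escape hatch when nothing matches.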

I was able to use the ELIZA program myself, thinking it could do a better job of explaining its workings than I could. Here is the transcript from our first session:

Hello. My name is Eliza. How may I help you?

 

I want to write about you.

 

Oh . . . ?

 

Yes, I’m writing a book and there’s a chapter where you appear.

 

How long have you been writing a book and there’s a chapter where me appear?

 

A few months now.

 

I’m not sure I understand you fully.

 

I’m not sure you do, either.

 

Do you believe it is normal to be not sure me does either?

 

I think it’s probably a typical response to your responses.

 

Why are you concerned about my responses?

 

Mainly because they’re nonsensical.

 

That is quite interesting.

 

Do you think so?

 

You’re not really talking about me, are you?

 

This quickly grew insufferable. But perhaps, I told myself, the experiment was unfair. Self-reflection is not what ELIZA was designed to excel at. She wanted to help me with my problems, not discuss her own secrets. I tried again, this time with a genuine emotional approach.

Hello. My name is Eliza. How may I help you?

 

I’m stressed about writing a book.

 

How long have you been stressed about writing a book?

 

Months.

 

Okay. . . . “I’m stressed about writing a book.” Tell me more.

 

Like what?

 

Does that question interest you?

 

Oh, fuck off.

 

Tell me more. . . .

 

Fuck off. Fuck off. Fuck off.

 

Why do you repeat yourself?

 

Fuck off. Fuck off. Fuck off.

 

Come, come, elucidate your thoughts.

 

Fuck, fuck, fuckity fuck.

 

Say, do you have any psychological problems?

 

This second attempt at communicating with the antique program counted as a failure, too, I think. But I was surprised at how engaged I felt.

When Weizenbaum began allowing MIT students to interact with ELIZA, he, too, was surprised by how drawn in they were. Many of them found her approach charming and helpful. She was, in some ways, an ideal conversationalist: someone willing to parrot back your own opinions and ask over and over how you feel and why you feel that way. Some therapists (hardly looking out for their own financial interests) began suggesting that ELIZA would be a cheap alternative to pricey psychoanalysis sessions. As dull-witted as ELIZA actually was, she gave people exactly what they wanted from a listener—a sounding board. “Extremely short exposures to a relatively simple computer program,” Weizenbaum later wrote, “could induce powerful delusional thinking in quite normal people.”

Today, these delusions are everywhere. Often they manifest in ridiculous ways. BMW was forced to recall a GPS system because German men couldn’t take directions from the computer’s female voice. And when the U.S. Army designed its Sergeant Star, a chatbot that talks to would-be recruits at GoArmy.com, it naturally had its algorithm speak with a burly, all-American voice reminiscent of the shoot-’em-up video game Call of Duty. Fooling a human into bonding with inanimate programs (often of corporate or governmental derivation) is the new, promising, and dangerous frontier. But the Columbus of that frontier set sail more than half a century ago.

• • • • •

 

The haunted English mathematician Alan Turing—godfather of the computer—believed that a future with emotional, companionable computers was a simple inevitability. He declared, “One day ladies will take their computers for walks in the park and tell each other, ‘My little computer said such a funny thing this morning!’” Turing proposed that a machine could be called “intelligent” if people exchanging text messages with that machine could not tell whether they were communicating with a human. (There are a few people I know who would fail such a test, but that is another matter.)

This challenge—which came to be called “the Turing test”—lives on in an annual competition for the Loebner Prize, a coveted solid-gold medal (plus $100,000 cash) for any computer whose conversation is so fluid, so believable, that it becomes indistinguishable from a human correspondent. At the Loebner competition (founded in 1990 by New York philanthropist Hugh Loebner), a panel of judges sits before computer screens and engages in brief, typed conversations with humans and computers—but they aren’t told which is which. Then the judges must cast their votes—which was the person and which was the program? Programs like Cleverbot (the “most human computer” in 2005 and 2006) maintain an enormous database of typical responses that humans make to given sentences, which they cobble together into legible (though slightly bizarre) conversations; others, like the 2012 winner, Chip Vivant, eschew the database of canned responses and attempt something that passes for “reasoning.” Human contestants are liable to be deemed inhuman, too: One warm-blooded contestant called Cynthia Clay, who happened to be a Shakespeare expert, was voted a computer by three judges when she started chatting about the Bard and seemed to know “too much.” (According to Brian Christian’s account in The Most Human Human, Clay took the mistake as a badge of honor—being inhuman was a kind of compliment.)
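As a rough sketch of that database-driven approach (the corpus, the overlap score, and the function names below are invented for illustration; Cleverbot’s real data and matching logic are far larger and more elaborate), such a program can simply return the stored reply whose recorded prompt most resembles what the user just typed:

```python
# A toy retrieval chatbot in the spirit of the database-driven entrants: it keeps
# (prompt, reply) pairs and answers with the reply whose stored prompt shares the
# most words with the user's input. The corpus here is fabricated.
TOY_CORPUS = [
    ("hello", "Hi there. How are you today?"),
    ("how are you", "Can't complain. And you?"),
    ("do you like shakespeare", "The Bard? I prefer the comedies."),
    ("what is your name", "People call me all sorts of things."),
]

def word_overlap(a: str, b: str) -> int:
    """Count the words two utterances have in common."""
    return len(set(a.lower().split()) & set(b.lower().split()))

def reply(user_text: str) -> str:
    """Return the stored reply whose prompt best matches the input."""
    _best_prompt, best_reply = max(TOY_CORPUS, key=lambda pair: word_overlap(pair[0], user_text))
    return best_reply

print(reply("Do you like Shakespeare?"))  # -> "The Bard? I prefer the comedies."
```

Scale the corpus up to millions of logged exchanges and the seams begin to blur, which is roughly how those legible but slightly bizarre conversations get stitched together.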

All computer contestants, like ELIZA, have failed the full Turing test; the infinitely delicate set of variables that makes up human exchange remains opaque and uncomputable. Put simply, computers still lack the empathy required to meet humans on their own emotive level.

We inch toward that goal. But there is a deep difficulty in teaching our computers even a little empathy. Our emotional expressions are vastly complex and incorporate an annoyingly subtle range of signifiers. A face you read as tired may have all the lines and shadows of “sorrowful” as far as a poorly trained robot is concerned.

What Alan Turing imagined, an intelligent computer that can play the human game almost as well as a real human, is now called “affective computing”—and it’s the focus of a burgeoning field in computer science. “Affective” is a curious word choice, though an apt one. While the word calls up “affection” and has come to reference moods and feelings, we should remember that “affective” comes from the Latin word afficere, which means “to influence” or (more sinisterly) “to attack with disease.”

Recently, a band of scientists at MIT has made strides toward the holy grail of afficere—translating the range of human emotions into the 1s and 0s of computer code.

• • • • •

 

Besides the progress of chatbots, we now have software that can map twenty-four points on your face, allowing it to identify a range of emotions and issue appropriate responses. We also have Q sensors—bands worn on the wrist that measure your “emotional arousal” by monitoring body heat and the skin’s electrical conductance.
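To give a sense of what “measuring emotional arousal” can mean once it is reduced to code, here is a deliberately crude sketch. It is not the Q sensor’s actual algorithm; the sample trace, the window size, and the threshold are all invented. It simply flags moments when skin conductance jumps above its recent baseline.

```python
from statistics import mean

def flag_arousal(conductance_microsiemens, window=5, jump=0.3):
    """Return indices where a reading exceeds the recent baseline by more than `jump`.

    A crude stand-in for "emotional arousal" detection: each sample is compared
    to the average of the previous `window` samples.
    """
    flagged = []
    for i in range(window, len(conductance_microsiemens)):
        baseline = mean(conductance_microsiemens[i - window:i])
        if conductance_microsiemens[i] - baseline > jump:
            flagged.append(i)
    return flagged

# Fabricated sample trace: a calm stretch, then a spike, then calm again.
trace = [2.0, 2.1, 2.0, 2.1, 2.0, 2.1, 2.9, 3.0, 2.2, 2.1]
print(flag_arousal(trace))  # indices of the samples that jumped above baseline
```

Whether that spike means excitement, fear, or a flight of stairs is exactly the interpretive gap the real systems have to close.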

But the root problem remains unchanged. Whether we’re talking about “affective computers” or “computational empathy,” at a basic level we’re still discussing pattern recognition technology and the ever more sophisticated terrain of data mining. Always, the goal is to “humanize” an interface by the enormous task of filtering masses of lived experience through a finer and finer mesh of software.

Many of the minds operating at the frontier of this effort come together at MIT’s Media Lab, where researchers are busy (in their own words) “inventing a better future.” I got to know Karthik Dinakar, a Media Lab researcher who moonlights with Microsoft, helping them improve their Bing search engine. (“Every time you type in ‘Hillary Clinton,’” he told me, “that’s me.”)

Dinakar is a handsome twenty-eight-year-old man with tight black hair and a ready smile. And, like Amanda Todd, he’s intimately acquainted with the harshness of childhood bullying. Dinakar was bullied throughout his teen years for being “too geeky,” and he would reach out online. “I would write blog posts and I would . . . well, I would feel lighter. I think that’s why people do all of it, why they go on Twitter or anywhere. I think they must be doing it for sympathy of one kind or another.”

Compounding Dinakar’s sense of difference was the fact that he lives with an extreme variety of synesthesia; this means his brain creates unusual sensory impressions based on seemingly unrelated inputs. We’ve all experienced synesthesia to some degree: The brain development of infants actually necessitates a similar state of being. Infants at two or three months still have intermingled senses. But in rare cases, the situation will persist. If you mention the number “seven” to Dinakar, he sees a distinct color. “Friday” is always a particular dark green. “Sunday” is always black.

Naturally, these differences make Dinakar an ideal member of the Media Lab team. A so-called geek with a brain hardwired to make unorthodox connections is exactly what a bastion of interdisciplinary academia most desperately needs.

When Dinakar began his PhD work at MIT, in the fall of 2010, his brain was “in pain for the entire semester,” he says. Class members were told to come up with a single large project, but nothing came to mind. “I wasn’t interested in what others were interested in. There was just . . . nothing. I assumed I was going to flunk.”

Then, one evening at home, Dinakar watched Anderson Cooper report on Tyler Clementi, an eighteen-year-old violin student at Rutgers University who had leapt from the George Washington Bridge and drowned in the Hudson River. Clementi’s dorm mate had encouraged friends to watch him kissing another boy via a secretly positioned webcam. The ubiquitous Dr. Phil appeared on the program, speaking with Cooper about the particular lasting power of cyberbullying, which does not disappear the way a moment of “real-life bullying” might: “This person thinks, ‘I am damaged, irreparably, forever.’ And that’s the kind of desperation that leads to an act of suicide. . . . The thought of the victim is that everybody in the world has seen this. And that everybody in the world is going to respond to it like the mean-spirited person that created it.” Dinakar watched the program, figuring there must be a way to stem such cruelty, to monitor and manage unacceptable online behavior.
