At any rate, and with examples like this one aside, the prevailing attitude seems clear: economists who subscribe to the rational-choice theory and those who critique it (in favor of what’s known as “bounded rationality”) both think that an emotionless, Spock-like approach to decision making is demonstrably superior. We should all aspire to throw off our ape ancestry to whatever extent we can—alas, we are fallible and will still make silly emotion-tinged “bloopers” here and there.
This has been for centuries, and by and large continues to be, the theoretical mainstream, and not just economics but Western intellectual history at large is full of examples of the creature needing the computer. But examples of the reverse, of the computer needing the creature, have been much rarer and more marginal—until lately.
Baba Shiv says that as early as the 1960s and ’70s, evolutionary biologists began to ask—well, if the emotional contribution to decision making is so terrible and detrimental, why did it develop? If it was so bad, wouldn’t we have evolved differently? The rational-choice theorists, I imagine, would respond by saying something like “we’re on our way there, but just not fast enough.” In the late ’80s and through the ’90s, says Shiv, neuroscientists “started providing evidence for the diametric opposite viewpoint” to rational-choice theory: “that emotion is essential for and fundamental to making good decisions.”
Shiv recalls a patient he worked with “who had an area of the emotional brain knocked off” by a stroke. After a day of doing some tests and diagnostics for which the patient had volunteered, Shiv offered him a free item as a way of saying “thank you”—in this case, a choice between a pen and a wallet. “If you’re faced with such a trivial decision, you’re going to examine the pen, examine the wallet, think a little bit, grab one, and go,” he says. “That’s it. It’s non-consequential. It’s just a pen and a wallet. This patient didn’t do that. He does the same thing that we would do, examine them and think a little bit, and he grabs the pen, starts walking—hesitates, grabs the wallet. He goes outside our office—comes back and grabs the pen. He goes to his hotel room—believe me: inconsequential a decision!—he leaves a message on our voice-mail mailbox, saying, ‘When I come tomorrow, can I pick up the wallet?’ This constant state of indecision.”
USC professor and neurologist Antoine Bechara had a similar patient, who, needing to sign a document, waffled between the two pens on the table for a full twenty minutes.[13] (If we are some computer/creature hybrid, then it seems that damage to the creature forces and impulses leaves us vulnerable to computer-type problems, like processor freezing and halting.) In cases like this there is no “rational” or “correct” answer. So the logical, analytical mind just flounders and flounders.
In other decisions where there is no objectively best choice, where there are simply a number of subjective variables with trade-offs between them (airline tickets is one example, houses another, and Shiv includes “mate selection”—a.k.a. dating—among these), the hyperrational mind basically freaks out, something that Shiv calls a “decision dilemma.” The nature of the situation is such that additional information probably won’t even help. In these cases—consider the parable of the donkey that, halfway between two bales of hay and unable to decide which way to walk, starves to death—what we want, more than to be “correct,” is to be satisfied with our choice (and out of the dilemma).
Shiv practices what he preaches. His and his wife’s marriage was arranged—they decided to tie the knot after talking for twenty minutes[14]—and they committed to buying their house at first sight.
All this “hemispheric bias,” you might call it, or rationality bias, or analytical bias—for it’s in actuality more about analytical thought and linguistic articulation than about the left hemisphere per se—both compounds and is compounded by a whole host of other prevailing societal winds to produce some decidedly troubling outcomes.
I think back, for instance, to my youthful days in CCD—Confraternity of Christian Doctrine, or Catholicism night classes for kids in secular public schools. The ideal of piousness, it seemed to me in those days, was the life of a cloistered monk, attempting a kind of afterlife on earth by living, as much as possible, apart from the “creatural” aspects of life. The Aristotelian ideal: a life spent entirely in contemplation. No rich foods, no aestheticizing the body with fashion, no reveling in the body qua body through athletics—nor dancing—nor, of course, sex. On occasion making music, yes, but music so beholden to prescribed rules of composition and to mathematical ratios of harmony that it too seemed to aspire toward pure analytics and detachment from the general filth and fuzziness of embodiment.
And so for many of my early years I distrusted my body, and all the weird feelings that came with it. I was a mind, but merely had a body—whose main purpose, it seemed, was to move the mind around and otherwise only ever got in its way. I was consciousness—in Yeats’s unforgettable words—“sick with desire / And fastened to a dying animal.” After that animal finally did die, it was explained to me, things would get a lot better. They then made sure to emphasize that suicide is strictly against the rules. We were all in this thing together, and we all just had to wait this embodiment thing out.
Meanwhile, on the playground, I was contemptuous of the seemingly Neanderthal boys who shot hoops and grunted their way through recess—meanwhile, my friends and I talked about MS-DOS and Stephen Hawking. I tended to view the need to eat as an annoyance—I’d put food in my mouth to hush my demanding stomach the way a parent gives a needy infant a pacifier. Eating was annoying; it got in the way of life. Peeing was annoying, showering was annoying, brushing the crud off my teeth every morning and night was annoying, sleeping a third of my life away was annoying. And sexual desire—somehow I’d developed the idea that my first boyhood forays into masturbation had stamped my one-way ticket to hell—sexual desire was so annoying that I was pretty sure it had already cost me everything.
I want to argue that this Aristotelian/Stoic/Cartesian/Christian emphasis on reason, on thought, on the head, this distrust of the senses, of the body, has led to some profoundly strange behavior—and not just in philosophers, lawyers, economists, neurologists, educators, and the hapless would-be pious, but seemingly everywhere. In a world of manual outdoor labor, the sedentary and ever-feasting nobility made a status symbol of being overweight and pale; in a world of information work, it is a luxury to be tan and lean, if artificially or unhealthily so. Both scenarios would seem less than ideal. The very fact that we, as a rule, must deliberately “get exercise” bodes poorly: I imagine the middle-class city dweller paying money for a parking space or transit pass in lieu of walking a mile or two to the office, who then pays more money for a gym membership (and drives or buses there). I grew up three miles from the Atlantic Ocean; during the summer, tanning salons a block and a half from the beach would still be doing a brisk business. To see ourselves as distinct and apart from our fellow creatures is to see ourselves as distinct and apart from our bodies. The results of adopting this philosophy have been rather demonstrably weird.
Wanting to get a handle on how these questions of soul and body intersect computer science, I called up the University of New Mexico’s and the Santa Fe Institute’s Dave Ackley, a professor in the field of artificial life.
“To me,” he says, “and this is one of the rants that I’ve been on, that ever since von Neumann and Turing and the ENIAC guys[15] built machines, the model that they’ve used is the model of the conscious mind—one thing at a time, nothing changing except by conscious thought—no interrupts, no communication from the outside world. So in particular the computation was not only unaware of the world; it didn’t realize that it had a body, so the computation was disembodied, in a very real and literal sense. There’s this IOU for a body that we wrote to computers ever since we designed them, and we haven’t really paid it off yet.”
I end up wondering if we even set out to owe computers a body. With the Platonic/Cartesian ideal of sensory mistrust, it seems almost as if computers were designed with the intention of our becoming more like them—in other words, computers represent an IOU of disembodiment that we wrote to ourselves. Indeed, certain schools of thought seem to imagine computing as a kind of oncoming rapture. Ray Kurzweil (in 2005’s The Singularity Is Near), among several other computer scientists, speaks of a utopian future where we shed our bodies and upload our minds into computers and live forever, virtual, immortal, disembodied. Heaven for hackers.
To Ackley’s point, most work on computation has not traditionally been on dynamic systems, or interactive ones, or ones integrating data from the real world in real time. Indeed, theoretical models of the computer—the Turing machine, the von Neumann architecture—seem like reproductions of an idealized version of conscious, deliberate reasoning. As Ackley puts it, “The von Neumann machine is an image of one’s conscious mind where you tend to think: you’re doing long division, and you run this algorithm step-by-step. And that’s not how brains operate. And only in various circumstances is that how minds operate.”
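To make Ackley’s image concrete, here is a toy sketch in Python (mine, not Ackley’s, and not how any real machine is programmed) of grade-school long division run as an explicit serial procedure: one deliberate step at a time, nothing happening except by the next instruction.

```python
# A toy sketch of long division as a serial, step-by-step procedure,
# the "one thing at a time" style of computation Ackley is describing.
# (Illustrative only; nothing here comes from the book.)

def long_division(dividend: int, divisor: int):
    """Divide digit by digit, the way the algorithm is taught by hand."""
    quotient_digits = []
    remainder = 0
    for digit in str(dividend):               # bring down one digit at a time
        remainder = remainder * 10 + int(digit)
        quotient_digits.append(str(remainder // divisor))  # one deliberate step
        remainder = remainder % divisor                    # then the next
    return int("".join(quotient_digits)), remainder

print(long_division(7829, 6))   # (1304, 5), since 6 * 1304 + 5 == 7829
```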
I spoke next with University of Massachusetts theoretical computer scientist Hava Siegelmann, who agreed. “Turing was very [mathematically] smart, and he suggested the Turing machine as a way to describe a mathematician.[16] It’s [modeling] the way a person solves a problem, not the way he recognizes his mother.” (Which latter problem, as Sacks suggests, is of the “right hemisphere” variety.)
For some time in eighteenth-century Europe, there was a sweeping fad of automatons: contraptions made to look and act as much like real people or animals as possible. The most famous and celebrated of these was the “Canard Digérateur”—the “Digesting Duck”—created by Jacques de Vaucanson in 1739. The duck provoked such a sensation that Voltaire himself wrote of it, albeit with tongue in cheek: “Sans … le canard de Vaucanson vous n’auriez rien qui fit ressouvenir de la gloire de la France,” sometimes humorously translated as “Without the shitting duck we’d have nothing to remind us of the glory of France.”
Actually, despite Vaucanson’s claims that he had a “chemistry lab” inside the duck mimicking digestion, there was simply a pouch of bread crumbs, dyed green, stashed behind the anus, to be released shortly after eating. Stanford professor Jessica Riskin speculates that the lack of attempt to simulate digestion had to do with a feeling at the time that the “clean” processes of the body could be mimicked (muscle, bone, joint) with gears and levers but that the “messy” processes (mastication, digestion, defecation) could not. Is it possible that something similar happened in our approach to mimicking the mind?
In fact, the field of computer science split, very early on, between researchers who wanted to pursue more “clean,” algorithmic types of structures and those who wanted to pursue more “messy” and gestalt-oriented structures. Though both have made progress, the “algorithmic” side of the field has, from Turing on, completely dominated the more “statistical” side. That is, until recently.
There’s been interest in neural networks and analog computation and more statistical, as opposed to algorithmic, computing since at least the early 1940s, but the dominant paradigm by far was the algorithmic, rule-based paradigm—that is, up until about the turn of the century.
If you isolate a specific type of problem—say, the problem of machine translation—you see the narrative clear as day. Early approaches were about building huge “dictionaries” of word-to-word pairings, based on meaning, and algorithms for turning one syntax and grammar into another (e.g., if going to Spanish from English, move the adjectives that come before a noun so that they come after it).
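As a rough, hedged illustration of that early rule-based recipe, the sketch below uses a tiny invented English-to-Spanish dictionary and a single reordering rule; it shows the flavor of the approach, not any actual system.

```python
# A minimal sketch of the old rule-based approach to machine translation:
# a word-for-word dictionary plus one syntactic reordering rule. The tiny
# dictionary and the single adjective rule are invented for illustration;
# real systems carried vastly larger rule sets.

DICTIONARY = {   # hypothetical word-to-word pairings
    "the": "el", "red": "rojo", "house": "casa", "is": "es", "big": "grande",
}
ADJECTIVES = {"red", "big"}
NOUNS = {"house"}

def translate(sentence: str) -> str:
    words = sentence.lower().split()
    # Rule: an English adjective that comes before a noun moves after it.
    reordered = []
    i = 0
    while i < len(words):
        if words[i] in ADJECTIVES and i + 1 < len(words) and words[i + 1] in NOUNS:
            reordered += [words[i + 1], words[i]]   # noun first, then adjective
            i += 2
        else:
            reordered.append(words[i])
            i += 1
    # Then a word-for-word dictionary lookup.
    return " ".join(DICTIONARY.get(w, w) for w in reordered)

print(translate("The red house is big"))   # -> "el casa rojo es grande"
```

Even this toy output, “el casa rojo” where Spanish wants “la casa roja,” hints at why such rule lists ballooned: gender agreement, idioms, and countless other exceptions all had to be hand-coded.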
To get a little more of the story, I spoke on the phone with computational linguist Roger Levy of UCSD. Related to the problem of translation is the problem of paraphrase. “Frankly,” he says, “as a computational linguist, I can’t imagine trying to write a program to pass the Turing test. Something I might do as a confederate is to take a sentence, a relatively complex sentence, and say, ‘You said this. You could also express the meaning with this, this, this, and this.’ That would be extremely difficult, paraphrase, for a computer.” But, he explains, such specific “demonstrations” on my part might backfire: they come off as unnatural, and I might have to explicitly lay out a case for why what I’m saying is hard for a computer to do. “All this depends on the informedness level of the judge,” he says. “The nice thing about small talk, though, is that when you’re in the realm of heavy reliance on pragmatic inferences, that’s very hard for a computer—because you have to rely on real-world knowledge.”
I ask him for some examples of how “pragmatic inferences” might work. “Recently we did an experiment in real-time human sentence comprehension. I’m going to give you an ambiguous sentence: ‘John babysat the child of the musician, who is arrogant and rude.’ Who’s rude?” I said that to my mind it’s the musician. “Okay, now: ‘John detested the child of the musician, who is arrogant and rude.’ ” Now it sounds like the child is rude, I said. “Right. No system in existence has this kind of representation.”
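A purely syntactic system really is stuck here, since both nouns are grammatical hosts for the relative clause. The toy sketch below (invented for illustration; it is not Levy’s experimental setup or any real parser) simply enumerates the readings and has nowhere to go from there, because the cue humans use, what “babysat” versus “detested” implies about who might be rude, is not in the grammar.

```python
# A sketch (not Levy's system, and not a real parser) of why the
# relative-clause attachment is hard: both nouns are grammatical hosts,
# so syntax alone yields two readings, and nothing below even looks at
# the verb, which is exactly the information humans use to choose.

ATTACHMENT_HOSTS = ["the child", "the musician"]

def readings(verb: str) -> list[str]:
    """Enumerate the readings a purely syntactic system is left with."""
    # 'verb' is deliberately unused: a grammar has no mechanism for
    # letting "babysat" vs. "detested" shift the attachment preference.
    return [f"{host} is arrogant and rude" for host in ATTACHMENT_HOSTS]

for verb in ("babysat", "detested"):
    print(f"John {verb} the child of the musician, who is arrogant and rude.")
    for reading in readings(verb):
        print("   possible reading:", reading)
```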
It turns out that all kinds of everyday sentences require more than just a dictionary and a knowledge of grammar—compare “Take the pizza out of the oven and then close it” with “Take the pizza out of the oven and then put it on the counter.” To make sense of the pronoun “it” in these examples, and in ones like “I was holding the coffee cup and the milk carton, and just poured it in without checking the expiration date,” requires an understanding of how the world works, not how the language works. (Even a system programmed with basic facts like “coffee and milk are liquids,” “cups and cartons are containers,” “only liquids can be ‘poured,’ ” etc., won’t be able to tell whether pouring the coffee into the carton or the milk into the cup makes more sense.)
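To spell out that parenthetical with a minimal sketch (mine, with invented type facts), here is a resolver that knows only that liquids can be poured: the constraint rules out the cup and the carton but leaves coffee and milk equally live, exactly the dead end described above.

```python
# A naive sketch of pronoun resolution by type constraints alone: filter
# candidates for "it" using facts like "only liquids can be poured."
# Both coffee and milk pass, so the constraints leave the sentence just
# as ambiguous as before; what is missing is world knowledge about what
# people actually pour into what. The facts below are invented.

TYPE_FACTS = {
    "coffee": {"liquid"},
    "milk": {"liquid"},
    "cup": {"container"},
    "carton": {"container"},
}

def candidates_for_poured_it(mentioned_entities):
    """Return every mentioned entity satisfying 'only liquids can be poured'."""
    return [e for e in mentioned_entities if "liquid" in TYPE_FACTS.get(e, set())]

# "I was holding the coffee cup and the milk carton, and just poured it in..."
print(candidates_for_poured_it(["coffee", "cup", "milk", "carton"]))
# -> ['coffee', 'milk']: the type constraints cannot break the tie.
```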