Author: Ken Alder
The passage in 2005 of the McCain detainee amendment to a military appropriations bill and the subsequent redrafting of the Army Field Manual in 2006 would seem to prohibit many of the harsh interrogation tactics previously authorized for enemy combatants. Yet the Military Commissions Act, signed by President Bush in October 2006, for the first time opens the door to formally authorizing CIA interrogators to use these techniques on suspected terrorists, and it permits the introduction in special military tribunals of evidence obtained with these methods. It would seem that the U.S. government has decided to sanction these techniques both as a matter of law and as a clandestine practice.
Under the circumstances, it is understandable that America has once again turned to science for a solution to the problem of how to coax reliable information from recalcitrant human beings. But a half century of controversy and failure seems to have finally eroded the public’s faith in the traditional polygraph. Surveys of jurors in civil cases suggest that the public has become skeptical about lie detectors that consist of pulsing tubes and dancing needles. When science gets old, it’s time to get some new science. In recent decades researchers have floated many new schemes for lie detection. Some have listened for the pitch of the human voice under stress; others for heat around the eyes.
At the medical school of the University of California at San Francisco, Paul Ekman has compiled a vast library of facial expressions, sorted by emotional type. The faces of liars, he says, involuntarily telegraph their deceit, sometimes in muscle reactions too quick to be consciously observed—unless you know what to look for or have the right equipment. Ekman’s methods build on a series of assumptions about the links between human biology and society: that thanks to evolutionary pressures certain human emotions—such as fear—produce a common and specific reaction in the human body; that their visible signs serve some role in social signaling and hence can be read by others, even if not always consciously; and that, with sufficient attention and training, the fear of being caught in a lie is one of the reactions that can be detected. But it is just as plausible to say that evolutionary pressures have made people good at getting away with lies as it is to say that evolution has made them good at catching falsehoods. And subsequent studies have shown—and Ekman himself admits—that even with training very few people do much better than guesswork, and some do worse!
The latest techniques offer an alluring combination of high-tech equipment and a direct assault on the fount of all duplicity: the brain. In place of the old correlation of the body and its emotions, neuroscientists promise to lead interrogators directly into the factory of our consciousness.
One technique is based on the electroencephalograph (EEG), which measures the electrical activity of the brain. The EEG has the advantage of being relatively easy to record, though it gives an aggregate picture of neuron activity, not readily localized. It was first proposed for lie detection in the 1940s, when it was used to evaluate a variety of mental illnesses; John Larson, for instance, used it to assess lobotomy patients. One recent adaptation looks for a rise in the P300 wave said to occur 300-plus milliseconds after a subject encounters a familiar, but rare, stimulus—such as when someone suddenly calls out a former lover’s name amid a stream of babble. Lawrence Farwell, its chief promoter, has patented an algorithm that records this and other signals to produce a "brain fingerprint" that he says can detect guilty knowledge 99.9 percent of the time. In the early 1990s Farwell received $1 million in funding from the CIA, although the agency soon ceased to support his research—as it explained in the months after September 11—because it decided the technique could not readily be adapted to screening purposes and because a full evaluation of the technique’s effectiveness was impossible so long as Farwell refused to divulge crucial aspects of his algorithm. At the time Farwell was being named one of the "TIME 100: The Next Wave, the 100 Innovators Who May Be the Picassos or Einsteins of the Twenty-First Century." In fact, Farwell is poised to become the Orlando Scott of our time. And like Scott, he gets results by "art rather than science." His own mentor says the technique suffers from many of the problems of traditional polygraphs, such as assuming that the subject’s memory is a storage bin from which exact matches can be drawn, rather than an active and creative faculty. The fear is that examiners may interpret brain activity as a sign that the subject has recalled a rare event which the person has never in fact experienced.
A more recent—but technically challenging—method is based on functional magnetic resonance imaging (fMRI). These are real-time scans that enable neuroscientists to gauge the level of activity in various regions of the brain by tracking the blood (oxygen) flow to each region. The basic assumption is that when subjects tell a lie, they draw on distinct cognitive abilities located in identifiable regions of the brain. Taking the images is laborious and requires full cooperation from the subject, who must lie supine inside a confining tube for nearly an hour. Yet researchers using these techniques have been able to reproduce Keeler’s card trick and catch volunteers who have "stolen" small sums of money.
For all their state-of-the-art plausibility, however, these new tests are plagued by many of the same ambiguities as old-style polygraphy. The studies still assume a link—via some unspecified mediation—between deception and bodily reaction, though of course they squeeze the "body" from the viscera to the brain—as if the brain’s "feelings" were necessarily more authentic, or less easily faked, than those in the lowly heart or stomach. (Not that these humble organs have been entirely neglected; one new lie-detection technique zeros in on the autonomic response of the stomach muscles.) More specifically, most fMRI researchers, when testing for guilty knowledge, look for extra activity in the prefrontal cortex—thought to be responsible for inhibiting responses—on the assumption that honest recollection is the default mode of human beings and that deceit involves a distinctive pathological behavior to cover it up. But as Marston discovered a century ago in his evaluation of Münsterberg’s word-association tests, some people seem to enjoy telling lies and do so without inhibition. Indeed, other fMRI researchers have stumbled on what Montaigne bemoaned four centuries ago: that lies may assume a hundred thousand shapes—for a start, those we come up with on the fly and those we stake out in advance—and these varieties seem to draw on different regions of the brain, depending on the mix of memorization and invention. And what about those cases in which subjects feel guilty or defiant or simply refuse to play along with the experimenters’ "interesting scenarios" in return for a $20 reward? Besides, isn’t the human mind capable of fabricating memories, or feeling guilty for no good reason, or in some cases calmly and pathologically lying? Who knows, might we not even be evolutionarily equipped to lie, the way other animals (unconsciously) deceive predators?
On closer inspection the fMRI technique seems to return us to the era of phrenology. By their own admission many researchers in the field engage in "blobology," in which they reason back from their colorful images to the sorts of cognitive processes that "might" have been involved when the subject performed some task. These sorts of inverse inferences have always plagued lie detector research. The fMRI deception studies conducted thus far have been confined to laboratory tests, amalgamating the responses of half a dozen subjects asked to lie about an incident with little ambiguity and low consequences, lest their emotions swamp the processes involved in cognitive deceit. Even under such artificial conditions, fellow researchers caution, the results barely cross the threshold of significance. And the challenge for lie detection, as always, is to get good results in the messy world of tangled tales and fearful innocents.
Then there are the problems that plague any attempt to detect deception. Drugs, mental discipline, or other countermeasures might foil the new tests. False memories and other tricks of self-deception will always cast doubt on their conclusiveness, as might strong personal commitments. And finally, these new techniques all retain the old software of interrogation: the stimulation question technique, the guilty-knowledge test, etc. Thus they seem poised to renew the classical "misdirect" of the magician’s art, focusing the subject’s attention on the newfangled and intimidating equipment, when it is actually the examination ritual that has produced the results.
But a drummed-up need is the mother of marketing, and in the years since 9/11 many millions of dollars and considerable brainpower have been devoted to this field. In 2006 two commercial firms began to offer fMRI deception testing based on patented algorithms. These firms are offering their services to the Department of Homeland Security and the director of National Intelligence, as well as to civil courts, employers, advertisers, and movie studios interested in gauging viewers’ emotional responses to "media information." Leonarde Keeler would be proud, John Larson horrified, and William Marston amused.
Of course one can never rule out the possibility that researchers will someday find relatively reliable methods of distinguishing genuine belief from deliberate deceit. Nothing in this book suggests that such an outcome is impossible. To be sure, one may doubt that it would be welcome; ethicists are already on the case, worrying that the new brain techniques will actually succeed in invading the final realm of human privacy. By that time, of course, as the judge in the Frye case said, we may all be dead. In the meanwhile, the more worrisome and likely outcome is that new techniques, no more reliable than Larson and Keeler’s old-fashioned methods, will slip neatly into the role the polygraph once played. That is how badly we believe in science.
It would be comforting to think that science could sanitize the messy business of extracting reliable information from recalcitrant human beings. How much easier it would be if we could just ship off the problem of human deception to a remote and spotless laboratory, and get the report back overnight. The temptation is not new and it is perfectly understandable. But we cannot expect science to bear burdens that we ourselves won’t shoulder. Science does not reside outside the entanglements of human history, and it is when we delude ourselves into thinking that it does—by, say, treating people as bodily specimens when their lives and liberty are at stake—that we create monsters like Frankenstein’s.
If the fixture of Momus’s glass in the human breast…, had taken place,—…nothing more would have been wanting, in order to have taken a man’s character, but to have taken a chair and gone softly…, and looked in,—viewed the soul stark naked;—observed all her motions,—her machinations;—…then taken your pen and ink and set down nothing but what you have seen, and could have sworn to:—But this is an advantage not to be had by the biographer in this planet;—…our minds shine not through the body, but are wrapt up here in the dark covering of uncrystallized flesh and blood; so that, if we would come to the specific characters of them, we must go some other way to work….I will draw my Uncle Toby’s character from his HOBBY-HORSE.
—LAURENCE STERNE, THE LIFE AND OPINIONS OF TRISTRAM SHANDY, 1759
Our obsessions define us. They tug at the rhythmic trace of our life like the secret pull of our heart and the stifled hitch of our breath. They are the actions—both commonplace and idiosyncratic, willed and determined—that we cannot choose but repeat. Our obsessions make us who we are. At this historical remove, we cannot of course interrogate men like Keeler, Larson, and Marston or make them answer to their own device. There are no polygraphs for the dead. Instead, their portraits have been assembled according to the patient methods of the old-fashioned private eye: by collecting and corroborating the scattered traces their obsession has left us—their public pronouncements, personal correspondence, private diaries, and secondhand slanders—and reading each piece of evidence for its own emotional slant. And in the gap between their public acts and private selves we catch a glimpse of that equivocal form of self-consciousness that used to be called the soul.
To deceive is human. There has never been, nor ever will be, an honest society. And so long as we lack the means to quantify lies or weigh hypocrisies, we have no basis for supposing any society more dishonest than any other. Rather, what distinguishes a culture is how it copes with deceit: the sorts of lies it denounces, the sorts of institutions it fashions to expose them. Only in America has the campaign to expose lies taken a techno-scientific turn. The polygraph is a banal assemblage of medical technologies, a concatenation of physiological instruments available throughout the developed world for more than a century. Yet only in America has it been repurposed for interrogation.
The lie detector has thrived in America because the instrument played into one of the great projects of the twentieth century: the effort to transform the central moral question of our collective life—how to fashion a just society—into a legal problem. To do so, it drew its legitimacy from two noble half-truths about our political life: that democracy depends on transparency in public life, and that justice depends on equal treatment for all. Citizens of a nation founded on an explicit political contract rather than on a common history or shared kinship, Americans have aspired to resolve social conflicts with explicit public rules—regardless of any chicanery taking place behind the scenes. And lest anyone protest that these rules themselves are rigged, we have often tried to justify them in the name of science, itself considered the least arbitrary and most transparent form of rule making. Hence, perhaps, our propensity to treat deceit, the original sin of the social contract, with a redoubled dose of science.
These noble half-truths about democracy have dovetailed with two popular half-truths about science: that it offers an unerring method for piercing the mysteries of nature, and that it does so by eliminating the personal predilections of the investigator. In the case of the lie detector, these half-truths have been coupled to a novel twentieth-century assumption: that as creatures of nature, human beings express their thoughts and feelings in bodily terms. On this basis, the proponents of lie detection have packaged their technique as a mechanical oracle that can read the body’s hidden signs for evidence of deceit—while they sidestep the skeptical interpretive labor that scientists ordinarily demand of such claims. The lie detector and its progeny have been repeatedly denounced by respectable science—but since when has that stopped millions of Americans from believing in something, especially when the public media breathlessly extol its successes? To a nation eager for justice that is swift and sure, it hardly matters that the lie detector succeeds by pretense. The lie detector "works" and that is enough. It resolves cases, extracts confessions, assures fidelity, and underwrites credibility. In short, it provides answers—and nothing feels better. The lie detector is less a "technology of truth" than a "technology of truthiness."
Americans, it would seem, still concur with Saint Augustine and the authors of romance fiction in believing that the truth resides in the heart and that deceit—like self-consciousness—is produced in the gap between our inner feelings and our calculated speech. Even the advocates of the new neuroimaging techniques still try to detect deception by prying apart the liar’s divided self, although instead of pitting the autonomic heart against the willful brain, they set the various regions of the brain against one another. The implication is that an honest person does not just match word to deed, but to sentiment as well. From the era of the Puritans to the Age of Aquarius, Americans have not been satisfied with proving our worth with worldly deeds. We have demanded authenticity too: often from our neighbors, always from our leaders, and sometimes from ourselves. In sum, we are a nation of sentimental materialists. When Father Brown, G. K. Chesterton’s ordained detective, first encountered the American lie detector, he nearly laughed out loud: "Who but a Yankee would think of proving anything from heart-throbs? Why, they must be as sentimental as a man who thinks a woman is in love with him if she blushes."
We believe in the lie detector for all sorts of elevated reasons: because we long for a form of justice that is swift, certain, and noncoercive; because we can’t imagine how the soul could not be manifest, somehow, in the body; because we expect that science can and will pierce the veil of earthly appearances. But perhaps, in the end, these are all just excuses. We believe in the lie detector because it promises us a chance to peek through Momus’s window. What will we see when we look within? We believe in the lie detector because—no matter what respectable science says—we are tempted.