ERODING THE WALLS OF SUBJECTIVITY
Throughout this chapter, the one enduring philosophical idea, the one argument that seems most robust and least answerable by science, is Descartes’ notion that our consciousness is inevitably subjective. We can never be certain what anyone else experiences, and vice versa. We can replicate any physical property in the world—for instance, manufacture an identical PC on every continent—but we can’t replicate experiences, which are locked inside the head of their single owner. This assertion seems obvious and right, but does it also rely on a set of untested intuitions?
In Vernon, British Columbia, lives a large extended family. Within this busy household, there is a pair of four-year-old identical twin girls, Tatiana and Krista Hogan, who are in many ways like any girls their age. They can be cheeky and playful, and sweet and caring, and when they get tired, they are just as talented as any other four-year-olds at becoming fractious and demanding. What makes them unique is that they are joined at the head and brain, with one twin forced always to face away from the other. Crucially, a neural bridge seems to connect their thalami. The thalamus is one of the most central and important regions of the brain; among other things, it acts as a sensory relay station. Although no rigorous scientific studies have so far been carried out on Tatiana and Krista, the anecdotal reports are utterly tantalizing. For instance, if you cover one girl’s eyes and show a teddy bear to the other girl, the unsighted child can identify the toy. If you touch one girl, the other can point to where her twin has been touched. Tatiana hates ketchup, while Krista likes it, so when Krista eats something with ketchup on it, Tatiana sometimes tries to scrape it off her own tongue. Occasionally one twin will silently sense the thirst of the other and reach for a cup of water to hand to her conjoined sibling. One sibling, therefore, seems able to sense the vision, touch, taste, and even desires of the other. Most remarkably, each of the siblings appears to distinguish between those experiences that belong to herself and those that belong to her sister—though on rare occasions, such as the ketchup incident, working out whether an experience belongs to oneself or to one’s sister can be a confusing matter.
This striking case could suggest that Descartes was not completely right about the perfect prison of subjectivity, for in this example, one person is indeed privy to the subjective experiences of another. How much further could you go, in principle, to merge your consciousness with someone else’s?
Returning to Professor Nao, we could raise the intriguing question of whether subjectivity need be private or special at all. Imagine what would happen if a few other people had their own conversions into silicon form, so that each person had a collection of chips dedicated to capturing and continuing their brain activity exactly. Imagine also that the programming unique to each individual and the record of the activity of every silicon node for each person were stored for posterity on vast hard drives. This immediately allows for a recreation of the Tatiana and Krista situation, with the sensory input from one person being fed via computer linkup into another’s mind. But perhaps you could go a lot further. Perhaps more than senses could be combined—if conscious thoughts were shared between computers, could you hear the thoughts of another? Could you mentally become a double person—or more than double? Once one’s mentality is in digital form as a series of algorithms in the computer, and everything boils down to information, a host of possibilities arises for how that information could be shared, each one breaking—or, more accurately, expanding—the walls of subjectivity.
Perhaps it would even be possible for one person to have his silicon mind gradually, over a few seconds, turn into the mind of another, maybe a long-dead relative, to explore the personality of that other person, relive that person’s experiences, become subject to that person’s belief system, and so on, all via a computer algorithm that morphed his brain simulation into that of the other. This could last a minute or two, after which he’d revert to himself, but with some vague memory of what he’d just experienced inserted back into his own silicon mind. It would be an incredibly unnerving experience to have everything about you—your personality and all your memories—dissolve and be replaced by someone else’s for a short time. But this possibility indicates again just how effectively the solidity of personal experience could in principle be transformed into something more fluid. If you fully explore the idea that our minds could be merely a kind of physical computer, all kinds of possible scenarios open up.
Of course, I’m now also guilty of indulging in various wild thought experiments without fleshing out the details. But I’m simply trying to show that another seemingly watertight argument, that of the impenetrability of subjectivity, perhaps instead rests on weak intuitions, and that the alternative is plausible.
Human consciousness will appear inexorably subjective if we assume that consciousness is a mysterious entity, immune to the penetrative eye of science. But if we instead assume that consciousness is actually a process created by the biological computer of our brain, whose driving purpose is to process information, like any other computer around, then we can start to demystify both consciousness and subjectivity. If, following on from this, we are open to the possibility that we can make significant scientific and technological progress concerning consciousness, then who’s to say that subjectivity must remain an inevitable feature of consciousness, rather than an accidental component, one that could potentially be corrected in various ways?
At the end of the day, therefore, even this last remaining philosophical mystery may dissolve. Instead of being a permanent, impenetrable barrier to the scientific exploration of consciousness, subjectivity might only reflect our lack of deep understanding, as yet, of how our brains process information and our current lack of technological expertise in capturing and manipulating that information.
Ultimately, the philosophical arguments summarized in this chapter claiming to show that consciousness cannot exist in a physical, computational brain fail not only because they neglect the details of how the brain actually functions, but also because, however watertight they at first appear, they rest on intuitions. While intuitions can be a useful starting point in many topics, they should never be the endpoint. I believe instead that provisional ideas should inspire scientific investigation, where more solid answers lie.
OUR INDOMITABLE SPIRIT
When I was a child, my father read me bizarre, fantastical bedtime stories with vibrant characters, invariably set on alien worlds. One obscure, ailing, tatty book that utterly transfixed me was The Space Willies by the British writer Eric Frank Russell. The subtitle of the book, You Can’t Keep an Earthman Down, aside from capturing the plot of the novel perfectly, completely summarized, to my mind, what makes humanity so potentially incredible. For me, hidden in that one phrase was a surprisingly complex emotion: that of being unblinkingly positive, absolutely goal-focused, totally confident in one’s ingenuity to escape the tightest of traps, and even relishing the chance to exercise that ingenuity.
The novel, admittedly, was somewhat contrived and no doubt was dated even in my childhood, but it’s also so funny—and so well executed—that you hardly notice such failings. It concerns a chronically nonconformist army pilot, Leeming, who crashes a spaceship behind enemy lines. He is soon captured by his lizard-like enemies and interned in a prisoner-of-war camp, where he is the only human. The situation looks bleak for Leeming, but he has one trait that makes him far superior to his jailers: guile. Leeming soon hatches an ingenious, if improbable, plan. He begins to spread a rumor that he, like all Earthmen, has a secret, shadow-like, but ever so powerful and vengeful twin. He jury-rigs a device out of a twisted piece of wire and a shabby wooden block and starts to hint surreptitiously that he can use this ultra-sophisticated contraption to communicate with his remote twin and call an attack at any time. Not only this, but he suggests that even the main allies of his captors have similar, secret doppelgangers, called “willies,” who could turn nasty in the blink of a reptilian eye. At first his guards are skeptical, but then they start sending out spies, asking humans if their enemies “have the willies.” Obviously the answers are a hearty assent, along with the optimistic conviction that their enemies will only have more willies as the battle continues. Following some beautiful finessing of the situation by Leeming, and some fortuitously timed catastrophes befalling the prison guards, these rumors slowly grow to such gargantuan proportions that his captors, for their own safety, do all they can to release him and send him back to Earth. Eventually, the whole enemy alliance collapses under the weight of this single rumor.
In a roundabout way, this story taught my younger self that in any apparently insoluble situation, human ingenuity can successfully forge a path through various imposing barriers. The history of the study of consciousness has exemplified this proud trait, but it has also highlighted other, more frustrating aspects of our collective character. For centuries we have been overly influenced by viewpoints from many quarters defending the intractability of consciousness to science. As this chapter has shown, many modern philosophers have echoed this position, producing a multitude of arguments for why a scientific approach to awareness may be pointless. For much of the history of psychology, even scientists jumped on this defeatist bandwagon and avoided anything close to the study of consciousness, assuming it was simply unavailable to experimentation. For instance, George Miller, one of the most prominent experimental psychologists of the past century, suggested of consciousness in 1962 that “we should ban the word for a decade or two.”
Luckily, starting about a generation ago, we have also had scientists who shared Leeming’s personality. They cheerfully ignored the cries from their colleagues that consciousness was the most insoluble mystery in the universe, and plowed on with a positive, exploratory attitude—just for the hell of it, to see what they could find. Such stories have been repeated myriad times in the history of science, with unscientific convictions that a topic lies beyond our understanding dissolving into fascinating scientific advances. But this time the situation is unique; this time the topic is the very heart of what it is to be human.
From now on, I’ll be abandoning philosophy. Instead, I’ll focus on the success story of the science of consciousness. I’ll describe how that brave, curious leap into the unknown has produced a cornucopia of fascinating evidence for what consciousness actually is, and how the brain generates our experiences.
2
A Brief History of the Brain
Evolution and the Science of Thought
THE FIRST LESSON IN NATURE IS FAILURE
Soon after I started my PhD at Cambridge’s Medical Research Council (MRC) Cognition and Brain Sciences Unit in 1998, the director of the department, William Marslen-Wilson, came into my office. A tall man with dark, slightly graying hair and a kindly face, he chatted amiably with me for a few minutes, welcoming me to the department, which I was touched by—aside from the fact that he kept calling me various wrong names (he turns name confusion into an art form). Then, as he turned to go, he paused at the door and, with a whimsical smile, said, “Remember, David, the first lesson in science is failure.” I took little notice of this rather mysterious piece of advice until I carried out my first ill-fated experiment, when, sure enough, my first lesson in science was failure.
Failures are an inevitable part of the process of doing science. As scientists, we are professionally trying to track the truth. We need to explore many different ideas in a creative, directed way in order to inch closer to what’s really occurring in nature. Quite a few of those ideas have to be wrong, particularly if you take the scientific community as a whole, with its millions of competing scientists, many with differing views.
Consider, for instance, that for much of scientific history, it was believed that the universe was bathed in an amorphous substance known as the ether. Even by the end of the nineteenth century there was near universal acceptance of the idea of a “luminiferous ether,” a medium to support the transmission of light waves across the vast expanses of space. Around the turn of the twentieth century, meticulous experiments carried out by Albert Michelson and Edward Morley, along with theoretical work by Albert Einstein, made this notion of a ubiquitous supporting substance untenable, and we now see “luminiferous ether” as a quaint, extinct theory.
In fact, calling long-rejected scientific theories “extinct” is a more apt metaphor than it might superficially appear. The similarities between the scientific method and biological evolution are surprisingly close because of the common underlying theme of information. The scientific method is concerned with data almost by definition. But perhaps not so obvious is the fact that the progression of scientific thinking is an evolutionary process: a preexisting idea mutates unexpectedly into a profound new theory, which captures something deep about the world, and gathers popularity, but always in competition with an array of differing hypotheses. It will continue to survive only if the proponents of rival theories fail to explain the world more accurately or to convince the collective scientific community to bank on their ideas instead. In this way, various species of potentially useful information about our universe may emerge, thrive, and eventually die out, as if they were real biological species.