CHAPTER THREE
OF MIND AND MACHINES
PHILOSOPHICAL MIND EXPERIMENTS
“I am lonely and bored; please keep me company.”
If your computer displayed this message on its screen, would that convince you that your notebook is conscious and has feelings?
Well, clearly not; it is rather trivial for a program to display such a message. The message actually comes from the presumably human author of the program that includes the message. The computer is just a conduit for the message, like a book or a fortune cookie.
Suppose we add speech synthesis to the program and have the computer speak its plaintive message. Have we changed anything? While we have added technical complexity to the program, and some humanlike communication means, we still do not regard the computer as the genuine author of the message.
Suppose now that the message is not explicitly programmed, but is produced by a game-playing program that contains a complex model of its own situation. The specific message may never have been foreseen by the human creators of the program. It is created by the computer from the state of its own internal model as it interacts with you, the user. Are we getting closer to considering the computer as a conscious, feeling entity?
Maybe just a tad. But if we consider contemporary game software, the illusion is probably short-lived as we gradually figure out the methods and limitations behind the computer’s ability for small talk.
Now suppose the mechanisms behind the message grow to become a massive neural net, built from silicon but based on a reverse engineering of the human brain. Suppose we develop a learning protocol for this neural net that enables it to learn human language and model human knowledge. Its circuits are a million times faster than human neurons, so it has plenty of time to read all human literature and develop its own conceptions of reality. Its creators do not tell it how to respond to the world. Suppose now that it says, “I’m lonely ... ”
At what point do we consider the computer to be a conscious agent with its own free will? These have been the most vexing problems in philosophy since the Platonic dialogues illuminated the inherent contradictions in our conception of these terms.
Let’s consider the slippery slope from the opposite direction. Our friend Jack (circa some time in the twenty-first century) has been complaining of difficulty with his hearing. A diagnostic test indicates he needs more than a conventional hearing aid, so he gets a cochlear implant. Once used only by people with severe hearing impairments, these implants are now commonly used to restore people’s hearing across the entire sonic spectrum. This routine surgical procedure is successful, and Jack is pleased with his improved hearing.
Is he still the same person?
Well, sure he is. People have cochlear implants circa 1999. We still regard them as the same person.
Now (back to circa sometime in the twenty-first century), Jack is so impressed with the success of his cochlear implants that he elects to switch on the built-in phonic-cognition circuits, which improve overall auditory perception. These circuits are already built in so that he does not require another insertion procedure should he subsequently decide to enable them. By activating these neural-replacement circuits, the phonics-detection nets built into the implant bypass his own aging neural-phonics regions. His cash account is also debited for the use of this additional neural software. Again, Jack is pleased with his improved ability to understand what people are saying.
Do we still have the same Jack? Of course; no one gives it a second thought.
Jack is now sold on the benefits of the emerging neural-implant technology. His retinas are still working well, so he keeps them intact (although he does have permanently implanted retinal-imaging displays in his corneas to view virtual reality), but he decides to try out the newly introduced image-processing implants, and is amazed how much more vivid and rapid his visual perception has become.
Still Jack? Why, sure.
Jack notices that his memory is not what it was, as he struggles to recall names, the details of earlier events, and so on. So he’s back for memory implants. These are amazing—memories that had grown fuzzy with time are now as clear as if they had just happened. He also struggles with some unintended consequences as he encounters unpleasant memories that he would have preferred to remain dim.
Still the same Jack? Clearly he has changed in some ways and his friends are impressed with his improved faculties. But he has the same self-deprecating humor, the same silly grin—yes, it’s still the same guy.
So why stop here? Ultimately Jack will have the option of scanning his entire brain and neural system (which is not entirely located in the skull) and replacing it with electronic circuits of far greater capacity, speed, and reliability. There’s also the benefit of keeping a backup copy in case anything happens to the physical Jack.
Certainly this specter is unnerving, perhaps more frightening than appealing. And undoubtedly it will be controversial for a long time (although according to the Law of Accelerating Returns, a “long time” is not as long as it used to be). Ultimately, the overwhelming benefits of replacing unreliable neural circuits with improved ones will be too compelling to ignore.
Have we lost Jack somewhere along the line? Jack’s friends think not. Jack also claims that he’s the same old guy, just newer. His hearing, vision, memory, and reasoning ability have all improved, but it’s still the same Jack.
However, let’s examine the process a little more carefully. Suppose rather than implementing this change a step at a time as in the above scenario, Jack does it all at once. He goes in for a complete brain scan and has the information from the scan instantiated (installed) in an electronic neural computer. Not one to do things piecemeal, he upgrades his body as well. Does making the transition at one time change anything? Well, what’s the difference between changing from neural circuits to electronic/photonic ones all at once, as opposed to doing it gradually? Even if he makes the change in one quick step, the new Jack is still the same old Jack, right?
But what about Jack’s old brain and body? Assuming a noninvasive scan, these still exist. This is Jack! Whether the scanned information is subsequently used to instantiate a copy of Jack does not change the fact that the original Jack still exists and is relatively unchanged. Jack may not even be aware of whether or not a new Jack is ever created. And for that matter, we can create more than one new Jack.
If the procedure involves destroying the old Jack once we have conducted some quality-assurance steps to make sure the new Jack is fully functional, does that not constitute the murder (or suicide) of Jack?
Suppose the original scan of Jack is not noninvasive, that it is a “destructive” scan. Note that technologically speaking, a destructive scan is much easier—in fact we have the technology today (1999) to destructively scan frozen neural sections, ascertain the interneuronal wiring, and reverse engineer the neurons’ parallel digital-analog algorithms.[1]
We don’t yet have the bandwidth to do this quickly enough to scan anything but a very small portion of the brain. But the same speed issue existed for another scanning project—the human genome scan—when that project began. At the speed that researchers were able to scan and sequence the human genetic code in 1991, it would have taken thousands of years to complete the project. Yet a fourteen-year schedule was set, which it now appears will be successfully realized. The Human Genome Project deadline obviously made the (correct) assumption that the speed of our methods for sequencing DNA codes would greatly accelerate over time. The same phenomenon will hold true for our human-brain-scanning projects. We can do it now—very slowly—but that speed, like most everything else governed by the Law of Accelerating Returns, will get exponentially faster in the years ahead.
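The arithmetic behind the genome-project comparison can be sketched in a few lines. This is a toy illustration with made-up numbers (the work units, starting rate, and annual doubling factor are all assumptions for illustration, not actual sequencing data): a task that would take ten thousand years at a fixed starting speed finishes in about fourteen years if that speed doubles each year.

```python
# Toy model: total work that takes 10,000 years at the year-one rate.
# All constants are illustrative assumptions, not genome-project data.
TOTAL_WORK = 10_000.0     # arbitrary units of work to complete
START_RATE = 1.0          # units completed in the first year
GROWTH = 2.0              # assumed speed-doubling factor per year

def years_to_finish(total, rate, growth):
    """Count years until cumulative work meets the total,
    with the yearly rate multiplying by `growth` each year."""
    done, years = 0.0, 0
    while done < total:
        done += rate
        rate *= growth
        years += 1
    return years

# At a constant rate: 10,000 / 1.0 = 10,000 years.
constant_years = TOTAL_WORK / START_RATE

# With doubling: 2^0 + 2^1 + ... + 2^13 = 16,383 >= 10,000,
# so the job finishes in year 14.
accelerating_years = years_to_finish(TOTAL_WORK, START_RATE, GROWTH)
```

The point of the sketch is that almost all the work happens in the last few years, which is why a fourteen-year deadline set against a "thousands of years" starting pace was not unreasonable.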
Now suppose as we destructively scan Jack, we simultaneously install this information into the new Jack. We can consider this a process of “transferring” Jack to his new brain and body. So one might say that Jack is not destroyed, just transferred into a more suitable embodiment. But is this not equivalent to scanning Jack noninvasively, subsequently instantiating the new Jack, and then destroying the old Jack? If that sequence of steps basically amounts to killing the old Jack, then this process of transferring Jack in a single step must amount to the same thing. Thus we can argue that any process of transferring Jack amounts to the old Jack committing suicide, and that the new Jack is not the same person.
The concept of scanning and reinstantiation of the information is familiar to us from the fictional “beam me up” teleportation technology of Star Trek. In this fictional show, the scan and reconstitution is presumably on a nanoengineering scale, that is, particle by particle, rather than just reconstituting the salient algorithms of neural-information processing envisioned above. But the concept is very similar. Therefore, it can be argued that the Star Trek characters are committing suicide each time they teleport, with new characters being created. These new characters, while essentially identical, are made up of entirely different particles, unless we imagine that it is the actual particles being beamed to the new destination. Probably it would be easier to beam just the information and use local particles to instantiate the new embodiments. Should it matter? Is consciousness a function of the actual particles or just of their pattern and organization?
We can argue that consciousness and identity are not a function of the specific particles at all, because our own particles are constantly changing. On a cellular basis, we change most of our cells (although not our brain cells) over a period of several years.[2]
On an atomic level, the change is much faster than that, and does include our brain cells. We are not at all permanent collections of particles. It is the patterns of matter and energy that are semipermanent (that is, changing only gradually), but our actual material content is changing constantly, and very quickly. We are rather like the patterns that water makes in a stream. The rushing water around a formation of rocks makes a particular, unique pattern. This pattern may remain relatively unchanged for hours, even years. Of course, the actual material constituting the pattern—the water—is totally replaced within milliseconds. This argues that we should not associate our fundamental identity with specific sets of particles, but rather the pattern of matter and energy that we represent. This, then, would argue that we should consider the new Jack to be the same as the old Jack because the pattern is the same. (One might quibble that while the new Jack has similar functionality to the old Jack, he is not identical. However, this just dodges the essential question, because we can reframe the scenario with a nanoengineering technology that copies Jack atom by atom rather than just copying his salient information-processing algorithms.)
Contemporary philosophers seem to be partial to the “identity from pattern” argument. And given that our pattern changes only slowly in comparison to our particles, there is some apparent merit to this view. But the counter to that argument is the “old Jack” waiting to be extinguished after his “pattern” has been scanned and installed in a new computing medium. Old Jack may suddenly realize that the “identity from pattern” argument is flawed.
MIND AS MACHINE VERSUS MIND BEYOND MACHINE
Science cannot solve the ultimate mystery of nature because in the last analysis we are part of the mystery we are trying to solve.
—Max Planck
Is all that we see or seem but a dream within a dream?
—Edgar Allan Poe
What if everything is an illusion and nothing exists? In that case, I definitely overpaid for my carpet.
—Woody Allen
The Difference Between Objective and Subjective Experience
Can we explain the experience of diving into a lake to someone who has never been immersed in water? How about the rapture of sex to someone who has never had erotic feelings (assuming one could find such a person)? Can we explain the emotions evoked by music to someone congenitally deaf? A deaf person will certainly learn a lot about music: watching people sway to its rhythm, reading about its history and role in the world. But none of this is the same as experiencing a Chopin prelude.
If I view light with a wavelength of 0.000075 centimeters, I see red. Change the wavelength to 0.000035 centimeters and I see violet. The same colors can also be produced by mixing colored lights. If red and green lights are properly combined, I see yellow. Mixing pigments works differently from changing wavelengths, however, because pigments subtract colors rather than add them. Human perception of color is more complicated than mere detection of electromagnetic frequencies, and we still do not fully understand it. Yet even if we had a fully satisfactory theory of our mental process, it would not convey the subjective experience of redness, or yellowness. I find language inadequate for expressing my experience of redness. Perhaps I can muster some poetic reflections about it, but unless you’ve had the same encounter, it is really not possible for me to share my experience.