The Illusion of Conscious Will

Daniel M. Wegner

The Rule and the Exception

The central idea of this chapter has been to explore the automatisms and study why and when they occur. The automatisms have a special place in the analysis of apparent mental causation, of course, because they represent a class of instances in which apparent mental causation fails. This means that if conscious will is illusory, automatisms are somehow the “real thing,” fundamental mechanisms of mind that are left over once the illusion has been stripped away. Rather than conscious will being the rule and automatism the exception, the opposite may be true: Automatism is the rule, and the illusion of conscious will is the exception.

The problem of explaining anomalies exists whether we assume conscious will or deny it. If we begin with “voluntary behavior can occur without conscious will” and so accept that people construct an experience of will for much of their voluntary action, we must then explain the common co-occurrence of intention and action. Where does intention come from, and why does it predict action so nicely so much of the time? On the other hand, if we begin with “all voluntary behavior is consciously willed,” then we must explain the many counterexamples— anomalies such as the automatisms and ideomotor cases when apparently voluntary behavior occurs without signs of conscious will. In either event, we must work to explain the exceptions.

And, unfortunately, it has to be one way or the other. Either the automatisms are oddities against the general backdrop of conscious behavior causation in everyday life, or we must turn everything around quite radically and begin to think that behavior that occurs with a sense of will is somehow the odd case, an add-on to a more basic underlying system. Now, people have been treating automatisms as exceptions for many years and relegating them to the status of mere oddity. And at the same time, conscious will has been elevated to the status of truism. The theory of apparent mental causation suggests it may be time for a reversal. If we transpose the assumptions completely and take the approach that voluntariness is what must be explained, we immediately begin to see some light. The automatisms and ideomotor effects become models of how thought can cause action in all those moments in everyday life when we don’t seem to be in conscious control. Knowing this, we can then focus on how the experience of conscious will is wrapped around our voluntary actions in normal behavior causation and stripped away from some few acts by conditions that reveal them in their nakedness as unwilled.

5

Protecting the Illusion

The illusion of will is so compelling that it can prompt the belief that acts were intended when they could not have been. It is as though people aspire to be ideal agents who know all their actions in advance.

My drawings have been described as pre-intentionalist, meaning that they were finished before the ideas for them had occurred to me. I shall not argue the point.

James Thurber (1960)

By some rules in the game of pool, you have to call your shots. You have to say where the ball is going to go, and sometimes even how it will get there (“Thirteen in the corner pocket off the nine”). This prevents you from claiming you meant to do that when half a dozen balls drop into pockets on a lucky shot. Life, however, is not always played by this rule. For some amount of what we do every day, our conscious intentions are vague, inchoate, unstudied, or just plain absent. We just don’t think consciously in advance about everything we do, although we try to maintain appearances that this is the case. The murkiness of our intentions doesn’t seem to bother us much, though, as we typically go right along doing things and learning only at the time or later what it is we are doing. And, quite remarkably, we may then feel a sense of conscious will for actions we did not truly anticipate and even go on to insist that we had intended them all along.

The fact is, each of us acts in response to an unwieldy assortment of mental events, only a few of which may be easily brought to mind and understood as conscious intentions that cause our action. We may find ourselves at some point in life taking off a shoe and throwing it out the window, or at another point being sickeningly polite to someone we detest. At these junctures, we may ask ourselves, What am I doing? or perhaps sound no alarms at all and instead putter blithely along assuming that we must have meant to do this for some reason. We perform many unintended behaviors that then require some artful interpretation to fit them into our view of ourselves as conscious agents. Even when we didn’t know what we were doing in advance, we may trust our theory that we consciously will our actions and so find ourselves forced to imagine or confabulate memories of “prior” consistent thoughts. These inventions become rewritings of history, protestations that “I meant to knock down all those balls,” when we truly had not called the shot at all.

This chapter examines how people protect the illusion of conscious will. People do this, it seems, because they have an ideal of conscious agency that guides their inferences about what they must have known and willed even when they perform actions that they did not intend. The chapter focuses first on this ideal agent. We examine the basic features of agency and then look at how people fill in these features based on their conception of the ideal. The expectancy that intention must be there, even when the action is wholly inscrutable, can lead people to infer that they intended even the most bizarre of actions. As a starting point, we look at the explanations people give for the odd acts they can be led to perform through posthypnotic suggestion. The ability to discern what might have been intended is something that people gain as they develop, so we look next at the development of the idea of intention. Then, we turn to the circumstance that first prompts the protection of the idea of will: unconscious action. When people’s actions are caused unconsciously, they depend on their ideal of agency to determine what they have done. Several theories in psychology—cognitive dissonance, self-perception, and the left brain interpreter theory—have focused on the way in which people fill in what might have been consciously intended even after one of these unconsciously caused actions is over.

The Ideal Agent

We perceive minds by using the idea of an agent to guide our perception. In the case of human agency, we typically do this by assuming that there is an agent that pursues goals and that the agent is conscious of the goals and will find it useful to achieve them. All this is a fabrication, of course, a way of making sense of behavior. It works much better, at least as a shorthand, than does perceiving only mechanistic causation. As in the case of any constructed entity (such as the idea of justice or the idea of the perfect fudge), the ideal can serve as a guide to the perception of the real that allows us to fill in parts of the real that we can’t see. We come to expect that human agents will have goals and that they know consciously what the goals are before they pursue them. This idealization of agency serves as the basis for going back and filling in such goal and intention knowledge even when it doesn’t exist. Eventually, however, this strategy leads us to the odd pass of assuming that we must have been consciously aware of what we wanted to do in performing actions we don’t understand or perhaps even remember—just to keep up the appearance that we ourselves are agents with conscious will.

The Architecture of Agency

An agent is anything that perceives its environment and acts on that environment. This means that an agent is not just someone who sells you life insurance or real estate. True, these people perceive things and act on them, and they are human beings, but humanness is not necessary for agency. Animals, plants, and many robots can be agents, too, as can some processes that take place in computers. Software agents are commonplace in several kinds of programming, and the essentials of agency are nicely set out in the study of artificial intelligence (AI). Whole textbooks on AI focus on the concept of rational agency, on programs that get things done (e.g., Russell and Norvig 1995). It turns out that when you want to build an agent that does things, you need merely have three basic parts—a sensor, a processor, and an effector. In the hoary marmot, for example, these might be the nose, the brain, and the legs; in a robotic AI agent, they might be a light-sensitive diode, a processing circuit, and a motor attached to a flag. The marmot’s action might be to sniff something and approach it; the AI agent’s action might be to run the flag up the pole at dawn.

There are good and bad agents. Consider, for example, the thermostat in the typical hotel room. This is usually a dial on a window unit that you twist one way for “cooler” and the other way for “warmer.” With luck, there might also be a “low fan”/“high fan” switch. In order to get the temperature of the room right, you basically need to stay up all night fiddling with the controls. Unlike the normal home thermostat that takes a setting of 72° F and keeps the temperature there, the hotel unit can only be set for different from now. This thermostat is a bad agent. It can’t be aimed at a desired goal and instead can only be set to avoid what is at present undesired. It does have a goal: it will shut off when it has gotten to the point on its dial that it discerns is “cooler” or “warmer,” but it is very coy about this. It cannot even tell you where it is going. If I were a thermostat, I’d want to be the house, not the hotel, variety (Powers 1973; 1990).

When we make comparisons between agents, we imply that there is some sort of standard or ideal agent by which all of them are measured. This ideal agent does things in an ideal way, and it is this ideal to which we compare our own agency—and often come up wanting. What is an ideal agent like? The simple answer is that it is God. After all, God can do anything, and most religious people are ready to claim that God does things in the best way. So the traditional Judeo-Christian view of God, who is omniscient, omnipotent, and benevolent, captures the image of an ideal agent very well. This agent knows all (so it has perfect sensors), can do all (so it has perfect effectors), and always acts correctly (so it has a perfect processor). No wonder people pray. Getting an agent like this on your side certainly seems worth the effort. Meanwhile, though, even if we can’t always get God to talk back, we can aspire to be God-like. We can hope to be ideal agents and compare ourselves to this standard. Just as God has a will, we are ideal agents to the extent that we have wills as well. In his fourth Meditation, Descartes (1647) reflects on this: “It is chiefly my will which leads me to discern that I bear a certain image and similitude of Deity.”

Other than acting perfectly, what does an ideal agent do? As it happens, there are a variety of characteristics that agents might look for in themselves, or in each other, as indications of an approach toward the ideal agent. In an analysis of How to Build a Conscious Machine, Angel (1989) suggests that agents look for a variety of qualities in themselves and others as a way of learning whether or not an agent is even present. He calls these “basic interagency attributions,” and suggests that they may include indications of agentive presence (“That is an agent”), an agent’s focus of attention (“That agent is looking toward the kitchen”), belief (“That agent believes there is food in the kitchen”), desire (“That agent is hungry”), plan (“That agent plans to go to the kitchen and open the refrigerator”), and agent’s movement (“That agent walked to the kitchen and opened the refrigerator”).
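Angel's six attributions amount to a structured record that one agent fills in about another. A sketch of that record as a data structure, with field names paraphrasing the attributions and the example values invented from the kitchen scenario above:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative encoding of the six "basic interagency attributions".
# Field names paraphrase Angel (1989); the values below are invented.

@dataclass
class InteragencyAttribution:
    agentive_presence: bool            # "That is an agent"
    focus_of_attention: Optional[str]  # "looking toward the kitchen"
    belief: Optional[str]              # "believes there is food in the kitchen"
    desire: Optional[str]              # "is hungry"
    plan: Optional[str]                # "plans to open the refrigerator"
    movement: Optional[str]            # "walked to the kitchen"

observed = InteragencyAttribution(
    agentive_presence=True,
    focus_of_attention="the kitchen",
    belief="there is food in the kitchen",
    desire="hunger",
    plan="go to the kitchen and open the refrigerator",
    movement="walked to the kitchen and opened the refrigerator",
)
```

The optional fields capture the point of the passage: a perceiver may register that an agent is present while leaving belief, desire, or plan blank, and the ideal of agency is what tempts us to fill those blanks in anyway.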

An ideal agent should probably have all of these things and more. To be sure, however, it must have intention and conscious will. Just like a thermostat that doesn’t know what temperature it is set for, a person who doesn’t know what he or she is intending to do can’t be much of an agent. This is particularly true if the person is doing something. People who can’t answer the question What are you doing? are generally considered asleep, drugged, or crazy. Knowing what it is doing is a highly valued characteristic in an agent, and the aspiration to be an ideal agent must drive people to claim such knowledge a great deal of the time. It may, in fact, push them to claim they did things intentionally when this is provably false.

Posthypnotic Suggestions

A fine example of such filling in of intentions occurs in some responses to posthypnotic suggestion. People who have been hypnotized can be asked to follow some instruction later, when they have awakened (When I touch my nose, you will stand up, pinch your cheeks, and say “I feel pretty, oh so pretty”). And in some instances, such posthypnotic suggestions will be followed with remarkable faithfulness.1 In one example, Moll (1889, 152) recounted saying to a hypnotized woman, “After you wake you will take a book from the table and put it on the bookshelf.” She awoke and did what he had told her. He asked her what she was doing when this happened, and she answered, “I do not like to see things so untidy; the shelf is the place for the book, and that is why I put it there.” Moll remarked that this subject specifically did not recall that she had been given a posthypnotic suggestion. So, embarking on a behavior for which no ready explanation came to mind, she freely invented one in the form of a prior intention.

1. Of course, only about 10 to 20 percent of the population can be said to be highly susceptible to hypnosis (Hilgard 1965), and even within this group the susceptibility to posthypnotic suggestion varies. But such suggestions can work. Several historic reports announced this phenomenon (Bernheim 1889; Moll 1889; Richet 1884), and more modern assessments of posthypnotic responses concur with the early research (Edwards 1965; Erickson and Erickson 1941; Sheehan and Orne 1968).
