The Illusion of Conscious Will


Author: Daniel M. Wegner



6. The issue of conscious will is taken up in detail in the American Law Institute's (1995) Model Penal Code (Article 2, "General Principles of Liability"), where the requirements for calling an act "voluntary" are said to involve "inquiry into the mental state of the actor" (216). In a major text on criminal law, for instance, criminal responsibility is said to be precluded in situations involving "disturbances of consciousness in persons who retain the capacity to engage in goal-directed conduct based on prior learned responses" (Bonnie et al. 1997, 107). The problem of defining consciousness and mental states is nevertheless one of continual debate and contention in the law (e.g., Keasey and Sales 1977).

Both the legal and the religious free will theories assume that the person's experience of conscious will is a direct sensation of the actual causal relation between the person's thought and action.
This is the point at which the theory of apparent mental causation diverges from these theories. Apparent mental causation suggests that the experience of consciously willing an act is merely a humble estimate of the causal efficacy of the person’s thoughts in producing the action. Conscious will is the mind’s way of signaling that it might have been involved in causing the action. The person’s experience of doing the act is only one source of evidence regarding the actual force of the person’s will in causing the action, however, and it may not even be the best source.

The gold standard of evidence here would be a scientific experiment in which we set up replicable conditions: The person would be led to think of the act in the exact circumstances in which it was performed, and we would observe repeatedly in these circumstances whether or not the act was done. These conditions are largely impractical, however, and often impossible to produce—we’d have great difficulty wiping the person’s mind clean of memory for all the experiments each time we wanted to check again whether the thought caused the action. So we fall back to the tin standard: We gather evidence from multiple sources about the causation of the person’s act. The person’s experience of conscious will is only one of these sources, not the definitive one.

In the law and in religion, this wishy-washy approach to reports of will makes everyone deeply uncomfortable. It would be nice, after all, to have an infallible source of information about whether the person caused the action. When life or death, or life after death, are at stake, people aspire to this ideal and rely very heavily on the person’s reports of the feeling of conscious will, sometimes ignoring in the process better evidence of the role of the force of will in the action. We look to confessions, to expressions of intent, and to other hints of conscious will as indications that a person’s mind indeed caused the action.

The reports people make of what they were thinking and experiencing when they committed crimes (or sins) are notoriously unreliable, however. Culprits often deny entirely having conscious thoughts about a crime in advance or deny having an experience of conscious will while performing the crime. The former U.S. Housing Secretary Henry Cisneros, for example, explained why he lied to the FBI about payments to his mistress by saying, "I've attributed it to the pressure and confused sort of fog of the moment where I gave an incorrect number" (Newsweek, September 20, 1999, 21).

The same fog of the moment is apparent in the statement of Lonnie Weeks, convicted killer of a state trooper who had pulled him over for speeding: "As I stepped out of the car, it was just like something just took over me that I couldn't understand. . . . I felt like it was evil, evil spirit or something. That's how I feel. That's the way I describe it" (Associated Press, September 2, 1999). Another twisted sense of authorship of a crime was conveyed by Mitchell Johnson, one of two boys in jail for the Jonesboro, Arkansas, school shootings in which five people were killed in March 1998: "I honestly didn't want anyone to get hurt. You may not think of it like this, but I have the same pain y'all have. I lost friends like you did. The only difference is, I was the one doing the killing" (Cuza 1999, 33). And perhaps the densest fog is reported by an anonymous respondent in the Feeney (1986) survey of robbers: "I have no idea why I did this" (57).

One view of foggy accounts of crimes is that criminals are ashamed of their actions after the fact and won’t admit conscious will. Maybe they try to divorce themselves from the acts by lying about the mental states they had during the action. This makes particular sense if there is something to be gained by the lying, and often there may be some such benefit. The matter-of-fact, concrete accounts of their acts of murder given by German police and soldiers following the Holocaust of World War II, for instance, reflect little experience of conscious will and more of the “I was only following orders” logic we have come to expect of people who are obeying commands (e.g., Browning 1992). Underplaying the sense of conscious will makes good sense if one is hoping to avoid moral condemnation for the sin or retribution for the crime. However, even among people who are unrepentant or have little to gain by disavowing their complicity, there is a widespread tendency to describe crimes and morally reprehensible actions in benign, mindless terms (Katz 1988; Schopp 1991; Wegner and Vallacher 1986).

The causes of evil acts are often only poorly represented in a person's conscious mind. Studies of how criminals choose what crime opportunities to pursue, for example, suggest that they regularly lack insight into the variables that ultimately influence their judgment (e.g., Cornish and Clarke 1986). A criminal may report having robbed a store, for example, because there were no TV surveillance cameras and he hadn't seen police nearby—when research has demonstrated that the strongest consideration influencing most robbers' choice of crime opportunities is the size of the likely haul (Carroll 1978). Such lack of insight can even extend to whether a crime was committed, particularly when the offender is drunk, drugged, sleepwalking, insane, or otherwise incapacitated. People may claim they did nothing at all. In this light, although it makes sense to ask people whether they willed their actions, their answers would not seem to be the sole basis for a sound moral judgment of the role of their thoughts in causing their actions. Personal responsibility can't be founded only on self-reports of will. The experience of conscious will is just not a very clear or compelling indication that actions were accomplished by force of will.

This realization has influenced the legal community, particularly among those concerned with the fairness of the insanity defense. John Monahan (1973) examined the way in which psychological science impinges on this legal issue and concluded that the "free will theory" is often challenged these days by the "behavioral position," in which the trial court considers solely whether the defendant committed the physical act with which he or she was charged. After conviction, mens rea and any other information available regarding the defendant might then be considered by a group of experts in deciding on the disposition of the case. In this way, people who did not know what they were doing when they committed crimes would be treated just like those who did, at least during the trial. Whether they ended up being sentenced to prison or given psychological or other treatment would be a matter to be determined later. Monahan indicates that "the advocates of this position see it as more scientific, rational, humane, and forward-looking than the punishment-oriented free will system" (733). Monahan was not convinced of the correctness of this position, however, and went on to note the usefulness of considering the criminal's mental state at several points in the legal judgment process.

Robot Morality

One useful way to consider the role of conscious will in moral judgment is to examine the extreme case. Imagine for a moment that we simply throw out the person's statements about conscious will in every case. This extreme version of the "behavioral position" is reminiscent of a kind of moral system one might construct for robots. In a series of science fiction stories set forth in the book I, Robot (1970), Isaac Asimov recommended rules for the operating systems of an imaginary fleet of intelligent robots. His Three Laws of Robotics are as follows:

1. A robot may not harm a human being or, through inaction, allow a human being to come to harm.

2. A robot must follow the orders given it by a human being except where such orders would conflict with the First Law.

3. A robot must protect its own existence so long as such protection does not conflict with the First or Second Laws.

The plots of Asimov’s stories in this book investigate the potential conflicts among these laws and prompt some clever robot morality plays. The odd feature of these laws is that they make no mention of what the robot might think in advance of action, or what it might feel it is doing. There is no room for this kind of talk because we assume robots are not conscious.

A morality based on such laws might be crudely workable, like the “behavioral position” described by Monahan. If we applied only these basic robot rules to humans, we would judge all action according to its objective consequences. We could merely say that if someone killed a person, the culprit was essentially a faulty robot and could be dropped in the bad robot bin for reprocessing into spark plugs and radios. Intentional murder would be equivalent, in this way of judging morality, to clumsiness that happens to take a life. “Die, you scum!” would be equal to “Whoops, I’m really sorry.” This approach is something we may be tempted to take when a person performs a morally reprehensible action that the person claims not to have willed. Without a sense of conscious will, people do not claim to be conscious agents, and it is tempting to judge their behavior by its consequences alone.

On closer analysis, however, Asimov’s laws can be understood to imply something much like a concept of conscious will. We seem to need intention and will to keep robots out of the bin. Suppose, for example, that we built a robot that had no action previewing system. It would behave without any prior readout or announcement of its likely behavior, most of the time performing very effectively—but once in a while making the inevitable error. Without some internal mechanism for predicting its own action, and then warning others about it if the action might be harmful, the robot might break Asimov’s laws every day by noon. This could quickly doom it to reprocessing.

We would also want the robot to be able to keep track of what it was doing, to distinguish its own behavior from events caused by other things. A robot thus should have an automatic authorship detection system installed at the factory. A good system for detecting authorship would be one in which the robot’s previews of its actions were compared with the behaviors it observed itself doing. Matches between previews and actions would suggest that the action was authored by the robot, whereas mismatches would suggest that the action was caused by other forces. The robot would need to keep track of these things, after all, to be able to assess its behavior with respect to the laws. If it happened to be involved in the death of a human because the human ran full speed into it from the rear, the robot could report this and keep itself from being binned while innocent. A record of authorship would also be useful for the robot to record its own past problems and avoid future situations in which it might break laws. None of these functions of the robot would need to be conscious, but all should be in place to allow it to function successfully in Asimov’s robot world.
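The authorship-detection scheme described above can be sketched in a few lines of code. This is only an illustrative toy, not anything from the book: all class and method names are hypothetical, and "matching" is reduced to simple equality between a previewed action and an observed one.

```python
# Toy sketch of the authorship detector described in the text: the robot
# previews each intended action, then compares the behaviors it observes
# itself doing against those previews. A match is logged as self-authored;
# a mismatch as caused by outside forces. All names here are illustrative.

class AuthorshipDetector:
    def __init__(self):
        self.previews = []   # actions announced in advance but not yet seen
        self.log = []        # record of (action, authored?) judgments

    def preview(self, action):
        """Announce an intended action before performing it."""
        self.previews.append(action)

    def observe(self, action):
        """Judge an observed behavior: authored if it matches a preview."""
        authored = action in self.previews
        if authored:
            self.previews.remove(action)
        self.log.append((action, authored))
        return authored


robot = AuthorshipDetector()
robot.preview("open door")
print(robot.observe("open door"))        # previewed, so judged self-authored: True
print(robot.observe("knock over vase"))  # never previewed, so externally caused: False
```

The log is what would keep an innocent robot out of the bin: a death caused by a human running into it from behind would appear as an unprevewed, unauthored event rather than a violation of the First Law.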

This way of viewing robot morality helps us make sense of the moral role of intention and will in human beings. The intentions and conscious thoughts we have about our actions are cues to ourselves and to others about the meaning and likely occurrence of our behavior. These thoughts about action need not be causes of the action in order to serve moral functions. In his analysis of the insanity defense, Monahan (1973) critiques the “behavioral position” on just this basis. He observes that information about a person’s state of mind is important for determining what the crime is, how the person should be treated after the crime, how the person’s tendency to commit further crime should be predicted, and whether the person’s tendency to perform the crime might be modified in the future. Just because the person may not have infallible knowledge of whether he willed the action is no reason to throw the rest of this crucial information away.

Illusory or not, conscious will is the person’s guide to his or her own moral responsibility for action. If you think you willed an act, your ownership of the act is established in your own mind. You will feel guilty if the act is bad, and worthy if the act is good. The function of conscious will is not to be absolutely correct but to be a compass. It tells us where we are and prompts us to feel the emotions appropriate to the morality of the actions we find ourselves doing. Guilt (Baumeister, Stillwell, and Heatherton 1994), pride, and the other moral emotions (Haidt 2001) would not grip us at all if we didn’t feel we had willed our actions. Our views of ourselves would be impervious to what we had done, whether good or bad, and memory for the emotional consequences of our actions would not guide us in making moral choices in the future.

We can feel moral emotions inappropriately, of course, because our experience of conscious will in any given case may be wrong. The guilt we feel for mother’s broken back may arise from the nonsensical theory that we caused her injury by stepping on a crack. More realistically, we can develop guilty feelings about all sorts of harms we merely imagine before they occur, simply because our apparent mental causation detector can be fooled by our wishes and guesses into concluding that we consciously willed events that only through serendipity have followed our thoughts about them. By the same token, the pride we feel in helping the poor may come from the notion that we had a compassionate thought about them before making our food donation, whereas we actually were just trying to clear out the old cans in the cupboard. But however we do calculate our complicity in moral actions, we then experience the emotional consequences and build up views of ourselves as certain kinds of moral individuals as a result. We come to think we are good or bad on the basis of our authorship emotion. Ultimately, our experience of conscious will may have more influence on our moral lives than does the actual truth of our behavior causation.
