The Story of Psychology
Author: Morton Hunt
Imagine that the U.S. is preparing for the outbreak of a rare Asian disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. Assume that the exact scientific estimates of the consequences of the programs are as follows:
If Program A is adopted, 200 people will be saved.
If Program B is adopted, there is a ⅓ probability that 600 people will be saved, and a ⅔ probability that no people will be saved. Which of the two programs would you favor?
The second version gave the same story but worded the alternatives as follows:
If Program C is adopted, 400 people will die.
If Program D is adopted, there is a ⅓ probability that nobody will die, and a ⅔ probability that 600 people will die.
Subjects responded quite differently to the two versions: 72 percent chose Program A over Program B, but 78 percent (of a different group) chose Program D over Program C. Kahneman and Tversky’s explanation: in the first version the outcomes are portrayed in terms of gains (lives saved), in the second in terms of losses (lives lost). The same biases shown in the experiments where money was at stake distorted subjects’ judgment in this case, where lives were at stake.[85] (In 2002, Kahneman won the Nobel Prize in economics for his work on probabilistic reasoning; Tversky, who would have shared it, had unfortunately died by then.)
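The arithmetic behind the trick is worth spelling out. The sketch below is an illustration added here, not part of Hunt's text; it simply computes the expected outcome of each program and shows that Programs A and C describe the same result, as do B and D, so nothing in the numbers themselves favors one wording over the other:

```python
# A minimal sketch (not from the original text): the two framings of the
# disease problem are numerically identical; only the wording differs.
from fractions import Fraction

def expected_value(outcomes):
    """Expected value of a list of (probability, value) pairs."""
    return sum(p * v for p, v in outcomes)

third, two_thirds = Fraction(1, 3), Fraction(2, 3)

# Gain frame: expected lives saved out of 600
program_a = [(Fraction(1), 200)]             # 200 saved for certain
program_b = [(third, 600), (two_thirds, 0)]  # 1/3 chance all are saved

# Loss frame: expected lives lost out of 600
program_c = [(Fraction(1), 400)]             # 400 die for certain
program_d = [(third, 0), (two_thirds, 600)]  # 2/3 chance all die

print(expected_value(program_a), expected_value(program_b))  # 200 200 saved
print(expected_value(program_c), expected_value(program_d))  # 400 400 lost
# A and C describe the same outcome (200 saved means 400 dead), as do B and D;
# yet most subjects chose A in the gain frame and D in the loss frame.
```

Exact fractions are used so the one-third and two-thirds probabilities come out without rounding.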
We reason poorly in these cases because the factors involved are “nonintuitive”; our minds do not readily grasp the reality involved in probabilities. This shortcoming affects us both individually and as a society; the electorate and its leaders often make costly decisions because of poor probabilistic reasoning. As Richard Nisbett and Lee Ross point out in their book Human Inference, many governmental practices and policies adopted during crises are deemed beneficial because of what happens afterward, even though the programs are often useless or worse. The misjudgment is caused by the human tendency to attribute a result to the action meant to produce it, although often the result stems from the normal tendency of events to revert from the unusual to the usual.[86]
It is reassuring, therefore, that a number of studies have found that unconscious mental processing often yields good evaluations and decisions—sometimes better than the results of conscious deliberation. In a series of studies reported in 2004, a Dutch psychologist asked subjects to make choices about complex real-world matters with many positive and negative features, such as choosing an apartment. One group was told to make an immediate (no thought) choice, another to think for three minutes and then choose (conscious thought), and a third to work for three minutes on a difficult distracting task and then choose (unconscious thought). In all three studies, the subjects in the unconscious thought condition made the best choices.[87]
Analogical reasoning:
By the 1970s, cognitive psychologists had begun to recognize that much of what logicians regard as faulty reasoning is, in fact, “natural” or “plausible” reasoning—inexact, loose, intuitive, and technically invalid, but often competent and effective.
One such form of thinking is the analogical. Whenever we recognize that a problem is analogous to a different problem, one we are familiar with and know the answer to, we make a leap of thought to a solution. Many people, for instance, when they have to assemble a piece of knocked-down furniture or machinery, ignore the instruction manual and work by “feel”—looking for relationships among the parts that are analogous to the relationships among the parts of different kinds of furniture or machinery they assembled earlier.
Analogical reasoning is acquired in the later stages of childhood mental development. Dedre Gentner, a cognitive psychologist, asked five-year-olds and adults in what way a cloud is like a sponge. The children replied in terms of similar attributes (“They’re both round and fluffy”), adults in terms of relational similarities (“They both store water and give it back to you”).[88]
Gentner interprets analogical reasoning as a “mapping” of high-level relations from one domain to another; she and two colleagues even wrote a computer program, the “Structure-Mapping Engine,” that simulates the process. When it was run on a computer and provided with limited data about both the atom and the solar system, the program, like the great physicist Lord Rutherford, recognized that they are analogous and drew appropriate conclusions.[89]
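The flavor of that “mapping” can be conveyed with a toy example. The sketch below is not Gentner's Structure-Mapping Engine; it is a brute-force stand-in that assumes each domain can be written as a handful of relational facts and then finds the object-to-object correspondence that preserves the most relations, which is the heart of the idea:

```python
# A toy illustration (not the actual Structure-Mapping Engine) of analogy
# as a mapping of relations, not attributes, from a base domain to a target.
from itertools import permutations

# Each domain is a set of relational facts: (relation, subject, object).
solar_system = {
    ("more_massive_than", "sun", "planet"),
    ("attracts", "sun", "planet"),
    ("revolves_around", "planet", "sun"),
}
atom = {
    ("more_massive_than", "nucleus", "electron"),
    ("attracts", "nucleus", "electron"),
    ("revolves_around", "electron", "nucleus"),
}

def best_mapping(base, target):
    """Try every object-to-object correspondence and keep the one that
    preserves the most relations (a brute-force stand-in for SME)."""
    base_objs = sorted({x for _, a, b in base for x in (a, b)})
    target_objs = sorted({x for _, a, b in target for x in (a, b)})
    best, best_score = {}, -1
    for perm in permutations(target_objs, len(base_objs)):
        mapping = dict(zip(base_objs, perm))
        score = sum((rel, mapping[a], mapping[b]) in target for rel, a, b in base)
        if score > best_score:
            best, best_score = mapping, score
    return best, best_score

print(best_mapping(solar_system, atom))
# ({'planet': 'electron', 'sun': 'nucleus'}, 3)
```

Matching on relations such as “revolves around,” rather than on attributes such as “round and fluffy,” is exactly what distinguishes the adults' answers from the children's in Gentner's cloud-and-sponge question.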
With difficult or unfamiliar problems, people generally do not use analogical reasoning because they only rarely spot a distant analogy, even when it would provide the solution to their problem. But if they consciously make the effort to look for an analogy, they are far more apt to see one that is not at all obvious. M. L. Gick and Keith Holyoak used Duncker’s classic problem, of which we read earlier, about how one can use X-rays to destroy a stomach tumor without harming the surrounding healthy tissue. Most of their subjects did not spontaneously discover the solution; Gick and Holyoak then provided them with a story that, they hinted, might prove helpful. It told of an army unable to capture a fortress by a single frontal attack but successful when its general divided it into separate bands that attacked from all sides. Having read this and consciously sought an analogy to the X-ray problem, most subjects saw that many sources of weak X-rays placed all around the body and converging on the tumor would solve the problem.[90]
Expert reasoning:
Many cognitive psychologists, intrigued by Newell and Simon’s work, assumed that their theory would apply to problem solving by experts in fields of special knowledge, but found, to their surprise, that it did not. In a knowledge-rich domain, experts do more forward searching than backward searching or means-end analysis, and their thinking often proceeds not step by step but in leaps. Rather than starting with details, they perceive overall relationships; they know which category or principle is involved and work top-down. Novices, in contrast, lack perspective and work bottom-up, starting with details and trying to gather enough data to gain an overview.[91]
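The contrast between the experts' data-driven search and the novices' goal-driven, step-by-step search can be made concrete with the standard AI distinction between forward and backward chaining. The rule base and medical facts below are invented for illustration and are not drawn from the studies Hunt cites; the sketch only shows the two directions of reasoning:

```python
# A schematic contrast (an added illustration, not from the cited studies):
# forward chaining reasons from the data toward conclusions, roughly the way
# the experts described here work; backward chaining starts from a goal and
# searches back for facts that would support it.

RULES = [  # (premises, conclusion) -- invented for illustration
    ({"chest_pain", "elevated_enzymes"}, "myocardial_infarction"),
    ({"myocardial_infarction"}, "admit_to_ccu"),
]

def forward_chain(facts):
    """Data-driven: keep firing rules whose premises are already satisfied."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def backward_chain(goal, facts):
    """Goal-driven: is the goal a known fact, or derivable from some rule?"""
    if goal in facts:
        return True
    return any(conclusion == goal and all(backward_chain(p, facts) for p in premises)
               for premises, conclusion in RULES)

print(forward_chain({"chest_pain", "elevated_enzymes"}))
# {'chest_pain', 'elevated_enzymes', 'myocardial_infarction', 'admit_to_ccu'} (order may vary)
print(backward_chain("admit_to_ccu", {"chest_pain", "elevated_enzymes"}))  # True
```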
Since the 1980s, a number of cognitive psychologists have been exploring the characteristics of expert reasoning in different fields. They have asked experts in cardiology, commodity trading, law, and many other areas to solve problems; again and again they have found that experts, rather than pursuing a logical, step-by-step search (as a newly trained novice or an artificial intelligence program would do), often leap from a few facts to a correct assessment of the nature of the problem and the probable solution. A cardiologist, for instance, might from only two or three fragments of information correctly diagnose a specific heart disorder, while a newly graduated doctor, presented with the same case, would ask a great many questions and slowly narrow down the range of possibilities. The explanation: Unlike novices, experts have their knowledge organized and arranged in schemas that are full of special shortcuts based on experience.[92]
Even in the first flush of enthusiasm for IP theory and computer simulations of reasoning, some psychologists, of a more humanistic than computer-technical bent, had reservations about the comparability of mind and machine. There are, indeed, major dissimilarities. For one, the computer searches for and retrieves items as needed—at blinding speed, nowadays—but human beings retrieve many items of information without any search: our own name, for instance, and most of the words we utter. For another, as the cognitive scientist Donald Norman has pointed out, if you are asked “What’s Charles Dickens’s telephone number?” you know right away that it’s a silly question, but a computer would not, and would go looking for the number.[93]
For a third, the mind knows the meaning of words and other symbols, but the computer does not; to it they’re only labels. Nor does anything about the computer resemble the unconscious or all that goes on in it.
These are only a few of the differences that have been obvious since the first experiments in computer reasoning. Yet no less an authority than Herbert Simon categorically asserted that mind and machine were kin. In 1969, in a series of lectures published as The Sciences of the Artificial, he argued that the computer and the human mind are both “symbol systems”—physical entities that process, transform, elaborate, and generally manipulate symbols of various kinds.
Throughout the 1970s, small cadres of dedicated psychologists and computer scientists at MIT, Carnegie-Mellon, Stanford, and a handful of other universities, possessed of a zealotlike belief that they were on the verge of a great breakthrough, developed programs that were both theories of how the mind works and machine versions of human thinking. By the 1980s the work had spread to scores of universities and to the laboratories of a number of major companies. The programs carried out such varied activities as playing chess, parsing sentences, deducing the laws of planetary motion from a mass of raw data, translating elementary sentences from one language to another, and inferring the structure of molecules from mass spectrographic data.[94]
The enthusiasts saw no limit to the ability of IP theory to explain how the mind works and of AI to verify those explanations by carrying out the same processes—and eventually doing so far better than human beings. In 1981 Robert Jastrow, director of the Goddard Institute for Space Studies, predicted that “around 1995, according to current trends, we will see the silicon brain as an emergent form of life, competitive with man.”[95]
But some psychologists felt that the computer was only a mechanical simulation of certain aspects of the mind and that the computational model of mental processing was a poor fit. The eminent cognitivist Ulric Neisser had become “disillusioned” with information-processing models by 1976, when he published Cognition and Reality. Here, much influenced by James Gibson and his “ecological” psychology, Neisser made the case that IP models were narrow, far removed from real-life perception, cognition, and purposeful activity, and failed to take into account the richness of experience and information we continually receive from the world around us.[96]
A number of other psychologists, though not saying they were disillusioned, sought to broaden the IP view to include the mind’s use of schemas, shortcuts, and intuitions, and its ability to operate on both the conscious and unconscious levels at once, conducting multiple processes in parallel (a critical issue we shall hear more of in a moment).
Still others challenged the notion that computers programmed to think like humans actually think. AI, they maintained, isn’t anything like human intelligence, and though it may vastly outperform the human mind at calculations, it would never do easily, or at all, many things the human mind does routinely and effortlessly.
The most important difference is the computer’s inability to understand what it is thinking about. John Searle and Hubert Dreyfus, both philosophy professors at Berkeley, the computer scientist Joseph Weizenbaum at MIT, and others argued that computers, even when programmed to reason, merely manipulate symbols without having any idea what they mean and imply. General Problem Solver, for instance, may have figured out how the father and two sons could get across the river, but only in terms of algebraic symbols; it did not know what a father, a son, or a boat was, what “sink” meant, what would happen if they sank, or anything else about the real world.
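That point can be seen in miniature by writing such a solver. The sketch below is not Newell and Simon's General Problem Solver but a plain breadth-first search over states, and it assumes the usual statement of the puzzle, in which the father weighs 200 pounds, each son 100, and the boat holds at most 200. Everything the program “knows” about fathers, sons, boats, and sinking is contained in a weight table and a capacity constant:

```python
# A minimal state-space search (an added sketch, not the historical GPS),
# assuming the standard version of the puzzle: father 200 lb, sons 100 lb
# each, boat capacity 200 lb. The program "solves" it purely by manipulating
# tuples; nothing in it knows what a father, a son, or sinking is.
from collections import deque
from itertools import combinations

WEIGHTS = {"father": 200, "son1": 100, "son2": 100}
EVERYONE = frozenset(WEIGHTS)
CAPACITY = 200

def solve():
    # A state is (people_on_start_bank, boat_on_start_bank).
    start, goal = (EVERYONE, True), (frozenset(), False)
    queue, seen = deque([(start, [])]), {start}
    while queue:
        (bank, boat_here), path = queue.popleft()
        if (bank, boat_here) == goal:
            return path
        riders_pool = bank if boat_here else EVERYONE - bank
        for n in (1, 2):
            for riders in combinations(riders_pool, n):
                if sum(WEIGHTS[p] for p in riders) > CAPACITY:
                    continue  # the boat would sink
                new_bank = bank - set(riders) if boat_here else bank | set(riders)
                state = (new_bank, not boat_here)
                if state not in seen:
                    seen.add(state)
                    queue.append((state, path + [tuple(sorted(riders))]))
    return None

for i, crossing in enumerate(solve(), 1):
    print(i, crossing)
# Prints the classic five-crossing solution: both sons over, one son back,
# the father over, the other son back, and both sons over again.
```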
But many programs written in the 1970s and 1980s did seem to deal with real-world phenomena. This was especially true of “expert systems,” computer programs written to simulate the reasoning, and make use of the special knowledge, of experts in fields ranging from oncology to investment and from locating veins of ore to potato farming.
Typically, such programs, designed to aid problem solving, ask the person operating them questions in English, use the answers and their own stored knowledge to move through a decision-tree pattern of reasoning, close off dead ends, narrow down the search, and finally reach a conclusion to which they assign a certainty ratio (“Diagnosis: systemic lupus erythematosus, certainty .8”). By the mid-1980s, scores of such programs were in routine use in scientific laboratories, government, and industry, and before the end of the decade many hundreds were.[97]
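A toy version of that consultation pattern, greatly simplified and invented for illustration (it is not MYCIN or any real system, and its questions, diagnosis label, and certainty weights are made up), might look like this:

```python
# A toy consultation loop in the pattern described above: ask questions in
# English, walk a small rule base, and attach a certainty figure to the
# conclusion. All rules and numbers here are invented for illustration.

RULES = [
    # (question, weight toward the hypothetical diagnosis if answered "y")
    ("Does the patient have a facial rash?", 0.6),
    ("Is the patient reporting joint pain?", 0.5),
    ("Are antinuclear antibodies present?", 0.7),
]

def combine(cf1, cf2):
    """MYCIN-style combination of two positive certainty factors."""
    return cf1 + cf2 * (1 - cf1)

def consult():
    certainty = 0.0
    for question, weight in RULES:
        answer = input(question + " (y/n) ").strip().lower()
        if answer == "y":
            certainty = combine(certainty, weight)
    print(f"Diagnosis: hypothetical-disorder, certainty {certainty:.2f}")

if __name__ == "__main__":
    consult()
```

The combine rule is the MYCIN-style formula for pooling two positive certainty factors; each piece of supporting evidence raises the overall certainty without ever pushing it past 1.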
Probably the oldest and best-known expert system is MYCIN, created in 1976 and improved in 1984, which can be used to detect and identify (and potentially even treat) about a hundred different kinds of bacterial infections, and announce what degree of certainty it puts on its findings. In tests against human experts, “MYCIN’s performance compared favorably with that of faculty members in the Stanford School of Medicine… [and] outperformed medical students and residents in the same school,” notes the distinguished cognitivist Robert J. Sternberg in Cognitive Psychology (2006), “[and]… had been shown to be quite effective in prescribing medication for meningitis.” INTERNIST, another expert system, diagnoses a broader range of diseases, although in doing so it loses some precision; its diagnostic powers are less than those of an experienced internist.
But although these and other expert systems are intelligent in a way that banking computers, airline reservation computers, and others are not, in reality they do not know the meaning of the real-world information they deal with, not in the sense that we know it. CADUCEUS, an internal medicine consultation system, can diagnose five hundred diseases nearly as well as highly qualified clinicians, but an authoritative textbook, Building Expert Systems, long ago pointed out that it “has no understanding of the basic pathophysiological processes involved” and cannot think about medical problems outside or at the periphery of its area of expertise, even when plain common sense is all that is needed.[98]
One medical diagnostic program failed to object when a human user asked whether amniocentesis might be useful; the patient was male and the system simply wasn’t “aware” that the question was absurd. As John Anderson has said, “The major difficulty which human experts handle well is that of understanding the context in which knowledge is to be used. A logical engine will only yield appropriate results if that context has been carefully defined.”[99]
But to define contexts as broadly and richly as the human mind does would require an unimaginable amount of data and programming.