Herrnstein and Murray are traditionalists. They would like to see a return to old-fashioned families, small communities, and the familiar forms of education, where pupils are taught history, literature, arts, ethics, and the sciences in such a way as to be able to weigh, analyse, and evaluate arguments according to exacting standards.69
For them, the IQ test not only works – it is a watershed in human society. Allied to the politics of democracy and the homogenising successes of modern capitalism, the IQ aids what R. A. Fisher called runaway evolution, promoting the rapid layering of society, divided according to IQ – which, of course, is mainly inherited. We are indeed witnessing the rise of the meritocracy.
The Bell Curve provoked a major controversy on both sides of the Atlantic. This was no surprise. Throughout the century white people, people on the ‘right’ side of the divide they were describing, have concluded that whole segments of the population were dumb. What sort of reaction did they expect? Many people countered the claims of Herrnstein and Murray, with at least six other books produced in 1995 or 1996 to examine (and in many cases refute) the arguments of The Bell Curve.
Stephen Jay Gould’s The Mismeasure of Man was reissued in 1996 with an extra chapter giving his response to The Bell Curve. His main point was that this was a debate that needed technical expertise. Too many of the reviewers who had joined the debate (and the book provoked nearly two hundred reviews or associated articles) did not feel themselves competent to judge the statistics, for example. Gould did, and dismissed them. In particular, he attacked Herrnstein and Murray’s habit of giving the form of the statistical association but not the strength. When this was examined, he said, the links they had found always explained less than 20 percent of the variance, ‘usually less than 10 percent and often less than 5 percent. What this means in English is that you cannot predict what a given person will do from his IQ score.’70
This was the conclusion Christopher Jencks had arrived at, thirty years before.
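To see why such weak associations license so little prediction about individuals, recall that the proportion of variance explained is the square of the correlation coefficient. As a purely illustrative figure (hypothetical, not one of Herrnstein and Murray’s numbers):

\[
R^{2} = r^{2}, \qquad r = 0.30 \;\Rightarrow\; R^{2} = 0.30^{2} = 0.09,
\]

that is, a correlation of 0.30 between IQ and some outcome accounts for only 9 percent of the variance – squarely in Gould’s ‘usually less than 10 percent’ range – leaving 91 percent of the individual differences unexplained.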
By the time the rumpus over The Bell Curve erupted, the infrastructure was in place for a biological project capable of generating controversy on an even bigger scale. This was the scramble to map the human genome – to describe exactly all the nucleotides that constitute man’s inheritance and that, in time, will offer at least the possibility of interfering in our genetic makeup.
Interest in this idea grew throughout the 1980s. Indeed, it could be said that the Human Genome Project (HGP), as it came to be called, had been simmering since Victor McKusick, a doctor at Johns Hopkins in Baltimore, began collecting a comprehensive record, ‘Mendelian Inheritance in Man,’ a list of all known genetic diseases, first published in 1966.71
But then, as research progressed, first one scientist and then another began to see sense in mapping the entire genome. On 7 March 1986, in Science, Renato Dulbecco, the Nobel Prize-winning president of the Salk Institute, startled his colleagues by asserting that the war on cancer would be over more quickly if geneticists were to sequence the human genome.72
Various U.S. government departments, including the Department of Energy and the National Institutes of Health, became interested at this point, as did scientists in Italy, the United Kingdom, Russia, Japan, and France (in roughly that order; Germany lagged behind, owing to the controversial role biology had played in Nazi times). A major conference, organised by the Howard Hughes Medical Institute, was held in Washington in July 1986 to bring together the various interested parties, and this had two effects. In February 1988 the U.S. National Research Council issued its report, Mapping and Sequencing the Human Genome, which recommended a concerted research program with a budget of $200 million a year.73 James Watson, appropriately enough, was appointed associate director of the NIH later that year, with special responsibility for human genome research. And in April 1988 HUGO, the Human Genome Organisation, was founded – a consortium of international scientists set up to spread the load of research and to keep duplication to a minimum, the aim being to finalise the mapping as early as possible in the twenty-first century. The experience of the Human Genome Project has not been especially happy. In April 1992 James Watson resigned his position over an application by certain NIH scientists to patent their sequences. Watson, like many others, felt that the human genome should belong to everyone.74
The genome project came on stream in 1988–89, precisely the time when communism was collapsing in the Soviet Union and the Berlin Wall was being dismantled. A new era was beginning politically, and in the intellectual field too, for HUGO was not the only major innovation introduced in 1988. That year also saw the birth of the Internet.
Whereas James Watson took a leading role in the genome project, his former colleague and co-discoverer of the double helix, Francis Crick, took a similar position in what is perhaps the hottest topic in biology as we enter the twenty-first century: consciousness studies. In 1994 Crick published The Astonishing Hypothesis, which advocated a research assault on this final mystery/problem.75
Consciousness studies naturally overlap with neurological studies, where there have been many advances in identifying different structures of the brain, such as language centres, and where MRI (magnetic resonance imaging) can show which areas are being used when people are merely thinking about the meaning of words. But the study of consciousness itself is still as much a matter for philosophers as for biologists. As John Maddox put it in his 1998 book, What Remains to Be Discovered: ‘No amount of introspection can enable a person to discover just which set of neurons in which part of his or her head is executing some thought-process. Such information seems to be hidden from the human user.’76
It should be said that some people think there is nothing to explain as regards consciousness. They believe it is an ‘emergent property’ that automatically arises when you put a ‘bag of neurons’ together. Others think this view absurd. A good explanation of emergent property is given by John Searle, Mills Professor of Philosophy at the University of California, Berkeley, regarding the liquidity of water: the behaviour of H2O molecules explains liquidity, but the individual molecules are not liquid. At the moment the problem with consciousness is that our understanding is so rudimentary that we don’t even know how to talk about it – even after the ‘Decade of the Brain,’ which was adopted by the U.S. Congress on 1 January 1990.77
This inaugurated many innovations and meetings that underlined the new fashion for consciousness studies. For example, the first international symposium on the science of consciousness was held at the University of Arizona at Tucson in April 1994, attended by no fewer than a thousand delegates.78
In that same year the first issue of the Journal of Consciousness Studies was published, with a bibliography of more than 1,000 recent articles. At the same time a whole raft of books about consciousness appeared, of which the most important were: Neural Darwinism: The Theory of Neuronal Group Selection, by Gerald Edelman (1987); The Remembered Present: A Biological Theory of Consciousness, by Edelman (1989); The Emperor’s New Mind, by Roger Penrose (1989); The Problem of Consciousness, by Colin McGinn (1991); Consciousness Explained, by Daniel Dennett (1991); The Rediscovery of the Mind, by John Searle (1992); Bright Air, Brilliant Fire, by Edelman (1992); The Astonishing Hypothesis, by Francis Crick (1994); Shadows of the Mind: A Search for the Missing Science of Consciousness, by Roger Penrose (1994); and The Conscious Mind: In Search of a Fundamental Theory, by David Chalmers (1996). Other journals on consciousness were also started, and there were two international symposia on the subject at Jesus College, Cambridge, published as Nature’s Imagination (1994) and Consciousness and Human Identity (1998), both edited by John Cornwell.
Thus consciousness has been very much the flavour of the decade, and it is fair to say that those involved in the subject fall into four camps. There are those, like the British philosopher Colin McGinn, who argue that consciousness is resistant to explanation in principle and for all time.79
Philosophers we have met before – such as Thomas Nagel and Hilary Putnam – also add that at present (and maybe for all time) science cannot account for qualia, the first-person phenomenal experience that we understand as consciousness. Then there are two types of reductionist. Those like Daniel Dennett, who claim not only that consciousness can be explained by science but also that the construction of an artificially intelligent machine that will be conscious is not far off, may be called the ‘hard’ reductionists.80
The soft reductionists, typified by John Searle, believe that consciousness does depend on the physical properties of the brain but think we are nowhere near solving just how these processes work, and they dismiss the very idea that machines will ever be conscious.81
Finally, there are those, like Roger Penrose, who believe that a new kind of dualism is needed, that in effect a whole new set of physical laws may apply inside the brain, which account for consciousness.82
Penrose’s particular contribution is the idea that quantum effects operate within tiny structures, known as microtubules, inside the nerve cells of the brain to produce – in some as yet unspecified way – the phenomena we recognise as consciousness.83
Penrose actually thinks that we live in three worlds – the physical, the mental, and the mathematical: ‘The physical world grounds the mental world, which in turn grounds the mathematical world and the mathematical world is the ground of the physical world and so on around the circle.’84
Many people who find this tantalising nonetheless don’t feel Penrose has proved anything. His speculation is enticing and original, but it is still speculation.
Instead, it is the two forms of reductionism that in the present climate attract most interest. For people like Dennett, human consciousness and identity arise from the narrative of their lives, and this can be related to specific brain states. For example, there is growing evidence that the ability to ‘apply intentional predicates to other people is a human universal’ and is associated with a specific area of the brain (the orbitofrontal cortex); in certain states of autism, this ability is defective. There is also evidence that the blood supply to the orbitofrontal cortex increases when people ‘process’ intentional verbs as opposed to non-intentional ones, and that damage to this area of the brain can lead to a failure to introspect.85
Suggestive as this is, it is also the case that the microanatomy of the brain varies quite considerably from individual to individual, and that a particular phenomenal experience is represented at several different points in the brain, which clearly require integration. Any ‘deep’ patterns relating experience to brain activity have yet to be discovered, and seem to be a long way off, though this is still the most likely way forward.
A related approach – perhaps to be expected, given other developments in recent years – is to look at the brain and consciousness in a Darwinian light. In what sense is consciousness adaptive? This approach has produced two views. One is that the brain was in effect ‘jerry-built’ in evolution to accomplish very many and very different tasks. On this view, there are at base three organs: a reptilian core (the seat of our basic drives); a palaeomammalian layer, which produces such things as affection for offspring; and a neomammalian brain, the seat of reasoning, language, and other ‘higher functions.’86
The second view is to argue that throughout evolution (and throughout our bodies) there have been emergent properties: for example, there is always a biochemical explanation underlying a physiological phenomenon – the sodium/potassium flux across a membrane being also the nerve action potential.87 In this sense, then, consciousness is nothing new in principle, even if, at the moment, we don’t fully understand it.
Studies of nerve action throughout the animal kingdom have also shown that nerves work by either firing or not firing; intensity is represented by the rate of firing – the more intense the stimulation, the faster any particular nerve turns on and off. This of course is very similar to the way computers work, in ‘bits’ of information, where everything is represented by a configuration of either 0s or 1s. The arrival of the concept of parallel processing in computing led the philosopher Daniel Dennett to consider whether an analogous process might happen in the brain between different evolutionary levels, giving rise to consciousness. Again such reasoning, though tantalising, has not gone much further than preliminary exploration. At the moment, no one seems able to think of the next step.
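The all-or-none firing and rate coding described above can be made concrete with a short sketch. The following Python fragment is purely illustrative – the function name, parameters, and figures are invented for the example, not drawn from the neuroscience literature:

def spike_train(intensity, duration_ms=100, max_rate_hz=200):
    """Return one value per millisecond: 1 = the nerve fires, 0 = it does not.

    The spike itself is all-or-none; a stronger stimulus (intensity
    between 0 and 1) only shortens the gap between spikes, raising
    the firing rate towards max_rate_hz.
    """
    if intensity <= 0:
        return [0] * duration_ms
    # Stronger stimulus -> shorter interval between spikes.
    interval_ms = max(1, round(1000 / (max_rate_hz * intensity)))
    return [1 if t % interval_ms == 0 else 0 for t in range(duration_ms)]

weak = spike_train(0.1)    # faint stimulus: 2 spikes in 100 ms
strong = spike_train(0.9)  # strong stimulus: 17 spikes in 100 ms

Each spike is identical; only the rate differs – which is what makes the comparison with a computer’s 0s and 1s so tempting.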
Francis Crick’s aim has been fulfilled: consciousness is being investigated as never before. But it would be rash to predict that the new century will bring advances quickly. No less a figure than Noam Chomsky has said, ‘It is quite possible – overwhelmingly probable, one might guess – that we will always learn more about human life and personality from novels than from scientific psychology.’