Modern Mind: An Intellectual History of the 20th Century
Peter Watson
For Denby, a much greater danger came from the media. ‘Most high schools can’t begin to compete against a torrent of imagery and sound that makes every moment but the present seem quaint, bloodless, or dead.’55
In fact, he said, the modern world has turned itself upside down. On his first time round, in 1961, the immediacy of pop had been liberating, a wonderful antidote to the stifling classroom; but now ‘the movies have declined; pop has become a field of conformity and complacency, while the traditional high culture, by means of its very strangeness and difficulty, strikes students as odd. They may even be shocked by it…. The [great] books are less a conquering army than a kingdom of untameable beasts, at war with one another and with readers.’56
In 1999 Harold Bloom returned to his first love. In Shakespeare: The Invention of the Human, Bloom argued that the great poet ‘invented us,’ that ‘personality, in our sense, is a Shakespearean invention.’57
Before Shakespeare, Bloom claims, characters did not grow and develop. ‘In Shakespeare, characters develop rather than unfold, and they develop because they reconceive themselves. Sometimes this comes about because they overhear themselves talking, whether to themselves or to others. Self-overhearing is the royal road to individuation.’58
Bloom’s book is deeply unfashionable, not only in its message but in the way it is written. It is an act of worship. He freely concedes that Bardolatry is and has been ‘a secular religion’ for some two hundred years, and he enjoys being in that tradition because he believes that the very successes of Shakespeare transcend all ways of approaching him: he is simply too brilliant, too intelligent, to be cut down to size, as the feminists, cultural materialists, and Marxists would like to do. ‘Shakespeare, through Hamlet, has made us skeptics in our relationships with anyone, because we have learned to doubt articulateness in the realm of affection…. Our ability to laugh at ourselves as readily as we do at others owes much to Falstaff…. Cleopatra [is the character] through whom the playwright taught us how complex eros is, and how impossible it is to divorce acting the part of being in love and the reality of being in love…. Mutability is incessant in her passional existence, and it excludes sincerity as being irrelevant to eros.’59
‘When we are wholly human, and know ourselves, we become most like either Hamlet or Falstaff.’60
There is something magnificent about this ‘Bloom in love,’ dismissing his critics and opponents without even naming them. It is all very unscientific, but that is Bloom’s point: this is what art should seek to emulate, these are the feelings great art exists for. Individuation may have been one of the great issues of the century, but Shakespeare got there first, and has still not been equalled. He is the one man worth worshipping, and we are, if we will only see it, surrounded by his works.
One more distinguished combatant joined the Blooms on the barricades, an academic Boadicea whose broadsides went wider even than theirs: Gertrude Himmelfarb, the historian wife of Irving Kristol, founder (with Daniel Bell) of the Public Interest. In On Looking into the Abyss (1994), Himmelfarb, professor emeritus of history at the Graduate School of the City University of New York, attacked postmodernism in whatever guise it raised its head, from literary theory to philosophy to history.61
Her argument against literary theory was that the theory itself had displaced literature as the object of study and in the process taken away the ‘profound spiritual and emotional’ experience that comes with reading great works, the ‘dread beasts,’ as she put it, ‘lurking at the bottom of the “Abyss.”’62 As a result, she said, ‘The beasts of modernism have mutated into the beasts of postmodernism – relativism into nihilism, amorality into immorality, irrationality into insanity, sexual deviancy into polymorphous perversity.’63
She loathed the ‘boa-deconstructors’ like Derrida and Paul de Man and what they had done to literature, thinking their aim more political than literary (they would have agreed). She attacked the Annales school: she admired Fernand Braudel’s fortitude in producing his first great book in a prisoner-of-war camp, from memory, but thought his concept of la longue durée gave him a fatally skewed perspective on such events as, say, the Holocaust. She thought that the new enemy of liberalism had become – well, liberalism itself. Liberalism was now so liberal, she argued, that it absolved postmodern historians, as they saw it, from any duty to the truth. ‘Postmodernists deny not only absolute truth but contingent, partial, incremental truth…. In the jargon of the school, truth is “totalising,” “hegemonic,” “logocentric,” “phallocentric,” “autocratic,” “tyrannical.”’64
She turned on Richard Rorty for arguing there is no ‘essential’ truth or reality, and on Stanley Fish for arguing that the demise of objectivity ‘relieves me of the obligation to be right.’65
But her chief point was that ‘postmodernism entices us with the siren call of liberation and creativity,’ whereas there is a tendency for ‘absolute liberty to subvert the very liberty it seeks to preserve.’66
In particular, and dangerously, she saw about her a tendency to downplay the importance and horror of the Holocaust, to argue that it was something ‘structural,’ rather than a personal horror for which real individuals were responsible, which need not have happened, and which needed to be understood, and reunderstood, by every generation. She tellingly quotes the dedication of David Abraham’s book The Collapse of the Weimar Republic, published in 1981: ‘For my parents – who at Auschwitz and elsewhere suffered the worst consequences of what I can merely write about.’ In Himmelfarb’s view, the reader is invited to think that the author’s parents perished in the camps, but they did not. This curious phraseology was later examined by the historian Natalie Zemon Davis, who concluded that Abraham’s work had been designed to show that the Holocaust was not the work of devils ‘but of historical forces and actors.’67
This was too much for Himmelfarb, a relativising of evil that was beyond reason. It epitomised the postmodern predicament: the perfect example of where too much liberty has brought us.
There is a sense in which the culture wars are a kind of background radiation left over from the Big Bang of the Russian Revolution. At exactly the time that political Marxism was being dismantled, along with the Berlin Wall, postmodernism achieved its greatest triumphs. For the time being at least, the advocates of local knowledge have the edge. Gertrude Himmelfarb’s warning, however timely, and however sympathetic one finds it, is rather like trying to put a genie back into a bottle.
*
The Committee on Social Thought was ‘a herd of independent minds,’ in Harold Rosenberg’s phrase, a group of socially concerned intellectuals centred on the University of Chicago which included, among many others, Rosenberg himself, Saul Bellow, and Edward Shils.
In 1986 Dan Lynch, an ex-student from UCLA, started a trade fair for computer hardware and software, known as Interop. Until then the number of people linked together via computer networks was limited to a few hundred ‘hardcore’ scientists and academics. Between 1988 and 1989, however, Interop took off: hitherto a fair for specialists, it was from then on attended by many more people, all of whom suddenly seemed to realise that this new way of communicating – via remote computer terminals that gave access to very many databases, situated across the world and known as the Internet – was a phenomenon that promised intellectual satisfaction and commercial rewards in more or less equal measure. Vint Cerf, a self-confessed ‘nerd’ from California who set aside several days each year to re-read The Lord of the Rings, and one of a handful of people who could be called a father of the Internet, visited Lynch’s fair, and he certainly noticed a huge change. Until that point the Internet had been, at some level, an experiment. No more.1
Different people place the origins of the Internet at different times. The earliest accounts put it in the mind of Vannevar Bush, as long ago as 1945. Bush, the man who had played such a prominent role in the building of the atomic bomb, envisaged a machine that would allow the entire compendium of human knowledge to be ‘accessed’. But it was not until the Russians surprised the world with the launch of the Sputnik in October 1957 that the first faltering steps were taken toward the Net as we now know it. The launch of a satellite, as was discussed in chapter 27, raised the spectre of associated technologies: in order to put such an object in space, Russia had developed rockets capable of reaching America with sufficient accuracy to do huge damage if fitted with nuclear warheads. This realisation galvanised America, and among the research projects introduced as a result of this change in the rules of engagement was one designed to explore how the United States’ command and control system – military and political – could be dispersed around the country, so that should she be attacked in one area, America would still be able to function elsewhere. Several new agencies were set up to consider different aspects of the situation, including the National Aeronautics and Space Administration (NASA) and the Advanced Research Projects Agency, or ARPA.2
It was this outfit which was charged with investigating the safety of command and control structures after a nuclear strike. ARPA was given a staff of about seventy, an appropriation of $520 million, and a budget plan of $2 billion.3
At that stage computers were no longer new, but they were still huge and expensive (one at Harvard at the time was fifty feet long and eight feet high). Among the specialists recruited by ARPA was Joseph Licklider, a tall, laconic psychologist from Missouri, who in 1960 had published a paper on ‘man-computer symbiosis’ in which he looked forward to an integrated arrangement of computers, which he named, ironically, an ‘intergalactic network.’ That was some way off. The first breakthrough came in the early 1960s, with the idea of ‘packet-switching,’ developed by Paul Baran.4
An immigrant from Poland, Baran took his idea from the brain, which can sometimes recover from disease by switching the messages it sends to new routes. Baran’s idea was to divide a message into smaller packets and then send them by different routes to their destination. This, he found, could not only speed up transmission but avoid the total loss of information where one line is faulty. In this way technology was conceived that reassembled the message packets when they arrived, and tested the network for the quickest routes. This same idea occurred almost simultaneously to Donald Davies, working at the National Physical Laboratory in Britain – in fact, packet-switching was his term. The new hardware was accompanied by new software, informed by a branch of mathematics known as queuing theory, designed to prevent the buildup of packets at intermediate nodes by finding the most suitable alternative routes.5
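The essence of Baran's scheme can be sketched in a few lines. The following is a toy illustration only, not any historical implementation: a message is split into numbered packets, the packets may arrive out of order after travelling different routes, and the receiver reassembles them by sequence number.

```python
# Toy sketch of Baran-style packet-switching. A message is cut into
# numbered packets; even if they arrive in scrambled order (here simulated
# by shuffling), the sequence numbers let the receiver rebuild it intact.
import random

def to_packets(message: str, size: int) -> list[tuple[int, str]]:
    """Split a message into (sequence_number, payload) packets."""
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(packets: list[tuple[int, str]]) -> str:
    """Rebuild the message regardless of arrival order."""
    return "".join(payload for _, payload in sorted(packets))

packets = to_packets("THIS MESSAGE TRAVELS IN PIECES", size=8)
random.shuffle(packets)          # simulate packets taking different routes
print(reassemble(packets))       # -> THIS MESSAGE TRAVELS IN PIECES
```

Because each packet carries its own sequence number, the loss of one route costs only the packets on that route, which can be re-sent, rather than the whole message.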
In 1968 the first ‘network’ was set up, consisting of just four sites: UCLA, Stanford Research Institute (SRI), the University of Utah, and the University of California at Santa Barbara.6
The technological breakthrough that enabled this to proceed was the conception of the so-called interface message processor, or IMP, whose task it was to send bits of information to a specified location. In other words, instead of ‘host’ computers being interconnected, the IMPs would be instead, and each IMP would be connected to a host.7
The computers might be different pieces of hardware, using different software, but the IMPs spoke a common language and could recognise destinations. The contract to construct the IMPs was given by ARPA to a small consulting firm in Cambridge, Massachusetts, called Bolt Beranek and Newman (BBN), and they delivered the first processor in September 1969, at UCLA, and the second in October, at SRI. It was now possible, for the first time, for two disparate computers to ‘talk’ to each other. Four nodes were up and running by January 1970, all on the West Coast of America. The first on the East Coast, at BBN’s own headquarters, was installed in March. The ARPANET, as it came to be called, now crossed the continent.8
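The IMP idea can be caricatured in code. This is a hypothetical sketch, not the historical software: heterogeneous hosts never talk to each other directly; each attaches to an IMP, and the IMPs exchange messages in one shared format and route them by destination name. (The famous first ARPANET transmission, from UCLA to SRI in October 1969, got only as far as ‘LO’ of ‘LOGIN’ before the system crashed.)

```python
# Hypothetical sketch of the IMP architecture: hosts differ in hardware
# and software, but every IMP speaks one common format (here a plain dict
# with sender, destination, and payload) and can recognise destinations.
class IMP:
    network: dict[str, "IMP"] = {}   # all IMPs on the net, keyed by site

    def __init__(self, site: str):
        self.site = site
        self.inbox: list[dict] = []
        IMP.network[site] = self

    def send(self, dest: str, payload: str) -> None:
        # The common language: a standard datagram any IMP understands,
        # regardless of what its attached host runs.
        datagram = {"from": self.site, "to": dest, "payload": payload}
        IMP.network[dest].inbox.append(datagram)

ucla, sri = IMP("UCLA"), IMP("SRI")
ucla.send("SRI", "LO")               # the famous truncated first login
print(sri.inbox[0]["payload"])       # -> LO
```

The design point is the indirection: adding a new, incompatible host to the net requires only one new IMP, not a translator for every existing machine.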
By the end of 1970 there were fifteen nodes, all at universities or think tanks.
By the end of 1972 there were three cross-country lines in operation and clusters of IMPs in four geographic areas – Boston, Washington D.C., San Francisco, and Los Angeles – with, in all, more than forty nodes. By now ARPANET was usually known as just the Net, and although its role was still strictly defence-oriented, more informal uses had also been found: chess games, quizzes, the Associated Press wire service. It wasn’t far from there to personal messages, and one day in 1972, e-mail was born when Ray Tomlinson, an engineer at BBN, devised a program for computer addresses, the most salient feature of which was a device to separate the name of the user from the machine the user was on. Tomlinson needed a character that could never be found in any user’s name and, looking at the keyboard, he happened upon the ‘@’ sign.9 It was perfect: it meant ‘at’ and had no other use. This development was so natural that the practice just took off among the ARPANET community. A 1973 survey showed that there were 50 IMPs on the Net and that three-quarters of all traffic was e-mail.
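Tomlinson's convention survives unchanged: every mailer still splits an address at the ‘@’ into a user part and a machine (now domain) part. A minimal sketch, with a hypothetical address chosen purely for illustration:

```python
def split_address(address: str) -> tuple[str, str]:
    """Split user@host at the last '@', Tomlinson's separator."""
    user, _, host = address.rpartition("@")
    if not user or not host:
        raise ValueError(f"not a valid address: {address!r}")
    return user, host

# Hypothetical example address, in the user@machine style of the era.
print(split_address("tomlinson@bbn-tenexa"))  # -> ('tomlinson', 'bbn-tenexa')
```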
By 1975 the Net community had grown to more than a thousand, but the next real breakthrough was Vint Cerf’s idea, as he sat in the lobby of a San Francisco hotel, waiting for a conference to begin. By then, ARPANET was no longer the only computer network: other countries had their own nets, and other scientific-commercial groups in America had begun theirs. Cerf began to consider joining them all together, via a series of what he referred to as gateways, to create what some people called the Catenet, for Concatenated Network, and what others called the Internet.10 This required not more machinery but the design of TCPs, or transmission-control protocols, a universal language. In October 1977 Cerf and his colleagues demonstrated the first system to give access to more than one network. The Internet as we now know it was born.
Growth of the Net soon accelerated. It was no longer purely a defence exercise, but, in 1979, it was still largely confined to (about 120) universities and other academic/scientific institutions. The main initiatives, therefore, were now taken over from ARPA by the National Science Foundation, which set up the Computer Science Research Network, or CSNET, and in 1985 created a ‘backbone’ of five supercomputer centres scattered around the United States, and a dozen or so regional networks.11
These supercomputers were both the brains and the batteries of the network, a massive reservoir of memory designed to soak up all the information users could throw at it and prevent gridlock. Universities paid $20,000 to $50,000 a year in connection charges. More and more people could now see the potential of the Internet, and in January 1986 a grand summit was held on the West Coast and order put into e-mail addressing, creating seven domains or ‘Frodos.’ These were universities (edu), government (gov), companies (com), military (mil), nonprofit organisations (org), network service providers (net), and international treaty entities (int). It was this new order that, as much as anything, helped the phenomenal growth of the Internet between 1988 and 1989, and which was seen at Dan Lynch’s Interop. The final twist came in 1990 when the World Wide Web was created by researchers at CERN, the European Laboratory for Particle Physics near Geneva.12 This used a special protocol, HTTP, devised by Tim Berners-Lee, and made the Internet much easier to browse, or navigate. Mosaic, the first truly popular browser, devised at the University of Illinois, followed in 1993. It is only since then that the Internet has been commercially available and easy to use.
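The seven 1986 domains described above still anchor every address we type. As a small illustrative sketch (the hostnames are hypothetical examples, not from the source), a name's final label is enough to recover its original category:

```python
# The seven original top-level domains agreed in January 1986,
# mapped to the categories they were created for.
ORIGINAL_TLDS = {
    "edu": "universities",
    "gov": "government",
    "com": "companies",
    "mil": "military",
    "org": "nonprofit organisations",
    "net": "network service providers",
    "int": "international treaty entities",
}

def classify(hostname: str) -> str:
    """Return the 1986 category of a hostname's top-level domain."""
    tld = hostname.rsplit(".", 1)[-1].lower()
    return ORIGINAL_TLDS.get(tld, "unknown")

print(classify("cs.utah.edu"))   # -> universities
print(classify("example.com"))   # -> companies
```

Country-code domains such as ‘.ch’ or ‘.uk’ came through a separate scheme and would fall outside this original seven.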
The Internet has its critics, such as Brian Winston, who in his 1998 history of media technology warns that ‘the Internet represents the final disastrous application of the concept of commodification of information in the second half of the twentieth century.’13
But few now doubt that the Internet is a new way of communicating, or that soon a new psychology will emerge from relationships forged in ‘cyberspace.’14
In years to come, 1988 may be revealed as a turning point so far as science is concerned. Not only did the Internet and the Human Genome Organisation get under way, bringing about the ultramodern world and setting the shape of the twenty-first century, but a book appeared that had the most commercially successful publishing history of any work of science ever printed. It set the seal on the popular acceptance of science but, as we shall see in the epilogue, in some ways marked its apogee. A Brief History of Time: From the Big Bang to Black Holes, by the Cambridge cosmologist Stephen Hawking, had been five years in the making and in some senses was just as much the work of Peter Guzzardi, a New York editor with Bantam Books.15
It was Guzzardi who had persuaded Hawking to leave Cambridge University Press. CUP had been planning to publish Hawking’s book, because they had published his others, and had offered an advance of £10,000 – their biggest ever. But Guzzardi tempted Hawking to Bantam, though it perhaps wasn’t too difficult a choice for the scientist, since the firm’s editorial board had been won over by Guzzardi’s enthusiasm, to the point of offering a $250,000 advance. In the intervening years, Guzzardi had worked hard to make Hawking’s dense prose ever more accessible for a general audience.16
The book was released in early spring 1988 – and what happened then quickly passed into publishing history. More than half a million hardback copies of the book were sold in both the United States and Britain, where the title went through twenty reprints by 1991 and remained in the best-seller lists for no fewer than 234 weeks, four and a half years. The book was an almost equally great success in Italy, Germany, Japan, and a host of other countries across the world, and Hawking quickly became the world’s most famous scientist. He was given his own television series, made cameo appearances in Hollywood films, and his public lectures filled theatres the size of the Albert Hall in London.17
There was one other unusual element in this story of success. In 1988 Hawking was aged forty-six, but in 1963, when he was twenty-one, he had been diagnosed as suffering from amyotrophic lateral sclerosis, ALS, also known (in the U.K.) as motor neurone disease and (in the United States) as Lou Gehrig’s disease, after the Yankee baseball player who died from it.18
What had begun as mere clumsiness at the end of 1962 had progressed over the intervening years so that by 1988 Hawking was confined to a wheelchair and able to communicate only by means of a special computer connected to a voice synthesiser. Despite these handicaps, in 1979 he had been appointed Lucasian Professor of Mathematics at Cambridge, a post that Isaac Newton had held before him; he had won the Einstein medal; and he had published a number of well-received academic books on gravity, relativity, and the structure of the universe. As Hawking’s biographers say, we shall never know to what extent Stephen Hawking’s considerable disability contributed to the popularity of his ideas, but there was something triumphant, even moving, in the way he overcame his handicap (in the late 1960s he had been given two years to live). He has never allowed his disability to deflect him from what he knows are science’s central intellectual concerns. These involve black holes, the concept of a ‘singularity,’ and the light they throw on the Big Bang; the possibility of multiple universes; and new ideas about gravity and the fabric of reality, in particular ‘string theory.’
It is with black holes that Hawking’s name is most indelibly linked. This idea, as mentioned earlier, was first broached in the 1960s. Black holes were envisaged as superdense objects, the result of a certain type of stellar evolution in which a large body collapses in on itself under the force of gravity to the point where nothing, not even light, can escape. The discovery of pulsars, quasars, neutron stars, and background radiation in the 1960s considerably broadened our understanding of this process, besides making it real, rather than theoretical. Working with Roger Penrose, another brilliant physicist, then at Birkbeck College in London, Hawking argued that at the centre of every black hole, as at the beginning of the universe, there must be a ‘singularity,’ a moment when matter is infinitely dense, infinitely small, and when the laws of physics as we know them break down. Hawking added to this the revolutionary idea that black holes could emit radiation (this became known as Hawking radiation) and, under certain conditions, explode.19
He also believed that, just as radio stars had been discovered in the 1960s thanks to new radio-telescopes, so X rays should be detectable from space via satellites above the atmosphere, which otherwise screened out such rays. Hawking’s reasoning was based on calculations showing that as matter was sucked into a black hole, it would get hot enough to emit X rays. Sure enough, four X-ray sources were subsequently identified in a survey of the heavens and became the first candidates for observable black holes. Hawking’s later calculations showed that, contrary to his first ideas, black holes did not remain stable but lost energy in the form of radiation, shrank, and eventually, after billions of years, exploded, possibly accounting for occasional and otherwise unexplained bursts of energy in the universe.20