The answer to the first question lies, in the UK at least, in a culture of public discussion that is to some extent hostile to science as a cultural force and as a political influence. Media coverage of science issues is always looking for an angle; precisely what the original documents and researchers, bound by notions of objectivity and accuracy, are in general seeking to avoid. Splashy headlines about the dangers of vaccinations are more interesting and emotive – and draw more eyeballs – than quiet explanations of why you cannot get flu from the flu shot (answer: because the shot does not contain a complete virus; it would be like running a car with two-thirds of the engine missing). A case in point is the issue of whether the swine flu vaccine can cause Guillain-Barré Syndrome, a nerve disorder that can be painful and lead to paralysis and even, in extreme cases, death. Influenza itself gives rise to the syndrome in a certain number of cases, but some studies seemed to show an increased incidence among those who received the vaccine. Headlines in the UK were frantic: the Daily Mail went with ‘Swine flu jab link to killer nerve disease’.[9] Actually, there was no clear link, and the largest subsequent study of the data appeared to show a slight drop in the incidence of Guillain-Barré among the vaccinated.[10]
Part of the problem is that we have few models in mainstream cultural life that interpret the way the world is, what it may become, or how we arrived at this point, with reference to science. The focus of culture – of theatre, fiction and art – is on personal, interior journeys and emotional and moral truths. These are what might be termed ‘eternal human stories’. It’s odd, but the science that played its part in bringing us here, and the technologies and ethos that go along with it and which create the world we inhabit, are barely represented either in these eternal stories or in public intellectual discourse. The classical Greek myth of Prometheus, the thief of fire, and the story of Icarus, which is in some ways the counterpart narrative, are re-told from time to time, but almost always as tragedies of over-reaching. Science is the place from which trouble comes; solutions derive from the human heart, with its capacity to balance the excesses of the brain.
In government, the situation is if anything worse. Science – providing as best it can statements of truth – is but one part of a decision-making process that must also satisfy or at least take into account the wild and inaccurate received positions of MPs and pressure groups. The tired anti-sex rhetoric of religious conservatives shades inevitably into a stout-hearted denial of compelling evidence that sex education reduces STDs and teen pregnancy, and for some unfathomable reason this denial of reality is not grounds for de-selection but for celebration. In the twenty-first century we still have elected leaders who choose policy on the basis of what they wish were true rather than what is known to be so. I’ve heard this called ‘policy-based evidence-making’; our politics, endlessly negotiating and compromising, has no space for the exigencies of the scientific world, so science often seems to carry the can, on the one hand for having the temerity to report facts that are unwanted, and on the other for generating technology that proves – as any widely adopted and significant technology must do – disruptive.
To make things worse, much of the scientific world is apprehended only by cognition. The truths of atomic structure and gravitation are not perceptible to human senses. Our natural understanding of the universe occurs at the human scale, where objects are made of solid chunks of matter, heat is heat rather than molecular vibration, and the sun rises (rather than the Earth rotating to reveal our star where it has always been in relation to our planet). Einstein’s world, in which our velocity affects our mass and the flow of time around us is different from that around a body moving more slowly, is a strangeness almost no one considers in their day-to-day lives. It simply makes no sense, so we don’t see it. The weirdness of quantum theory, in which information appears to travel backwards through time and a cat may be both alive and dead until an observation is made, is – in so far as it intrudes on our notice at all – a fictional device, a thought experiment or an opportunity for humour.
And yet, projects to construct the first quantum computers are under way even now, and so far seem likely to succeed. If they do, the world will change again, as processing becomes ridiculously rapid and previously imponderable mathematical problems can be dealt with in minutes. The practical upshots will be an end to current methods of public-key cryptography – which are used to secure everything from credit card transactions and diplomatic communications to air traffic control – and a huge boost to biological and medical research, not to mention physics. Climate modelling will get better, or at least faster. The list of things we cannot do will once more get shorter. And yet, almost no one is thinking about it, or, at least, not aloud. Has the Department of Health considered the budgetary implications? Has the Chancellor discussed the issue with the Governor of the Bank of England? If they have, they surely have not done so publicly. Why not? When these developments happen – if they do – the results will shunt us into another series of shifts in the way the world works, and we’ll have to adjust. It might help to see them coming up over the horizon.
They don’t talk about them because we as a society are unprepared for the discussion. Where once no one could be considered well-educated without a grounding in mathematics as well as literature, in biology as well as music, some time around the early 1900s the perception changed – at least in the UK.
F.R. Leavis, in reviewing H.G. Wells, argued that Wells should be considered a portent, a type, rather than a proper writer. Leavis also pre-echoed part of today’s angst about information technology: ‘the efficiency of the machinery becomes the ultimate value, and this seems to us to mean something very different from expanding and richer human life.’ That distinction is in my view fundamental to the discussion here: Leavis makes a separation between machinery – and by implication mechanisms and logic – and ‘richer human life’, which is achieved elsewhere.
The writer and physicist C.P. Snow retaliated that the mainstream intellectual culture was ‘behaving like a state whose power is rapidly declining’. The mood got worse from there, with Snow asserting that there was a growing schism between ‘feline’ literary culture – which he felt was redefining the term ‘intellectual’ to exclude figures like Ernest Rutherford (generally considered the father of nuclear physics) but include T.S. Eliot – and scientific culture, which was ‘heroic’, and heterosexual. Leavis replied that Snow’s pontifical tone was such that ‘while only genius could justify it, one cannot readily think of genius adopting it’. He went on to clarify, in case any scientists in the room might have missed the point, that he considered Snow ‘as intellectually undistinguished as it is possible to be’.
Leaving aside Snow’s evident homophobia as an ugly aspect of his time and a sorry anticipation of the hounding of Alan Turing after the Second World War, the spat has resonance today. The present literary establishment’s relationship with science is profoundly uncomfortable, and literary fiction predicated on science is rare, perhaps because any that touches upon science is liable to be reclassified as science fiction, and therefore not ‘intellectual’. The Time Machine, Brave New World and 1984 are all strikingly important novels, and all of them are pretty clearly science fiction, but it can be hard to get anyone to acknowledge that out loud. The science fiction aspect is generally dismissed as ‘the least important part’. Time has washed them; acknowledged importance has removed the uncomfortable trace of genre. And try telling anyone that Cold Comfort Farm – a novel written in 1932 about a near future some time after a 1946 war with Nicaragua, in which everyone communicates by video phone – is science fiction. Most people I talk to about it don’t remember the setting at all; it’s as if it just can’t possibly be there, so it never was.
When Jeanette Winterson wrote a novel with elements that could be read as science fiction, she had to fight a species of rearguard action against mutterings of uncertainty and disapproval, giving a rationale for including these taboo topics. She told New Scientist magazine in 2007:
I hate science fiction. But good writers about science, such as Jim Crace or Margaret Atwood, are great. They take on science because it’s crucial to our world, and they use language to give energy to ideas. But others just borrow from science and it ends up like the emperor’s new clothes, with no understanding of the material. But you shouldn’t fake it because science is too important, it’s the basis for our lives. I expect a lot more science in fiction because science is so rich.
Which sounds to me rather severe: the element of play, of wonder, that characterizes much science fiction and which brings science into the living world rather than making it something that can only be observed at a great distance, is missing.
Consider this rather different perspective: the writer Neil Gaiman, as a guest of China’s largest state-approved science fiction convention, wondered aloud why China had changed its mind about a genre it previously discouraged. (Science fiction, among its many other evils, has long been a way for cheeky dissidents in any country to express political, social and sexual ideas that would otherwise get them locked up.) Gaiman was told that China had researched the innovation powerhouses of the United States, and discovered that the common factor among all the companies of note in the technological arena was simple: people in those outfits read and were inspired by science fiction. So now China was encouraging its own people to read it, too, in order to become a creator of new technology rather than just an industrial powerhouse turning out tech products for the United States.[11]
The dispute doesn’t begin with Leavis and Snow, of course; it’s the clash of two competing interpretations of life. On the one hand, you have the Romantic movement, which is fundamentally mystical, seeks meaning in peak experiences, and considers all that is important in life to be poetic and irreducible. On the other, you have the Enlightenment, which believed everything would eventually be explained by science and reason, and promised a world founded upon clearly understandable principles of rational thought. Neither church has ever been able to deliver entirely, and the present situation is a typically modern compromise, a kind of patchwork in which both sides achieve primacy in a circumscribed arena: politics and daily life are generally governed, in those regions where the influence of these competing ideas is felt, by a sort of watered-down rationalism that is pragmatic above all, and which makes room for anti-scientific balderdash if it appeals to the popular perception. Appeals to idealism – a Romantic trait – are shrugged off as impracticable and naïve so that business may continue as usual. Culture, meanwhile, is owned by the mystical Romantic thread, suitably embellished with borrowings from psychoanalysis and science where appropriate, but still fundamentally touting a notion that some experience cannot be codified but must be lived, and that any attempt to replicate it is not only doomed to failure but, more importantly, a fundamental failure to understand the world.
And yet so often, our majority culture doesn’t talk about the sciences at all, seeing them as an irrelevance at best, and a distraction from real human truth at worst. This is a fundamental error. We as human beings are not separate from our tools or the environment we make with them. We are not separate from one another. We are individuals, yes, but individuals defined in part by our relationships with others and with what is around us. The investigation of the inner self is vital, but it is not comprehensive as a statement of who and what we are. We need to learn to speak the language of science and follow its logic, to incorporate it into our understanding of what is real and above all what is meaningful. It is definitive of our world, like it or not, unless we intend to drift back to pre-Pasteur medievalism and die at forty with no teeth. It is part of the human condition, in some ways definitive of us as creatures, that we reshape our environment, that we seek understanding of the universe – for control, yes, but also as part of who we are. We make our world, and any discourse of culture that ignores that aspect of us is as false as one that affords no importance to the interior life.
Which leaves my second question: why do some people react to any suggestion that the Internet and its related technologies may not be an a priori good as if it were a violent attack?
If our troubled relationship with science is partly to blame for the willingness of some to project the modern sense of confusion on to devices that emerged after that confusion had already settled upon us, what about the almost religious zeal with which others defend digital technology? The answer to that question is actually more interesting to me, because I think it goes to the heart of the Internet’s role in the human world and the relationship that currently exists – as well as the one we desperately need to forge – between ourselves, our society and our tools.
Between 1980 and the millennium, the Internet became a play space, an ‘anarchistic electronic freeway’. Looking once more at the 1993 Time article, it’s noticeable that both of the things mentioned by Glee Willis – family and sex – are private matters, things that belong to the home and the individual, not the state. They are aspects of the hearth, the personal space I discussed earlier, governed not by sternly codified laws or regulations (unless something goes very, very wrong) but by feelings of natural fairness, desire and emotional reciprocity. They are both venues for relaxation and non-cognitive satisfaction: for immanent, biological living. You could argue that that kind of living is what the hearth is, or is for, and the first online communities retained that ethos. They were, in the philosophical sense, naïve: they were unconsidered, and did not spend a huge amount of time examining their own meaning. They were just made up of people living, sharing experiences, helping one another, falling in love, rowing and fighting, and so on. In other words, many of the first colonists of the digital realm – those who arrived just after the frontiersmen from MIT – weren’t there for professional reasons. They did not erect a shopfront, because there was no one to sell to. They were homesteaders, and they extended the hearth into the online world, and they did so mostly not for intellectual interest but because it was fun. It was a strange new thing, and they went about it playfully.