
Babbage originally had no conception of computational universality. Nevertheless, the Difference Engine already comes remarkably close to it – not in its repertoire of computations, but in its physical constitution. To program it to print out a given table, one initializes certain cogs. Babbage eventually realized that this programming phase could itself be automated: the settings could be prepared on punched cards like Jacquard’s, and transferred mechanically into the cogs. This would not only remove the main remaining source of error, but also increase the machine’s repertoire. Babbage then realized that if the machine could also punch new cards for its own later use, and could control which punched card it would read next (say, by choosing from a stack of them, depending on the position of its cogs), then something qualitatively new would happen: the jump to universality.
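The essential step can be illustrated with a minimal sketch in Python (the card format, the operations and the registers here are invented for illustration; they are not Babbage's design). With a fixed sequence of cards the machine can run only one straight-line calculation; once the card to be read next can depend on the state of the cogs, loops and branching – and hence arbitrary programs – become possible:

    def run(cards, cogs):
        """Execute a stack of 'cards' on a machine whose state is 'cogs'."""
        card = 0
        while card < len(cards):
            op, *args = cards[card]
            if op == 'add':                    # cogs[a] += cogs[b]
                cogs[args[0]] += cogs[args[1]]
            elif op == 'dec':                  # cogs[a] -= 1
                cogs[args[0]] -= 1
            elif op == 'jump_if':              # read card t next if cogs[a] > 0
                if cogs[args[0]] > 0:
                    card = args[1]
                    continue
            card += 1
        return cogs

    # Multiplication by repeated addition: a three-card loop.
    # x = 7, counter = 6, accumulator = 0  ->  accumulator = 42
    print(run([('add', 2, 0), ('dec', 1), ('jump_if', 1, 0)],
              {0: 7, 1: 6, 2: 0}))

The 'jump_if' card is doing all the work: it is the software analogue of choosing the next punched card from a stack depending on the position of the cogs.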

Babbage called this improved machine the Analytical Engine. He and his colleague the mathematician Ada, Countess of Lovelace, knew that it would be capable of computing anything that human ‘computers’ could, and that this included more than just arithmetic: it could do algebra, play chess, compose music, process images and so on. It would be what is today called a universal classical computer. (I shall explain the significance of the proviso ‘classical’ in Chapter 11, when I discuss quantum computers, which operate at a still higher level of universality.)

Neither they nor anyone else for over a century afterwards imagined today’s most common uses of computation, such as the internet, word processing, database searching, and games. But another important application that they did foresee was making scientific predictions. The Analytical Engine would be a universal simulator – able to predict the behaviour, to any desired accuracy, of any physical object, given the relevant laws of physics. This is the universality that I mentioned in Chapter 3, through which physical objects that are unlike each other and dominated by different laws of physics (such as brains and quasars) can exhibit the same mathematical relationships.

Babbage and Lovelace were Enlightenment people, and so they understood that the universality of the Analytical Engine would make it an epoch-making technology. Even so, despite great efforts, they failed to pass their enthusiasm on to more than a handful of others, who in turn failed to pass it to anyone. And so the Analytical Engine became one of the tragic might-have-beens of history. If only they had looked around for other implementations, they might have realized that the perfect one was already waiting for them: electrical relays (switches controlled by electric currents). These had been one of the first applications of fundamental research into electromagnetism, and they were about to be mass produced for the technological revolution of telegraphy. A redesigned Analytical Engine, using on/off electrical currents to represent binary digits and relays to do the computation, would have been faster than Babbage’s and also cheaper and easier to construct. (Binary numbers were already well known. The mathematician and philosopher Gottfried Wilhelm Leibniz had even suggested using them for mechanical calculation in the seventeenth century.) So the computer revolution would have happened a century earlier than it did. Because of the technologies of telegraphy and printing that were being developed concurrently, an internet revolution might well have followed. The science-fiction authors William Gibson and Bruce Sterling, in their novel The Difference Engine, have given an exciting account of what that might have been like. The journalist Tom Standage, in his book The Victorian Internet, maintains that the early telegraph system, even without computers, did create an internet-like phenomenon among the operators, with ‘hackers, on-line romances and weddings, chat-rooms, flame wars . . . and so on’.

Babbage and Lovelace also thought about one application of universal computers that has not been achieved to this day, namely so-called artificial intelligence (AI). Since human brains are physical objects obeying the laws of physics, and since the Analytical Engine is a universal simulator, it could be programmed to think, in every sense that humans can (albeit very slowly and requiring an impractically vast number of punched cards). Nevertheless, Babbage and Lovelace denied that it could. Lovelace argued that ‘The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths.’

The mathematician and computer pioneer Alan Turing later called this mistake ‘Lady Lovelace’s objection’. It was not computational universality that Lovelace failed to appreciate, but the universality of the laws of physics. Science at the time had almost no knowledge of the physics of the brain. Also, Darwin’s theory of evolution had not yet been published, and supernatural accounts of the nature of human beings were still prevalent. Today there is less mitigation for the minority of scientists and philosophers who still believe that AI is unattainable. For instance, the philosopher John Searle has placed the AI project in the following historical perspective: for centuries, some people have tried to explain the mind in mechanical terms, using similes and metaphors based on the most complex machines of the day. First the brain was supposed to be like an immensely complicated set of gears and levers. Then it was hydraulic pipes, then steam engines, then telephone exchanges – and, now that computers are our most impressive technology, brains are said to be computers. But this is still no more than a metaphor, says Searle, and there is no more reason to expect the brain to be a computer than a steam engine.

But there is. A steam engine is not a universal simulator. But a computer is, so expecting it to be able to do whatever neurons can is not a metaphor: it is a known and proven property of the laws of physics as best we know them. (And, as it happens, hydraulic pipes could also be made into a universal classical computer, and so could gears and levers, as Babbage showed.)

Ironically, Lady Lovelace’s objection has almost the same logic as Douglas Hofstadter’s argument for reductionism (Chapter 5) – yet Hofstadter is one of today’s foremost proponents of the possibility of AI. That is because both of them share the mistaken premise that low-level computational steps cannot possibly add up to a higher-level ‘I’ that affects anything. The difference between them is that they chose opposite horns of the dilemma that that poses: Lovelace chose the false conclusion that AI is impossible, while Hofstadter chose the false conclusion that no such ‘I’ can exist.

Because of Babbage’s failure either to build a universal computer or to persuade others to do so, an entire century would pass before the first one was built. During that time, what happened was more like the ancient history of universality: although calculating machines similar to the Difference Engine were being built by others even before Babbage had given up, the Analytical Engine was almost entirely ignored even by mathematicians.

In 1936 Turing developed the definitive theory of universal classical computers. His motivation was not to build such a computer, but only to use the theory abstractly to study the nature of mathematical proof. And when the first universal computers were built, a few years later, it was, again, not out of any special intention to implement universality. They were built in Britain and the United States during the Second World War for specific wartime applications. The British computers, named Colossus (in which Turing was involved), were used for code-breaking; the American one, ENIAC, was designed to solve the equations needed for aiming large guns. The technology used in both was electronic vacuum tubes, which acted like relays but about a hundred times as fast. At the same time, in Germany, the engineer Konrad Zuse was building a programmable calculator out of relays – just as Babbage should have done. All three of these devices had the technological features necessary to be a universal computer, but none of them was quite configured for this. In the event, the Colossus machines never did anything but code-breaking, and most were dismantled after the war. Zuse’s machine was destroyed by Allied bombing. But ENIAC was allowed to jump to universality: after the war it was put to diverse uses for which it had never been designed, such as weather forecasting and the hydrogen-bomb project.

The history of electronic technology since the Second World War has been dominated by miniaturization, with ever more microscopic switches being implemented in each new device. These improvements led to a jump to universality in about 1970, when several companies independently produced a microprocessor, a universal classical computer on a single silicon chip. From then on, designers of any information-processing device could start with a microprocessor and then customize it – program it – to perform the specific tasks needed for that device. Today, your washing machine is almost certainly controlled by a computer that could be programmed to do astrophysics or word processing instead, if it were given suitable input–output devices and enough memory to hold the necessary data.

It is a remarkable fact that, in that sense (that is to say, ignoring issues of speed, memory capacity and input–output devices), the human ‘computers’ of old, the steam-powered Analytical Engine with its literal bells and whistles, the room-sized vacuum-tube computers of the Second World War, and present-day supercomputers all have an identical repertoire of computations.

Another thing that they have in common is that they are all digital: they operate on information in the form of discrete values of physical variables, such as electronic switches being on or off, or cogs being at one of ten positions. The alternative, ‘analogue’, computers, such as slide rules, which represent information as continuous physical variables, were once ubiquitous but are hardly ever used today. That is because a modern digital computer can be programmed to imitate any of them, and to outperform them in almost any application. The jump to universality in digital computers has left analogue computation behind. That was inevitable, because there is no such thing as a universal analogue computer.

That is because of the need for error correction: during lengthy computations, the accumulation of errors due to things like imperfectly constructed components, thermal fluctuations, and random outside influences makes analogue computers wander off the intended computational path. This may sound like a minor or parochial consideration. But it is quite the opposite. Without error-correction all information processing, and hence all knowledge-creation, is necessarily bounded. Error-correction is the beginning of infinity.

For example, tallying is universal only if it is digital. Imagine that some ancient goatherds had tried to tally the total length of their flock instead of the number. As each goat left the enclosure, they could reel out some string of the same length as the goat. Later, when the goats returned, they could reel that length back in. When the whole length had been reeled back in, that would mean that all the goats had returned. But in practice the outcome would always be at least a little long or short, because of the accumulation of measurement errors. For any given accuracy of measurement, there would be a maximum number of goats that could be reliably tallied by this ‘analogue tallying’ system. The same would be true of all arithmetic performed with those ‘tallies’. Whenever the strings representing several flocks were added together, or a string was cut in two to record the splitting of a flock, and whenever a string was ‘copied’ by making another of the same length, there would be errors. One could mitigate their effect by performing each operation many times, and then keeping only the outcome of median length. But the operations of comparing or duplicating lengths can themselves be performed only with finite accuracy, and so could not reduce the rate of error accumulation per step below that level of accuracy. That would impose a maximum number of consecutive operations that could be performed before the result became useless for a given purpose – which is why analogue computation can never be universal.
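The accumulation can be made vivid with a short simulation in Python (a sketch only: the goat lengths, units and error model are invented for illustration). Each reel-out and reel-in is off by a small random amount, and nothing in the scheme ever removes those errors:

    import random

    random.seed(0)
    ERR = 0.01    # measurement error per operation, in metres

    def tally_analogue(goat_lengths):
        """Reel string out as goats leave, back in as they return."""
        string = 0.0
        for length in goat_lengths:
            string += length + random.uniform(-ERR, ERR)   # reel out
        for length in goat_lengths:
            string -= length + random.uniform(-ERR, ERR)   # reel in
        return string    # exactly 0.0 would mean 'all goats returned'

    for flock_size in (10, 1000, 100000):
        goats = [random.uniform(0.5, 1.5) for _ in range(flock_size)]
        print(flock_size, abs(tally_analogue(goats)))

The leftover string grows roughly as the square root of the number of operations, so beyond some flock size it becomes comparable to the length of a whole goat, and the tally can no longer distinguish ‘all returned’ from ‘one missing’.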

What is needed is a system that takes for granted that errors will occur, but corrects them once they do – a case of ‘problems are inevitable, but they are soluble’ at the lowest level of information-processing emergence. But, in analogue computation, error correction runs into the basic logical problem that there is no way of distinguishing an erroneous value from a correct one at sight, because it is in the very nature of analogue computation that every value could be correct. Any length of string might be the right length.

And that is not so in a computation that confines itself to whole numbers. Using the same string, we might represent whole numbers as lengths of string in whole numbers of inches. After each step, we trim or lengthen the resulting strings to the nearest inch. Then errors would no longer accumulate. For example, suppose that the measurements could all be done to a tolerance of a tenth of an inch. Since no single step could then introduce an error of more than a tenth of an inch – far less than the half-inch beyond which rounding would pick the wrong whole number – all errors would be detected and eliminated after each step, which would eliminate the limit on the number of consecutive steps.
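A sketch of the same scheme in code (the inch units and the repeated-addition example are illustrative only) shows why the digital version does not drift while the analogue one does:

    import random

    random.seed(0)
    ERR = 0.1    # per-step measurement tolerance, in inches

    def add_one_inch(length, digital):
        result = length + 1 + random.uniform(-ERR, ERR)   # imperfect step
        return round(result) if digital else result       # trim/lengthen to nearest inch

    analogue = digital = 0
    for _ in range(1_000_000):
        analogue = add_one_inch(analogue, digital=False)
        digital = add_one_inch(digital, digital=True)

    print(analogue)   # drifts tens of inches away from 1,000,000
    print(digital)    # exactly 1000000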

So all universal computers are digital; and all use error-correction with the same basic logic that I have just described, though with many different implementations. Thus Babbage’s computers assigned only ten different meanings to the whole continuum of angles at which a cogwheel might be oriented. Making the representation digital in that way allowed the cogs to carry out error-correction automatically: after each step, any slight drift in the orientation of the wheel away from its ten ideal positions would immediately be corrected back to the nearest one as it clicked into place. Assigning meanings to the whole continuum of angles would nominally have allowed each wheel to carry (infinitely) more information; but, in reality, information that cannot be reliably retrieved is not really being stored.
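In code, the cogs’ error correction amounts to a one-line rounding operation (a sketch, of course – Babbage’s engines did this mechanically, with no program at all):

    def click(angle_degrees):
        """Snap a cogwheel to the nearest of its ten ideal positions (36 degrees apart)."""
        return (round(angle_degrees / 36.0) % 10) * 36.0

    print(click(74.1))    # slight drift from position 2 -> 72.0
    print(click(107.8))   # slight drift from position 3 -> 108.0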
