
There is something even more significant that Turing and von Neumann left us to ponder. How does our kind of intelligence “work” in the first place? Each of them was convinced that an essential feature of the human kind of intelligence is the capacity for error. In a lecture he gave in February 1947, Turing said:

…fair play must be given to the machine. Instead of it sometimes giving no answer we could arrange that it gives occasional wrong answers. But the human mathematician would likewise make blunders when trying out new techniques. It is easy for us to regard these blunders as not counting and give him another chance, but the machine would probably be allowed no mercy. In other words, then, if a machine is expected to be infallible, it cannot also be intelligent.[20]

Wrapped up in this passage are two of Turing's ideas about our kind of intelligence. One is the process of learning by trial and error, the way, say, that a baby learns to walk and talk. We make mistakes, but we learn from the mistakes and make fewer errors of the same kind as time passes. His dream was to have a computer prepared in a blank state, capable of learning about its environment and growing mentally as it did so. That dream is now becoming a reality, at least in a limited sense—for example, with robots that learn how their arms move by watching their own image in a mirror. The second idea concerns intuition, and the way humans can sometimes reach correct conclusions on the basis of limited information, without going through all the logical steps from A to Z. A computer programmed to take all those logical steps could never make the leap if some steps were missing.

Von Neumann shared this view of the importance of errors. In lectures he gave at Caltech in 1952, later published as a contribution to a volume edited by John McCarthy and Claude Shannon,[21] he said:

Error is viewed, therefore, not as an extraneous and misdirected or misdirecting accident, but as an essential part of the process.

If the capacity to make mistakes of the kind just discussed is what distinguishes the human kind of intelligence from the machine kind of intelligence, would it ever be possible to program a classical computer, based on the principles involved in the kind of machines discussed so far, to make deliberate mistakes and become intelligent like us? I think not, for reasons that will become clear in the rest of this book, but basically because I believe that the mistakes need to be more fundamental—part of the physics rather than part of the programming. But I also think it will indeed soon be possible to build non-classical machines with the kind of intelligence that we have, and the capacity for intellectual growth that Turing dreamed of.

Two questions that von Neumann himself raised are relevant to these ideas, and to the idea of spacefaring von Neumann machines:

Can the construction of automata by automata progress from simpler types to increasingly complicated types?

and

Assuming some suitable definition of efficiency, can this evolution go from less efficient to more efficient automata?

That provides plenty of food for thought about the future of computing and self-reproducing robots. But I'll leave the last word on Johnny von Neumann to Jacob Bronowski, no dullard himself, who described him as “the cleverest man I ever knew, without exception…but not a modest man.” I guess he had little to be modest about.

In the decades since EDSAC calculated the squares of the numbers from 0 to 99, computers have steadily got more powerful, faster and cheaper. Glowing valves have been replaced by transistors and then by chips, each of which contains the equivalent of many transistors; data storage on punched cards has been superseded by magnetic tape and discs, and then by solid state memory devices. Even so, the functioning of computers based on all of these innovations would be familiar to the pioneers of the 1940s, just as the functioning of a modern airplane would be familiar to the designers of the Hurricane and Spitfire. But the process cannot go on indefinitely; there are limits to how powerful, fast and cheap a “classical” computer can be.

One way of getting a handle on these ideas is in terms of a phenomenon known as Moore's Law, after Gordon Moore, one of the founders of Intel, who pointed it out in 1965. It isn't really a “law” so much as a trend. In its original form, Moore's Law said that the number of transistors on a single silicon chip doubles every year; with another half-century of observation of the trend, today it is usually quoted as a doubling every eighteen months. And to put that in perspective, the number of transistors per chip has now passed the billion mark. That's like a billion-valve Manchester Baby or EDVAC on a single chip, occupying an area of a few hundred square millimeters.[1] At the same time, the cost of individual chips has plunged, they have become more reliable and their use of energy has become more efficient.
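To get a feel for how quickly an eighteen-month doubling compounds, here is a minimal sketch of the arithmetic; the starting count of 1,000 transistors and the chosen year spans are illustrative assumptions, not figures from this chapter. The growth factor after t years is simply 2 raised to the power t/1.5, which works out to roughly a thousandfold every fifteen years.

    # A minimal sketch of the doubling arithmetic described above.
    # With a doubling period of eighteen months (1.5 years), the growth
    # factor after a given number of years is 2 ** (years / 1.5).
    # The starting count of 1,000 transistors is purely illustrative.
    def growth_factor(years, doubling_period_years=1.5):
        return 2 ** (years / doubling_period_years)

    start_count = 1_000  # hypothetical transistor count at year zero
    for years in (0, 3, 15, 30, 45):
        factor = growth_factor(years)
        print(f"after {years:2d} years: x{factor:>13,.0f} "
              f"-> {start_count * factor:,.0f} transistors")

Fifteen years of eighteen-month doublings multiply the count by 1,024; thirty years, by about a million; forty-five years, by about a billion. That is the sense in which a roomful of valves ends up on a single chip.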

But there are problems at both ends of the scale. Although the cost of an individual chip is tiny, the cost of setting up a plant to manufacture chips is huge. The production process involves using lasers to etch patterns on silicon wafers on a tiny scale, in rooms which have to be kept scrupulously clean and free from any kind of contamination. Allowing for the cost of setting up such a plant, the cost of making a single example of a new type of chip is in the billions of dollars; but once you have made one, you can turn out identical chips at virtually no unit cost at all.

There's another large-scale problem that applies to the way we use computers today. Increasingly, data and even programs are stored in the Cloud. “Data” in this sense includes your photos, books, favorite movies, e-mails and just about everything else you have “on your computer.” And “computer,” as I have stressed, includes the Turing machine in your pocket. Many users of smartphones and tablets probably neither know nor care that this actually means that the data are stored on very large computers, far from where you or I are using our Turing machines. But those large computer installations have two problems. They need a lot of energy in the form of electricity; and because no machine is 100 percent efficient they release a lot of waste energy, in the form of heat. So favored locations for the physical machinery that represents the ephemeral image of the Cloud are places like Iceland and Norway, where there is cheap electricity (geothermal or just hydroelectric) and it is cold outside. Neither of these problems of the large scale is strictly relevant to the story I am telling here, but it is worth being aware that there must be limits to such growth, even if we cannot yet see where those limits are.

It is on the small scale that we can already see the limits to Moore's Law, at least as it applies to classical computers. Doubling at regular intervals—whether the interval is a year, eighteen months or some other time step—is an exponential process which cannot continue indefinitely. The classic example of runaway exponential growth is the legend of the man who invented chess. The story tells us that the game was invented in India during the sixth century by a man named Sissa ben Dahir al-Hindi to amuse his king, Sihram. The king was so pleased with the new game that he allowed Sissa to choose his own reward. Sissa asked for either 10,000 rupees or 1 grain of corn for the first square of the chess board, two for the second square, four for the third square and so on, doubling the number for each square. The king, thinking he was getting off lightly, chose the second option. But the number of grains Sissa had requested amounted to 18,446,744,073,709,551,615—enough, Sissa told his king, to cover the whole surface of the Earth “to the depth of the twentieth part of a cubit.” Alas, that's where the story ends, and we don't know what became of Sissa, or even if the story is true. But either way, the numbers are correct; and the point is that exponential growth cannot continue indefinitely or it would consume the entire resources not just of the Earth but of the Universe. So where are the limits to Moore's Law?
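Sissa's total is easy to check: it is the sum of a geometric series, one grain doubled across all sixty-four squares, which comes to 2^64 − 1. A minimal sketch of that check, written in Python purely for illustration:

    # Quick check of the chessboard figure quoted above: one grain on the
    # first square, doubling on each of the 64 squares, then summed.
    grains = sum(2 ** square for square in range(64))  # 1 + 2 + ... + 2**63
    assert grains == 2 ** 64 - 1
    print(f"{grains:,}")  # prints 18,446,744,073,709,551,615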

At the beginning of the twenty-first century, the switches that turned individual transistors on microchips on and off—the equivalent of the individual electromechanical relays in the Zuse machines or the individual valves in Colossus—involved the movement of a few hundred electrons. Ten years later, it involved a few dozen.[2] We are rapidly approaching the stage where an individual on-off switch in a computer, the binary 0 or 1 at the heart of computation and memory storage, is controlled by the behavior of a single electron, sitting in (or on) a single atom; indeed, in 2012, while this book was in preparation, a team headed by Martin Fuechsle, of the University of New South Wales, announced that they had made a transistor from a single atom. This laboratory achievement is only a first step towards putting such devices on your smartphone, but it must herald a limit to Moore's Law as we have known it, simply because miniaturization can go no further; there is nothing smaller than an electron that could do the job in the same way. If there is to be future progress in the same direction, it will depend on something new, such as using photons to do the switching: computers based on optics rather than on electricity.

There is, though, another reason why the use of single-electron switches takes us beyond the realm of classical computing. Electrons are quintessentially quantum entities, obeying the rules of quantum mechanics rather than the rules of classical (Newtonian) mechanics. They sometimes behave like particles, but sometimes behave like waves, and they cannot be located at a definite point in space at a definite moment of time. And, crucially, there is a sense in which you cannot say whether such a switch is on or off—whether it is recording a 1 or a 0. At this level, errors are inevitable, although below a certain frequency of mistakes they might be tolerated. Even a “classical” computer using single-electron switches would have to be constructed to take account of the quantum behavior of electrons. But as we shall see, these very properties themselves suggest a way to go beyond the classical limits into something completely different, making quantum indeterminacy an asset rather than a liability.

In December 1959, Richard Feynman gave a now-famous talk with the title “There's Plenty of Room at the Bottom,”[3] in which he pointed the way towards what we now call nanotechnology, the ultimate forms of miniaturization of machinery. Towards the end of that talk, he said:

When we get to the very, very small world—say circuits of seven atoms—we have a lot of new things that would happen that represent completely new opportunities for design. Atoms on a small scale behave like nothing on a large scale, for they satisfy the laws of quantum mechanics. So, as we go down and fiddle around with the atoms down there, we are working with different laws, and we can expect to do different things. We can manufacture in different ways. We can use, not just circuits, but some system involving the quantized energy levels, or the interactions of quantized spins, etc.

As I have mentioned, just half a century later we have indeed now got down to the level of “circuits of seven atoms”; so it is time to look at the implications of those laws of quantum mechanics. And there is no better way to look at them than through the eyes, and work, of Feynman himself.

[Figure: The “interference pattern” built up by electrons passing one at a time through “the experiment with two holes.” How do they know where to go?]
