Any infinity that can be placed in a one-to-one correspondence with the set of natural numbers is called a countable infinity. The set of “all the natural numbers except 1” is itself an example of a countable infinity. There are other kinds of infinity, which are larger than countable infinities and are called uncountable infinities. An example is the set of all possible decimal numbers (known to mathematicians as “real” numbers). We can see this by assigning decimal numbers in the range from 0 to 1 to the natural numbers, so that, for example, 1 is paired with 0.12345…, 2 is paired with 0.23456…, 3 is paired with 0.34567… and so on.
Now, make a new decimal number by making the first digit anything except the first digit of the first number (that is, not 1), the second digit anything except the second digit of the second number (not 3), the third digit anything except the third digit of the third number (not 5), and so on. This new decimal number will differ from the decimal number assigned to the natural number 1 in the first digit, from the number assigned to 2 in the second digit, from the number assigned to 3 in the third digit, and so on. So it does not correspond to any of the numbers that are paired with the natural numbers. And this is true for an uncountably large infinity of real numbers.
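The diagonal construction is mechanical enough to sketch in a few lines of Python. In the sketch below, the listed digits are the made-up pairings from the text plus two invented rows, and “add 1 to the digit” is just one convenient way of choosing a digit that is anything except the diagonal one:

```python
# A sketch of Cantor's diagonal construction. The listed digits are
# illustrative: the first three rows are the pairings from the text,
# the last two are invented.

listed = [
    "12345",  # digits of the decimal paired with 1: 0.12345...
    "23456",  # paired with 2: 0.23456...
    "34567",  # paired with 3: 0.34567...
    "45678",  # (invented) paired with 4
    "56789",  # (invented) paired with 5
]

def different_digit(d: str) -> str:
    """Return a digit guaranteed to differ from d."""
    return str((int(d) + 1) % 10)

# The nth digit of the new number differs from the nth digit of the nth
# entry, so the new number cannot appear anywhere in the list.
new_number = "0." + "".join(
    different_digit(row[i]) for i, row in enumerate(listed)
)
print(new_number)  # 0.24680, which differs from every listed decimal
```

However long the list is made, the same rule produces a decimal that is not on it, which is why no pairing with the natural numbers can ever exhaust the real numbers.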
Unfortunately, as far as we can tell the Multiverse consists of an uncountably infinite set of universes. That includes, as I have discussed in my book In Search of the Multiverse, such exotic possibilities as universes in which the laws of physics are different from those in our Universe. But, happily, if we restrict ourselves to the set of all universes in which the laws of physics are the same as in our Universe (itself an uncountably large infinite set) it is meaningful to talk in terms of, say, “half the universes” doing one thing and “half the universes” doing something else, even though each half is itself infinite. This is because the phenomenon of quantum interference involves interactions between universes (internal interactions within the Multiverse, making the Multiverse itself a single entity) that make such talk in terms of proportions or ratios meaningful. This way of measuring infinities is known, logically enough, as a “measure.” Deutsch has described the Multiverse as the set of all such universes evolving together, like a machine in which cogwheels are interconnected so that you cannot move one without moving the rest.
So when we talk about the cat in the box, we should not be thinking of a single cat in a single box, or even two cats in two boxes in two parallel worlds, but an uncountably infinite number of cats in an uncountably infinite number of boxes. In half of those parallel universes the cat dies, and in the other half it lives. For more subtle situations where there are different possible outcomes with different probabilities, it is meaningful to say that, for example, in 25 percent of the universes one history develops, in 60 percent an alternative history develops, and in 15 percent a third history develops. And that is why, in situations such as radioactive decay, in any one universe the outcome is entirely random (that is, it obeys the laws of probability) from the point of view of an observer in that universe.
There's another aspect of the infinite Multiverse that is worth mentioning, even though it is not directly relevant to the story of quantum computation. In Deutsch's words, “all fiction that does not violate the laws of physics is fact.” So all of Jane Austen's stories recount real events in parallel realities to our own; but The Lord of the Rings does not. And in an infinite number of universes there are writers busily producing what they regard as a fictional tale about a quantum physicist called David Deutsch who wrote a landmark paper about the concept of a universal quantum computer.
The connection between the Multiverse and the quantum computer is made explicit by Deutsch using a concept known as “fungibility,” taken from the lexicon of legal language. In law, two objects are deemed to be fungible if they are identical FAPP. The usual example is a banknote. If I borrow a £10 note from you and promise to pay it back on Friday, you do not expect me to give you back the same £10 note; any £10 note will do. All (genuine) £10 notes are fungible in law. But if I borrow your car for the afternoon to visit my cousin, you expect me to give you the same car back in the evening; cars are not fungible. Elaborating the financial theme slightly, my wife and I have a joint bank account. Half the money is mine, and half is hers. But there is no sense in which there are two piles of money in the bank, labeled “his” and “hers”, and there wouldn't be even if the money was literally “in the bank” and not merely an electronic record inside a computer.
Deutsch argues that universes with identical histories are literally and physically fungible. This goes beyond the concept of parallel universes, which implies running side by side, somehow separated in space, or superspace, or whatever you want to call it. Two or more fungible universes become differentiated when forced to by quantum phenomena, such as the experiment with two holes, leading to the diversity of universes within the Multiverse, but in accordance with the laws of probability, as discussed above. These universes can briefly interfere with one another as this process occurs. But quantum interference is suppressed by entanglement, so the larger and more complex an object is, the less it will be affected by interference. Which is why we have to set up careful experiments (like the one with half-silvered mirrors) to see interference at work, and why there are such large practical problems involved in building quantum computers, where entanglement has to be avoided except in special places. A universe like ours, suggests Deutsch, is really made up of a bundle of “coarse-grained” histories differing from each other only in sub-microscopic detail, but affecting each other through interference. Each such bundle of coarse-grained histories “somewhat resembles the universe of classical physics.” And these coarse-grained universes really do fit the description of parallel universes familiar from science fiction.
Which brings us back to the quantum computer, where, as I explained in the Introduction, processing involves qubits, not bits, of information. The key feature of a qubit is that it can exist not only in the state 0 or the state 1, but in a superposition of both states. But qubits have another important property: like other quantum entities, they can be entangled with one another. In carrying out a calculation using a quantum computer, the set of instructions (the program, or algorithm) initially sets out the “problem” by putting some of an array of qubits (the input register) into a superposition of states. If each qubit is regarded as a register for a number, then it stores two (or more) numbers simultaneously. During the next stage, a computation is carried out which spreads information to other qubits in the array (the output register), but not into the outside world (stopping the information leaking out too soon and being destroyed by entanglement is one of the key practical problems involved in building quantum computers). Crudely speaking, each component of the input register is entangled with a corresponding component of the output register, like the photons in the EPR experiment. The information has been processed simultaneously in all the histories represented by the superposition of states: in all the parallel universes, in everyday language. Finally, the qubits are allowed to interfere in a controlled fashion that provides a combination of the information from all those histories, an “answer.” The computer has carried out separate parts of the calculation in separate histories (separate universes) and produced an answer based on the interference between those histories.
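Superposition and entanglement are easy to show as state vectors. Here is a minimal numpy sketch, using standard quantum mechanics but with variable names of my own choosing:

```python
import numpy as np

# A minimal sketch of the two ingredients described above: superposition,
# made by a Hadamard gate, and entanglement, made by a controlled-NOT.

ket0 = np.array([1, 0], dtype=complex)          # the state 0
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard gate

# Superposition: one qubit holding both 0 and 1 at once.
qubit = H @ ket0
print(qubit)                 # equal amplitudes (~0.707) for 0 and for 1

# Entanglement: a CNOT ties an "input" qubit to an "output" qubit, a toy
# analogue of the link between input and output registers in the text.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

pair = CNOT @ np.kron(qubit, ket0)
print(pair)                  # ~0.707 on |00> and |11>: reading one qubit
                             # instantly tells you the other, as in EPR
```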
But quantum computers are not magic. Like all tools, they have their limitations. There are some kinds of problems that they can solve with ease that classical computers cannot begin to tackle on any timescale relevant to human activities; but there are other problems that are not susceptible to this approach. When quantum computers are good, they are very, very good; but when they are out of their depth, they are not so much bad as no better than classical computers. To explain this, I'll start with the good.
THE GOOD: CRACKING CODES CONVENIENTLY
The best example of the power of quantum computation is the one that has inspired (if that is the right word) large amounts of money to be spent on making it a practical reality: codebreaking. There is a pleasing resonance with the reason why so much effort was devoted to building Colossus, the first computer, but this time the impetus comes as much from big business as from the military.
Cryptography has, of course, moved on a long way since the days of Colossus and Enigma. Modern codes are devised specifically to be difficult, if not impossible, to break using classical computers. The classic example of an unbreakable code is the “one time pad” system, where two protagonists (usually known as Alice and Bob) each have a pad made up of sheets of paper, each listing a coding system: the first sheet might begin A codes as D, B codes as A, C codes as Z, and so on.
Alice encodes her message using the first sheet on the pad, then rips it out and destroys it. Bob receives the message and decodes it using the first sheet on his pad, then rips it out and destroys it. The next message uses sheet two of the code pad, and so on. The “one time” aspect is vital, because if the same code is used more than once an eavesdropper (Eve) can crack the code using the kind of techniques pioneered at Bletchley Park. This method is utterly secure as long as Eve doesn't get hold of a copy of the pad, but suffers from the difficulty of getting fresh pads to Alice and Bob as required. This used to involve trusted couriers carrying locked briefcases between embassies or the offices of global businesses in different parts of the world (and Enigma was essentially based on this kind of idea, generating its own “pads” as it went along).
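The mechanics are simple enough to sketch in Python. This is a minimal sketch of the textbook per-character form, in which each “sheet” is a run of truly random shifts as long as the message, used once and destroyed; the letter-substitution sheet described above is a simplified picture of the same idea, and all the names in the code are mine:

```python
import secrets
import string

# A minimal one-time pad sketch: each "sheet" is a list of truly random
# shifts, one per character, shared by Alice and Bob and used only once.

ALPHABET = string.ascii_uppercase

def make_sheet(length: int) -> list[int]:
    """One sheet of the pad: unpredictable shifts, one per character."""
    return [secrets.randbelow(26) for _ in range(length)]

def encode(message: str, sheet: list[int]) -> str:
    return "".join(
        ALPHABET[(ALPHABET.index(c) + k) % 26]
        for c, k in zip(message, sheet)
    )

def decode(ciphertext: str, sheet: list[int]) -> str:
    return "".join(
        ALPHABET[(ALPHABET.index(c) - k) % 26]
        for c, k in zip(ciphertext, sheet)
    )

message = "MEETATNOON"
sheet = make_sheet(len(message))  # Alice and Bob share this, then destroy it
ct = encode(message, sheet)
print(ct, "->", decode(ct, sheet))
```

Because every character is shifted by a fresh random amount, the ciphertext gives Eve no statistical foothold, which is exactly why reusing a sheet is fatal. But the system has been superseded by so-called public key cryptography.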
This technique uses a freely available system, the public key, to encode messages, which can only be decoded by the intended recipient using a private key unknown to anyone else, including the sender of the message. Anybody can encrypt messages using Alice's public key, but only Alice can decrypt them. The mathematics behind this technique was first worked out in 1974 by Ralph Merkle, a graduate student at the University of California, Berkeley, and developed by Whitfield Diffie and Martin Hellman at Stanford University. The system depends on generating a public key using a combination of two numbers (M and N) which are openly published and a third “secret” number (actually a third and a fourth number, since each cryptographer, sender and receiver, also has their own secret number). It is very difficult (though not actually impossible) to work out what the secret numbers are from M, N and the public key. The system was developed further in 1977 by a team at MIT: Ronald Rivest, Adi Shamir and Len Adleman. In this form, the cryptographic technique is known as the RSA algorithm, from their initials. Using RSA, all Alice has to do is publish a public key, which anyone can use to send her a message that she alone can read.
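The arithmetic at the heart of RSA fits in a few lines. Here is a toy sketch using the standard textbook primes 61 and 53, which are my choice rather than numbers from the text; real keys use primes hundreds of digits long, but the algorithm is otherwise identical:

```python
from math import gcd

# A toy RSA key pair. "Public" means (e, n) can be shouted from the
# rooftops; the private exponent d can only be computed by someone who
# knows the prime factors p and q of n.

p, q = 61, 53                  # the two secret primes
n = p * q                      # 3233: published as part of the public key
phi = (p - 1) * (q - 1)        # 3120: computable only if you know p and q
e = 17                         # public exponent; must be coprime to phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)            # 2753: the private key (Python 3.8+)

def encrypt(m: int) -> int:
    return pow(m, e, n)        # anyone can do this with the public (e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)        # only the holder of d can undo it

c = encrypt(42)
print(c, "->", decrypt(c))     # the message 42 comes back out
```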
The names of all of the people mentioned in the previous paragraph are enshrined in the history of cryptography. It was only in 1997 that the British government released previously secret records revealing that the whole package of ideas had been developed earlier by mathematicians working at GCHQ, the heirs to the Bletchley Park codebreakers, and kept under wraps for obvious reasons.
Without delving deeply into the full mathematical details, the secrecy of codes based on the RSA algorithm depends on the difficulty of finding the factors of a very large number. Factors are something we all learned about in school: the factors of the number 15, for example, are 3 and 5, because they are both primes and 3 × 5 = 15. Numbers can have more than two factors (3 × 5 × 2 = 30), but in the present context we are interested only in numbers which are obtained by multiplying two large prime numbers together. These primes are the factors of the resulting very large number. It's easy to find the factors of 15 by trial and error, but the difficulty grows exponentially as the size of the number increases. The reason, the only reason, Eve cannot crack Alice's code is because she does not know the factors of the large number used in Alice's public key. A key of 1,024 bits (one kilobit) would represent a decimal number with more than 300 digits; while such a number is simple to work with for coding purposes, finding its factors using a classical computer would take longer than the time that has elapsed since the Big Bang in which the Universe as we know it was born.
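To see concretely why factoring is the whole game, here is Eve's attack on the toy key from the sketch above, by brute-force trial division; the loop stops at the smallest prime factor, at most about √n steps, which is instant for n = 3233 but utterly infeasible for a 300-digit n:

```python
# Brute-force trial division recovers the private key from the toy
# public key (n = 3233, e = 17) in the sketch above.

def crack(n: int, e: int, ciphertext: int) -> int:
    for p in range(2, n):
        if n % p == 0:                      # found one factor...
            q = n // p                      # ...division gives the other
            d = pow(e, -1, (p - 1) * (q - 1))
            return pow(ciphertext, d, n)    # Eve reads the message

print(crack(3233, 17, pow(42, 17, 3233)))   # 42: the toy key falls at once
```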
No wonder big business and the military make use of the RSA algorithm, which is also the basis (using much shorter keys) of the secure systems used to preserve your privacy when you use your credit card over the World Wide Web. So imagine the mixture of horror and delight when it was realized that even a very simple quantum computer could factorize this kind of 300-digit number, thereby cracking the code, in about four minutes. More advanced machines could do it in milliseconds. Horror, because “our” secrets might be revealed; delight because “their” secrets might be revealed to us.
If a quantum computer is asked the question “What are the factors of 15?”, then after performing the calculation its output register contains information which can be processed using classical techniques to give answers corresponding to 3 and 5. When the output is read, it will give one of these numbers, and if you know one, you know the other, since dividing 15 by one factor gives its partner. Even with large numbers, by repeating the process it is easy to find all the factors.
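The classical bookkeeping wrapped around the quantum step can be sketched directly. In the following illustration for N = 15, the period-finding step, the one a quantum computer performs across all its histories at once, is done by brute force, which is only feasible because N is tiny:

```python
from math import gcd

# Shor's algorithm, classical parts only. The quantum computer's job is
# the hard middle step: finding the period r of a**x mod N. Everything
# else is ordinary arithmetic.

N, a = 15, 7                   # a is chosen to share no factor with N

# The step a quantum computer does exponentially faster:
r = 1
while pow(a, r, N) != 1:
    r += 1                     # for a = 7, N = 15 this gives r = 4

# Classical post-processing: turn the period into the factors.
assert r % 2 == 0
print(gcd(pow(a, r // 2) - 1, N),
      gcd(pow(a, r // 2) + 1, N))           # prints 3 and 5
```

But how does it perform the calculation?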
The breakthrough came from Peter Shor, of Bell Labs, in 1994. In his landmark paper of 1985, David Deutsch himself had given an example of a kind of problem susceptible to parallel processing that could be solved twice as fast by a quantum computer, using an algorithm he devised (the Deutsch algorithm), as by a classical computer; but this was not impressive enough to encourage efforts to build a working quantum computer, since it would be easier and cheaper to build two classical computers and use them working in parallel to solve such problems twice as fast. Deutsch was only interested in this example as proof of the reality of parallel universes.
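The Deutsch algorithm itself is small enough to simulate. Here is a minimal numpy sketch of its modern textbook form (my own rendering, not code from Deutsch's paper): deciding whether a one-bit function f is constant or balanced using a single call to f, where a classical computer needs two:

```python
import numpy as np

# A simulation of the Deutsch algorithm. Basis states of the two qubits
# are ordered |00>, |01>, |10>, |11>, i.e. index = 2*x + y.

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I2 = np.eye(2)

def oracle(f):
    """Matrix for |x, y> -> |x, y XOR f(x)>: the single call to f."""
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    return U

def deutsch(f):
    state = np.kron([1, 0], [0, 1])            # start in |0>|1>
    state = np.kron(H, H) @ state              # superpose both inputs
    state = oracle(f) @ state                  # one evaluation, "in both histories"
    state = np.kron(H, I2) @ state             # interfere the histories
    prob_one = state[2] ** 2 + state[3] ** 2   # measure the first qubit
    return "balanced" if prob_one > 0.5 else "constant"

print(deutsch(lambda x: 0))   # constant: f(0) = f(1) = 0
print(deutsch(lambda x: x))   # balanced: f(0) != f(1)
```

A classical computer must evaluate f twice to answer the question; the simulation settles it with one call to the oracle, which is precisely the factor-of-two saving described above. But the Shor algorithm was a whole new ball game.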