But I don't want to end this chapter on such a daunting note. The optimistic news is that quantum computers have already been built and have used Shor's algorithm to solve a factorization problem. To be sure, only in a modest way; but it's a beginning. In 2001, a team led by Isaac Chuang at the IBM Almaden Research Center in San Jose, California, used a different method of correcting, or rather compensating for, errors to find the factors of the number 15. The essence of their approach was to work with a molecule which contains five fluorine atoms and two carbon atoms, each of which has its own nuclear spin state.
This means that each single molecule can in effect be used as a seven-qubit quantum computer, equivalent to a classical computer with 2⁷ bits (128 bits). But they didn't work with just a single molecule. You can't clone a quantum entity into multiple copies of itself, but you can prepare a multitude of quantum entities all in the same state. They used about a thimbleful of liquid containing about a billion billion molecules, probed with pulses of radio-frequency electromagnetic waves and monitored using the technique of nuclear magnetic resonance (NMR) familiar today from hospital scanning systems (where it is known as magnetic resonance imaging, or MRI, because the word “nuclear” frightens some people).
Left to their own devices, the spins of the nuclei of the atoms in all those molecules “point” in different directions. For computational purposes they can be regarded as a random string of 0s and 1s. Applying the appropriate magnetic field changes the spins of all the nuclei inside some of the molecules, so that in about one in a hundred million of the molecules all seven nuclei are in the same state, say 1. Since a billion billion divided by a hundred million is 10 billion, that's like 10 billion identical computers all in the state 1111111. More subtle applications of the magnetic influence can flip one particular nucleus in each molecule, so that all 10 billion of them now read, say, 1111011. The pattern can be read by NMR because 10 billion molecules are giving out the same signal against a background of random noise produced by all the other molecules. And if, say, 10 percent of the 10 billion are in the “wrong” state because of errors, that just gets lost in the noise. The experimenters can flip the spin of each type of atom in all the molecules, effectively at will; but the atoms in each molecule interact with their neighbors, and the molecule used was chosen so that, for example, one of the nuclei will flip only if its neighbor is in the spin state 1, forming a CNOT gate. That is how the “computer” was made to factorize the number 15.
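The conditional flip just described is the defining behavior of a CNOT (controlled-NOT) gate. Here is a minimal sketch of that behavior on definite 0/1 values; a real quantum CNOT also acts on superpositions of these states, which plain classical code like this cannot capture:

```python
def cnot(control: int, target: int) -> tuple[int, int]:
    # The target flips exactly when the control is 1;
    # XOR with the control does precisely that.
    return control, target ^ control

for control in (0, 1):
    for target in (0, 1):
        print((control, target), "->", cnot(control, target))
# (0, 0) -> (0, 0)
# (0, 1) -> (0, 1)
# (1, 0) -> (1, 1)
# (1, 1) -> (1, 0)
```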
Of course, the experimenters were careful not to read the patterns in the molecules during the computation because that would make them decohere. They only looked at the pattern when the computation was complete. In effect, the “readout” from the computer averaged over all the molecules, with the huge number of right answers effectively swamping the much smaller number of errors introduced by decoherence and other difficulties. You will not be surprised to learn that the computer found that the factors of 15 are 3 and 5. But still, it worked. It even made the pages of the Guinness Book of Records.
Unfortunately, because of the limitations of the monitoring technique, the method will not work for molecules with more than about ten qubits, so at present there is no prospect of building a larger quantum computer using this method. But there are, as we shall see, realistic prospects of building bigger quantum computers using other techniques.
Everything I have described so far demonstrates that quantum computers work, not just in principle, but at a practical level. Individual qubits can be prepared and manipulated, with the aid of individual logic gates, including the vital CNOT gate. But the enormous challenge remains of constructing a quantum computer on a scale large enough to beat classical computers at the range of tasks, such as factorization of large numbers, for which quantum computers are suited. Even given the power of superposition, this will (conservatively) involve manipulating at the very least hundreds of qubits using dozens of gates, within the time limits set by decoherence and with inbuilt error correction. It's a sign of the immaturity of the field that many competing approaches are being tried out in an attempt to find one that works on the scale required. And it's a sign of how fast the field is developing that, while I was writing this book, a technology that had seemed like an also-ran when I started had emerged as one of the favorites by the time I got to this chapter. I've no idea what will seem the best bet by the time you read these words, so I shall simply set out a selection of the various stalls to give you a flavor of what is going on. The techniques include the trapped ion and nuclear magnetic resonance approaches that we have already met, superconductors and a quantum phenomenon known as the Josephson junction, so-called “quantum dots,” using photons of visible light as qubits, and a technique called cavity quantum electrodynamics (involving atoms).
THE KEY CRITERIA
If any of these techniques is to work as a “proper” quantum computer, as opposed to the “toy” versions so far constructed, it will have to satisfy five criteria, spelled out by David DiVincenzo, of IBM's Physics of Information group, at the beginning of the present century:
1.        Each qubit has to be well defined (“well characterized” in quantum jargon) and the whole system must be scalable to sufficiently large numbers. In computer jargon, each qubit must be separately addressable. It is also desirable, if possible, to have a single quantum system acting as different kinds of qubits, as with the single ion we met in the previous chapter that stores 1 from the energy mode and 0 from the rocking mode, giving a two-bit register.
2.        There has to be a way of initializing the computer by setting all the qubits to zero at the start of a computation (resetting the register). This may sound trivial, but it is a big problem for some techniques, including the NMR system that has proved so effective on a small scale. In addition, quantum error correction needs a continuous supply of fresh qubits in the 0 state, requiring what DiVincenzo calls a “qubit conveyor belt” that carries used qubits away from the region where computation is being carried out, initializes them, and brings them back once they have been set to 0.
3.        There has to be a way of solving the old problem of decoherence, or specifically, decoherence time. A classical computer is good for as long as the hardware lasts, and my wife is not alone in having a computer nearly ten years old that she is still entirely happy with. By contrast, a quantum computer, the virtual Turing machine inside the hardware, “lasts” for about a millionth of a second. In fact, this is not quite the whole story. What really matters are the relative values of the decoherence time and the time it takes for a gate to operate. The gate operation time may be pushed to a millionth of a millionth of a second, allowing for a million operations before complete decoherence occurs (see the arithmetic sketched after this list). Putting it another way, during the course of the operation of a gate, only one in a million qubits will “dephase.” This just about makes quantum computing feasible; or, in DiVincenzo's words, it is, “to tell the truth, a rather stringent condition.”
4.        As I have already discussed, we need reversible gates. In particular, we need to incorporate CNOT gates into the “circuitry” of the quantum computer, but these gates have short decoherence times and are difficult to construct. (Incidentally, if 3-bit gates, equivalent to Fredkin gates, could be made out of qubits, quantum computers would be more efficient; 2-bit gates are the minimum requirement, not the best in computational terms.) It is also necessary, of course, to turn the gates on and off as required. Quantum computer scientists have identified two potential problems that might be encountered. The first involves gates which are switched on by natural processes, as the computation proceeds, but which are hard to switch off; the second involves gates which have to be switched on (and off) from outside as required. “Outside” to the gate would be a “bus qubit” which would have to be able to interact with each of the qubits in the computer, and which would itself be prone to decoherence. But the bottom line is that quantum gates cannot be implemented perfectly, and errors are inevitable. The trick will be to minimize the errors and find ways of working around them.
5.        Finally, it has to be possible to measure the qubits in order to read out the “answer” to a problem. Inevitably, because this involves quantum processes, the measurement cannot give a unique answer with 100 percent accuracy, so it also has to be possible to repeat the computation as many times as is required to achieve the desired level of accuracy. This may not be too arduous. DiVincenzo points out that if the “quantum efficiency” is 90 percent, meaning that the “answer” is right nine times out of ten, then 97 percent reliability can be achieved just by running the calculation three times and taking the majority verdict (the numbers are checked in the sketch after this list).
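Both numerical claims above, the million operations in item 3 and the 97 percent in item 5, are easy to check. A minimal sketch of the arithmetic, using the figures quoted in the text and assuming (my reading, since the text does not spell out the voting rule) that three runs are combined by majority vote:

```python
from math import comb

# Item 3: how many gate operations fit inside the decoherence window.
decoherence_time = 1e-6  # seconds: "about a millionth of a second"
gate_time = 1e-12        # seconds: "a millionth of a millionth of a second"
print(decoherence_time / gate_time)  # 1e6, about a million operations

# Item 5: run the calculation three times and take the majority verdict.
# With a 90 percent chance of a right answer per run, the majority is
# right whenever at least two of the three runs are right.
p, runs = 0.9, 3
majority = sum(comb(runs, k) * p**k * (1 - p)**(runs - k)
               for k in range(2, runs + 1))
print(majority)  # 0.972, i.e. about 97 percent
```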
DiVincenzo also added two other criteria, not strictly relevant to computation itself, but important in any practical quantum computer. They are a result of the need for communication, by which, says DiVincenzo, “we mean quantum communication: the transmission of intact qubits from place to place.” This is achieved with so-called “flying qubits” that carry information from one part of a quantum computer to another part. The first criterion is the ability to convert stationary qubits into flying qubits, and vice versa; the second is to ensure that the flying qubits fly to the right places, faithfully linking specified locations inside the computer.
Nobody has yet found a system which achieves a satisfactory level for all five criteria at the same time, although ion trap devices come closest, having fulfilled all the criteria separately and several in conjunction with one another.
Some systems are (potentially) good in one or two departments, others are good in other departments. It may be that the best path will involve combining different techniques in some sort of hybrid quantum computer, to get around all these difficulties. But here are some of the contenders, as of the end of 2012, starting with one of my favorites.
JOSEPHSON AND THE JUNCTION
When I first started writing about quantum physics, I was particularly intrigued by work being carried out at Sussex University on Superconducting Quantum Interference Devices, or SQUIDs. These are, by the standards of quantum physics, very large (macroscopic) objects, a bit smaller than a wedding ring. Yet they can behave, under the right circumstances, like single quantum entities, which is what makes them so fascinating. Now, three decades after I first wrote about them, they have the potential to contribute to the construction of quantum computers. They are based on a phenomenon known as the Josephson effect, discovered in 1962 by a 22-year-old student, who later received a Nobel Prize for the work.
Brian Josephson was born in 1940 in Cardiff. He was educated at the Cardiff High School for Boys, at the time a grammar school (it has since merged with two other schools to form the modern comprehensive Cardiff High School). From there he went on to Trinity College, Cambridge, at the age of seventeen, graduating in 1960. As an undergraduate he had already published a significant scientific paper concerning a phenomenon known as the Mössbauer effect, and was marked out as a high flier. Josephson stayed on in Cambridge to work for a PhD, awarded in 1964, two years after his Nobel Prize-winning breakthrough; also in 1962, while still a student, he was elected a Fellow of Trinity. After completing his PhD, Josephson spent a year as a visitor at the University of Illinois before returning to Cambridge, where he stayed for the rest of his career (apart from brief visits to universities around the world), becoming a professor in 1974 and retiring in 2007. But after he encountered Bell's theorem in the mid-1960s, Josephson drifted away from mainstream physics and became increasingly intrigued by “mind-matter interactions,” directing the Mind-Matter Unification Project at the Cavendish Laboratory (a Nobel Prize allows researchers considerable leeway in later life), studying Eastern mysticism, and becoming convinced that entanglement provides an explanation for telepathy. Most physicists regard this as complete rubbish, and feel that Josephson's brilliant mind was essentially lost to physics by the time of the award of his Nobel Prize in 1973.
When the Royal Mail produced a set of stamps to mark the centenary, in 2001, of the Nobel Prizes, they asked laureates, including Josephson, to contribute their thoughts on their own field of study. In his comments, Josephson referred to the possibility of “an explanation of processes still not understood within conventional science, such as telepathy.” This provoked a fierce response from several physicists, including David Deutsch, who said: “It is utter rubbish. Telepathy simply does not exist… complete nonsense.”
But none of this detracts from the importance of the discovery Josephson made in 1962, which is straightforward to describe but runs completely counter to common sense. He was studying the phenomenon of superconductivity, which had fascinated him since he was an undergraduate. This happens in some materials when they are cooled below their “critical temperature,” at which point they have no electrical resistance at all. It was discovered in 1911 by Kamerlingh Onnes, in Leiden; but although clearly a quantum effect, it was still not fully understood in 1962.
Josephson found that, according to the equations of quantum physics, under the right circumstances, once started, a current would flow forever through a superconductor without any further voltage being applied. The “right circumstances” involve what have become known as Josephson junctions: two superconductors joined by a “weak link” of another kind of material, through which electrons can tunnel. There are three possible forms of this junction: first, superconductor-insulator-superconductor, or S-I-S; secondly, superconductor-nonsuperconductor-superconductor, or S-N-S; and finally one with a literally weak link in the form of a thin neck of the superconductor itself, known as S-s-S.
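For readers who want the quantitative statement, the standard textbook form of what Josephson found is the pair of relations below (my addition; the text itself stays qualitative). Here I_c is the maximum “critical” current the junction can carry, φ is the quantum phase difference between the two superconductors, V is the voltage across the junction, e is the electron charge and ℏ the reduced Planck constant:

```latex
I = I_c \sin\varphi
    \qquad\text{(DC effect: a current flows with no applied voltage)}

\frac{d\varphi}{dt} = \frac{2eV}{\hbar}
    \qquad\text{(AC effect: a steady voltage makes the current oscillate)}
```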
The story of how Josephson came up with his insight has been well documented, notably by Josephson himself in his Nobel lecture, and by Philip Anderson (himself a later Nobel Prize winner), who was visiting Cambridge from Bell Labs in 1962. Anderson wrote about the discovery of the Josephson effect in an article in the November 1970 issue of Physics Today, recounting how he met Josephson when the student, nearly seventeen years his junior, attended a course he gave on solid-state and many-body theory: “This was a disconcerting experience for a lecturer, I can assure you, because everything had to be right or he would come up and explain it to me after class.” Josephson had learned about experiments involving tunneling in superconductors, and was working on the underlying theory when “one day Anderson showed me a preprint he had just received from Chicago in which Cohen, Falicov and Phillips calculated the current flowing in a superconductor-barrier-normal-metal system…. I immediately set to work to extend the calculation to a system in which both sides of the barrier were superconducting.” Josephson discussed his work with Anderson, who was encouraging but, he emphasizes, made no direct contribution: “I want to emphasize that the whole achievement, from the conception to the explicit calculation in the publication, was entirely Josephson's… this young man of twenty-two conceived the entire thing and carried it through to a successful conclusion.”