Computing with Quantum Cats

Another variation on the NOT gate turns out to be a crucial component in quantum computation. This is the quantum CNOT (Controlled-NOT) gate: a two-qubit gate in which the control bit, C, stays the same during an operation, while the second (target) bit flips (from 0 to 1, or from 1 to 0) if C is 1 and stays the same if C is 0. In reversible classical computation, it takes a set of three-bit gates, such as Fredkin gates, to build any logic circuit. But much to the relief of experimenters trying to build a quantum computer, it turns out that a set of quantum CNOT gates, each made up of just two qubits, plus a few single-qubit gates, can be combined to make any logic circuit. Their relief stems from the difficulty of isolating even two qubits and getting them to cooperate in the right way; getting three qubits to work together would be much harder. In quantum computing, then, but not in reversible classical computing, combinations of one- and two-qubit gates can carry out all the roles of three-bit gates. The quantum CNOT gate also induces entanglement. I won't go into details, but under the right circumstances a pair of initially unentangled qubits can be affected by the gate in such a way that both qubits end up in the same state, without it being determined whether that state is 0 or 1.
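The CNOT gate's behaviour, including the way it entangles a superposed control with its target, can be checked with ordinary matrix arithmetic. The sketch below (plain NumPy, with basis ordering and helper names of my own choosing) applies the 4x4 CNOT matrix to two-qubit state vectors, then entangles a pair by putting the control into a superposition first.

```python
import numpy as np

# Computational basis order: |00>, |01>, |10>, |11> (control qubit first).
# CNOT flips the target bit only when the control bit is 1.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Hadamard on the control qubit (tensored with identity on the target).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
H_on_control = np.kron(H, np.eye(2))

def basis(label):
    """Return the state vector for a two-qubit basis label like '10'."""
    v = np.zeros(4, dtype=complex)
    v[int(label, 2)] = 1
    return v

# Classical behaviour: control=1 flips the target, control=0 leaves it alone.
assert np.allclose(CNOT @ basis('10'), basis('11'))
assert np.allclose(CNOT @ basis('01'), basis('01'))

# Quantum behaviour: a superposed control entangles the pair.
bell = CNOT @ (H_on_control @ basis('00'))
print(np.round(bell, 3))  # amplitudes 1/sqrt(2) on |00> and |11>, zero elsewhere
```

Starting from |00>, a Hadamard on the control followed by CNOT leaves equal amplitudes on |00> and |11> and none on the mixed states: exactly the kind of pair described above, both qubits fated to agree without either being fixed at 0 or 1.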

There's another feature of the CNOT gate in particular, and quantum computers in general, that is sometimes overlooked. Everything is reversible, as I explained in Chapter 3. It ought to be possible, with a fully operational quantum computer, to reconstruct all the workings that led to a particular answer, and even to determine the question that it answers. If only Douglas Adams were alive to see this. In The Hitchhiker's Guide to the Galaxy, a supercomputer gives the answer to "life, the Universe and everything" as 42, but nobody knows what the question was. If the computer, Deep Thought, were operating on quantum principles, it could simply be asked to run the calculation backwards to reveal the forgotten question!

We are a long way from Deep Thought. But just as the longest journey starts with a single step, the road to a working quantum computer began with the making of a single quantum switch that could be flipped from 0 to 1 and back again; and the experimenters were prodded into taking the first step down this road when Artur Ekert gave a talk about quantum computers to a meeting of atomic physicists held in Boulder, Colorado, in 1994. After spelling out the theory behind quantum computation, Ekert challenged his audience to come up with a single working CNOT gate to demonstrate that the theory could be put into practice. The challenge was taken up by Juan Ignacio Cirac and Peter Zoller, of the University of Innsbruck. The way they did it provides some insight into the practical difficulties of quantum computation.

Experimenters such as the Innsbruck team already knew how to manipulate single atoms that have been provided with an electric charge—ions. Usually, atoms are electrically neutral, because the positive charge of their central nucleus is balanced by the negative charge of the cloud of electrons that surrounds the nucleus. But it is relatively easy to knock away one of the electrons, leaving a small overall positive charge, or in some cases to encourage an extra electron to stick to the atom, giving a small overall negative charge. In either case, the charge on the resulting ion makes it possible to manipulate it with electric and/or magnetic fields, guiding it into a small vacuum chamber, known as an ion trap. In most of the quantum experiments, such as the ones at Sussex University, the ion is held in place by electric fields, and prevented from jiggling about by laser beams pushing at it from all sides to keep it still,[24] like a big ship held in place by nudging tugs. Because heat is associated with the random motion of atoms and molecules, this process is known as optical cooling.

The ion trap was invented by Hans Dehmelt, a German-born American physicist working at the University of Washington. He was born in Görlitz on September 9, 1922, and graduated from high school (the Gymnasium zum Grauen Kloster, in Berlin) in 1940. In order to avoid being drafted into the infantry, he volunteered for the anti-aircraft artillery, where he never rose above the rank of senior private but had something of a charmed life in troubled times. He was sent with his battery to relieve the army at Stalingrad, where they were extremely lucky to escape the encirclement, and then, in 1943, was selected under an army program to study physics for a year at the University of Breslau. Back on active service, he was sent to the Western Front and captured by the American army at the Battle of the Bulge at the end of 1944. After a year as a prisoner of war, Dehmelt studied physics at the University of Göttingen, supporting himself by repairing pre-war radios, all that were available to the civilian population. He completed a Master's degree in 1948 and a PhD in 1950. Two years later, after post-doctoral work in Göttingen, Dehmelt emigrated to the United States, taking up a position at Duke University. He moved to the University of Washington, Seattle, in 1955, staying there for the rest of his career and retiring in 2002. He became a naturalized US citizen in 1961.

Dehmelt had long been intrigued by the idea of isolating and investigating single quantum entities, and in the second half of the 1950s he focused on how electrons might be trapped in this way. His inspiration was a device invented in the 1930s by the Dutch physicist Frans Michel Penning, which used a flow of electrons through a discharge tube in a magnetic field to measure pressure. In a generous gesture, Dehmelt called his device, essentially as described above, a "Penning trap"; but it was different from Penning's, and it should really be called the Dehmelt trap. Throughout the 1960s, Dehmelt and his colleagues refined such traps until in 1973 they made the breakthrough of trapping a single electron; in 1976 they were able to observe a quantum transition (popularly known as a "quantum leap") of a single ion held in a trap. For all this (and later) work, Dehmelt was awarded a share of the 1989 Nobel Prize in physics, together with another German physicist, Wolfgang Paul,[25] who had devised a different kind of ion trap (logically, known as the Paul trap), which was also used by the Dehmelt team.

Although the ancillary apparatus needed to operate ion traps is quite large—cooling equipment, magnets, lasers and so on—the traps themselves can be tiny. In the first decade of the twenty-first century, Winfried Hensinger, now at Sussex University, and a team from the University of Michigan made the world's first monolithic[26] ion trap on a microchip, with the ion and the electrode separated by just 0.023 mm, about the width of a human hair. Such traps are made in a similar fashion to conventional computer chips, by depositing an insulating layer, itself made of gallium arsenide sandwiched between layers of aluminum-gallium-arsenide, between two electrode layers. The process, photolithography, produces a three-dimensional "nano sculpture," chemically etched out of gallium arsenide, consisting of cantilever-shaped electrodes which surround the trapped ion. A channel carved down the chip leads a single cadmium ion into the prepared cavity, where it is cooled and probed using laser light shone down the same channel. Hensinger's team at Sussex has since trapped ytterbium ions in this kind of "chip compatible" and potentially scalable setup—indeed, while this book was going to press they produced the world's first two-dimensional lattice on a chip. Similar systems using microwaves rather than optical lasers have also been developed. In an alternative approach, at about the same time a group in Boulder, Colorado (of whom more shortly), devised a trap which held up to six magnesium ions in a row, kept hovering above gold electrodes by electrostatic repulsion. A laser beam playing over the surface of the trap provided the cooling and probing.

It's easy to see how, in principle, such a trapped ion could operate as a qubit. A single electron in the outermost region of the ion (the outermost “shell”) could be flipped between two energy states, corresponding to 0 or 1, or placed in a superposition, by pulses of laser light. A row of ions jiggling to and fro in a magnetic field could store information in the qubits of the individual ions, and also in the to and fro motion, which is itself quantized. So a single ion could store 1 from the energy mode and 0 from the rocking mode, giving a two-bit register reading 10, which could be flipped to 11, 01, 00 or a suitable superposition—tantalizingly close to the requirements for a CNOT gate.

The first working CNOT gate was made in 1995 by David Wineland (who had been a member of Dehmelt's team that isolated a single electron in 1973), Christopher Monroe, and other members of the US National Institute of Standards and Technology (NIST) at Boulder, Colorado. It is no coincidence that they achieved this a year after Ekert's talk in Boulder, although Ekert told me that "at the time I did not realize that it inspired Dave, Ignacio, Peter and others so much that it effectively marked the beginning of experimental quantum computation." The NIST team used the rocking motion as one of the qubits, as suggested by Cirac and Zoller; but the second qubit depended not on energy but on the spin of the outer electron. They were able to put single ions of beryllium (Be+) in a state where a laser pulse would change the spin of the electron (from up to down, or vice versa) only if the ion was vibrating in the mode corresponding to 1. In these early experiments the CNOT operation worked properly about 90 percent of the time, and the ion was maintained in the required state for less than a thousandth of a second before it decohered, too short a time for it to be used in any extended computation. But the point is, it did work. It was a real quantum gate. Since 1995, researchers at NIST and elsewhere have been able to combine up to fourteen ions working together to carry out simple quantum computations. But this is far short of the rows of thousands of ions that would be needed to have a decent crack at factorizing a large number using Shor's algorithm. And the ugly problem of decoherence is made uglier by the need to identify and correct errors as the computation proceeds.

Error correction is one of those processes that gives with one hand while taking with the other, so the trick is to make sure that it gives more than it takes. Think of the steps in a computation as messages. The simplest way to check whether a binary message has been sent correctly is by adding a parity bit. If the number of 1s in the message is even, the parity bit is 0; if the number of 1s is odd, the parity bit is 1. So a seven-digit message 1010010 becomes 10100101 and 1110010 becomes 11100100. If we receive a message 11100101 we know there is a mistake and the message has to be re-sent.
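The parity-bit scheme just described is easy to spell out in code. This minimal sketch (the function names are mine) reproduces the worked examples above.

```python
def add_parity(message: str) -> str:
    """Append an even-parity bit: the total count of 1s becomes even."""
    parity = '1' if message.count('1') % 2 else '0'
    return message + parity

def check_parity(received: str) -> bool:
    """A valid codeword has an even number of 1s overall."""
    return received.count('1') % 2 == 0

print(add_parity('1010010'))     # 10100101
print(add_parity('1110010'))     # 11100100
print(check_parity('11100101'))  # False: a bit was flipped in transit
```

Note that the parity bit only detects an error; it cannot say which bit is wrong, which is why the faulty message has to be re-sent.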

Adding a little more sophistication makes it possible to identify the location of single errors. If the message is sent in lines, each of the same length, arranged in columns, we can add parity bits to both the rows and the columns. A single error will then show up in one row and one column, making it easy to pinpoint. If there are more errors, or if there are errors in the parity bits themselves, the situation gets more complicated, but it can still be tackled using this kind of technique. For example, a standard method called the Hamming code, used in classical computing, assigns each of the 16 possible 4-bit messages[27] a unique 7-bit codeword chosen so that each codeword differs from all of the others in at least three places. This makes it easier to identify errors; but the point I want to emphasize is that, as with all these techniques, including the parity bit itself, it does so by adding more bits to the message.
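For the curious, here is one conventional layout of the Hamming code just mentioned (a sketch; placing the parity bits at positions 1, 2 and 4 is a standard choice, though other orderings exist). Because codewords differ in at least three places, the "syndrome" computed from the three parity checks points directly at any single flipped bit.

```python
def hamming_encode(d):
    """Encode 4 data bits as a 7-bit Hamming codeword.
    Positions (1-indexed): 1, 2, 4 are parity; 3, 5, 6, 7 carry data."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p4 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p4, d2, d3, d4]

def hamming_correct(c):
    """Locate and fix a single flipped bit; return the corrected word."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # checks positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # checks positions 2, 3, 6, 7
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]   # checks positions 4, 5, 6, 7
    syndrome = s1 + 2 * s2 + 4 * s4  # 0 = no error, else the error's position
    if syndrome:
        c[syndrome - 1] ^= 1
    return c

word = hamming_encode([1, 0, 1, 1])
damaged = word.copy()
damaged[2] ^= 1                      # flip one bit in transit
assert hamming_correct(damaged) == word
```

The price, as the text says, is overhead: seven transmitted bits for every four bits of message.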

In quantum computation—quantum codes, in the computational, not the cryptographic, sense—there are two additional problems. First, we must be careful not to disturb any of the qubits taking part in the computation, because a measurement, or any observation, would cause decoherence. Secondly, it is impossible to duplicate a set of quantum entities without destroying the original. This is called the "no-cloning" theorem, and is related to the fact that you cannot measure a quantum system without altering it. You can make an exact copy, but you destroy the original in the process. This is the basis of so-called quantum teleportation, in which an entity such as a photon in a particular state is probed in one location and the information gleaned is used, through entanglement, to construct a photon in the identical state somewhere else (perhaps tens of kilometers away).[28] In effect, the original photon has been teleported from A to B (though not instantaneously: completing the process also requires sending ordinary classical information). But although that opens mind-boggling possibilities in other areas, it is a major inconvenience at this stage in the development of quantum computing, because one way of checking for errors is to have several copies made of a message and check whether they agree with one another.

One way around these problems is to add extra qubits which do not take part in the computation to the strings of qubits being manipulated inside a quantum computer. Such an extra string of qubits is known as an ancilla, from the Latin word for a maidservant. I've no intention of going into the gory details, but the bottom line is that you need twenty-three ancilla qubits to correct for three simultaneous errors, and eighty-seven of them to correct for seven errors. The obvious question is whether the likelihood of errors in the ancillae outweighs the advantage of having them there at all. But the experts assure us that it will all be worthwhile if the error rate for each component in the computation is reduced to less than 1 in 100,000. Two-qubit gates can now be made that operate with a fidelity of just over 99 percent, which means that the probability of the gate operating incorrectly is less than 1 in 100. Until recently, it was thought that in order for error correction techniques to work, fidelity would have to be improved to 99.999 percent. Andrew Steane, of the University of Oxford, describes this as "daunting but possible." But in 2011, David Wang, Austin Fowler and Lloyd Hollenberg, of the Center for Quantum Computation and Communication Technology at the University of Melbourne, showed that error correction could be done with "only" 99 percent fidelity. And even though error correction may increase the number of operations required to carry out a computation by a factor of thousands, what does this matter if you have the Multiverse to play with?[29]
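The quantum codes behind those ancilla counts are beyond our scope, but the core trick—using parity checks to locate an error without ever measuring the data qubits themselves—can be illustrated with the simplest quantum error-correcting code of all, the three-qubit bit-flip code. The simulation below is a sketch in plain NumPy (my own naming, and not one of the codes cited above): it encodes a superposition, flips one qubit, reads off two parity syndromes, and repairs the damage with the superposition intact.

```python
import numpy as np

# Three-qubit bit-flip code: |0>_L = |000>, |1>_L = |111>.
# The syndromes compare neighbouring qubits (Z1Z2 and Z2Z3 parities)
# without measuring any data qubit on its own, so superpositions survive.

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def op(single, position):
    """Lift a single-qubit operator to act on qubit `position` of three."""
    mats = [single if i == position else I2 for i in range(3)]
    return np.kron(np.kron(mats[0], mats[1]), mats[2])

def encode(alpha, beta):
    """Encode alpha|0> + beta|1> as alpha|000> + beta|111>."""
    state = np.zeros(8, dtype=complex)
    state[0b000] = alpha
    state[0b111] = beta
    return state

def syndrome(state):
    """Expectation values of Z1Z2 and Z2Z3: -1 flags a parity mismatch."""
    z12 = np.real(state.conj() @ (op(Z, 0) @ op(Z, 1)) @ state)
    z23 = np.real(state.conj() @ (op(Z, 1) @ op(Z, 2)) @ state)
    return round(z12), round(z23)

# Encode an arbitrary superposition, then flip the middle qubit.
logical = encode(0.6, 0.8)
damaged = op(X, 1) @ logical

# Each syndrome pattern points at a unique culprit qubit.
lookup = {(-1, 1): 0, (-1, -1): 1, (1, -1): 2}
bad = lookup[syndrome(damaged)]
recovered = op(X, bad) @ damaged
assert np.allclose(recovered, logical)
```

The encoded amplitudes (0.6 and 0.8 here) come through the whole detect-and-fix cycle untouched, even though they were never measured: that is the loophole around the no-cloning theorem that real quantum error correction, with its much larger ancillae, exploits.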
