Computing with Quantum Cats

All this applies to “classical” (that is, non-quantum) physics. Feynman's contribution was to incorporate the idea of least action into quantum physics, coming up with a new formulation of quantum physics different from, and in many ways superior to, those of the pioneers, Heisenberg and Schrödinger. This was something which might have been the crowning achievement of a lesser scientist, but in Feynman's case was “merely” his contribution as a PhD student, completed in a rush before going off to war work at Los Alamos.

In his Nobel Lecture,[8] Feynman said that the seed of the idea which became his PhD thesis was planted when he was an undergraduate at MIT. At that time, the problem of an electron's “self-interaction” was puzzling physicists. The strength of the electric force is proportional to 1 divided by the square of the distance from an electric charge, but the distance of an electron from itself is 0, and since 1 divided by 0 is infinite, the force of its self-interaction ought to be infinite. “Well,” Feynman told his audience in Stockholm,

it seemed to me quite evident that the idea that a particle acts on itself, that the electrical force acts on the same particle that generates it, is not a necessary one—it is a sort of a silly one, as a matter of fact. And so, I suggested to myself that electrons cannot act on themselves; they can only act on other electrons…. It was just that when you shook one charge another would shake later. There was a direct interaction between charges, albeit with a delay…. Shake this one, that one shakes later. The sun atom shakes; my eye electron shakes eight minutes later, because of a direct interaction.

The snag with this idea was that it was too good. It meant that when an electron (or other charged particle) interacted with another charged particle by ejecting a photon (which is the way charged particles interact with one another) there would be no back-reaction to produce a recoil of the first electron. This would violate the law of conservation of energy, so there had to be some way to provide just the right amount of interaction to produce a kick of the first electron (equivalent to the kick of a rifle when it is fired[9]) without being plagued by infinities. Stuck, but convinced that there must be a way around the problem, Feynman carried the idea with him to Princeton, where he discussed it with Wheeler; together they came up with an ingenious solution.

The starting point was the set of equations, discovered by James Clerk Maxwell in the nineteenth century,[10] which describe the behavior of light and other forms of electromagnetic radiation. It is a curious feature of these equations that they have two sets of solutions, one corresponding to an influence moving forward in time (the “retarded solution”) and another corresponding to an influence moving backward in time (the “advanced solution”). If you like, you can think of these as waves moving either forward or backward in time, but it is better to avoid such images if you can. Since Maxwell's time, most people had generally ignored the advanced solution, although mathematically inclined physicists were aware that a combination of advanced and retarded solutions could also be used in solving problems involving electricity, magnetism and light. Wheeler suggested that Feynman might try to find such a combination that would produce the precise feedback he needed to balance the energy budget of an emitting electron.

Feynman found that there is indeed such a solution, provided that the Universe absorbs all the radiation that escapes out into it, and that the solution is disarmingly simple. It is just a mixture of one-half retarded and one-half advanced interaction. For a pair of interacting electrons (or other charged particles), half the interaction travels forward in time from electron A to electron B, and half travels backward in time from electron B to electron A. As Feynman put it, “one is to use the solution of Maxwell's equation which is symmetrical in time.” The overall effect is to produce an interaction including exactly the amount of back-reaction (the “kick”) needed to conserve energy. But the crucial point is that the whole interaction has to be considered as—well, as a whole. The entire process, from start to finish, is a seamless and in some sense timeless entity. This is like the way the whole path of the ball thrown through the window has to be considered to determine the action, and (somehow!) for nature to select the path with least action. Indeed, Feynman was able to reformulate the whole story of interacting electrons in terms of the Principle of Least Action.
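
In modern notation (a standard way of writing the Wheeler–Feynman result, not the book's own formulas), the time-symmetric solution is the average of the retarded and advanced fields; in a universe that absorbs all outgoing radiation, the absorber's response supplies the other half of the retarded field plus exactly the back-reaction “kick”:

```latex
F_{\text{sym}} \;=\; \tfrac{1}{2}\bigl(F_{\text{ret}} + F_{\text{adv}}\bigr),
\qquad
F_{\text{total}} \;=\; F_{\text{sym}}
  \;+\; \underbrace{\tfrac{1}{2}\bigl(F_{\text{ret}} - F_{\text{adv}}\bigr)}_{\text{absorber response}}
  \;=\; F_{\text{ret}} .
```

The second term is a source-free field, and evaluated at the emitting charge it provides the radiation damping that conserves energy, so an observer far from the source sees only the ordinary retarded field.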

The discovery led to Feynman's thesis project, in which he used the Principle of Least Action to develop a new understanding of quantum physics. He started from the basis that “fundamental (microscopic) phenomena in nature are symmetrical with respect to the interchange of past and future” and pointed out that, according to the ideas I have just outlined, “an atom alone in empty space would, in fact, not radiate…all of the apparent quantum properties of light and the existence of photons may be nothing more than the result of matter interacting with matter directly, and according to quantum mechanical laws,” before presenting those laws in all their glory. The thesis, he emphasized, “is concerned with the problem of finding a quantum mechanical description applicable to systems in which their classical analogues are expressible by a principle of least action.” He was helped in this project when a colleague, Herbert Jehle, showed him a paper written by Paul Dirac[11] that used a mathematical function known as the Lagrangian, which is related mathematically to action, and in which Dirac said that a key equation “contains the quantum analogue of the action principle.” Feynman, being Feynman, worried about the meaning of the term “analogue” and tried making the function equal to the action. With a minor adjustment (he had to put in a constant of proportionality), he found that this led to the Schrödinger equation. The key feature of this formulation of quantum mechanics, which many people (including myself) regard as the most profound version, is, as Feynman put it in his thesis, that “a probability amplitude is associated with an entire motion of a particle as a function of time, rather than simply with a position of a particle at a particular time.” But everything, so far, ignored the complications introduced by the special theory of relativity.

This is more or less where things stood when Feynman wrote up his thesis and went off to Los Alamos. After the war, he was able to pick up the threads and develop his ideas into a complete theory of quantum electrodynamics (QED) including relativistic effects, the work for which he received a share of the Nobel Prize; but this is not the place to tell that story.

What I hope you will take away from all this is the idea that at the quantum level the world is in a sense timeless, and that in describing interactions between quantum entities everything has to be considered at once, not sequentially. It is not necessary to say that an electron shakes and sends a wave moving out across space to shake another electron; it is equally valid to say that the first electron shakes and the second one shakes a certain time later as a result of a direct, but delayed, interaction. And the trajectory “sniffed out” when an electron moves from A to B is the one that corresponds to the least action for the journey. Nature is lazy.

Nobody said this more clearly than Feynman, as another quote from his Nobel Lecture shows:

We have a thing [the action] that describes the character of the path through all of space and time. The behavior of nature is determined by saying her whole space–time path has a certain character…if you wish to use as variables only the coordinates of particles, then you can talk about the property of the paths—but the path of one particle at a given time is affected by the path of another at a different time.

This is the essence of the path integral formulation of quantum mechanics.
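
Schematically, in the notation physicists now use (a sketch, not the book's own symbols), the amplitude to go from A to B is a sum over every conceivable path, each path contributing a phase fixed by its action S. Near the least-action path the phases of neighboring paths agree and reinforce one another, which is how the single classical trajectory emerges from the sum:

```latex
K(B, A) \;=\; \sum_{\text{all paths } x(t)} e^{\,i S[x(t)]/\hbar},
\qquad
S[x(t)] \;=\; \int_{t_A}^{t_B} L\bigl(x, \dot{x}\bigr)\, dt .
```

Here L is the Lagrangian mentioned above, and the “sum” is really an integral over paths; hence the name path integral.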

Intriguingly, Schrödinger's quantum wave function has built into it exactly the same time symmetry as Maxwell's equations, which Schrödinger himself had commented on in 1931, but had been unable to interpret, simply remarking: “I cannot foresee whether [this] will prove useful for the explanation of quantum mechanical concepts.” Feynman doesn't seem to have been influenced by this, and may not have been aware, in 1942, of Schrödinger's remark, which was made in a paper published in German by the Prussian Academy of Science. Schrödinger was also ahead of Feynman, although in an almost equally understated way, in appreciating that all quantum “paths” are equally valid (but not necessarily equally probable). A convenient way into his version of this idea is through Schrödinger's famous cat puzzle (sometimes called a “paradox,” although it isn't one).

CATS DON'T COLLAPSE

The point of Schrödinger's “thought experiment,” published in 1935, was to demonstrate the absurdity of the Copenhagen Interpretation, and in particular the idea of the collapse of the wave function. His hypothetical cat[12] was imagined to be living happily in a sealed chamber, on its own with plenty to eat and drink, but accompanied by what Schrödinger called a “diabolical device.” This device could involve any one of several quantum systems, but the example Schrödinger chose was a combination of a sample of radioactive material and a detector. If the sample decays, spitting out an energetic particle, the detector triggers a piece of machinery which releases poison and kills the cat. The point is that the device can be set up in such a way that after a certain interval of time there is a 50:50 chance that the radioactive material has or has not decayed. If the sample has decayed, the cat dies; if not, the cat lives. But the Copenhagen Interpretation says that the choice is not made until someone observes what is going on. Until then, the radioactive sample exists in a “superposition of states,” a mixture of the two possible wave functions. Only when it is observed does this mixture collapse one way or the other. This is equivalent to the way the wave associated with an electron going through the experiment with two holes goes both ways at once through the slits when we are not looking, but collapses onto one or the other of the slits when we put a detector in place to monitor its behavior.

This is all very well for electrons, and even radioactive atoms. But if we take the Copenhagen Interpretation literally it means that the cat is also in a mixture of states, a superposition of dead and alive, and collapses into one or the other state only when someone looks to see what is going on inside the room. But we never see a cat that is dead and alive (or neither dead nor alive) at the same time. This is the so-called paradox.
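
In the symbolic shorthand of quantum mechanics (not used in the book itself), the Copenhagen description of the unobserved cat is an equal superposition of the two outcomes:

```latex
|\text{cat}\rangle \;=\; \tfrac{1}{\sqrt{2}}\Bigl(|\text{alive}\rangle + |\text{dead}\rangle\Bigr)
```

Each outcome occurs with probability (1/√2)² = 1/2, matching the 50:50 device; the puzzle is that nothing in this expression singles out a moment of “collapse” into one branch or the other.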

It is possible to extend this idea beyond Schrödinger's original formulation to bring out another aspect of quantum reality that is crucial in understanding quantum computation. Instead of one cat, imagine two (perhaps brother and sister), each in its own chamber, both connected to a diabolical device which determines, with 50:50 probability, that as a result of the radioactive decay one cat dies and the other lives. Now, after the mixture of states has been set up, the two chambers, still sealed, are separated and taken far apart from each other (in principle, to opposite sides of the Galaxy). The Copenhagen Interpretation says that each chamber contains a superposition of dead and alive cat states, until someone looks in either one of the chambers. As soon as that happens, the wave function collapses for both cats, instantaneously. If you look in just one chamber and see a dead cat, it means that the wave function for the other cat has collapsed at that precise instant to form a live cat, and vice versa. But the quantum rules do not say that the collapse happened before the chambers were separated and that there “always was” a dead cat in one chamber and a live cat in the other. Like the way a measurement at one slit in the experiment with two holes seems to affect what is going on at the other slit, this is an example of what is known as “quantum non-locality,” of which more shortly.

But Schrödinger's resolution of all these puzzles was to say that there is no such thing as the collapse of the wave function. As early as 1927, at a major scientific meeting known as a Solvay Congress, he said: “The real system is a composite of the classical system in all its possible states.” At the time, this remark was largely ignored, and the Copenhagen Interpretation, which worked For All Practical Purposes, even if it doesn't make sense, held sway for the next half-century. I will explain the importance of his alternative view of quantum reality later; but Schrödinger was way ahead of his time, and it is worth mentioning now that in 2012, eighty-five years after he made that remark, two separate teams offered evidence that wave functions are indeed real states that do not collapse. Before coming up to date, though, we should take stock of what Feynman and his contemporaries had to say about quantum computation.

THE GATEWAY TO QUANTUM COMPUTATION

In order to understand their contribution, we need just a passing acquaintance with the logic of computation. This is based on the idea of logic “gates”: components of computers which receive strings of 1s and 0s and modify them in accordance with certain rules. These are the rules of so-called Boolean logic (or Boolean algebra), developed by the mathematician George Boole in the 1840s. Boolean algebra can be applied to any two-valued system, and is familiar to logicians in its application to true/false statements; but in our context it applies to the familiar binary language of computation. When we talk blithely about computers carrying out the instructions coded in their programs, we are really talking about logic gates taking 1s and 0s and manipulating them in accordance with the rules of Boolean algebra.

These rules are very simple, but not quite the same as those used in everyday arithmetic. For example, in everyday arithmetic 1 + 0 is always equal to 1. But using Boolean algebra a so-called AND gate will take an input of 1 + 0 and give the output 0. It will do the same for 0 + 0 and for 0 + 1 (which is different from 1 + 0 in the Boolean world). It will only give the output 1 if
both
inputs are 1; in other words if input A
and
input B are both 1, which is where it gets its name. Another gate, called the NOT gate, will always give the opposite output from its input. Put in 1 and out comes 0; put in 0 and out comes 1. The output is
not
the same as the input. Don't worry about how the possible kinds of gates combine to carry out the instructions in a computer program; I just want you to be aware that such gates exist and give you the simplest idea of what is actually going on inside your smartphone or other computer. An important part of developing computers is to devise gates which do interesting things in a reliable way, and in the mid-1970s this endeavor led to a major rethinking of the possibilities of machine computation.
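
As a concrete illustration, here is a minimal sketch of those two gates written in Python (the function names AND and NOT are just illustrative labels; real gates are transistor circuits, but the logic they implement is exactly this):

```python
# A minimal sketch of the AND and NOT gates described above.
# Inputs and outputs are the bits 0 and 1.

def AND(a, b):
    """Boolean AND: output is 1 only if both inputs are 1."""
    return 1 if (a == 1 and b == 1) else 0

def NOT(a):
    """Boolean NOT: output is always the opposite of the input."""
    return 1 - a

# Truth table for AND, covering all four possible input pairs:
for a in (0, 1):
    for b in (0, 1):
        print(f"{a} AND {b} -> {AND(a, b)}")

# NOT flips each bit:
print(f"NOT 0 -> {NOT(0)}")
print(f"NOT 1 -> {NOT(1)}")
```

Chaining gates is how circuits are built: for instance, NOT(AND(a, b)) is a NAND gate, and NAND gates alone are enough, in principle, to construct any Boolean circuit whatsoever.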
