The Beginning of Infinity: Explanations That Transform the World

Microscopic events which are accidentally amplified to that coarse-grained level (like the voltage surge in our story) are rare in any one coarse-grained history, but common in the multiverse as a whole. For example, consider a single cosmic-ray particle travelling in the direction of Earth from deep space. That particle must be travelling in a range of slightly different directions, because the uncertainty principle implies that in the multiverse it must spread sideways like an ink blot as it travels. By the time it arrives, this ink blot may well be wider than the whole Earth – so most of it misses and the rest strikes everywhere on the exposed surface. Remember, this is just a single particle, which may consist of fungible instances. The next thing that happens is that they cease to be fungible, splitting through their interaction with atoms at their points of arrival into a finite but huge number of instances, each of which is the origin of a separate history.

In each such history, there is an autonomous instance of the cosmic-ray particle, which will dissipate its energy in creating a ‘cosmic-ray shower’ of electrically charged particles. Thus, in different histories, such a shower will occur at different locations. In some, that shower will provide a conducting path down which a lightning bolt will travel. Every atom on the surface of the Earth will be struck by such lightning in some history. In other histories, one of those cosmic-ray particles will strike a human cell, damaging some already damaged DNA in such a way as to make the cell cancerous. Some non-negligible proportion of all cancers are caused in this way. As a result, there exist histories in which any given person, alive in our history at any time, is killed soon afterwards by cancer. There exist other histories in which the course of a battle, or a war, is changed by such an event, or by a lightning bolt at exactly the right place and time, or by any of countless other unlikely, ‘random’ events. This makes it highly plausible that there exist histories in which events have played out more or less as in alternative-history stories such as Fatherland and Roma Eterna – or in which events in your own life played out very differently, for better or worse.

A great deal of fiction is therefore close to a fact somewhere in the multiverse. But not all fiction. For instance, there are no histories in which my stories of the transporter malfunction are true, because they require different laws of physics. Nor are there histories in which the fundamental constants of nature such as the speed of light or the charge on an electron are different. There is, however, a sense in which different laws of physics appear to be true for a period in some histories, because of a sequence of ‘unlikely accidents’. (There may also be universes in which there are different laws of physics, as required in anthropic explanations of fine-tuning. But as yet there is no viable theory of such a multiverse.)

Imagine a single photon from a starship’s communication laser, heading towards Earth. Like the cosmic ray, it arrives all over the surface, in different histories. In each history, only one atom will absorb the photon and the rest will initially be completely unaffected. A receiver for such communications would then detect the relatively large, discrete change undergone by such an atom. An important consequence for the construction of measuring devices (including eyes) is that no matter how far away the source is, the kick given to an atom by an arriving photon is always the same: it is just that the weaker the signal is, the fewer kicks there are. If this were not so – for instance, if classical physics were true – weak signals would be much more easily swamped by random local noise. This is the same as the advantage of digital over analogue information processing that I discussed in Chapter 6.

Some of my own research in physics has been concerned with the theory of quantum computers. These are computers in which the information-carrying variables have been protected by a variety of means from becoming entangled with their surroundings. This allows a new mode of computation in which the flow of information is not confined to a single history. In one type of quantum computation, enormous numbers of different computations, taking place simultaneously, can affect each other and hence contribute to the output of a computation. This is known as quantum parallelism.

In a typical quantum computation, individual bits of information are represented in physical objects known as ‘qubits’ – quantum bits – of which there is a large variety of physical implementations but always with two essential features. First, each qubit has a variable that can take one of two discrete values, and, second, special measures are taken to protect the qubits from entanglement – such as cooling them to temperatures close to absolute zero. A typical algorithm using quantum parallelism begins by causing the information-carrying variables in some of the qubits to acquire both their values simultaneously. Consequently, regarding those qubits as a register representing (say) a number, the number of separate instances of the register as a whole is exponentially large: two to the power of the number of qubits. Then, for a period, classical computations are performed, during which waves of differentiation spread to some of the other qubits – but no further, because of the special measures that prevent this. Hence, information is processed separately in each of that vast number of autonomous histories. Finally, an interference process involving all the affected qubits combines the information in those histories into a single history. Because of the intervening computation, which has processed the information, the final state is not the same as the initial one (as it was in the simple interference experiment I discussed above), but is some function of it, like this:

[Figure: A typical quantum computation. Y1 . . . Ymany are intermediate results that depend on the input X. All of them are needed to compute the output f(X) efficiently.]
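The flow of such an algorithm can be caricatured with a toy state-vector simulation (my sketch, not an algorithm from the text; the function names are hypothetical). It runs the Deutsch–Jozsa procedure: put an n-qubit register into a superposition of all its values, let each history evaluate a classical function f on its own input, then apply an interference step (a Hadamard transform) that combines all those histories into one answer – whether f is constant or balanced.

```python
import numpy as np

def hadamard_all(n):
    """Tensor product of n Hadamard gates: the interference step."""
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    U = np.array([[1.0]])
    for _ in range(n):
        U = np.kron(U, H)
    return U

def deutsch_jozsa(f, n):
    """Decide whether f: {0..2**n - 1} -> {0, 1} is constant or balanced,
    evaluating f on every input 'simultaneously' via superposition."""
    dim = 2 ** n
    # Uniform superposition: one instance of the register per basis state.
    state = np.full(dim, 1 / np.sqrt(dim))
    # Phase oracle: each history computes f on its own input.
    state *= np.array([(-1) ** f(x) for x in range(dim)])
    # Interference combines all the histories into a single outcome.
    state = hadamard_all(n) @ state
    prob_zero = abs(state[0]) ** 2   # probability of measuring |00...0>
    return "constant" if prob_zero > 0.5 else "balanced"

print(deutsch_jozsa(lambda x: 1, 3))       # constant f
print(deutsch_jozsa(lambda x: x & 1, 3))   # balanced f (parity of last bit)
```

A classical computer must evaluate f on up to half the inputs plus one to be sure; here a single run suffices, because the interference step is ‘just right’ for combining the needed information into one history.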

Just as the starship crew members could achieve the effect of large amounts of computation by sharing information with their doppelgängers computing the same function on different inputs, so an algorithm that makes use of quantum parallelism does the same. But, while the fictional effect is limited only by starship regulations that we may invent to suit the plot, quantum computers are limited by the laws of physics that govern quantum interference. Only certain types of parallel computation can be performed with the help of the multiverse in this way. They are the ones for which the mathematics of quantum interference happens to be just right for combining into a single history the information that is needed for the final result.

In such computations, a quantum computer with only a few hundred qubits could perform far more computations in parallel than there are atoms in the visible universe. At the time of writing, quantum computers with about ten qubits have been constructed. ‘Scaling’ the technology to larger numbers is a tremendous challenge for quantum technology, but it is gradually being met.
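The arithmetic behind ‘a few hundred qubits’ is easy to check (my back-of-the-envelope sketch; the commonly quoted estimate of about 10^80 atoms in the visible universe is assumed):

```python
# The number of simultaneous instances of an n-qubit register is 2**n.
ATOMS_IN_VISIBLE_UNIVERSE = 10 ** 80   # common order-of-magnitude estimate

def qubits_needed(target):
    """Smallest n for which 2**n exceeds the target count."""
    n = 0
    while 2 ** n <= target:
        n += 1
    return n

print(qubits_needed(ATOMS_IN_VISIBLE_UNIVERSE))   # 266
```

So a register of only 266 qubits already has more instances than there are atoms in the visible universe – comfortably within ‘a few hundred’.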

I mentioned above that, when a large object is affected by a small influence, the usual outcome is that the large object is strictly unaffected. I can now explain why. For example, in the Mach–Zehnder interferometer, shown earlier, two instances of a single photon travel on two different paths. On the way, they strike two different mirrors. Interference will happen only if the photon does not become entangled with the mirrors – but it will become entangled if either mirror retains the slightest record that it has been struck (for that would be a differential effect of the instances on the two different paths). Even a single quantum of change in the amplitude of the mirror’s vibration on its supports, for instance, would be enough to prevent the interference (the subsequent merging of the photon’s two instances).

When one of the instances of the photon bounces off either mirror, its momentum changes, and hence by the principle of the conservation of momentum (which holds universally in quantum physics, just as in classical physics), the mirror’s momentum must change by an equal and opposite amount. Hence it seems that, in each history, one mirror but not the other must be left vibrating with slightly more or less energy after the photon has struck it. That energy change would be a record of which path the photon took, and hence the mirrors would be entangled with the photon.

Fortunately, that is not what happens. Remember that, at a sufficiently fine level of detail, what we crudely see as a single history of the mirror, resting passively or vibrating gently on its supports, is actually a vast number of histories with instances of all its atoms continually splitting and rejoining. In particular, the total energy of the mirror takes a vast number of possible values around the average, ‘classical’ one. Now, what happens when a photon strikes the mirror, changing that total energy by one quantum?

Oversimplifying for a moment, imagine just five of those countless instances of the mirror, with each instance having a different vibrational energy ranging from two quanta below the average to two quanta above it. Each instance of the photon strikes one instance of the mirror and imparts one additional quantum of energy to it. So, after that impact, the average energy of the instances of the mirror will have increased by one quantum, and there will now be instances with energies ranging from one quantum below the old average to three above. But since, at this fine level of detail, there is no autonomous history associated with any of those values of the energy, it is not meaningful to ask whether an instance of the mirror with a particular energy after the impact is the same one that previously had that energy. The objective physical fact is only that, of the five instances of the mirror, four have energies that were present before, and one does not. Hence, only that one – whose energy is three quanta higher than the previous average – carries any record of the impact of the photon. And that means that in only one-fifth of the universes in which the photon struck has the wave of differentiation spread to the mirror, and only in those will subsequent interference between instances of that photon that have or have not hit the mirror be suppressed.
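The counting in that five-instance toy model can be made explicit (a sketch of the example above, treating the energies as a set of values in quanta relative to the old average):

```python
# Five instances of the mirror, energies in quanta relative to the average.
before = {-2, -1, 0, 1, 2}

# Each instance of the photon adds one quantum to its instance of the mirror.
after = {e + 1 for e in before}    # {-1, 0, 1, 2, 3}

# Only energies with no pre-impact counterpart carry a record of the photon.
record = after - before            # {3}
print(len(record) / len(after))    # 0.2: one-fifth of the histories
```

Four of the five post-impact energies were already present before, so they record nothing; only the instance at three quanta above the old average does.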

With realistic numbers, that is more like one in a trillion trillion – which means that there is only a probability of one in a trillion trillion that interference will be suppressed. This is considerably lower than the probability that the experiment will give inaccurate results due to imperfect measuring instruments, or that it will be spoiled by a lightning strike.

Now let us look at the arrival of that single quantum of energy, to see how that discrete change can possibly happen without any discontinuity. Consider the simplest possible case: an atom absorbs a photon, including all its energy. This energy transfer does not take place instantaneously. (Forget anything that you may have read about ‘quantum jumps’: they are a myth.) There are many ways in which it can happen but the simplest is this. At the beginning of the process, the atom is in (say) its ‘ground state’, in which its electrons have the least possible energy allowed by quantum theory. That means that all its instances (within the relevant coarse-grained history) have that energy. Assume that they are also fungible. At the end of the process, all those instances are still fungible, but now they are in the ‘excited state’, which has one additional quantum of energy. What is the atom like halfway through the process? Its instances are still fungible, but now half of them are in the ground state and half in the excited state. It is as if a continuously variable amount of money changed ownership gradually from one discrete owner to another.
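That continuous-change-between-discrete-values mechanism can be caricatured numerically (a hypothetical illustration of the proportions, not physics from the text): every instance has exactly zero or exactly one quantum, yet the average over the instances varies smoothly.

```python
GROUND, EXCITED = 0, 1   # the only energies any single instance can have

def average_energy(p_excited):
    """Average over the instances: each one is discrete,
    but the proportion in the excited state varies continuously."""
    return (1 - p_excited) * GROUND + p_excited * EXCITED

for p in [0.0, 0.25, 0.5, 0.75, 1.0]:
    print(p, average_energy(p))
# Halfway through (p = 0.5) the average is half a quantum, though no
# individual instance ever has anything but 0 or 1 quantum.
```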

This mechanism is ubiquitous in quantum physics, and is the general means by which transitions between discrete states happen in a continuous way. In classical physics, a ‘tiny effect’ always means a tiny change in some measurable quantities. In quantum physics, physical variables are typically discrete and so cannot undergo tiny changes. Instead, a ‘tiny effect’ means a tiny change in the proportions that have the various discrete attributes.
