Anthropic Feuds
We have found a strange footprint on the shores of the unknown. We have devised profound theories, one after another, to account for its origin. At last, we have succeeded in reconstructing the creature that made the footprint. And Lo! It is our own.
—SIR ARTHUR EDDINGTON
The properties of every object in the universe, from a DNA molecule to a giant galaxy, are determined, in the final analysis, by several numbers—the constants of nature. These constants include the masses of elementary particles and the parameters characterizing the strength of the four basic interactions, or forces—strong, weak, electromagnetic, and gravitational. The proton, for example, is 0.14 percent less massive than the neutron and 1836 times more massive than the electron.
The gravitational attraction between two protons is about 10^36 times weaker than their electric repulsion. On the face of it, these numbers appear completely arbitrary. To borrow Craig Hogan’s metaphor,[1] we can imagine the Creator sitting at the control board of the universe and turning different knobs to adjust the values of the constants. “Shall we make it 1835 or 1836?”
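As a check on the numbers in this paragraph, the mass ratios and the force ratio can be computed directly from standard CODATA constants. This is an editorial sketch, not part of the author's argument; note that since both forces fall off as the inverse square of distance, their ratio is a pure number.

```python
# Quick numerical check of the figures quoted in the text,
# using standard CODATA values in SI units.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
k = 8.988e9        # Coulomb constant, N m^2 C^-2
e = 1.6022e-19     # elementary charge, C
m_p = 1.67262e-27  # proton mass, kg
m_n = 1.67493e-27  # neutron mass, kg
m_e = 9.10938e-31  # electron mass, kg

# Both forces scale as 1/r^2, so the ratio is independent of distance.
force_ratio = (k * e**2) / (G * m_p**2)

print(f"neutron-proton mass difference: {(m_n - m_p) / m_n:.2%}")  # ~0.14%
print(f"proton/electron mass ratio:     {m_p / m_e:.0f}")          # ~1836
print(f"electric/gravitational force:   {force_ratio:.2e}")        # ~1.2e36
```

The last line confirms the quoted order of magnitude: for a pair of protons the electric repulsion exceeds the gravitational attraction by a factor of roughly 10^36.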
Figure 13.1. At the control board of the universe.
Or could it be that there is some system behind this seemingly random set of numbers? Maybe there are no knobs to twiddle and the numbers are all fixed by mathematical necessity. It has long been a dream of particle physicists that indeed there is no choice and that all constants of nature will eventually be derived from some yet-to-be-discovered fundamental theory.
As of now, however, we have no indication that the choice of the constants is preordained. The Standard Model of particle physics, which describes strong, weak, and electromagnetic interactions of all known particles, contains twenty-five “adjustable” constants. The values of these constants are determined from observations.
Together with the newly discovered cosmological constant, we thus need twenty-six constants of nature to describe
the physical world. The list may have to be extended if new particles or new types of interaction are discovered.
The Creator’s choice of the constants may appear rather capricious, and yet, remarkably, there does seem to be a system behind it—although not of the kind physicists have been hoping for. Research in diverse areas of physics has shown that many essential features of the universe are sensitive to the precise values of some of the constants. Had the Creator adjusted the knobs slightly differently, the universe would be a strikingly different place. And most likely neither we, nor any other living creatures, would be around to admire it.
To start with, let us consider the effect of varying the neutron mass. As it stands now, it is slightly greater than the proton mass, which allows free neutrons to decay into protons and electrons.
Suppose now that we turn the neutron mass knob toward smaller values. It takes a very small change, no more than 0.2 percent, for the mass difference between proton and neutron to reverse. Now protons become unstable and decay into neutrons and positrons. Protons may still be stabilized inside atomic nuclei, but with some further turning of the knob they will decay there as well. As a result, the nuclei will lose their electric charge and atoms will disintegrate, since there will be nothing to keep electrons in orbit around the nuclei. The unattached electrons will form close pairs with the positrons. They will swirl around one another in a deadly dance and quickly annihilate into photons. We will thus be left in a “neutron world,” consisting of isolated neutronic nuclei and radiation. This world has no chemistry, no complex structures, and no life.
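The "no more than 0.2 percent" figure can be verified with a one-line kinematic calculation. A free proton can decay to a neutron and a positron once the neutron is lighter than the proton minus an electron mass. The sketch below uses standard particle masses in MeV/c^2 and neglects the tiny neutrino mass:

```python
# How much must the neutron mass drop before a free proton can decay
# via p -> n + e+ + nu? Masses in MeV/c^2; neutrino mass neglected.
m_p, m_n, m_e = 938.272, 939.565, 0.511

# The decay becomes energetically allowed once m_n < m_p - m_e.
threshold = m_p - m_e
fractional_drop = (m_n - threshold) / m_n

print(f"required decrease in neutron mass: {fractional_drop:.2%}")  # ~0.19%
```

A decrease of about 0.19 percent suffices, consistent with the text's bound of 0.2 percent.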
We next turn the neutron mass knob in the opposite direction. Once again, a mass increase of only a fraction of a percent triggers a catastrophic change. As neutrons get heavier, they become more unstable, and at some point they start decaying inside the atomic nuclei, turning into protons. The nuclei are then torn apart by the electric repulsion between protons, and the protons, once they are freed from the nuclei, combine with electrons to
form hydrogen atoms. Thus, we end up in a rather dull “hydrogen world,” where no chemical elements can exist except hydrogen.
To proceed with our exploration, let us now examine the effect of varying the strengths of basic particle interactions. Weak interactions do not play much of a role in the present-day universe, except in spectacular stellar explosions—the supernovae. When a massive star runs out of nuclear fuel, the inner core of the star collapses under its own weight. Enormous energy is released, escaping mostly in the form of weakly interacting neutrinos. Photons and other particles, which interact strongly or electromagnetically, remain trapped in the superdense collapsing core. On their way out, neutrinos blow off the outer layers of the star, which results in a colossal explosion. If weak interactions were much stronger than they actually are, neutrinos would not be able to escape from the core, and if they were much weaker, neutrinos would fly freely through outer layers without dragging them along. Thus, if we were to make a significant change in the strength of weak interactions one way or the other, astronomers would lose one of their most cherished spectacles.
You think you might be able to live with that? But wait; let us not turn the knob just yet. The effect of the change at earlier stages of cosmic evolution could be much more devastating. As we discussed in Chapter 4, heavy elements, such as carbon, oxygen, and iron, were forged in stellar interiors and then dispersed in supernova explosions. These elements are essential for the formation of planets and living creatures. Without supernovae, they would remain buried inside stars and the only elements available would be the lightest ones, formed in the big bang: hydrogen, helium, and deuterium, with a trace of lithium—not the kind of universe you would like to live in.
Gravity is by far the weakest of the four fundamental forces. Its effects are important only in the presence of huge aggregates of matter, like galaxies or stars. In fact, it is the weakness of gravity that makes the stars so massive: the mass has to be large enough to squeeze the hot gas to the high
density needed for nuclear reactions. If we were to make gravity stronger, the stars would not be so large and would burn out faster. A millionfold increase in the strength of gravity would make stellar masses a billion times smaller.
The mass of a typical star would then be less than the present mass of the Moon, and its lifetime would be about 10,000 years (compared to 10 billion years for the Sun). This time interval is hardly long enough for even the simplest bacteria to evolve. A much smaller enhancement of gravity may in fact be sufficient to depopulate the universe. A hundredfold increase, for example, would reduce stellar lifetimes well below the few billion years that it took for intelligent life to evolve on Earth.
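The quoted factors follow from a standard order-of-magnitude scaling (not spelled out in the text): the characteristic stellar mass goes roughly as the -3/2 power of the gravitational coupling, so multiplying G by some factor divides the typical stellar mass by that factor raised to 3/2. A minimal sketch under that assumption:

```python
# Scaling sketch: the characteristic stellar mass goes roughly as
# alpha_G**-1.5, where alpha_G = G * m_p**2 / (hbar * c) measures the
# strength of gravity. An order-of-magnitude estimate only.
def stellar_mass_suppression(gravity_boost):
    """Factor by which the typical stellar mass shrinks
    if G is multiplied by gravity_boost (M_* ~ G**-1.5)."""
    return gravity_boost ** 1.5

print(stellar_mass_suppression(1e6))  # millionfold stronger gravity
print(stellar_mass_suppression(1e2))  # hundredfold stronger gravity
```

A millionfold boost gives (10^6)^1.5 = 10^9, matching the billionfold reduction in stellar mass stated above; a hundredfold boost already shrinks stars a thousandfold.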
These and many other examples show that our presence in the universe depends on a precarious balance between different tendencies—a balance that would be destroyed if the constants of nature were to deviate significantly from their present values.[2]
What are we to make of this fine-tuning of the constants? Is it a sign of a Creator who carefully adjusted the constants so that life and intelligence would be possible? Perhaps. But there is also a completely different explanation.
The alternative view is based on a very different image of the Creator. Instead of meticulously designing the universe, he botches one sloppy job after another, putting out a huge number of universes with different and totally random values of the constants. Most of these universes are as exciting as the neutron world, but once in a while, by pure chance, a finely tuned universe fit for life will be created.
Given this worldview, let us ask ourselves: What kind of universe can we expect to live in? Most of the universes will be dreary and unsuitable for life, but there will be nobody there to complain about that. All intelligent beings will find themselves in the rare bio-friendly universes and will marvel at the miraculous conspiracy of the constants that made their existence possible. This line of reasoning is known as the anthropic principle. The name was coined in 1974 by Cambridge astrophysicist Brandon Carter, who offered the following formulation of the principle: “[W]hat we can expect to observe must be restricted by the conditions necessary for our presence as observers.”[3]
The anthropic principle is a selection criterion. It assumes the existence of some distant domains where the constants of nature are different. These domains may be located in some remote parts of our own universe, or they could belong to other, completely disconnected spacetimes. A collection of domains with a wide variety of properties is called a multiverse—the term introduced by Carter’s former classmate Martin Rees, now Britain’s Astronomer Royal. Later in this book we shall encounter three types of multiverse ensembles. The first consists of a multitude of regions all belonging to the same universe. The second type is made up of separate, disconnected universes. And the third type is a combination of the two: it consists of multiple universes, each of which has a variety of different regions. If a multiverse of any type really exists, then it is not surprising that the constants of nature are fine-tuned for life. On the contrary, they are guaranteed to be fine-tuned.
Anthropic reasoning can also be applied to variations of observable properties in time, rather than in space. One of the earliest applications was by Robert Dicke, who used the anthropic approach to explain the present age of the universe. Dicke argued that life can form only after heavy elements are synthesized in stellar interiors. This takes a few billion years. The elements are then dispersed in supernova explosions, and we have to allow a few more billion years for the second generation of stars and their planetary systems to form in the aftermath of the explosions and for biological evolution to occur. The first observers could not, therefore, appear much earlier than 10 billion years A.B. We should also keep in mind that a star like our Sun exhausts its nuclear energy in about 10 billion years and that the galactic supply of gas for new star formation is also depleted on a similar
time scale. At 100 billion years A.B. there will be very few Sun-like stars left in the visible universe.[4] If we assume that life will perish with the death of stars, we are left with a window between, say, 5 and 100 billion years A.B. when observers can exist. Not surprisingly, the present age of the universe falls within this window.[5]
Dicke’s use of the anthropic principle to constrain our location in time was uncontroversial. But Brandon Carter, Martin Rees, and a few other physicists attempted to go beyond that, using anthropic reasoning to explain the fine-tuning of the fundamental constants. And that’s where the controversy began.
As formulated by Carter, the anthropic principle is trivially true. The constants of nature and our location in spacetime should not preclude the existence of observers. For otherwise our theories would be logically inconsistent. When interpreted in this sense, as a simple consistency requirement, the anthropic principle is, of course, uncontroversial, although not very useful. But any attempt to use it as an explanation for the fine-tuning of the universe evoked an adverse and unusually temperamental response from the physics community.
There were in fact some good reasons for that. In order to explain the fine-tuning, one has to postulate the existence of a multiverse, consisting of remote domains where the constants of nature are different. The problem is, however, that there is not one iota of evidence to support this hypothesis. Even worse, it does not seem possible to ever confirm or disprove it. The philosopher Karl Popper argued that any statement that cannot be falsified cannot be scientific. This criterion, which has been generally adopted by physicists, seems to imply that anthropic explanations of the fine-tuning are not scientific. Another, related criticism was that the anthropic principle can only be used to explain what we already know. It never predicts anything, and thus cannot be tested.
It did not help that the whole subject of the anthropic principle had been obscured by murky and confusing interpretations. On top of that, many different formulations of the principle appeared in the literature (the philosopher Nick Bostrom, who wrote a book on the subject,[6] counted more than thirty). The situation is well summarized by a quote from Mark Twain: “The researches of many commentators have already thrown much darkness on this subject, and it is probable that, if they continue, we shall soon know nothing at all about it.”[7]
The term “anthropic” was itself a source of confusion, as it seems to refer to human beings, rather than to intelligent observers in general.
But the main reason why the response to anthropic explanations was so emotional was probably a feeling of betrayal. Ever since Einstein, physicists have believed that the day will come when all constants of nature will be calculated from some all-encompassing Theory of Everything. Resorting to anthropic arguments was viewed as a capitulation and evoked reactions ranging from annoyance to outright hostility. Some well-known physicists went so far as to say that anthropic ideas were “dangerous”[8] and that they were “corrupting science.”[9] Only in extreme cases, when all other possibilities have been exhausted, might one be excused for mentioning the “A-word,” and sometimes not even then. The Nobel Prize winner Steven Weinberg once said that a physicist talking about the anthropic principle “runs the same kind of risk as a cleric talking about pornography. No matter how much you say you are against it, some people will think you are a little too interested.”