WHEN THE LAWS OF PHYSICS AREN’T ENOUGH
Ultimately, it’s perfectly clear what the resolution to these debates must be, at least within our observable universe. Loschmidt is right that, within the set of all possible evolutions, entropy decreases exactly as often as it increases. But Boltzmann is also right that statistical mechanics explains why low-entropy conditions will evolve into high-entropy conditions with overwhelming probability. The conclusion should be obvious: In addition to the dynamics controlled by the laws of physics, we need to assume that the universe began in a low-entropy state. That is a boundary condition, an extra assumption, not part of the laws of physics themselves. (At least, not until we start talking about what happened before the Big Bang, which is not a discussion one could have had in the 1870s.) Unfortunately, that conclusion didn’t seem sufficient to people at the time, and subsequent years have seen confusions about the status of the H-Theorem proliferate beyond reason.
In 1876, Boltzmann wrote a response to Loschmidt’s reversibility objection, which did not really clarify the situation. Boltzmann certainly understood that Loschmidt had a point, and admitted that there must be something undeniably probabilistic about the Second Law; it couldn’t be absolute, if kinetic theory were true. At the beginning of his paper, he makes this explicit:
Since the entropy would decrease as the system goes through this sequence in reverse, we see that the fact that entropy actually increases in all physical processes in our own world cannot be deduced solely from the nature of the forces acting between the particles, but must be a consequence of the initial conditions.
We can’t ask for a more unambiguous statement than that: “the fact that entropy increases in our own world . . . must be a consequence of the initial conditions.” But then, still clinging to the idea of proving something without relying on initial conditions, he immediately says this:
Nevertheless, we do not have to assume a special type of initial condition in order to give a mechanical proof of the Second Law, if we are willing to accept a statistical viewpoint.
“Accepting a statistical viewpoint” presumably means that he admits we can argue only that increasing entropy is overwhelmingly likely, not that it always happens. But what can he mean by now saying that we don’t have to assume a special type of initial condition? The next sentences confirm our fears:
While any individual non-uniform state (corresponding to low entropy) has the same probability as any individual uniform state (corresponding to high entropy), there are many more uniform states than non-uniform states. Consequently, if the initial state is chosen at random, the system is almost certain to evolve into a uniform state, and entropy is almost certain to increase.
That first sentence is right, but the second is surely wrong. If an initial state is chosen at random, it is not “almost certain to evolve into a uniform state”; rather, it is almost certain simply to be in a uniform (high-entropy) state already. Among the small number of low-entropy states, almost all of them evolve toward higher-entropy states. In contrast, only a very tiny fraction of high-entropy states will evolve toward low-entropy states; however, there are a fantastically larger number of high-entropy states to begin with. The total number of low-entropy states that evolve to high entropy is equal, as Loschmidt argued, to the total number of high-entropy states that evolve to low entropy.
Reading through Boltzmann’s papers, one gets a strong impression that he was several steps ahead of everyone else—he saw the ins and outs of all the arguments better than any of his interlocutors. But after zooming through the ins and outs, he didn’t always stop at the right place; moreover, he was notoriously inconsistent about the working assumptions he would adopt from paper to paper. We should cut him some slack, however, since here we are 140 years later and we still don’t agree on the best way of talking about entropy and the Second Law.
THE PAST HYPOTHESIS
Within our observable universe, the consistent increase of entropy and the corresponding arrow of time cannot be derived from the underlying reversible laws of physics alone. They require a boundary condition at the beginning of time. To understand why the Second Law works in our real world, it is not sufficient to simply apply statistical reasoning to the underlying laws of physics; we must also assume that the observable universe began in a state of very low entropy. David Albert has helpfully given this assumption a simple name: the Past Hypothesis.
The Past Hypothesis is the one profound exception to the Principle of Indifference that we alluded to above. The Principle of Indifference would have us imagine that, once we know a system is in some certain macrostate, we should consider every possible microstate within that macrostate to have an equal probability. This assumption turns out to do a great job of predicting the future on the basis of statistical mechanics. But it would do a terrible job of reconstructing the past, if we really took it seriously.
Boltzmann has told us a compelling story about why entropy increases: There are more ways to be high entropy than low entropy, so most microstates in a low-entropy macrostate will evolve toward higher-entropy macrostates. But that argument makes no reference to the direction of time. Following that logic, most microstates within some macrostate will increase in entropy toward the future but will also have evolved from a higher-entropy condition in the past.
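To put numbers on “more ways,” here is the standard counting for the divided box of gas taken up just below, with my own arithmetic and the usual identification of entropy with the logarithm of the number of microstates (the text itself doesn’t display these formulas):

```latex
% Counting microstates of a box of N = 2000 particles by how many sit
% on the left side; taking S = log W is an assumption of this sketch.
W(n) = \binom{N}{n}, \qquad S(n) = \log W(n)
% Even split versus the 80/20 low-entropy macrostate (my arithmetic,
% via Stirling's approximation):
\frac{W(1000)}{W(1600)} = \binom{2000}{1000} \Big/ \binom{2000}{1600} \approx 10^{167}
```

A microstate grabbed at random from all 2^2000 of them is therefore overwhelmingly likely to sit near the even split, which is all the Principle of Indifference needs in order to predict the future well.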
Consider all the microstates in some medium-entropy macrostate. The overwhelming majority of those states have come from prior states of high entropy. They must have, because there aren’t that many low-entropy states from which they could have come. So with high probability, a typical medium-entropy microstate appears as a “statistical fluctuation” from a higher-entropy past. This is exactly the same argument that entropy should increase into the future, just with the direction of time reversed.
As an example, consider the divided box of gas with 2,000 particles. Starting from a low-entropy condition (80 percent of the particles on one side), the entropy tends to go up, as plotted in Figure 43. But in Figure 47 we show how the entropy evolves to the past as well as to the future. Since the underlying dynamical rule (“each particle has a 0.5 percent chance of changing sides per second”) doesn’t distinguish between directions of time, it’s no surprise that the entropy is higher in the past of that special moment just as it is in the future.
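That curve is easy to reproduce. Here is a minimal simulation sketch (mine, not the book’s): it takes the stated rule literally, each particle independently switching sides with probability 0.005 per second, sets the 80/20 boundary condition at t = 500, and, since the rule is blind to the direction of time, generates the “past” by simply running the same rule again. Measuring entropy as S = log W is my assumption.

```python
# Sketch of the divided-box model behind Figure 47 (my reconstruction).
# Assumed here, not specified in the text: entropy is S = log W, where
# W = C(N, n_left) counts the microstates in the macrostate
# "n_left particles on the left."
import math
import random

N = 2000          # total particles
P_SWITCH = 0.005  # 0.5 percent chance per particle per second
STEPS = 500       # seconds simulated in each time direction

def entropy(n_left):
    # log of the binomial coefficient C(N, n_left), via log-gamma
    return (math.lgamma(N + 1) - math.lgamma(n_left + 1)
            - math.lgamma(N - n_left + 1))

def step(n_left):
    # One second: every particle independently switches sides with
    # probability P_SWITCH. (Deliberately simple; a binomial sampler
    # would be faster.)
    to_right = sum(random.random() < P_SWITCH for _ in range(n_left))
    to_left = sum(random.random() < P_SWITCH for _ in range(N - n_left))
    return n_left - to_right + to_left

def trajectory(n_start, steps):
    ns = [n_start]
    for _ in range(steps):
        ns.append(step(ns[-1]))
    return ns

# Boundary condition at t = 500: the low-entropy 80/20 macrostate.
# Because the rule is time-symmetric, the "past" of that moment is
# generated by the very same process as the future.
future = trajectory(1600, STEPS)         # t = 500 ... 1000
past = trajectory(1600, STEPS)           # t = 500 ... 0
curve = past[::-1] + future[1:]          # full history, t = 0 ... 1000

for t in range(0, 1001, 100):
    print(f"t = {t:4d}   n_left = {curve[t]:4d}   S = {entropy(curve[t]):7.1f}")
```

Both runs drift toward the 50/50 equilibrium, so the printed entropy is lowest at t = 500 and higher on either side: the V shape of Figure 47.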
You may object, thinking that it’s very unlikely that a system would start out in equilibrium and then dive down to a low-entropy state. That’s certainly true; it would be much more likely to remain at or near equilibrium. But given that we insist on having a low-entropy state at all, it is overwhelmingly likely that such a state represents a minimum on the entropy curve, with higher entropy both to the past and to the future.
Figure 47: The entropy of a divided box of gas. The “boundary” condition is set at time = 500, where 80 percent of the particles are on one side and 20 percent on the other (a low-entropy macrostate). Entropy increases both to the future and to the past of that moment.
At least, it would be overwhelmingly likely, if all we had to go on were the Principle of Indifference. The problem is, no one in the world thinks that the entropy of the real universe behaves as shown in Figure 47. Everyone agrees that the entropy will be higher tomorrow than it is today, but nobody thinks it was higher yesterday than it is today. There are good reasons for that agreement, as we’ll discuss in the next chapter—if we currently live at a minimum of the entropy curve, all of our memories of the past are completely unreliable, and we have no way of making any kind of sense of the universe.
So if we care about what actually happens in the world, we have to supplement the Principle of Indifference with the Past Hypothesis. When it comes to picking out microstates within our macrostate, we do not assign every one equal probability: We choose only those microstates that are compatible with a much lower-entropy past (a very tiny fraction), and take all of those to have equal probability.
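To make that filtering concrete, here is a sketch in the same box model (my own construction; the “today” macrostate and the sample sizes are hypothetical choices, not from the text). It contrasts the past predicted by the Principle of Indifference alone with the past obtained by keeping only histories that began at low entropy:

```python
# "Indifference alone" versus "indifference + Past Hypothesis" in the
# divided-box model (my construction; TODAY is a hypothetical
# medium-entropy macrostate, and sample sizes are arbitrary).
import random

N = 2000          # total particles
P_SWITCH = 0.005  # 0.5 percent chance per particle per second
T = 100           # seconds separating "the past" from "today"
TODAY = range(1210, 1231)   # today's macrostate: n_left in 1210-1230

def step(n_left):
    to_right = sum(random.random() < P_SWITCH for _ in range(n_left))
    to_left = sum(random.random() < P_SWITCH for _ in range(N - n_left))
    return n_left - to_right + to_left

def run(n_left, steps):
    for _ in range(steps):
        n_left = step(n_left)
    return n_left

# 1) Principle of Indifference alone: evolve today's macrostate "into
#    the past" with the same time-symmetric rule. The typical result
#    sits nearer the 1000/1000 equilibrium (higher entropy), making
#    today look like a statistical fluctuation.
pasts = [run(random.choice(TODAY), T) for _ in range(200)]
print("typical n_left in the reconstructed past:",
      sum(pasts) / len(pasts))   # roughly 1080, not the actual 1600

# 2) With the Past Hypothesis: consider only histories that began at
#    low entropy (n_left = 1600) and ask which ones pass through
#    today's macrostate -- rejection sampling on the past condition.
survivors = [run(1600, T) for _ in range(200)]
frac = sum(n in TODAY for n in survivors) / len(survivors)
print("fraction of low-entropy-past histories compatible with today:", frac)
```

Under indifference alone the reconstructed past has drifted toward equilibrium, the unreliable “fluctuation” past of Figure 47; the Past Hypothesis instead tells us to retrodict using only the second family of histories.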
But this strategy leaves us with a question: Why is the Past Hypothesis true? In Boltzmann’s time, we didn’t know anything about general relativity or the Big Bang, much less quantum mechanics or quantum gravity. But the question remains with us, only in a more specific form: Why did the universe have a low entropy near the Big Bang?
9
INFORMATION AND LIFE
You should call it entropy, for two reasons. In the first place, your uncertainty function has been used in statistical mechanics under that name, so it already has a name. In the second place, and more important, no one knows what entropy really is, so in a debate you will always have the advantage.
—John von Neumann, to Claude Shannon
In a celebrated episode in Swann’s Way, Marcel Proust’s narrator is feeling cold and somewhat depressed. His mother offers him tea, which he reluctantly accepts. He is then pulled into an involuntary recollection of his childhood by the taste of a traditional French teatime cake, the madeleine.
And suddenly the memory appeared. That taste was the taste of the little piece of madeleine which on Sunday mornings at Combray . . . when I went to say good morning to her in her bedroom, my aunt Léonie would give me after dipping it in her infusion of tea or lime blossom . . . And as soon as I had recognized the taste of the piece of madeleine dipped in lime-blossom tea that my aunt used to give me . . . immediately the old gray house on the street, where her bedroom was, came like a stage set to attach itself to the little wing opening onto a garden that had been built for my parents behind it . . . ; and with the house the town, from morning to night and in all weathers, the Square, where they sent me before lunch, the streets where I went on errands, the paths we took if the weather was fine.
Swann’s Way is the first of the seven volumes of À la recherche du temps perdu, which translates into English as In Search of Lost Time. But C. K. Scott Moncrieff, the original translator, borrowed a line from Shakespeare’s Sonnet 30 to render Proust’s novel as Remembrance of Things Past.
The past, of course, is a natural thing to have remembrances of. What else would we be remembering, anyway? Surely not the future. Of all the ways in which the arrow of time manifests itself, memory—and in particular, the fact that it applies to the past but not the future—is the most obvious, and the most central to our lives. Perhaps the most important difference between our experience of one moment and our experience of the next is the accumulation of memories, propelling us forward in time.
My stance so far has been that all the important ways in which the past differs from the future can be traced to a single underlying principle, the Second Law of Thermodynamics. This implies that our ability to remember the past but not the future must ultimately be explained in terms of entropy, and in particular by recourse to the Past Hypothesis that the early universe was in a very low-entropy state. Examining how that works will launch us on an exploration of the relationship between entropy, information, and life.
PICTURES AND MEMORIES
One of the problems in talking about “memory” is that there’s a lot we don’t understand about how the human brain actually works, not to mention the phenomenon of consciousness. For our present purposes, however, that’s not a significant handicap. When we talk about remembering the past, we’re interested not specifically in the human experience of memory, but in the general notion of reconstructing past events from the present state of the world. We don’t lose anything by considering well-understood mechanical recording devices, or even such straightforward artifacts as photographs or history books. (We are making an implicit assumption that human beings are part of the natural world, and in particular that our minds can in principle be understood in terms of our brains, which obey the laws of physics.)