
The real reason we use the Principle of Indifference is that we don’t know any better. And, of course, because it seems to work.

OTHER ENTROPIES, OTHER ARROWS

We’ve been pretty definitive about what we mean by “entropy” and “the arrow of time.” Entropy counts the number of macroscopically indistinguishable states, and the arrow of time arises because entropy increases uniformly throughout the observable universe. The real world being what it is, however, other people often use these words to mean slightly different things.

The definition of entropy we have been working with—the one engraved on Boltzmann’s tombstone—associates a specific amount of entropy with each individual microstate. A crucial part of the definition is that we first decide on what counts as “macroscopically measurable” features of the state, and then use those to coarse-grain the entire space of states into a set of macrostates. To calculate the entropy of a microstate, we count the total number of microstates that are macroscopically indistinguishable from it, then take the logarithm.
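For readers who want the symbols, the tombstone formula can be written as

$$ S = k \log W, $$

where W is the number of microstates macroscopically indistinguishable from the one in question and k is Boltzmann’s constant.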

But notice something interesting: As a state evolving through time moves from a low-entropy condition to a high-entropy condition, if we choose to forget everything other than the macrostate to which it belongs, we end up knowing less and less about which state we actually have in mind. In other words, if we are told that a system belongs to a certain macrostate, the probability that it is any particular microstate within that macrostate decreases as the entropy increases, just because there are more possible microstates it could be. Our information about the state—how accurately we have pinpointed which microstate it is—goes down as the entropy goes up.

This suggests a somewhat different way of defining entropy in the first place, a way that is most closely associated with Josiah Willard Gibbs. (Boltzmann actually investigated similar definitions, but it’s convenient for us to associate this approach with Gibbs, since Boltzmann already has his.) Instead of thinking of entropy as something that characterizes individual states—namely, the number of other states that look macroscopically similar—we could choose to think of entropy as characterizing what we know about the state. In the Boltzmann way of thinking about entropy, the knowledge of which macrostate we are in tells us less and less about the microstate as entropy increases; the Gibbs approach inverts this perspective and defines entropy in terms of how much we know. Instead of starting with a coarse-graining on the space of states, we start with a probability distribution: the percentage chance, for each possible microstate, that the system is actually in that microstate right now. Then Gibbs gives us a formula, analogous to Boltzmann’s, for calculating the entropy associated with that probability distribution.139 Coarse-graining never comes into the game.
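In the same notation, the Gibbs formula assigns to a probability distribution p_i over microstates the entropy

$$ S = -k \sum_i p_i \log p_i, $$

which reduces to the Boltzmann value k log W when the distribution is uniform over the W microstates of a single macrostate and zero elsewhere.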

Neither the Boltzmann formula nor the Gibbs formula for entropy is the “right” one. They both are things you can choose to define, and manipulate, and use to help understand the world; each comes with its advantages and disadvantages. The Gibbs formula is often used in applications, for one very down-to-Earth reason: It’s easy to calculate with. Because there is no coarse-graining, there is no discontinuous jump in entropy when a system goes from one macrostate to another; that’s a considerable benefit when solving equations.

But the Gibbs approach also has two very noticeable disadvantages. One is epistemic: It associates the idea of “entropy” with our knowledge of the system, rather than with the system itself. This has caused all kinds of mischief among the community of people who try to think carefully about what entropy really means. Arguments go back and forth, but the approach I have taken in this book, which treats entropy as a feature of the state rather than a feature of our knowledge, seems to avoid most of the troublesome issues.

The other disadvantage is more striking: If you know the laws of physics and use them to study how the Gibbs entropy evolves with time, you find that it never changes. A bit of reflection convinces us that this must be true. The Gibbs entropy characterizes how well we know what the state is. But under the influence of reversible laws, that’s a quantity that doesn’t change—information isn’t created or destroyed. For the entropy to go up, we would have to know less about the state in the future than we know about it now; but we can always run the evolution backward to see where it came from, so that can’t happen. To derive something like the Second Law from the Gibbs approach, you have to “forget” something about the evolution. When you get right down to it, that’s philosophically equivalent to the coarse-graining we had to do in the Boltzmann approach; we’ve just moved the “forgetting” step to the equations of motion, rather than the space of states.
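A small numerical sketch can make this point concrete. It is not from the book, and the toy setup is my own choice of illustration: a fixed permutation of a discrete state space stands in for reversible dynamics, and blocks of states stand in for macrostates. The Gibbs entropy of any probability distribution is unchanged by the permutation, while coarse-graining can only raise it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy discrete "state space" with N microstates.
N = 1000

# Reversible, information-preserving dynamics: a fixed permutation of the states.
perm = rng.permutation(N)

# Our knowledge of the system: a probability distribution over microstates.
p = rng.random(N)
p /= p.sum()

def gibbs_entropy(p):
    """S = -sum_i p_i log p_i, with Boltzmann's constant set to 1."""
    q = p[p > 0]
    return -np.sum(q * np.log(q))

# Evolving the distribution under the permutation never changes the entropy:
# the map merely relabels microstates, so no information is created or destroyed.
for step in range(4):
    print(f"step {step}: S = {gibbs_entropy(p):.6f}")
    p = p[perm]

# Coarse-graining, by contrast, smears p uniformly within blocks of 10 states
# (we "forget" which member of the block the system is in) and can only raise S.
p_coarse = np.repeat(p.reshape(-1, 10).mean(axis=1), 10)
print(f"after coarse-graining: S = {gibbs_entropy(p_coarse):.6f}")
```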

Nevertheless, there’s no question that the Gibbs formula for entropy is extremely useful in certain applications, and people are going to continue to take advantage of it. And that’s not the end of it; there are several other ways of thinking about entropy, and new ones are frequently being proposed in the literature. There’s nothing wrong with that; after all, Boltzmann and Gibbs were proposing definitions to supersede Clausius’s perfectly good definition of entropy, which is still used today under the rubric of “thermodynamic” entropy. After quantum mechanics came on the scene, John von Neumann proposed a formula for entropy that is specifically adapted to the quantum context. As we’ll discuss in the next chapter, Claude Shannon suggested a definition of entropy that was very similar in spirit to Gibbs’s, but in the framework of information theory rather than physics. The point is not to find the one true definition of entropy; it’s to come up with concepts that serve useful functions in the appropriate contexts. Just don’t let anyone bamboozle you by pretending that one definition or the other is the uniquely correct meaning of entropy.

Just as there are many definitions of entropy, there are many different “arrows of time,” another source of potential bamboozlement. We’ve been dealing with the thermodynamic arrow of time, the one defined by entropy and the Second Law. There is also the cosmological arrow of time (the universe is expanding), the psychological arrow of time (we remember the past and not the future), the radiation arrow of time (electromagnetic waves flow away from moving charges, not toward them), and so on. These different arrows fall into different categories. Some, like the cosmological arrow, reflect facts about the evolution of the universe but are nevertheless completely reversible. It might end up being true that the ultimate explanation for the thermodynamic arrow also explains the cosmological arrow (in fact it seems quite plausible), but the expansion of the universe doesn’t present any puzzle with respect to the microscopic laws of physics in the same way the increase of entropy does. Meanwhile, the arrows that reflect true irreversibilities—the psychological arrow, radiation arrow, and even the arrow defined by quantum mechanics we will investigate later—all seem to be reflections of the same underlying state of affairs, characterized by the evolution of entropy. Working out the details of how they are all related is undeniably important and interesting, but I will continue to speak of “the” arrow of time as the one defined by the growth of entropy.

PROVING THE SECOND LAW

Once Boltzmann had understood entropy as a measure of how many microstates fit into a given macrostate, his next goal was to derive the Second Law of Thermodynamics from that perspective. I’ve already given the basic reasons why the Second Law works—there are more ways to be high-entropy than low-entropy, and distinct starting states evolve into distinct final states, so most of the time (with truly overwhelming probability) we would expect entropy to go up. But Boltzmann was a good scientist and wanted to do better than that; he wanted to prove that the Second Law followed from his formulation.

It’s hard to put ourselves in the shoes of a late-nineteenth-century thermodynamicist. Those folks felt that the inability of entropy to decrease in a closed system was not just a good idea; it was a Law. The idea that entropy would “probably” increase wasn’t any more palatable than a suggestion that energy would “probably” be conserved would have been. In reality, the numbers are just so overwhelming that the probabilistic reasoning of statistical mechanics might as well be absolute, for all intents and purposes. But Boltzmann wanted to prove something more definite than that.

In 1872, Boltzmann (twenty-eight years old at the time) published a paper in which he purported to use kinetic theory to prove that entropy would always increase or remain constant—a result called the “H-Theorem,” which has been the subject of countless debates ever since. Even today, some people think that the H-Theorem explains why the Second Law holds in the real world, while others think of it as an amusing relic of intellectual history. The truth is that it’s an interesting result for statistical mechanics but falls short of “proving” the Second Law.

Boltzmann reasoned as follows. In a macroscopic object such as a room full of gas or a cup of coffee with milk, there are a tremendous number of molecules—more than 10^24. He considered the special case where the gas is relatively dilute, so that two particles might bump into each other, but we can ignore those rare events when three or more particles bump into one another at the same time. (That really is an unobjectionable assumption.) We need some way of characterizing the macrostate of all these particles. So instead of keeping track of the position and momentum of every molecule (which would be the whole microstate), let’s keep track of the average number of particles that have any particular position and momentum. In a box of gas in equilibrium at a certain temperature, for example, the average number of particles is equal at every position in the box, and there will be a certain distribution of momenta, so that the average energy per particle gives the right temperature. Given just that information, you can calculate the entropy of the gas. And then you could prove (if you were Boltzmann) that the entropy of a gas that is not in equilibrium will go up as time goes by, until it reaches its maximum value, and then it will just stay there. The Second Law has, apparently, been derived.140
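Here is a toy simulation in the same spirit; it is mine, not Boltzmann’s. Each particle is reduced to a bare kinetic energy, and random pairs “collide” by re-splitting their combined energy at random, which quietly builds in the molecular-chaos assumption discussed below. The entropy of the coarse-grained (binned) energy distribution climbs from zero toward its equilibrium value and then stays there.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "dilute gas": each particle is reduced to a kinetic energy. Random pairs
# collide and re-split their combined energy at random (positions are ignored,
# and the random re-splitting is itself a molecular-chaos-style assumption).
n_particles = 50_000
energies = np.full(n_particles, 1.0)   # far from equilibrium: every energy equal

def coarse_entropy(energies, n_bins=50):
    """Entropy of the coarse-grained (binned) energy distribution."""
    counts, _ = np.histogram(energies, bins=n_bins, range=(0.0, 10.0))
    f = counts / counts.sum()
    f = f[f > 0]
    return -np.sum(f * np.log(f))

for sweep in range(6):
    print(f"sweep {sweep}: S = {coarse_entropy(energies):.4f}")
    # One sweep = n_particles / 2 random two-body collisions.
    idx = rng.permutation(n_particles)
    a, b = idx[::2], idx[1::2]
    total = energies[a] + energies[b]
    split = rng.random(len(a))
    energies[a], energies[b] = split * total, (1.0 - split) * total
```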

But there is clearly something fishy going on. We started with microscopic laws of physics that are perfectly time-reversal invariant—they work equally well running forward or backward in time. And then Boltzmann claimed to derive a result from them that is manifestly not time-reversal invariant—one that demonstrates a clear arrow of time, by saying that entropy increases toward the future. How can you possibly get irreversible conclusions from reversible assumptions?

This objection was put forcefully in 1876 by Josef Loschmidt, after similar concerns had been expressed by William Thomson (Lord Kelvin) and James Clerk Maxwell. Loschmidt was close friends with Boltzmann and had served as a mentor to the younger physicist in Vienna in the 1860s. And he was no skeptic of atomic theory; in fact Loschmidt was the first scientist to accurately estimate the physical sizes of molecules. But he couldn’t understand how Boltzmann could have derived time asymmetry without sneaking it into his assumptions.

The argument behind what is now known as “Loschmidt’s reversibility objection” is simple. Consider some specific microstate corresponding to a low-entropy macrostate. It will, with overwhelming probability, evolve toward higher entropy. But time-reversal invariance guarantees that for every such evolution, there is another allowed evolution—the time reversal of the original—that starts in the high-entropy state and evolves toward the low-entropy state. In the space of all things that can happen over time, there are precisely as many examples of entropy starting high and decreasing as there are examples of entropy starting low and increasing. In Figure 45, showing the space of states divided up into macrostates, we illustrated a trajectory emerging from a very low-entropy macrostate; but trajectories don’t just pop into existence. That history had to come from somewhere, and that somewhere had to have higher entropy—an explicit example of a path along which entropy decreased. It is manifestly impossible to prove that entropy always increases, if you believe in time-reversal-invariant dynamics (as they all did).141

But Boltzmann had proven something—there were no mathematical or logical errors in his arguments, as far as anyone could tell. It would appear that he must have smuggled in some assumption of time asymmetry, even if it weren’t explicitly stated.

And indeed he had. A crucial step in Boltzmann’s reasoning was the assumption of molecular chaos—in German, the Stosszahlansatz, translated literally as “collision number hypothesis.” It amounts to assuming that there are no sneaky conspiracies in the motions of individual molecules in the gas. But a sneaky conspiracy is precisely what is required for the entropy to decrease! So Boltzmann had effectively proven that entropy could increase only by dismissing the alternative possibilities from the start. In particular, he had assumed that the momenta of every pair of particles were uncorrelated before they collided. But that “before” is an explicitly time-asymmetric step; if the particles really were uncorrelated before a collision, they would generally be correlated afterward. That’s how an irreversible assumption was sneaked into the proof.

If we start a system in a low-entropy state and allow it to evolve to a high-entropy state (let an ice cube melt, for example), there will certainly be a large number of correlations between the molecules in the system once all is said and done. Namely, there will be correlations that guarantee that if we reversed all the momenta, the system would evolve back to its low-entropy beginning state. Boltzmann’s analysis didn’t account for this possibility. He proved that entropy would never decrease, if we neglected those circumstances under which entropy would decrease.
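A standard classroom illustration of this point, not taken from the book, is the Kac ring model: a deterministic and exactly reversible dynamics whose coarse-grained entropy nonetheless rises from a special starting state, while the correlations it builds up along the way are exactly what let you run the movie backward and recover the low-entropy beginning.

```python
import numpy as np

rng = np.random.default_rng(2)

# Kac ring: N balls on a ring, each black (1) or white (0). A fixed random set
# of edges is "marked". Each step, every ball moves one site clockwise and flips
# color when it crosses a marked edge. The rule is deterministic and has an exact
# inverse: step counterclockwise, undoing the flips on the same marked edges.
N = 100_000
marked = rng.random(N) < 0.1           # marked[i] is the edge from site i to i+1
colors = np.ones(N, dtype=int)         # low-entropy start: every ball black

def coarse_entropy(colors):
    """Entropy of the macrostate, which records only the fraction of black balls."""
    p = colors.mean()
    terms = [q * np.log(q) for q in (p, 1.0 - p) if q > 0]
    # max() guards against a spurious -0.0 when p is exactly 0 or 1.
    return max(0.0, -sum(terms))

def step_forward(colors):
    # The ball at site i crosses edge i (flipping if marked) and lands at site i+1.
    return np.roll(np.where(marked, 1 - colors, colors), 1)

def step_backward(colors):
    # Exact inverse: move each ball back one site and undo the flip it received.
    moved = np.roll(colors, -1)
    return np.where(marked, 1 - moved, moved)

print("entropy at the start            :", round(coarse_entropy(colors), 4))
for _ in range(200):
    colors = step_forward(colors)
print("entropy after 200 forward steps :", round(coarse_entropy(colors), 4))
for _ in range(200):
    colors = step_backward(colors)
print("entropy after reversing them all:", round(coarse_entropy(colors), 4))
```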
