
At first it would seem simple enough. Take two boxes of gas molecules. Prepare one of them in some low-entropy state, as in the top left of Figure 46; once the molecules are let go, their entropy will go up as expected. Prepare the other box by taking a high-entropy state that has just evolved from a low-entropy state, and reversing all of the velocities, as at the bottom left. That second box is delicately constructed so that the entropy will decrease with time. So, starting from that initial condition in both boxes, we will see the entropy evolve in opposite directions.
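To see the setup in miniature, here is a rough numerical sketch (all of the numbers, the grid used for coarse-graining, and the use of non-interacting particles are assumptions made purely for illustration, not anything from the book): a cloud of particles starts bunched in one corner of a box, spreads out as time runs forward, and then has every velocity reversed, after which the very same dynamics drives the coarse-grained entropy back down.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "gas": non-interacting particles in a unit box, started in a small
# corner (a low-entropy macrostate).  Real molecules collide, but free
# streaming plus wall bounces is enough to show the velocity-reversal trick.
N = 2000
pos = rng.uniform(0.0, 0.1, size=(N, 2))   # low-entropy start: tiny corner
vel = rng.normal(0.0, 1.0, size=(N, 2))

def coarse_entropy(p, bins=10):
    """Coarse-grained entropy: -sum p_i log p_i over a grid of cells."""
    hist, _, _ = np.histogram2d(p[:, 0], p[:, 1], bins=bins,
                                range=[[0, 1], [0, 1]])
    probs = hist.ravel() / len(p)
    probs = probs[probs > 0]
    return -np.sum(probs * np.log(probs))

def step(pos, vel, dt=0.01):
    pos = pos + vel * dt
    for d in range(2):                     # elastic, reversible wall bounces
        over, under = pos[:, d] > 1.0, pos[:, d] < 0.0
        pos[over, d] = 2.0 - pos[over, d]
        pos[under, d] = -pos[under, d]
        vel[over | under, d] *= -1.0
    return pos, vel

print("start:          ", coarse_entropy(pos))
for _ in range(200):                       # box one: entropy goes up
    pos, vel = step(pos, vel)
print("after spreading:", coarse_entropy(pos))

vel = -vel                                 # the delicately prepared second box
for _ in range(200):                       # same dynamics, entropy goes down
    pos, vel = step(pos, vel)
print("after reversal: ", coarse_entropy(pos))
```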

But we want more than that. It’s not very interesting to have two completely separate systems with oppositely directed arrows of time. We would like to have systems that interact—one system can somehow communicate with the other.

And that ruins everything.
Imagine we started with these two boxes, one of which had an entropy that was ready to go up and the other ready to go down. But then we introduced a tiny interaction that connected the boxes—say, a few photons moving between the boxes, bouncing off a molecule in one before returning to the other. Certainly the interaction of Benjamin Button’s body with the rest of the world is much stronger than that. (Likewise the White Queen, or Martin Amis’s narrator in Time’s Arrow.)

That extra little interaction will slightly alter the velocities of the molecules with which it interacts. (Momentum is conserved, so it has no choice.) That’s no problem for the box that starts with low entropy, as there is no delicate tuning required to make the entropy go up. But it completely ruins our attempt to set up conditions in the other box so that entropy goes down. Just a tiny change in velocity will quickly propagate through the gas, as one affected molecule hits another molecule, and then they hit two more, and so on. It was necessary for all of the velocities to be very precisely aligned to make the gas miraculously conspire to decrease its entropy, and any interaction we might want to introduce will destroy the required conspiracy. The entropy in the first box will very sensibly go up, while the entropy in the other will just stay high; that subsystem will basically stay in equilibrium. You can’t have incompatible arrows of time among interacting subsystems of the universe.

ENTROPY AS DISORDER

We often say that entropy measures disorder. That’s a shorthand translation of a very specific concept into somewhat sloppy language—perfectly adequate as a quick gloss, but there are ways in which it can occasionally go wrong. Now that we know the real definition of entropy given by Boltzmann, we can understand how close this informal idea comes to the truth.
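As a reminder, the Boltzmann definition referred to here can be written compactly as

S = k \log W ,

where W is the number of microstates corresponding to the macrostate in question and k is a constant (Boltzmann’s constant) that converts the count into conventional thermodynamic units: the more microstates, the higher the entropy.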

The question is, what do you mean by “order”? That’s not a concept that can easily be made rigorous, as we have done with entropy. In our minds, we associate “order” with a condition of purposeful arrangement, as opposed to a state of randomness. That certainly bears a family resemblance to the way we’ve been talking about entropy. An egg that has not yet been broken seems more orderly than one that we have split apart and whisked into a smooth consistency.

Entropy seems naturally to be associated with disorder because, more often than not, there are more ways to be disordered than to be ordered. A classic example of the growth of entropy is the distribution of papers on your desk. You can put them into neat piles—orderly, low entropy—and over time they will tend to get scattered across the desktop—disorderly, high entropy. Your desk is not a closed system, but the basic idea is on the right track.

But if we push too hard on the association, it doesn’t quite hold up. Consider the air molecules in the room you’re sitting in right now—presumably spread evenly throughout the room in a high-entropy configuration. Now imagine those molecules were instead collected into a small region in the center of the room, just a few centimeters across, taking on the shape of a miniature replica of the Statue of Liberty. That would be, unsurprisingly, much lower entropy—and we would all agree that it also seemed to be more orderly. But now imagine that all the gas in the room was collected into an extremely tiny region, only 1 millimeter across, in the shape of an amorphous blob. Because the region of space covered by the gas is even smaller now, the entropy of that configuration is lower than in the Statue of Liberty example. (There are more ways to rearrange the molecules within a medium-sized statuette than there are within a very tiny blob.) But it’s hard to argue that an amorphous blob is more “orderly” than a replica of a famous monument, even if the blob is really small. So in this case the correlation between orderliness and low entropy seems to break down, and we need to be more careful.
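One way to make that parenthetical counting claim vivid is a back-of-the-envelope estimate; the numbers below (molecule count, cell size, volumes) are invented purely for illustration, and the molecules are treated as if each one could sit in any small cell of its allotted region independently of the others.

```python
import math

N = 1e25            # rough number of air molecules in a room (illustrative)
cell = 1e-9         # assumed "cell" size of one nanometer (illustrative)

def log_arrangements(volume_m3):
    """Crude count: each molecule can occupy any cell, so the number of
    arrangements is (number of cells)**N and its log is N * log(cells)."""
    cells = volume_m3 / cell**3
    return N * math.log(cells)

regions = [("whole room",       50.0),        # ~50 cubic meters
           ("statuette, ~5 cm", 0.05**3),
           ("blob, ~1 mm",      0.001**3)]

for name, volume in regions:
    print(f"{name:18s} log(# of arrangements) ~ {log_arrangements(volume):.2e}")

# The smaller the region, the fewer ways to arrange the molecules inside it,
# so the tiny blob has the lowest entropy of the three, even though it is
# the least "orderly" shape.
```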

That example seems a bit contrived, but we actually don’t have to work that hard to see the relationship between entropy and disorder break down. In keeping with our preference for kitchen-based examples, consider oil and vinegar. If you shake oil and vinegar together to put on a salad, you may have noticed that they tend to spontaneously unmix themselves if you set the mixture down and leave it to its own devices. This is not some sort of spooky violation of the Second Law of Thermodynamics. Vinegar is made mostly of water, and water molecules tend to stick to oil molecules—and, due to the chemical properties of oil and water, they stick in very particular configurations. So when oil and water (or vinegar) are thoroughly mixed, the water molecules cling to the oil molecules in specific arrangements, corresponding to a relatively low-entropy state. Whereas, when the two substances are largely segregated, the individual molecules can move freely among the other molecules of similar type. At room temperature, it turns out that oil and water have a higher entropy in the unmixed state than in the mixed state. Order appears spontaneously at the macroscopic level, but it’s ultimately a matter of disorder at the microscopic level.

Things are also subtle for really big systems. Instead of the gas in a room, consider an astronomical-sized cloud of gas and dust—say, an interstellar nebula. That seems pretty disorderly and high-entropy. But if the nebula is big enough, it will contract under its own gravity and eventually form stars, perhaps with planets orbiting around them. Because such a process obeys the Second Law, we can be sure that the entropy goes up along the way (as long as we keep careful track of all the radiation emitted during the collapse and so forth). But a star with several orbiting planets seems, at least informally, to be more orderly than a dispersed interstellar cloud of gas. The entropy went up, but so did the amount of order, apparently.

The culprit in this case is gravity. We’re going to have a lot to say about how gravity wreaks havoc with our everyday notions of entropy, but for now suffice it to say that the interaction of gravity with other forces seems to be able to create order while still making the entropy go up—temporarily, anyway. That is a deep clue to something important about how the universe works; sadly, we aren’t yet sure what that clue is telling us.

For the time being, let’s recognize that the association of entropy with disorder is imperfect. It’s not bad—it’s okay to explain entropy informally by invoking messy desktops. But what entropy really is telling us is how many microstates are macroscopically indistinguishable. Sometimes that has a simple relationship with orderliness, sometimes not.

THE PRINCIPLE OF INDIFFERENCE

There are a couple of other nagging worries about Boltzmann’s approach to the Second Law that we should clean up, or at least bring out into the open. We have this large set of microstates, which we divide up into macrostates, and declare that the entropy is the logarithm of the number of microstates per macrostate. Then we are asked to swallow another considerable bite: The proposition that each microstate within a macrostate is “equally likely.”

Following Boltzmann’s lead, we want to argue that the reason why entropy tends to increase is simply that there are more ways to be high-entropy than to be low-entropy, just by counting microstates. But that wouldn’t matter the least bit if a typical system spent a lot more time in the relatively few low-entropy microstates than it did in the many high-entropy ones. Imagine if the microscopic laws of physics had the property that almost all high-entropy microstates tended to naturally evolve toward a small number of low-entropy states. In that case, the fact that there were more high-entropy states wouldn’t make any difference; we would still expect to find the system in a low-entropy state if we waited long enough.

It’s not hard to imagine weird laws of physics that behave in exactly this way. Consider the billiard balls once again, moving around according to perfectly normal billiard-ball behavior, with one crucial exception: Every time a ball bumps into a particular one of the walls of the table, it sticks there, coming immediately to rest. (We’re not imagining that someone has put glue on the rail or any such thing that could ultimately be traced to reversible behavior at the microscopic level, but contemplating an entirely new law of fundamental physics.) Note that the space of states for these billiard balls is exactly what it would be under the usual rules: Once we specify the position and momentum of every ball, we can precisely predict the future evolution. It’s just that the future evolution, with overwhelming probability, ends up with all of the balls stuck on one wall of the table. That’s a very low-entropy configuration; there aren’t many microstates like that. In such a world, entropy would spontaneously decrease even for the closed system of the pool table.

It should be clear what’s going on in this concocted example: The new law of physics is not reversible. It’s much like checkerboard D from the last chapter, where diagonal lines of gray squares would run into a particular vertical column and simply come to an end. Knowing the positions and momenta of all the balls on this funky table is sufficient to predict the future, but it is not good enough to reconstruct the past. If a ball is stuck to the wall, we have no idea how long it has been there.
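A minimal sketch of that irreversibility, using a single “ball” moving along a line toward a sticky wall (the specific numbers are made up): two different starting states end up in exactly the same final state, so knowing the final state is not enough to reconstruct the past.

```python
def evolve(x, v, steps, dt=0.1, wall=1.0):
    """Ordinary motion, except the ball stops dead the moment it reaches the
    wall -- the concocted, irreversible law of physics."""
    for _ in range(steps):
        if x >= wall:
            x, v = wall, 0.0      # stuck: position at the wall, zero velocity
        else:
            x += v * dt
    return x, v

state_a = evolve(0.0, 1.0, steps=50)   # fast ball, hits the wall early
state_b = evolve(0.5, 0.2, steps=50)   # slow ball, hits the wall later

print(state_a, state_b)   # both print (1.0, 0.0): two pasts, one present
```

Because two distinct microstates have been funneled into one, the map from past to present is not one-to-one, which is exactly what reversibility forbids.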

The real laws of physics seem to be reversible at a fundamental level. This is, if we think about it a bit, enough to guarantee that high-entropy states don’t evolve preferentially into low-entropy states. Remember that reversibility is based on conservation of information: The information required to specify the state at one time is preserved as it evolves through time. That means that two different states now will always evolve into two different states some given amount of time in the future; if they evolved into the same state, we wouldn’t be able to reconstruct the past of that state. So it’s just impossible that high-entropy states all evolve preferentially into low-entropy states, because there aren’t enough low-entropy states to allow it to happen. This is a technical result called Liouville’s Theorem, after French mathematician Joseph Liouville.
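For readers who want the formal version (a standard statement of the theorem, not the book’s own notation): under evolution generated by a Hamiltonian H, the density \rho of a collection of microstates in the space of states is constant along trajectories,

\frac{d\rho}{dt} = \frac{\partial \rho}{\partial t} + \{\rho, H\} = 0 ,

so a set of states can be stretched and folded as it evolves, but never compressed into a smaller volume; in particular, a huge collection of high-entropy states cannot all be funneled into a small collection of low-entropy ones.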

That’s almost what we want, but not quite. And what we want (as so often in life) is not something we can really get. Let’s say that we have some system, and we know what macrostate it is in, and we would like to say something about what will happen next. It might be a glass of water with an ice cube floating in it. Liouville’s Theorem says that most microstates in that macrostate will have to increase in entropy or stay the same, just as the Second Law would imply—the ice cube is likely to melt. But the system is in some particular microstate, even if we don’t know which one. How can we be sure that the microstate isn’t one of the very tiny number that is going to dramatically decrease in entropy any minute now? How can we guarantee that the ice cube isn’t actually going to grow a bit, while the water around it heats up?

The answer is: We can’t. There is bound to be some particular microstate, very rare in the ice-cube-and-water macrostate we are considering, that actually evolves toward an even lower-entropy microstate. Statistical mechanics, the version of thermodynamics based on atoms, is essentially probabilistic—we don’t know for sure what is going to happen; we can only argue that certain outcomes are overwhelmingly likely. At least, that’s what we’d like to be able to argue. What we can honestly argue is that most medium-entropy states evolve into higher-entropy states rather than lower-entropy ones. But you’ll notice a subtle difference between “most microstates within this macrostate evolve to higher entropy” and “a microstate within this macrostate is likely to evolve to higher entropy.” The first statement is just about counting the relative number of microstates with different properties (“ice cube melts” vs. “ice cube grows”), but the second statement is a claim about the probability of something happening in the real world. Those are not quite the same thing. There are more Chinese people in the world than there are Lithuanians; but that doesn’t mean that you are more likely to run into a Chinese person than a Lithuanian, if you just happen to be walking down the streets of Vilnius.

Conventional statistical mechanics, in other words, makes a crucial assumption: Given that we know we are in a certain macrostate, and that we understand the complete set of microstates corresponding to that macrostate, we can assume that all such microstates are equally likely. We can’t avoid invoking some assumption along these lines; otherwise there’s no way of making the leap from counting states to assigning probabilities. The equal-likelihood assumption has a name that makes it sound like a dating strategy for people who prefer to play hard to get: the “Principle of Indifference.” It was championed in the context of probability theory, long before statistical mechanics even came on the scene, by our friend Pierre-Simon Laplace. He was a die-hard determinist, but understood as well as anyone that we usually don’t have access to all possible facts, and wanted to understand what we can say in situations of incomplete knowledge.
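Here is a tiny sketch of that leap from counting to probability, for an invented toy system (the numbers are arbitrary): N molecules that can each sit in the left or right half of a box, where a macrostate is just “how many are on the left.”

```python
from math import comb

N = 50                                  # toy number of molecules (arbitrary)
total_microstates = 2 ** N              # each molecule: left or right

for n_left in (0, 10, 25):
    W = comb(N, n_left)                 # microstates in this macrostate
    p = W / total_microstates           # Principle of Indifference:
                                        # equal weight for every microstate
    print(f"{n_left:2d} on the left: W = {W:.3e}, probability = {p:.3e}")

# Weighting every microstate equally, the evenly spread macrostates (around
# 25 on each side) soak up essentially all of the probability, while the
# lopsided, low-entropy ones are fantastically unlikely.
```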

And the Principle of Indifference is basically the best we can do. When all we know is that a system is in a certain macrostate, we assume that every microstate within that macrostate is equally likely. (With one profound exception—the Past Hypothesis—to be discussed at the end of this chapter.) It would be nice if we could prove that this assumption should be true, and people have tried to do that. For example, if a system were to evolve through every possible microstate (or at least, through a set of microstates that came very close to every possible microstate) in a reasonable period of time, and we didn’t know where it was in that evolution, there would be some justification for treating all microstates as equally likely. A system that wanders all over the space of states and covers every possibility (or close to it) is known as “ergodic.” The problem is, even if a system is ergodic (and not all systems are), it would take forever to actually evolve close to every possible state. Or, if not forever, at least a horrifically long time. There are just too many states for a macroscopic system to sample them all in a time less than the age of the universe.
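A quick back-of-the-envelope calculation (with invented but generous numbers) shows why even a modest system cannot come close to sampling all of its states:

```python
n_molecules = 100                      # a very small "system"
n_states = 2 ** n_molecules            # each molecule has just two states
visits_per_second = 1e12               # one new microstate per picosecond

seconds_needed = n_states / visits_per_second
age_of_universe = 4.4e17               # seconds, roughly 14 billion years

print(f"states to visit:      {n_states:.1e}")
print(f"time to visit them:   {seconds_needed:.1e} seconds")
print(f"ages of the universe: {seconds_needed / age_of_universe:.0f}")
```

And one hundred two-state molecules is an absurdly tiny system; for a realistically macroscopic number of molecules the required time is larger beyond any meaningful comparison.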
