120. Quoted in Maglich (1973). The original papers were Lee and Yang (1956) and Wu et al. (1957). As Wu had suspected, other physicists were able to reproduce the result very rapidly; in fact, another group at Columbia performed a quick confirmation experiment, the results of which were published back-to-back with the Wu et al. paper (Garwin, Lederman, and Weinrich, 1957).
121. Christenson et al. (1964). Within the Standard Model of particle physics, there is an established method to account for CP violation, developed by Makoto Kobayashi and Toshihide Maskawa (1973), who generalized an idea due to Nicola Cabibbo. Kobayashi and Maskawa were awarded the Nobel Prize in 2008.
122. We’re making a couple of assumptions here: namely, that the laws are time-translation invariant (not changing from moment to moment), and that they are deterministic (the future can be predicted with absolute confidence, rather than simply with some probability). If either of these fails to be true, the definition of whether a particular set of laws is time-reversal invariant becomes a bit more subtle.
8. ENTROPY AND DISORDER
123. Almost the same example is discussed by Wheeler (1994), who attributes it to Paul Ehrenfest. In what Wheeler calls “Ehrenfest’s Urn,” exactly one particle switches sides at every step, rather than every particle having a small chance of switching sides.
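A minimal sketch of this process in Python (the function name and parameters are illustrative, not from the text): one particle, chosen uniformly at random, switches sides at each step, and the number on the right drifts toward half the total.

```python
import random

def ehrenfest_urn(n_particles=2000, n_steps=20000, seed=0):
    """Ehrenfest's Urn: at each step exactly one particle, chosen
    uniformly at random, switches sides of the box."""
    rng = random.Random(seed)
    n_right = 0  # start with every particle on the left
    for _ in range(n_steps):
        # The chosen particle is on the right with probability
        # n_right / n_particles, in which case it moves to the left.
        if rng.random() < n_right / n_particles:
            n_right -= 1
        else:
            n_right += 1
    return n_right

print(ehrenfest_urn())  # hovers near 1,000, half of the 2,000 particles
```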
124. When we have 2 molecules on the right, the first one could be any of the 2,000, and the second could be any of the remaining 1,999. So you might guess there are 1,999 × 2,000 = 3,998,000 different ways this could happen. But that’s overcounting a bit, because the two molecules on the right certainly don’t come in any particular order. (Saying “molecules 723 and 1,198 are on the right” is exactly the same statement as “molecules 1,198 and 723 are on the right.”) So we divide by two to get the right answer: There are 1,999,000 different ways we can have 2 molecules on the right and 1,998 on the left. When we have 3 molecules on the right, we take 1,998 × 1,999 × 2,000 and divide by 3 × 2 different orderings. You can see the pattern; for 4 particles, we would divide 1,997 × 1,998 × 1,999 × 2,000 by 4 × 3 × 2, and so on. These numbers have a name—“binomial coefficients”—and they represent the number of ways we can choose a certain set of objects out of a larger set.
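These counts are quick to check in Python (a sketch, not from the text), since the standard library computes binomial coefficients directly:

```python
from math import comb

# Number of ways to have k of the 2,000 molecules on the right side.
print(comb(2000, 2))  # 1,999,000, matching the count in the note
print(comb(2000, 3))  # 1,331,334,000

# The same number computed the way the note describes it:
print(1998 * 1999 * 2000 // (3 * 2))  # 1,331,334,000
```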
125. We are assuming the logarithm is “base 10,” although any other base can be used. The “logarithm base 2” of 8 = 2^3 is 3; the logarithm base 2 of 2,048 = 2^11 is 11. See Appendix for fascinating details.
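In Python (a minimal illustration, not from the text), these are one-liners:

```python
from math import log2, log10

print(log2(8))      # 3.0, since 8 = 2^3
print(log2(2048))   # 11.0, since 2,048 = 2^11
print(log10(1000))  # 3.0; base 10 is the convention used in the text
```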
126. The numerical value of k is about 3.2 × 10^-16 ergs per Kelvin; an erg is a measure of energy, while Kelvin of course measures temperature. (That’s not the value you will find in most references; this is because we are using base-10 logarithms, while the formula is more often written using natural logarithms.) When we say “temperature measures the average energy of moving molecules in a substance,” what we mean is “the average energy per degree of freedom is one-half times the temperature times Boltzmann’s constant.”
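As a worked example (a sketch; the temperature is an arbitrary illustrative value, and we use the conventional natural-log value of Boltzmann’s constant, which is the one that belongs in this energy relation):

```python
# Equipartition: average energy per degree of freedom E = (1/2) * k * T.
k = 1.38e-16  # Boltzmann's constant in ergs per Kelvin (natural-log convention)
T = 300.0     # an illustrative room temperature, in Kelvin
print(0.5 * k * T)  # ~ 2.1e-14 ergs per degree of freedom
```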
127. The actual history of physics is so much messier than the beauty of the underlying concepts. Boltzmann came up with the idea of “S = k log W,” but those are not the symbols he would have used. His equation was put into that form by Max Planck, who suggested that it be engraved on Boltzmann’s tomb; it was Planck who first introduced what we now call “Boltzmann’s constant.” To make things worse, the equation on the tomb is not what is usually called “Boltzmann’s equation”—that’s a different equation discovered by Boltzmann, governing the evolution of a distribution of a large number of particles through the space of states.
128. One requirement for making sense of this definition is that we actually know how to count the different kinds of microstates, so we can quantify how many of them belong to various macrostates. That sounds easy enough when the microstates form a discrete set (like distributions of particles in one half of a box or the other half) but becomes trickier when the space of states is continuous (like real molecules with specific positions and momenta, or almost any other realistic situation). Fortunately, within the two major frameworks for dynamics—classical mechanics and quantum mechanics—there is a perfectly well-defined “measure” on the space of states, which allows us to calculate the quantity W, at least in principle. In some particular examples, our understanding of the space of states might get a little murky, in which case we need to be careful.
129. Feynman (1964), 119-20.
130. I know what you’re thinking. “I don’t know about you, but when I dry myself off, most of the water goes onto the towel; it’s not fifty-fifty.” That’s true, but only because the fiber structure of a nice fluffy towel provides many more places for the water to be than your smooth skin does. That’s also why your hair doesn’t dry as efficiently, and why you can’t dry yourself very well with pieces of paper.
131. At least in certain circumstances, but not always. Imagine we had a box of gas, where every molecule on the left side was “yellow” and every molecule on the right was “green,” although they were otherwise identical. The entropy of that arrangement would be pretty low and would tend to go up dramatically if we allowed the two colors to mix. But we couldn’t get any useful work out of it.
132. The ubiquity of friction and noise in the real world is, of course, due to the Second Law. When two billiard balls smack into each other, there are only a very small number of ways that all the molecules in each ball could respond precisely so as to bounce off each other without disturbing the outside world in any way; there are a much larger number of ways that those molecules can interact gently with the air around them to create the noise of the two balls colliding. All of the guises of dissipation in our everyday lives—friction, air resistance, noise, and so on—are manifestations of the tendency of entropy to increase.
133. Thought of yet another way: The next time you are tempted to play the Powerball lottery, where you pick five numbers between 1 and 59 and hope that they come up in a random drawing, pick the numbers “1, 2, 3, 4, 5.” That sequence is precisely as likely as any other “random-looking” sequence. (Of course, a nationwide outcry would ensue if you won, as people would suspect that someone had rigged the drawing. So you’d probably never collect, even if you got lucky.)
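Under the note’s simplified description of the game (five distinct numbers from 1 to 59, order irrelevant), the odds are easy to compute; a sketch:

```python
from math import comb

# Every ticket, including "1, 2, 3, 4, 5", is one of comb(59, 5)
# equally likely combinations.
n_tickets = comb(59, 5)
print(n_tickets)      # 5,006,386 possible tickets
print(1 / n_tickets)  # ~ 2.0e-07 chance for any particular pick
```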
134. Strictly speaking, since there are an infinite number of possible positions and an infinite number of possible momenta for each particle, the number of microstates per macrostate is also infinite. But the possible positions and momenta for a particle on the left side of the box can be put into one-to-one correspondence with the possible positions and momenta on the right side; even though both are infinite, they’re “the same infinity.” So it’s perfectly legitimate to say that there are an equal number of possible states per particle on each side of the box. What we’re really doing is counting “the volume of the space of states” corresponding to a particular macrostate.
135. To expand on that a little bit, at the risk of getting hopelessly abstract: As an alternative to averaging within a small region of space, we could imagine averaging over a small region in momentum space. That is, we could talk about the average position of particles with a certain value of momentum, rather than vice versa. But that’s kind of crazy; that information simply isn’t accessible via macroscopic observation. That’s because, in the real world, particles tend to interact (bump into one another) when they are nearby in space, but nothing special happens when two distant particles have the same momentum. Two particles that are close to each other in position can interact, no matter what their relative velocities are, but the converse is not true. (Two particles that are separated by a few light years aren’t going to interact noticeably, no matter what their momenta are.) So the laws of physics pick out “measuring average properties within a small region of space” as a sensible thing to do.
136. A related argument has been given by mathematician Norbert Wiener in Cybernetics (1961), 34.
137. There is a loophole. Instead of starting with a system that had delicately tuned initial conditions for which the entropy would decrease, and then letting it interact with the outside world, we could just ask the following question: “Given that this system will go about interacting with the outside world, what state do I need to put it in right now so that its entropy will decrease in the future?” That kind of future boundary condition is not inconceivable, but it’s a little different from what we have in mind here. In that case, what we have is not some autonomous system with a naturally reversed arrow of time, but a conspiracy among every particle in the universe to permit some subsystem to decrease in entropy. That subsystem would not look like the time-reverse of an ordinary object in the universe; it would look like the rest of the world was conspiring to nudge it into a low-entropy state.
138. Note the caveat “at room temperature.” At a sufficiently high temperature, the velocity of the individual molecules is so high that the water doesn’t stick to the oil, and once again a fully mixed configuration has the highest entropy. (At that temperature the mixture will be vapor.) In the messy real world, statistical mechanics is complicated and should be left to professionals.
139. Here is the formula: For each possible microstate x, let p_x be the probability that the system is in that microstate. The entropy is then the sum over all possible microstates x of the quantity −k p_x log p_x, where k is Boltzmann’s constant. In symbols: S = −k Σ_x p_x log p_x.
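A direct transcription of the formula (a sketch in Python; setting k = 1 and using base-10 logarithms, as the text does, are illustrative choices):

```python
from math import log10

def entropy(probs, k=1.0):
    """S = -k * sum over microstates x of p_x * log(p_x)."""
    return -k * sum(p * log10(p) for p in probs if p > 0)

# When all W microstates are equally likely (p_x = 1/W), this
# reduces to Boltzmann's S = k log W:
W = 1000
print(entropy([1 / W] * W))  # ~ 3.0, i.e., log10(1000)
print(entropy([1.0]))        # 0.0: a single certain microstate, zero entropy
```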
140. Boltzmann actually calculated a quantity H, which is essentially the difference between the maximum entropy and the actual entropy, thus the name of the theorem. But that name was attached to the theorem only later on, and in fact Boltzmann himself didn’t even use the letter H; he called it E, which is even more confusing. Boltzmann’s original paper on the H-Theorem appeared in 1872; an updated version, taking into account some of the criticisms by Loschmidt and others, appeared in 1877. We aren’t coming close to doing justice to the fascinating historical development of these ideas; for various different points of view, see von Baeyer (1998), Lindley (2001), and Cercignani (1998); at a more technical level, see Uffink (2004) and Brush (2003). Any Yale graduates, in particular, will lament the short shrift given to the contributions of Gibbs; see Rukeyser (1942) to redress the balance.
141. Note that Loschmidt is not saying that there are equal numbers of increasing-entropy and decreasing-entropy evolutions that start with the same initial conditions. When we consider time reversal, we switch the initial conditions with the final conditions; all Loschmidt is pointing out is that there are equal numbers of increasing-entropy and decreasing-entropy evolutions overall, when we consider every possible initial condition. If we confine our attention to the set of low-entropy initial conditions, we can successfully argue that entropy will usually increase; but note that we have sneaked in time asymmetry by starting with low-entropy initial conditions rather than final ones.
142. Albert (2000); see also (among many examples) Price (2004). Although I have presented the need for a Past Hypothesis as (hopefully) perfectly obvious, its status is not uncontroversial. For a dash of skepticism, see Callender (2004) or Earman (2006).
143. Readers who have studied some statistical mechanics may wonder why they don’t recall actually doing this. The answer is simply that it doesn’t matter, as long as we are trying to make predictions about the future. If we use statistical mechanics to predict the future behavior of a system, the predictions we get based on the Principle of Indifference plus the Past Hypothesis are indistinguishable from those we would get from the Principle of Indifference alone. As long as there is no assumption of any special future boundary condition, all is well.
9. INFORMATION AND LIFE
144. Quoted in Tribus and McIrvine (1971).
145. Proust (2004), 47.
146. We are, however, learning more and more all the time. See Schacter, Addis, and Buckner (2007) for a recent review of advances in neuroscience that have revealed that the way actual brains reconstruct memories is surprisingly similar to the way they go about imagining the future.
147. Albert (2000).
148. Rowling (2005).
149. Callender (2004). In Callender’s version, it’s not that you die; it’s that the universe ends, but I didn’t want to get confused with Big Crunch scenarios. But really, it would be nice to see more thought experiments in which the future boundary condition was “you fall in love” or “you win the lottery.”