Nevertheless, even without aspiring to answer all possible questions about the meaning of “life,” there is one concept that undoubtedly plays an important role: free energy. Schrödinger glossed over this idea in the first edition of What Is Life?, but in subsequent printings he added a note expressing his regret for not giving it greater prominence. The idea of free energy helps to tie together entropy, the Second Law, Maxwell’s Demon, and the ability of living organisms to keep going longer than nonliving objects.
FREE ENERGY, NOT FREE BEER
The field of biological physics has witnessed a dramatic rise in popularity in recent years. That’s undoubtedly a good thing—biology is important, and physics is important, and there are a great number of interesting problems at the interface of the two fields. But it’s also no surprise that the field lay relatively fallow for as long as it did. If you pick up an introductory physics textbook and compare it with a biological physics text, you’ll notice a pronounced shift in vocabulary.
Conventional introductory physics books are filled with words like force and momentum and conservation, while biophysics books feature words like entropy and information and dissipation.
This difference in terminology reflects an underlying difference in philosophy. Ever since Galileo first encouraged us to ignore air resistance when thinking about how objects fall in a gravitational field, physics has traditionally gone to great lengths to minimize friction, dissipation, noise, and anything else that would detract from the unimpeded manifestation of simple microscopic dynamical laws. In biological physics, we can’t do that; once you start ignoring friction, you ignore life itself. Indeed, that’s an alternative definition worth contemplating: Life is organized friction.
But, you are thinking, that doesn’t sound right at all. Life is all about maintaining structure and organization, whereas friction creates entropy and disorder. In fact, both perspectives capture some of the underlying truth. What life does is to create entropy somewhere, in order to maintain structure and organization somewhere else. That’s the lesson of Maxwell’s Demon.
Let’s examine what that might mean. Back when we first talked about the Second Law in Chapter Two, we introduced the distinction between “useful” and “useless” energy: Useful energy can be converted into some kind of work, while useless energy is useless. One of the contributions of Josiah Willard Gibbs was to formalize these concepts, by introducing the concept of “free energy.” Schrödinger didn’t use that term in his lectures because he worried that the connotations were confusing: The energy isn’t really “free” in the sense that you can get it for nothing; it’s “free” in the sense that it’s available to be used for some purpose.
(Think “free speech,” not “free beer,” as free-software guru Richard Stallman likes to say.) Gibbs realized that he could use the concept of entropy to cleanly divide the total amount of energy into the useful part, which he called “free,” and the useless part:
total energy = free energy + useless (high-entropy) energy.
When a physical process creates entropy in a system with a fixed total amount of energy, it uses up free energy; once all the free energy is gone, we’ve reached equilibrium.
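Written in the standard symbols of thermodynamics (a sketch of one common form of this bookkeeping, the Helmholtz relation, rather than anything stated explicitly in the text above):

```latex
% E = total (internal) energy, F = free energy,
% T = temperature, S = entropy.
% The product T S is the useless, high-entropy share of the energy.
E = F + TS \qquad \Longleftrightarrow \qquad F = E - TS
% At fixed E and T, any process that creates entropy (raises S)
% eats into F; when F can drop no further, the system is in equilibrium.
```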
That’s one way of thinking about what living organisms do: They maintain order in their local environment (including their own bodies) by taking advantage of free energy, degrading it into useless energy. If we put a goldfish in an otherwise empty container of water, it can maintain its structure (far from equilibrium with its surroundings) for a lot longer than an ice cube can; but eventually it will die from starvation. But if we feed the goldfish, it can last for a much longer time even than that. From a physics point of view, food is simply a supply of free energy, which a living organism can take advantage of to power its metabolism.
From this perspective, Maxwell’s Demon (along with his box of gas) serves as an illuminating paradigm for how life works. Consider a slightly more elaborate version of the Demon story. Let’s take the divided box of gas and embed it in an “environment,” which we model by an arbitrarily large collection of stuff at a constant temperature—what physicists call a “heat bath.” (The point is that the environment is so large that its own temperature won’t be affected by interactions with the smaller system in which we are interested, in this case the box of gas.) Even though the molecules of gas stay inside the walls of the box, thermal energy can pass in and out; therefore, even if the Demon were to segregate the gas effectively into one cool half and one hot half, the temperature would immediately begin to even out through interactions with the surrounding environment.
We imagine that the Demon would really like to keep its particular box far from equilibrium—it wants to do its best to keep the left side of the box at a high temperature and the right side at a low temperature. (Note that we have turned the Demon into a protagonist, rather than a villain.) So it has to do its traditional sorting of molecules according to their velocities, but now it has to keep doing that in perpetuity, or otherwise each side will equilibrate with its environment. By our previous discussion, the Demon can’t do its sorting without affecting the outside world; the process of erasing records will inevitably generate entropy. What the Demon requires, therefore, is a continual supply of free energy. It takes in the free energy (“food”), then takes advantage of that free energy to erase its records, generating entropy in the process and degrading the energy into uselessness; the useless energy is then discarded as heat (or whatever). With its newly erased notepad, the Demon is ready to keep its box of gas happily displaced from equilibrium, at least until it fills the notepad once more, and the cycle repeats itself.
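One way to put a rough number on the Demon’s “food bill” is Landauer’s principle, which says that erasing a single bit of recorded information in an environment at temperature T costs at least kT ln 2 of free energy, dissipated as heat. Here is a minimal Python sketch of that bound; the size of the Demon’s notepad is an invented figure, used purely for illustration:

```python
import math

BOLTZMANN_K = 1.380649e-23  # Boltzmann's constant, in joules per kelvin

def landauer_cost(bits_erased: float, temperature_kelvin: float) -> float:
    """Minimum free energy (in joules) that must be degraded into heat when
    erasing `bits_erased` bits of records at the given temperature."""
    return bits_erased * BOLTZMANN_K * temperature_kelvin * math.log(2)

# A hypothetical Demon whose notepad holds a billion bits, sitting in a
# room-temperature (300 K) heat bath; these numbers are purely illustrative.
bits_per_cycle = 1e9
print(f"Free energy used per notepad erasure: "
      f"{landauer_cost(bits_per_cycle, 300.0):.2e} J")
# Prints roughly 2.9e-12 J: a minuscule amount, but never zero, which is why
# the Demon needs a continual supply of free energy to keep sorting forever.
```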
Figure 51: Maxwell’s Demon as a paradigm for life. The Demon maintains order—a separation of temperatures—in the box, against the influence of the environment, by processing information through the transformation of free energy into high-entropy heat.
This charming vignette obviously fails to encapsulate everything we mean by the idea of “life,” but it succeeds in capturing an essential part of the bigger picture. Life strives to maintain order in the face of the demands of the Second Law, whether it’s the actual body of the organism, or its mental state, or the works of Ozymandias. And it does so in a specific way: by degrading free energy in the outside world in the cause of keeping itself far from thermal equilibrium. And that’s an operation, as we have seen, that is tightly connected to the idea of information processing. The Demon carries out its duty by converting free energy into information about the molecules in its box, which it then uses to keep the temperature in the box from evening out. At some very basic level, the purpose of life boils down to survival—the organism wants to preserve the smooth operation of its own complex structure.
Free energy and information are the keys to making it happen.
From the point of view of natural selection, there are many reasons why a complex, persistent structure might be adaptively favored: An eye, for example, is a complex structure that clearly contributes to the fitness of an organism. But increasingly complex structures require that we turn increasing amounts of free energy into heat, just to keep them intact and functioning. This picture of the interplay of energy and information therefore makes a prediction: The more complex an organism becomes, the more inefficient it will be at using energy for “work” purposes—simple mechanical operations like running and jumping, as opposed to the “upkeep” purposes of keeping the machinery in good working condition. And indeed, that’s true; in real biological organisms, the more complex ones are correspondingly less efficient in their use of energy.
COMPLEXITY AND TIME
There are any number of fascinating topics at the interface of entropy, information, life, and the arrow of time that we don’t have a chance to discuss here: aging, evolution, mortality, thinking, consciousness, social structures, and countless more. Confronting all of them would make this a very different book, and our primary goals are elsewhere. But before returning to the relatively solid ground of conventional statistical mechanics, we can close this chapter with one more speculative thought, the kind that will hopefully be illuminated by new research in the near future.
As the universe evolves, entropy increases. That is a very simple relationship: At early times, near the Big Bang, the entropy was very low, and it has grown ever since and will continue to grow into the future. But apart from entropy, we can also characterize (at least roughly) the state of the universe at any one moment in time in terms of its complexity, or by the converse of complexity, its simplicity. And the evolution of complexity with time isn’t nearly that straightforward.
There are a number of different ways we could imagine quantifying the complexity of a physical situation, but there is one measure that has become widely used, known as the Kolmogorov complexity or algorithmic complexity.
This idea formalizes our intuition that a simple situation is easy to describe, while a complex situation is hard to describe. The difficulty we have in describing a situation can be quantified by specifying the shortest possible computer program (in some given programming language) that would generate a description of that situation. The Kolmogorov complexity is just the length of that shortest possible computer program.
Consider two strings of numbers, each a million characters long: one consisting of nothing but 8’s in every digit, and the other some particular sequence of digits with no discernible pattern within it.
The first of these is simple—it has a low Kolmogorov complexity. That’s because it can be generated by a program that just says, “Print the number 8 a million times.” The second string, however, is complex. Any program that prints it out has to be at least one million characters long, because the only way to describe this string is to literally specify every single digit. This definition becomes helpful when we consider numbers like pi or the square root of two—they look superficially complex, but there is actually a short program in either case that can calculate them to any desired accuracy, so their Kolmogorov complexity is quite low.
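Kolmogorov complexity itself cannot be computed by any program in general, but the length of a compressed file gives a rough, computable stand-in for the same intuition. Here is a small Python sketch along those lines, using the standard-library zlib compressor; the particular strings are just stand-ins for the two described above:

```python
import random
import zlib

# Two strings, each a million characters long: one is nothing but 8's,
# the other has no discernible pattern in its digits.
simple_string = "8" * 1_000_000
random.seed(0)
patternless_string = "".join(random.choice("0123456789") for _ in range(1_000_000))

# Compressed length is a crude, computable proxy for Kolmogorov complexity:
# a string compresses well exactly when a short description of it exists.
for label, s in [("all 8's", simple_string), ("patternless", patternless_string)]:
    compressed = zlib.compress(s.encode("ascii"), level=9)
    print(f"{label:12s} raw: {len(s):>9,}   compressed: {len(compressed):>9,}")

# The all-8's string compresses down to a tiny fraction of its length, the
# digital analogue of "print the number 8 a million times."  The patternless
# string can be packed only slightly (it uses just ten different symbols),
# so it stays hundreds of thousands of bytes long.
```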
The complexity of the early universe is low, because it’s very easy to describe. It was a hot, dense state of particles, very smooth over large scales, expanding at a certain rate, with some (fairly simple to specify) set of tiny perturbations in density from place to place. From a coarse-grained perspective, that’s the entire description of the early universe; there’s nothing else to say. Far in the future, the complexity of the universe will also be very low: It will just be empty space, with an increasingly dilute gruel of individual particles. But in between—like right now—things look extremely complicated. Even after coarse-graining, there is no simple way of expressing the hierarchical structures described by gas, dust, stars, galaxies, and clusters, much less all of the interesting things going on at much smaller scales, such as our ecosystem here on Earth.
So while the entropy of the universe increases straightforwardly from low to high as time goes by, the complexity is more interesting: It goes from low, to relatively high, and then back down to low again. And the question is: Why? Or perhaps: What are the ramifications of this form of evolution? There are a whole host of questions we can think to ask. Under what general circumstances does complexity tend to rise and then fall again? Does such behavior inevitably accompany the evolution of entropy from low to high, or are other features of the underlying dynamics necessary? Is the emergence of complexity (or “life”) a generic feature of evolution in the presence of entropy gradients? What is the significance of the fact that our early universe was simple as well as low-entropy? How long can life survive as the universe relaxes into a simple, high-entropy future?