The Perfect Theory
Author: Pedro G. Ferreira
To the uncharitable observer it sounds like all renormalization does is throw away the infinities and arbitrarily replace them with finite values. Paul Dirac declared himself
“very dissatisfied with the situation.” As he argued, “This is just not sensible mathematics. Sensible mathematics involves neglecting a quantity when it is small—not neglecting it just because it is infinitely great and you do not want it!” It seemed like a messy piece of slightly magical thinking, but there was no denying that it worked spectacularly well.
QED was one step on the long path to unification, but from the 1930s to the 1960s it had become clear that there were two other forces, apart from the electromagnetic and gravitational forces, that also needed to be included in the ultimate framework. One was the weak force, proposed in the 1930s by the Italian physicist Enrico Fermi to explain a particular type of radioactivity known as beta decay. In beta decay a neutron transforms itself into a proton and spits out an electron in the process. Such a process is impossible to understand using electromagnetism, so Fermi conjured up a new force that would allow that transformation to happen. This new force acts only at very short distances, within the atomic nucleus, and is much weaker than electromagnetism; hence its name. The other force, the strong force, is what glues protons and neutrons together to form nuclei. It also binds the more fundamental particles, called quarks, that make up protons, neutrons, and a plethora of other particles. While it also has a very short range, it is much stronger than the weak force (hence the creative name). The challenge, just as James Clerk Maxwell had unified the electric and magnetic forces into a single electromagnetic force in the mid-nineteenth century, was to come up with a common way of dealing with all four fundamental forces: gravitational, electromagnetic, weak, and strong.
Throughout the 1950s and 1960s both the strong and weak forces were systematically unpeeled and studied in detail. As they became better understood, a mathematical similarity began to emerge between them and the electromagnetic force, suggesting there might be one unified force that manifests as one of the three different forces depending on the situation. By the late 1960s, Steven Weinberg of MIT, Sheldon Glashow of Harvard, and Abdus Salam of Imperial College in London had proposed a new way of packing at least two of the forces, the electromagnetic and weak forces, together into one electroweak force. The strong force couldn't yet be brought into the mix but looked so similar to the other forces that there was a belief that it should be possible to come up with a “grand unified theory” of the electromagnetic, weak, and strong forces. In the 1970s, the electroweak theory and the theory of the strong force were shown to be renormalizable, just like QED. All the pesky infinities that arose in their calculations could be replaced by known values, making the theories eminently predictable. The combination of the electroweak and strong theories became known as the standard model and made accurate predictions that were confirmed in laboratories like the gigantic particle accelerator at CERN in Geneva, Switzerland. This almost completely unified, yet powerful and predictive quantum theory of the three forces (electromagnetic, weak, and strong) was universally accepted.
By all, that is, except Paul Dirac. Although he was impressed with the younger generation that had put together the standard model and marveled at some of the mathematics that had been used, he repeatedly railed against the infinities and what he considered to be the nefarious trick of renormalization. In the few public lectures he gave in which he deigned to mention the standard model, he chided his colleagues for not trying harder to find a better theory with no infinities. Toward the end of his career at Cambridge, Dirac became more and more isolated. He stubbornly rejected the developments in quantum physics. Despite his craving for privacy, he felt ignored by the rest of the physics world, which had embraced QED and saw him as a figure of the past. So he withdrew, keeping to his study at St. John's College and avoiding the department where he held his professorship, paying no attention to the great discoveries in general relativity that were coming from Dennis Sciama, Stephen Hawking, Martin Rees, and their collaborators. As one of their contemporaries at Cambridge recalls,
“Dirac was this ghost we rarely saw and never spoke to.” He retired from his position as the Lucasian Professor in 1969 and moved to Florida to take up a professorship there. In his final years he wouldn't have been surprised to see general relativity refuse to bow to the techniques of renormalization.
Bryce DeWitt had no idea what a struggle his pursuit of a quantum theory of gravity would be. While working with Julian Schwinger at Harvard, he had witnessed the birth of QED firsthand. When he decided to tackle gravity, DeWitt chose to treat it just like electromagnetism and tried to reproduce the successes of QED. There were similarities between electromagnetism and gravity: both were long-range forces that could extend over large distances. In QED, the transmission of electromagnetic force could be described as being carried by a massless particle, the photon. You can view electromagnetism as a sea of photons zipping back and forth between charged particles, like electrons and protons, pushing them apart or pulling them together, depending on their relative charges. DeWitt approached a quantum theory of gravity in an analogous way, replacing the photon with another massless particle, the graviton. These gravitons would bounce back and forth between massive particles, pulling them together to create what we call the gravitational force. This approach abandoned all the beautiful ideas of geometry. While gravity was still described in terms of Einstein's equations, DeWitt chose to think of it as just another force, bringing to bear all the techniques of QED.
For the next twenty years, DeWitt tried to figure out how to quantize the graviton, but he found it a gargantuan challenge. Once again Einstein's field equations were simply too unwieldy and entangled to be dealt with easily. He watched as the theory of the other forces developed and saw the similarities in the difficulties. But while the problems with unifying the strong, weak, and electromagnetic forces seemed to fall away, general relativity was obstinate, unwilling to be shoehorned into the same set of quantum rules that seemed to apply to the other three forces. In this battle, DeWitt was not alone: Matvei Bronstein, Paul Dirac, Richard Feynman, Wolfgang Pauli, and Werner Heisenberg had all had a go at quantizing the graviton at some point before him. Steven Weinberg and Abdus Salam, the architects of the successful model of the electroweak force, attempted to apply the techniques that they had developed for the standard model, but they too found that gravity was too difficult.
As DeWitt labored on, grappling with the graviton and trying to quantize it, isolated pockets of interest in his work developed. John Wheeler cheered him on and set his students working on it, as did the Pakistani physicist Abdus Salam, Dennis Sciama in Oxford, and Stanley Deser, based in Boston. But in general, reactions to work on quantum gravity were mixed and often cool. Michael Duff, a former student of Salam, recalls presenting his results on quantum gravity at a conference in Cargèse, Corsica, and being
“greeted with hoots of derision.” A student of Dennis Sciama named Philip Candelas, who was working on quantum properties of fields living on spacetimes with different geometries, heard that members of the faculty of physics at Oxford were muttering that he “wasn't doing physics.” Quantum gravity was still too unformed compared to the work on quantizing the other forces. To many, it was perceived as a waste of time.
In February 1974, the United Kingdom was at a standstill. The price of oil had shot up, a succession of ineffectual governments had been trying to stem the rise of inflation, and the country was hamstrung by industrial strife. Every now and then the working week was shortened to three days to save energy, and rolling power cuts meant that evening meals were often eaten by candlelight. It was during these dark days that a meeting was convened to take stock of the progress in quantizing gravity, almost twenty-five years after DeWitt first set to work. Despite the somber economic climate, euphoria reigned at the start of the Oxford Symposium on Quantum Gravity. The predictions of the standard model of particle physics developed by Glashow, Weinberg, and Salam were being spectacularly confirmed at the massive particle accelerator at CERN. Surely quantum gravity would have to follow close behind.
Yet, as the speakers stood up and presented hints of solutions and ideas, again and again, the same problem seemed to scupper the most promising and popular route for quantizing gravity. DeWitt's approach of forgetting about geometry and thinking of gravity simply as a force was not working. The organizers, paraphrasing Wolfgang Pauli, fretted, “What God hath torn asunder, let no man join.” The problem was that general relativity was not like QED and the standard model. With QED and the standard model it was always possible to renormalize all the masses and charges of the fundamental particles and get rid of the infinities that cropped up to get sensible results. But if the same tricks and techniques were applied to general relativity, the whole thing fell apart. Infinities kept on cropping up that refused to be renormalized. Tuck them away in one part of the theory and they would stick out in another part, and renormalizing the whole theory in one fell swoop proved impossible. Gravity, as described by general relativity, seemed far too entangled and different to be repackaged and fixed like the other forces. At the symposium, Mike Duff said ominously in the conclusion to his talk,
“It appears that the odds are stacked against us, and only a miracle could save us from non-renormalizability.”
Quantum gravity had hit a dead end, and general relativity refused to join the other forces in one, unified picture. As a Nature article on the symposium glumly noted, “The presentation of technical results by M. Duff only served to confirm the extraordinary lengths which are necessary to make even minor progress.” This failure was all the more galling given that there had been such tremendous progress in relativistic astrophysics, black holes, and cosmology in the previous years, not to mention the spectacular success of the standard model of particle physics.
The Oxford symposium seemed like an admission of defeat, except for one surprising talk by the Cambridge physicist Stephen Hawking on black holes and quantum physics. In his talk, Hawking showed that there was a sweet spot where quantum physics and general relativity could be brought together. Furthermore, he claimed he could prove that black holes weren't in fact black but shone with an incredibly dim light. It was an outlandish claim that would transform quantum gravity for the next four decades.
By the early 1970s, Stephen Hawking was already a fixture on the Cambridge scene, working at the Department of Applied Mathematics and Theoretical Physics, or DAMTP for short. At only thirty, he had already made a name for himself in general relativity. Coming out of Dennis Sciama's stable of students, Hawking had worked with Roger Penrose to show that singularities had to exist in the very beginning of time. In the early 1970s he had turned his attention from cosmology to black holes and, with Brandon Carter and Werner Israel, had proved definitively that black holes have no hair: they lose any memory of how they were formed, and black holes with the same mass, spin, and charge all look exactly alike. He had also obtained an intriguing result about the sizes of black holes. If you took two black holes and merged them together, he found, the area of the Schwarzschild surface, or event horizon, of the final black hole had to be greater than or equal to the sum of the areas of the original black holes. In practice, this meant that if you summed up the total area of black holes before and after any physical event, it always increased.
Hawking did all this work as Lou Gehrig's disease claimed his body. Throughout the late sixties, he walked through the corridors at DAMTP with a cane, leaning against the wall for support, but he slowly and steadily became unable to move unaided. As his ability to write and draw, essential tools in the arsenal of a theoretical physicist, dwindled away, he developed a formidable capacity to think things through at length, allowing him to tackle deep issues in general relativity and quantum theory.
One might say Hawking's great discovery was driven by his annoyance at a result put forward by a young Israeli PhD student of John Wheeler named Jacob Bekenstein. Bekenstein wanted to reconcile black holes with the second law of thermodynamics. To do so, he used one of Hawking's results to come up with a completely ludicrous claim about black holes. To Hawking, the claim was entirely too speculative and simply wrong.
To understand Bekenstein's claim, we need to take a quick detour into thermodynamics, the branch of physics that studies heat, work, and energy. The second law of thermodynamics (there are four in total) states that the entropy, or level of disorder, of a system always increases. Consider the classic example of a simple thermodynamic system: a box containing gas molecules. If the molecules are all at rest, neatly packed away in one corner, the system has low entropy: there is very little disorder. There is also no way the stationary particles will collide with the sides of the box and heat it up, so the system has a low temperature. Now imagine that the molecules begin to move. They roam freely throughout the box and spread out randomly, shifting the system to a high-entropy state. That is, the distribution of molecules inside the box becomes more disordered. As they move around, they collide with the walls of the box and transfer some of their energy to it, heating it up and increasing its temperature. The faster the molecules move, the quicker they randomize, and the quicker the entropy goes up until it reaches its maximum. Indeed, the quicker the molecules move around, the less likely it is that they will all coalesce into a peaceful, ordered state of low entropy. But not only that, faster molecules also transfer more heat to the walls of the box, increasing the temperature of the system even more. This shows us two things: the box tends toward a high-entropy state, as the second law of thermodynamics states, and with entropy comes temperature.