How to Destroy the Universe
Paul Parsons
The first computer-based weather simulation was run in 1950 on a computer called ENIAC (Electronic Numerical Integrator and Computer) at the US Army Ballistic Research Laboratory in Maryland, where it had initially been used for working out artillery-shell
trajectories. ENIAC's early weather models used an extremely simplified picture of the atmosphere, where the air pressure at any point is determined simply by the density. Gradually meteorologists built more sophistication into their models to account for the processes of heating and atmospheric circulation that generate our complex real-world weather phenomena.
Computer weather models are set up by dividing the atmosphere into a three-dimensional grid. British mathematician Ian Stewart, in his book Does God Play Dice?, likens it to a 3D chess board. The weather at each precise moment in time is determined by assigning each cube in the grid a set of parameters defining the temperature, pressure, humidity and so on within that cube. These numbers can be thought of as rather like the chess pieces. The computer then evolves the board forward according to the rules of the game, encoded in the physics equations describing the weather. The results amount to moving the pieces around on the board rather like moves in a game.
In each cube, the computer takes the values of all the weather parameters and crunches them through the equations to work out the rate of change of each parameter at that instant in time. The rate of change allows all of the parameters to be evolved forward by a short interval, known as the “time step.” Now the new values for all the parameters can be fed back into the
computer again and used to work out a new set of rates of change, which can then be used to evolve the whole system forward by the next time step, and so on. The process repeats iteratively until enough time steps have been accumulated to reach the point in the future for which the forecast is needed. For a model of global weather systems, the time steps might be ten minutes or so, but for simulations of the weather over small regions they can be as small as a few seconds. After each time step, the parameter values in each cell are meshed together to ensure continuity. The result is a model of Earth's weather that can be advanced as far into the future as needed.
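To make the bookkeeping concrete, here is a minimal sketch of such a time-stepping loop in Python. The single grid cell, its parameter values and the rates_of_change() function are all hypothetical stand-ins for the real physics; only the structure of the loop (compute the rates, step every parameter forward, repeat) mirrors the scheme described above.

```python
# Minimal sketch of the time-stepping scheme described in the text.
# The real physics is hidden behind rates_of_change(), a placeholder here.

def rates_of_change(cell):
    """Return the instantaneous rate of change of each parameter in a cell.
    Placeholder physics: a real model derives these from the equations of
    fluid flow, heat transfer and so on."""
    return {
        "temperature": -1e-5 * (cell["temperature"] - 15.0),  # gentle relaxation toward 15 C
        "pressure": 0.0,                                       # held constant in this toy
        "humidity": -1e-6 * cell["humidity"],                  # very slow drying
    }

def step_forward(grid, dt):
    """Advance every cell in the grid by one time step of dt seconds."""
    new_grid = []
    for cell in grid:
        rates = rates_of_change(cell)
        new_grid.append({key: value + rates[key] * dt for key, value in cell.items()})
    return new_grid

# One cube of the "3D chess board", with made-up starting values.
grid = [{"temperature": 20.0, "pressure": 1013.0, "humidity": 80.0}]

dt = 600.0                # a ten-minute time step, as for a global model
for _ in range(144):      # 144 steps of 10 minutes = a 24-hour forecast
    grid = step_forward(grid, dt)

print(grid[0])
```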
In practice, however, the models could not be pushed that far into the future. Something was still missing. The predictions of the computer weather models were only good for a few days, after which they became hopelessly inaccurate. The reason why was uncovered in the 1960s by the US mathematician Edward Lorenz. What he found would revolutionize not just how we think about the weather, but pretty much the whole of math and physics.
In 1963, Lorenz carried out a detailed study of the equations describing a key element of how the weather behaves: convection. This is the process that makes hot air rise and cold air sink. The same process happens in a pan of cold water that's heated from below on a stove. Even this small subset of weather math was too difficult to solve on paper, so Lorenz put the equations on a computer. But when he did this he found something curious. If he stopped his simulation halfway through and wrote down the values of all the parameters, and then fed these back in manually to finish the simulation off, he got an answer wildly different from what he got by just letting the simulation carry on running in the first place.

Lorenz eventually isolated the problem. Although the computer's memory was storing the numbers to an accuracy of six decimal places, it was only displaying its results to three decimal places. So, for example, if a number in the memory was 0.876351, the computer would only display 0.876. When Lorenz fed this truncated number back in, the loss of accuracy brought about by sacrificing those last three digits was skewing his results. So sensitive are the equations of convection to the initial conditions of the system that changing these conditions by just a few hundredths of a percent was bringing about wildly different behavior.

Lorenz had discovered a phenomenon known as “chaos”: extreme sensitivity of a system to its initial state, meaning that tiny differences in that initial state become magnified over time. The main reason why forecasting tomorrow's weather is so difficult is that we cannot measure today's weather accurately enough. Lorenz even coined a term to describe the phenomenon: the “butterfly effect,” the idea that the tiny perturbations caused one day by a butterfly beating its wings could be amplified over time to create dramatic shifts in the weather days down the line.
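Lorenz's discovery is easy to reproduce. The sketch below integrates the three convection equations from his 1963 paper, using the conventional parameter values (sigma = 10, rho = 28, beta = 8/3) and a simple fixed-step method; those numerical choices are mine rather than the book's. One run starts from a full-precision point, the other from the same point rounded to three decimal places, much as Lorenz's printout effectively did.

```python
# Two runs of Lorenz's 1963 convection equations, identical except that
# one starting value is rounded to three decimal places.

def lorenz_step(x, y, z, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz-63 system by one small Euler step."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

def run(x, y, z, steps=20000, dt=0.001):
    for _ in range(steps):
        x, y, z = lorenz_step(x, y, z, dt)
    return x, y, z

full      = run(0.876351, 1.0, 1.0)   # full-precision starting value
truncated = run(0.876,    1.0, 1.0)   # same start, kept to three decimal places

print("full precision:", full)
print("truncated     :", truncated)   # by this point the two trajectories disagree wildly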
Today, chaos is known to crop up in all kinds of physical systems, including quantum mechanics, relativity, astrophysics and economics. Mathematicians spot the presence of chaos by drawing a diagram called a “phase portrait,” which shows how the system evolves with time. They look for areas of the phase portrait called “attractors,” to which the system's behavior converges. Non-chaotic systems have simple, well-defined attractors. For instance, the phase portrait of a swinging pendulum is just a plot of the pendulum bob's position against its speed, and the attractor takes the form of a circle.
Chaotic systems have attractors with bizarre, convoluted forms known as “fractals”: disjointed shapes that appear the same no matter how closely you zoom in on them. The simplest fractal is made by removing the middle third from a straight line and then repeating the process ad infinitum on the remaining segments. Edward Lorenz found that the attractor in the phase portrait of convection was indeed a fractal, a kind of distorted figure 8, which has since become known as the “Lorenz attractor.”
The simplest fractal is obtained by removing the middle third from a straight line and repeating the process.
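Here is a short sketch of that middle-third construction (the shape mathematicians call the Cantor set), printing how many segments survive each round of removal and how their total length shrinks. Zooming in on any surviving piece reproduces the whole pattern.

```python
# The "middle third" fractal: start with one segment and repeatedly
# remove the middle third of every surviving segment.

def remove_middle_thirds(segments):
    """Replace each segment (a, b) with its outer two thirds."""
    new_segments = []
    for a, b in segments:
        third = (b - a) / 3.0
        new_segments.append((a, a + third))
        new_segments.append((b - third, b))
    return new_segments

segments = [(0.0, 1.0)]          # the original straight line
for level in range(4):
    segments = remove_middle_thirds(segments)
    print(f"after round {level + 1}: {len(segments)} segments, "
          f"total length {sum(b - a for a, b in segments):.4f}")
```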
Improved computing power is now enabling the future evolution of chaotic systems to be predicted more reliably by storing the system parameters to a greater number of decimal places. The most powerful scientific computer is a modified Cray XT5, known as Jaguar, at the National Center for Computational Science in Tennessee. It has the same number-crunching capacity as about 10,000 desktop PCs. In truth, it's unlikely the weathermen will ever be able to tell us with 100 percent certainty whether it's going to be sunny at the weekend. But disastrous misforecasts such as those that were issued prior to the Great Storm of '87 should at least become a thing of the past. Or so they tell us.
⢠What is an earthquake?
⢠The magnitude scale
⢠Tsunamis
⢠Quake-proof buildings
⢠Mass dampers
⢠Earthquake prediction
Earthquakes are one of the most destructive forces in the natural world, equivalent in power to an atomic bomb. The quake that struck Haiti in 2010 killed over 200,000 people, and as cities in earthquake zones grow larger, it is becoming increasingly likely that a future quake could claim not thousands but millions of lives. Or is it? Are new technologies to mitigate the effects of earthquakes, ranging from giant pendulums inside skyscrapers to rubber feet under buildings, finally about to tame this awesome force of nature?
Earthquakes occur when the tectonic plates that make up Earth's crust grate and grind against one another as
they move. Tectonic plates are vast interlocking slabs of rock that float on the liquid layers of molten metal and rock that lie below them. As these liquids roll and froth, stirred up by the heat of the planet's interior, they drag on the plates above, pulling them this way and that. There are seven major tectonic plates (African, Antarctic, Eurasian, Indo-Australian, North American, Pacific and South American) and very many smaller ones. The boundaries where two plates meet are known as “fault lines” and they come in a variety of different forms, depending on the relative motion of the two plates.
When the two plates are slipping past one another horizontally, the boundary is referred to by geologists as a “transform fault.” As the plates jostle together, friction at the fault prevents them from slipping by smoothly. Instead they move in a jerking, juddering motion known as “stick-slip.” First, the rock at the fault sticks because of friction. It deforms as the plates move, as if it were made of rubber. Over time the stress on the fault increases until eventually friction is overcome and the plates quickly slip past each other as the rock suddenly snaps back into shape.
An earthquake results when millions of tons of rock all rebounding in this way unleashes a violent mechanical wave that spreads out through the land, a bit like the ripple on the surface of a pond when you've dropped a rather large stone in it. This wave, called a “seismic wave,” can have the power to bring down bridges and buildings, cause landslides and induce “soil liquefaction,” where agitated soil assumes a liquid-like consistency, into which buildings and other structures can sink. Transform faults can spawn some truly destructive earthquakes, including the 1906 quake that devastated San Francisco, a city that lies next to the San Andreas Fault at the boundary between the Pacific and North American plates.
This is the view from above a geological fault line. Over many years, movement of tectonic plates deforms the landscape at the fault. When the build-up of elastic energy in the rock becomes great enough, it suddenly slips. This is an earthquake.
Seismic waves generated during an earthquake come in two different forms, called P waves and S waves. P waves are compression waves, rather like the waves you get on a stretched spring. The disturbance caused by P waves is parallel to their direction of motion. S waves, on the other hand, are more like water waves, where the disturbance is at right angles to the wave's motion, creating an S-shaped pattern of peaks and troughs as the wave passes.

P waves travel roughly 1.7 times faster than S waves and scientists can use this fact to determine the distance to the earthquake's source, called the “hypocenter.” Roughly speaking, eight times the time gap in seconds between the arrival of P waves and S waves gives the distance to the hypocenter in kilometers. By triangulating measurements made at a number of observing stations, the location of the hypocenter can be pinpointed. Most quakes happen within a few tens of kilometers of the surface, but the deepest ones can be located hundreds of kilometers down. The point on Earth's surface directly above the hypocenter is known as the “epicenter.”
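As a rough worked example of where that rule of thumb comes from: assuming a typical S-wave speed of about 3.5 km/s (a figure not quoted in the text) together with the 1.7 speed ratio that is, the arithmetic falls out as follows.

```python
# Rough distance-to-hypocenter estimate from the P-S arrival gap.
# The wave speeds below are assumed typical values; they vary with rock type and depth.
V_S = 3.5          # S-wave speed in km/s (assumed)
V_P = 1.7 * V_S    # P waves roughly 1.7 times faster

def distance_from_lag(lag_seconds):
    """Distance (km) at which P and S arrivals are separated by lag_seconds.
    lag = d/V_S - d/V_P  =>  d = lag / (1/V_S - 1/V_P)"""
    return lag_seconds / (1.0 / V_S - 1.0 / V_P)

for lag in (5, 10, 20):
    print(f"P-S gap of {lag:2d} s  ->  roughly {distance_from_lag(lag):.0f} km away")

# With these speeds the factor works out at about 8.5 km per second of lag,
# close to the "eight times the gap" rule of thumb in the text.
```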
Seismologists gauge the power of an earthquake by taking its “moment magnitude,” which is a measure of the amount of energy the earthquake releases. This is an updated version of the Richter magnitude scale, first put forward by US physicist Charles Richter in 1935. Each increment in the scale corresponds to an increase in the energy of the quake by a factor of 10^1.5 (about 31.6). In other words, an earthquake with a moment magnitude of 6 is 1,000 (31.6^2) times more powerful than a magnitude-4 quake. The 1906 San Francisco quake had a moment magnitude of 7.8, while Haiti in 2010 was magnitude 7. The most powerful earthquake on record, in Chile in 1960, measured a colossal 9.5. By comparison, the largest nuclear bomb ever detonated, the Russian Tsar Bomba in 1961, gave out energy equivalent to a magnitude-8 quake.
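A quick check of those comparisons, using the factor of 10^1.5 in energy per unit of magnitude, so that the ratio between two quakes depends only on the difference in their magnitudes:

```python
# Energy released scales as 10^(1.5 * magnitude).

def energy_ratio(m_big, m_small):
    """How many times more energy the larger quake releases."""
    return 10 ** (1.5 * (m_big - m_small))

print(energy_ratio(6.0, 4.0))    # 1000.0 -- magnitude 6 vs magnitude 4
print(energy_ratio(7.8, 7.0))    # ~15.8  -- San Francisco 1906 vs Haiti 2010
print(energy_ratio(9.5, 8.0))    # ~178   -- Chile 1960 vs a magnitude-8 event (the Tsar Bomba equivalent)
```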
Earthquakes don't just happen on land. In addition to transform faults, the two other kinds of fault boundary separating two tectonic plates are known as “divergent” and “convergent.” Here, the plates are either moving apart or slipping under one another, respectively. Divergent faults are normally associated with what are known as seafloor spreading sites, where new crust is being created at the bottom of the ocean. But far more lethal are the convergent faults, also normally found on the seafloor, where existing crust is sinking down into the planet's interior in a process called subduction.