
Worrying though the problems foot-and-mouth can generate are, it has more potential as a weapon of mass disruption than a weapon of mass destruction. This is, certainly, a terrorist aim in its own right. Disruption is a powerful propaganda tool, and disruption caused by many terrorist threats (such as the security clampdown after the discovery of a plot to take liquid explosives onto aircraft) does have a direct impact on our ability to live freely. Yet disruption in itself does not have the Armageddon quality that is the focus of this book.

Compared to many weapons of mass destruction, biological weapons are relatively easy to make. Often, given a starting colony, the biological agent will make itself—and the technology involved can be little more sophisticated than the equipment found in an industrial kitchen, provided there are appropriate mechanisms in place to stop the agent from escaping and attacking the workers.

In at least one recorded case, a terrorist succeeded in obtaining deadly bacteria simply by placing an order with a medical supplier. As it happens, Larry Harris, a member of a white supremacist group called Aryan Nations, was too impatient, and caused suspicion when he kept calling the supplier to ask why his plague bacteria had not arrived—but it is chilling that he was able to simply order a sample of plague over the phone and have it delivered by FedEx. It’s to be hoped that since September 11, procedures checking those placing orders for deadly diseases have become significantly more robust.

The ease of production or access is likely to continue to make biological agents attractive to terrorists. And there is evidence of more than just a hypothetical threat. The year 1972 saw the arrest in Chicago of members of a terrorist group called the Order of the Rising Sun. They had in their possession seventy-five pounds of typhoid bacteria cultures, with which they planned to contaminate the water supply of the cities around the Great Lakes. In 1984, members of another fringe group succeeded in spreading salmonella bacteria via salad bars in Oregon restaurants—many diners became ill, though none died.

Similarly, rogue states can produce biological weapons relatively easily. There are many more facilities already existing worldwide to produce biological agents than there are for the more high-tech weapons of mass destruction. Any laboratory developing or producing harmless and much needed vaccines against disease can easily be turned into a factory for manufacturing biological weapons.

There are, however, two key problems that the manufacturer of these weapons has to confront: how to get the disease agent into a suitable form to use it as a weapon, and how to deliver it. In their natural form, many bacterial agents are easily destroyed by heat, by ultraviolet light, or simply by being kept on the shelf too long. As the former Soviet bioweapons specialist Ken Alibek has commented, “The most virulent culture in a test tube is useless as an offensive weapon until it has been put through a process that gives it stability and predictability. The manufacturing technique is, in a sense, the real weapon, and is harder to develop than individual agents.”

This process involves not only “growing” the agent itself, but rendering it in a form that makes it easy to store—often a dry, powderlike material produced by blasting the agent with powerful jets of air—and mixing it with additives that will help preserve it. It might also be necessary to cover the agent in tiny polymer capsules to protect it from light where it is liable to be damaged by the ultraviolet component of sunlight. The biological weapons business has as much in common with the packaged-food industry as it does with the weapons trade.

Then there’s the problem of delivery. The selected bacterium, virus, fungus, or rickettsia may be deadly enough when the disease is caught, but how do you deliver the agent to your target? We might mostly think “natural is best” when choosing food these days, but the natural means for these biological agents to spread can be slow and are almost always difficult to control.

When the Soviet Union was at the height of its biological warfare program it tested the effectiveness of different means of spreading biological agents on an unknowing population. Harmless bacteria like Bacillus thuringiensis, known as simulants, would be spread over a populated area using different delivery mechanisms to see how the technology performed. The test subjects would never know that they were being exposed to a disease, and the monitoring would be under the cover of routine medical examinations.

With an agent like anthrax, which can be manufactured in powder form, it is relatively easy to deliver the biological weapon to its target by spreading the powder from the air or from an explosive container. Many biological agents, however, are liquids. These will usually be dispersed into the air as tiny droplets, turning the liquid into an aerosol.

Although a liquid bacteriological agent can be dispersed from the air using low-energy bombs (if the bomb is too powerful it tends to kill the agent), there is a much easier solution, which is as readily available to the terrorist as to the rogue state. Most of the technology that is used to spread pesticides and other agrochemicals can be used equally well to launch a chemical or biological attack. This is particularly true with crop-spraying aircraft, which are ideal for dispersing a deadly agent over an occupied area.

Western countries had given up their biological weapons by the 1970s, and it was thought that this was also true of the Soviet program, until revelations came from defectors in the 1990s—it now seems that the USSR kept a heavy-duty biological weapons manufacturing process in place all the way up to the early 1990s.

According to Ken Alibek, a former senior manager in the Soviet program who has now defected to the West, in the 1980s, Biopreparat, the organization responsible for much of the Soviet biological weapons development, was coming up with a new biological weapon every year—either enhancing an existing threat like anthrax to make it more resistant to antibiotics, or weaponizing a whole new disease.

At the time, the Soviets genuinely believed that the United States was lying about having given up biological weapons, and they considered it essential to make their biological warfare program more and more extreme to maintain an imagined lead in a race that didn’t exist. As Alibek says, “We were engaged in secret combat against enemies who, we were told, would stop at nothing. The Americans had hidden behind a similar veil of secrecy when they launched the Manhattan Project to develop the first atomic bomb. Biopreparat, we believed, was our Manhattan Project.”

This is a hugely revealing comment. Not only was the Manhattan Project a vast and secret enterprise; it was intended to produce the weapon that would end the war—and the weapon that would prove the ultimate lever in dealing with the enemies of the United States at the time. It is not fanciful to suggest that the Soviet hierarchy thought that having a significant lead in biological weapons would give them a similar potential lever over their American rival.

At the peak of the program, the Soviets had prepared the same long-range, multiple-warhead SS-18 missiles that carried nuclear weapons to carry enough anthrax to take out a whole city the size of New York. At the same time, they were developing technology to release much smaller canisters of biological agents from cruise missiles, which would have had the advantages of stealth and much more accurate targeting, essential for effective use of biological weapons.

While many countries have worked on biological weapons, most now shy away from this despised means of attack, whether on moral grounds or because their militaries are not happy with the indiscriminate nature of the technology. However, just as rogue states like Iraq have used chemical weapons in the relatively recent past, so such states are still likely to consider using biological agents.

In principle, both biological and chemical weapons are covered by conventions, but these political controls have proved less effective than their nuclear counterparts because they don’t have the same accompanying regime of international inspection. It’s also much harder to spot a biological or chemical test than a nuclear test, and there are many more ifs and buts in the biological conventions to allow for research on the prevention of disease. If you are producing enriched uranium there is really only one thing you can do with it, whereas research into viruses and bacteria is at the heart of the production of new medical cures.

On the chemical side, as we have already seen, there was the Hague Convention of 1899, strengthened in 1907 to prevent the use of all “poison or poisoned arms.” But this had very little influence on either side in the First World War. The most recent attempt to prevent the use of chemical weapons is the Chemical Weapons Convention of 1997, which has been ratified by all but a handful of countries (significant omissions include North Korea and Syria, with Israel signed up to the convention but not yet having ratified it at the time of writing). This treaty, like the nuclear one, does have an inspectorate in the form of the Organisation for the Prohibition of Chemical Weapons (OPCW), based in The Hague in the Netherlands. But there is a significant get-out clause that allows for research related to protection against chemical weapons, which many would argue provides a convenient cover for developing such weapons.

Things are even less well covered on the biological side. The Biological and Toxin Weapons Convention came into force earlier, in 1975, and has rather fewer states committed to it, many of the absent countries being in Africa. However, this convention has no monitoring body—it is merely a statement of intent—and as the history of the development of biological weapons in the USSR shows, it is a convention that some states have been prepared to flout. Sadly, a significant part of the resistance to having a verification process comes from the United States, where the biotechnology industry has lobbied hard against any biological equivalent of the OPCW’s role for chemical agents.

Meanwhile, the United States maintains laboratories working on biological agents to research ways to detect, resist, and counter biological attacks. This proved justified in 2001, when the anthrax letter attacks showed that terrorists were capable of both considering and delivering biological weapons. The attacks killed five people, infected seventeen more, and had a large impact in terms of inconvenience and cost as systems were put in place to prevent them from being repeated.

However, biological agents seem not to be the weapons of choice of terrorist groups like al Qaeda. This lack of interest probably reflects a combination of a preference for the immediacy of explosives and a cultural distaste for the concept of biological warfare. Even so, we should not be surprised if biological weapons are used again. The most likely targets would be those where movements of air naturally provide for the spread of the agent.

One way to achieve this is in air-conditioning systems. These already have a tendency to spread Legionella, the bacteria responsible for Legionnaires’ disease, and they would be equally effective at spreading other diseases if the agents were injected into the system appropriately. The same technique could work on airliners, though the potential target population is much smaller. Or, insidiously, a powder-based agent like anthrax could be seeded in subways, where the wind produced by the trains would spread the agent through the system.

Our agencies need to remain vigilant to the dangers of biological attack. The outbreak of an engineered plague, designed by human intervention to be difficult to resist, is a nightmare possibility. Yet it is less exotic than a newer threat to the human race that has emerged from scientific discoveries: nanotechnology.

Chapter Six
Gray Goo

These microscopic organisms form an entire world composed of species, families and varieties whose history, which has barely begun to be written, is already fertile in prospects and findings of the highest importance.

—Louis Pasteur (1822–95), quoted by Charles-Emile Sedilliot in “Influence de M. Pasteur sur les progrès de la chirurgie,” a paper read to the Académie de Médecine (March 1878)

Pasteur’s words in the quote that opens this chapter refer to the “microscopic organisms” of nature. But imagine the construction of man-made creatures on an even smaller scale, an army of self-replicating robots, each invisible to the naked eye. Like bacteria, these “nanobots,” endlessly reproducing devices, could multiply unchecked, forming a gray slime that swamped the world and destroyed its resources.

Each tiny robot would eat up natural resources in competition with living things, and could reproduce at a furious rate. This sounds like science fiction. It is—it’s the premise of Michael Crichton’s thriller Prey. But the idea of working with constructs on this tiny scale, nanotechnology, is very real. It has a huge potential for applications everywhere from medicine to engineering, from sun-block to pottery glaze—but could also be one of the most dangerous technologies science could engage in, as the so-called gray goo scenario shows (gray goo because the nanobots are too small to be seen individually, and would collectively appear as a viscous gray liquid, flowing like a living thing).

Louis Pasteur and his contemporaries didn’t discover microorganisms. The Dutch scientist Antoni van Leeuwenhoek peered through a crude microscope (little more than a powerful magnifying glass on a stand) in 1674 and saw what he described as “animalcules”—tiny rods and blobs that were clearly alive, yet so small that they were invisible to the naked eye. This idea of a world of the invisible, detectable only with the aid of technology, was boosted into a central theme of physics as atomic theory came to the fore and it was accepted that there could be structures far smaller than those we observe in the everyday world.

The original concept of the atom dates all the way back to the ancient Greeks, though, if truth be told, it proved something of a failure back then. The dominant theory at the time was taught by the philosopher Empedocles, who believed that everything was made up of four “elements”: earth, air, fire, and water. It was the kind of science that seemed to work from a commonsense viewpoint. If you took a piece of wood, for instance, and burned it, the result was earthlike ashes, hot air, fire, and quite possibly some water, condensing from the air. And these four “elements” do match up well with the four best-known states of matter: earth for solid, water for liquid, air for gas, and fire for plasma, the state of matter present in stars and the hottest parts of flames.

This theory would be the accepted wisdom for around two thousand years. By comparison, the alternative idea, posed by the philosophers Democritus and his master, Leucippus, was generally considered more a philosophical nicety than any reflection of reality. Democritus proposed cutting up a piece of matter repeatedly until it was smaller and smaller. Eventually you would have to come to a piece that, however fine your knife, was impossible to cut further. This would be indivisible, or in the Greek a-tomos. An atom.

These atoms of Democritus were not quite what we understand by the term today. Each different object had its own type of atom—so a cheese atom would be different from a wood atom—and the shape of the atom determined the properties of the material. Fire, for instance, would have a sharp, spiky atom, where water’s was smooth and curvaceous. Yet in this largely forgotten concept there was the seed of the idea that would blossom in the early nineteenth century, when British scientist John Dalton devised the modern concept of atoms as incredibly tiny particles of elements, building blocks that would be combined to make up the substances we see around us, either in the pure elementary form or interacting with different atoms to make compound molecules.

Dalton was led to this idea by work a couple of decades earlier by the French scientist Antoine-Laurent Lavoisier, who has the rare (if hardly desirable) distinction among scientists of being executed, though this was for his role as a tax collector at a time of revolution, rather than for his theories. Lavoisier laid down the basics of modern chemistry, showing how the same quantities of different substances always combined to make specific compounds. It seemed to imply some interior structure that made for these special combinations.

Yet the existence of such tiny objects as atoms was only grudgingly accepted. As late as the early twentieth century there was still doubt as to whether atoms really existed. In the early days of atomic theory, atoms were considered by most to be useful concepts that made it possible to predict the behavior of materials without there being any true, individual particles. It was only when Einstein began to think about a strange activity of pollen grains that it became possible to demonstrate the reality of the atomic form.

In 1827, the Scottish botanist Robert Brown had noticed that pollen grains in water danced around under the microscope as if they were alive. To begin with, he put this down to a sort of life force that was driving the tiny living particles in motion. But he soon found that ancient and decidedly dead samples of pollen still showed the same activity. What’s more, when he ground up pieces of metal and glass to produce small enough particles—things that had never been alive—exactly the same dance occurred.

This was considered an interesting but insignificant effect until 1905, the year when Albert Einstein would publish three papers that shook the scientific world. One was on special relativity; the second was on the photoelectric effect, the paper that helped kick-start quantum theory; and the third was on Brownian motion. Einstein proposed that this random dance of tiny particles like pollen was caused by millions of collisions with the molecules—simple collections of atoms—that made up the water in which the grains floated.

The reality of the existence of atoms and molecules was confirmed with certainty only in 1912 by French physicist Jean Perrin, who took conclusive measurements that backed up Einstein’s theory. And in 1980, most remarkably of all, Hans Dehmelt of the University of Washington succeeded in bringing an individual atom to the human gaze. More accurately, this was an ion—an ion is an atom with electrons missing, or extra electrons added, giving it an electrical charge—of barium.

Just as in the antimatter traps described in chapter 2, the ion was held in place by electromagnetic fields. The ion’s positive charge responded to the field rather in the same way that a magnet can be made to float over other magnets, though the ion had to be boxed in by several fields to prevent it from flying away. Incredibly, when illuminated by the right color of laser light, the single barium ion was visible to the naked eye as a pinprick of brilliance floating in space.

Once we have the idea that everything from a single water molecule to a human being is an assembly of atoms, differing only in the specific elements present and the way those atoms are put together, a startling possibility emerges. If there were some way to manipulate individual atoms, to place them together piece by piece as a child assembles a Lego construction, then in principle we should be able to make anything from a pile of atoms. Imagine taking a pen or a hamburger and analyzing it, establishing the nature and location of each individual atom present. Then with suitable technology—we’ll come back to that—it should be possible to build up, from ingredient stores of each atom present, an exact duplicate of that item.

But the proof of atoms’ existence that came in 1912 didn’t mean that it was possible to do anything with them in practice. Admittedly, in one sense, ever since human beings started to manipulate the world around us, we have been reassembling atoms. Whether simply chipping off bits of stone to make an ax head, or smelting metal and molding a tool, we were recombining atoms and molecules in new forms. But this approach was much too crude to enable any form of construction step by step with the fundamental building blocks.

Even now, more than a century after Einstein’s paper, the most common way to manipulate atoms and molecules directly is crudely, using accelerators and atom smashers. Science fiction’s best guess at how we could handle such ridiculously small items was that we would have to be shrunk in size. So in the movie Fantastic Voyage (novelized by Isaac Asimov), for example, we saw miniaturized humans interacting with the microscopic components of the human body. The idea of being able to manipulate objects on the nanoscale seemed unreal.

Until recently, this prefix “nano” was familiar only to scientists. It was introduced at the eleventh Conférence Générale des Poids et Mesures in 1960, when the SI (Système International) units were established. As well as fixing on which units should become standards of measurement—the meter, the kilogram, and the second, for example—the conference fixed prefixes for bigger and smaller units, from tera (multiply by 1 trillion) to pico (divide by 1 trillion). The penultimate prefix was nano (divide by 1 billion), derived from nanos, the Greek word for a dwarf. It’s one-billionth, so a nanometer is a billionth of a meter (about 40 billionths of an inch), a truly tiny scale.
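As a quick sanity check of that conversion, here is a minimal Python sketch; the only inputs are the SI value of the nano prefix and the standard meters-per-inch factor.

    # Sanity check of the nanometer-to-inch conversion (illustrative sketch).
    NANO = 1e-9                  # "divide by 1 billion"
    METERS_PER_INCH = 0.0254     # definition of the inch in meters

    nanometer_in_inches = NANO / METERS_PER_INCH
    print(f"1 nanometer is about {nanometer_in_inches:.1e} inches")
    # -> about 3.9e-08 inches, i.e. roughly 40 billionths of an inch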

It was the great American physicist Richard Feynman who first suggested, in a lecture he gave to the American Physical Society in 1959, that it might become possible to directly manipulate objects at the molecular level. Feynman was a trifle optimistic. He said, “In the year 2000, when they look back at this age, they will wonder why it was not until the year 1960 that anyone began seriously to move in this direction.” In practice we are only just getting there in the twenty-first century.

There are three huge problems facing anyone attempting to manipulate atoms to produce a new object. First is mapping the structure of an object—having an accurate blueprint to build to. Second is the sheer volume of atoms that have to be worked on. Imagine we wanted to put together something around the size and weight of a human being. That would contain very roughly 7×10²⁷ atoms: 7 with 27 zeroes after it. If you could assemble 1 million atoms a second, it would still take more than 2×10¹⁴ years to complete. That’s over 200 trillion years. Not the kind of time anyone is going to wait for a burger at a drive-through.

And finally there is the problem of being able to directly manipulate individual atoms, to click them into place, like so many Lego bricks.
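As a rough check of the second of those problems, here is a minimal back-of-the-envelope sketch in Python, using only the round figures quoted above (an atom count of about 7×10²⁷ and a rate of a million atoms per second).

    # Back-of-the-envelope check of the assembly-time estimate (illustrative only;
    # the atom count and assembly rate are the round figures from the text).
    atoms_in_a_person = 7e27       # roughly 7 x 10^27 atoms
    atoms_per_second = 1e6         # one million atoms assembled per second
    seconds_per_year = 60 * 60 * 24 * 365.25

    years = atoms_in_a_person / atoms_per_second / seconds_per_year
    print(f"about {years:.1e} years")   # roughly 2 x 10^14 years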

Feynman envisaged overcoming the problem of scale by using massively parallel working. It’s like those old problems they used to set in school tests. If it takes one man ten hours to dig a hole, how long would it take a gang of five men? Feynman envisaged making tiny manipulators, artificial “hands,” perhaps first just one-fourth of normal size. He imagined making ten of these. Then each of the little hands would be set to work making ten more hands one-sixteenth of the original size. So now we would have one hundred of the smaller hands. Each of those would make ten of one-sixty-fourth scale—and so on. As the devices got smaller, the number would multiply, until we would have billions upon billions of submicroscopic manipulators ready to take on our challenge.
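The cascade multiplies remarkably quickly. Here is a minimal Python sketch of the idea, assuming (as above) that every hand builds ten smaller hands per generation; the target of a trillion manipulators is simply an illustrative figure.

    # Sketch of Feynman's cascading manipulators (illustrative numbers only).
    hands = 10              # the first ten quarter-scale hands
    generations = 1
    target = 10**12         # a trillion manipulators, chosen for illustration

    while hands < target:
        hands *= 10         # each existing hand builds ten smaller ones
        generations += 1

    print(f"{hands:,} manipulators after {generations} generations")
    # -> 1,000,000,000,000 manipulators after just 12 generations

On this simple model, a dozen scaling steps take you from ten hands to the trillions of nanomachines discussed below.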

Twenty-seven years after Feynman’s lecture, author and entrepreneur K. Eric Drexler combined the “nano” prefix with “technology” in his book Engines of Creation to describe his ideas on how it would be possible to work on this scale. He referred to the Feynman-style manipulator as an assembler, a nanomachine that would assemble objects atom by atom, molecule by molecule.

A single assembler working at this scale would take thousands of years to achieve anything. As we have seen, there are just too many molecules in a “normal”-scale object. To make anything practical using assembly would require trillions of nanomachines. Drexler speculated that the only practical way to produce such an army of submicroscopic workers would be to devise nanomachines that could replicate like a biological creature, leading to the vision of gray goo and the potential devastation portrayed in Crichton’s Prey.

Before examining the realities of the gray-goo scenario, there are other aspects of nanotechnology that need to be considered. Working on this scale doesn’t necessarily involve anything so complex as an assembler. One very limited form of nanotechnology is already widely used—that’s nanoparticles. These are just ordinary substances reduced to particles on this kind of scale, but because of their size they behave very differently from normal materials. The most common use of nanotechnology currently is in sunscreens, where nanoparticles of zinc oxide or titanium dioxide are employed to protect us from the sun’s rays, allowing visible light to pass through, but blocking harmful ultraviolet rays. In fact, we have been using nanoparticles for centuries in some of the pigments in pottery glazing, without realizing it.

There is also a form of atomic manipulation and construction at the heart of every electronic device, and especially a computer like the one I have written this book on. We casually refer to “silicon chips,” a weak, dismissive term for integrated circuits that totally understates what a marvel of technology these are. Using atomic layer deposition, one of the most advanced of the techniques used to “print” the detail on top of the base silicon wafer of a computer chip, layers as thin as one-hundredth of a nanometer can be employed. This is true nanotechnology.

Soon to be practical, with vast potential, are more sophisticated nano-objects—nanotubes and nanofibers. Often made of carbon, these molecular filaments are grown like a crystal rather than constructed and have the capability to provide both super-strong materials (as an extension of the current cruder carbon fibers) and incredibly thin conductors for future generations of electronics. Semiconducting nanotubes have already been built into (otherwise) impossibly small transistors, while carbon nanotubes could make one of the more remarkable speculations of science fiction a reality.

Writer Arthur C. Clarke popularized the idea of a space elevator, a sixty-two-thousand-mile cable stretching into space that could haul satellites and spacecraft out beyond the Earth’s gravity without the need for expensive and dangerous rocketry. Bradley Edwards, working for the NASA Institute for Advanced Concepts, commented in 2002: “[With nanotubes] I’m convinced that the space elevator is practical and doable. In 12 years, we could be launching tons of payload every three days, at just a little over a couple hundred dollars a pound.” Edwards was overoptimistic—there is no sign of the development of a space elevator as we near the end of his twelve-year period—but nanotubes do have huge potential and will be used more and more.
