Baran christened his technique “adaptive message block switching,” abbreviated to “packet switching” in 1966 by Donald Davies, working independently at the U.K. National Physical Laboratory. The first order of business was to take all forms of communicable information—text, data, graphics, voice—and break it up into short strings of bits of uniform length. To the network, all forms of communication would look the same. Error-free transmission of complex messages would be facilitated, for the same reason that the reproduction of complex organisms can best be achieved in a noisy environment through the collective reproduction of large numbers of smaller component parts. Every bit of data that passes through a network acquires a certain probability of error, and the chance that a given message arrives untouched by any such error falls off exponentially with the message length. Short segments of code are far more likely to arrive unscathed. Error-detecting processes are most economically applied to short strings of code, checking each message segment for errors and retransmitting only those individual segments that fail. This is analogous to proofreading a manuscript one page at a time and retyping only those pages containing mistakes. Baran proposed using cyclic redundancy checking, or CRC, a method widely utilized today.
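
In modern terms the procedure amounts to the following minimal sketch (the 1,024-bit block size is Baran's; CRC-32, the function names, and the retransmission bookkeeping are illustrative stand-ins rather than his design):

```python
import zlib

BLOCK_BITS = 1024                      # Baran's message-block size
BLOCK_BYTES = BLOCK_BITS // 8

def segment(message: bytes) -> list[bytes]:
    """Break a message of any length into fixed-length blocks (last one padded)."""
    return [message[i:i + BLOCK_BYTES].ljust(BLOCK_BYTES, b"\x00")
            for i in range(0, len(message), BLOCK_BYTES)]

def checksum(block: bytes) -> int:
    """Cyclic redundancy check over a single block (CRC-32 as a stand-in)."""
    return zlib.crc32(block)

def blocks_to_resend(received: list[bytes], expected: list[int]) -> list[int]:
    """Indices of blocks whose checksums fail; only these are retransmitted."""
    return [i for i, (blk, crc) in enumerate(zip(received, expected))
            if zlib.crc32(blk) != crc]
```

Because each block is checked and, if necessary, resent on its own, a noisy link costs a few short retransmissions rather than the whole message.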

Each 1,024-bit message block was flagged to distinguish it from its neighbors and provided with fields containing its “to” and “from” address, as well as information needed to reconstruct the original message sequence at the other end. The message block also included a handover tag, which was incremented every time the code segment passed through a node. Each individual code packet knew where it was going, where it had come from, which message it belonged to, and how many steps it had taken along the way. This information was shared with the host computer every time a packet passed through a node. “Packets are best viewed as wholly contained information entities designed to match the characteristics of the network switches,” noted Baran.
55
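
Read as a data structure, the block Baran describes might look something like this; the field names and types are mine, not drawn from his reports:

```python
from dataclasses import dataclass

@dataclass
class MessageBlock:
    """One 1,024-bit message block; field names are illustrative."""
    to_address: int       # destination station
    from_address: int     # originating station
    message_id: int       # which message the block belongs to
    sequence_number: int  # position for reassembling the message at the far end
    handover_tag: int     # incremented at every node the block passes through
    payload: bytes        # the data itself

    def hop(self) -> None:
        """Called as the block is handed from one node to the next."""
        self.handover_tag += 1
```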

At the nodes, simple procedures (dubbed the “hot potato heuristic routing doctrine” by Baran) kept packets moving in the right direction and ensured that the network would adjust itself to any congestion or damage that arose. By looking at the “from” address and the value of the handover tag, the station kept track of which links were forwarding messages most efficiently from any given address and used that information to direct outgoing messages, operating on the assumption that the best link coming in was likely to be the best link going out. “Each node will attempt to get rid of its messages by choosing alternate routes if its preferred route is busy or destroyed. Each message is regarded as a ‘hot potato,' and rather than hold the ‘hot potato,' the node tosses the message to its neighbor, who will now try to get rid of the message.”
56
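
A toy rendering of the doctrine, reusing the MessageBlock sketch above, might look like the following; the update rule is a plausible reconstruction of Baran's description rather than his actual routing tables:

```python
import math

class Node:
    """Hot-potato routing sketch: learn, for each source address, the link on
    which traffic has arrived in the fewest hops, and prefer the reverse of
    that link when sending toward that address; if it is busy or dead, toss
    the packet to any free neighbor rather than hold it."""

    def __init__(self, links: list[str]):
        self.links = links        # identifiers of the links to neighboring nodes
        self.best_hops = {}       # address -> fewest handovers observed so far
        self.best_link = {}       # address -> link those packets arrived on

    def observe(self, block, arriving_link: str) -> None:
        """Update routing estimates from an incoming block's handover tag."""
        if block.handover_tag < self.best_hops.get(block.from_address, math.inf):
            self.best_hops[block.from_address] = block.handover_tag
            self.best_link[block.from_address] = arriving_link

    def forward(self, block, busy_or_dead: set):
        """Pick the learned best link toward the destination, else any free link."""
        preferred = self.best_link.get(block.to_address)
        if preferred is not None and preferred not in busy_or_dead:
            return preferred
        for link in self.links:   # the "hot potato": never sit on the block
            if link not in busy_or_dead:
                return link
        return None               # every link is down; only then does the block wait
```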

Individual switching nodes learned from experience where each station was and quickly updated this knowledge if a given station was damaged or moved. Baran found it possible to “suggest the appearance of an adaptive system” by implementing a “self-learning policy at each node so that overall traffic is effectively routed in a changing environment—without need for a central and possibly vulnerable control.”
57
A demonstration system, simulated as a forty-nine-node array on RAND's IBM 7090 computer, proved to be surprisingly robust against both random failure and deliberate attack. Starting from a “worst-case starting condition where no station knew the location of any other station,” it was found that “within ½ second of simulated real world time, the network had learned the locations of all connected stations and was routing traffic in an efficient manner. The mean measured path length compared very favorably to the absolute shortest possible path.”
58
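
Continuing the sketch, a node can be started from that same worst-case condition and pick up its routes from the first packets that happen to pass through it (a toy three-station line A–B–C, not RAND's forty-nine-node simulation):

```python
# Station B starts knowing nothing.  One block arrives from A, one from C.
a_to_c = MessageBlock(to_address=2, from_address=0, message_id=1,
                      sequence_number=0, handover_tag=1, payload=b"")
c_to_a = MessageBlock(to_address=0, from_address=2, message_id=2,
                      sequence_number=0, handover_tag=1, payload=b"")

B = Node(links=["to_A", "to_C"])
B.observe(a_to_c, arriving_link="to_A")
B.observe(c_to_a, arriving_link="to_C")

# After only two observations B forwards correctly in both directions.
print(B.forward(a_to_c, busy_or_dead=set()))   # "to_C"
print(B.forward(c_to_a, busy_or_dead=set()))   # "to_A"
```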

The complete study was published in a series of eleven reports released in August 1964. An additional two volumes on cryptographic issues remained classified, even though, as Baran explained in the ninth volume, devoted to security and secrecy, “if one cannot safely describe a proposed system in the unclassified literature, then, by definition, it is not sufficiently secure to be used with confidence.”
59
Baran later emphasized that “we chose not to classify this work and also chose not to patent the work. We felt that it properly belonged in the public domain. Not only would the US be safer with a survivable command and control system, the US would be even safer if the USSR also had a survivable command and control system as well!”
60

In late 1962 it was estimated that each switching node would occupy seventy-two cubic feet, consume fifteen hundred watts of power, weigh twenty-four hundred pounds, and incorporate 4,096 32-bit words of memory. By 1964 the estimate was down to units occupying one-third of a cubic foot, with twice the memory, no air-conditioning, and costing fifty thousand dollars less. Unlike most military budgets, this one was going down. By the time the final reports were released, most skeptics had been persuaded of the project's advantages—“the only major outpost of frontal opposition was AT&T.”
61
In 1965, RAND gave the air force its final recommendation to implement the project in a preliminary configuration comprising two hundred multiplexing stations distributed across the continental United States, each capable of serving up to 866 simultaneous subscribers with data and 128 with voice. There were to be four hundred switching nodes, each supporting up to eight full-duplex “minicost” microwave links. By 1966 the air force had received a favorable evaluation from an independent review. Only then was it determined that jurisdiction over the project would have to be assigned not to the air force and its independent contractors but to the Defense Communications Agency, which Baran and some influential colleagues believed was ill prepared to construct a nationwide network based on digital principles at odds with the communications establishment of the time. Baran was forced to advise against the implementation of his own project, lest “detractors would have proof that it couldn't be done . . . a hard decision, but I think it was the right one.”
62

Baran's packet-switched data network did eventually materialize—not out of whole cloth, as envisioned in 1960, but by the gradual association of many different levels of digital communication systems that ultimately converged, more or less closely, on Baran's original design. “The process of technological development is like building a cathedral,” said Baran. “Over the course of several hundred years: new people come along and each lays down a block on top of the old foundations, each saying, ‘I built a cathedral.' Next month another block is placed atop the previous one. Then comes along an historian who asks, ‘Well, who built the cathedral?' But the reality is that each contribution has to follow onto previous work. Everything is tied to everything else.”
63

9
THEORY OF GAMES AND ECONOMIC BEHAVIOR

The game that nature seems to be playing is difficult to formulate. When different species compete, one knows how to define a loss: when one species dies out altogether, it loses, obviously. The defining win, however, is much more difficult because many coexist and will presumably for an infinite time; and yet the humans in some sense consider themselves far ahead of the chicken, which will also be allowed to go on to infinity.

—STANISLAW ULAM
1

“Unifications of fields which were formerly divided and far apart,” counseled John von Neumann in 1944, “are rare and happen only after each field has been thoroughly explored.”
2
So went his introduction (with Oskar Morgenstern) to Theory of Games and Economic Behavior, a mathematical vision whose brilliance was eclipsed only by the developments that his work in atomic weapons and digital computers was about to bring to light.

An interest in economics ran deeply through von Neumann's life. At the time of the Institute for Advanced Study's electronic computer project, von Neumann maintained an incongruous appearance, wearing a three-piece suit among casually dressed logicians and electrical engineers. This costume was a memento of his background as the son of an investment banker and the omen of a future in which the world of money and the world of logic, thanks to computers, would meet on equal terms. In his Theory of Games and Economic Behavior, von Neumann laid the foundations for a unified view of information theory, economics, evolution, and intelligence, whose implications continue to emerge.

Among von Neumann's predecessors was André-Marie Ampère, who published Considérations sur la théorie mathématique du jeu (On the mathematical theory of games) at the age of twenty-seven in 1802. Ampère began his study by crediting Georges Louis Buffon (“an author in whom even errors bear the imprint of genius”) as the forefather of mathematical game theory, citing his (1777) Essai d'Arithmétique Morale. Buffon (1707–1788) was a celebrated naturalist whose evolutionary theories preceded both Charles and Erasmus Darwin, advancing ideas that were risky at the time. “Buffon managed, albeit in a somewhat scattered fashion,” wrote Loren Eiseley, “at least to mention every significant ingredient which was to be incorporated into Darwin's great synthesis of 1859.”
3
The elder Buffon and the young Ampère shared in the tragedy that swept postrevolutionary France: Buffon's son and Ampère's father both died under the guillotine, equally innocent of any crime.

Ampère analyzed the effects of probability rather than strategy, ignoring more deliberate collusion among the players of a game. Having suffered the first of a series of misfortunes that would follow him through life, Ampère saw games of chance as “certain ruin” to those who played indefinitely or indiscriminately against multiple opponents, “who must then be considered as a single opponent whose fortune is infinite.”
4
He observed that a zero-sum game (where one player's loss equals the other players' gain) will always favor the wealthier player, who has the advantage of being able to stay longer in the game.
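
Ampère's observation is the classical gambler's-ruin result, easily checked by simulation; the sketch below is mine, not Ampère's analysis. In a fair, one-unit-per-round zero-sum game the chance that the poorer player is ruined first is b/(a + b), which tends to certainty as the opponent's fortune b grows without bound:

```python
import random

def ruin_probability(a: int, b: int, trials: int = 20_000) -> float:
    """Monte Carlo estimate of the chance that the player starting with `a`
    units loses everything in a fair zero-sum game against an opponent who
    starts with `b` units.  The exact answer is b / (a + b)."""
    ruined = 0
    for _ in range(trials):
        stake = a
        while 0 < stake < a + b:
            stake += 1 if random.random() < 0.5 else -1
        ruined += (stake == 0)
    return ruined / trials

print(ruin_probability(10, 90))   # roughly 0.9: the poorer player usually loses
```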

Von Neumann's initial contribution to the theory of games, extending the work of Émile Borel, was published in 1928. Where Ampère saw chance as holding the upper hand, von Neumann sought to make the best of fate by determining the optimum strategy for any game. The results of his collaboration with Princeton economist Oskar Morgenstern were completed in the midst of wartime and published in 1944. “The main achievement of the [von Neumann-Morgenstern] book lies, more than in its concrete results, in its having introduced into economics the tools of modern logic and in using them with an astounding power of generalization,” wrote Jacob Marschak in the Journal of Political Economy in 1946.
5
Von Neumann's central insight was his proof of the “minimax” theorem on the existence of good strategies, demonstrating for a wide class of games that a determinable strategy exists that minimizes the expected loss to a player when the opponent tries to maximize the loss by playing as well as possible. This conclusion has profound but mathematically elusive consequences; many complexities of nature, not to mention of economics or politics, can be treated formally as games. A substantial section of the 625-page book is devoted to showing how seemingly intractable situations can be rendered solvable through the assumption of coalitions among the players, and how non-zero-sum games can be reduced to zero-sum games by including a fictitious, impartial player (sometimes called Nature) in the game.
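
The flavor of the minimax argument fits in a few lines. The toy payoff matrix below (my own numbers) happens to have a saddle point, so pure strategies suffice; von Neumann's theorem guarantees the same coincidence of guarantees for every finite zero-sum game once mixed, i.e. randomized, strategies are allowed:

```python
# Payoffs are gains to the row player and losses to the column player.
payoff = [
    [3, 1, 4],
    [2, 0, 1],
    [5, 2, 6],
]

# Row player picks the row whose worst case is best ("maximin") ...
row_guarantee = max(min(row) for row in payoff)

# ... and the column player picks the column whose worst case is smallest ("minimax").
col_guarantee = min(max(col) for col in zip(*payoff))

print(row_guarantee, col_guarantee)   # 2 2: the two guarantees meet at the saddle point
```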

Game theory was applied to fields ranging from nuclear deterrence to evolutionary biology. “The initial reaction of the economists to this work was one of great reserve, but the military scientists were quick to sense its possibilities in their field,” wrote J. D. Williams in The Compleat Strategyst, a RAND Corporation best-seller that made game theory accessible through examples drawn from everyday life.
6
The economists gradually followed. When John Nash was awarded a Nobel Prize for the Nash equilibrium in 1994, he became the seventh Nobel laureate in economics whose work was influenced directly by von Neumann's ideas. Nash and von Neumann had collaborated at RAND. In 1954, Nash authored a short report on the future of digital computers, in which the von Neumann influence was especially pronounced. “The human brain is a highly parallel setup. It has to be,” concluded Nash, predicting that optimal performance of digital computers would be achieved by coalitions of processors operating under decentralized parallel control.
7

In 1945 the Review of Economic Studies published von Neumann's “Model of General Economic Equilibrium,” a nine-page paper read to a Princeton mathematics seminar in 1932 and first published in German in 1937. With characteristic dexterity, von Neumann managed to elucidate the behavior of an economy where “goods are produced not only from ‘natural factors of production,' but . . . from each other,” thereby shedding light on processes whose interdependence otherwise appears impenetrably complex. Because equilibrium was shown to depend on growth, this became known as von Neumann's expanding economic model. The conclusions were universally debated by economists; mathematicians were universally impressed. Von Neumann derived his conclusions from the topology of convex sets, noting that “the connection with topology may be very surprising at first, but the author thinks that it is natural in problems of this kind.”
8
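
A drastically simplified numerical sketch of an economy in which goods are produced from each other (a two-good toy with one fixed technique, whereas von Neumann's model allows joint products and a choice of processes) shows how the technology alone fixes the highest rate of balanced growth:

```python
import numpy as np

# A[i][j] = units of good i consumed to produce one unit of good j next period.
A = np.array([[0.2, 0.4],
              [0.3, 0.1]])

# Balanced growth x_{t+1} = g * x_t requires g * A x <= x; for a nonnegative,
# connected technology the largest feasible g is 1 / (spectral radius of A).
growth_factor = 1.0 / max(abs(np.linalg.eigvals(A)))
print(growth_factor)   # 2.0: every output can expand by this common factor each period
```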
