Then comes the crucial assumption—that without having to pay attention to the details, a behavioral gradient rewarding a closer match between model and reality will encourage the internal structure of the model to organize itself. “As we ‘tune' our model, so to speak, that is, make the whole and the components resemble patterns of performance observable in real life, then the adequacy of the specific parametric values imposed on Leviathan for specific experiments should improve; and Leviathan will come to behave more like a real society both in its components and as a whole. In this way system properties can be imposed downward into the hierarchical structures.”
24
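
The Romes' notion of "tuning" the model amounts to adjusting its parameters until simulated behavior matches observed behavior, keeping whatever changes reward a closer match. A minimal sketch of that idea, with a hypothetical one-parameter model and a simple hill-climbing loop standing in for whatever Leviathan actually did:

```python
import random

# Tune a single model parameter so that simulated behavior matches
# behavior "observable in real life", keeping any change that improves
# the match. The model, the observations, and the step size are all
# illustrative assumptions, not the Leviathan model itself.

observed = [2.0, 4.1, 6.2, 7.9, 10.1]            # measured behavior

def simulate(slope):
    """A stand-in model whose output depends on one parameter."""
    return [slope * t for t in range(1, len(observed) + 1)]

def mismatch(slope):
    return sum((s - o) ** 2 for s, o in zip(simulate(slope), observed))

slope = random.uniform(0.0, 5.0)                  # arbitrary starting value
for _ in range(1000):
    candidate = slope + random.gauss(0, 0.1)
    if mismatch(candidate) < mismatch(slope):     # reward a closer match
        slope = candidate

print(round(slope, 2), round(mismatch(slope), 4))
```

The point of the sketch is only the shape of the procedure: parameters are never set by hand to "pay attention to the details"; they drift toward whatever values make the model's behavior resemble the real thing.
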
As the Romes found out, this is easier said than done. They were on the right track, but with Leviathan running on an IBM 7090 computer, even with what was then an enormous amount of memory—over 1 million bits—the track wasn't wide enough. Their model was defeated by the central paradox of artificial intelligence: systems simple enough to be understandable are not complicated enough to behave intelligently; and systems complicated enough to behave intelligently are not simple enough to understand.

Citing von Neumann, the Romes concluded “that neither human thinking nor human social organization resembles a combinational logical or mathematical system.”
25
Von Neumann believed the foundations of natural intelligence were distinct from formal logic, but through repeated association of the von Neumann architecture with attempts to formalize intelligence this distinction has been obscured. The Romes, following von Neumann's lead, believed a more promising approach to be the cultivation of these behavioral processes within a random, disorganized computational system, but the available matrix delivered disappointing results. Nonetheless, they foresaw that given a more fertile computational substrate, humans would not only instruct the system but would begin following instructions that were reflected back. “Once we incorporate live agents into our dynamic computer model, the result can be a machine that will be capable of teaching humans how to function better in large man-machine system contexts in which information is being processed and decisions are being made.”
26

The Leviathan project grew into an extensive experiment in communication and information handling in composite human-computer organizations (or, in the Romes' words, organisms), occupying a portion of the twenty-thousand-square-foot Systems Simulation Research Laboratory established at the System Development Corporation's Colorado Avenue headquarters in 1961. Twenty-one experimental human subjects were installed in separate cubicles, equipped with keyboards and video displays connected to SDC's Philco 2000 computer, the first fully transistorized system to be commercially announced. All activity was recorded electronically, and human behavior could be observed by researchers overlooking the complex through one-way glass. The Leviathan Technological System incorporated “16 live group heads (level III) reporting to four live branch leaders . . . (level IV) [who] in turn report to a single commanding officer (level V). . . . Underneath the live officers . . . 64 squads of robots are distributed . . . (level II) and report to the group heads directly over them. Each squad of robots consists of artificial enlisted men (level I) who exist only in the computer.”
27
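
The command structure quoted above is a five-level tree, and can be read as a nested data structure; a sketch in Python follows, in which the squad size is an assumption (the quotation does not state it):

```python
# The Leviathan command tree as quoted: one commanding officer (level V),
# four branch leaders (IV), sixteen group heads (III), sixty-four squads
# of simulated robots (II), each made up of artificial enlisted men (I).
# SQUAD_SIZE is an assumption; the quotation does not give the number.

SQUAD_SIZE = 8

commander = {
    "level": "V",
    "role": "commanding officer",
    "branches": [
        {
            "level": "IV",
            "role": f"branch leader {b}",
            "groups": [
                {
                    "level": "III",
                    "role": f"group head {b}.{g}",
                    "squads": [
                        {
                            "level": "II",
                            "robots": [{"level": "I", "id": r} for r in range(SQUAD_SIZE)],
                        }
                        for _ in range(4)   # 4 squads per group head -> 64 in all
                    ],
                }
                for g in range(4)           # 4 group heads per branch -> 16 in all
            ],
        }
        for b in range(4)                   # 4 branch leaders
    ],
}

squads = [s for br in commander["branches"] for g in br["groups"] for s in g["squads"]]
print(len(squads), "squads of", SQUAD_SIZE, "robots each")   # 64 squads
```

Only levels III and above were staffed by live subjects; everything below the group heads existed, as the Romes put it, "only in the computer."
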
The human-machine system was embedded in an artificial environment and fed a flow of artificial work. Under various communication architectures, the Romes observed how organization, adaptive behavior, and knowledge evolved. The Romes concluded that “social hierarchies are no mere aggregations of component individuals but coherent, organic systems in which organic subsystems are integrated. . . . Social development is a telic advance, not just a concatenation of correlated events.”
28

With a tightening of air force purse strings, Leviathan was quietly abandoned, but in the SAGE system and its successors (which include most computer systems and data networks in existence today) these principles were given the room to freely grow. They developed not as an isolated experiment, but as an electronic representation coupled as closely to the workings of human culture, industry, and commerce as the SAGE system was coupled to the network of radar stations scanning the continental shelf. SAGE's knowledge of scheduled airline flight paths, for example, gave rise to the SABRE airline-reservation system, refining the grain of the model down to an aircraft-by-aircraft accounting of how many seats are empty and how many are occupied in real time. Ashby's law of requisite variety demands this level of detail in a system that can learn to control the routing, frequency, and occupancy of passenger aircraft, rather than simply identifying which flights are passing overhead. The tendency of representative models—whether a bank account that represents the exchange of goods, a nervous system that represents an organism's relation to the world, or a reservation system that represents the number of passengers on a plane—is to translate increasingly detailed knowledge into decision making and control. In less than forty years, adding one subsystem at a time, we have constructed a widely distributed electronic model that is instructing much of the operation of human society, rather than the other way around. The smallest transactions count. Centrally planned economies, violating Ashby's law of requisite variety, have failed not by way of ideology but by lack of attention to detail.
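
Ashby's law of requisite variety, invoked twice in this passage, has a standard quantitative form; a sketch of it in the usual entropy (variety) notation, which is not the book's own:

```latex
% Ashby's law of requisite variety: a regulator R acting against
% disturbances D cannot hold the variety of outcomes O below
% H(D) - H(R).
\[
  H(O) \;\ge\; H(D) - H(R)
  \qquad\Longrightarrow\qquad
  H(R) \;\ge\; H(D) - H(O).
\]
% Only a regulator with at least as much variety as the disturbances
% it faces can confine the outcomes to a narrow goal set.
```

In these terms, a reservation system that merely notices flights overhead lacks the variety to regulate seat-by-seat occupancy; one that accounts for every seat does not.
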

Large information-processing systems have evolved through competition over the limited amount of reality—whether airline customers, market volume, or cold warfare—available to go around. This Darwinian struggle can be implemented within a single computer as well as at levels above; all available evidence indicates that nature's computers are so designed. An influential step in this direction was an elder and much less ambitious cousin of Leviathan named Pandemonium, developed by Oliver Selfridge at the Lincoln Laboratory using an IBM 704. Instead of attempting to comprehend something as diffuse and complex as the SAGE air-defense system, Pandemonium was aimed at comprehending Morse code sent by human operators—a simple but nontrivial problem in pattern recognition that had confounded all machines to date.

Selfridge's program was designed to learn from its mistakes as it went along. Pandemonium—“the uproar of all demons”—sought to embody the Darwinian process whereby information is selectively evolved into perceptions, concepts, and ideas. The prototype operated on four distinct levels, a first approximation to the manifold levels by which a cognitive system makes sense of the data it receives. “At the bottom the data demons serve merely to store and pass on the data. At the next level the computational demons or sub-demons perform certain more or less complicated computations on the data and pass the results of these up to the next level, the cognitive demons who weigh the evidence, as it were. Each cognitive demon computes a shriek, and from all the shrieks the highest level demon of all, the decision demon, merely selects the loudest.”
29
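
The four-level scheme can be made concrete with a short sketch; this is not Selfridge's program, only a toy in Python with a made-up signal, made-up features, and made-up weights:

```python
# A toy Pandemonium: data demons store the raw signal, computational
# demons compute features from it, cognitive demons "shriek" in
# proportion to the evidence for their pattern, and a single decision
# demon merely selects the loudest shriek.

raw_signal = [1, 1, 1, 0, 1, 0, 1, 1, 1]          # data demons: stored samples

def duration_ratio(signal):                        # computational demon
    return sum(signal) / len(signal)

def gap_count(signal):                             # computational demon
    return sum(1 for a, b in zip(signal, signal[1:]) if a and not b)

features = {
    "duration_ratio": duration_ratio(raw_signal),
    "gap_count": gap_count(raw_signal),
}

# Cognitive demons: each weighs the evidence with its own set of weights.
cognitive_demons = {
    "dash-like": {"duration_ratio": 2.0, "gap_count": -1.0},
    "dot-like":  {"duration_ratio": 0.5, "gap_count": 1.5},
}

def shriek(weights):
    return sum(w * features[name] for name, w in weights.items())

shrieks = {label: shriek(w) for label, w in cognitive_demons.items()}

# Decision demon: pick the loudest shriek.
decision = max(shrieks, key=shrieks.get)
print(shrieks, "->", decision)
```

Nothing in the structure depends on the particular features; what matters is that each level passes only its summary upward, and the final judgment is nothing more than a choice among competing shrieks.
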

“The scheme sketched is really a natural selection on the processing demons,” Selfridge explained. “If they serve a useful function they survive, and perhaps are even the source for other subdemons who are themselves judged on their merits. It is perfectly reasonable to conceive of this taking place on a broad scale—and in fact it takes place almost inevitably. Therefore, instead of having but one Pandemonium we might have some crowd of them. . . . Eliminate the relatively poor and encourage the rest to generate new machines in their own images.”
30
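
Selfridge's "natural selection on the processing demons" (score the subdemons, eliminate the relatively poor, refill the population with mutated copies of the survivors) can be sketched as a simple loop; the scoring rule, population size, and mutation scale below are illustrative assumptions, not his scheme:

```python
import random

# Treat each demon as a weight vector, score it on labeled examples,
# eliminate the poorer half, and refill the population with mutated
# copies ("new machines in their own images") of the survivors.

def score(weights, examples):
    """Count examples where the weighted feature sum has the right sign."""
    return sum(1 for feats, label in examples
               if (sum(w * f for w, f in zip(weights, feats)) > 0) == label)

def evolve(examples, population_size=20, generations=50, n_features=2):
    population = [[random.uniform(-1, 1) for _ in range(n_features)]
                  for _ in range(population_size)]
    for _ in range(generations):
        population.sort(key=lambda w: score(w, examples), reverse=True)
        survivors = population[: population_size // 2]           # eliminate the poor
        offspring = [[w + random.gauss(0, 0.1) for w in parent]  # mutated images
                     for parent in survivors]
        population = survivors + offspring
    population.sort(key=lambda w: score(w, examples), reverse=True)
    return population[0]

# Toy usage: evolve a demon that prefers the first feature over the second.
examples = [((0.9, 0.1), True), ((0.8, 0.3), True),
            ((0.2, 0.9), False), ((0.1, 0.7), False)]
print(evolve(examples))
```

The loop is wasteful in exactly the way Selfridge conceded: most of the machine's effort goes into demons that will be discarded, which is why the approach had to wait for cheap and plentiful parts.
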
In the 1950s, when machine cycles were a rationed commodity and memory cost fifty cents per bit, Pandemonium was far too wasteful to compete with programs in which every instruction, and every memory address, counted. But when parts are cheap and plentiful (whether neurons or microprocessors or object-oriented programming modules), Pandemonium becomes a viable approach. In the 1950s, while computer engineers were preoccupied with architecture and hardware, Selfridge played the outfield in anticipation of semiautonomous processes spawned by machines within machines. Forty years later architecture and hardware are taken for granted and software engineers spend most of their time trying to cultivate incremental adaptations among the host of unseen demons within. “In fact, the ecology of programming is such that overall, programmers spend over 80 percent of their time modifying code, not writing it,” said Selfridge in 1995.
31

It makes no difference whether the demons are strings of bits, sequences of nucleotides, patterns of connection in a random genetic or electronic net, living organisms, cultural institutions, languages, or machines. It also makes no difference whether the decision demon is nature as a whole, an external predator, an internal parasite, a debugging program, or all of the above making a decisive racket at the same time. As Nils Barricelli discovered through his study of numerical symbioorganisms and John von Neumann discovered through his development of game theory, the tendency to form coalitions makes it impossible to keep the levels of decision making from shifting from one moment to the next.

In this ambiguity lies the persistence of the nineteenth century's argument from design—a proprietary dispute over whether the power of selection and the intelligence that it represents belongs to an all-knowing God or to impartial nature alone; whether it descends from above, arises from below, or is shared universally at every level of the scale. “Suppose there were a being who did not judge by mere external appearances,” wrote Charles Darwin to Asa Gray in 1857, “but who could study the whole internal organization, who was never capricious, and should go on selecting for one object during millions of generations; who will say what he might not effect?”
32

The nature of this selective being determines the scale of the effects—and organisms—that can be evolved. Charles Darwin, knowing it best to attempt one revolution at a time, replaced one higher intelligent being with another of a different kind. Intelligence, by any measure, is based on the ability to be selective—to recognize signal amidst noise, to discriminate right from wrong, to select the strategy leading to reward. The process is additive. Darwin was able to replace the supreme intelligence of an all-knowing God, who selected the whole of creation all at once, with the lesser intelligence of a universe that selected nature's creatures step-by-step. But Darwin still dispensed this intelligence downward from the top.

Darwin's success at explaining the origin of species by natural selection may have obscured the workings of evolution in different forms. The Darwinian scenario of a population of individuals competing for finite resources, with natural selection guiding the improvement of a species one increment at a time, tempts us to conclude that where circumstances do not fit this scenario—despite the lengths to which Darwinism has been stretched—evolutionary processes are not involved. Large, self-organizing systems challenge these assumptions—perhaps even the assumption that a system must be self-reproducing, competing against similar systems and facing certain death or possible extinction, to be classified as evolving or alive. It is possible to construct self-preserving systems that grow, evolve, and learn but do not reproduce, compete, or face death in any given amount of time. It is also possible to view large, complex systems, such as species, gene pools, and ecosystems, as information-processing organizations providing a degree of guiding intelligence to component organisms whose evolution is otherwise characterized as blind. A blind watchmaker who can build an eye can evidently assemble structures that no longer stumble around.

Samuel Butler argued against Darwin in 1887 that “we must also have mind and design. The attempt to eliminate intelligence from among the main agencies of the universe has broken down. . . . There is design, or cunning, but it is a cunning not despotically fashioning us from without as a potter fashions his clay, but inhering democratically within the body which is its highest outcome, as life inheres within an animal or plant.”
33
Butler did not doubt the power of descent with modification, but he believed Darwin's interpretation of the evidence to be upside down. “Bodily form may be almost regarded as idea and memory in a solidified state—as an accumulation of things each one of them so tenuous as to be practically without material substance. It is as a million pounds formed by accumulated millionths of farthings. . . . The theory that luck is the main means of organic modification is the most absolute denial of God which it is possible for the human mind to conceive—while the view that God is in all His creatures, He in them and they in Him, is only expressed in other words by declaring that the main means of organic modification is, not luck, but cunning.”
34
Butler saw each species—indeed, the entire organic kingdom—as a store of knowledge and intelligence transcending the life of its individual members as surely as we transcend the life and intelligence of our component cells.

The dispute between luck and cunning extends beyond evolutionary biology, promising a decisive influence over the future of technology as well. “The notion that no intelligence is involved in biological evolution may prove to be as far from reality as any interpretation could be,” argued Nils Barricelli in 1963. “When we submit a human or any other animal for that matter to an intelligence test, it would be rather unusual to claim that the subject is unintelligent on the grounds that no intelligence is required to do the job any single neuron or synapse in its brain is doing. We are all agreed upon the fact that no intelligence is required in order to die when an individual is unable to survive or in order not to reproduce when an individual is unfit to reproduce. But to hold this as an argument against the existence of an intelligence behind the achievements in biological evolution may prove to be one of the most spectacular examples of the kind of misunderstandings which may arise before two alien forms of intelligence become aware of one another.”
35
Likewise, to conclude from the failure of individual machines to act intelligently that machines are not intelligent may represent a spectacular misunderstanding of the nature of intelligence among machines.
