It Began with Babbage
Subrata Dasgupta
II

An undergraduate physics major at Stanford University, where Polya taught, took some of his courses. Through them he came to know the ideas in How to Solve It. The undergraduate's name was Allen Newell (1927–1992). By way of Polya, he was introduced to the concept of heuristics.[5]

In 1952, while working as an applied mathematician at the RAND Corporation, the California-based think tank, Newell met a social scientist by the name of Herbert Simon (1916–2001).[6] Simon was 36 at the time. By then, he had already established a formidable reputation not only as a social scientist, but also as a polymath: later, the term “Renaissance man” would be applied to him.[7]

Simon's dissertation, for which the University of Chicago awarded him a PhD in political science in 1942, became, appropriately revised, a book titled Administrative Behavior (1947).[8] That same year, he published his first article, on the foundations of Newtonian mechanics, in the august Philosophical Magazine.[9] He coauthored another book, on Public Administration (1950).[10] During the 1940s, while an assistant professor of political science at the Illinois Institute of Technology in Chicago, he began publishing papers in leading economics journals such as the Quarterly Journal of Economics and Econometrica, and established a close association with the Cowles Commission for Research in Economics, a collection of some of the most eminent economists of the post-World War II era, several of whom were future Nobel laureates, as, indeed, Simon himself was.[11] He published an article in 1952 on causality in the Journal of Philosophy.[12] By 1950—he was then a full professor in the Graduate School of Industrial Administration at the Carnegie Institute of Technology (later, Carnegie-Mellon University) in Pittsburgh—Simon was seriously contemplating using principles of servomechanisms (from cybernetics, created by Norbert Wiener and others, circa 1948 [see Chapter 12, Section I]) in organization theory, and, in 1952, he published a very formal article in Econometrica on servomechanism theory applied to production control.[13] That same year, he wrote an article on the interaction of social groups in the American Sociological Review.[14] Perhaps most relevant to our story, by the mid-1950s, he had read Edmund Berkeley's Giant Brains (1950) and Faster Than Thought (1953), edited by Vivian Bowden.[15]

The meeting of Simon and Newell heralded the beginning of an extraordinary scientific partnership that would span some 20 years (and a friendship that ended only with Newell's death in 1992). Newell became Simon's doctoral student, but theirs was never a teacher/guru–student/disciple relationship. It was always that of collaborators–colleagues. What interests us in this story is how they transformed the idea of heuristic reasoning and problem solving as done by human beings into a research tradition or a subparadigm within the computer science paradigm itself.

III

We cannot really understand how Newell and Simon embarked on their particular intellectual enterprise without knowing something of its prehistory. It began in a discipline as far removed from computer science as one can imagine. In 1947, Simon published Administrative Behavior, wherein he presented a fundamental insight for which he would receive the Nobel Prize in economics 31 years later. The Nobel Foundation citation for 1978 tells us that the prize was awarded for Simon's “pioneering research into the decision-making process in economic organizations.”[16] This “pioneering research” was first embedded in Administrative Behavior, which dealt not with the economic milieu at all, but with decision making in administrative organizations. The administrative “actor,” Simon pointed out (drawing on the ideas of administrative theorist Chester Barnard[17]), is a purposive (or goal-oriented) being. He desires to take actions, to make decisions, that lead to the attainment of goals. For Simon, purposive behavior is rational if the choice of means leads to the attainment of goals.

However, such purposive behavior faces several oppositions. There are limits to the decision maker's innate cognitive capabilities. There are limits to one's knowledge about all the factors that must be taken into consideration to make a fully rational decision. There are limits to the decision maker's ability to cope with the complexity of the environment in which decisions have to be made, and with the rich profusion of alternative actions (choices, decisions) and their consequences.

All these constraints suggest that, except in the simplest situations, it is virtually impossible for an individual to achieve fully rational behavior. There are limits to his or her rationality. The decision maker's behavior is governed by what Simon first called subjective rationality.[18] A decade later, he renamed this notion bounded rationality, and this is the term by which this phenomenon has come to be known.[19]

By 1950, Simon had begun to apply his theory of bounded rationality to the realm of economic decision making[20]—to build a bridge between administrative theory and economic theory. And this bridge was to be paved with the stuff of psychology.[21] In fact, Simon was tending toward a psychological (or behavioral) theory of what might be called a “universal decision maker,” regardless of the domain in which decisions are to be made—including chess. Bounded rationality lay at the core of this theory.

His interest in chess reached back to boyhood. In high school he had studied the game seriously, to the extent that he had mastered the basic decision-making issues in chess and tic-tac-toe (H. A. Simon, personal communication, November 4, 2000). This interest intensified years later when, in 1952, he attended a lecture by the ubiquitous John von Neumann on computer chess, which, as we have seen, had been studied by Claude Shannon circa 1949 to 1950 (see Chapter 11, Section VII). He was sufficiently stimulated by the lecture actually to engage, by mid-1953, with the design of chess-playing programs.[22] The chess player—human or automaton—is as much constrained by bounded rationality in deciding which moves to make as are administrative or economic decision makers.

The operative word here is computer chess. In summer 1954, Simon taught himself to program the IBM 701. By then, computers were much on his mind[23]—and not just in the realm of chess. As a visiting scientist at the RAND Corporation's Systems Research Laboratory, his interest in computers had been stimulated after meeting Allen Newell and another person, Cliff Shaw (1922–1991), “an outstandingly talented and sophisticated systems programmer.”[24] Their work made Simon realize that the computer was much more than a number processor—that it was, in fact, a general symbol processor.[25]

Newell was also much interested in computer chess. In 1955, he presented a paper on a “chess machine” at the Western Joint Computer Conference.[26] That same year, Simon published what was perhaps his first definitive article on bounded rationality, titled “A Behavioral Model of Rational Choice,” in the Quarterly Journal of Economics.[27]

The question was, Simon wrote, how to cope with bounded rationality. “Classic” concepts of rationality assumed that complete information was available to the decision maker, who computed all possible outcomes of her choices and then selected the “best” (optimal) decision from all the possibilities. In fact, there was no empirical evidence, Simon pointed out, that people actually went about the matter in this way. His task in the article, Simon wrote, was to propose a model that would endow the decision maker with a form of rationality that was consistent with the cognitive limits possessed by organisms—including, especially, humans—in the actual kinds of environments—social as well as physical—that organisms occupy.[28]

So what does a decision maker really do? Instead of seeking an optimal outcome, he will establish an “acceptable” or “satisfactory” goal, an aspiration level[29] that is less ambitious than the optimal. This reduces the amount of information the individual needs to make a decision. The decision maker then searches the reduced “space” of possibilities for a choice that meets the aspired goal. So the decision will not be optimal. Rather, in resigned recognition of the reality of bounded rationality, the decision will be “satisfactory.” Or, as Simon would coin the term in 1956, it will be satisficing.[30] To satisfice is to choose what is “good” rather than what is “best.” The boundedly rational decision maker is a satisficing, rather than an optimizing, being.
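The contrast between optimizing and satisficing can be sketched in a few lines of Python. The options, scores, and aspiration level below are invented purely for illustration:

```python
def optimize(options, value):
    """Classical rationality: evaluate every option and pick the best."""
    return max(options, key=value)

def satisfice(options, value, aspiration):
    """Bounded rationality: take the first option that is 'good enough'."""
    examined = 0
    for option in options:
        examined += 1
        if value(option) >= aspiration:
            return option, examined   # satisfactory, not necessarily optimal
    return None, examined             # nothing met the aspiration level

# Hypothetical choice among five offers, scored by a single number.
offers = [48, 55, 72, 51, 90]
best = optimize(offers, lambda x: x)                            # 90
good_enough, n = satisfice(offers, lambda x: x, aspiration=70)  # (72, 3)
```

The optimizer must evaluate every option before it can answer; the satisficer commits as soon as its aspiration level is met, here after examining only three of the five options.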

However, even satisficing leaves a formidable problem for the decision maker: how does one search the “space” of possibilities? This is where heuristics enters the discourse. In his design of a chess machine, Newell incorporated a variety of heuristics that the computer could deploy to select its moves. He was, of course, not the first to have done so; recall Claude Shannon's minimax strategy—a heuristic—of 1950 (see Chapter 11, Section VII). However, although Newell's immediate intent was to design a program to play chess, like Simon he had larger ambitions. Chess was a means to understand how the mind deals with the complexities of the real-world environment, and how computers could simulate such mindlike behavior. Newell and Simon pondered at great length the possibility of modeling human thinking by computers.[31] Here was a meeting of profoundly like minds.

IV

In September 1956, at the same symposium at MIT where Noam Chomsky presented his seminal paper in linguistics (see Chapter 13, Section XVIII), Newell and Simon announced the first fruits of their joint ruminations. They described a computer program called Logic Theorist (LT), which was capable of discovering proofs for theorems in symbolic logic.[32]

A computer program had been given a name, exactly as individual computers were given such names as the ENIAC and the EDSAC in the 1940s. This, itself, gives cause for momentary pause. It suggests that a very special artifact had been created, for here was a program that exhibited behavior akin to the highest form of human intelligence. A reader of their 1956 work would irresistibly have harked back to Alan Turing's 1950 essay “Computing Machinery and Intelligence” (see Chapter 11, Section XI).

LT was able to prove several theorems stated (and proved) in one of the chapters of Principia Mathematica (1910–1913), the massive treatise on the logical foundations of mathematics authored by A. N. Whitehead and Bertrand Russell. Symbolic logic was the meta-language (see Chapter 13, Section XVI) for describing mathematics in the Principia, and the entities in this work were expressions (axioms and theorems) in symbolic logic and the proofs of the theorems.

By the mid-1950s, computers were being used routinely to solve complex mathematical problems such as finding roots of polynomials, evaluating matrices, and solving differential equations. What was it about LT that earned it a name of its own, an individuality, special attention?

The answer lay in the nature of proofs in axiomatic (formal) systems of the kind mathematics and symbolic logic are concerned with (see Chapter 4, Section I, for more on axiomatic systems). One begins with a set of axioms—expressions that are assumed to be true—and definitions. There will also be a small number of “rules of inference,” by means of which new true expressions (theorems) can be inferred (or deduced), beginning with axioms, definitions, and other previously proved theorems. A proof of a theorem is a sequence of expressions e1, e2, …, eN, where e1 is either an axiom or a previously proved theorem, each subsequent ei is derived from its predecessors by applying the rules of inference and using the axioms and definitions, and eN is the theorem to be proved.

What makes this situation different from solving numeric problems such as differential equations is that algorithms exist for the solution of the numeric problems. In the case of proofs of theorems and the like, a “systematic algorithm” can be developed, but it would involve a potentially exhaustive search of all chains of inference until one leads to an expression corresponding to the theorem of interest.[33] The situation is exactly the same as the human decision maker's problem in arriving at a “fully rational” optimal decision. The decision maker is faced with bounded rationality, which for all practical purposes prevents him from optimizing his decision. So also, although in principle an algorithm may be constructed that searches exhaustively the space of all possible sequences of inferences until one is found leading to the theorem of interest, such an in-principle possibility is computationally impractical because of the very large number of pathways of inference that might have to be explored in arriving at a valid proof.[34]

Like the human decision maker, LT must resort to heuristic reasoning and search. And, like the human decision maker operating under bounded rationality, the theorem-proving computer program must use heuristic clues, rules of thumb, and strategies of various sorts to search for a proof.
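The difference between exhaustive and goal-directed search can be made concrete with a toy prover. The facts and rules below are invented for illustration (this is not LT's actual method, which manipulated the formulas of the Principia by operations such as substitution, detachment, and chaining), but they show why working backward from the theorem prunes the space of inferences:

```python
# Toy inference system: facts are strings; each rule pairs a set of
# premises with a conclusion. Axioms and rules are invented examples.
AXIOMS = {"p", "q"}
RULES = [
    ({"p"}, "r"),
    ({"q", "r"}, "s"),
    ({"p", "q"}, "t"),       # a valid but irrelevant inference
    ({"s"}, "goal"),
]

def prove_exhaustive(axioms, rules, theorem):
    """Forward chaining: derive every consequence until the theorem
    appears or nothing new can be inferred. Complete, but it derives
    conclusions (like 't') that have nothing to do with the theorem."""
    known, attempts = set(axioms), 0
    changed = True
    while changed and theorem not in known:
        changed = False
        for premises, conclusion in rules:
            attempts += 1
            if premises <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return theorem in known, attempts

def prove_backward(axioms, rules, theorem):
    """Heuristic, goal-directed search: work backward from the theorem,
    pursuing only subgoals that could contribute to it. Assumes the
    rule set is acyclic."""
    if theorem in axioms:
        return True
    return any(
        all(prove_backward(axioms, rules, p) for p in premises)
        for premises, conclusion in rules
        if conclusion == theorem
    )
```

Both provers establish the theorem, but the backward search never touches the irrelevant conclusion "t"; that pruning is, in miniature, the sense in which heuristic search tames the combinatorial explosion of inference pathways.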
