It Began with Babbage
Subrata Dasgupta
II

There is, I believe, good sense in beginning with Babbage. The year 1819 saw the origin of a train of thinking that led Babbage to the design of a fully automatic, programmable computing machine. English philosopher Alfred North Whitehead (1861–1947) once wrote that European philosophy consisted of a series of footnotes to Plato. We cannot claim that the origin and evolution of computer science consisted of a series of footnotes to Babbage; as we have seen, most of the early creators were ignorant of Babbage. But we can certainly claim that Babbage invented a machine architecture and a principle of programming that anticipated remarkably what would come a century later. There is a perceptible modernity to Babbage, and we recognize it because we see it in much that followed. Babbage's ghost, as it were, haunts the intellectual space of what became computer science.

III

As for ending this history in 1969, here, too, there is some rationale. It was during the 1960s that computer science assumed a distinct identity of its own. By the act of naming itself, it broke free of the umbilical cords that had tied it to mathematics and electrical engineering. Universities and other academic and research institutions founded departments with names that varied slightly (computer science, computing science, computation, computing, computer and information science in the English-speaking world; informatik, informatique, and datalogy in Europe), but with references that were unmistakable. The business of these academic units was automatic computing: the concept itself, its nature, the mechanism to achieve it, and all phenomena surrounding the concept.

The stored-program computing paradigm emerged between 1945 and 1949. The former year marked the appearance of the seminal EDVAC report, authored by John von Neumann, in which he laid out the logical principles of stored-program computing; the latter year was when these principles were tested empirically and corroborated by way of the Cambridge EDSAC and the Manchester Mark I. In Thomas Kuhn's terms, all that preceded this period was “preparadigmatic.”[3]
However, this is neither to dismiss nor to denigrate the work preceding 1945, because much of what happened before the EDVAC report led to the stored-program computing principles. And, although the seemingly far-removed abstractions Alan Turing created in 1936 may or may not (we do not know for sure) have shaped von Neumann's composition of the EDVAC report, the significance of the Turing machine would emerge after the founding of the stored-program computing paradigm. The Turing machine formalism offered, in significant ways, the mathematical foundations of the paradigm. In fact, some would claim that it was Turing's Entscheidungsproblem paper, rather than the EDVAC report, that really constituted the stored-program computing paradigm.

Now, if we were “pure” Kuhnians, we would believe that much of the excitement was over by 1949, that what followed was what Kuhn called “normal science”: essentially, puzzle solving. In fact, it was nothing like that at all. What actually happened from 1945 through 1949 was the creation of a core of a new paradigm. From a cognitive point of view, it marked the germination, in some minds, of a core schema.

It is tempting to draw a parallel with axiomatic mathematics. The basic definitions and axioms form the starting point for some branch of mathematics (such as Euclid's postulates in plane geometry or Peano's axioms in arithmetic), but the implications of these axioms and definitions are far from obvious; thus the mathematician's goal is to explore and discover their implications and to produce, progressively, a rich structure of knowledge (theorems, identities, and so forth) beginning with the axioms.

So also, the stored-program computing principles became the starting point. The implications of these principles circa 1949 were far from obvious. The 1950s and 1960s were the decades during which these implications were worked out to an impressive depth; the elemental schema lodged in people's minds about the nature of automatic computation was expanded and enriched. The paradigm did not shift; it was not overthrown or replaced by something else. It was not a case of a “computing revolution” as a whole; rather, new subparadigms, linked to the core, were created. If there were revolutions, they were local rather than global. The outcome, however, was that the paradigm assumed a fullness, a richness. And, as we saw, the 1960s, especially, witnessed what I have described as an “explosion of subparadigms.”

IV

In the meantime, the subject that had motivated the creation of automatic computing in the first place, numeric mathematics (or numeric analysis), grew in sophistication. But, in a certain way, it stood apart from the other subparadigms. Numeric mathematics concerned itself with “the theory and practice of the efficient calculations of approximate solutions of continuous mathematical problems”[4]; and insofar as it dealt with approximating continuous processes (polynomial functions, differential equations, and so on), it formed the link between the “new” computer science and the venerable world of continuous mathematics.

With the emergence of all the other new subparadigms, numeric mathematics was “decentered,” so to speak. Much as, thanks to Copernicus, the earth became “just” another planet orbiting the sun, so did numeric mathematics become “just” another subparadigm. Unlike the others, it had a long pedigree; but, as a subparadigm linked to the stored-program computing core, numeric mathematics was also enormously enriched during the 1950s and 1960s. As distinguished numeric analyst Joseph Traub wrote in 1972, in virtually every area of numeric mathematics, the current best algorithms had been invented after the advent of the electronic digital computer. The sheer exuberance and promise of this artifact breathed new life into a venerable discipline.[5]

V

If we take Kuhn's idea of paradigms seriously, we must also recognize that there is more to a paradigm than its intellectual and cognitive aspects. The making of a paradigm entails social and communicative features.

Thus, another marker of the independent identity of the new science was the launching, during the 1950s and 1960s, of the first periodicals dedicated solely to computing: yet another severance of umbilical cords. In America, the ACM, founded in 1947, inaugurated its first journal, the Journal of the ACM, in 1954; then, in 1958, it started what became its flagship publication, the Communications of the ACM; and in 1969, it launched the first issues of Computing Surveys.

Also in America, the Institute of Radio Engineers (IRE) brought forth, in 1952, the IRE Transactions on Electronic Computers. After the IRE merged with the American Institute of Electrical Engineers in 1963 to form the Institute of Electrical and Electronics Engineers (IEEE), a suborganization called the Computer Group was established in 1963/1964; it was the forerunner of the IEEE Computer Society, formed in 1971. The Computer Group News was first published in 1966.

In Britain, the British Computer Society, founded in 1957, published the first issue of the Computer Journal in 1958. In Germany, Numerische Mathematik was started in 1959. In Sweden, a journal called BIT, dedicated to all branches of computer science, came into being in 1961.

Commercial publishers joined the movement. In 1957, Thompson Publications in Chicago, Illinois, began publishing Datamation, a magazine (rather than a journal) devoted to computing. In 1957/1958, Academic Press launched a highly influential journal, Information and Control, dedicated to theoretical topics in information theory, language theory, and computer science.

VI

The appearance of textbooks was yet another signifier of the consolidation of academic computer science. Among the pioneers, perhaps the person who understood the importance of texts as well as anyone was Maurice Wilkes. As we have seen, The Preparation of Programs for an Electronic Digital Computer (1951), coauthored by Wilkes, David Wheeler, and Stanley Gill, was the first book on computer programming. Wilkes's Automatic Digital Computers (1956) was one of the earliest (perhaps the first) comprehensive textbooks on the whole topic of computers and computing, and he would also write A Short Introduction to Numerical Analysis (1966) and the influential Time-Sharing Computer Systems (1968). Another comprehensive textbook (reflecting, albeit, an IBM bias) was Automatic Data Processing (1963), authored by Frederick P. Brooks, Jr., and Kenneth E. Iverson, both (then) with IBM. In the realm of what might generally be called computer hardware design, IBM engineer R. K. Richards published Arithmetic Operations in Digital Computers (1955), a work that would be widely referenced for its treatment of logic circuits. Daniel McCracken (who became a prolific author) wrote Digital Computer Programming (1957) and, most notably, a best-seller, A Guide to FORTRAN Programming (1961), the first of several “guides” on programming he would write throughout the 1960s.

Among trade publishing houses, Prentice-Hall launched its Prentice-Hall Series in Automatic Computation during the 1960s. Among its most influential early texts was Marvin Minsky's Computation: Finite and Infinite Machines (1967), a work on automata theory. By the time this book appeared, there were already more than 20 books in the series, on numeric analysis; the programming languages PL/I, FORTRAN, and Algol; and applications of computing. McGraw-Hill also started its Computer Science Series during the 1960s. Among its early volumes was Gerard Salton's Automatic Information Organization and Retrieval (1968). The author was one of the progenitors of another subparadigm in computer science during the 1960s, dedicated to the theory of, and techniques for, the automatic storage and retrieval of information held in computer files; this branch of computer science would link the field to library science. And, as we have seen, Addison-Wesley, as the publisher of the Wilkes/Wheeler/Gill text on programming in 1951, can lay claim to being the first trade publisher in computer science. It also published, during the 1960s, the first two volumes of Donald Knuth's The Art of Computer Programming (1968 and 1969, respectively). Another publisher, Academic Press, distinguished for its dedication to scholarly scientific publications, inaugurated in 1963 its Advances in Computers series of annual volumes, each composed of long, comprehensive, and authoritative chapter-length surveys and reviews of specialized topics in computer science by different authors.

The explosion of subparadigms during the 1960s was thus accompanied by a proliferation of periodicals (and, with them, articles) and books.

VII

The computer science paradigm that had emerged by the end of the 1960s, then, constituted a core practical concept and a core theory: the former, the idea of the stored-program computer; the latter, a theory of computation as expressed by the Turing machine. These core elements were surrounded by a cluster of subparadigms, each embodying a particular aspect of automatic computation, each nucleating into a “special field” within (or of) computer science, to wit: automata theory, logic design, theory of computing, computer architecture, programming languages, algorithm design and analysis, numeric analysis, operating systems, artificial intelligence, programming methodology, and information retrieval. Looking back from the vantage of the 21st century, these can be seen as the “classic” branches of computer science. They were all, in one way or another, concerned with the nature and making of computational artifacts—material, abstract, and liminal.

We have also seen that a central and vital methodology characterized this paradigm: the twinning of design-as-theory (or the design process-as-theory construction) and implementation-as-experimentation. Even abstract computational artifacts (algorithms and computer languages) or the abstract faces of liminal artifacts (programs, computer architectures, sequential machines) are designed. The designs are the theories of these artifacts. And even abstract artifacts are implemented; the implementations become the experiments that test empirically the designs-as-theories. Algorithms are implemented as programs, programs (abstract texts) become executable software, programming languages by way of their translators become liminal tools, computer architectures morph into physical computers, and sequential machines become logic or switching circuits. Turing machines are the sole, lofty exceptions; they remain abstract and, although they are designed, they are never implemented.

This methodology (design-as-theory/implementation-as-experimentation) is very much the core methodology of most sciences of the artificial, including the “classical” engineering disciplines. It is what bound the emerging computer science to the other artificial sciences on the one hand and separated it from both mathematics and the natural sciences on the other. And it was this synergy of design and implementation that made the computer science paradigm a fundamentally empirical, rather than a purely mathematical or theoretical, science.

Another feature of the paradigm we have seen emerge is that, although computer scientists may have aspired to universal laws in the spirit of the natural sciences, they were rather more concerned with the individual. The design of a computational artifact is the design of an individual artifact; it is a theory of (or about) that particular artifact, be it an algorithm, a program, a language, an architecture, or whatever. Computer science as a science of the artificial is also, ultimately, a science of the individual.
