A virtual memory, then, is a grand illusion. It consists of a “virtual address space” partitioned into fixed-length subspaces called pages, so that a single program would occupy one or (usually) more pages, or into variable-length subspaces called segments, each occupied by a functionally distinct program module (such as an Algol procedure or block). A program would reside physically at all times as a collection of pages or segments in secondary memory. In the case of the Atlas, this was a drum store. However, pages or segments would be transferred automatically from secondary memory and placed in some region of the much smaller main memory as and when required for execution. The Atlas used a paging scheme with a page size of 512 words.
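To make the paging arithmetic concrete, here is a minimal illustrative sketch, not drawn from the Atlas documentation, of how a word address might be split into a page number and a word-within-page offset when pages hold 512 words; the function and variable names are invented for the illustration.

```python
# Illustrative sketch only: splitting a word-addressed virtual address under 512-word paging.
# The names and the example address are assumptions for exposition, not Atlas specifics.

PAGE_SIZE = 512          # words per page (the Atlas page size mentioned in the text)

def split_address(virtual_address):
    """Return (page_number, offset) for a word-addressed virtual address."""
    page_number = virtual_address // PAGE_SIZE   # which page the word lies in
    offset = virtual_address % PAGE_SIZE         # which word within that page
    return page_number, offset

# Example: word 5,000 of a program's virtual address space
# falls at word 392 of page 9 (5000 // 512 = 9, 5000 % 512 = 392).
print(split_address(5000))   # -> (9, 392)
```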
Allocating main memory regions to individual program segments or pages meant that an individual program's parts in main memory might not be stored entirely in adjacent memory words but, rather, might be scattered across different regions of main memory. In a virtual memory system coupled with multiprogramming, the “virtual address space” would consist, at any time, of several user programs located either in segments or in pages. Parts of an individual program (“up” for execution) would also be located in pages or segments scattered all over main memory, so that adjacent blocks of main memory might hold segments or pages of different programs.
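The following sketch, with entirely invented page tables and frame numbers, merely illustrates the point: each program carries its own mapping from its pages to main memory page frames, and consecutive frames may well hold pages of different programs.

```python
# Illustrative sketch: two programs whose pages are scattered and interleaved in main memory.
# The frame assignments are invented purely to show the interleaving, not taken from any real system.

PAGE_SIZE = 512

# Each program's page table maps its page numbers to main memory frame numbers
# (None means the page is still out on the drum, i.e., in secondary memory).
page_table = {
    "program_A": {0: 3, 1: 7, 2: None},   # pages of A sit in frames 3 and 7
    "program_B": {0: 4, 1: 6},            # pages of B sit in frames 4 and 6
}

def translate(program, virtual_address):
    """Map a program's virtual address to a physical address, if the page is resident."""
    page, offset = divmod(virtual_address, PAGE_SIZE)
    frame = page_table[program].get(page)
    if frame is None:
        return None          # page fault: the page must first be fetched from secondary memory
    return frame * PAGE_SIZE + offset

print(translate("program_A", 600))   # page 1 of A -> frame 7 -> physical address 3672
print(translate("program_B", 600))   # page 1 of B -> frame 6 -> physical address 3160
```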
The principle of organizing and storing programs in memory in the form of segments was first used in an experimental machine built at Rice University, Houston, Texas, in 1961.[59] A similar scheme was used on the Burroughs Corporation's B5500 machine circa 1961.[60] However, neither system involved multiprogramming. The principle of segmentation and its extension to the design of multiprogrammed virtual memory computers was explored by Jack Dennis (1931–) and his associates at MIT during the mid-1960s as part of a project called Multics.[61] The Multics operating system (discussed more later), in fact, used a combination of segmentation and paging.[62] That is, the virtual address space was occupied by program segments, but only pages of a program's segment would be transferred to occupy corresponding blocks (“page frames”) in main memory.
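As a rough, hedged illustration of this combination, and not a description of Multics's actual data structures, a translation in such a scheme might pass through a segment table and then through that segment's own page table, as sketched below with invented names, sizes, and table contents.

```python
# Illustrative sketch of combined segmentation and paging (in the spirit of such schemes,
# but with invented table layouts and an assumed page size; not the actual Multics structures).

PAGE_SIZE = 1024   # assumed page size in words, purely for the illustration

# Each segment carries its own page table mapping page numbers to main memory frames;
# None marks a page of the segment that is not currently resident.
segment_table = {
    "editor":   {0: 12, 1: 5},
    "compiler": {0: 9, 1: None, 2: 8},
}

def translate(segment, word_in_segment):
    """Translate (segment name, word offset within that segment) to a physical word address."""
    page, offset = divmod(word_in_segment, PAGE_SIZE)
    pages = segment_table.get(segment)
    if pages is None:
        raise KeyError("no such segment")        # the analog of a "segment fault"
    frame = pages.get(page)
    if frame is None:
        return None                              # page fault: the page must be brought in first
    return frame * PAGE_SIZE + offset

print(translate("compiler", 2100))   # page 2, offset 52 -> frame 8 -> physical address 8244
```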
The virtual memory concept as originally proposed by Kilburn and colleagues is yet another prime example of the creation of a subparadigm within the larger stored-program computing paradigm. Clearly, the “memory management problem” inhered in the overall paradigm. In this sense, the memory management problem was a “normal (computer) science” problem, in Thomas Kuhn's sense. However, the solution itself, virtual memory, was of such consequence that it created its own subparadigm; it spawned a whole range of problems (theoretical, experimental, design-related, architectural, algorithmic) that needed to be solved for virtual memory to be realizable, to the extent that virtual memory became a fertile research arena.[63]
Among the problems to which the virtual memory concept gave rise, and which were investigated by computer scientists during the 1960s, were the design of algorithms for transferring pages/segments from virtual address space into main memory, algorithms for selecting pages/segments in main memory for replacement by “incoming” pages/segments, analysis of the performance of paging and segmentation schemes, and analysis of the performance of secondary memory that implemented virtual memory. Most interestingly, the virtual memory subparadigm unveiled new phenomena that became problems in their own right. One was the peculiar nature of “program behavior” in virtual memory environments.[64]
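To suggest the flavor of the replacement problem, here is a minimal sketch of one classic policy, least recently used (LRU), assuming a fixed number of page frames; it is purely illustrative and is not the algorithm used by the Atlas or any particular 1960s system.

```python
from collections import OrderedDict

# Illustrative sketch of a least-recently-used (LRU) page replacement policy.
# Purely for exposition; the class name, trace, and frame count are invented.

class LRUPageFrames:
    def __init__(self, num_frames):
        self.num_frames = num_frames
        self.resident = OrderedDict()   # resident pages, ordered from least to most recently used

    def reference(self, page):
        """Touch a page; return True on a hit, False on a page fault."""
        if page in self.resident:
            self.resident.move_to_end(page)     # mark as most recently used
            return True
        if len(self.resident) >= self.num_frames:
            self.resident.popitem(last=False)   # evict the least recently used page
        self.resident[page] = True              # bring the page in from secondary memory
        return False

frames = LRUPageFrames(3)
trace = [0, 1, 2, 0, 3, 0, 4]
faults = sum(0 if frames.reference(p) else 1 for p in trace)
print(f"{faults} page faults for reference trace {trace}")   # 5 faults with 3 frames
```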
Operating systems, as earlier noted, came of age during the 1960s. They were also, by far, the largest and most complex software artifacts yet built. People were not so much speaking of writing operating systems as of building them. And building large software systems became an engineering enterprise. In October 1968, a conference sponsored by the North Atlantic Treaty Organization (NATO) was held in Garmisch, Germany, on software engineering; a year later, another NATO-sponsored conference on software engineering was held in Rome, Italy.[65]
The IBM OS/360, released in 1966,[66] an operating system for the System/360, required an estimated 5000 man-years between 1963 and 1966 for its design, implementation, and documentation.[67] However, we get a more complete sense of the functional range and complexity of the most ambitious and innovative operating systems of this period by considering the Multics system.
In its mature state, Multics, built under the leadership of Fernando J. Corbató (1926–) at MIT, consisted of some 1500 program modules for a total of approximately one million lines of machine code.[68]
Its structure was a direct consequence of its overall objective: to create a general computer utility analogous to electric power and telephone utilities that would run continuously and reliably, and to provide a comprehensive range of services to a population of users interacting with it through remote terminal access. The designers refined this all-encompassing objective into a collection of more specific capabilities, including time-sharing facilities, virtual memory, protection of users' programs from unauthorized access, a sophisticated programming environment in which users could work, a number of programming languages at users' disposal, an interuser communication facility (a forerunner of present-day e-mail), flexibility for enhancing the system in response to new technology and user expectations, and certain other maintenance, monitoring, and management capabilities.
Multics was thus conceived as an instance of what historian of technology Thomas Parke Hughes (1923–) would describe as a “technological system.”[69]
Multics had an elaborate phylogeny (see Chapter 7, Section VII). It drew upon (a) an operating system called CTSS (Compatible Time-Sharing System), also built at MIT between 1960 and 1963, which was, in fact, the first operational time-sharing system[70]; (b) the combination of the two virtual memory schemes of paging and segmentation; (c) multiprogramming; and (d) a very sophisticated scheme for protecting a user's programs and data in memory from unauthorized access by other programs.[71] The Multics designers drew on these earlier inventions, but they combined, expanded, and generalized them and, with this synergy, created a significantly original but enormously complex artifact. Moreover, the project constituted a significant experiment in the use of a high-level programming language (PL/I, developed by IBM) to write a large operating system.[72]
There is a price to be paid for the kind of complexity that operating systems such as OS/360 and Multics manifested. How does a designer or programmer (or a programming team) deal with this kind of complexity? By the time of the 1968 NATO conference, there were rumblings of a “software crisis.” And, as Thomas Kuhn stated, a crisis in science leads to the possibility of revolt or even revolution, a paradigm shift.[73] At the very least, it led to a serious challenge to the status quo. There were those, even before the NATO conferences, even before “software crisis” became an alarmed mantra, who were willing to lead a revolution. The arch-leader was a man from Holland.