Authors: Jon Stokes
Tags: #Computers, #Systems Architecture, #General, #Microprocessors
of data as input, and it produces a stream of results as output. For the purposes of our initial discussion, we can generalize by saying that the code stream consists of different types of arithmetic operations and the data stream consists of the data on which those operations operate. The results stream, then, is made up of the results of these operations. You could also say that the results stream begins to flow when the operators in the code stream are carried out on the operands in the data stream.
[Figure: three streams labeled Instructions, Data, and Results flowing through a processor]
Figure 1-1: A simple representation of a general-purpose computer
NOTE
Figure 1-1 is my own variation on the traditional way of representing a processor’s arithmetic logic unit (ALU), which is the part of the processor that does the addition, subtraction, and so on, of numbers. However, instead of showing two operands entering the top ports and a result exiting the bottom port (as is the custom in the literature), I’ve depicted code and data streams entering the top ports and a results stream leaving the bottom port.
2
Chapter 1
To illustrate this point, imagine that one of those little black boxes in the
code stream of Figure 1-1 is an addition operator (a + sign) and that two of
the white data boxes contain two integers to be added together, as shown in
Figure 1-2.
[Figure: 2 + 3 = 5]
Figure 1-2: Instructions are combined with data to produce results
You might think of these black-and-white boxes as the keys on a calculator, with the white keys being numbers and the black keys being operators; the gray boxes are the results that appear on the calculator’s screen. Thus the two input streams (the code stream and the data stream) represent sequences of key presses (arithmetic operator keys and number keys), while the output stream represents the resulting sequence of numbers displayed on the calculator’s screen.
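The calculator analogy above can be sketched in a few lines of Python. This is purely illustrative: the particular operators and operands here are made up, and the names `code_stream`, `data_stream`, and `results_stream` simply echo the terms used in the text.

```python
import operator

# The "code stream" is a sequence of operators (the black keys), the
# "data stream" a sequence of operand pairs (the white keys), and the
# "results stream" is produced as each operator is applied to its
# operands, like numbers appearing on a calculator's screen.
code_stream = [operator.add, operator.sub, operator.mul]
data_stream = [(2, 3), (10, 4), (6, 7)]

results_stream = [op(a, b) for op, (a, b) in zip(code_stream, data_stream)]
print(results_stream)  # [5, 6, 42]
```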
The kind of simple calculation described above represents the sort of
thing that we intuitively think computers do: like a pocket calculator, the
computer takes numbers and arithmetic operators (such as +, –, ÷, and ×) as
input, performs the requested operation, and then displays the results. These
results might be in the form of pixel values that make up a rendered scene in a
computer game, or they might be dollar values in a financial spreadsheet.
The File-Clerk Model of Computing
The “calculator” model of computing, while useful in many respects, isn’t the
only or even the best way to think about what computers do. As an alterna-
tive, consider the following definition of a computer:
A computer is a device that shuffles numbers around from place to place, reading, writing, erasing, and rewriting different numbers in different locations according to a set of inputs, a fixed set of rules for processing those inputs, and the prior history of all the inputs that the computer has seen since it was last reset, until a predefined set of criteria are met that cause the computer to halt.
We might, after Richard Feynman, call this idea of a computer as a
reader, writer, and modifier of numbers the “file-clerk” model of computing
(as opposed to the aforementioned calculator model). In the file-clerk model,
the computer accesses a large (theoretically infinite) store of sequentially
arranged numbers for the purpose of altering that store to achieve a desired
result. Once this desired result is achieved, the computer halts so that the
now-modified store of numbers can be read and interpreted by humans.
The file-clerk model of computing might not initially strike you as all
that useful, but as this chapter progresses, you’ll begin to understand how
important it is. This way of looking at computers is powerful because it
emphasizes the end product of computation rather than the computation
itself. After all, the purpose of computers isn’t just to compute in the
abstract, but to produce usable results from a given data set.
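A toy "file clerk" might look like the following sketch (every name here is illustrative). It reads, modifies, and rewrites numbers in a finite store according to a fixed rule, halting once a predefined criterion is met, at which point the now-modified store is the usable result.

```python
# The store of numbers the file clerk works on.
store = [3, 1, 4, 1, 5, 9]

def halted(store):
    # Halting criterion: stop once the store is in sorted order.
    return all(store[i] <= store[i + 1] for i in range(len(store) - 1))

while not halted(store):
    for i in range(len(store) - 1):
        if store[i] > store[i + 1]:
            # Read two numbers, modify them (swap), and write them back.
            store[i], store[i + 1] = store[i + 1], store[i]

print(store)  # the now-modified store, ready to be read by a human
```

The "computation" here is just a sort, but the shape is the point: the end product is not the arithmetic performed along the way, it is the final state of the store.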
Basic Computing Concepts
3
NOTE
Those who’ve studied computer science will recognize in the preceding description the
beginnings of a discussion of a Turing machine. The Turing machine is, however, too
abstract for our purposes here, so I won’t actually describe one. The description that
I develop here sticks closer to the classic Reduced Instruction Set Computing (RISC)
load-store model, where the computer is “fixed” along with the storage. The Turing
model of a computer as a movable read-write head (with a state table) traversing a
linear “tape” is too far from real-life hardware organization to be anything but confusing in this discussion.
In other words, what matters in computing is not that you did some math,
but that you started with a body of numbers, applied a sequence of operations
to it, and got a body of results. Those results could, again, represent pixel
values for a rendered scene or an environmental snapshot in a weather
simulation. Indeed, the idea that a computer is a device that transforms one
set of numbers into another should be intuitively obvious to anyone who has
ever used a Photoshop filter. Once we understand computers not in terms of
the math they do, but in terms of the numbers they move and modify, we can
begin to get a fuller picture of how they operate.
In a nutshell, a computer is a device that reads, modifies, and writes
sequences of numbers. These three functions—read, modify, and write—
are the three most fundamental functions that a computer performs, and
all of the machine’s components are designed to aid in carrying them out.
This read-modify-write sequence is actually inherent in the three central
bullet points of our initial file-clerk definition of a computer. Here is the
sequence mapped explicitly onto the file-clerk definition:
A computer is a device that shuffles numbers around from place to place, reading, writing, erasing, and rewriting different numbers in different locations according to a set of inputs [read], a fixed set of rules for processing those inputs [modify], and the prior history of all the inputs that the computer has seen since it was last reset [write], until a predefined set of criteria are met that cause the computer to halt.
That sums up what a computer does. And, in fact, that’s all a computer does. Whether you’re playing a game or listening to music, everything that’s going on under the computer’s hood fits into this model.
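A single read-modify-write cycle can be made concrete with a short sketch (the storage label and the particular rule are hypothetical, chosen only for illustration):

```python
# A tiny "storage area" with one named location.
memory = {"balance": 100}

def read(address):
    return memory[address]

def modify(value):
    return value + 50  # the fixed rule applied to the input

def write(address, value):
    memory[address] = value

# One full read-modify-write cycle.
write("balance", modify(read("balance")))
print(memory["balance"])  # 150
```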
NOTE
All of this is fairly simple so far, and I’ve even been a bit repetitive with the explanations to drive home the basic read-modify-write structure of all computer operations. It’s
important to grasp this structure in its simplicity, because as we increase our computing
model’s level of complexity, we’ll see this structure repeated at every level.
The Stored-Program Computer
All computers consist of at least three fundamental types of structures
needed to carry out the read-modify-write sequence:
Storage
To say that a computer “reads” and “writes” numbers implies that
there is at least one number-holding structure that it reads from and
writes to. All computers have a place to put numbers—a storage
area that can be read from and written to.
Arithmetic logic unit (ALU)
Similarly, to say that a computer “modifies” numbers implies that the computer contains a device for performing operations on numbers. This device is the ALU, and it’s the part of the computer that performs arithmetic operations (addition, subtraction, and so on) on numbers from the storage area. First, numbers are read from storage into the ALU’s data input port. Once inside the ALU, they’re modified by means of an arithmetic calculation, and then they’re written back to storage via the ALU’s output port.
The ALU is actually the green, three-port device at the center of
Figure 1-1. Note that ALUs aren’t normally understood as having a code
input port along with their data input port and results output port. They
do, of course, have command input lines that let the computer specify
which operation the ALU is to carry out on the data arriving at its data
input port, so while the depiction of a code input port on the ALU in
Figure 1-1 is unique, it is not misleading.
Bus
In order to move numbers between the ALU and storage, some means of transmitting numbers is required. Thus, the ALU reads from and writes to the data storage area by means of the data bus, which is a network of transmission lines for shuttling numbers around inside the computer. Instructions travel into the ALU via the instruction bus, but we won’t cover how instructions arrive at the ALU until Chapter 2. For now, the data bus is the only bus that concerns us.
The code stream in Figure 1-1 flows into the ALU in the form of a
sequence of arithmetic instructions (add, subtract, multiply, and so on).
The operands for these instructions make up the data stream, which flows
over the data bus from the storage area into the ALU. As the ALU carries
out operations on the incoming operands, the results stream flows out of the
ALU and back into the storage area via the data bus. This process continues
until the code stream stops coming into the ALU. Figure 1-3 expands on
Figure 1-1 and shows the storage area.
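The arrangement just described can be sketched as follows. All names here are illustrative placeholders, and the dictionary lookups stand in for data bus traffic between storage and the ALU:

```python
# Numbered storage locations; location 2 will hold the result.
storage = {0: 2, 1: 3, 2: None}

def alu(opcode, a, b):
    # The command input selects which operation the ALU carries out
    # on the data arriving at its input port.
    ops = {"add": a + b, "sub": a - b, "mul": a * b}
    return ops[opcode]

# "Data bus" traffic: read the operands from storage, run the ALU,
# and write the result back to storage.
x, y = storage[0], storage[1]
storage[2] = alu("add", x, y)
print(storage[2])  # 5
```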
The data enters the ALU from a special storage area, but where does
the code stream come from? One might imagine that it comes from the
keypad of some person standing at the computer and entering a sequence
of instructions, each of which is then transmitted to the code input port of
the ALU, or perhaps that the code stream is a prerecorded list of instruc-
tions that is fed into the ALU, one instruction at a time, by some manual or
automated mechanism. Figure 1-3 depicts the code stream as a prerecorded
list of instructions that is stored in a special storage area just like the data stream, and modern computers do store the code stream in just such a
manner.
[Figure: an ALU connected to a Storage Area]
Figure 1-3: A simple computer, with an ALU and a region for storing instructions and data
NOTE
More advanced readers might notice that in Figure 1-3 (and in Figure 1-4 later)
I’ve separated the code and data in main memory after the manner of a Harvard
architecture level-one cache. In reality, blocks of code and data are mixed together in
main memory, but for now I’ve chosen to illustrate them as logically separated.
The modern computer’s ability to store and reuse prerecorded sequences of commands makes it fundamentally different from the simpler calculating machines that preceded it. Prior to the invention of the first stored-program computer,1 all computing devices, from the abacus to the earliest electronic computing machines, had to be manipulated by an operator or group of operators who manually entered a particular sequence of commands each time they wanted to make a particular calculation. In contrast, modern computers store and reuse such command sequences, and as such they have a level of flexibility and usefulness that sets them apart from everything that has come before. In the rest of this chapter, you’ll get a first-hand look at the many ways that the stored-program concept affects the design and capabilities of the modern computer.
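The stored-program idea can be sketched in miniature. This is a hypothetical toy, not any real instruction set: the instruction sequence lives in a storage area just like the data, so it can be saved and replayed without an operator re-entering it by hand.

```python
# Each instruction names an operation and three storage locations:
# (op, source1, source2, destination).
code_storage = [("add", 0, 1, 2), ("mul", 2, 2, 3)]
data_storage = [2, 3, 0, 0]

# Feed the stored instructions to the "ALU" one at a time.
for op, s1, s2, dst in code_storage:
    a, b = data_storage[s1], data_storage[s2]
    data_storage[dst] = a + b if op == "add" else a * b

print(data_storage)  # [2, 3, 5, 25]
```

Because the program is just more numbers in storage, running it again, or swapping in a different program, requires no manual re-entry of commands; that is the flexibility the text describes.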
Refining the File-Clerk Model
Let’s take a closer look at the relationship between the code, data, and
results streams by means of a quick example. In this example, the code
stream consists of a single instruction, an add, which tells the ALU to add
two numbers together.
1 In 1944 J. Presper Eckert, John Mauchly, and John von Neumann proposed the first stored-program computer, the EDVAC (Electronic Discrete Variable Automatic Computer), and in 1949 such a machine, the EDSAC, was built by Maurice Wilkes of Cambridge University.
The add instruction travels from code storage to the ALU. For now, let’s
not concern ourselves with how the instruction gets from code storage to
the ALU; let’s just assume that it shows up at the ALU’s code input port
announcing that there is an addition to be carried out immediately. The
ALU goes through the following sequence of steps:
1. Obtain the two numbers to be added (the input operands) from data storage.
2. Add the numbers.
3. Place the results back into data storage.
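The three steps above can be sketched directly (the storage labels are illustrative placeholders, not real register names):

```python
data_storage = {"A": 2, "B": 3, "C": None}

# 1. Obtain the two input operands from data storage.
operand1, operand2 = data_storage["A"], data_storage["B"]

# 2. Add the numbers.
result = operand1 + operand2

# 3. Place the result back into data storage.
data_storage["C"] = result
print(data_storage["C"])  # 5
```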
The preceding example probably sounds simple, but it conveys the basic manner in which computers—all computers—operate. Computers are fed
a sequence of instructions one by one, and in order to execute them, the
computer must first obtain the necessary data, then perform the calculation
specified by the instruction, and finally write the result into a place where the end user can find it. Those three steps are carried out billions of times per