The Half-Life of Facts
Samuel Arbesman

To understand how much has changed, and how rapidly, during the 1990s, we can look to the Today show. At one point in January 1994, Bryant Gumbel was asked to read an e-mail address out loud. He was at an utter loss, especially when it came to the “a, and then the ring around it.” This symbol, @, is second nature for us now, but Gumbel found it baffling. Gumbel and Katie Couric then went into a discussion about what the Internet is. They even asked those off camera, “What is ‘Internet’ anyway?”

The @ symbol has been on keyboards since the first Underwood typewriter in 1885. However, it languished in relative obscurity until people began using it as a separator in e-mail addresses, beginning in 1971. Even then, its usage didn’t enter the popular consciousness until decades later. Gumbel’s confusion, and our amusement at this situation, are a testament to the rapid change that the Internet has wrought.

But, of course, these changes aren’t limited to the Internet. When I think of a 386 processor I think of playing SimCity 2000 on my friend’s desktop computer, software and hardware that have both long since been superseded. In digital storage media, I have personally used 5¼-inch floppy disks, 3½-inch diskettes, Zip disks, rewritable CDs, flash drives, burnable DVDs, even the Commodore Datasette, and in 2012 I save many of my documents to the storage that’s available anytime I have access to the Internet: the cloud. This is over a span of less than thirty years.

Clearly our technological knowledge changes rapidly, and this shouldn’t surprise us. But in addition to our rapid adaptation to all of the change around us—which I address in chapter 9—what should surprise us is that there are regularities in these changes in technological knowledge. It’s not random and it’s not erratic. There is a pattern, and it affects many of the facts that surround us, even ones that don’t necessarily seem to deal with technology. The first example of this? Moore’s Law.

.   .   .

WE’VE all at least heard of Moore’s Law. It deals with the rapid doubling of computer processing power. But what exactly is it and how did it come about? Gordon Moore, of the eponymous law, is a retired chemist and physicist as well as the cofounder of the Intel Corporation. He founded Intel in 1968 with Robert Noyce, who helped invent the integrated circuit, the core of every modern computer. But Moore wasn’t famous or fabulously wealthy when he developed his law. In fact, he hadn’t even founded Intel yet. Three years before, in 1965, Moore wrote a short paper in the journal Electronics entitled “Cramming More Components Onto Integrated Circuits.”

In this paper Moore predicted the number of components that it would be possible to place on a single circuit in the years 1970 and 1975. He argued that growth would continue to increase at the same rate. Essentially, Moore’s Law states that the processing power of a single chip or circuit will double every year. He didn’t arrive at this conclusion through exhaustive amounts of data gathering and analysis; in fact, he based his law on only four data points.

The incredible thing is that he was right. This law has held roughly true since 1965, even as more and more data have been added to the simple picture he examined. While with more data we now know that the period for doubling is closer to eighteen months than a year, the principle stands. It has weathered the personal computer revolution, the march from 286 to 486 to Pentium, and the many advances since then. Just as in science, we have experienced an exponential rise in technological advances over time: Processing power grows every year at a constant rate rather than by a constant amount. And according to the original formulation, processing power roughly doubles each year, growing to about 200 percent of its previous value annually.
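To make the difference between growing at a constant rate and growing by a constant amount concrete, here is a minimal Python sketch that projects both forward. The starting component count and the tidy eighteen-month doubling time are illustrative assumptions, not figures taken from Moore’s paper.

    # Constant-rate (exponential) vs. constant-amount (linear) growth.
    # Both starting values below are purely illustrative.
    start = 2300          # components on an early chip (illustrative)
    doubling_months = 18  # doubling time in the revised formulation

    for years in range(0, 11, 2):
        exponential = start * 2 ** (years * 12 / doubling_months)  # constant rate
        linear = start + start * years                             # constant amount
        print(f"after {years:2d} years: exponential ~{exponential:,.0f}, linear ~{linear:,.0f}")

Even after a single decade the constant-rate projection is roughly ten times the constant-amount one, and the gap only widens from there.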

Moore’s Law hasn’t simply affected our ability to make more and more calculations more easily. Many other developments occur as an outgrowth of this pattern. When processing power doubles rapidly it allows much more to be possible. For example, the number of pixels that digital cameras can process has increased directly due to the regularity of Moore’s Law.

But it gets even more interesting. If you generalize Moore’s Law from chips to information technology and processing power in general, it becomes the latest in a long line of technical rules of thumb that explain extremely regular change in technology.

What does this mean? Let’s first take the example of processing power. Rather than simply focusing on the number of components on an integrated circuit, we can think more broadly. What do these components do? They enable calculations to occur. So if we measure calculations per second, or calculations per second at a given cost (which is the kind of thing that might be useful when looking at affordable personal computers), we can ignore the specific underlying technologies that enable these things to happen and instead focus on what they are designed to do.

Chris Magee set out to do exactly that. Magee is a professor at MIT in the Engineering Systems Division, an interdisciplinary department that defies any sort of simple description. It draws people from lots of different areas—physics, computer science, engineering, even aerospace science. But the common denominator is that all of these people think about complex systems—from traffic to health care—from the perspectives of engineering, management science, and the quantitative social sciences.

Magee, along with a postdoctoral fellow, Heebyung Koh, decided to examine the progress we’ve made in our ability to calculate, or what they termed information transformation. They compiled a vast data set of all the different instances of information transformation that have occurred throughout history. Their data set, which goes back to the nineteenth century, is close to exhaustive: It begins with calculations done by hand in 1892 that clocked in at a little under one calculation a minute. Following that came: an IBM Hollerith Tabulator in 1919 that was only about four times faster; the ENIAC, which is often thought of as the world’s first computer, that used vacuum tubes to complete about four thousand calculations per second in 1946; the Apple II, which could perform twenty thousand calculations every second, in 1977; and, of course, many more modern and extremely fast machines.

By lining up one technology after another, one thing becomes clear: Despite the differences among all of these technologies—human brains, punch cards, vacuum tubes, integrated circuits—the overall increase in humanity’s ability to perform calculations has progressed quite smoothly and extremely quickly. Put together, there has been a roughly exponential increase in our information transformation abilities over time.

But how does this happen? Isn’t it true that when a new technology or innovation is developed it is often far ahead of what is currently in use? And if a new technology’s not that much better, shouldn’t it simply not be adopted? How can all of these combined technologies yield such a smooth and regular curve? Actually, the truth is far messier but much more exciting.

In fact, when someone develops a new innovation, it is often largely untested. It might be better than what is currently in use, but it is clearly a work in progress. This means that the new technology is initially only a little bit better. As its developers improve and refine it (this is the part that often distinguishes engineering and practical application from basic science), they begin to realize the potential of this new innovation. Its capabilities begin to grow exponentially.

But then a limit is reached. And when that limit is reached there is the opportunity to bring in a new technology, even if it’s still tentative, untested, and buggy. This progression of refinement and plateau for each successive innovation is in fact described in the mathematical world as a series of steadily rising logistic curves.

This is a variation on the theme of the exponential curve. Imagine bacteria growing in a petri dish. At first, as they gobble the nutrients in the dish, they obey the doubling and rapid growth of the exponential curve. One bacterium divides into two bacteria, two bacteria become four, and eventually, one million becomes two million. But soon enough these bacteria bump up against certain limits. They begin to run out of space, literally bumping up against each other, since the size of the petri dish, though very large in the eyes of each individual bacterium, is far from infinite relative to the entire colony.

Soon the growth slows, and eventually it approaches a certain steady number of bacteria, the number that can be safely held in the petri dish over a long period of time. This amount is known as the carrying capacity. The mathematical function that explains how something can quickly begin to grow exponentially, only to slow down until it reaches a carrying capacity, is known as a logistic curve.
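For readers who want to see the shape behind the words, here is a minimal Python sketch of the standard logistic curve; the carrying capacity, growth rate, and midpoint used below are made-up values chosen only for illustration.

    import math

    def logistic(t, carrying_capacity=1_000_000, growth_rate=1.5, midpoint=10):
        # Standard logistic curve: near-exponential growth early on,
        # leveling off as it approaches the carrying capacity.
        return carrying_capacity / (1 + math.exp(-growth_rate * (t - midpoint)))

    for t in range(0, 21, 4):
        print(f"t={t:2d}  population ~{logistic(t):,.0f}")

Early on the output is indistinguishable from exponential growth; toward the end it flattens out just below the carrying capacity.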

Of course, the logistic curve describes lots more than bacteria. It can explain everything from how deer populate a forest to how the fraction of the world population with access to the Internet changes over time. It can also explain how people adopt something new.

When a tech gadget is new, its potential for growth is huge. No one has it yet, so its usage can only grow. As people begin to buy the newest Apple device, for example, each additional user is gained faster and faster, obeying an exponential curve. But of course this growth can’t go on forever. Eventually the entire population that might possibly choose to adopt the gadget is reached. The growth slows down as it reaches this carrying capacity, obeying its logistic shape.

These curves are also often referred to as S-curves, due to their stretched S-like shapes. This is the term that’s commonly used when discussing innovation adoption. Clayton Christensen, a professor at Harvard Business School, argues that a series of tightly coupled and successive S-curves—each describing the progression and lifetime of a single technology—can be combined sequentially when looking at what each consecutive technology is actually doing (such as transforming information), and together they yield a steady and smooth exponential curve, exactly as Magee and Koh found. This is known as linked S-curve theory, and it describes how successive technologies combine to produce the shapes of change we see over time.

Figure 4. Schematic of linked S-curves (or linked logistic curves). When combined, they can yield a smooth curve over time.
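The schematic can also be sketched numerically. In the short Python example below, three hypothetical technologies each follow their own logistic curve, with each successor plateauing ten times higher than the last, and the overall capability is whatever the best available technology delivers at the time. All of the parameters are invented for illustration.

    import math

    def logistic(t, ceiling, rate=1.0, midpoint=0.0):
        # One technology's S-curve: slow start, rapid improvement, plateau at its ceiling.
        return ceiling / (1 + math.exp(-rate * (t - midpoint)))

    # Three successive hypothetical technologies: (plateau, year of fastest growth).
    technologies = [(1e3, 5), (1e4, 12), (1e5, 19)]

    for year in range(0, 25, 3):
        # Overall capability is whatever the best technology of the moment provides.
        best = max(logistic(year, ceiling, midpoint=mid) for ceiling, mid in technologies)
        print(f"year {year:2d}: capability ~{best:,.0f}")

Each individual curve levels off, but the running best keeps climbing as the next technology takes over, which is the pattern behind the smooth rise Magee and Koh found.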

But Magee and Koh didn’t simply expand Moore’s Law and examine information transformation. They looked at a whole host of technological functions to see how they have changed over the years. From information storage and information transportation to how we deal with energy, in each case they found mathematical regularities.

This ongoing doubling of technological capabilities has even been found in robots. Rodney Brooks is a professor emeritus at MIT who has lived through much of the current growth in robotics and is himself a pioneer in the field. He even cofounded the company that created the Roomba. Brooks looked at how robots have improved over the years and found that their movement abilities—how far and how fast a robot can move—have gone through about thirteen doublings in twenty-six years. That means that we have had a doubling about every two years: right on schedule and similar to Moore’s Law.
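As a quick back-of-the-envelope check (the doublings and the time span are from the text; the arithmetic is only an illustration), thirteen doublings compound into an enormous total improvement:

    doublings = 13
    years = 26
    improvement = 2 ** doublings       # total improvement factor over the period
    doubling_time = years / doublings  # years per doubling
    print(f"{doublings} doublings -> about {improvement:,}x improvement, "
          f"one doubling every {doubling_time:.0f} years")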

Kevin Kelly, in his book What Technology Wants, has cataloged a wide collection of technological growth rates that fit an exponential curve. The doubling time of each kind of technology, as shown in the following table, acts as a sort of half-life for it and is indicative of exponential growth: It’s the amount of time before what you have is out-of-date and you’re itching to upgrade.

Technology                                   Doubling Time (in months)
Wireless, bits per second                    10
Digital cameras, pixels per dollar           12
Pixels, per array                            19
Hard-drive storage, gigabytes per dollar     20
DNA sequencing, base pairs per dollar        22
Bandwidth, kilobits per second per dollar    30
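One way to get a feel for these numbers is to convert each doubling time into an approximate yearly improvement factor, as the short Python sketch below does; the figures are the ones from the table, and the conversion is simply factor = 2 raised to (12 divided by the doubling time in months).

    # Convert doubling times (in months) into approximate yearly improvement factors.
    doubling_times = {
        "Wireless, bits per second": 10,
        "Digital cameras, pixels per dollar": 12,
        "Pixels, per array": 19,
        "Hard-drive storage, gigabytes per dollar": 20,
        "DNA sequencing, base pairs per dollar": 22,
        "Bandwidth, kilobits per second per dollar": 30,
    }

    for technology, months in doubling_times.items():
        factor = 2 ** (12 / months)  # how many times better after one year
        print(f"{technology}: roughly {factor:.2f}x better each year")

By this reckoning, wireless bit rates more than double each year, while bandwidth per dollar improves by roughly a third.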

Notably, this table bears a striking similarity to the chart seen in chapter 2, from Price’s research. Technological knowledge exhibits rapid growth just like scientific knowledge.

But the relationship between the progression of technological facts and that of science is even more tightly intertwined. One of the simplest ways to begin seeing this is by looking at scientific prefixes.

.   .   .

IN chapter 8, I explore how advances in measurement enable the creation of new facts and new knowledge. But one fundamental way that measurement is affected is through the tools that we have to understand our surroundings. And we can see the effects of technological advances in measurement by looking at one small and simple area: the scientific prefix.

The International Bureau of Weights and Measures, which is responsible for defining the length of a meter, and for a long time maintained in a special vault the quintessential and canonical kilogram, is also in charge of providing the officially sanctioned metric prefixes. We are all aware of centi- (one hundredth), from the world of length, and giga- (one billion), from measuring hard disk space. But there are other, more exotic, prefixes. For example, femto- is one quadrillionth and zetta- is a sextillion (a one followed by twenty-one zeroes). The most recent prefixes are yotta- (10^24) and yocto- (10^-24), both approved in 1991.
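As a quick reference, the prefixes mentioned above can be restated as powers of ten; the short Python mapping below uses only the standard SI definitions.

    # Powers of ten for the prefixes mentioned above (standard SI definitions).
    prefixes = {
        "centi": -2,    # one hundredth
        "giga": 9,      # one billion
        "femto": -15,   # one quadrillionth
        "zetta": 21,    # one sextillion
        "yotta": 24,
        "yocto": -24,
    }

    for name, exponent in prefixes.items():
        print(f"{name}-: 10 to the power {exponent}")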
