However, these stories, and how we use them to laugh at our own ignorance, are indicative of a viewpoint in our society: Not only will innovation continue, but anyone who foresees an end to the growth in technological knowledge is bound to be proven wrong. Technological development, and the changes in facts that go along with it, doesn’t seem to be ending anytime soon.
Of course, these things must end eventually. The physicist Tom Murphy has shown, in a reductio ad absurdum style of argument, that based on certain fundamental ideas about energy constraints, we will exhaust all the energy in our entire galaxy in less than three millennia. So a logistic curve, with its slow saturation to some sort of upper limit, might be more useful in the long term than a simple exponential with never-ending growth.
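For reference, the two shapes being contrasted here can be written compactly; the symbols below (an initial value N_0, a growth rate r, and a ceiling K) are generic placeholders of mine, not quantities taken from Murphy's argument.

```latex
% Exponential growth: no ceiling, doubling at a fixed interval
N(t) = N_0 \, e^{rt}

% Logistic growth: the same early takeoff, but slowly saturating toward a limit K
N(t) = \frac{K}{1 + \left(\frac{K - N_0}{N_0}\right) e^{-rt}}
```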
In the meantime, technology and science are growing incredibly rapidly and systematically. But there are still questions that need to be addressed: Why do these fields continue to grow? And why do they grow in such a regular manner, with mathematical shapes that are so often exponential curves?
. . .
There are those who, when confronted with regularities such as Moore’s Law, feel that these are simply self-fulfilling propositions. Once Moore quantified the doubling rate of the number of components of integrated circuits, and predicted what would happen in the coming decade, it was simply a matter of working hard to make it come to pass. And once the prediction of 1975 came true, the industry had a continued stake in trying to reach the next milestone predicted by Moore’s Law, because if any company ever fell behind this curve, it would be out of business. Since it was presumed to be possible, these companies had to make it possible; otherwise, they were out of the game.
This is similar to the well-known Hawthorne effect, when subjects behave differently if they know they are being studied. The effect was named after what happened in a factory called Hawthorne Works outside Chicago in the 1920s and 1930s. Scientists wished to measure the effects of environmental changes, such as lighting, on the productivity of the workers. They discovered that whatever they did to change the workers’ behaviors—whether they increased the lighting or altered any other aspect of their environment—resulted in increased productivity. However, as soon as the study was completed, the productivity dropped.
The researchers concluded that the observations themselves were affecting productivity and not the experimental changes. The Hawthorne effect was defined as “an increase in worker productivity produced by the psychological stimulus of being singled out and made to feel important.” While it has been expanded to mean any change in response to being observed and studied, the focus here on productivity is important for us: If the members of an industry know that they’re being observed and measured, especially in relationship to a predicted metric, perhaps they have an added incentive to increase productivity and meet the metric’s expectations.
But this doesn’t quite ring true, and in fact it isn’t even possible. These doublings had been occurring in many areas of technology well before Moore formulated his law. As noted earlier, in the realm of computing power alone this regularity held true as far back as the late nineteenth and early twentieth centuries, before Gordon Moore was even born. So while Moore gave a name to something that had already been happening, naming the phenomenon didn’t create it.
Why else might everything be adhering to these exponential curves and growing so rapidly? A likely answer is related to the idea of cumulative knowledge. Anything new—an idea, discovery, or technological breakthrough—must be built upon what is known already. This is generally how the world works. Scientific ideas build upon one another to allow for new scientific knowledge and technologies and are the basis for new breakthroughs. When it comes to technological and scientific growth, we can bootstrap what we have learned before toward the creation of new facts. We must gain a certain amount of knowledge in order to learn something new.
Koh and Magee argue that we should imagine that the magnitude of technological growth is proportional to the amount of knowledge that has come before it. The more preexisting methods, ideas, and other essential ingredients there are for making a certain technology just a little bit better, the more potential that technology has to grow.
What I have just stated can actually be described mathematically.
An equation in which something grows by an amount proportional to its current size yields exactly what we have been seeing: exponential growth. What this means is that if technology is essentially bootstrapping itself, much as science does, and its growth is based on how much has come before it, then we can easily get these doublings and exponential growth rates. Numerous researchers have proposed a whole variety of mathematical models to explain this, using the core idea of cumulative knowledge.
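Here is a minimal version of that statement, with K standing in for the accumulated stock of knowledge and r for a constant per-unit growth rate; the symbols and the sample numbers are mine, added only to make the doubling explicit.

```latex
% Growth proportional to the current stock of knowledge
\frac{dK}{dt} = r K
% integrates to exponential growth from an initial stock K_0
K(t) = K_0 \, e^{rt}
% so the stock doubles at a fixed interval, no matter how large it has become
t_{\text{double}} = \frac{\ln 2}{r}
% e.g., r \approx 0.35 per year gives a doubling roughly every two years
```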
So while exponential growth is not a self-fulfilling proposition, there is feedback, which leads to a sort of technological imperative: The more technological or scientific knowledge there is to build on, the faster new technologies grow.
But why does this continue to happen? Technological or scientific change doesn’t happen automatically; people are needed to create new ideas and concepts. Therefore, in addition to knowledge accumulation, we need to understand another piece that’s important to the growth of knowledge: population growth.
. . .
Somewhere between ten thousand and twelve thousand years ago, a land bridge between Australia and Tasmania was destroyed. Up until that point individuals could easily walk between Australia and what became this small island off the southern coast of the mainland. Soon after the land bridge vanished, something happened: The tiny population of Tasmania became one of the least technologically advanced societies on the planet.
By the time European explorers came to Tasmania in the seventeenth century, the Tasmanians had only twenty-four distinct devices, as classified by anthropologists, in their toolkit. These twenty-four included such basics as rocks and clubs. In contrast, the Aborigines not far across the strait had hundreds more elements of technology: fishing nets, boats, barbed spears, cold-weather clothing, and much more.
The Tasmanians either never invented these technologies or simply lost them over the millennia.
Joseph Henrich, an anthropologist, constructed a mathematical model to account for how such a loss of technology, or even such a long absence of innovation, could have occurred. The model ultimately comes down to simple numbers. Larger groups of interacting people can maintain skills and innovations, and in turn develop new ones. A small group doesn’t have the benefit of specialization and idea exchange necessary for any of this to happen.
Imagine a small group of randomly chosen people stranded on a desert island. Not only would they have just a small subset of the knowledge necessary to re-create modern civilization—assuming Gilligan’s professor wasn’t included—but only a tiny fraction of the required skills could be done by each person. Much like the economic concept of division of labor, even if we each have two or three skills, to perform all of them adeptly, and also pass them along to our descendants, is a difficult proposition. The maintenance and creation of cultural knowledge are much more easily done with large groups of people; each person can specialize and be responsible for a smaller area of knowledge.
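A toy simulation makes this numbers-only logic concrete. The sketch below is only loosely in the spirit of Henrich's model; the copying-error form, the parameter values, and the simulate function itself are illustrative assumptions of mine, not his published specification.

```python
import random

def simulate(pop_size, generations=200, loss=1.5, noise=1.0, seed=0):
    # Each generation, everyone tries to copy the most skilled individual.
    # Copying is lossy (a systematic shortfall, `loss`) but also noisy
    # (`noise`), so a large population is likely to contain at least one
    # learner who overshoots the model; a small one usually is not.
    random.seed(seed)
    skill = 10.0  # skill level of the best individual in the founding group
    for _ in range(generations):
        attempts = [skill - loss + random.gauss(0, noise) for _ in range(pop_size)]
        skill = max(0.0, max(attempts))  # floor at zero: the craft is lost entirely
    return skill

for n in (5, 50, 500):
    print(f"population {n:3d} -> skill after 200 generations: {simulate(n):7.1f}")
```

With these made-up parameters the smallest group loses the skill entirely, while the larger groups not only maintain it but improve on it, which is the qualitative pattern the argument needs.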
In fact, many economists argue that population growth has gone hand in hand with innovation and the development of new facts. The George Mason University economist Bryan Caplan writes:
The more populous periods of human history—most obviously the last few centuries—clearly produced more scientific, technological, and cultural innovations than earlier, less populous periods. More populous countries today produce many more scientific, technological, and cultural innovations than less populous countries.
A classic paper by economist Michael Kremer argues this position, in an incredibly sweeping and magnificent article: “Population Growth and Technological Change: One Million B.C. to 1990.”
Such a timescale is not for the weak-kneed. In an analysis worthy of someone as well traveled as Doctor Who, Kremer shows that the growth of human population over the history of the world is consistent with how technological change happens.
Kremer does this in an elegant way, making only a small set of assumptions. First he states that population growth is limited by technological progress. This is one of those assumptions that has been around since Thomas Malthus, and it is based on the simple fact that as a population grows we need more technology to sustain the population, whether through more efficient food production, more efficient waste management, or other similar considerations.
Conversely, Kremer also states that technological growth should be proportional to population size. If invention occurs at the same rate for each person, then the more people there are, the more innovation there should be. More recent research, however, suggests that innovation often grows faster than population itself, with denser populations innovating disproportionately more, so Kremer’s assumption seems, if anything, like an underestimate. But let’s see where Kremer’s math takes us.
Using these two assumptions, and a bit of related math, Kremer found that a population’s growth rate should itself increase in proportion to the current number of people. To be clear: This is much faster than exponential growth, the fastest growth rate we’ve considered so far. Exponential growth has a constant per-capita rate; here the rate itself keeps rising, in step with the population. This is known as hyperbolic growth, and if left unchecked it races off toward infinity in finite time.
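In stripped-down form (my notation, not Kremer's exact equations), the two assumptions combine like this:

```latex
% Technology A advances in proportion to the number of potential inventors:
\frac{dA}{dt} \propto P A
% Malthus: population is pinned to whatever level the technology can support:
P \propto A^{\beta}
% Putting the two together eliminates A and leaves
\frac{dP}{dt} \propto P^{2}
% whose solution, P(t) = \frac{1}{1/P_0 - kt}, is hyperbolic: rather than
% doubling at fixed intervals, it races toward infinity as t approaches 1/(k P_0).
```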
Kremer found that, over the long sweep of human history and until very recently, this is exactly what happened, and it could be the cause of the rapid technological progress around us. The human population’s growth rate has risen in proportion to the population’s current size: the larger the number of people on Earth, the faster the rate at which the population rises.
Furthermore, he found that his model fits with other aspects of world history. For example, just as Tasmania was disconnected from Australia about ten thousand years ago, a number of other land bridges were also destroyed, leading to several populated but disconnected regions. The largest by far was the Old World, which consisted of Europe, Asia, and Africa. Next in size were the Americas, followed by Australia, Tasmania, and Flinders Island, a tiny island off the coast of Tasmania.
And as Kremer predicted, the largest areas—meaning those capable of supporting the largest populations—were the most technologically advanced. The Old World, with its gunpowder and other technologies, led the pack. In second place came the Americas, which were dotted with massive cities, used sophisticated calendars, and had well-developed agriculture. On the other hand, Australian Aborigines remained hunter-gatherers, and Tasmania, as mentioned before, was without even some of the most basic of technologies.
Last we have little Flinders Island, where evidence indicates that the population vanished only four thousand years after its land bridge was destroyed, possibly due to a “technological regress.” This is the phrase Kremer used to signify the loss of even the technologies basic for survival.
But is population really the only story? Or is something more complex going on?
In physics, a simple model that explains most of the behavior of the system being studied is often termed a first-order model. The more “orders” that are added, the more precise the model becomes; the terminology derives from the practice of fitting functions to complex curves on a graph. The first order captures the general shape, the second order a bit of its wiggle, and so on. While each successive term, each higher order, makes the overall model more precise, each individually explains less and less of the shape of the curve. The first-order model explains most of what’s going on, while the higher orders fill in the details.
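A quick numerical sketch of that diminishing-returns point; the particular curve and the error measure are arbitrary choices of mine, used only to show each added order buying less improvement than the one before.

```python
import numpy as np

# An arbitrary smooth curve standing in for "the data" (purely illustrative)
x = np.linspace(0.0, 1.0, 200)
y = np.exp(2.0 * x)

prev_err = np.mean((y - y.mean()) ** 2)  # error with no model at all
for order in range(1, 6):
    fit = np.polyval(np.polyfit(x, y, order), x)  # best polynomial of this order
    err = np.mean((y - fit) ** 2)
    print(f"order {order}: remaining error {err:.6f} "
          f"(improvement over order {order - 1}: {prev_err - err:.6f})")
    prev_err = err
```

The first-order fit removes most of the error; each higher order removes a smaller and smaller slice of what remains.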
Very likely, population is part of the first-order model of technological progress; it certainly seems that technology and population have gone hand in hand for millennia. However, we know that the likelihood of any one person being innovative is not, as Kremer assumed, independent of population, and we also know that higher population densities in certain regions need not lead to higher amounts of innovation.
Similarly, it’s not just the size of the population that’s important, but its parts; the makeup of the population can have an effect on how our facts change. Robert Merton, a renowned sociologist of science, argued in “Science, Technology, and Society in Seventeenth-Century England” that the concerns of the English people during this time period affected where the scientists and engineers of that century focused their attentions. It is unsurprising that they were obsessed with the construction of precise timepieces—that is what was needed in order to carefully measure longitude on the high seas, something of an English preoccupation during this time.
In addition, Merton argued that it wasn’t just the overall size of the population that drove innovation, but who these people were: A greater percentage of eminent people of that time chose to become scientists rather than clergymen or military officers. It was this, rather than overall population size, that spurred England’s rapid innovation.
The world’s evolving technologies and changing facts are not simply a matter of churning out babies and waiting for the advances that population growth brings. New knowledge and innovative technologies are due to a whole host of factors, from the concerns of the populace to the makeup of the population. But to ignore population growth as an important factor in technological innovation is to miss a significant piece of the puzzle.
We’ve examined technological change and how it’s mathematically regular and, even more so, often predictable. We now have a handle on why innovation fits the particular shapes that we see around us. And it’s clear that technological change can itself lead to widespread change of other facts. But there is one large area of technology that not only obeys reliable trajectories but also plays a significant role in the spread of other facts and pieces of knowledge: travel and communication.