Oh, and about that “offer” at the beginning of this précis, consider that present stock values are based on future expectations. Given that the (literally) shortsighted linear intuitive view represents the ubiquitous outlook, the common wisdom in economic expectations is dramatically understated. Since stock prices reflect the consensus of a buyer-seller market, the prices reflect the underlying linear assumption that most people share regarding future economic growth. But the law of accelerating returns clearly implies that the growth rate will continue to grow exponentially, because the rate of progress will continue to accelerate.
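To see how far the two outlooks diverge, here is a toy comparison in Python; the starting index of 100, the 7 percent growth rate, and the twenty-year horizon are illustrative assumptions of mine, not figures from the text:

```python
# Toy comparison of the "intuitive linear" and "historical
# exponential" outlooks. The index of 100, 7% annual growth, and
# 20-year horizon are illustrative assumptions, not the book's.
index, rate, years = 100.0, 0.07, 20

linear = index + index * rate * years        # constant yearly increment
exponential = index * (1 + rate) ** years    # compounding growth

print(f"linear:      {linear:.0f}")       # 240
print(f"exponential: {exponential:.0f}")  # 387
# The linear extrapolation understates the compound outcome,
# and the gap widens with every additional year.
```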
MOLLY 2004: But wait a second, you said that I would get eighty trillion dollars if I read and understood this section of the chapter.
RAY: That’s right. According to my models, if we replace the linear outlook with the more appropriate exponential outlook, current stock prices should triple.[94] Since there’s (conservatively) forty trillion dollars in the equity markets, that’s eighty trillion in additional wealth.
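Spelling out the arithmetic behind Ray’s figure: tripling leaves the original forty trillion in place, so the addition is the difference.

```latex
3 \times \$40~\text{trillion} - \$40~\text{trillion} = \$80~\text{trillion of additional wealth}
```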
MOLLY 2004: But you said I would get that money.
RAY: No, I said “you” would get the money, and that’s why I suggested reading the sentence carefully. The English word “you” can be singular or plural. I meant it in the sense of “all of you.”
MOLLY 2004: Hmm, that’s annoying. You mean all of us as in the whole world? But not everyone will read this book.
RAY: Well, but everyone could. So if all of you read this book and understand it, then economic expectations would be based on the historical exponential model, and thus stock values would increase.
MOLLY 2004: You mean if everyone understands it and agrees with it. I mean the market is based on expectations, right?
RAY: Okay, I suppose I was assuming that.
MOLLY 2004: So is that what you expect to happen?
RAY: Well, actually, no. Putting on my futurist hat again, my prediction is that indeed these views on exponential growth will ultimately prevail but only over time, as more and more evidence of the exponential nature of technology and its impact on the economy becomes apparent. This will happen gradually over the next decade, which will represent a strong long-term updraft for the market.
GEORGE 2048: I don’t know, Ray. You were right that the price-performance of information technology in all of its forms kept growing at an exponential rate, and with continued growth also in the exponent. And indeed, the economy kept growing exponentially, thereby more than overcoming a very high deflation rate. And it also turned out that the general public did catch on to all of these trends. But this realization didn’t have the positive impact on the stock market that you’re describing. The stock market did increase along with the economy, but the realization of a higher growth rate did little to increase stock prices.
RAY: Why do you suppose it turned out that way?
GEORGE 2048: Because you left one thing out of your equation. Although people realized that stock values would increase rapidly, that same realization also increased the discount rate (the rate at which we need to discount values in the future when considering their present value). Think about it. If we know that stocks are going to increase significantly in a future period, then we’d like to have the stocks now so that we can realize those future gains. So the perception of increased future equity values also increases the discount rate. And that cancels out the expectation of higher future values.
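In present-value terms, George’s cancellation argument can be sketched as follows; the notation is mine, not the dialogue’s. Let V_0 be a stock’s value today, g the expected growth rate, r the discount rate, and t the horizon:

```latex
PV = \frac{FV}{(1+r)^t} = \frac{V_0\,(1+g)^t}{(1+r)^t}
```

If the market fully prices in the anticipated growth by raising the discount rate until r = g, the two factors cancel and PV = V_0: the expectation of higher future values leaves today’s price unchanged, which is exactly the offset George describes.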
MOLLY 2104: Uh, George, that was not quite right either. What you say makes logical sense, but the psychological reality is that the heightened perception of increased future values did have a greater positive impact on stock prices than increases in the discount rate had a negative effect. So the general acceptance of exponential growth in both the price-performance of technology and the rate of economic activity did provide an upward draft for the equities market, but not the tripling that you spoke about, Ray, due to the effect that George was describing.
MOLLY 2004: Okay, I’m sorry I asked. I think I’ll just hold on to the few shares I’ve got and not worry about it.
RAY: What have you invested in?
MOLLY 2004: Let’s see, there’s this new natural language–based search-engine company that hopes to take on Google. And I’ve also invested in a fuel-cell company. Also, a company building sensors that can travel in the bloodstream.
RAY: Sounds like a pretty high-risk, high-tech portfolio.
MOLLY 2004: I wouldn’t call it a portfolio. I’m just dabbling with the technologies you’re talking about.
RAY: Okay, but keep in mind that while the trends predicted by the law of accelerating returns are remarkably smooth, that doesn’t mean we can readily predict which competitors will prevail.
MOLLY 2004: Right, that’s why I’m spreading my bets.
CHAPTER THREE
Achieving the Computational Capacity of the Human Brain
As I discuss in Engines of Creation, if you can build genuine AI, there are reasons to believe that you can build things like neurons that are a million times faster. That leads to the conclusion that you can make systems that think a million times faster than a person. With AI, these systems could do engineering design. Combining this with the capability of a system to build something that is better than it, you have the possibility for a very abrupt transition. This situation may be more difficult to deal with even than nanotechnology, but it is much more difficult to think about it constructively at this point. Thus, it hasn’t been the focus of things that I discuss, although I periodically point to it and say: “That’s important too.”
—ERIC DREXLER, 1989
The Sixth Paradigm of Computing Technology: Three-Dimensional Molecular Computing and Emerging Computational Technologies
In the April 19, 1965, issue of Electronics, Gordon Moore wrote, “The future of integrated electronics is the future of electronics itself. The advantages of integration will bring about a proliferation of electronics, pushing this science into many new areas.”[1]
With those modest words, Moore ushered in a revolution that is still gaining momentum. To give his readers some idea of how profound this new science would be, Moore predicted that “by 1975, economics may dictate squeezing as many as 65,000 components on a single silicon chip.” Imagine that.
Moore’s article described the repeated annual doubling of the number of transistors (used for computational elements, or gates) that could be fitted onto an integrated circuit. His 1965 “Moore’s Law” prediction was criticized at the time because his logarithmic chart of the number of components on a chip had only five reference points (from 1959 through 1965), so projecting this nascent trend all the way out to 1975 was seen as premature. Moore’s initial estimate was incorrect, and he revised it downward a decade later, stretching the doubling period from one year to roughly two in his 1975 update. But the basic idea—the exponential growth of the price-performance of electronics based on shrinking the size of transistors on an integrated circuit—was both valid and prescient.[2]
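To make the extrapolation concrete, here is a minimal sketch of the doubling arithmetic in Python; the 1959 baseline of one component and the strictly annual doubling are simplifying assumptions read off Moore’s original chart, not exact historical data:

```python
# Moore's 1965 extrapolation: components per chip under a strict
# annual doubling, starting from roughly one component in 1959.
def components(year, base_year=1959, base_count=1):
    """Projected component count under annual doubling."""
    return base_count * 2 ** (year - base_year)

for year in (1965, 1975):
    print(year, components(year))
# 1965 -> 64     (about what Moore's five data points showed)
# 1975 -> 65536  (matching his "65,000 components" prediction)
```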
Today, we talk about billions of components rather than thousands. In the most advanced chips of 2004, logic gates are only fifty nanometers wide, already well within the realm of nanotechnology (which deals with measurements of one hundred nanometers or less). The demise of Moore’s Law has been predicted on a regular basis, but the end of this remarkable paradigm keeps getting pushed out in time. Paolo Gargini, Intel Fellow, director of Intel technology strategy, and chairman of the influential International Technology Roadmap for Semiconductors (ITRS), recently stated, “We see that for at least the next 15 to 20 years, we can continue staying on Moore’s Law. In fact, . . . nanotechnology offers many new knobs we can turn to continue improving the number of components on a die.”[3]
The acceleration of computation has transformed everything from social and economic relations to political institutions, as I will demonstrate throughout this book. But Moore did not point out in his papers that the strategy of shrinking feature sizes was not, in fact, the first paradigm to bring exponential growth to computation and communication. It was the fifth, and already, we can see the outlines of the next: computing at the molecular level and in three dimensions. Even though we have more than a decade left of the fifth paradigm, there has already been compelling progress in all of the enabling technologies required for the sixth paradigm. In the next section, I provide an analysis of the amount of computation and memory required to achieve human levels of intelligence and why we can be confident that these levels will be achieved in inexpensive computers within two decades. Even these very powerful computers will be far from optimal, and in the last section of this chapter I’ll review the limits of computation according to the laws of physics as we understand them today. This will bring us to computers circa the late twenty-first century.
The Bridge to 3-D Molecular Computing. Intermediate steps are already under way: new technologies that will lead to the sixth paradigm of molecular three-dimensional computing include nanotubes and nanotube circuitry, molecular computing, self-assembly in nanotube circuits, biological systems emulating circuit assembly, computing with DNA, spintronics (computing with the spin of electrons), computing with light, and quantum computing. Many of these independent technologies can be integrated into computational systems that will eventually approach the theoretical maximum capacity of matter and energy to perform computation and will far outpace the computational capacities of a human brain.
One approach is to build three-dimensional circuits using “conventional” silicon lithography. Matrix Semiconductor is already selling memory chips that contain vertically stacked planes of transistors rather than one flat layer.[4] Since a single 3-D chip can hold more memory, overall product size is reduced, so Matrix is initially targeting portable electronics, where it aims to compete with flash memory (used in cell phones and digital cameras because it does not lose information when the power is turned off). The stacked circuitry also reduces the overall cost per bit. Another approach comes from one of Matrix’s competitors, Fujio Masuoka, a former Toshiba engineer who invented flash memory. Masuoka claims that his novel memory design, which looks like a cylinder, reduces the size and cost-per-bit of memory by a factor of ten compared to flat chips.[5]
Working prototypes of three-dimensional silicon chips have also been demonstrated at Rensselaer Polytechnic Institute’s Center for Gigascale Integration and at the MIT Media Lab.
Tokyo’s Nippon Telegraph and Telephone Corporation (NTT) has demonstrated a dramatic 3-D technology using electron-beam lithography, which can create arbitrary three-dimensional structures with feature sizes (such as transistors) as small as ten nanometers.[6] NTT demonstrated the technology by creating a high-resolution model of the Earth sixty microns in size with ten-nanometer features. NTT says the technology is applicable to nanofabrication of electronic devices such as semiconductors, as well as creating nanoscale mechanical systems.
Nanotubes Are Still the Best Bet. In The Age of Spiritual Machines, I cited nanotubes—using molecules organized in three dimensions to store memory bits and to act as logic gates—as the most likely technology to usher in the era of three-dimensional molecular computing. Nanotubes, first synthesized in 1991, are tubes made up of a hexagonal network of carbon atoms that have been rolled up to make a seamless cylinder.[7] Nanotubes are very small: single-wall nanotubes are only one nanometer in diameter, so they can achieve high densities.