The Singularity Is Near: When Humans Transcend Biology
Ray Kurzweil

The Criticism from Incredulity

Perhaps the most candid criticism of the future I have envisioned here is simple disbelief that such profound changes could possibly occur. Chemist Richard Smalley, for example, dismisses the idea of nanobots being capable of performing missions in the human bloodstream as just “silly.” But scientists’ ethics call for caution in assessing the prospects for current work, and such reasonable prudence unfortunately often leads scientists to shy away from considering the power of generations of science and technology far beyond today’s frontier. With the rate of paradigm shift occurring ever more quickly, this ingrained pessimism does not serve society’s needs in assessing scientific capabilities in the decades ahead. Consider how incredible today’s technology would seem to people even a century ago.

A related criticism is based on the notion that it is difficult to predict the future, and any number of bad predictions from other futurists in earlier eras can be cited to support this. Predicting which company or product will succeed is indeed very difficult, if not impossible. The same difficulty occurs in predicting which technical design or standard will prevail. (For example, how will the wireless-communication protocols WiMAX, CDMA, and 3G fare over the next several years?) However, as this book has extensively argued, we find remarkably precise and predictable exponential trends when assessing the overall effectiveness (as measured by price-performance, bandwidth, and other measures of capability) of information technologies. For example, the smooth exponential growth of the price-performance of computing dates back over a century. Given that the minimum amount of matter and energy required to compute or transmit a bit of information is known to be vanishingly small, we can confidently predict the continuation of these information-technology trends at least through this next century. Moreover, we can reliably predict the capabilities of these technologies at future points in time.
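To see why the aggregate claim is testable, consider how such a trend is actually measured. Here is a minimal Python sketch, using invented placeholder figures rather than the book’s real data, that fits price-performance as a straight line in log space and extrapolates it:

```python
import math

# Hypothetical (year, calculations per second per $1,000) data points,
# invented for illustration only; the book's real figures appear in chapter 2.
data = [(1900, 1e-5), (1920, 1e-4), (1940, 1e-2), (1960, 1e1),
        (1980, 1e5), (2000, 1e9)]

# Least-squares fit of log10(capability) as a linear function of year:
# capability ~ 10**(a*year + b), i.e., smooth exponential growth.
n = len(data)
xs = [year for year, _ in data]
ys = [math.log10(c) for _, c in data]
x_mean = sum(xs) / n
y_mean = sum(ys) / n
a = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / \
    sum((x - x_mean) ** 2 for x in xs)
b = y_mean - a * x_mean

# Extrapolate the aggregate trend (the step claimed to be predictable,
# unlike the fate of any single company, product, or standard).
for year in (2020, 2040):
    print(year, "= about 10^%.1f cps per $1,000" % (a * year + b))
```

The point of the sketch is that the prediction operates on the fitted aggregate curve, not on any individual data point.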

Consider that predicting the path of a single molecule in a gas is essentially impossible, yet certain properties of the gas as a whole (composed of a great many chaotically interacting molecules) can be reliably predicted through the laws of thermodynamics. Analogously, it is not possible to reliably predict the results of a specific project or company, but the overall capabilities of information technology (the product of many such chaotic activities) can nonetheless be dependably anticipated through the law of accelerating returns.

Many of the furious attempts to argue why machines—nonbiological systems—cannot ever possibly compare to humans appear to be fueled by this basic reaction of incredulity. The history of human thought is marked by many refusals to accept ideas that seem to threaten the accepted view that our species is special. Copernicus’s insight that the Earth was not at the center of the universe was resisted, as was Darwin’s that we were only slightly evolved from other primates. The notion that machines could match and even exceed human intelligence appears to challenge human status once again.

In my view there is something essentially special, after all, about human beings. We were the first species on Earth to combine a cognitive function and an effective opposable appendage (the thumb), so we were able to create technology that would extend our own horizons. No other species on Earth has accomplished this. (To be precise, we’re the only surviving species in this ecological niche—others, such as the Neanderthals, did not survive.) And as I discussed in chapter 6, we have yet to discover any other such civilization in the universe.

The Criticism from Malthus

Exponential Trends Don’t Last Forever. The classical metaphorical example of exponential trends hitting a wall is known as “rabbits in Australia.” A species happening upon a hospitable new habitat will expand its numbers exponentially until its growth hits the limits of that environment’s ability to support it. Approaching this limit may even cause an overall reduction in numbers—for example, humans noticing a spreading pest may seek to eradicate it. Another common example is a microbe that grows exponentially in an animal body until a limit is reached: the ability of that body to support it, the response of its immune system, or the death of the host.
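The dynamics behind “rabbits in Australia” are those of the standard logistic model, in which growth is exponential while the population is far below the environment’s carrying capacity and saturates as it approaches that limit. A minimal sketch, with arbitrary parameters chosen only to show the shape:

```python
# Logistic growth: dN/dt = r * N * (1 - N / K), where K is the carrying
# capacity of the habitat. Parameter values are arbitrary and illustrative.
r, K = 0.5, 1_000_000.0   # growth rate per step, carrying capacity
N, dt = 100.0, 1.0        # initial population, time step

for step in range(40):
    N += r * N * (1 - N / K) * dt
    if step % 8 == 0:
        # Early on N grows roughly exponentially; later it flattens near K.
        print(f"step {step:2d}: population = {N:,.0f}")
```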

Even the human population is now approaching a limit. Families in the more developed nations have mastered birth control and have set relatively high standards for the resources they wish to provide their children. As a result, population expansion in the developed world has largely stopped. Meanwhile, people in some (but not all) underdeveloped countries have continued to seek large families as a means of social security, hoping that at least one child will survive long enough to support them in old age. However, with the law of accelerating returns providing more widespread economic gains, the overall growth in human population is slowing.

So isn’t there a comparable limit to the exponential trends that we are witnessing for information technologies?

The answer is yes, but not before the profound transformations described throughout this book take place. As I discussed in chapter 3, the amount of matter and energy required to compute or transmit one bit is vanishingly small. By using reversible logic gates, the input of energy is required only to transmit results and to correct errors. Otherwise, the heat released from each computation is immediately recycled to fuel the next computation.
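For concreteness, the floor in question is Landauer’s principle: irreversibly erasing one bit dissipates at least kT ln 2 joules, a cost that reversible gates avoid except when outputting results or correcting errors. A quick back-of-the-envelope check (standard physical constants; the 10^20 erasures-per-second figure is my own, chosen for illustration):

```python
import math

k = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0          # room temperature, kelvin

# Landauer limit: minimum energy to irreversibly erase one bit.
e_bit = k * T * math.log(2)
print(f"about {e_bit:.2e} J per erased bit")          # ~2.87e-21 J

# Even erasing 10^20 bits every second would dissipate only:
print(f"about {e_bit * 1e20:.2f} W for 1e20 erasures/s")   # ~0.29 W

# Reversible logic avoids even this cost except for output and
# error correction, which is the point made in the text above.
```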

As I discussed in chapter 5, nanotechnology-based designs for virtually all applications—computation, communication, manufacturing, and transportation—will require substantially less energy than they do today. Nanotechnology will also facilitate capturing renewable energy sources such as sunlight. We could meet all of our projected energy needs of thirty trillion watts in 2030 with solar power if we captured only 0.03 percent (three ten-thousandths) of the sun’s energy as it hit the Earth. This will be feasible with extremely inexpensive, lightweight, and efficient nanoengineered solar panels together with nano–fuel cells to store and distribute the captured energy.
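The 0.03 percent figure is easy to sanity-check with standard values for the solar constant and the Earth’s radius; a short sketch:

```python
import math

solar_constant = 1366.0   # W/m^2 at the top of the atmosphere
earth_radius = 6.371e6    # meters
cross_section = math.pi * earth_radius ** 2   # disk the Earth presents to the sun

intercepted = solar_constant * cross_section  # total sunlight hitting Earth
captured = intercepted * 3e-4                 # 0.03% = three ten-thousandths

print(f"intercepted    = {intercepted:.2e} W")   # ~1.74e17 W
print(f"0.03% captured = {captured:.2e} W")      # ~5.2e13 W, vs. 3e13 W needed
```

The captured figure, about fifty trillion watts, comfortably exceeds the projected thirty-trillion-watt demand.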

A Virtually Unlimited Limit. As I discussed in chapter 3, an optimally organized 2.2-pound computer using reversible logic gates has about 10^25 atoms and can store about 10^27 bits. Just considering electromagnetic interactions between the particles, there are at least 10^15 state changes per bit per second that can be harnessed for computation, resulting in about 10^42 calculations per second in the ultimate “cold” 2.2-pound computer. This is about 10^16 times more powerful than all biological brains today. If we allow our ultimate computer to get hot, we can increase this further by as much as 10^8-fold. And we obviously won’t restrict our computational resources to one kilogram of matter but will ultimately deploy a significant fraction of the matter and energy on the Earth and in the solar system and then spread out from there.
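These figures chain together by simple multiplication. A sketch of the arithmetic, using the chapter 3 estimates quoted above (the roughly 10^16 cps per brain and 10^10 humans used for the biological total are the book’s own estimates):

```python
bits = 1e27            # storable bits in an optimal 1-kg (2.2-pound) computer
state_changes = 1e15   # harnessable state changes per bit per second

cold_cps = bits * state_changes          # ~1e42 calculations per second
print(f"cold 1-kg computer: {cold_cps:.0e} cps")

# Book's estimates: ~1e16 cps per human brain, ~1e10 human brains,
# so all biological brains total roughly 1e26 cps.
all_brains = 1e16 * 1e10
print(f"advantage over all brains: {cold_cps / all_brains:.0e}x")   # ~1e16

hot_cps = cold_cps * 1e8                 # allowing the computer to run hot
print(f"hot 1-kg computer: {hot_cps:.0e} cps")                      # ~1e50
```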

Specific paradigms do reach limits. We expect that Moore’s Law (concerning the shrinking of the size of transistors on a flat integrated circuit) will hit a limit over the next two decades, though the date of its demise keeps getting pushed back: the first estimates predicted 2002, but now Intel says it won’t take place until 2022. But as I discussed in chapter 2, every time a specific computing paradigm was seen to approach its limit, research interest and pressure increased to create the next paradigm. This has already happened four times in the century-long history of exponential growth in computation (from electromechanical calculators to relay-based computers to vacuum tubes to discrete transistors to integrated circuits). We have already achieved many important milestones toward the next (sixth) paradigm of computing: three-dimensional self-organizing circuits at the molecular level. So the impending end of a given paradigm does not represent a true limit.

There are limits to the power of information technology, but these limits are vast. I estimated the capacity of the matter and energy in our solar system to support computation to be at least 10^70 cps (see chapter 6). Given that there are at least 10^20 stars in the universe, we get about 10^90 cps for the universe as a whole, which matches Seth Lloyd’s independent analysis. So yes, there are limits, but they’re not very limiting.
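The universe-level figure is just exponent addition, as a one-line check shows:

```python
solar_system_cps = 1e70   # estimated computational capacity of our solar system (chapter 6)
stars = 1e20              # lower bound on the number of stars in the universe

print(f"universe = about {solar_system_cps * stars:.0e} cps")   # ~1e+90
```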

The Criticism from Software

A common challenge to the feasibility of strong AI, and therefore of the Singularity, begins by distinguishing between quantitative and qualitative trends. This argument acknowledges, in essence, that certain brute-force capabilities such as memory capacity, processor speed, and communications bandwidth are expanding exponentially but maintains that the software (that is, the methods and algorithms) is not.

This is the hardware-versus-software challenge, and it is a significant one. Virtual-reality pioneer Jaron Lanier, for example, characterizes my position and that of other so-called cybernetic totalists as holding that we’ll just figure out the software in some unspecified way—a position he refers to as a software “deus ex machina.”2 This ignores, however, the specific and detailed scenario that I’ve described by which the software of intelligence will be achieved. The reverse engineering of the human brain, an undertaking that is much further along than Lanier and many other observers realize, will expand our AI toolkit to include the self-organizing methods underlying human intelligence. I’ll return to this topic in a moment, but first let’s address some other basic misconceptions about the so-called lack of progress in software.

Software Stability. Lanier calls software inherently “unwieldy” and “brittle” and has described at great length a variety of frustrations that he has encountered in using it. He writes that “getting computers to perform specific tasks of significant complexity in a reliable but modifiable way, without crashes or security breaches, is essentially impossible.”3 It is not my intention to defend all software, but it’s not true that complex software is necessarily brittle and prone to catastrophic breakdown. Many examples of complex mission-critical software operate with very few, if any, breakdowns: for example, the sophisticated software programs that control an increasing percentage of airplane landings, monitor patients in critical-care facilities, guide intelligent weapons, control the investment of billions of dollars in automated pattern-recognition-based hedge funds, and serve many other functions.4 I am not aware of any airplane crashes that have been caused by failures of automated landing software; the same, however, cannot be said for human reliability.

Software Responsiveness. Lanier complains that “computer user interfaces tend to respond more slowly to user interface events, such as a key press, than they did fifteen years earlier. . . . What’s gone wrong?”5 I would invite Lanier to attempt using an old computer today. Even if we put aside the difficulty of setting one up (which is a different issue), he has forgotten just how unresponsive, unwieldy, and limited old computers were. Try getting some real work done to today’s standards with twenty-year-old personal-computer software. It’s simply not true to say that the old software was better in any qualitative or quantitative sense.

Although it’s always possible to find poor-quality design, response delays, when they occur, are generally the result of new features and functions. If users were willing to freeze the functionality of their software, the ongoing exponential growth of computing speed and memory would quickly eliminate software-response delays. But the market demands ever-expanded capability. Twenty years ago there were no search engines or any other integration with the World Wide Web (indeed, there was no Web), only primitive language, formatting, and multimedia tools, and so on. So functionality always stays on the edge of what’s feasible.

This romancing of software from years or decades ago is comparable to people’s idyllic view of life hundreds of years ago, when people were “unencumbered” by the frustrations of working with machines. Life was unfettered, perhaps, but it was also short, labor-intensive, poverty-stricken, and prone to disease and disaster.

Software Price-Performance. With regard to the price-performance of software, the comparisons in every area are dramatic. Consider the table on p. 103 on speech-recognition software. In 1985 five thousand dollars bought you a software package that provided a thousand-word vocabulary, did not offer continuous-speech capability, required three hours of training on your voice, and had relatively poor accuracy. In 2000, for only fifty dollars, you could purchase a software package with a hundred-thousand-word vocabulary that provided continuous-speech capability, required only five minutes of training on your voice, had dramatically improved accuracy, offered natural-language understanding (for editing commands and other purposes), and included many other features.6
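Taking just two of those attributes, price and vocabulary size, gives a feel for the compounding; a rough sketch using the figures quoted above:

```python
# 1985 vs. 2000 speech-recognition packages, figures from the text above.
price_1985, vocab_1985 = 5000.0, 1_000      # dollars, words
price_2000, vocab_2000 = 50.0, 100_000

words_per_dollar_1985 = vocab_1985 / price_1985   # 0.2
words_per_dollar_2000 = vocab_2000 / price_2000   # 2,000

factor = words_per_dollar_2000 / words_per_dollar_1985
print(f"vocabulary per dollar improved {factor:,.0f}x in 15 years")  # 10,000x
# And this ignores the gains in continuous speech, training time,
# accuracy, and natural-language understanding.
```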

Software Development Productivity. How about software development itself? I’ve been developing software myself for forty years, so I have some perspective on the topic. I estimate the doubling time of software-development productivity to be approximately six years, which is slower than the roughly one-year doubling time for processor price-performance today, but software productivity is nonetheless growing exponentially. The development tools, class libraries, and support systems available today are dramatically more effective than those of decades ago. In my current projects, teams of just three or four people achieve in a few months objectives comparable to what twenty-five years ago required a team of a dozen or more people working for a year or more.
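The six-year doubling estimate is roughly consistent with that team-size anecdote; a back-of-the-envelope sketch using the approximate figures from the paragraph above:

```python
import math

# Then: ~12 people for ~12 months; now: ~3.5 people for ~3 months,
# for comparable functionality (approximate figures from the text).
then_person_months = 12 * 12      # 144
now_person_months = 3.5 * 3       # 10.5

gain = then_person_months / now_person_months   # ~13.7x over 25 years
doubling_time = 25 / math.log2(gain)            # ~6.6 years
print(f"~{gain:.1f}x productivity gain, doubling every ~{doubling_time:.1f} years")
```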
