In Smolin’s theory the mechanism that gives rise to new universes is the creation of black holes, so those universes best able to produce black holes are the ones that are most likely to reproduce. According to Smolin a universe best able to create increasing complexity—that is, biological life—is also most likely to create new universe-generating black holes. As he explains, “Reproduction through black holes leads to a multiverse in which the conditions for life are common—essentially because some of the conditions life requires, such as plentiful carbon, also boost the formation of stars massive enough to become black holes.”[93]
Susskind’s proposal differs in detail from Smolin’s but is also based on black holes, as well as the nature of “inflation,” the force that caused the very early universe to expand rapidly.

Intelligence as the Destiny of the Universe.
In The Age of Spiritual Machines, I introduced a related idea—namely, that intelligence would ultimately permeate the universe and would decide the destiny of the cosmos:

How relevant is intelligence to the universe? . . . The common wisdom is not very. Stars are born and die; galaxies go through their cycles of creation and destruction; the universe itself was born in a big bang and will end with a crunch or a whimper, we’re not yet sure which. But intelligence has little to do with it. Intelligence is just a bit of froth, an ebullition of little creatures darting in and out of inexorable universal forces. The mindless mechanism of the universe is winding up or down to a distant future, and there’s nothing intelligence can do about it.

That’s the common wisdom. But I don’t agree with it. My conjecture is that intelligence will ultimately prove more powerful than these big impersonal forces. . . .

So will the universe end in a big crunch, or in an infinite expansion of dead stars, or in some other manner? In my view, the primary issue is not the mass of the universe, or the possible existence of antigravity, or of Einstein’s so-called cosmological constant. Rather, the fate of the universe is a decision yet to be made, one which we will intelligently consider when the time is right.[94]

Complexity theorist James Gardner combined my suggestion on the evolution of intelligence throughout the universe with Smolin’s and Susskind’s concepts of evolving universes. Gardner conjectures that it is specifically the evolution of intelligent life that enables offspring universes.[95]
Gardner builds on British astronomer Martin Rees’s observation that “what we call the fundamental constants—the numbers that matter to physicists—may be secondary consequences of the final theory, rather than direct manifestations of its deepest and most fundamental level.” To Smolin it is merely coincidence that black holes and biological life both need similar conditions (such as large amounts of carbon), so in his conception there is no explicit role for intelligence, other than that it happens to be the by-product of certain biofriendly circumstances. In Gardner’s conception it is intelligent life that creates its successors.

Gardner writes that “we and other living creatures throughout the cosmos are part of a vast, still undiscovered transterrestrial community of lives and intelligences spread across billions of galaxies and countless parsecs who are collectively engaged in a portentous mission of truly cosmic importance. Under the Biocosm vision, we share a common fate with that community—to help shape the future of the universe and transform it from a collection of lifeless atoms into a vast, transcendent mind.” To Gardner the laws of nature, and the precisely balanced constants, “function as the cosmic counterpart of DNA: they furnish the ‘recipe’ by which the evolving cosmos acquires the capacity to generate life and ever more capable intelligence.”

My own view is consistent with Gardner’s belief in intelligence as the most important phenomenon in the universe. I do have a disagreement with Gardner on his suggestion of a “vast . . . transterrestrial community of lives and intelligences spread across billions of galaxies.” We don’t yet see evidence that such a community beyond Earth exists. The community that matters may be just our own unassuming civilization here. As I pointed out above, although we can fashion all kinds of reasons why each particular intelligent civilization may remain hidden from us (for example, they destroyed themselves, or they have decided to remain invisible or stealthy, or they’ve switched all of their communications away from electromagnetic transmissions, and so on), it is not credible to believe that every single civilization out of the billions that should be there (according to the SETI assumption) has some reason to be invisible.

The Ultimate Utility Function.
We can fashion a conceptual bridge between Susskind’s and Smolin’s idea of black holes being the “utility function” (the property being optimized in an evolutionary process) of each universe in the multiverse and the conception of intelligence as the utility function that I share with Gardner. As I discussed in chapter 3, the computational power of a computer is a function of its mass and its computational efficiency. Recall that a rock has significant mass but extremely low computational efficiency (that is, virtually all of the transactions of its particles are effectively random). Most of the particle interactions in a human are random also, but on a logarithmic scale humans are roughly halfway between a rock and the ultimate small computer.

A computer in the range of the ultimate computer has a very high computational efficiency. Once we achieve an optimal computational efficiency, the only way to increase the computational power of a computer would be to increase its mass. If we increase the mass enough, its gravitational force becomes strong enough to cause it to collapse into a black hole. So a black hole can be regarded as the ultimate computer.

Of course, not any black hole will do. Most black holes, like most rocks, are performing lots of random transactions but no useful computation. But a well-organized black hole would be the most powerful conceivable computer in terms of cps per liter.
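To make the bookkeeping behind this argument concrete, here is a minimal sketch in Python. The per-kilogram ceiling is Lloyd's "ultimate laptop" figure of roughly 5 x 10^50 operations per second, which comes from the chapter 3 discussion rather than from this section, and the sample efficiency values are purely hypothetical placeholders chosen for illustration.

```python
from math import log10

# Assumed ceiling: ~5e50 ops/sec per kilogram of optimally organized matter
# (Lloyd's "ultimate laptop" bound, discussed in chapter 3). The efficiency
# values below are hypothetical placeholders, not figures from the text.
MAX_CPS_PER_KG = 5e50

def computational_power(mass_kg, efficiency):
    """cps achieved = ceiling allowed by the mass x fraction usefully harnessed."""
    return mass_kg * MAX_CPS_PER_KG * efficiency

# At full efficiency, the only remaining lever is mass:
for mass in (1.0, 10.0, 100.0):
    print(f"{mass:6.1f} kg, efficiency 1.0 -> 10^{log10(computational_power(mass, 1.0)):.1f} cps")

# A poorly organized kilogram (a rock) wastes essentially all of its particle
# interactions; its lever is efficiency, not mass:
for label, eff in (("rock-like", 1e-40), ("brain-like", 1e-34)):
    print(f"1 kg, {label} efficiency -> 10^{log10(computational_power(1.0, eff)):.1f} cps")
```

The sketch simply restates the two levers in the argument: once efficiency approaches its maximum, additional computational power can come only from additional mass, and enough mass eventually collapses into a black hole.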

Hawking Radiation.
There has been a long-standing debate about whether or not we can transmit information into a black hole, have it usefully transformed, and then retrieve it. Stephen Hawking’s conception of transmissions from a black hole involves particle-antiparticle pairs that are created near the event horizon (the point of no return near a black hole, beyond which matter and energy are unable to escape). When this spontaneous creation occurs, as it does everywhere in space, the particle and antiparticle travel in opposite directions. If one member of the pair travels into the event horizon (never to be seen again), the other will fly away from the black hole.

Some of these particles will have sufficient energy to escape the black hole’s gravitation, resulting in what has been called Hawking radiation.[96] Prior to Hawking’s analysis it was thought that black holes were, well, black; with his insight we realized that they actually give off a continual shower of energetic particles. According to Hawking, however, this radiation is random, since it originates from random quantum events near the event horizon. So a black hole may contain an ultimate computer, but in Hawking’s original conception no information can escape a black hole, and this computer could never transmit its results.

In 1997 Hawking and fellow physicist Kip Thorne (the wormhole scientist) made a bet with California Institute of Technology’s John Preskill. Hawking and Thorne maintained that the information that entered a black hole was lost, and any computation that might occur inside the black hole, useful or otherwise, could never be transmitted outside of it, whereas Preskill maintained that the information could be recovered.[97]
The loser was to give the winner some useful information in the form of an encyclopedia.

In the intervening years the consensus in the physics community steadily moved away from Hawking, and on July 21, 2004, Hawking admitted defeat and acknowledged that Preskill had been correct after all: information sent into a black hole is not lost. It could be transformed inside the black hole and then transmitted outside it. According to this understanding, the particle that flies away from the black hole remains quantum entangled with its antiparticle that disappeared into the black hole. If that antiparticle inside the black hole becomes involved in a useful computation, then these results will be encoded in the state of its entangled partner particle outside of the black hole.

Accordingly Hawking sent Preskill an encyclopedia on the game of cricket, but Preskill rejected it, insisting on a baseball encyclopedia, which Hawking had flown over for a ceremonial presentation.

Assuming that Hawking’s new position is indeed correct, the ultimate computers that we can create would be black holes. Therefore a universe that is well designed to create black holes would be one that is well designed to optimize its intelligence. Susskind and Smolin argued merely that biology and black holes both require the same kind of materials, so a universe that was optimized for black holes would also be optimized for biology. Recognizing that black holes are the ultimate repository of intelligent computation, however, we can conclude that the utility function of optimizing black-hole production and that of optimizing intelligence are one and the same.

Why Intelligence Is More Powerful than Physics.
There is another reason to apply an anthropic principle. It may seem remarkably unlikely that our planet is in the lead in terms of technological development, but as I pointed out above, by a weak anthropic principle, if we had not evolved, we would not be here discussing this issue.

As intelligence saturates the matter and energy available to it, it turns dumb matter into smart matter. Although smart matter still nominally follows the laws of physics, it is so extraordinarily intelligent that it can harness the most subtle aspects of the laws to manipulate matter and energy to its will. So it would at least appear that intelligence is more powerful than physics. What I should say is that intelligence is more powerful than cosmology. That is, once matter evolves into smart matter (matter fully saturated with intelligent processes), it can manipulate other matter and energy to do its bidding (through suitably powerful engineering). This perspective is not generally considered in discussions of future cosmology. It is assumed that intelligence is irrelevant to events and processes on a cosmological scale.

Once a planet yields a technology-creating species and that species creates computation (as has happened here), it is only a matter of a few centuries before its intelligence saturates the matter and energy in its vicinity, and it begins to expand outward at at least the speed of light (with some suggestions of circumventing this limit). Such a civilization will then overcome gravity (through exquisite and vast technology) and other cosmological forces—or, to be fully accurate, it will maneuver and control these forces—and engineer the universe it wants. This is the goal of the Singularity.

A Universe-Scale Computer.
How long will it take for our civilization to saturate the universe with our vastly expanded intelligence? Seth Lloyd estimates there are about 10^80 particles in the universe, with a theoretical maximum capacity of about 10^90 cps. In other words a universe-scale computer would be able to compute at 10^90 cps.[98] To arrive at those estimates, Lloyd took the observed density of matter—about one hydrogen atom per cubic meter—and from this figure computed the total energy in the universe. Dividing this energy figure by the Planck constant, he got about 10^90 cps. The universe is about 10^17 seconds old, so in round numbers there have been a maximum of about 10^107 calculations in it thus far. With each particle able to store about 10^10 bits in all of its degrees of freedom (including its position, trajectory, spin, and so on), the state of the universe represents about 10^90 bits of information at each point in time.
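As a quick sanity check, the order-of-magnitude bookkeeping above can be reproduced directly from the quoted figures. This is a minimal sketch; the inputs are the numbers stated in the text, not independent estimates.

```python
from math import log10

particles         = 1e80   # particles in the universe (quoted estimate)
bits_per_particle = 1e10   # bits per particle across all degrees of freedom
max_cps           = 1e90   # theoretical computing capacity of the universe
age_seconds       = 1e17   # approximate age of the universe in seconds

total_bits = particles * bits_per_particle   # state of the universe at any moment
total_ops  = max_cps * age_seconds           # maximum calculations performed so far

print(f"bits of state       : 10^{log10(total_bits):.0f}")   # 10^90
print(f"calculations to date: 10^{log10(total_ops):.0f}")    # 10^107
```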

We do not need to contemplate devoting all of the mass and energy of the universe to computation. If we were to apply 0.01 percent, that would still leave 99.99 percent of the mass and energy unmodified, but would still result in a potential of about 10^86 cps. Based on our current understanding, we can only approximate these orders of magnitude. Intelligence at anything close to these levels will be so vast that it will be able to perform these engineering feats with enough care so as not to disrupt whatever natural processes it considers important to preserve.
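The 0.01 percent figure works out the same way; the snippet below just reproduces the arithmetic implied by the paragraph above.

```python
from math import log10

universe_cps = 1e90          # theoretical maximum from the preceding paragraph
fraction     = 0.01 / 100    # 0.01 percent of the universe's mass and energy

print(f"harnessed capacity: 10^{log10(universe_cps * fraction):.0f} cps")  # 10^86
```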

The Holographic Universe.
Another perspective on the maximum information storage and processing capability of the universe comes from a speculative recent theory of the nature of information. According to the “holographic universe” theory the universe is actually a two-dimensional array of information written on its surface, so its conventional three-dimensional appearance is an illusion.[99] In essence, the universe, according to this theory, is a giant hologram.

The information is written at a very fine scale, governed by the Planck length. So the maximum amount of information in the universe is its surface area divided by the square of the Planck length, which comes to about 10^120 bits. There does not appear to be enough matter in the universe to encode this much information, so the limit implied by the holographic universe may be higher than what is actually feasible. In any event, these various estimates fall in the same broad range: the number of bits that a universe reorganized for useful computation will be able to store is 10 raised to a power somewhere between 80 and 120.
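As a rough cross-check of the holographic figure, one can divide an assumed cosmic horizon area by the Planck area. The horizon radius below (roughly the Hubble radius) and the factor of four in the bound are assumptions, not values given in the text; the result lands within a couple of orders of magnitude of the ~10^120 quoted above, which is as much agreement as these back-of-the-envelope estimates allow.

```python
from math import pi, log10

hubble_radius = 1.3e26     # meters; assumed horizon radius, not a figure from the text
planck_length = 1.6e-35    # meters

surface_area = 4 * pi * hubble_radius ** 2              # ~2e53 m^2
max_bits     = surface_area / (4 * planck_length ** 2)  # bound: area / (4 x Planck area)

print(f"holographic bound: ~10^{log10(max_bits):.0f} bits")  # ~10^122
```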
