If the intelligence starts taking part in its own modification, then what happens? Eliezer Yudkowsky describes how quickly after AGI the pace of technological progress may slip from our hands.

If computing speeds double every two years, what happens when computer-based AIs are doing the research?

Computing speed doubles every two years.

Computing speed doubles every two years of work.

Computing speed doubles every two subjective years of work.

Two years after Artificial Intelligences reach human equivalence, their speed doubles.

One year later, their speed doubles again. Six months—three months—1.5 months … Singularity.
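Read as arithmetic, Yudkowsky's sequence is a geometric series: each doubling takes half as long as the one before it, so an unbounded number of doublings fits into a bounded stretch of calendar time. Here is a minimal sketch of that arithmetic in Python; the two-year first interval comes from the passage itself, while the one-day cutoff is purely an assumption for illustration:

    # Sketch of the accelerating-doublings arithmetic in Yudkowsky's thought experiment.
    # Assumption (not from the text): we stop counting once a doubling takes under a day.

    interval_years = 2.0      # time for the first post-AGI doubling
    elapsed_years = 0.0       # calendar time accumulated so far
    doublings = 0

    while interval_years > 1 / 365:      # stop when a doubling takes less than a day
        elapsed_years += interval_years
        doublings += 1
        interval_years /= 2              # each doubling halves the next interval

    print(f"{doublings} doublings in about {elapsed_years:.2f} calendar years")
    print(f"speedup factor: {2 ** doublings:,}x")
    # The intervals 2 + 1 + 0.5 + ... never sum past four years, even as the
    # number of doublings (and the speedup) keeps growing.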

Some object that Moore’s law will stop before 2020, when it becomes physically impossible to fit more transistors onto an integrated circuit. Others think Moore’s law will give way to faster doublings when processors undergo technological makeovers and use ever smaller components, such as atoms, photons of light, even DNA, to perform computations. 3-D processor chips developed by Switzerland’s École Polytechnique Fédérale de Lausanne could be the first to beat Moore’s law. Though not yet in production, EPFL’s chips are stacked vertically instead of horizontally, and will be faster and more efficient than traditional chips, as well as parallel-processing ready. And the company Gordon Moore cofounded may already have bought his law more time with the creation of the first 3-D transistors. Recall that transistors are electrical switches. Traditional transistors work by regulating electrical current moving across two dimensions. Intel’s new Tri-Gate transistors conduct current over three dimensions, for a 30 percent increase in speed and up to 50 percent savings in power. Each chip in Intel’s next line of processors will carry a billion Tri-Gate transistors.

Because imprinting transistors on silicon is involved in many information technologies, from cameras to medical sensors, Moore’s law applies to them, too. But Moore was theorizing about integrated circuits, not the many linked worlds of information technology that include both products and processes. So Kurzweil’s more general law, the Law of Accelerating Returns, is a better fit. And more technologies are becoming information technologies, as computers, and even robots, grow ever more intimately involved with every aspect of product design, manufacture, and sales. Consider that every part of a smart phone’s manufacture, not just its processor chip, took advantage of the digital revolution. It’s been just six years since Apple’s iPhone first came out, and Apple has released six versions. In that time Apple has more than doubled the phone’s speed and, for most users, halved its price or better. That’s because hardware speed has been regularly doubling in the components within the end product. But it’s also been doubling in every link of the production pipeline that led to its creation.

The effects anticipated by LOAR reach far beyond the computer and smart phone businesses. Recently Google’s cofounder Larry Page met with Kurzweil to discuss global warming, and they parted optimistic. In twenty years, they claimed, nanotechnology will enable sun-powered energy to become more economical than oil or coal, and the industry will be able to provide the earth with 100 percent of its energy needs. While solar energy supplies just half a percent of the world’s energy requirements today, they reason that its share is doubling every two years, as it has for the last twenty years. So in two more years solar energy will account for 1 percent of world energy requirements, in four years 2 percent, and eight doublings after the 1 percent mark, sixteen more years on, two to the eighth power times as much, or 256 percent of world energy needs. Even accounting for increased population and energy demands two decades hence, that should be enough solar power to cover it and then some. And so, according to Kurzweil and Page, global warming will be solved.
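The projection is easy to check. Here is a minimal sketch of the doubling arithmetic in Python; the half-percent starting share and the two-year doubling time come from the passage above, and the twenty-year horizon is simply the window Page and Kurzweil project over:

    # Projecting the solar-energy share under a fixed two-year doubling time,
    # per the Kurzweil/Page reasoning described above.

    start_share = 0.5        # percent of world energy supplied by solar today
    doubling_time = 2        # years per doubling, as claimed in the passage

    for year in range(0, 20 + 1, doubling_time):
        share = start_share * 2 ** (year / doubling_time)
        print(f"year {year:2d}: {share:7.1f}% of world energy needs")

    # Year 18 already lands at 256%, which is why the projection claims solar
    # could cover total demand, and then some, within two decades.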

And so will, um, mortality. According to Kurzweil, the means are almost in reach for extending life indefinitely.

“We now have the actual means of understanding the software of life and reprogramming it; we can turn genes off without any interference, we can add new genes, whole new organs with stem cell therapy,” Kurzweil said. “The point is that medicine is now an information technology—it’s going to double in power every year. These technologies will be a million times more powerful for the same cost in twenty years.”

Kurzweil believes that the shortest route to AGI is to reverse engineer the brain—intricately scanning it to yield a collection of brain-based circuits. Represented in algorithms or hardware networks, these circuits will then be fired up on a computer as a unified synthetic brain, and taught everything it needs to know. Several organizations are working on projects to accomplish this path to AGI. We’ll discuss some approaches and roadblocks ahead.

*   *   *

The evolutionary pace of the hardware needed to run a virtual brain calls for a closer look. Let’s start with the human brain and work our way toward computers that could emulate it. Kurzweil writes that the brain has about 100 billion neurons, each connected to about a thousand other neurons. That makes about 100 trillion interneuronal connections. Each connection is capable of making some 200 calculations per second (electronic circuits are at least 10 million times faster). Kurzweil multiplies the brain’s interneuronal connections by their calculations per second and gets 20 million billion calculations per second, or 20,000,000,000,000,000.
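Kurzweil’s figure is simple to reproduce. A quick check of the multiplication in Python, using only the counts quoted above:

    # Reproducing Kurzweil's estimate of the brain's processing rate,
    # using the figures quoted in the paragraph above.

    neurons = 100e9                # about 100 billion neurons
    connections_per_neuron = 1e3   # roughly 1,000 connections per neuron
    calcs_per_connection = 200     # calculations per second per connection

    connections = neurons * connections_per_neuron    # about 100 trillion connections
    brain_cps = connections * calcs_per_connection    # calculations per second

    print(f"interneuronal connections: {connections:.0e}")              # 1e+14
    print(f"brain estimate: {brain_cps:.0e} calculations per second")   # 2e+16, i.e. 20 petaflops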

The title of fastest supercomputer changes hands almost monthly, but right now the Department of Energy’s Sequoia reigns with more than sixteen petaflops. That’s 16,000,000,000,000,000 calculations per second, or roughly 80 percent of the speed of the human brain, as calculated by Kurzweil in 2000. But by 2005’s The Singularity Is Near, Kurzweil had trimmed his brain-speed calculation down from twenty to sixteen petaflops, and estimated a supercomputer would reach it by 2013. Sequoia achieved it a year sooner.

Are we that close to brute-forcing brainpower? The numbers can be deceiving. Brains are parallel processors and excel at some jobs, while computers operate serially and excel at others. Brains are slow and work in spikes of neuronal activity. Computers can process faster and for longer, even indefinitely.

But human brains remain our sole example of advanced intelligence. If “brute force” can compete, computers will have to perform impressive cognitive feats. But consider a few of the complex systems today’s supercomputers routinely model: weather systems, 3-D nuclear detonations, and molecular dynamics for manufacturing. Does the human brain contain a similar magnitude of complexity, or an order of magnitude higher? According to all indications, it’s in the same ballpark.

Perhaps, as Kurzweil says, conquering the human brain is just around the corner, and the next thirty years of computer science will be like a hundred and forty at today’s rate of progress. Factor in too that creating AGI is also an information technology. With exponentially increased computer speed, AI researchers can conduct their work faster. That means writing more complex algorithms, more processing-intensive algorithms, taking on harder computational problems, and conducting more experiments. Faster computers contribute to a more robust AI industry, which in turn produces more computer researchers, and faster, more useful tools for achieving AGI.

Kurzweil writes that after researchers produce a computer capable of passing the Turing test by 2029, things will accelerate even faster. But he doesn’t predict a full-blown Singularity until sixteen years later, or 2045. Then the pace of technological advance will exceed our brains’ ability to steer it. Therefore, he argues, we must augment them in order to keep up. That means plugging brain-boosting technologies directly into our neurocircuitry, in the same way that today’s cochlear implants connect to auditory nerves to help the hearing impaired. We’ll perk up those slow interneuronal connections, and think faster, more deeply, while remembering more. We’ll have access to all of human knowledge, and will, computerlike, instantly be able to share our thoughts and experiences with others, while experiencing theirs. Ultimately technology will enable us to upgrade our brains to a medium more durable than brain tissue, or we will upload our minds to computers, all the while preserving the qualities that make us us.

This outline of the future assumes of course that the you in you, your self, is transportable, and that’s one whale of an assumption. But for Kurzweil, this is the path to immortality, and a breadth of knowledge and experience beyond what we can currently comprehend. Intelligence augmentation will happen so incrementally that few will reject it. But “incrementally” means by 2045, so over roughly the next thirty years, with the vast majority of change taking place in the last half decade or so. Is that gradual? I don’t think so.

As we noted earlier, Apple came out with six versions of the iPhone in six years. According to Moore’s law, the underlying hardware could have undergone two or more doublings in that time, yet it underwent just one. Why? Because of the lag imposed by the development, prototyping, and manufacture of the iPhone’s components, including its processor, camera, memory, storage, screen, and so on, followed by the marketing and sale of the iPhone itself.
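The shortfall is easy to put in numbers. A back-of-the-envelope sketch in Python; the two-year doubling period is Moore’s law as stated earlier, and the “more than doubled its speed” observation from the previous section is rounded here to an assumed 2x speedup:

    import math

    # Comparing the doublings Moore's law allows over six years with the
    # roughly one doubling the shipping iPhone actually showed.
    # The 2x observed speedup is a rounding of "more than doubled its speed" above.

    years = 6
    doubling_period = 2                                # years per doubling (Moore's law)
    observed_speedup = 2                               # assumed ~2x in the shipped product

    possible_doublings = years / doubling_period       # 3 doublings, i.e. an 8x speedup
    observed_doublings = math.log2(observed_speedup)   # about 1 doubling

    print(f"doublings Moore's law allows: {possible_doublings:.0f} "
          f"({2 ** possible_doublings:.0f}x)")
    print(f"doublings the product showed: {observed_doublings:.0f} ({observed_speedup}x)")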

Will the lag between development and sale ever go away? Perhaps someday hardware, like software, will upgrade itself automatically. But that probably won’t happen until science has mastered nanotechnology or 3-D printing becomes ubiquitous. And when we’re upgrading components of our own brains, instead of updating Microsoft Office or buying a few chips of RAM, it’ll be a much more delicate procedure than anything we’ve experienced before, at least at first.

Yet Kurzweil claims that in this century we’ll experience twenty thousand years of technological progress in a hundred calendar years. Could we tolerate so much progress coming so fast?

Nicholas Carr, author of The Shallows, argues that smart phones and computers are lowering the quality of our thoughts, and changing the shape of our brains. In his book, Virtually You, psychiatrist Elias Aboujaoude warns that social networking and role-playing games encourage a swarm of maladies, including narcissism and egocentricity. Immersion in technology weakens individuality and character, proposes the programmer who pioneered virtual reality, Jaron Lanier, author of You Are Not a Gadget: A Manifesto. These experts warn that detrimental effects come from computers outside our bodies. Yet Kurzweil proposes only good things will come of computers inside our bodies. I think it’s implausible to expect that hundreds of thousands of years of evolution will turn on a dime in thirty years, and that we can be reprogrammed to love an existence that is so different from the lives we’ve evolved to fit.

It’s more likely that humans will decide on a rate of change they can manage and control. Each may choose differently, with many people settling on similar rates just as they settle on similar clothing styles, cars, and computers. We know Moore’s law and LOAR are economic rather than deterministic laws. If enough people with enough resources want to artificially accelerate their brains, they’ll create some demand. However, I think Kurzweil is greatly overestimating future persons’ commitment to faster thinking and longer living. I don’t think a Singularity full of delights, as he defines it, will be available anyway. AI developed without sufficient safeguards will prevent it.

The quest to create AGI is unstoppable and probably ungovernable. And because of the dynamics of doublings expressed by LOAR, AGI will take the world stage (and I mean take) much sooner than we think.

 

Chapter Ten

The Singularitarian

In contrast with our intellect, computers double their performance every eighteen months. So the danger is real that they could develop intelligence and take over the world.

—Stephen Hawking, physicist

Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended. Is such progress avoidable? If not to be avoided, can events be guided so that we may survive?

—Vernor Vinge, author, professor, computer scientist

Each year since 2005, the Machine Intelligence Research Institute, formerly the Singularity Institute for Artificial Intelligence, has held a Singularity Summit. Over two days, a roster of speakers preaches to about a thousand members of the choir about the Singularity big picture: its impact on jobs and the economy, health and longevity, and its ethical implications. Speakers at the 2011 summit in New York City included science legends like Mathematica’s Stephen Wolfram; Peter Thiel, a dot-com billionaire who pays tech-savvy teens to skip college and start companies; and IBM’s David Ferrucci, principal investigator for the DeepQA/Watson Project. Eliezer Yudkowsky always speaks, and there’s usually an ethicist or two as well as spokespeople for the extropian and transhuman communities. Extropians explore technologies and therapies that will permit humans to live forever. Transhumans think about hardware and cosmetic ways of increasing human capability, beauty, and … opportunities to live forever. Standing astride all the factions is the Colossus of the Singularity, a cofounder of the Singularity Summits, and the star of each gathering, Ray Kurzweil.

The 2011 summit’s theme was IBM’s DeepQA (question answering) computer Watson, and Kurzweil gave a rote presentation on the history of chatbots and Q&A systems, entitled “From Eliza to Watson.” But midtalk he came to life to gut a maladroit essay, coauthored by Microsoft cofounder Paul Allen, that attacked his Singularity hypothesis.

Kurzweil didn’t look particularly well—thin, a little halting, quieter than usual. He’s not the kind of speaker who eats up the stage or delivers zingers. To the contrary, he’s got a mild, robotic delivery, the kind that seems designed for hostage negotiations or bedtime stories. But it plays well against the casually revolutionary things he has to say. In an age when dot-com billionaires give presentations in pressed jeans, Kurzweil is wearing your uncle’s brown slacks, along with tasseled loafers, sports jacket, and glasses. He’s neither large nor small, but recently has started looking old, especially in comparison to the vigorous Kurzweil of my memory. He had been a mere fifty-two or so when I last interviewed him, and hadn’t yet started the intense calorie restriction diet that’s now part of his plan to slow his aging. With a fine-tuned regimen of diet, exercise, and supplements, Kurzweil plans to fend off death until technology finds the cure he’s certain will come.
