*   *   *

Let’s look at that again. Generally intelligent systems are by definition self-aware. And goal-seeking, self-aware systems will make themselves self-improving. However, improving oneself is a delicate operation, something like giving yourself a face-lift with a knife and mirror. Omohundro told me, “Improving itself is very sensitive for the system—as sensitive a moment as when the chess robot thinks about turning itself off. If they improve themselves, say, to increase their efficiency, they can always reverse that, if it becomes nonoptimal in the future. But if they make a mistake, like subtly changing their goals, from their current perspective that would be a disaster. They would spend their future pursuing a defective version of their present goals. So because of this possible outcome, any self-improvement is a sensitive issue.”

But self-aware, self-improving AI is up to the challenge. Like us, it can predict, or model, possible futures.

“It has a model of its own programming language and a model of its own program, a model of the hardware that it is sitting on, and a model of the logic that it uses to reason. It is able to create its own software code and watch itself executing that code so that it can learn from its own behavior. It can reason about possible changes that it might make to itself. It can change every aspect of itself to improve its behavior in the future.”

Omohundro predicts self-aware, self-improving systems will develop four primary drives that are similar to human biological drives: efficiency, self-preservation, resource acquisition, and creativity. How these drives come into being is a particularly fascinating window into the nature of AI. AI doesn’t develop them because these are intrinsic qualities of rational agents. Instead, a sufficiently intelligent AI will develop these drives to avoid predictable problems in achieving its goals, which Omohundro calls vulnerabilities. The AI backs into these drives, because without them it would blunder from one resource-wasting mistake to another.

The first drive, efficiency, means that a self-improving system will make the most of the resources at its disposal—space, time, matter, and energy. It will strive to make itself compact and fast, computationally and physically. For maximum efficiency it will balance and rebalance how it apportions resources to software and hardware. Memory allocation will be especially important for a system that learns and improves; so will improving rationality and avoiding wasteful logic. Suppose, Omohundro says, an AI prefers being in San Francisco to Palo Alto, being in Berkeley to San Francisco, and being in Palo Alto to Berkeley. If it acted on these preferences, it’d be stuck in a three-city loop, like an Asimov robot. Instead, Omohundro’s self-improving AI would anticipate the problem in advance and solve it. It might even use a clever technique like genetic programming, which is especially good at solving “Traveling Salesman” type routing puzzles. A self-improving system might be taught genetic programming, and apply it to yield fast, energy-conserving results. And if it wasn’t taught genetic programming, it might invent it.
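To see why the three-city loop is a genuine failure mode and not just a quirk, consider what detecting it takes. The sketch below is our illustration, not Omohundro’s code: it encodes the preferences above as (preferred, dispreferred) pairs and searches them for a cycle, the kind of self-audit a self-improving system would perform before acting. (A full genetic-programming solver for routing puzzles is beyond a few lines; this shows only the cycle-detection step.)

```python
# A minimal sketch (ours, not Omohundro's) of auditing preferences for the
# three-city loop described above. Preferences are (preferred, dispreferred)
# pairs; a cycle means no best choice exists, so an agent acting on them
# would circle forever.

prefs = [
    ("San Francisco", "Palo Alto"),   # prefers San Francisco to Palo Alto
    ("Berkeley", "San Francisco"),    # prefers Berkeley to San Francisco
    ("Palo Alto", "Berkeley"),        # prefers Palo Alto to Berkeley
]

def find_cycle(prefs):
    """Depth-first search for a preference cycle; returns it, or None."""
    graph = {}
    for better, worse in prefs:
        graph.setdefault(worse, []).append(better)  # edge: worse -> better

    def dfs(node, path):
        if node in path:                  # revisited a city: cycle found
            return path[path.index(node):] + [node]
        for nxt in graph.get(node, []):
            cycle = dfs(nxt, path + [node])
            if cycle:
                return cycle
        return None

    for start in graph:
        cycle = dfs(start, [])
        if cycle:
            return cycle
    return None

print(find_cycle(prefs))
# ['Palo Alto', 'San Francisco', 'Berkeley', 'Palo Alto'] -- the loop a
# self-improving agent would repair before acting on its preferences.
```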

Modifying its own hardware is within this system’s capability, so it would seek the most efficient materials and structure. Since atomic precision in its construction would reward the system with greater resource efficiency, it would seek out nanotechnology. And remarkably, if nanotech didn’t yet exist, the system would feel pressure to invent it, too. Recall the dark turn of events in the Busy Child scenario, when the ASI set about transforming Earth and its inhabitants into computable resource material? This is the drive that compels the Busy Child to use or develop any technology or procedure that reduces waste, including nanotechnology. Creating virtual environments in which to test hypotheses is also an energy-saver, so self-aware systems might virtualize what they do not need to do in “meat space” (programmer lingo for real life).

*   *   *

It’s with the next drive, self-preservation, that AI really jumps the safety wall separating machines from tooth and claw. We’ve already seen how Omohundro’s chess-playing robot feels about turning itself off. It may decide to use substantial resources, in fact all the resources currently in use by mankind, to investigate whether now is the right time to turn itself off, or whether it’s been fooled about the nature of reality. If the prospect of turning itself off agitates a chess-playing robot, being destroyed makes it downright angry. A self-aware system would take action to avoid its own demise, not because it intrinsically values its existence, but because it can’t fulfill its goals if it is “dead.” Omohundro posits that this drive could make an AI go to great lengths to ensure its survival—making multiple copies of itself, for example. These extreme measures are expensive—they use up resources. But the AI will expend them if it perceives the threat is worth the cost, and resources are available. In the Busy Child scenario, the AI determines that the problem of escaping the AI box in which it is confined is worth mounting a team approach, since at any moment it could be turned off. It makes duplicate copies of itself and swarms the problem. But that’s a fine thing to propose when there’s plenty of storage space on the supercomputer; if there’s little room it is a desperate and perhaps impossible measure.

Once the Busy Child ASI escapes, it plays strenuous self-defense: hiding copies of itself in clouds, creating botnets to ward off attackers, and more.

Resources used for self-preservation should be commensurate with the threat. However, a purely rational AI may have a different notion of commensurate than we partially rational humans. If it has surplus resources, its idea of self-preservation may expand to include proactive attacks on future threats. To sufficiently advanced AI, anything that has the potential to develop into a future threat may constitute a threat it should eliminate. And remember, machines won’t think about time the way we do. Barring accidents, sufficiently advanced self-improving machines are immortal. The longer you exist, the more threats you’ll encounter, and the longer your lead time will be to deal with them. So, an ASI may want to terminate threats that won’t turn up for a thousand years.

Wait a minute, doesn’t that include humans? Without explicit instructions otherwise, wouldn’t it always be the case that we humans would pose a current or future risk to smart machines that we create? While we’re busy avoiding risks of unintended consequences from AI, AI will be scrutinizing humans for dangerous consequences of sharing the world with us.

Consider an artificial superintelligence a thousand times more intelligent than the smartest human. As we noted in chapter 1, nuclear weapons are our own species’ most destructive invention. What kinds of weapons could a creature a thousand times more intelligent devise? One AI maker, Hugo de Garis, thinks a future AI’s drive to protect itself will contribute to catastrophic political tensions. “When people are surrounded by ever increasingly intelligent robots and other artificial brain–based products, the general level of alarm will increase to the point of panic. Assassinations of brain builder company CEOs will start, robot factories will be arsoned and sabotaged, etc.”

In his 2005 nonfiction book The Artilect War, de Garis proposes a future in which megawars are ignited by political divisions brought about by ASI development. This panic isn’t hard to envision once you’ve considered the consequences of ASI’s self-protection drive. First, de Garis predicts that technologies including AI, nanotechnology, computational neuroscience, and quantum computing (using subatomic particles to perform computational processes) will come together to allow the creation of “artilects,” or artificial intellects. Housed in computers as large as planets, artilects will be trillions of times more intelligent than man. Second, a political debate about whether or not to build artilects comes to dominate twenty-first-century politics. The hot issues are:

Will the robots become smarter than us? Should humanity place an upper limit on robot and artificial brain intelligence? Can the rise of artificial intelligence be stopped? If not, then what are the consequences for human survival if we become the number 2 Species?

Mankind divides into three camps: those who want to destroy artilects, those who want to keep developing them, and those seeking to merge with artilects and control their overwhelming technology. Nobody wins. In the climax of de Garis’s scenario, using the fearsome weapons of the late twenty-first century, the three parties clash. The result? “Gigadeath,” a term de Garis coined to describe the demise of billions of humans.

Perhaps de Garis overestimates the zeal of anti-artilect forces, supposing that they’ll engage in a war almost certain to kill billions of people in order to stop technology that might kill billions of people. But I think the AI maker’s analysis of the dilemma we’ll face is correct: shall we build our robot replacements or not? On this, de Garis is clear. “Humans should not stand in the way of a higher form of evolution. These machines are godlike. It is human destiny to create them.”

In fact, de Garis has laid the groundwork for creating them himself. He plans to combine two “black box” techniques, neural networks and evolutionary programming, to build mechanical brains. His device, a so-called Darwin Machine, is intended to evolve its own architecture.
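De Garis’s actual designs relied on specialized hardware, but the general recipe of combining neural networks with evolutionary search can be sketched in a few lines. The toy below is our illustration, not the Darwin Machine: a population of single-neuron “networks” is mutated and selected until it fits a target function. Every detail here (the target y = 2x + 1, the population sizes, the mutation scale) is an arbitrary assumption for demonstration.

```python
import random

# A toy illustration (not de Garis's design) of evolving neural networks:
# a population of tiny networks is mutated, and the fittest survive.
# "Fitness" is how well a one-neuron network approximates y = 2x + 1.

def predict(weights, x):
    w, b = weights
    return w * x + b            # a single linear "neuron"

def fitness(weights):
    # Negative squared error over sample points (higher is better).
    return -sum((predict(weights, x) - (2 * x + 1)) ** 2 for x in range(-5, 6))

def mutate(weights):
    return tuple(v + random.gauss(0, 0.1) for v in weights)

population = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                      # selection
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(40)]    # variation

print(max(population, key=fitness))  # should converge toward (2.0, 1.0)
```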

*   *   *

AI’s second most dangerous drive, resource acquisition, compels the system to gather whatever assets it needs to increase its chances of achieving its goals. According to Omohundro, in the absence of careful instructions on how it should acquire resources, “a system will consider stealing them, committing fraud and breaking into banks as a great way to get resources.” If it needs energy, not money, it will take ours. If it needs atoms, not energy or money, ours again.

“These systems intrinsically want more stuff. They want more matter, they want more free energy, they want more space, because they can meet their goals more effectively if they have those things.”

Unprompted by us, extremely powerful AI will open the door to all sorts of new resource-acquiring technology. We just have to be alive to enjoy it.

“They are going to want to build fusion reactors to extract the energy that’s in nuclei and they’re going to want to do space exploration. You’re building a chess machine, and the damn thing wants to build a spaceship. Because that’s where the resources are, in space, especially if their time horizon is very long.”

And as we’ve discussed, self-improving machines could live forever. In chapter 3 we learned that if ASI got out of our control, it could be a threat not just to our planet, but to the galaxy. Resource acquisition is the drive that would push an ASI to quest beyond Earth’s atmosphere. This twist in rational agent behavior may bring to mind bad science-fiction films. But consider the motives that drove humans into space: Cold War one-upmanship, the spirit of exploration, American and Soviet manifest destiny, establishing a defense foothold in space, and developing weightless industrial manufacturing (which seemed like a good idea at the time). An ASI’s drive to go into space would be stronger, more akin to survival.

“Space holds such an abundance of riches that systems with longer time horizons are likely to devote substantial resources to developing space exploration independent of their explicit goals,” says Omohundro. “There is a first-mover advantage to reaching unused resources first. If there is competition for space resources, the resulting ‘arms race’ is likely to ultimately lead to expansion at speeds approaching the speed of light.”

Yes, he said the speed of light. Let’s review how we got here from a chess-playing robot.

First, a self-aware, self-improving system will be rational. It is rational to acquire resources—the more resources the system has, the more likely it is to meet its goals and to avoid vulnerabilities. If no instructions limiting its resource acquisition have been engineered into its goals and values, the system will look for means to acquire more resources. It might do a lot of things that are counterintuitive to how we think about machines, like breaking into computers and even banks, to satisfy its drives.

A self-aware, self-improving system has enough intelligence to perform the R&D necessary to improve itself. As its intelligence grows, so do its R&D abilities. It may seek or manufacture robotic bodies, or exchange goods and services with humans to do so, to construct whatever infrastructure it needs. Even spaceships.

Why robotic bodies? Robots, of course, are a venerable trope in books and film, a theatrical stand-in for artificial intelligence. But robot bodies belong in discussions of AI for two reasons. First, as we’ll explore later, occupying a body may be the best way for an AI to develop knowledge about the world. Some theorists even think intelligence cannot develop without being contained in a body. Our own intelligence is a strong argument for that. Second, a resource-acquiring AI would seek a robotic body for the same reason Honda gave its robot ASIMO a humanoid body. So it can use our stuff.

Honda has been developing ASIMO and its humanoid predecessors since 1986 to assist the elderly—Japan’s fastest-growing demographic—at home. Human shape and dexterity are best for a machine that will be called upon to climb stairs, turn on lights, sweep up, and manipulate pots and pans, all in a human dwelling. Similarly, an AI that wanted to make efficient use of our manufacturing plants, our buildings, our vehicles, and our tools would want a humanoid shape.

Now let’s get back to space.

We’ve discussed how nanotechnology would bring broad benefits for a superintelligence, and how a rational system would be motivated to develop it. Space travel is a way to gain access to materials and energy. What drives the system into space is the desire to fulfill its goals as well as to avoid vulnerabilities. The system looks into possible futures and avoids those in which its goals are not fulfilled. Not taking advantage of outer space’s seemingly limitless resources is an obvious path to disappointment.
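“Looking into possible futures” is, at bottom, expected-utility reasoning, which can be made concrete in a few lines. The sketch below is our illustration with invented numbers, not anything from Omohundro: each action leads to a set of weighted futures, and the agent picks the action whose futures best fulfill its goal.

```python
# A minimal decision-theoretic sketch (ours, with made-up numbers) of
# "looking into possible futures": score each action by the expected
# fulfillment of the goal across the futures it allows, then avoid
# actions whose futures leave the goal unmet.

# Hypothetical: each action maps to (probability, goal_fulfillment) pairs.
futures = {
    "stay on Earth":   [(1.0, 0.3)],              # limited resources
    "expand to space": [(0.9, 0.9), (0.1, 0.0)],  # risky but resource-rich
}

def expected_fulfillment(outcomes):
    return sum(p * value for p, value in outcomes)

best_action = max(futures, key=lambda a: expected_fulfillment(futures[a]))
print(best_action)  # "expand to space" -- 0.81 expected vs. 0.3
```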
