
“What about humanity is worth preserving?” is a profoundly interesting and important question, one we humans have been asking in various forms for a long time. What constitutes the good life? What are valor, righteousness, excellence? What art is inspiring and what music is beautiful? The necessity of specifying our values is one of the ways in which the quest for general artificial intelligence compels us to get to know ourselves better. Omohundro believes this deep self-exploration will yield enriching, not terrifying technology. He writes, “With both logic and inspiration we can work toward building a technology that empowers the human spirit rather than diminishing it.”

*   *   *

Of course I have a different perspective—I don’t share Omohundro’s optimism. But I appreciate the critical importance of developing a science for understanding our intelligent creations. His warning about advanced AI bears repeating:

I don’t think most AI researchers thought there’d be any danger in creating, say, a chess-playing robot. But my analysis shows that we should think carefully about what values we put in or we’ll get something more along the lines of a psychopathic, egoistic, self-oriented entity.

My anecdotal evidence says he’s right about the AI makers—those I’ve spoken with, busily beavering away to make intelligent systems, don’t think what they’re doing is dangerous. Most, however, have a deep-seated sense that machine intelligence will replace human intelligence. But they don’t speculate about how that will come about.

AI makers tend to believe intelligent systems will only do what they’re programmed to do. But Omohundro says they’ll do that and a great deal more, and that we can know with some precision how advanced AI systems will behave. Some of that behavior is unexpected and creative. It’s embedded in a concept so alarmingly simple that it took insight like Omohundro’s to spot it: *for a sufficiently intelligent system, avoiding vulnerabilities is as powerful a motivator as explicitly constructed goals and subgoals.*

We must beware the unintended consequences of the goals we program into intelligent systems, and also beware the consequences of what we leave out.

 

Chapter Seven

The Intelligence Explosion

From the standpoint of existential risk, one of the most critical points about Artificial Intelligence is that an Artificial Intelligence might increase in intelligence extremely fast. The obvious reason to suspect this possibility is recursive self-improvement. (Good 1965.) The AI becomes smarter, including becoming smarter at the task of writing the internal cognitive functions of an AI, so the AI can rewrite its existing cognitive functions to work even better, which makes the AI still smarter, including smarter at the task of rewriting itself, so that it makes yet more improvements … The key implication for our purposes is that an AI might make a huge jump in intelligence after reaching some threshold of criticality.

—Eliezer Yudkowsky, research fellow, Machine Intelligence Research Institute

Did you mean: recursion

—Google search engine upon looking up “recursion”

So far in this book we’ve considered an AI scenario so catastrophic that it begs for closer scrutiny. We’ve investigated a promising idea about how to construct AI to defuse the danger—Friendly AI—and found that it is incomplete. In fact, the general idea of coding an intelligent system with permanently safe goals or evolvable safe goal-generating abilities, intended to endure through a large number of self-improving iterations, just seems wishful.

Next, we explored why AI would ever be dangerous. We found that many of the drives that would motivate self-aware, self-improving computer systems could easily lead to catastrophic outcomes for humans. These outcomes highlight an almost liturgical peril of sins of commission and omission in error-prone human programming.

AGI, when achieved, could be unpredictable and dangerous, but probably not catastrophically so in the short term. Even if an AGI made multiple copies of itself, or took a team approach to its escape, it’d have no greater potential for dangerous behavior than a group of intelligent people. Potential AGI danger lies in the hard kernel of the Busy Child scenario, the rapid recursive self-improvement that enables an AI to bootstrap itself from artificial general intelligence to artificial superintelligence. It’s commonly called the “intelligence explosion.”
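To make the bootstrapping concrete, here is a minimal toy model in Python. It is purely illustrative, not anything from Good, Omohundro, or Yudkowsky: the growth law, the 0.5 coefficient, and the feedback exponent k are assumptions chosen only to show how a “threshold of criticality” behaves, with capability compounding when each rewrite makes the next rewrite more productive.

```python
# Toy model of recursive self-improvement. Entirely illustrative:
# the growth law and constants are assumptions, not from the book.
# "capability" stands in for intelligence; each cycle the system
# rewrites itself, and the size of the gain depends on the
# capability of the rewriter via the feedback exponent k.

def trajectory(k, capability=1.0, cycles=10):
    """Return capability after each of `cycles` self-rewrites."""
    history = []
    for _ in range(cycles):
        capability += 0.5 * capability ** k  # smarter rewriter, bigger gain
        history.append(round(capability, 1))
    return history

# k < 1: subcritical. Gains grow more slowly than capability itself,
# so the per-cycle growth rate keeps shrinking: steady, not explosive.
print(trajectory(k=0.5))   # [1.5, 2.1, 2.8, 3.7, 4.6, ...]

# k > 1: supercritical. Each rewrite buys a disproportionately larger
# next rewrite, and capability runs away: an intelligence explosion.
print(trajectory(k=1.5))   # [1.5, 2.4, 4.3, 8.8, 21.7, ...]
```

Nothing about the system’s goals changes between the two runs; only the return on self-improvement does, which is why the threshold, wherever it actually lies, matters more than the starting intelligence.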

A self-aware, self-improving system will seek to better fulfill its goals, and to minimize vulnerabilities, by improving itself. It won’t seek just minor improvements, but major, ongoing improvements to every aspect of its cognitive abilities, particularly those that bear on improving its intelligence itself. It will seek better-than-human intelligence, or superintelligence. In the absence of ingenious programming, we have a great deal to fear from a superintelligent machine.

From Steve Omohundro we know the AGI will naturally seek an intelligence explosion. But what exactly *is* an intelligence explosion? What are its minimum hardware and software requirements? Will factors such as insufficient funding and the sheer complexity of achieving computational intelligence block an intelligence explosion from ever taking place?

Before addressing the mechanics of the intelligence explosion, it’s important to explore exactly what the term means, and how the idea of explosive artificial intelligence was proposed and developed by mathematician I. J. Good.

*   *   *

Interstate 81 starts in New York State and ends in Tennessee, traversing almost the entire range of the Appalachian Mountains. From the middle of Virginia heading south, the highway snakes up and down deeply forested hills and sweeping, grassy meadows, through some of the most striking and darkly primordial vistas in the United States. Contained within the Appalachians are the Blue Ridge Mountain Range (from Pennsylvania to Georgia) and the Great Smokies (along the North Carolina–Tennessee border). The farther south you go, the harder it is to get a cell phone signal; churches outnumber houses, and the music on the radio changes from country to gospel, then to hellfire preachers. I heard a memorable song about temptation called “Long Black Train” by Josh Turner. I heard a preacher begin a sermon about Abraham and Isaac, lose his way, and end with the parable of the loaves and fishes and *hell*, thrown in for good measure. I was closing in on the Smoky Mountains, the North Carolina border, and Virginia Tech—the Virginia Polytechnic Institute and State University in Blacksburg, Virginia. The university’s motto:
INVENT THE FUTURE.

Twenty years ago, driving this same stretch of I-81, you might have been overtaken by a Triumph Spitfire convertible with the license plate 007 IJG. The vanity plate belonged to I. J. Good, who arrived in Blacksburg in 1967 as a Distinguished Professor of Statistics. The “007” was an homage to Ian Fleming and to Good’s secret work as a World War II code breaker at Bletchley Park, England. Breaking Enigma, the encryption system Germany’s armed forces used to encode messages, substantially helped bring about the Axis powers’ defeat. At Bletchley Park, Good worked alongside Alan Turing, called the father of modern computation (and creator of chapter 4’s Turing test), and helped build and program one of the first electronic computers.

In Blacksburg, Good was a celebrity professor—his salary was higher than the university president’s. A nut for numbers, he noted that he arrived in Blacksburg on the seventh hour of the seventh day of the seventh month of the seventh year of the seventh decade, and was housed in unit seven on the seventh block of Terrace View Apartments. Good told his friends that God threw coincidences at atheists like him to convince them of his existence.

“I have a quarter-baked idea that God provides more coincidences the more one doubts Her existence, thereby providing one with evidence without forcing one to believe,” Good said. “When I believe that theory, the coincidences will presumably stop.”

I was headed to Blacksburg to learn about Good from his friends; he had died recently, at age ninety-two. Mostly, I wanted to learn how I. J. Good happened to invent the idea of an intelligence explosion, and whether it really was possible. The intelligence explosion was the first big link in the idea chain that gave birth to the Singularity hypothesis.

Unfortunately, for the foreseeable future, the mention of Virginia Tech will evoke the Virginia Tech Massacre. Here, on April 16, 2007, senior English major Seung-Hui Cho killed thirty-two students and faculty and wounded twenty-five more. At the time it was the deadliest shooting by a lone gunman in U.S. history. The broad outlines are that Cho shot and killed an undergraduate woman in Ambler Johnston Hall, a Virginia Tech dormitory, then killed a male undergraduate who came to her aid. Two hours later Cho began the rampage that caused most of the casualties. Except for the first two, he shot his victims in Virginia Tech’s Norris Hall. Before he started shooting, Cho had chained and padlocked the building’s heavy oaken doors to prevent anyone from escaping.

When I. J. Good’s longtime friend and fellow statistician Dr. Golde Holtzman showed me Good’s former office in Hutcheson Hall, on the other side of the beautiful green Drillfield (a military parade ground in Tech’s early life), I noticed you could just see Norris Hall from his window. But by the time the tragedy unfolded, Holtzman told me, Good had retired. He was not in his office but at home, perhaps calculating the probability of God’s existence.

According to Dr. Holtzman, sometime before he died, Good updated that probability from zero to point one (0.1). He did this because, as a statistician, he was a longtime Bayesian. Named for the eighteenth-century mathematician and minister Thomas Bayes, Bayesian statistics’ main idea is that in calculating the probability of some statement, you can start with a personal degree of belief, called a prior. Then you update that belief as new evidence comes in that supports the statement or doesn’t.

If Good’s original *disbelief* in God had remained 100 percent, no amount of data, not even God’s appearance, could change his mind. So, to be consistent with his Bayesian perspective, Good assigned a small positive probability to the existence of God to make sure he could learn from new data, if it arose.
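Good’s move is easy to check against Bayes’ rule itself. The sketch below is a worked illustration, not a calculation Good made; the likelihood numbers are invented solely to show that a prior of exactly zero is immune to any evidence, while even a small positive prior can be driven toward near-certainty.

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior probability of a hypothesis after one piece of
    evidence, via Bayes' rule:
        P(H|E) = P(E|H) * P(H) / P(E),
    where P(E) = P(E|H)P(H) + P(E|~H)P(~H).
    """
    evidence = likelihood_if_true * prior + likelihood_if_false * (1 - prior)
    return likelihood_if_true * prior / evidence

# Invented evidence that is a thousand times likelier if the
# hypothesis is true than if it is false.
strong = dict(likelihood_if_true=0.999, likelihood_if_false=0.000999)

print(bayes_update(0.0, **strong))   # 0.0    -- a zero prior never moves
print(bayes_update(0.1, **strong))   # ~0.991 -- a small prior can learn
```

Because the posterior is the prior multiplied by how well the evidence fits, a zero prior zeroes out every future update; Good’s small positive probability was the price of remaining persuadable.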

In the 1965 paper “Speculations Concerning the First Ultraintelligent Machine,” Good laid out a simple and elegant proof that’s rarely left out of discussions of artificial intelligence and the Singularity:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make …

The Singularity has three well-developed definitions—Good’s, above, is the first. Good never used the term “singularity,” but he got the ball rolling by positing what he thought of as an inescapable and beneficial milestone in human history—the invention of smarter-than-human machines. To paraphrase Good, if you make a superintelligent machine, it will be better than humans at everything we use our brains for, and that includes making superintelligent machines. The first machine would then set off an intelligence explosion, a rapid increase in intelligence, as it repeatedly self-improved, or simply made smarter machines. This machine or machines would leave man’s brainpower in the dust. After the intelligence explosion, man wouldn’t have to invent anything else—all his needs would be met by machines.

This paragraph of Good’s paper rightfully finds its way into books, papers, and essays about the Singularity, the future of artificial intelligence, and its risks. But two important ideas almost always get left out. The first is the introductory sentence of the paper. It’s a doozy: “The survival of man depends on the early construction of an ultraintelligent machine.” The second is the frequently omitted *second half* of the last sentence in the paragraph. The last sentence of Good’s most often quoted paragraph *should* read in its entirety:

Thus the first ultraintelligent machine is the last invention that man need ever make, *provided that the machine is docile enough to tell us how to keep it under control* (italics mine).

These two sentences tell us important things about Good’s intentions. He felt that we humans were beset by so many complex, looming problems—the nuclear arms race, pollution, war, and so on—that we could only be saved by better thinking, and that would come from superintelligent machines. The second sentence lets us know that the father of the intelligence explosion concept was acutely aware that producing superintelligent machines, however necessary for our survival, could blow up in our faces. Keeping an ultraintelligent machine under control isn’t a given, Good tells us. He doesn’t believe we will even know how to do it; the machine will have to *tell us* itself.

Good knew a few things about machines that could save the world—he had helped build and run some of the earliest electronic computers, used at Bletchley Park to help defeat Germany. He also knew something about existential risk—he was a Jew fighting against the Nazis, and his father had escaped pogroms in Poland by immigrating to the United Kingdom.

As a boy, Good’s father, a Pole and self-educated intellectual, learned the trade of watchmaking by staring at watchmakers through shop windows. He was just seventeen in 1903 when he headed to England with thirty-five rubles in his pocket and a large wheel of cheese. In London he performed odd jobs until he could set up his own jewelry shop. He prospered and married. In 1915, Isidore Jacob Gudak (later Irving John “Jack” Good) was born. A brother followed, and then a sister, a talented dancer who would later die in a theater fire. Her awful death caused Jack Good to disavow the existence of God.

Good was a mathematics prodigy who once stood up in his crib and asked his mother what a thousand times a thousand was. During a bout with diphtheria he independently discovered irrational numbers (those, like √2, that cannot be expressed as fractions). Before he was fourteen he’d rediscovered mathematical induction, a method for proving that a statement holds for every natural number. By then his mathematics teachers just left him alone with piles of books. At Cambridge University, Good snatched every math prize available on his way to a Ph.D., and discovered a passion for chess.
