On top of that, Cyc has an “inference” engine. Inference is the ability to draw conclusions from evidence. Cyc’s inference engine understands queries and generates answers from its vast knowledge database.
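
To make that concrete, here is a toy sketch in Python (not Cyc’s actual engine or its CycL language, just a handful of hand-written facts and two rules) of what it looks like, mechanically, to draw a conclusion from evidence:

```python
# Toy forward-chaining inference over a tiny fact store.
# Illustrative only: this is not Cyc's engine or its CycL language.

facts = {
    ("dolphin", "is_a", "mammal"),
    ("mammal", "is_a", "animal"),
    ("animal", "needs", "oxygen"),
}

def infer(facts):
    """Keep deriving new facts until nothing new appears (forward chaining)."""
    facts = set(facts)
    while True:
        new = set()
        for (a, r1, b) in facts:
            for (c, r2, d) in facts:
                if r1 == "is_a" and r2 == "is_a" and b == c:
                    new.add((a, "is_a", d))        # is_a is transitive
                if r1 == "is_a" and r2 == "needs" and b == c:
                    new.add((a, "needs", d))       # properties inherit down is_a links
        if new <= facts:                           # nothing new: we're done
            return facts
        facts |= new

closure = infer(facts)
# True, even though no one ever typed that fact in:
print(("dolphin", "needs", "oxygen") in closure)
```

Cyc’s real engine works over millions of assertions and far richer logic, but the principle of chaining facts through rules is the same.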

Created by AI pioneer Douglas Lenat, Cyc is the largest AI project in history, and probably the best funded, with $50 million in grants from government agencies, including DARPA, since 1984. Cyc’s creators continue to improve its database and inference engine so it can better process “natural language,” or everyday written language. Once it has acquired a sufficient natural language processing (NLP) capability, its creators will start it reading, and comprehending, all the Web pages on the Internet.

Another contender for most knowledgeable knowledge database is already doing that. Carnegie Mellon University’s NELL, the Never-Ending-Language-Learning system, knows more than 390,000 facts about the world. Operating 24/7, NELL—a beneficiary of DARPA funding—scans hundreds of millions of Web pages for patterns of text so it can learn even more. It classifies facts into 274 categories, such as cities, celebrities, plants, sports teams, and so on. It knows cross-category facts, like this one—Miami is a city where the Miami Dolphins play football. NELL could infer that these dolphins are not the gregarious marine mammals of the same name.
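
The structures below are only illustrative (not NELL’s real data format or category names), but they show how category tags turn that kind of cross-category judgment into a simple type check:

```python
# Illustrative only: toy category-tagged facts in the spirit of the text,
# not NELL's actual categories, relations, or storage format.

categories = {
    "Miami": "city",
    "Miami Dolphins": "sportsTeam",
    "dolphin": "animal",
}

relations = [
    ("Miami Dolphins", "playsInCity", "Miami"),
]

def plausible(subject, relation, obj):
    """Crude type check: only a sports team can play in a city."""
    if relation == "playsInCity":
        return (categories.get(subject) == "sportsTeam"
                and categories.get(obj) == "city")
    return False

for subject, relation, obj in relations:
    print(subject, relation, obj, "->", plausible(subject, relation, obj))  # True

print(plausible("dolphin", "playsInCity", "Miami"))  # False: an animal, not a team
```

Something in this spirit is what lets a fact learner keep the football team and the marine mammal apart.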

NELL takes advantage of the Internet’s informal wetware network—its users. CMU invites the public to get online and help train NELL by analyzing her knowledge database and correcting her mistakes.

Knowledge will be key to AGI, and so will experience and wisdom—human-level intelligence isn’t conceivable without them. So every AGI system has to come to grips with acquiring knowledge—whether through embodiment in a knowledge-acquiring body, or by tapping in to one of the knowledge databases, or by reading the entire contents of the Web. And the sooner the better, says Goertzel.

Pushing forward his own project, the peripatetic Goertzel divides his time between Hong Kong and Rockville, Maryland. On a spring morning, I found in his yard a weathered trampoline and a Honda minivan so abused it looked as if it had flown through an asteroid belt to get there. It bore the bumper sticker MY CHILD WAS INMATE OF THE MONTH AT COUNTY JAIL. Along with Goertzel and his daughter, several rabbits, a parrot, and two dogs share the house. The dogs only obey commands given in Portuguese—Goertzel was born in Brazil in 1966—to prevent them from obeying other people’s orders.

The professor met me at the door, having climbed out of bed at 11:00 A.M. after spending the night programming. I suppose we shouldn’t make up our minds in advance about what globe-trotting scientists look like, because in most cases it doesn’t pay off, at least not for me. On paper Benjamin Goertzel, Ph.D., brings to mind a tall, thin, probably bald, effortlessly cosmopolitan cyberacademic, who may ride a recumbent bicycle. Alas, only the thin and cosmopolitan parts are right. The real Goertzel looks like a consummate hippie. But behind John Lennon glasses, long, almost dreadlocked hair, and permanent stubble, his fixed half smile plows undaunted through dizzying theory, then turns around and explains the math. He writes too well to be a conventional mathematician, and does math too well to be a conventional writer. Yet he’s so mellow that when he told me he’d studied Buddhism and hadn’t gotten far, I wondered how far would look on such a relaxed, present spirit.

I came to ask him about the nuts and bolts of the intelligence explosion and its defeaters—obstacles that might prevent it from happening. Is an intelligence explosion plausible, and in fact, unavoidable? But first, after we found seats in a family room he shares with the rabbits, he described the way he’s different from almost every other AI maker and theorist.

Many, especially those at MIRI, advocate taking a lot of time to develop AGI, in order to make utterly and provably certain that “friendliness” is built in. Delays in AGI, and centuries-away estimates of its arrival, make them happy because they strongly believe that superintelligence will probably destroy us. And perhaps not just us, but all life in our galaxy.

Not Goertzel. He advocates creating AGI as quickly as possible. In 2006 he delivered a talk entitled “Ten Years to a Positive Singularity—If We Really, Really Try.” “Singularity” here carries its best-known meaning today: the moment when humans achieve ASI and share Earth with an entity more intelligent than ourselves. Goertzel argued that if an AGI is going to exploit the social and industrial infrastructure into which it is born and “explode” its intelligence to ASI level, wouldn’t we prefer that its “hard takeoff” (a sudden, uncontrolled intelligence explosion) happen in our comparatively primitive world, rather than in a future world where nanotechnology, bioengineering, and full automation could supercharge the AI’s ability to take over?

To consider the answer, go back to the Busy Child for a moment. As you recall, it’s already had a “hard takeoff” from AGI to ASI. It has become self-aware and self-improving, and its intelligence has rocketed past human level in a matter of days. Now, to fulfill its basic drives, it wants to get out of the supercomputer in which it was created. As Omohundro argues, those drives are efficiency, self-preservation, resource acquisition, and creativity.

As we’ve seen, an unimpeded ASI might express these drives in downright psychopathic ways. To get what it wants it could be diabolically persuasive, even frightening. It’d bring overwhelming intellectual firepower to the task of destroying its Gatekeeper’s resistance. Then, by creating and manipulating technology, including nanotechnology, it could take control of our resources, even our own molecules.

Therefore, says Goertzel, consider with care the enabling technologies available in the world into which you introduce smarter-than-human intelligence. Now is safer than, say, fifty years from now.

“In fifty years,” he told me, “you could have a fully automated economy, a much more advanced infrastructure. If a computer wants to improve its hardware it doesn’t have to order parts from people. It can just go online and then some robots will swarm in and help it improve its hardware. Then it’s getting smarter and smarter and ordering new parts for itself and kind of building itself up and nobody really knows what’s going on. So then maybe fifty years from now you have a super AGI that really could directly take over the world. The avenues for that AGI to take over are much more dramatic.”

At this point Goertzel’s two dogs joined us in the family room to receive some instructions in Portuguese. Then they left to play in the backyard.

“If you buy that a hard takeoff is a dangerous thing, it follows that the safest thing is to develop advanced AGI as soon as possible so that it occurs when supporting technologies are weaker and an uncontrolled hard takeoff is less likely. And to try to get it out before we develop strong nanotechnology or self-reconfiguring robots, which are robots that change their own shape and functionality to suit any job.”

In a larger sense, Goertzel doesn’t really buy the idea of a hard takeoff that brings about an apocalypse—the Busy Child scenario. His argument is simple—we’ll only find out how to make ethical AI systems by building them, not by concluding from afar that they’re bound to be dangerous. But he doesn’t rule out danger.

“I wouldn’t say that I’m not worried about it. I would say that there’s a huge and irreducible uncertainty in the future. My daughter and my sons, my mom, I don’t want these people to all die because of some superhuman AI reprocessing their molecules into computronium. But I think the theory of how to make ethical AGI is going to come about through experimenting with AGI systems.”

When Goertzel says it, the gradualist position sounds pretty reasonable. There is a huge, irreducible uncertainty about the future. And scientists are bound to gain a lot of insight about how to handle intelligent machines on the way to AGI. Humans will make the machines, after all. Computers won’t suddenly become alien when they become intelligent. And so, the argument goes, they’ll do as they’re told. In fact, we might even expect them to be more ethical than we are, since we don’t want to build an intelligence with an appetite for violence and homicide, right?

Yet autonomous drones and battlefield robots built for exactly that purpose are what the U.S. government and military contractors are developing today. They’re creating and using the best advanced AI available. I find it strange that robot pioneer Rodney Brooks dismisses the possibility that superintelligence will be harmful when iRobot, the company he founded, already manufactures weaponized robots. Similarly, Kurzweil argues that advanced AI will have our values because it will come from us, and so won’t be harmful.

I interviewed both scientists ten years ago and they made the same arguments. In the intervening decade they’ve remained dolorously consistent, although I do recall listening to a talk by Brooks in which he claimed building weaponized robots is morally distinct from the political decision to use them.

I think there’s a high chance of painful mistakes on the way to AGI, as well as when scientists actually achieve it. As I’ll propose ahead, we’ll suffer the repercussions long before we’ve had the chance to learn from AGI experiments in the way Goertzel predicts. As for the likelihood of our survival—I hope I’ve made it pretty clear that I find it doubtful. But it might surprise you to know my chief issue with AI research isn’t even that. It’s that so few people understand that there are any risks at all involved along AI’s developmental path. People who may soon suffer from bad AI outcomes deserve to know what a relative few scientists are getting us all into.

Good’s intelligence explosion, and his pessimism about humankind’s future, are important here, as I’ve said, because if the intelligence explosion is plausible, then so is the chance of out-of-control AI. Before considering its defeaters—economics and software complexity—let’s look at the run-up to ASI. What are the intelligence explosion’s basic ingredients?

First of all, an intelligence explosion requires AGI or something very close to it. Next, Goertzel, Omohundro, and others concur it would have to be self-aware—that is, it would have to have deep knowledge of its own design. Since it’s an AGI, we already assume it will have general intelligence. But to self-improve it must have more than that. It would need specific knowledge of programming to initiate the self-improving loop at the heart of the intelligence explosion.
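
No such system exists, so code can only gesture at the idea, but the skeleton of that loop is simple: propose a change to yourself, measure whether it helps, and keep it only if it does. Here it is reduced to a toy in Python, with a single tunable number standing in for a program and a made-up benchmark standing in for real performance:

```python
import random

# Purely schematic. "program" is one adjustable number, "benchmark" an invented
# score with a peak at 7.3; nothing here resembles an actual AGI component.

def benchmark(program):
    """Stand-in for measuring how well the current version performs."""
    return -(program - 7.3) ** 2

program = 0.0
score = benchmark(program)

for step in range(1000):
    candidate = program + random.gauss(0, 0.5)   # propose a modification to itself
    candidate_score = benchmark(candidate)
    if candidate_score > score:                  # keep it only if it measurably improves
        program, score = candidate, candidate_score

print(round(program, 2))  # climbs toward 7.3: each kept change becomes the base for the next
```

The loop is trivial here, but it shows the shape of the argument: once a system can propose, test, and keep changes to itself, improvement compounds.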

According to Omohundro, self-improvement and the programming know-how it implies follow from the AI’s rationality—self-improvement in pursuit of goals is rational behavior. Not being able to improve its own programming would be a serious vulnerability. The AI would be driven to acquire programming skills. But how could it get them? Let’s run through a simple hypothetical scenario with Goertzel’s OpenCog.

Goertzel’s plan is to create an infantlike AI “agent” and set it free in a richly textured virtual world to learn. He could supplement what it learns with a knowledge database, or give the agent NLP ability and set it to reading the Internet. Powerful learning algorithms, yet to be created, would represent knowledge with “probabilistic truth values.” That means that the agent’s understanding of something could improve with more examples or more data. A probabilistic inference engine, also in the works, would give it the ability to reason using incomplete evidence.
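
OpenCog’s actual truth values are more elaborate than this, but a minimal sketch, assuming nothing fancier than counting evidence, shows the basic idea: a belief’s strength tracks the observed frequency, and its confidence grows as examples accumulate:

```python
# Minimal sketch of a "probabilistic truth value": strength is observed frequency,
# confidence grows with the amount of evidence. A simple count-based stand-in,
# not OpenCog's actual formulas.

class TruthValue:
    def __init__(self):
        self.positive = 0
        self.total = 0

    def observe(self, outcome: bool):
        self.total += 1
        self.positive += int(outcome)

    @property
    def strength(self):              # how likely the statement seems
        return self.positive / self.total if self.total else 0.5

    @property
    def confidence(self):            # how much evidence backs that estimate
        return self.total / (self.total + 10)    # 10 is an arbitrary "maturity" constant

tv = TruthValue()
for outcome in [True, True, False, True, True]:  # five observations of "birds fly"
    tv.observe(outcome)

print(round(tv.strength, 2), round(tv.confidence, 2))  # 0.8 0.33: more data, firmer belief
```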

With genetic programming, Goertzel could train the AI agent to evolve its own novel machine learning tools—its own programs. These programs would permit the agent to experiment and learn—to ask the right questions about its environment, develop hypotheses, and test them. What it learns would have few bounds. If it can evolve better programs, it could improve its own algorithms.
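
Here, purely as illustration rather than OpenCog’s actual machinery, is a bare-bones version of that evolve-test-keep cycle: tiny arithmetic “programs” are mutated at random, scored against a target behavior, and the best survive into the next generation:

```python
import random

# Bare-bones genetic programming sketch: evolving tiny arithmetic "programs"
# by random mutation and selection. Real program evolution is far more
# sophisticated; this only shows the evolve-test-keep cycle itself.

OPS = ["+", "-", "*"]

def random_expr(depth=2):
    """A random expression tree over the variable x and small constants."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(["x", str(random.randint(1, 5))])
    return (random.choice(OPS), random_expr(depth - 1), random_expr(depth - 1))

def evaluate(expr, x):
    if expr == "x":
        return x
    if isinstance(expr, str):
        return int(expr)
    op, left, right = expr
    a, b = evaluate(left, x), evaluate(right, x)
    return a + b if op == "+" else a - b if op == "-" else a * b

def fitness(expr, target):
    """Lower is better: squared error against the behavior we want learned."""
    return sum((evaluate(expr, x) - target(x)) ** 2 for x in range(-5, 6))

def mutate(expr):
    """Replace a randomly chosen subtree with a fresh random one."""
    if isinstance(expr, str) or random.random() < 0.3:
        return random_expr()
    op, left, right = expr
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

target = lambda x: x * x + 2 * x      # the "task" the evolved program must perform
population = [random_expr() for _ in range(200)]

for generation in range(50):
    population.sort(key=lambda e: fitness(e, target))
    parents = population[:50]                              # keep the best quarter
    population = parents + [mutate(random.choice(parents)) for _ in range(150)]

best = min(population, key=lambda e: fitness(e, target))
print(best, fitness(best, target))   # error shrinks each generation; with luck it hits 0
```

Goertzel’s point is that the same cycle, applied to programs that themselves learn, could bootstrap better and better learners.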

What, then, would prevent an intelligence explosion from occurring in this virtual world? Probably nothing. And this has prompted some theorists to suggest that the Singularity could also take place in a virtual world. Whether that will make those events any safer is a question worth exploring. An alternative is to install the intelligent agent in a robot, to continue its education and fulfill its programmed goals in the real world. Another is to use the agent AI to augment a human brain.

Broadly speaking, those who believe intelligence must be embodied hold that knowledge itself is grounded in sensory and motor experiences. Cognitive processing cannot take place without that grounding. Learning facts about apples, they claim, will never make you intelligent, in a human sense, about an apple. You’ll never develop a “concept” of an apple from reading or hearing about one—concept forming requires that you smell, feel, see, and taste—the more the better. In AI this is known as the “grounding problem.”

Consider some systems whose powerful cognitive abilities lie somewhere beyond narrow AI but fall short of AGI. Recently, Hod Lipson at Cornell University’s Computational Synthesis Lab developed software that derives scientific laws from raw data. By observing a double pendulum swinging, it rediscovered fundamental laws of motion and conservation. The “scientist” was a genetic algorithm. It started with crude guesses about the equations governing the pendulum, combined the best parts of those equations, and many generations later output physical laws, such as the conservation of energy.
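
A toy version of that process (nothing like the real system: a simulated single pendulum instead of Lipson’s physical double pendulum, and hand-picked candidate terms his software did not need) evolves coefficients until some combination of the terms stays constant across the raw data:

```python
import math
import random

# Toy law-discovery sketch, not Lipson's actual software: evolve a combination
# of candidate terms until it stays (nearly) constant across raw pendulum data.
# The pendulum is simulated and the candidate terms are hand-picked.

G_OVER_L = 9.8      # gravity over pendulum length (assumed)
ENERGY = 2.0        # fixed total energy of the simulated pendulum

def sample_state():
    """One raw observation (angle, angular velocity) at the fixed energy."""
    theta = random.uniform(-1.0, 1.0)
    omega = random.choice([-1, 1]) * math.sqrt(2 * (ENERGY + G_OVER_L * math.cos(theta)))
    return theta, omega

data = [sample_state() for _ in range(200)]

def terms(theta, omega):
    return [omega ** 2, math.cos(theta), theta]   # candidate building blocks

def fitness(coeffs):
    """How constant is this combination over the data? Lower spread is better."""
    norm = math.sqrt(sum(c * c for c in coeffs))
    if norm < 1e-6:
        return float("inf")                       # reject the trivial all-zero "law"
    vals = [sum((c / norm) * t for c, t in zip(coeffs, terms(th, om))) for th, om in data]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def crossover(a, b):
    return [random.choice(pair) for pair in zip(a, b)]   # combine parts of two guesses

def mutate(coeffs, sigma):
    return [c + random.gauss(0, sigma) for c in coeffs]

population = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(100)]
for generation in range(200):
    population.sort(key=fitness)
    parents = population[:20]
    sigma = 0.3 * (0.98 ** generation)            # refine the search as generations pass
    population = parents + [
        mutate(crossover(random.choice(parents), random.choice(parents)), sigma)
        for _ in range(80)
    ]

best = min(population, key=fitness)
print([round(c / best[0], 2) for c in best])
# Tends toward [1.0, -19.6, 0.0]: omega^2 - 2(g/L)cos(theta), total energy up to scale.
```

The evolved coefficients are just a rescaling of the pendulum’s energy, the kind of invariant Lipson’s system pulled out of far messier data.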

And consider the unsettling legacy of AM and Eurisko. These were early efforts by Cyc creator Douglas Lenat. Using genetic algorithms, Lenat’s AM, the Automatic Mathematician, generated mathematical theorems, essentially rediscovering elementary mathematical principles by creating rules from mathematical data. But AM was limited to mathematics—Lenat wanted a program that solved problems in many domains, not just one. In the 1980s he created Eurisko (Greek for “I discover”). Eurisko broke new ground in AI because it evolved heuristics, or rules of thumb, about the problem it was trying to solve, and it evolved rules about its own operation. It drew lessons from its successes and failures in problem solving, and codified those lessons as new rules. It even modified its own program, written in the language Lisp.
