
Another reason why economics won’t slow an intelligence explosion is this: when AGI appears, or even gets close, everyone will want some. And I mean everyone. Goertzel points out that the arrival of human-level intelligent systems would have stunning implications for the world economy. AGI makers will receive immense investment capital to complete and commercialize the technology. The range of products and services intelligent agents of human caliber could provide is mind-boggling. Take white-collar jobs of all kinds—who wouldn’t want smart-as-human teams working around the clock, doing the things normal flesh-and-blood humans do, but without rest and without error? Take computer programming, as Steve Omohundro said back in chapter 5. We humans are lousy programmers, and computer intelligence would be uniquely suited to programming better than we do (and could, in short order, apply that programming know-how to its own internal processes).

According to Goertzel, “If an AGI could understand its own design, it could also understand and improve other computer software, and so have a revolutionary impact on the software industry. Since the majority of financial trading on the U.S. markets is now driven by program trading systems, it is likely that such AGI technology would rapidly become indispensable to the finance industry. Military and espionage establishments would very likely also find a host of practical applications for such technology. The details of how this development frenzy would play out are open to debate, but we can at least be sure that any limitations to the economic growth rate and investment climate in an AGI development period would quickly become irrelevant.”

Next, robotize the AGI—put it in a robot body—and whole worlds open up. Take dangerous jobs—mining, sea and space exploration, soldiering, law enforcement, firefighting. Add service jobs—caring for the elderly and children, valets, maids, personal assistants. Robot gardeners, chauffeurs, bodyguards, and personal trainers. Science, medicine, and technology—what human enterprise couldn’t be wildly advanced by teams of tireless and ultimately expendable human-level-intelligent agents working for it around the clock?

Next, as we’ve discussed before, international competition will thrust many nations into bidding on the technology, or compel them to have another look at AGI research projects at home. Goertzel says, “If a working AGI prototype were to approach the level at which an explosion seemed possible, governments around the world would recognize that this was a critically important technology, and no effort would be spared to produce the first fully functional AGI ‘before the other side does.’ Entire national economies might well be sublimated to the goal of developing the first superintelligent machine. Far from limiting an intelligence explosion, economic growth rate would be defined by the various AGI projects taking place around the world.”

In other words, a lot will change once we’re sharing the planet with smart-as-human intelligence, and it will change again as Good’s intelligence explosion detonates and ASI appears.

But before considering these changes, and other important obstacles to AGI development and the intelligence explosion, let’s wrap up the question of funding as a critical barrier. Simply put, it isn’t one: AGI development isn’t wanting for cash, for three reasons. First, there’s no shortage of narrow AI projects that will inform or even become components of general AI systems. Second, a handful of “uncloaked” AGI projects are in the works and making significant headway with various sources of funding, to say nothing of probable stealth projects. Third, as AI technology approaches the level of AGI, a flood of funding will push it across the finish line. So large will the cash infusion be, in fact, that the tail will wag the dog. Barring some other bottleneck, the world’s economy will be driven by the creation of strong artificial intelligence, and fueled by the growing global apprehension of all the ways it will change our lives.

Up ahead we’ll explore another critical roadblock—software complexity. We’ll find out if the challenge of creating software architectures that match human-level intelligence is just too difficult to conquer, and whether or not all that stretches out ahead is a perpetual AI winter.

 

Chapter Twelve

The Last Complication

How can we be so confident that we will build superintelligent machines? Because the progress of neuroscience makes it clear that our wonderful minds have a physical basis, and we should have learned by now that our technology can do anything that’s physically possible. IBM’s Watson, playing Jeopardy! as skillfully as human champions, is a significant milestone and illustrates the progress of machine language processing. Watson learned language by statistical analysis of the huge amounts of text available online. When machines become powerful enough to extend that statistical analysis to correlate language with sensory data, you will lose a debate with them if you argue that they don’t understand language.

—Bill Hibbard, AI scientist

Is it really so far-fetched to believe that we will eventually uncover the principles that make intelligence work and implement them in a machine, just like we have reverse engineered our own versions of the particularly useful features of natural objects, like horses and spinnerets? News flash: the human brain is a natural object.

—Michael Anissimov, MIRI Media Director

Normalcy bias—the refusal to plan for, or react to, a disaster that has never happened before.

—Brief Treatment and Crisis Intervention

Durable themes have emerged from our exploration of the intelligence explosion. AGI, when it is achieved, will by most accounts be a complex system, and complex systems fail, whether or not they involve software. The AI systems and cognitive architectures we’ve begun exploring are the kinds of systems that Normal Accidents author Charles Perrow might indict as being so complex that we cannot anticipate the variety of combined failures that may occur. It’s no stretch to say AGI will likely be created in a cognitive architecture whose size and complexity might surpass that of the recent 30,000-processor cloud array set up by Cycle Computing. And according to the company’s own boast, Monster Cat was a system too complex to be monitored (read: understood) by a human being.

Add to that the unsettling fact that parts of probable AGI systems, such as genetic algorithms and neural networks, are inherently unknowable—we don’t fully understand why they make the decisions they do. And still, of all the people working in AI and AGI, only a minority are even aware that there may be dangers on the horizon. Most are not planning for disaster scenarios or life-saving responses. At Chernobyl and Three Mile Island, nuclear engineers had deep knowledge of emergency scenarios and procedures, yet they still failed to intervene effectively. What chance do the unprepared have of managing an AGI?
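To see why a technique like a genetic algorithm resists inspection, consider a minimal sketch in Python, a toy example of my own rather than anything drawn from a real AGI project. It evolves a handful of numbers toward a target by random mutation and selection. The answer emerges, but no line of the program states the reasoning behind it, and nothing in the result records why those particular values won out.

    # Minimal toy genetic algorithm: evolve a list of numbers whose sum
    # approaches a target. Illustrative only; the point is that the final
    # "solution" emerges from random mutation and selection, with no
    # human-readable rationale stored anywhere.

    import random

    TARGET = 100
    GENES = 8

    def fitness(candidate):
        return -abs(sum(candidate) - TARGET)        # closer to the target is better

    def mutate(candidate):
        child = candidate[:]
        i = random.randrange(GENES)
        child[i] += random.uniform(-3, 3)           # small random tweak to one gene
        return child

    population = [[random.uniform(0, 25) for _ in range(GENES)] for _ in range(30)]

    for generation in range(200):
        population.sort(key=fitness, reverse=True)  # fittest candidates first
        survivors = population[:15]                 # keep the better half
        population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

    best = max(population, key=fitness)
    print([round(g, 2) for g in best], "sum =", round(sum(best), 2))

Run it twice and you will get two different sets of numbers, both close to the target; ask the program why it chose them and there is nothing to point to.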

Finally, consider DARPA. Without DARPA, computer science and all we gain from it would be at a much more primitive state. AI would lag far behind if it existed at all. But DARPA is a defense agency. Will DARPA be prepared for just how complex and inscrutable AGI will be? Will they anticipate that AGI will have its own drives, beyond the goals with which it is created? Will DARPA’s grantees weaponize advanced AI before they’ve created an ethics policy regarding its use?

The answers to these questions may not be ones we’d like, particularly since the future of the human race is at stake.

*   *   *

Consider the next possible barrier to an intelligence explosion—software complexity. The proposition is this: we will never achieve AGI, or human-level intelligence, because the problem of creating human-level intelligence will turn out to be too hard. If that happens, no AGI will improve itself sufficiently to ignite an intelligence explosion. It will never create a slightly smarter iteration of itself, so that version won’t build a more intelligent version, and so on. The same restriction would apply to human-computer interfaces—they would augment and enhance human intelligence, but never truly exceed it.

Yet, in one sense we already have surpassed AGI, or the intelligence level of any human, with a boost from technology. Just pair a human of average IQ with Google’s search engine and you’ve got a team that’s smarter than human—a human whose intelligence is augmented. IA instead of AI. Vernor Vinge believes this is one of three sure routes to an intelligence explosion in the future, when a device can be attached to your brain that imbues it with additional speed, memory, and intelligence.

Consider the smartest human you can bring to mind, and pit him or her against our hypothetical human-Google team in a test of factual knowledge and factoring. The human-Google team will win hands down. In complex problem-solving, the more intelligent human will likely win, although armed with the body of knowledge on the Web, Google and Co. could put up a good fight.

Is knowledge the same thing as intelligence? No, but knowledge is an intelligence amplifier, if intelligence is, among other things, the ability to act nimbly and powerfully in your environment. Entrepreneur and AI maker Peter Voss speculated that had Aristotle possessed Einstein’s knowledge base, he could’ve come up with the theory of general relativity. Google’s search engine in particular has multiplied worker productivity, especially in occupations that call for research and writing. Tasks that formerly required time-consuming research—a trip to the library to pore over books and periodicals, a Lexis/Nexis search, letters or phone calls to experts—are now fast, easy, and cheap. Much of this increased productivity is due, of course, to the Internet itself. But the vast ocean of information it holds is overwhelming without intelligent tools to extract the small fraction you need. How does Google do it?

Google’s proprietary algorithm, PageRank (allegedly named after Google cofounder Larry Page, not because it ranks Web pages), gives every site on the Internet a score of 0 to 10. A score of 1 means a page has twice the “quality” of a site with a PageRank of 0, a score of 2 twice the quality of a score of 1, and so on.

Many variables account for “quality.” Size is important—bigger Web sites are better, and so are older ones. Does the page have a lot of content—words, graphics, download options? If so, it gets a higher rank. How fast is the site, and how many links to high-quality Web sites does it have? These factors and more feed into a site’s PageRank.
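To make the link-analysis idea concrete, here is a minimal sketch in Python of the published PageRank iteration. It is a toy, not Google’s production ranking, which blends in many more signals (the size, age, and content factors above), and the four-page web it scores is invented for illustration. Each page’s score is fed by the scores of the pages that link to it, so pages that important pages point to become important themselves.

    # Minimal sketch of the published PageRank link-analysis iteration.
    # An illustration only, not Google's production ranking system.

    def pagerank(links, damping=0.85, iterations=50):
        """links maps each page to the list of pages it links to."""
        pages = list(links)
        n = len(pages)
        rank = {page: 1.0 / n for page in pages}          # start with equal scores

        for _ in range(iterations):
            new_rank = {page: (1.0 - damping) / n for page in pages}
            for page, outlinks in links.items():
                if not outlinks:                          # dangling page: spread its score evenly
                    for p in pages:
                        new_rank[p] += damping * rank[page] / n
                    continue
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:                   # each link passes on a share of the score
                    new_rank[target] += share
            rank = new_rank
        return rank

    # A hypothetical four-page web: the "hub" is linked to by all the
    # other pages, so it ends up with the highest score.
    toy_web = {
        "hub":  ["blog"],
        "blog": ["hub", "shop"],
        "shop": ["hub"],
        "spam": ["hub"],
    }
    for page, score in sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]):
        print(f"{page:5s} {score:.3f}")

On that reading, the 0-to-10 score described above is a compressed view of raw scores like the ones this loop produces, with each extra point standing for a fixed multiple of underlying quality.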

When you enter a word or phrase, Google performs hypertext-matching analysis to find the sites most relevant to your search. Hypertext-matching analysis looks for the word or phrase you entered, but also probes page content, including the richness of font use, page divisions, and where words are placed. It looks at how your search words are used on the page and on neighboring pages at the site. Because PageRank has already identified the most important sites on the entire Internet, Google does not have to evaluate the whole Web for relevance, only the highest-quality sites. The combined text matching and ranking serves up thousands of sites within milliseconds, often as fast as you can type your query.
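How the two stages fit together can also be sketched in a few lines of Python. The field names and the simple blend of scores here are my own assumptions, not Google’s actual hypertext-matching analysis, but the sketch shows the shape of the idea: match the query against pages whose importance is already known, then order the results by relevance and rank together.

    # Toy sketch of combining text-match relevance with a precomputed rank.
    # Field names and the scoring formula are illustrative assumptions.

    INDEX = [
        {"url": "example.org/relativity", "rank": 0.31,
         "title": "general relativity explained",
         "body": "einstein's theory of general relativity describes gravity"},
        {"url": "example.org/cats", "rank": 0.52,
         "title": "cat pictures",
         "body": "general collection of cat pictures, relatively cute"},
    ]

    def text_match(query, page):
        """Crude relevance: how many query words appear, with a bonus for title hits."""
        words = query.lower().split()
        score = 0.0
        for w in words:
            if w in page["title"]:
                score += 2.0            # words in the title count more
            if w in page["body"]:
                score += 1.0
        return score / len(words)

    def search(query, index):
        # Final order blends relevance to the query with the page's standing importance.
        scored = [(text_match(query, p) * (0.5 + p["rank"]), p["url"]) for p in index]
        return [url for score, url in sorted(scored, reverse=True) if score > 0]

    print(search("general relativity", INDEX))

Even in this toy, the relativity page beats the higher-ranked cat page, because relevance to the query and standing importance are weighed together.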

Now, how much more productive today is a team of information workers than before Google? Twice as productive? Five times? What’s the impact on our economy when such a large percentage of workers’ productivity has doubled or tripled or more? On the bright side we get a higher gross national product, owing to the impact of information technology on worker productivity. On the dark side, worker displacement and unemployment, caused by a range of information technologies, including Google.

Clever programming shouldn’t be confused with intelligence, of course, but I’d argue that Google and the like are intelligent tools, not just clever programs. They have mastered a narrow domain—search—with ability no human could touch. Furthermore, Google puts the Internet—the largest compilation of human knowledge ever amassed—at your fingertips. And significantly, all that knowledge is available in an instant, faster than ever before (sorry Yahoo, Bing, Altavista, Excite, Dogpile, Hotbot, and the Love Calculator). Writing has often been described as outsourcing memory. It enables us to store our thoughts and memories for later retrieval and distribution. Google outsources important kinds of intelligence that we don’t possess, and could not develop without it.

Combined, Google and you are ASI.

In a similar way, our intelligence is broadly enhanced by the mobilization of powerful information technology, for example, our mobile phones, many of which have roughly the computing power of personal computers circa 2000, and a billion times the power per dollar of sixties-era mainframe computers. We humans are mobile, and to be truly relevant, our intelligence enhancements must be mobile. The Internet, and other kinds of knowledge, not the least of which is navigation, gain vast new power and dimension as we are able to take them wherever we go. For a simple example, how much is your desktop computer worth to you when you’re lost at night in a crime-ridden section of a city? I’ll wager not as much as your iPhone with its talking navigation app.

For reasons like this, MIT’s Technology Review writer Evan Schwartz boldly claims mobile phones are becoming “mankind’s primary tool.” He notes that more than five billion are deployed worldwide, or not far short of one per person.

The next step for intelligence augmentation is to put all the enhancement contained in a smart phone inside us—to connect it to our brains. Right now we interface with our computers through our ears and eyes, but in the future imagine implanted devices that permit our brains to connect wirelessly to a cloud, from anywhere. According to Nicholas Carr, author of The Big Switch, that’s what Google cofounder Larry Page has in mind for the search engine’s future.

“The idea is that you no longer have to sit down at a keyboard to locate information,” said Carr. “It becomes automatic, a sort of machine-mind meld. Larry Page has discussed a scenario where you merely think of a question, and Google whispers the answer into your ear through your cell phone.” See, for instance, the recent announcement of “Project Glass”: glasses that allow you to perform Google queries and see the results right in your field of view as you walk down the street.
