Rise of the Robots: Technology and the Threat of a Jobless Future

*
In Elysium, the rabble eventually infiltrates the elite orbital fortress by hacking into its systems. That's at least one hopeful note regarding this scenario: the elite would have to be very careful about whom they trusted to design and manage their technology. Hacking and cyber attack would likely be the greatest dangers to their continued rule.

*
For example, waiting tables in a full-service restaurant would require a very advanced robot—something that we’re unlikely to see anytime soon. However, when consumers are struggling, restaurant meals are one of the first things to go, so waiters would still be at risk.

Chapter 9

SUPER-INTELLIGENCE AND THE SINGULARITY

In May 2014, Cambridge University physicist Stephen Hawking penned an article that set out to sound the alarm about the dangers of rapidly advancing artificial intelligence. Hawking, writing in the UK's The Independent along with co-authors who included Max Tegmark and Nobel laureate Frank Wilczek, both physicists at MIT, as well as computer scientist Stuart Russell of the University of California, Berkeley, warned that the creation of a true thinking machine "would be the biggest event in human history." A computer that exceeded human-level intelligence might be capable of "outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand." Dismissing all this as science fiction might well turn out to be "potentially our worst mistake in history."1

All the technology I've described thus far—robots that move boxes or make hamburgers, algorithms that create music, write reports, or trade on Wall Street—employs what is categorized as specialized or "narrow" artificial intelligence. Even IBM's Watson, perhaps the most impressive demonstration of machine intelligence to date, doesn't come close to anything that might reasonably be compared to general, human-like intelligence. Indeed, outside the realm of science fiction, all functional artificial intelligence technology is, in fact, narrow AI.

One of the primary arguments I’ve put forth here, however, is that the specialized nature of real-world AI doesn’t necessarily represent an impediment to the ultimate automation of a great many jobs. The tasks that occupy the majority of the workforce are, on some level, largely routine and predictable. As we’ve seen, rapidly improving specialized robots or machine learning algorithms that churn through reams of data will eventually threaten enormous numbers of occupations at a wide range of skill levels. None of this requires machines that can think like people. A computer doesn’t need to replicate the entire spectrum of your intellectual capability in order to displace you from your job; it only needs to do the specific things you are paid to do. Indeed, most AI research and development, and nearly all venture capital, continue to be focused on specialized applications, and there’s every reason to expect these technologies to become dramatically more powerful and flexible over the coming years and decades.

Even as these specialized undertakings continue to produce practical results and attract investment, a far more daunting challenge lurks in the background. The quest to build a genuinely intelligent system—a machine that can conceive new ideas, demonstrate an awareness of its own existence, and carry on coherent conversations—remains the Holy Grail of artificial intelligence.

Fascination with the idea of building a true thinking machine traces its origin at least as far back as 1950, when Alan Turing published the paper that ushered in the field of artificial intelligence. In the decades that followed, AI research went through a boom-and-bust cycle in which expectations repeatedly soared beyond any realistic technical foundation, especially given the speed of the computers available at the time. When disappointment inevitably followed, investment and research activity collapsed, and long, stagnant periods that have come to be called "AI winters" ensued. Spring has once again arrived, however. The extraordinary power of today's computers, combined with advances in specific areas of AI research and in our understanding of the human brain, is generating a great deal of optimism.

James Barrat, the author of a recent book on the implications of advanced AI, conducted an informal survey of about two hundred researchers in human-level, rather than merely narrow, artificial intelligence. Within the field, this is referred to as Artificial General Intelligence (AGI). Barrat asked the computer scientists to select from four different predictions for when AGI would be achieved. The results: 42 percent believed a thinking machine would arrive by 2030, 25 percent said by 2050, and 20 percent thought it would happen by 2100. Only 2 percent believed it would never happen. Remarkably, a number of respondents wrote comments on their surveys suggesting that Barrat should have included an even earlier option—perhaps 2020.2

Some experts in the field worry that another expectations bubble might be building. In an October 2013 blog post, Yann LeCun, the director of Facebook's newly created AI research lab in New York City, warned that "AI 'died' about four times in five decades because of hype: people made wild claims (often to impress potential investors or funding agencies) and could not deliver. Backlash ensued."3 Likewise, NYU professor Gary Marcus, an expert in cognitive science and a blogger for the New Yorker, has argued that recent breakthroughs in areas like deep learning neural networks, and even some of the capabilities attributed to IBM Watson, have been significantly over-hyped.4

Still, it seems clear that the field has now acquired enormous momentum. In particular, the rise of companies like Google, Facebook, and Amazon has propelled a great deal of progress. Never before have such deep-pocketed corporations viewed artificial intelligence as absolutely central to their business models—and never before has AI research been positioned so close to the nexus of competition between such powerful entities. A similar competitive dynamic is unfolding among nations. AI is becoming indispensable to militaries, intelligence agencies, and the surveillance apparatus in authoritarian states.* Indeed, an all-out AI arms race might well be looming in the near future. The real question, I think, is not whether the field as a whole is in any real danger of another AI winter but, rather, whether progress remains limited to narrow AI or ultimately expands to Artificial General Intelligence as well.

If AI researchers do eventually manage to make the leap to AGI, there is little reason to believe that the result will be a machine that simply matches human-level intelligence. Once AGI is achieved, Moore’s Law alone would likely soon produce a computer that exceeded human intellectual capability. A thinking machine would, of course, continue to enjoy all the advantages that computers currently have, including the ability to calculate and access information at speeds that would be incomprehensible for us. Inevitably, we would soon share the planet with something entirely unprecedented: a genuinely alien—and superior—intellect.
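To get a feel for the arithmetic behind that claim, consider a minimal sketch in Python. It is purely illustrative: the two-year doubling period is an assumption in the spirit of the historical Moore's Law cadence, not a forecast from any source.

    # Back-of-the-envelope sketch: how quickly a machine that has just
    # reached human parity would pull ahead if hardware capability kept
    # doubling on a fixed schedule. The period is an illustrative guess.
    DOUBLING_PERIOD_YEARS = 2.0

    def capability_multiple(years_after_parity: float) -> float:
        """Capability relative to human level, measured from parity."""
        return 2.0 ** (years_after_parity / DOUBLING_PERIOD_YEARS)

    for years in (2, 6, 10, 20):
        print(f"{years:2d} years after parity: "
              f"{capability_multiple(years):,.0f}x human level")

Under those assumptions, a machine that merely matched us today would be thirty-two times more capable within a decade, and over a thousand times more capable within two.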

And that might well be only the beginning. It's generally accepted by AI researchers that such a system would eventually be driven to direct its intelligence inward. It would focus its efforts on improving its own design, rewriting its software, or perhaps using evolutionary programming techniques to create, test, and optimize enhancements to its design. This would lead to an iterative process of "recursive improvement." With each revision, the system would become smarter and more capable. As the cycle accelerated, the ultimate result would be an "intelligence explosion"—quite possibly culminating in a machine thousands or even millions of times smarter than any human being. As Hawking and his collaborators put it, it "would be the biggest event in human history."
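The logic of that feedback loop is easy to make concrete. The following toy simulation is a thought experiment, not a validated model: every constant is invented for illustration. It assumes each self-revision multiplies the system's intelligence by a fixed factor, and that a smarter system completes its next revision proportionally faster.

    # Toy model of recursive improvement: each revision adds capability,
    # and a smarter system finishes its next revision sooner, so the
    # cycle accelerates. All constants are invented for illustration.
    intelligence = 1.0        # 1.0 = human level
    gain_per_revision = 1.5   # each rewrite adds 50 percent capability

    elapsed_months = 0.0
    for revision in range(1, 21):
        cycle_months = 12.0 / intelligence  # smarter means faster cycles
        elapsed_months += cycle_months
        intelligence *= gain_per_revision
        print(f"revision {revision:2d}: {intelligence:8.1f}x human "
              f"after {elapsed_months:5.1f} months")

Because each cycle here takes two-thirds as long as the one before it, the total elapsed time converges toward a finite limit (about thirty-six months) even as intelligence grows without bound. That runaway-in-finite-time behavior is what distinguishes an "explosion" from ordinary steady improvement.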

If such an intelligence explosion were to occur, it would certainly have dramatic implications for humanity. Indeed, it might well spawn a wave of disruption that would wash over our entire civilization, not just our economy. In the words of futurist and inventor Ray Kurzweil, it would "rupture the fabric of history" and usher in an event—or perhaps an era—that has come to be called "the Singularity."

The Singularity

The first application of the term "singularity" to a future technology-driven event is usually credited to computer pioneer John von Neumann, who reportedly said sometime in the 1950s that "ever accelerating progress . . . gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue."5
The theme was fleshed out in 1993 by San Diego State University mathematician Vernor Vinge, who wrote a paper entitled "The Coming Technological Singularity." Vinge, who is not given to understatement, began his paper by writing that "[w]ithin thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended."6

In astrophysics, a singularity refers to the point within a black hole where the normal laws of physics break down. Within the black hole's boundary, or event horizon, gravitational force is so intense that light itself is unable to escape its grasp. Vinge viewed the technological singularity in similar terms: it represents a discontinuity in human progress that would be fundamentally opaque until it occurred. Attempting to predict the future beyond the Singularity would be like an astronomer trying to see inside a black hole.

The baton next passed to Ray Kurzweil, who published his book The Singularity Is Near: When Humans Transcend Biology in 2005. Unlike Vinge, Kurzweil, who has become the Singularity's primary evangelist, has no qualms about attempting to peer beyond the event horizon and give us a remarkably detailed account of what the future will look like. The first truly intelligent machine, he tells us, will be built by the late 2020s. The Singularity itself will occur sometime around 2045.

Kurzweil is by all accounts a brilliant inventor and engineer. He has founded a series of successful companies to market his inventions in areas like optical character recognition, computer-generated speech, and music synthesis. He's been awarded twenty honorary doctorates as well as the National Medal of Technology and was inducted into the National Inventors Hall of Fame. Inc. magazine once referred to him as the "rightful heir" to Thomas Edison.

His work on the Singularity, however, is an odd mixture: a well-grounded and coherent narrative about technological acceleration, joined to ideas so speculative that they border on the absurd—including, for example, a heartfelt desire to resurrect his late father by gathering DNA from the gravesite and then regenerating his body using futuristic nanotechnology. A vibrant community, populated with brilliant and often colorful characters, has coalesced around Kurzweil and his ideas. These "Singularians" have gone so far as to establish their own educational institution. Singularity University, located in Silicon Valley, offers unaccredited graduate-level programs focused on the study of exponential technology and counts Google, Genentech, Cisco, and Autodesk among its corporate sponsors.

Among the most important of Kurzweil's predictions is the idea that we will inevitably merge with the machines of the future. Humans will be augmented with brain implants that dramatically enhance intelligence. Indeed, this intellectual amplification is seen as essential if we are to understand and maintain control of technology beyond the Singularity.

Perhaps the most controversial and dubious aspect of Kurzweil's post-Singularity vision is the emphasis that its adherents place on the looming prospect of immortality. Singularians, for the most part, do not expect to die. They plan to accomplish this by achieving a kind of "longevity escape velocity"—the idea being that if you can consistently stay alive long enough to make it to the next life-prolonging innovation, you can conceivably become immortal. This might be achieved by using advanced technologies to preserve and augment your biological body—or it might happen by uploading your mind into some future computer or robot. Kurzweil naturally wants to make sure that he's still around when the Singularity occurs, and so he takes as many as two hundred different pills and supplements every day and receives others through regular intravenous infusions. While it's quite common for health and diet books to make outsized promises, Kurzweil and his physician co-author Terry Grossman take things to an entirely new level in their books Fantastic Voyage: Live Long Enough to Live Forever and Transcend: Nine Steps to Living Well Forever.
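"Longevity escape velocity" does have a simple quantitative core: if medical progress adds more than one year of remaining life expectancy for every calendar year that passes, your projected date of death recedes faster than you approach it. Here is a minimal sketch of that threshold, with the rates invented purely for illustration.

    # "Longevity escape velocity" in miniature: aging spends one year of
    # remaining life expectancy per calendar year, while medical progress
    # adds some back. Above 1.0 added year per year, death keeps
    # receding; below it, the clock still runs out. Rates are invented.
    def years_until_death(remaining, gain_per_year, horizon=200):
        for year in range(1, horizon + 1):
            remaining += gain_per_year - 1.0
            if remaining <= 0:
                return year
        return None  # still alive at the horizon: escape velocity

    for gain in (0.5, 1.2):
        result = years_until_death(remaining=30.0, gain_per_year=gain)
        status = f"runs out in year {result}" if result else "never runs out"
        print(f"progress of {gain:.1f} yr/yr: {status}")

With thirty years of life expectancy remaining, progress of half a year per year still runs out (in year 60 of the simulation), while anything above one year per year never does; that threshold is the "escape velocity."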

It's not lost on the Singularity movement's many critics that all this talk of immortality and transformative change has deeply religious overtones. Indeed, the whole idea has been derided as a quasi-religion for the technical elite and a kind of "rapture for the nerds." Recent attention given to the Singularity by the mainstream media, including a 2011 cover story in Time, has led some observers to worry about its eventual intersection with traditional religions. Robert Geraci, a professor of religious studies at Manhattan College, wrote in an essay entitled "The Cult of Kurzweil" that if the movement achieves traction with the broader public, it "will present a serious challenge to traditional religious communities, whose own promises of salvation may appear weak in comparison."7 Kurzweil, for his part, vociferously denies any religious connotation and argues that his predictions are based on a solid, scientific analysis of historical data.

The whole concept might be easy to dismiss completely were it not for the fact that an entire pantheon of Silicon Valley billionaires has demonstrated a very strong interest in the Singularity. Larry Page and Sergey Brin of Google, as well as PayPal co-founder (and Facebook investor) Peter Thiel, have associated themselves with the subject. Bill Gates has likewise lauded Kurzweil's ability to predict the future of artificial intelligence. In December 2012 Google hired Kurzweil to direct its efforts in advanced artificial intelligence research, and in 2013 it launched a new biotechnology venture named Calico. The new company's stated objective is to conduct research focused on curing aging and extending the human lifespan.
