Machines of Loving Grace
by John Markoff

For now, Trower has focused on a clear and powerful role for robots as assistants for the infirm and the elderly.
This is an excellent example of AI used directly in the service of humans, but what happens if AI-based machines spread quickly through the economy?
We can only hope that the Keynesians are vindicated—in the long run.

The twin paths of AI and IA place a tremendous amount of power and responsibility in the hands of the two communities of designers described in this book.
For example, when Steve Jobs set out to assemble a team of engineers to reinvent personal computing with Lisa and the Macintosh, he had a clear goal in mind.
Jobs thought of computing as a “bicycle for our minds.”
Personal computing, which was initially proposed by a small group of engineers and visionaries in the 1970s, has since then had a tremendous impact on the economy and the modern workforce.
It has both empowered individuals and unlocked human creativity on a global scale.

Three decades later, Andy Rubin’s robotics project at Google represents a similar small group of engineers, this one advancing the state of the art in robotics.
Rubin set out with an equally clear—if dramatically different—vision in mind.
When he started acquiring technology and talent for Google’s foray into robotics, he described a ten- to fifteen-year-long effort to radically advance an array of developments in robotics, from walking machines to robot arms and sensor technology.
He sketched a vision of bipedal Google delivery robots riding to homes on the backs of Google cars, from which they would hop off to deliver packages.

Designing humans either into or out of computer systems is increasingly possible today.
Further advances in both artificial intelligence and augmentation tools will confront roboticists and computer scientists with clear choices about the design of the systems in the workplace and, increasingly, in the surrounding world.
We will soon be living—either comfortably or uncomfortably—with autonomous machines.

Brad Templeton, a software designer and consultant to the Google car project, has asserted, “A robot will be truly autonomous when you instruct it to go to work and it decides to go to the beach instead.”5
It is a wonderful turn of phrase, but he has conflated self-awareness with autonomy.
Today, machines are beginning to act without meaningful human intervention, or at a level of independence that we can consider autonomous.
This level of autonomy poses difficult questions for designers of intelligent machines.
For the most part, however, engineers ignore the ethical issues posed by the use of computer technologies.
Only occasionally does the community of artificial intelligence researchers sense a quiver of foreboding.

At the Humanoids 2013 conference in Atlanta, which focused on the design and application of robots that appear humanlike, Ronald Arkin, a Georgia Tech roboticist, made a passionate plea to the audience in his talk “How to NOT Build a Terminator.”
He reminded the group that in addition to his famous three laws, Asimov later added the fundamental “zeroth” law of robotics, which states, “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”6
Speaking to a group of more than two hundred roboticists and AI experts from universities and corporations, Arkin challenged them to think more deeply about the consequences of automation.
“We all know that [the DARPA Robotics Challenge] is motivated by urban seek-and-destroy,” he said sardonically, adding, “Oh no, I meant urban search-and-rescue.”

The line between robots as rescuers and enforcers is already gray, if it exists at all.
Arkin showed clips from sci-fi movies, including James Cameron’s 1984 The Terminator.
Each of the clips depicted evil robots performing tasks that DARPA has specified as part of its robotics challenge: clearing debris, opening doors, breaking through walls, climbing ladders and stairs, and riding in utility vehicles.
Designers can exploit these capabilities either constructively or destructively, depending on their intent.
The audience laughed nervously—but Arkin refused to let them off the hook.
“I’m being facetious,” he said, “but I’m just trying to tell you that these kinds of technologies you are developing may have uses in places you may not have fully envisioned.”
In the world of weapons design, the potential for unexpected consequences has long been a hallmark of what are described as “dual-use” technologies, like nuclear power, which can be used to produce both electric power and weapons.
Now it is also increasingly true of robotics and artificial intelligence technologies.
These technologies are dual-use not just as weapons, but also in terms of their potential to either augment or replace humans.
Today, we are still “in the loop”—machines that either replace or augment humans are the product of human designers, so the designers cannot easily absolve themselves of the responsibility for the consequences of their inventions.
“If you would like to create a Terminator, then I would contend: Keep doing what you are doing, because you are creating component technologies for such a device,” Arkin said.
“There is a big world out there, and this world is listening to the consequences of what we are creating.”

The issues and complications of automation have extended beyond the technical community.
In a little-noted, unclassified Pentagon report entitled “The Role of Autonomy in DoD Systems,”7 the authors pointed out the ethical quandaries involved in the automation of battle systems.
The military itself is already struggling to negotiate the tension between autonomous systems, like drones, that promise both accuracy and cost efficiency, and the consequences of stepping ever closer to the line where humans are no longer in control of decisions on life and death.
Arkin has argued elsewhere that autonomous war-fighting robots might hold an advantage over human soldiers: because they would feel no threat to their own safety, they could potentially reduce collateral damage and avoid war crimes.
This question is part of a debate that dates back at least to the 1970s, when the air force generals who controlled the nation’s fleets of strategic bombers used the human-in-the-loop argument—that it was possible to recall a bomber and use human pilots to assess damage—in an attempt to justify the value of bomber aircraft in the face of more modern ballistic missiles.

But Arkin also posed a new set of ethical questions in his talk.
What if we have moral robots but the enemy doesn’t?
There is no easy answer to that question.
Indeed, increasingly intelligent and automated weapons technologies have inspired the latest arms race.
Adding inexpensive intelligence to weapons systems threatens to change the international balance of power between nations.

When Arkin concluded his talk at the stately Historic Academy of Medicine in Atlanta, Gill Pratt, the DARPA program manager who directed the agency’s Robotics Challenge, was one of the first to respond.
He didn’t refute Arkin’s point.
Instead, he acknowledged that robots are a “dual-use” technology.
“It’s very easy to pick on robots that are funded by the Defense Department,” he said.
“It’s very easy to pick on a robot that looks like the Terminator, but in fact with dual-use being everywhere, it really doesn’t matter.
If you’re designing a robot for health care, for instance, the autonomy it needs is actually in excess of what you would need for a disaster response robot.”8
Advanced technologies have long posed questions about dual-use.
Now, artificial intelligence and machine autonomy have reframed the problem.
Until now, dual-use technologies have explicitly required that humans make ethical decisions about their use.
The specter of machine autonomy either places human ethical decision-making at a distance or removes it entirely.

In other fields, scientists and technologists have at times been forced to confront the potential consequences of their work, and many of them acted to protect humanity.
In February of 1975, for example, Nobel laureate Paul Berg encouraged the elite of the then-new field of biotechnology to meet at the Asilomar Conference Grounds in Pacific Grove, California.
At the time, recombinant DNA—inserting new genes into the DNA of living organisms—was a fledgling development.
It presented both the promise of dramatic advances in medicine, agriculture, and new materials and the horrifying possibility that scientists could unintentionally bring about the end of humanity by engineering a synthetic plague.
For the scientists, the meeting led to an extraordinary resolution.
The group recommended that molecular biologists refrain from certain kinds of experiments and embark on a period of self-regulation, pausing that research while they considered how to make it safe.
To monitor the field, biotechnologists set up an independent committee at the National Institutes of Health to review research.
After a little more than a decade, the NIH had gathered sufficient evidence from a wide array of experiments to suggest that it should lift the restrictions on research.
It was a singular example of how society might thoughtfully engage with the consequences of scientific advance.

Following in the footsteps of the biologists, in February of 2009, a group of artificial intelligence researchers and roboticists also met at Asilomar to discuss the progress of AI after decades of failure.
Eric Horvitz, the Microsoft AI researcher who was serving as president of the Association for the Advancement of Artificial Intelligence, called the meeting.
During the previous five years, the researchers in the field had begun discussing twin alarms.
One came from Ray Kurzweil, who had heralded the relatively near-term arrival of computer superintelligences.
The other came from Bill Joy, a founder of Sun Microsystems, who offered a darker view of artificial intelligence in a Wired magazine article detailing a trio of technological threats from the fields of robotics, genetic engineering, and nanotechnology.9
Joy believed that the technologies represented a triple threat to human survival, and he did not see an obvious solution.

The artificial intelligence researchers who met at Asilomar chose to act less cautiously than their predecessors in the field of biotechnology.
The group of computer science and robotics luminaries, including Sebastian Thrun, Andrew Ng, Manuela Veloso, and Oren Etzioni, who is now the director of Paul Allen’s Allen Institute for Artificial Intelligence, generally discounted the possibility of superintelligences that would surpass humans as well as the possibility that artificial intelligence might spring spontaneously from the Internet.
They agreed that robots capable of killing autonomously had already been developed, yet when it emerged toward the end of 2009, the group’s report proved to be an anticlimax.
The field of AI had not yet arrived at the moment of imminent threat.
“The 1975 meeting took place amidst a recent moratorium on recombinant DNA research.
In stark contrast to that situation, the context for the AAAI panel is a field that has shown relatively graceful, ongoing progress.
Indeed, AI scientists openly refer to progress as being somewhat disappointing in its pace, given hopes and expectations over the years,”10 the authors wrote in a report summarizing the meeting.

Five years later, however, the question of machine autonomy emerged again.
In 2014, when Google acquired DeepMind, a British artificial intelligence firm that specialized in machine learning, popular belief held that roboticists were very close to building completely autonomous robots.
The tiny start-up had produced a demonstration that showed its software playing video games, in some cases better than human players.
Reports of the acquisition were also accompanied by the claim that Google would set up an “ethics panel” because of concerns about potential uses and abuses of the technology.
Shane Legg, one of the cofounders of DeepMind, acknowledged that the technology would ultimately have dark consequences for the human race.
“Eventually, I think human extinction will probably occur, and technology will likely play a part in this.”11
For an artificial intelligence researcher who had just reaped hundreds of millions of dollars, it was an odd position to take.
If someone believes that technology will likely evolve to destroy humankind, what could motivate them to continue developing that same technology?

At the end of 2014, the 2009 AI meeting at Asilomar was reprised when a new group of AI researchers, funded by one of the Skype founders, met in Puerto Rico to again consider how to make their field safe.
Despite a new round of alarming statements about the dangers of AI from luminaries such as Elon Musk and Stephen Hawking, the attendees wrote an open letter that notably fell short of the call to action that had emerged from the original 1975 Asilomar biotechnology meeting.
