Authors: David Shenk
Now, in Game 2 of the 2003 match, he had the unenviable task of proving he could beat Junior again—and this time as Black, which carries the inherent disadvantage of moving second. But Kasparov did not come to the table to fight for a draw. From the start, he surprised expert onlookers with another aggressive game, an unorthodox version of one of his specialties, the Sicilian Defense. Just as in the first game, he boldly took on the computer in tactical play. And he seemed in command for much of the game. There was now no doubt about it: tactical play—trying to achieve short-term gain—was clearly an emerging theme in this match. In recent years, it had become axiomatic: Humans cannot win tactical battles against computers. A squishy and vulnerable human brain cannot compete move by move with a computer that analyzes millions of moves per second. In New York, Kasparov was challenging this widely held belief, and in Game 2 he again took on Deep Junior both strategically and tactically. Specifically in this game, explained commentator John Fernandez, “Kasparov’s wrinkle was to employ a rare development of his dark-squared Bishop on the square a7, where it controls many squares in the heart of Deep Junior’s position from the protective bunker of the corner of the board.”
DEEP JUNIOR VS. GARRY KASPAROV
JANUARY 28, 2003
NEW YORK
GAME 2
Commentators noted that Kasparov had successfully played this exact Bishop move in a recent exhibition match. It worked this time too, for a while. But then on move 25 he was outfoxed. Deep Junior offered Kasparov the chance to check with his Queen. Kasparov had been planning another move, but the check was too tempting to pass up. At the least, he couldn’t see how taking the check could do any harm.
“It was a human move,” he said later. “You see a check like that and you simply play it. But I immediately realized that I had let [Junior] off the hook.”
The game ended in a draw—all in all, not a bad deal for Kasparov, who as Black had avoided a loss. The ambassador for human intelligence was still doing humans proud, still winning the match against an inexhaustible and savvy machine. At the same time, Deep Junior was surprising the experts with its humanity. “Its play has been almost completely indistinguishable from that of a human master…it hasn’t made any obvious computer-like moves,” commented popular American chess columnist Mig Greengard.
“Deep Junior,” he declared, “has so far passed the chess Turing Test.”
In the world of computer professionals, Greengard’s remark was equivalent to declaring that someone had just landed on Mars. Passing the Turing test was an extraordinary feat of engineering. It meant that machines were now crossing the threshold into the realm of human intelligence—or at least the appearance of intelligence.
Trained as a mathematical logician in the 1930s, British computer pioneer Alan Turing was recruited by British Intelligence in World War II. At the Bletchley Park military intelligence campus north of London, he led a team that cracked the vexing Enigma encryption code used by German U-boats. (Field Marshal “Monty” Montgomery thanked Turing’s squad for letting him “know what the Jerries are having for breakfast.”) They also helped the Allies create uncrackable encryptions of their own so that commanders and leaders, including Roosevelt and Churchill, could talk to one another in confidence.
After the war, Turing introduced concepts necessary for the invention of digital computing. Among other things, explains Andrew Hodges in his biography Alan Turing: The Enigma, Turing contributed “the crucial twentieth-century insight that symbols representing instructions are no different in kind from symbols representing numbers.” That meant that computers could potentially do much more than calculate—they could also take on a wide variety of other tasks involving the manipulation of data, patterns, and even decision making. Building on that and other Turing insights, the first generation of primitive computers (including the famous ENIAC and UNIVAC machines) was built in the late 1940s and early 1950s. The early history of computing is nearly impossible to imagine without him.
His legendary Turing test came in response to the giant question that he posed in a 1950 article for the journal Mind: “Can machines think?” After considering the technological, cognitive, philosophical, and theological implications of that question, Turing argued that yes, a true thinking machine could eventually be built—and he expressed confidence that one day it would happen.
But how to tell? How could anyone properly determine if a machine was engaged in humanlike thought? Turing concluded that there would never be a satisfactory objective standard. Instead, he proposed, it was ultimately a matter of human perception. If, in response to human questions, a computer could consistently provide answers indistinguishable from human answers—answers that would fool a human on the other side of a curtain—then that machine would ipso facto be demonstrating thought. The Turing test was born.
In chess, the equivalent question was whether a computer player might someday fool people into thinking it was a human player. Any computer could be programmed to respond to certain moves with other moves, or to value certain pieces above other pieces. But could humanlike play involving intuition, creativity, risk taking, and opponent psychology ever be convincingly mimicked by a machine?
Alan Turing loved chess and played all the time, though he wasn’t nearly as adept on the chessboard as he was on the chalkboard. At Bletchley Park he was fortunate to be surrounded by accomplished players, and the chess pieces were always handy. The onetime British champion Conel Hugh O’Donel Alexander was Turing’s deputy. Future British champion Harry Golombek was also on the staff; Golombek’s chess superiority over Turing was such that he could overwhelm Turing in a chess game, force Turing’s resignation, and then turn the board around to play Turing’s pieces against his own original pieces—and win.
Turing and his colleagues played not just for the diversion, but also because chess was such a useful tool. It helped them work through ideas and problems, explore logic and mathematics, and experiment with mechanical instructions. Contrary to what one might have supposed, the busy nexus of chess and mathematics had not diminished as mathematics itself became more nuanced in the modern age. One might expect that highly advanced concepts like cycloids, primary decomposition, and transcendental numbers would render the medieval chessboard an obsolete tool. On the contrary, the game seemed only to become more and more entrenched in classrooms, journals, blackboards, and, eventually, on Web sites. In the late nineteenth century, number theory pioneer Edmund Landau wrote two books on mathematical problems inherent in chess. (More than a century later, the connection would still be vibrant: in 2004 Harvard University offered the course “Chess and Mathematics,” whose aim was to “illustrate the interface between chess problems and puzzles on the one hand, and mathematical theory and computation on the other.” Chess, it seemed, would never lose relevance, since its vitality was based not on any particular set of ideas, but on its symbolic power.)
For Turing during World War II, chess was also particularly attractive as just about the only part of his intellectual life that was not top secret. Turing and his Bletchley Park colleagues could discuss chess problems anytime and anywhere without compromising their military work. One emerging thread of their discussions was the possibility of building a chess-playing machine, which would allow them to test their ideas about the mechanization of thought. They considered chess an excellent model for such a test. Among other attributes, it was an elegant example of what Princeton mathematics guru John von Neumann (a mentor of Turing) called games of perfect information, meaning that all variables were always known to all players. Nothing about the game was hidden. The same was true of less complex games like checkers, tic-tac-toe, and others—all of these games stood in contrast to poker, for example, where cards are concealed and players can bluff. In his work, von Neumann had established that each game of perfect information has an ideal “pure strategy”—a set of rules that will suit every possible contingency. Theoretically at least, the perfectly designed computer could play the perfect game of chess.
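Von Neumann’s claim can be made concrete with a game far smaller than chess. The sketch below is a modern illustration, not anything Turing’s circle wrote: it solves tic-tac-toe—a game of perfect information small enough to search exhaustively—and confirms that the “pure strategy” for both sides yields a draw.

```python
# Exhaustive minimax over a perfect-information game. Because nothing is
# hidden, a fixed rule prescribes a best move in every position.

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),      # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),      # columns
             (0, 4, 8), (2, 4, 6)]                 # diagonals
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, best_move) from `player`'s point of view:
    +1 forced win, 0 draw, -1 forced loss."""
    w = winner(board)
    if w is not None:
        return (1 if w == player else -1), None
    moves = [i for i, sq in enumerate(board) if sq == ' ']
    if not moves:
        return 0, None                      # board full: draw
    other = 'O' if player == 'X' else 'X'
    best_score, best_move = -2, None
    for m in moves:
        board[m] = player
        score, _ = minimax(board, other)    # score from opponent's view
        board[m] = ' '
        if -score > best_score:             # opponent's loss is our gain
            best_score, best_move = -score, m
    return best_score, best_move

# Perfect play from the empty board is a draw:
score, move = minimax([' '] * 9, 'X')
print(score)  # 0 -- neither side can force a win
```

Chess, of course, admits no such exhaustive search in practice—which is precisely why the question of machine chess remained interesting.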
In 1946, as part of an exploration of what computers could potentially do, Turing became perhaps the first person to seriously broach the concept of machine intelligence. Chess was his vehicle for conveying the notion. “Can [a] machine play chess?” he asked, and then offered an answer:
It could fairly easily be made to play a rather bad game. It would be bad because chess requires intelligence…. There are indications however that it is possible to make the machine display intelligence at the risk of its making occasional serious mistakes. By following up this aspect the machine could probably be made to play very good chess.
Today the words fall flat on the page. Sixty years ago, they were revolutionary. The most startling word of all was intelligence, which Turing did not use casually. He was not merely talking about the ability to follow complex instructions. “What we want,” Turing explained, “is a machine that can learn from experience…[the] possibility of letting the machine alter its own instructions.” It was a stunning prognostication, and Turing is today revered for his vision. For someone surrounded at the time by machines not much smarter than a light switch to imagine a machine that could someday learn from mistakes and alter its own code was like an eighteenth-century stagecoach driver envisioning a sporty car with a hybrid engine and satellite navigation.
Two years later, in 1948, Turing and his colleague David Champernowne built a computer chess program called “Turochamp.” Compared to later such programs, it was extremely primitive. But at the time their program was too complex for the available hardware. Of the few actual computers in existence at that time, none of them was even remotely powerful enough to execute their software. So Turing himself became a machine: in a game against Champernowne, Turing followed the Turochamp instruction code as if he were the computer, making the computations by hand and moving his pieces accordingly. It took Turing about thirty minutes to calculate each move. Not surprisingly, the program lost to the experienced human chess player. But subsequently, it managed to beat Champernowne’s wife, a chess novice. Chess computing—and artificial intelligence (AI) itself—had taken its first baby step forward.
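Turochamp’s rules were written for hand execution, not in a programming language, but the flavor of the computation can be suggested in modern code. The sketch below is a loose illustration, not Turing’s actual rule set—which also weighed factors such as mobility and piece safety—using material weights close to those Turing reportedly assigned.

```python
# A Turochamp-flavored material count: sum piece values, positive for
# White, negative for Black. These weights (P=1, N=3, B=3.5, R=5, Q=10)
# are close to those attributed to Turing; the King is not scored.

PIECE_VALUES = {'P': 1.0, 'N': 3.0, 'B': 3.5, 'R': 5.0, 'Q': 10.0}

def material(pieces):
    """`pieces`: a string of piece letters still on the board,
    uppercase for White, lowercase for Black."""
    score = 0.0
    for p in pieces:
        value = PIECE_VALUES.get(p.upper(), 0.0)
        score += value if p.isupper() else -value
    return score

# Toy position: full White army, Black missing one knight.
white = "RNBQKBNR" + "PPPPPPPP"
black = "rbqkbnr" + "pppppppp"
print(material(white + black))  # 3.0 -- White is a knight ahead
```

Executing even a tally this simple for every candidate move, by hand, helps explain why each Turochamp move cost Turing about thirty minutes.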
Very early on, AI pioneers in the United States and Britain hit on a conceptual quandary: should they design machines to actually think like human beings—incorporating experience, recognizing patterns, and formulating and executing plans—or should they play to the more obvious strengths of machines’ ability to conduct brute-force mathematical calculations? In his 1947 essay on machine intelligence, Turing had suggested that he would pursue the former—“letting the machine alter its own instructions.” But from a practical standpoint, he focused on the latter. Turing’s counterparts across the Atlantic, including MIT’s Claude Shannon, independently came to the same way of thinking. Like Turing, Shannon was fascinated by chess’s potential in the pursuit of what he called “mechanized thinking.” But he became convinced that computer chess and other AI pursuits should not be modeled on human thought. Unlike human brains, computers did not have scores of different specialized components that could read information, contextualize it, prioritize it, store it in different forms, recall it in a variety of ways, and then decide on how to apply it; computers, at least as they were understood then, could calculate very quickly, following programmed instructions. This particular strength—and limitation—of computers suggested a different route for AI, a new sort of quasi-intelligence based on mathematical computation.
Chess would be a central proving ground for this new type of intelligence. Theoretically, at least, the game could be fully converted into one long mathematical formula. The board could be represented as a numerical map, pieces weighted according to their relative value, moves and board positions scored according to the numerical gain or loss that each would bring. But the scope of computation was immense—too much for the earliest computers to handle. One of the first was John von Neumann’s Maniac I, built in 1956 in Los Alamos, New Mexico, to help refine the American hydrogen bomb arsenal. With 2,400 vacuum tubes, the machine could process a staggering ten thousand instructions per second. It could not, though, handle a full-scale chessboard. Playing a simplified version of chess on a chessboard six squares by six with no Bishops, no castling, and no double-square first Pawn move, Maniac I required twelve minutes to look just two full moves ahead. (With Bishops, it would have needed three hours.) The machine did go on to help design potent nuclear warheads, but as a chess player it was pretty hopeless.
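Why twelve minutes for two moves? Simple exponential arithmetic shows the scale of the problem. In the back-of-envelope sketch below, only the ten thousand instructions per second comes from the account above; the branching factor and the per-position instruction cost are illustrative assumptions, not figures from the text.

```python
# Back-of-envelope cost of a full-width lookahead on a 1950s machine.
# Leaf positions grow as branching_factor ** plies; each must be
# generated and scored.

def positions_to_search(branching_factor, plies):
    """Leaf count of a full-width search `plies` half-moves deep."""
    return branching_factor ** plies

MANIAC_SPEED = 10_000        # instructions per second (from the account above)
COST_PER_POSITION = 100      # assumed instructions to generate + score a position

for b in (15, 20, 25):       # plausible branching factors for 6x6 chess
    leaves = positions_to_search(b, 4)           # "two full moves" = 4 plies
    seconds = leaves * COST_PER_POSITION / MANIAC_SPEED
    print(f"b={b}: {leaves:,} leaf positions, ~{seconds / 60:.0f} minutes")
```

Even with these rough guesses the estimates land in the tens of minutes, bracketing Maniac I’s reported twelve—and each additional full move multiplies the work by hundreds.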