Authors: Fred Kaplan
Clarke also knew that, even if the government did take control of Internet traffic, few agencies possessed the resources or the technical talent to do much with it. The exceptions were the Defense Department, which had the authority only to defend its own networks, and the NSA, which had twice been excluded from any role in monitoring civilian computers or telecommunications: first, back in 1984, in the aftermath of Ronald Reagan's NSDD-145; and again, early in the Clinton presidency, during the Clipper Chip controversy.
Clarke spent much of the next year and a half, in between various crises over terrorism, writing a 159-page document called the National Plan for Information Systems Protection: Defending America's Cyberspace, which President Clinton signed on January 7, 2000.
In an early draft, Clarke had proposed hooking up all civilian government agencies (and perhaps, eventually, critical infrastructure companies) to a Federal Intrusion Detection Network.
FIDNET, as he called it, would be a parallel Internet, with sensors wired to some government agency's monitor (which agency was left unclear). If the sensors detected an intrusion, the monitor would automatically be alerted. FIDNET would unavoidably have a few access points to the regular Internet, but sensors would sit atop those points and alert officials of intrusions there, as well. Clarke modeled the idea on the intrusion-detection systems installed in Defense Department computers in the wake of Solar Sunrise. But that was a case of the military monitoring itself. To have the government (and, given which agencies did this sort of thing, it would probably be the military) monitoring civilian officials, much less private industry, was widely seen, and loathed, as something different.
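The mechanism Clarke envisioned, sensors matching traffic against known attack patterns and raising alerts to a central monitor, is the basic shape of signature-based intrusion detection. The sketch below is only an illustration of that idea, not anything from the plan itself; all names, patterns, and addresses are hypothetical.

```python
# Minimal sketch of a signature-based sensor: each packet summary is
# checked against a list of known attack signatures, and any match is
# reported to a central alert log. All values here are illustrative.

SIGNATURES = {
    "port-scan": "SYN sweep across sequential ports",
    "buffer-overflow": "oversized payload sent to a known service",
}

def inspect(packet: dict, alerts: list) -> bool:
    """Check one packet summary; record an alert on a signature match."""
    pattern = packet.get("pattern")
    if pattern in SIGNATURES:
        alerts.append((packet["source"], pattern, SIGNATURES[pattern]))
        return True
    return False

alerts = []
traffic = [
    {"source": "10.0.0.7", "pattern": "http-get"},    # benign
    {"source": "192.0.2.9", "pattern": "port-scan"},  # matches a signature
]
for pkt in traffic:
    inspect(pkt, alerts)
# alerts now holds one entry, naming the scanning host
```

The limitation that dogged FIDNET-style designs is visible even here: the sensor can only flag patterns it already knows about, and someone has to decide who reads the alert log.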
When someone leaked Clarke's draft to The New York Times, in July 1999, howls of protest filled the air. Prominent members of Congress and civil-liberties groups denounced the plan as “Orwellian.” Clarke tried to calm these fears, telling reporters that FIDNET wouldn't infringe on individual networks or privacy rights in the slightest. Fiercer objections still came from the executives and board members of the infrastructure companies, who lambasted the plan as the incarnation of their worst nightmares about government regulation.
The idea was scuttled; the National Plan was rewritten.
When the revision was finished and approved six months later, President Clinton scrawled his signature under a dramatic cover note, a standard practice for such documents. But, in a departure from the norm, Clarke, under his own name, penned a separate introduction, headlined, “Message from the National Coordinator.”
In it, he tried to erase the image of his presumptuousness.
“While the President and Congress can order Federal networks to be secured,” he wrote, “they cannot and should not dictate solutions for private sector systems,” nor will they “infringe on civil liberties, privacy rights, or proprietary information.” He added, just to make things clearer, that the government “will eschew regulation.”
Finally, in a gesture so conciliatory that it startled friends and foes alike, Clarke wrote, “This is Version 1.0 of the Plan. We earnestly seek and solicit views about its improvement. As private sector entities make more decisions and plans to reduce their vulnerabilities and improve their protections, future versions of the Plan will reflect that progress.”
Then, one month later, the country's largest online companies, including eBay, Yahoo, and Amazon, were hit with a massive denial-of-service attack. Someone had hacked into thousands of computers, few of which were protected in any way, and used them to flood the companies' servers with endless requests for data, overloading the servers to the point where they shut down for several hours, in some cases days.
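What made the attack effective was sheer volume rather than cleverness, which is why the crudest countermeasure is simply counting requests per source within a time window and flagging anything past a threshold. The sketch below illustrates that idea only; the function name, addresses, and threshold are all hypothetical.

```python
from collections import Counter

# Hedged sketch: a denial-of-service flood works by request volume alone,
# so a minimal detector tallies requests per source in one time window and
# flags any source that exceeds a threshold. Values are illustrative.

def flooding_sources(requests, threshold=100):
    """requests: iterable of source-address strings seen in one window.
    Returns the set of sources whose request count meets the threshold."""
    counts = Counter(requests)
    return {src for src, n in counts.items() if n >= threshold}

window = ["198.51.100.4"] * 500 + ["203.0.113.2"] * 3
suspects = flooding_sources(window)  # flags only the flooding source
```

A real distributed attack spreads the load across thousands of hijacked machines precisely to stay under any single-source threshold, which is part of why the 2000 attacks were so hard to stop.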
Here was Clarke's chance to jump-start national policy: if not to revive FIDNET (that seemed out of the question for now), then at least to impose some rules on wayward bureaucracies and corporations. He strode into the Oval Office, where Clinton had already heard the news, and said, “This is the future of e-commerce, Mr. President.”
Clinton replied, a bit distantly, “Yeah, Gore's always going on about 'e-commerce.'”
Still, Clarke persuaded the president to hold a summit in the White House Cabinet Room, inviting twenty-one senior executives from the major computer and telecom companies (AT&T, Microsoft, Sun Microsystems, Hewlett-Packard, Intel, Cisco, and others) along with a handful of software luminaries from consulting firms and academia. Among this group was the now-famous Peiter Zatko, who identified himself on the official guest list as “Mudge.”
Zatko came into the meeting starstruck, nearly as much by the likes of Vint Cerf, one of the Internet's inventors, as by the president of the United States. But after a few minutes of sitting through the discussion, he grew impatient. Clinton was impressive, asking insightful questions, drawing pertinent analogies, grasping the problem at its core. But the corporate execs were faking it, intoning that the attack had been “very sophisticated” without acknowledging that their own passivity had allowed it to happen.
A few weeks earlier, Mudge had gone legit. The L0pht was purchased by an Internet company called @stake, which turned the Watertown warehouse into a research lab for commercial software to block viruses and hackers. Still, he had no personal stake in the piece of theater unfolding before him, so he spoke up.
“Mr. President,” he said, “this attack was not sophisticated. It was trivial.” All the companies should have known that this could happen, but they hadn't invested in preventive measures (which were readily available) because they had no incentive to do so. He didn't elaborate on the point, but everyone knew what he meant by “incentives”: if an attack took place, no one would get punished, no stock prices would tank, and it would cost no more to repair the damage than it would have cost to obstruct an attack in the first place.
The room went silent. Finally, Vint Cerf, the Internet pioneer, said, “Mudge is right.” Zatko felt flattered and, under the circumstances, relieved.
As the meeting broke up, with everyone exchanging business cards and chatting, Clarke signaled Zatko to stick around. A few minutes later, the two went into the Oval Office and talked a bit more with the president. Clinton admired Zatko's cowboy boots, hoisted his own snakeskins onto his desk, and disclosed that he owned boots made of every mammal on the planet. (“Don't tell the liberals,” he whispered.) Zatko followed the president's lead, engaging in more small talk. After a few minutes, a handshake, and a photo souvenir, Zatko bid farewell and walked out of the office with Clarke.
Zatko figured the president had enough on his mind, what with the persistent fallout from the Monica Lewinsky scandal (which had nearly led to his ouster), the fast-track Middle East peace talks (which would go nowhere), and the upcoming election (which Vice President Gore, the carrier of Clinton's legacy, would lose to George W. Bush).
What Zatko didn't know was that, while Clinton could muster genuine interest in the topic (or any other topic) at a meeting of high-powered executives, he didn't care much about cyber and, really, never had. Clarke was the source, and usually the only White House source, of any energy and pressure on the issue.
Clarke knew that Zatko's Cabinet Room diatribe was on the mark. The industry execs would never fix things voluntarily. In this sense, the meeting was almost comical, with several of them imploring the president to take action, then, a moment later, assuring him that they could handle the problem without government fiat.
The toned-down version of his National Plan for Information Systems Protection called for various cooperative ventures between the government and private industry to get under way by the end of 2000 and to be fully in place by May 2003. But the timetable seemed implausible. The banks were game; a number of them had readily agreed to form an industry-wide ISAC (an Information Sharing and Analysis Center) to deal with the challenge. This wasn't so surprising: banks had been the targets of dozens of hackings, costing them millions of dollars and, potentially, the trust of high-rolling customers; some of the larger financial institutions had already hired computer specialists. But most of the other critical infrastructures (transportation, energy, water supply, emergency services) hadn't been hacked: executives of those companies saw the threat as hypothetical, and, as Zatko had observed, they saw no incentive in spending money on security.
Even the software industry included few serious takers: they knew that security was a problem, but they also knew that installing truly secure systems would slow down a server's operations, at a time when customers were paying good money for more speed. Some executives asked security advocates for a cost-benefit analysis: what were the odds of a truly catastrophic event; what would such an event cost them; how much would a security system cost, and what were the chances that the system would actually prevent intrusions?
No one could answer these questions; there were no data to support an honest answer.
The Pentagon's computer network task force was facing similar obstacles. Once, when Art Money, the assistant secretary of defense for command, control, communications, and intelligence, pushed for a 10 percent budget hike for network security, a general asked him whether the program would yield a 10 percent increase in security. Money went around to his technical friends, in the NSA and elsewhere, posing the question. No one could make any such assurance. The fact was, most generals and admirals wanted more tanks, planes, and ships; a billion dollars more for staving off computer attacks (a threat that most regarded as far-fetched, even after Eligible Receiver, Solar Sunrise, and Moonlight Maze, because, after all, those episodes had done no discernible damage to national security) meant a billion dollars less for weapons.
But things were changing on the military side: in part because more and more colonels, even a few generals, were starting to take the problem seriously; in part because the flip side of cyber security, cyber warfare, was taking off in spades.
Back in the summer of 1994, while Ken Minihan and his demon-dialers at Kelly Air Force Base were planning to shut down Haiti's telephone network as a prelude to President Clinton's impending invasion, a lieutenant colonel named Walter “Dusty” Rhoads was sitting in a command center in Norfolk, Virginia, waiting for the attack to begin.
Rhoads was immersed in Air Force black programs, having started out as a pilot of, first, an F-117 stealth fighter, then of various experimental aircraft in undisclosed locations. By the time of the Haiti campaign, he was chief of the Air Combat Command's Information Warfare Branch at Langley Air Force Base, Virginia, and, in that role, had converted Minihan's phone-jamming idea into a detailed plan and coordinated it with other air operations.
For days, Rhoads and his staff were stuck in that office in Norfolk, going stir-crazy, pigging out on junk food, while coining code words for elaborate backup plans, in case one thing or another went wrong. The room was strewn with empty MoonPie boxes and Fresca cans,
so he made those the code words: “Fresca” for Execute the war plan, “MoonPie” for Stand down.
After the Haitian putschists fled and the invasion was canceled, Rhoads realized that the setup had been a bit convoluted. He was working through Minihan's Air Force Information Warfare Center, which was an intelligence shop, not an operations command; and, strictly speaking, intel and combat ops were separate endeavors, with Title 10 of the U.S. Code covering combat and Title 50 covering intelligence. Rhoads thought it would be a good idea to form an Air Force operations unit dedicated to information warfare.
Minihan pushed for the idea that fall, when he was reassigned to the Pentagon as the assistant chief of staff for intelligence. He sold the idea well. On August 15, 1995, top officials ordered the creation of the 609th Air Information Warfare Squadron, to be located at Shaw Air Force Base, in South Carolina.
The official announcement declared that the squadron would be “the first of its kind designed to counter the increasing threat to Air Force information systems.” But few at the time took any such threat seriously; the Marsh Report, Eligible Receiver, Solar Sunrise, and Moonlight Maze wouldn't dot the landscape for another two years. The squadron's other, main mission (though it was never mentioned in public statements) was to develop ways to threaten the information systems of America's adversaries.
Rhoads would be the squadron's commander, while its operations officer would be a major named Andrew Weaver. The previous spring, Weaver had written an Air Staff pamphlet called Cornerstones of Information Warfare, defining the term as “any action to deny, exploit, corrupt, or destroy the enemy's information and its functions,” with the ultimate intent of “degrading his will or capability to fight.” Weaver added, by way of illustration, “Bombing a telephone switching facility is information warfare. So is destroying the switching facility's software.”
On October 1, the 609th was up and running, with a staff of just three officers (Rhoads, Weaver, and a staff assistant) occupying a tiny room in the Shaw headquarters basement, just large enough for three desks, one phone line, and two computers.
Within a year, the staff grew to sixty-six officers. Two thirds of them worked on the defensive side of the mission, one third on offense. But in terms of time and energy, the ratio was reversedâone third was devoted to defense, two thirds to offenseâand those working the offensive side were kept in separate quarters, behind doors with combination locks.
In February 1997, the squadron held its first full Blue Flag exercise. The plan was for the offensive crew to mount an information warfare attack on Shaw's air wing, while the defensive crew tried to blunt the attack. One of the air wing's officers scoffed at the premise: the wing's communications were all encrypted, he said; nobody can get in there.