Authors: Fred Kaplan
But, like many conglomerates, Sony ran its various branches in stovepipe fashion: the executives at PlayStation had no contact with those at Online Entertainment, who had no contact with those at Sony Pictures.
So the lessons learned in one realm were not shared with the others.
Now, the executives realized, they had to get serious. To help
track down the hacker and fix the damage, they contacted not only the FBI but also FireEye, which had recently purchased Mandiant, the company (headed by the former Air Force cyber crime investigator Kevin Mandia) that had, most famously, uncovered the massive array of cyber attacks launched by Unit 61398 of the Chinese army. Soon enough, both FireEye and the FBI, the latter working with the NSA, identified the attackers as a group called
“DarkSeoul,” which often did cyber jobs for the North Korean government from outposts scattered across Asia.
Sony Pictures had planned to release on Christmas Day a comedy called The Interview, starring James Franco and Seth Rogen as a frothy TV talk show host and his producer who get mixed up in a CIA plot to assassinate North Korea's ruler, Kim Jong-un. The previous June, when the project was announced, the North Korean government released a statement warning that it would
“mercilessly destroy anyone who dares hurt or attack the supreme leadership of the country, even a bit.” The hack, it seemed, was the follow-up to the threat.
Some independent cyber specialists doubted that North Korea was behind the attack, but those deep inside the U.S. intelligence community were unusually confident.
In public, officials said that the hackers used many of the same “signatures” that DarkSeoul had used in the past, including an attack two years earlier that wiped out forty thousand computers in South Korea: the same lines of code, encryption algorithms, data-deletion methods, and IP addresses.
But the real reason for the government's certainty was that the NSA had long ago penetrated North Korea's networks: anything that its hackers did, the NSA could follow; when the hackers monitored what they were doing, the NSA could intercept the signal from their monitors, not in real time (unless there was a reason to be watching the North Koreans in real time), but the agency's analysts could retrieve the files, watch the images, and compile the evidence retroactively.
It was another case of a cyber attack launched not for money,
trade secrets, or traditional espionage, but to influence a private company's behavior.
This time, the blackmail worked. One week before opening day, Sony received an email threatening violence against theaters showing the film. Sony canceled its release, and suddenly the flow of embarrassing emails and data to the tabloids and the blogosphere ceased.
The studio's cave-in only deepened its problems. At his year-end press conference, traditionally held just before flying off to his Hawaii home for the holidays, President Obama told the world that Sony
“made a mistake” when it canceled the movie. “I wish they had spoken to me first,” he went on. “I would have told them, ‘Do not get into a pattern in which you're intimidated by these kinds of criminal acts.’” He also announced that the United States government would “respond proportionally” to the North Korean attack, “in a place and time and manner that we choose.”
Some in the cyber world were perplexed. Hundreds of American banks, retailers, utilities, defense contractors, even Defense Department networks had been hacked routinely, sometimes at great cost, with no retributive action by the U.S. government, at least not publicly. But a Hollywood studio gets breached, over a movie, and the president pledges retaliation in a televised news conference?
Obama did have a point in making the distinction. Jeh Johnson, the secretary of homeland security, said on the same day that the Sony attack constituted
“not just an attack against a company and its employees,” but “also an attack on our freedom of expression and way of life.” A Seth Rogen comedy may have been an unlikely emblem of the First Amendment and American values; but so were many other works that had come under attack through the nation's history, yet were still worth defending, because an attack on basic values had to be answered, however ignoble the target, lest some future assailant threaten to raid the files of some other studio,
publisher, art museum, or record company if their executives didn't cancel some other film, book, exhibition, or album.
The confrontation set off a debate inside the Obama White House, similar to the debates discussed, but never resolved, under previous presidents: What was a “proportional” response to a cyber attack? Did this response have to be delivered in cyberspace? Finally, what role should government play in responding to cyber attacks on citizens or private corporations? A bank gets hacked, that's the bank's problem; but what if two, three, or a dozen banks, big banks, were hacked? At what point did these assaults become a concern for national security?
It was a broader version of the question that Robert Gates had asked the Pentagon's general counsel eight years earlier: at what point did a cyber attack constitute an act of war? Gates never received a clear reply, and the fog hadn't lifted since.
On December 22, three days after Obama talked about the Sony hack at his press conference, someone disconnected North Korea from the Internet. Kim Jong-un's spokesmen accused Washington of launching the attack. It was a reasonable guess: Obama had pledged to launch a “proportional” response to the attack on Sony; shutting down North Korea's Internet for ten hours seemed to fit the bill, and it wouldn't have been an onerous task, given that the whole country had just 1,024 Internet Protocol addresses (fewer than the number on some blocks in New York City), all of them connected through a single service provider in China.
In fact, though, the United States government played no part in the shutdown. A debate broke out in the White House over whether to deny the charge publicly. Some argued that it might be good to clarify what a proportional response was not. Others argued that making any statement would set an awkward precedent: if U.S. officials issued a denial now, then they'd also have to issue a denial the next time a digital calamity occurred during a confrontation; otherwise everyone would infer that America did launch that attack, whether or not it actually had, at which point the victim might fire back.
In this instance, the North Koreans didn't escalate the conflict, in part because they couldn't. But another power, with a more robust Internet, might have.
Gates's question was more pertinent than ever, but it was also, in a sense, beside the point. Because of its lightning speed and the initial ambiguity of its source, a cyber attack could provoke a counterattack, which might escalate to war, in cyberspace and in real space, regardless of anyone's intentions.
At the end of Bush's presidency and the beginning of Obama's, in casual conversations with aides and colleagues in the Pentagon and the White House, Gates took to mulling over larger questions about cyber espionage and cyber war.
“We're wandering in dark territory,” he would say on these occasions.
It was a phrase from Gates's childhood in Kansas, where his grandfather worked for nearly fifty years as a stationmaster on the Santa Fe Railroad. “Dark territory” was the industry's term for a stretch of rail track that was uncontrolled by signals. To Gates, it was a perfect parallel to cyberspace, except that this new territory was much vaster and the danger was greater, because the engineers were unknown, the trains were invisible, and a crash could cause far more damage.
Even during the darkest days of the Cold War, Gates would tell his colleagues, the United States and the Soviet Union set and followed
some basic rules: for instance, they agreed not to kill each other's spies. But today, in cyberspace, there were no such rules, no rules of any kind. Gates suggested convening a closed-door meeting with the other major cyber powers (the Russians, Chinese, British, Israelis, and French) to work out some principles, some “rules of the road,” that might reduce our mutual vulnerabilities: an agreement, say, not to launch cyber attacks on computer networks controlling dams, waterworks, electrical power grids, and air traffic control (critical civilian infrastructure) except perhaps in wartime, and maybe not even then.
Those who heard Gates's pitch would furrow their brows and nod gravely, but no one followed up; the idea went nowhere.
Over the next few years, this dark territory's boundaries widened, and the volume of traffic swelled.
In 2014, there were almost eighty thousand security breaches in the United States, more than two thousand of which resulted in losses of data: a quarter more breaches, and 55 percent more data losses, than the year before.
On average, the hackers stayed inside the networks they'd breached for 205 days, nearly seven months, before being detected.
These numbers were likely to soar, with the rise of the Internet of Things. Back in 1996, Matt Devost, the computer scientist who simulated cyber attacks in NATO war games, co-wrote a paper called “Information Terrorism: Can You Trust Your Toaster?” The title was a bit facetious, but twenty years later, with the most mundane items of everyday life (toasters, refrigerators, thermostats, and cars) sprouting portals and modems for network connectivity (and thus for hackers too), it seemed prescient.
President Obama tried to stem the deluge. On February 12, 2015, he signed an executive order titled “Improving Critical Infrastructure Cybersecurity,” setting up forums in which private companies could share data about the hackers in their midst, with one another and with government agencies. In exchange, the agencies (mainly the NSA, working through the FBI) would provide top secret tools and techniques to protect their networks from future assaults.
These forums were beefed-up versions of the Information Sharing and Analysis Centers that Richard Clarke had established during the Clinton administration, and they were afflicted with the same weakness: both were voluntary; no company executives had to share information if they didn't want to. Obama made the point explicitly:
“Nothing in this order,” his document stated, “shall be construed to provide an agency with authority for regulating the security of critical infrastructure.”
Regulation: it was still private industry's deepest fear, deeper than the fear of losing millions of dollars at the hands of cyber criminals or spies. As the white-hat hacker Peiter “Mudge” Zatko had explained to Dick Clarke fifteen years earlier, these executives had calculated that it cost no more to clean up after a cyber attack than to prevent one in the first place, and the preventive measures might not work anyway.
Some industries had altered their calculations in the intervening
years, notably the financial sector. Its business consisted of bringing in money and cultivating trust; hackers had made an enormous dent in both, and sharing information demonstrably lowered risk. But the big banks were exceptions to the pattern.
Obama's cyber policy aides had made a run, early on, at drafting mandatory security standards, but they soon pulled back. Corporate resistance was too stiff; the secretaries of treasury and commerce argued that onerous regulations would impede an economic recovery, the number-one concern to a president digging the country out of its deepest recession in seventy years. Besides, the executives had a point: companies that had adopted tight security standards were still getting hacked. The government had offered tools, techniques, and a list of “best practices,” but “best” didn't mean perfect (after the hacker adapted, erstwhile best practices might not even be good) and, in any case, tools were just tools: they weren't solutions.
Two years earlier, in January 2013, a Defense Science Board task force had released a 138-page report on “the advanced cyber threat.” The product of an eighteen-month study, based on more than fifty briefings from government agencies, military commands, and private companies, the report concluded that there was no reliable defense against a resourceful, dedicated cyber attacker.
In several recent exercises and war games that the panel reviewed, Red Teams, using exploits that any skilled hacker could download from the Internet, “invariably” penetrated even the Defense Department's networks,
“disrupting or completely beating” the Blue Team.
The outcomes were all too reminiscent of Eligible Receiver, the 1997 NSA Red Team assault that first exposed the U.S. military's abject vulnerability.
Some of the task force members had observed up close the early history of these threats, among them Bill Studeman, the NSA director in the late 1980s and early 1990s, who first warned that the agency's radio dishes and antennas were “going deaf” in the global transition from
analog to digital; Bob Gourley, one of Studeman's acolytes, the first intelligence chief of the Pentagon's Joint Task Force-Computer Network Defense, who traced the Moonlight Maze hack to Russia; and Richard Schaeffer, the former director of the NSA Information Assurance Directorate, who spotted the first known penetration of the U.S. military's classified network, prompting Operation Buckshot Yankee.
Sitting through the briefings, collating their conclusions, and writing the report, these veterans of cyber wars past, real and simulated, felt as if they'd stepped into a time machine: the issues, the dangers, and, most surprising, the vulnerabilities were the same as they'd been all those years ago. The government had built new systems and software, and created new agencies and directorates, to detect and resist cyber attacks; but as with any other arms race, the offense, at home and abroad, had devised new tools and techniques as well, and, in this race, the offense held the advantage.