The pumps were controlled by two central computers, using proprietary Hunter WaterTech software, which communicated with a remote terminal unit at each pump station via two-way radio signals. Signals were transmitted from the computers to the RTUs, or between the RTUs, via repeater stations in the field that operated on nonpublic frequencies. Only someone on the central computers or within range of a repeater station, using Hunter WaterTech’s proprietary software and the proper communication protocols, could send commands to the pumping stations. Hunter WaterTech initially suspected an outside hacker was behind the attacks, but the water district had no intrusion-detection tools or logging system in place to detect a breach. Even after these systems were installed, investigators still found no evidence of an intrusion.
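Nothing in that design verified who was sending a command; the only barriers were knowing the radio frequency, having the proprietary software, and speaking the protocol. A toy sketch in Python of that trust model (the frame fields are invented for illustration; the real Hunter WaterTech protocol was proprietary and is not public):

    from dataclasses import dataclass

    # Invented frame layout for illustration; the real protocol was
    # proprietary. Note what is missing: any proof of who sent the frame.
    @dataclass
    class RadioFrame:
        station_id: int     # which pump station should act
        command: str        # e.g. "PUMP_ON", "PUMP_OFF", "SILENCE_ALARM"

    def rtu_handle(frame: RadioFrame, my_id: int):
        # The RTU checks only that the frame is addressed to it. Any radio
        # in range that speaks the protocol is treated as legitimate.
        if frame.station_id == my_id:
            print(f"executing {frame.command}")

    rtu_handle(RadioFrame(station_id=4, command="PUMP_OFF"), my_id=4)  # obeyed, no questions asked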

The attacks continued on and off for weeks and reached a peak one night in March, when more than two dozen incidents occurred. Investigators finally concluded it must be a rogue insider sending malicious commands in the field via two-way radio signals.[14] They zeroed in on a former contractor named Vitek Boden, a forty-nine-year-old engineer who had worked for Hunter WaterTech until his contract expired in December, around the time the first water pump failed. Boden had subsequently sought a full-time job with the water district but was turned down in January, just as the bulk of the problems began.

Sure enough, when police caught up with Boden one night in April after alarm systems at four pump stations were disabled, they found a laptop in his car with Hunter WaterTech’s proprietary software installed and a two-way radio set to the nonpublic frequency the water district used to communicate with pumping stations. They also found an RTU Boden had apparently used to send out the bogus commands.[15]

Boden’s case was the first cyberattack against a critical infrastructure system to come to light, but it likely wasn’t the first to occur. Others no doubt had simply gone undetected or unreported.[16] In the wake of the Maroochy incident, workers from other utilities told investigators that they would never have pursued criminal charges as Maroochy had done; they would have kept the matter quiet instead.[17]

The case should have been a wake-up call to control-system operators around the world, but many dismissed it because it involved an inside attacker who had extensive knowledge of the Maroochy Shire system and access to the specialized equipment needed to conduct the attack. No outsider could have done what Boden did, they argued, ignoring a number of security problems with Maroochy’s control-system network that outsiders could have exploited to carry out similar attacks. Peter Kingsley, one of the investigators on the case, later warned attendees at a control-system conference that although the Maroochy hack had been an inside job, breaches from outsiders were by no means impossible. “Some utilities believe they’re protected because they themselves can’t find an unauthorized way to access their systems,” he said. “But hackers don’t restrict themselves to ordinary techniques.”[18]

Kingsley’s warning seemed overblown in 2002, when there were still few signs that outsiders were interested in hacking critical infrastructure systems. And in the absence of any major disaster, the security of control systems simply wasn’t a concern.

It was around this time that Joe Weiss became an evangelist for control-system security.

Weiss is a lean and energetic sixty-four-year-old who works out of his home in Cupertino, California, the heart of Silicon Valley, and is used to thinking about catastrophic scenarios. He lives just five miles from California’s notorious San Andreas Fault and the seventy-year-old Stevens Creek Dam. When the Loma Prieta earthquake struck the area in 1989, chimneys toppled, streetlights and phones died for several days, and shockwaves in the swimming pool at nearby DeAnza College ejected water polo players from the water and onto the pavement like beached seals.

Weiss first became aware of the security problems with control systems in 1999. A nuclear engineer by training, he was working for the Electric Power Research Institute when the Y2K issue arose. Armageddon warnings in the press predicted dystopian meltdowns when computer clocks struck midnight on New Year’s Eve, because programs that stored years as only two digits had failed to anticipate the rollover to “00” on January 1, 2000. Weiss began to wonder: if such a minor thing as a change of date could threaten to bring control systems to a halt, what would more serious issues do? More important, if Y2K could accidentally cause huge problems, what might an intentional attack from hackers do?
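To see how small the underlying flaw was, consider a minimal illustration (hypothetical logic, not any specific vendor’s code) of date handling that keeps only the last two digits of the year:

    from datetime import date

    def expired(expiry_yy: int, today: date) -> bool:
        # Pre-Y2K shortcut: keep only the last two digits of the year
        # and compare them directly.
        return today.year % 100 > expiry_yy

    print(expired(1, date(1999, 12, 31)))   # True: a 2001 ("01") expiry wrongly
                                            # reads as past due, because 99 > 1
    print(expired(95, date(2000, 1, 1)))    # False: a 1995 expiry suddenly looks
                                            # valid, because 0 > 95 fails

One day past the rollover, every such comparison flips: records that should have expired look current, and current ones look decades overdue.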

Dozens of security conferences held around the world each year focused on general computer security, but none of them addressed control systems. So Weiss began attending them to learn what security guidelines the control-system community should adopt. But the more conferences he attended, the more worried he got. When network administrators talked about using encryption and authentication to prevent unauthorized users from accessing their systems, Weiss realized that control systems had none of the standard protections that normal computer networks used. When security experts asked him what brand of firewall control-system operators at energy plants used or how often they reviewed their network logs for evidence of intruders, Weiss had to reply, “We don’t have firewalls. No network logs, either.”[19] And when he began to ask control-system makers about the security of their products, he got blank stares in response. They told him no one had ever asked about security before.

Then two planes struck the Twin Towers in September 2001, and not long afterward authorities uncovered suspicious patterns of searches on government websites in California. The searchers appeared to be exploring digital systems used to manage utilities and government offices in the San Francisco region. The activity, which appeared to originate from IP addresses in Saudi Arabia, Indonesia, and Pakistan, showed a particular interest in emergency phone systems, power and water plants, and gas facilities.[20] Other searches focused on programming controls for fire-dispatch systems and pipelines.

The following year, US forces in Kabul seized a computer in an al-Qaeda office and found models of a dam on it, along with engineering software that could be used to simulate its failure.[21] That same year, the CIA issued a Directorate of Intelligence Memorandum stating that al-Qaeda had “far more interest” in cyberterrorism than previously believed and had begun to contemplate hiring hackers.

There were signs that others might be interested in US critical infrastructure too.[22] In 2001, hackers broke into servers at the California Independent System Operator, or Cal-ISO, a nonprofit corporation that manages the transmission system for moving electricity throughout most of the state. The attackers got in through two unprotected servers and remained undetected for two weeks until workers noticed problems with their machines.[23] Cal-ISO officials insisted the breach posed no threat to the grid, but unnamed sources told the Los Angeles Times that the hackers were caught just as they were trying to access “key parts of the system” that would have allowed them to cause serious disruptions in electrical service. One person called it a near “catastrophic breach.” The attack appeared to originate from China and came in the midst of a tense political standoff between China and the United States, after a US spy plane collided in midair with a Chinese fighter jet over the South China Sea.

In response to growing concerns about critical infrastructure, and in particular the security of the nation’s power grids, the Department of Energy launched a National SCADA Test Bed program in 2003 at the Idaho National Lab (INL). The goal was to work with the makers of control systems to evaluate their equipment for security vulnerabilities, an initiative that ultimately led to the 2007 Aurora Generator Test.[24]

There are 2,800 power plants in the United States and 300,000 sites producing oil and natural gas.[25] Another 170,000 facilities form the public water system in the United States, which includes reservoirs, dams, wells, treatment facilities, pumping stations, and pipelines.[26] But 85 percent of these and other critical infrastructure facilities are in the hands of the private sector, which means that aside from a few government-regulated industries—such as the nuclear power industry—the government can do little to force companies to secure their systems. The government, however, could at least try to convince the makers of control systems to improve the security of their products. Under the test-bed program, the government would conduct the tests as long as the vendors agreed to fix any vulnerabilities the tests uncovered.[27]

Around the same time, DHS also launched a site-assessment program through its Industrial Control Systems Cyber Emergency Response Team (ICS-CERT) to evaluate the security configuration of critical infrastructure equipment and networks already installed at facilities. Between 2002 and 2009, the team conducted more than 100 site assessments across multiple industries—oil and natural gas, chemical, and water—and found more than 38,000 vulnerabilities. These included critical systems that were accessible over the internet, default vendor passwords that operators had never bothered to change or hard-coded passwords that couldn’t be changed, software that was missing security patches, and a lack of standard protections such as firewalls and intrusion-detection systems.

But the test-bed and site-assessment researchers were battling decades of industry inertia: vendors took months or even years to patch vulnerabilities that government researchers found in their systems, and owners of critical infrastructure were willing to make only cosmetic changes to their systems and networks, resisting more extensive ones.

Weiss, who worked as a liaison with INL to help develop its test-bed program, got fed up with the inertia and launched a conference to educate critical-infrastructure operators about the dangerous security problems with their systems. In 2004, he resorted to scare tactics by demonstrating a remote attack to show them what could be done. The role of hacker was played by Jason Larsen, a researcher at INL, who demonstrated an attack against a substation in Idaho Falls from a computer at Sandia National Laboratory in New Mexico. Exploiting a recently discovered vulnerability in server software, Larsen bypassed several layers of firewalls to hack a PLC controlling the substation and release his payload in several stages. The first stage opened and closed a breaker. The second stage opened all of the breakers at once. The third stage opened all of the breakers but manipulated data sent to operator screens to make it appear that the breakers were closed.
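The third stage is what made the demo frightening: operators could no longer trust their own screens. A toy simulation (illustrative only, not Larsen’s actual exploit code) of a compromised reporting path:

    class Substation:
        # Toy model: True means a breaker is closed and power is flowing.
        def __init__(self, breakers: int = 4):
            self.closed = [True] * breakers
            self.spoof_display = False      # stage-three payload flips this on

        def open_all(self):                 # stage two: trip every breaker
            self.closed = [False] * len(self.closed)

        def operator_view(self):
            # What the operator screen shows. With spoofing on, it keeps
            # replaying the healthy "all closed" picture.
            return [True] * len(self.closed) if self.spoof_display else list(self.closed)

    sub = Substation()
    sub.spoof_display = True    # stage three: compromise the reporting path
    sub.open_all()              # stage two: physically open every breaker
    print(sub.closed)           # [False, False, False, False] -- actual state
    print(sub.operator_view())  # [True, True, True, True] -- what operators see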

“I call it my ‘wet pants’ demo,” Weiss says. “It was a phenomenal success.”

Weiss followed the demo a few years later with another one and then another, each time enlisting different security experts to demonstrate different modes of attack. The only problem was, they were ahead of their time. Each time engineers would leave his conference fired up with ideas about improving the security of their networks, they would run up against executives back home who balked at the cost of re-architecting and securing the systems. Why spend money on security, the executives argued, when none of their competitors were doing it and no one was attacking them?

But what Weiss and the test lab couldn’t achieve in a decade, Stuxnet achieved in a matter of months. The digital weapon shone a public spotlight on serious vulnerabilities in the nation’s industrial control systems for the first time, and critical equipment that for so long had remained obscure and unknown to most of the world now caught the attention of researchers and hackers, forcing vendors and critical-infrastructure owners to finally take note as well.

THE NEWS IN August 2010 that Stuxnet was sabotaging Siemens PLCs caught the interest of a twenty-five-year-old computer security researcher in Austin, Texas, named Dillon Beresford. Beresford, like most people, had never heard of PLCs and was curious to see how vulnerable they might be. So he bought several Siemens PLCs online and spent two months examining and testing them in the bedroom of his small apartment. It took just a few weeks to uncover multiple vulnerabilities that he could use in an attack.

He discovered, for example, that none of the communication that passed between a programmer’s machine and the PLCs was encrypted, so any hacker who broke into the network could see and copy commands as they were transmitted to the PLCs, then later play them back to a PLC to control or stop it at will. This would not have been possible had the PLCs rejected commands from unauthorized computers, but Beresford found that the PLCs were promiscuous devices that would talk to any machine that spoke their protocol language. Nor did they require that commands sent to them be digitally signed with a certificate to prove that they came from a trustworthy source.
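A minimal sketch of the kind of check the PLCs lacked, assuming a shared key between the engineering station and the PLC (the names and framing here are illustrative, not Siemens protocol details): if each command carried a keyed signature over its contents plus an increasing counter, a sniffed command could not simply be replayed.

    import hmac, hashlib

    SHARED_KEY = b"engineering-station-key"    # hypothetical provisioned secret

    def sign_command(counter: int, payload: bytes) -> bytes:
        # Frame = 8-byte counter + payload + 32-byte HMAC over both.
        msg = counter.to_bytes(8, "big") + payload
        return msg + hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()

    last_counter = -1

    def plc_accept(frame: bytes) -> bool:
        global last_counter
        msg, tag = frame[:-32], frame[-32:]
        if not hmac.compare_digest(tag, hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()):
            return False                       # forged or tampered frame
        counter = int.from_bytes(msg[:8], "big")
        if counter <= last_counter:
            return False                       # replayed frame: counter must grow
        last_counter = counter
        return True

    frame = sign_command(1, b"STOP_CPU")
    print(plc_accept(frame))   # True: fresh, authentic command
    print(plc_accept(frame))   # False: replaying the same capture is rejected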

Although there was an authentication packet, or password of sorts, that passed between a Step 7 machine and the PLC, Beresford was able to decode the password in less than three hours. He also found that he could simply capture the authentication packet as it passed from a Step 7 machine to the PLC and replay it in the same way he replayed commands, eliminating the need to decode the password at all. Once he had control of a PLC, he could also issue a command to change the password to lock out legitimate users.[28]
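The static authentication packet failed for a related reason: it never changed, so capturing it once was as good as knowing the password. A hedged sketch of the standard countermeasure, challenge-response with a fresh nonce per session (illustrative names, not the actual Step 7 protocol):

    import hmac, hashlib, os

    SECRET = b"shared-plc-secret"   # hypothetical shared credential

    def plc_challenge() -> bytes:
        return os.urandom(16)       # fresh random nonce per login attempt

    def client_response(nonce: bytes) -> bytes:
        return hmac.new(SECRET, nonce, hashlib.sha256).digest()

    def plc_verify(nonce: bytes, response: bytes) -> bool:
        return hmac.compare_digest(response, client_response(nonce))

    n1 = plc_challenge()
    r1 = client_response(n1)
    print(plc_verify(n1, r1))   # True: legitimate login
    n2 = plc_challenge()        # next session issues a new nonce...
    print(plc_verify(n2, r1))   # False: ...so a sniffed response can't be replayed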

Beresford found other vulnerabilities as well, including a back door that Siemens programmers had left in the firmware of their PLCs—firmware is the basic software resident on hardware devices that makes them work. Vendors often place global, hard-coded passwords in their systems so they can access them remotely to provide troubleshooting for customers—like an OnStar feature for control systems. But back doors that allow vendors to slip in also let attackers in.[29] The username and password for opening the Siemens back door were the same for every system—“basisk”—and were hard-coded into the firmware for anyone who examined it to see. Using this back door, an attacker could delete files from the PLC, reprogram it, or issue commands to sabotage whatever operations the PLC controlled.[30]
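Hard-coded credentials like this are easy to recover because firmware images can be dumped and searched for printable text. A minimal sketch of that technique in Python, essentially the Unix strings utility (the filename is a placeholder):

    import re, sys

    def strings(path: str, min_len: int = 6):
        # Yield printable-ASCII runs from a binary, the classic way
        # hard-coded credentials are spotted in firmware dumps.
        data = open(path, "rb").read()
        for match in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, data):
            yield match.group().decode("ascii")

    # Usage: python strings_dump.py firmware.bin ("firmware.bin" is a placeholder)
    for s in strings(sys.argv[1] if len(sys.argv) > 1 else "firmware.bin"):
        print(s)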
