
I struggled with how to handle this metadata situation. I worried that if I didn’t strip the identifying information from the documents, they might incriminate me the moment the journalists decrypted and opened them. But I also worried that by thoroughly stripping the metadata, I risked altering the files—if they were changed in any way, that could cast doubt on their authenticity. Which was more important: personal safety, or the public good? It might sound like an easy choice, but it took me quite a while to bite the bullet. I owned the risk, and left the metadata intact.

Part of what convinced me was my fear that even if I had stripped away the metadata I knew about, there could be other digital watermarks I wasn’t aware of and couldn’t scan for. Another part had to do with the difficulty of scrubbing single-user documents. A single-user document is a document marked with a user-specific code, so that if any publication’s editorial staff decided to run it by the government, the government would know its source. Sometimes the unique identifier was hidden in the date and time-stamp coding, sometimes it involved the pattern of microdots in a graphic or logo. But it might also be embedded in something, in some way, I hadn’t even thought of. This phenomenon should have discouraged me, but instead it emboldened me. The technological difficulty forced me, for the first time, to confront the prospect of discarding my lifetime practice of anonymity and coming forward to identify myself as the source. I would embrace my principles by signing my name to them and let myself be condemned.

Altogether, the documents I selected fit on a single drive, which I left out in the open on my desk at home. I knew that the materials were just as secure now as they had ever been at the office. Actually, they were more secure, thanks to multiple levels and methods of encryption. That’s the incomparable beauty of the cryptological art. A little bit of math can accomplish what all the guns and barbed wire can’t: a little bit of math can keep a secret.

24
Encrypt

Most people who use computers, and that includes members of the Fourth Estate, think there’s a fourth basic permission besides Read, Write, and Execute, called “Delete.”

Delete is everywhere on the user side of computing. It’s in the hardware as a key on the keyboard, and it’s in the software as an option that can be chosen from a drop-down menu. There’s a certain finality that comes with choosing Delete, and a certain sense of responsibility. Sometimes a box even pops up to double-check: “Are you sure?” If the computer requires that confirmation, second-guessing you before it honors your click of “Yes,” it makes sense that Delete would be a consequential, perhaps even the ultimate, decision.

Undoubtedly, that’s true in the world outside of computing, where the powers of deletion have historically been vast. Even so, as countless despots have been reminded, to truly get rid of a document you can’t just destroy every copy of it. You also have to destroy every memory of it, which is to say you have to destroy all the people who remember it, along with every copy of all the other documents that mention it and all the people who remember all those other documents. And then, maybe, just maybe, it’s gone.

Delete functions appeared from the very start of digital computing. Engineers understood that in a world of effectively unlimited options, some choices would inevitably turn out to be mistakes. Users, regardless of whether or not they were really in control at the technical level, had to feel in control, especially with regard to anything that they themselves had created. If they made a file, they should be able to unmake it at will. The ability to destroy what they created and start over afresh was a primary function that imparted a sense of agency to the user, despite the fact that they might be dependent on proprietary hardware they couldn’t repair and software they couldn’t modify, and bound by the rules of third-party platforms.

Think about the reasons that you yourself press Delete. On your personal computer, you might want to get rid of some document you screwed up, or some file you downloaded but no longer need—or some file you don’t want anyone to know you ever needed. On your email, you might delete an email from a former lover that you don’t want to remember or don’t want your spouse to find, or an RSVP for that protest you went to. On your phone, you might delete the history of everywhere that phone has traveled, or some of the pictures, videos, and private records it automatically uploaded to the cloud. In every instance, you delete, and the thing—the file—appears to be gone.

The truth, though, is that deletion has never existed technologically in the way that we conceive of it. Deletion is just a ruse, a figment, a public fiction, a not-quite-noble lie that computing tells you to reassure you and give you comfort. Although the deleted file disappears from view, it is rarely gone. In technical terms, deletion is really just a form of the middle permission, a kind of Write. Normally, when you press Delete for one of your files, its data—which has been stashed deep down on a disk somewhere—is not actually touched. Efficient modern operating systems are not designed to go all the way into the bowels of a disk purely for the purposes of erasure. Instead, only the computer’s map of where each file is stored—a map called the “file table”—is rewritten to say “I’m no longer using this space for anything important.” What this means is that, like a neglected book in a vast library, the supposedly erased file can still be read by anyone who looks hard enough for it. If you only erase the reference to it, the book itself still remains.

This can be confirmed through experience, actually. Next time you copy a file, ask yourself why it takes so long when compared with the instantaneous act of deletion. The answer is that deletion doesn’t really do anything to a file besides conceal it. Put simply, computers were not designed to correct mistakes, but to hide them—and to hide them only from those parties who don’t know where to look.
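
To make the idea concrete, here is a toy sketch in Python (a deliberate simplification, not how any real filesystem is implemented) in which deleting a file only rewrites the table mapping names to locations on disk. The data itself is never touched, which is also why deletion is instantaneous while copying is not:

```python
# A toy "disk" and "file table". Deleting a file only rewrites the map;
# the underlying bytes stay on the disk, readable to anyone who looks.
disk = bytearray(64)               # raw storage
file_table = {}                    # map: filename -> (offset, length)

def write_file(name: str, data: bytes) -> None:
    # Place the data after whatever is already allocated.
    offset = max((o + l for o, l in file_table.values()), default=0)
    disk[offset:offset + len(data)] = data
    file_table[name] = (offset, len(data))

def delete_file(name: str) -> None:
    del file_table[name]           # forget where the file lives; write nothing

write_file("secret.txt", b"the documents")
delete_file("secret.txt")

print("secret.txt" in file_table)  # False: the file appears to be gone
print(bytes(disk[:13]))            # b'the documents': still on the disk
```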

The waning days of 2012 brought grim news: the few remaining legal protections that prohibited mass surveillance by some of the most prominent members of the Five Eyes network were being dismantled. The governments of both Australia and the UK were proposing legislation for the mandatory recording of telephony and Internet metadata. This was the first time that notionally democratic governments publicly avowed the ambition to establish a sort of surveillance time machine, which would enable them to technologically rewind the events of any person’s life for a period going back months and even years. These attempts definitively marked, to my mind at least, the so-called Western world’s transformation from the creator and defender of the free Internet to its opponent and prospective destroyer. Though these laws were justified as public safety measures, they represented such a breathtaking intrusion into the daily lives of the innocent that they terrified—quite rightly—even the citizens of other countries who didn’t think themselves affected (perhaps because their own governments chose to surveil them in secret).

These public initiatives of mass surveillance proved, once and for all, that there could be no natural alliance between technology and government. The rift between my two strangely interrelated communities, the American IC and the global online tribe of technologists, became pretty much definitive. In my earliest years in the IC, I could still reconcile the two cultures, transitioning smoothly between my spy work and my relationships with civilian Internet privacy folks—everyone from the anarchist hackers to the more sober academic Tor types who kept me current about computing research and inspired me politically. For years, I was able to fool myself that we were all, ultimately, on the same side of history: we were all trying to protect the Internet, to keep it free for speech and free of fear. But my ability to sustain that delusion was gone. Now the government, my employer, was definitively the adversary. What my technologist peers had always suspected, I’d only recently confirmed, and I couldn’t tell them. Or I couldn’t tell them yet.

What I could do, however, was help them out, so long as that didn’t imperil my plans. This was how I found myself in Honolulu, a beautiful city in which I’d never had much interest, as one of the hosts and teachers of a CryptoParty. This was a new type of gathering invented by an international grassroots cryptological movement, at which technologists volunteered their time to teach free classes to the public on the topic of digital self-defense—essentially, showing anyone who was interested how to protect the security of their communications. In many ways, this was the same topic I taught for JCITA, so I jumped at the chance to participate.

Though this might strike you as a dangerous thing for me to have done, given the other activities I was involved with at the time, it should instead just reaffirm how much faith I had in the encryption methods I taught—the very methods that protected that drive full of IC abuses sitting back at my house, with locks that couldn’t be cracked even by the NSA. I knew that no number of documents, and no amount of journalism, would ever be enough to address the threat the world was facing. People needed tools to protect themselves, and they needed to know how to use them. Given that I was also trying to provide these tools to journalists, I was worried that my approach had become too technical. After so many sessions spent lecturing colleagues, this opportunity to simplify my treatment of the subject for a general audience would benefit me as much as anyone. Also, I honestly missed teaching: it had been a year since I’d stood at the front of a class, and the moment I was back in that position I realized I’d been teaching the right things to the wrong people all along.

When I say class, I don’t mean anything like the IC’s schools or briefing rooms. The CryptoParty was held in a one-room art gallery behind a furniture store and coworking space. While I was setting up the projector so I could share slides showing how easy it was to run a Tor server to help, for example, the citizens of Iran—but also the citizens of Australia, the UK, and the States—my students drifted in, a diverse crew of strangers and a few new friends I’d only met online. All in all, I’d say about twenty people showed up that December night to learn from me and my co-lecturer, Runa Sandvik, a bright young Norwegian woman from the Tor Project. (Runa would go on to work as the senior director of information security for the New York Times, which would sponsor her later CryptoParties.) What united our audience wasn’t an interest in Tor, or even a fear of being spied on, as much as a desire to re-establish a sense of control over the private spaces in their lives. There were some grandparent types who’d wandered in off the street, a local journalist covering the Hawaiian “Occupy!” movement, and a woman who’d been victimized by revenge porn. I’d also invited some of my NSA colleagues, hoping to interest them in the movement and wanting to show that I wasn’t concealing my involvement from the agency. Only one of them showed up, though, and sat in the back, legs spread, arms crossed, smirking throughout.

I began my presentation by discussing the illusory nature of deletion, whose objective of total erasure could never be accomplished. The crowd understood this instantly. I went on to explain that, at best, the data they wanted no one to see couldn’t be unwritten so much as overwritten: scribbled over, in a sense, with random or pseudo-random data until the original was rendered unreadable. But, I cautioned, even this approach had its drawbacks. There was always a chance that their operating system had silently hidden away a copy of the file they were hoping to delete in some temporary storage nook they weren’t privy to.
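
A minimal sketch of that overwriting approach, in Python, might look like the following. It assumes a plain file on a conventional magnetic disk, and it carries exactly the caveat above: the operating system may hold copies elsewhere, and flash storage can quietly remap blocks so the original bytes are never actually scribbled over.

```python
import os

def overwrite_and_delete(path: str, passes: int = 1) -> None:
    """Scribble over a file with pseudo-random data, then remove it.

    A sketch of the idea only: temporary copies kept by the OS, and
    wear-leveling on SSDs, can leave the original bytes untouched.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))  # random data over the original
            f.flush()
            os.fsync(f.fileno())       # push the overwrite to the device
    os.remove(path)                    # only now rewrite the file table
```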

That’s when I pivoted to encryption.

Deletion is a dream for the surveillant and a nightmare for the surveilled, but encryption is, or should be, a reality for all. It is the only true protection against surveillance. If the whole of your storage drive is encrypted to begin with, your adversaries can’t rummage through it for deleted files, or for anything else—unless they have the encryption key. If all the emails in your inbox are encrypted, Google can’t read them to profile you—unless they have the encryption key. If all your communications that pass through hostile Australian or British or American or Chinese or Russian networks are encrypted, spies can’t read them—unless they have the encryption key. This is the ordering principle of encryption: all power to the key holder.

Encryption works, I explained, by way of algorithms. An encryption algorithm sounds intimidating, and certainly looks intimidating when written out, but its concept is quite elementary. It’s a mathematical method of reversibly transforming information—such as your emails, phone calls, photos, videos, and files—in such a way that it becomes incomprehensible to anyone who doesn’t have a copy of the encryption key. You can think of a modern encryption algorithm as a magic wand that you can wave over a document to change each letter into a language that only you and those you trust can read, and the encryption key as the unique magic words that complete the incantation and put the wand to work. It doesn’t matter how many people know that you used the wand, so long as you can keep your personal magic words from the people you don’t trust.
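
As a sketch of the wand and the magic words, the snippet below uses the third-party Python cryptography package (one choice among many; no particular tool is named above). The key is generated once, and the same key both casts and reverses the spell:

```python
# Symmetric encryption in a few lines, using the third-party
# `cryptography` package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()            # the magic words: keep them secret
wand = Fernet(key)

ciphertext = wand.encrypt(b"a message only the trusted can read")
print(ciphertext)                      # gibberish without the key

plaintext = wand.decrypt(ciphertext)   # the same key reverses the transform
print(plaintext)                       # b'a message only the trusted can read'
```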

Encryption algorithms are basically just sets of math problems designed to be incredibly difficult even for computers to solve. The encryption key is the one clue that allows a computer to solve the particular set of math problems being used. You push your readable data, called plaintext, into one end of an encryption algorithm, and incomprehensible gibberish, called ciphertext, comes out the other end. When somebody wants to read the ciphertext, they feed it back into the algorithm along with—crucially—the correct key, and out comes the plaintext again. While different algorithms provide different degrees of protection, the security of an encryption key is often based on its length, which indicates the level of difficulty involved in solving a specific algorithm’s underlying math problem. In algorithms that correlate longer keys with better security, the improvement is exponential. If we presume that an attacker takes one day to crack a 64-bit key—which scrambles your data in one of 2⁶⁴ possible ways (18,446,744,073,709,551,616 unique permutations)—then it would take double that amount of time, two days, to break a 65-bit key, and four days to break a 66-bit key. Breaking a 128-bit key would take 2⁶⁴ times longer than a day, or fifty million billion years. By that time, I might even be pardoned.
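
The arithmetic behind those figures is easy to check:

```python
# Each added bit doubles the attacker's work, so security grows
# exponentially with key length.
permutations = 2 ** 64
print(f"{permutations:,}")                    # 18,446,744,073,709,551,616

days_per_64_bit_key = 1                       # the assumed baseline: one day
days_for_128_bit = days_per_64_bit_key * 2 ** (128 - 64)
print(f"{days_for_128_bit / 365:.1e} years")  # ~5.1e16: fifty million billion
```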

In my communications with journalists, I used 4096- and 8192-bit keys. This meant that absent major innovations in computing technology or a fundamental redefining of the principles by which numbers are factored, not even all of the NSA’s cryptanalysts using all of the world’s computing power put together would be able to get into my drive. For this reason, encryption is the single best hope for fighting surveillance of any kind. If all of our data, including our communications, were enciphered in this fashion, from end to end (from the sender end to the recipient end), then no government—no entity conceivable under our current knowledge of physics, for that matter—would be able to understand them. A government could still intercept and collect the signals, but it would be intercepting and collecting pure noise. Encrypting our communications would essentially delete them from the memories of every entity we deal with. It would effectively withdraw permission from those to whom it was never granted to begin with.
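
For illustration only (the passage names key sizes, not the software involved), generating a comparable 4096-bit key pair with the same third-party package used earlier looks like this:

```python
# Generating a 4096-bit RSA key pair with the third-party `cryptography`
# package; the private half is what "all power to the key holder" protects.
from cryptography.hazmat.primitives.asymmetric import rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
public_key = private_key.public_key()  # safe to share; used to encrypt to you
print(public_key.key_size)             # 4096
```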

Any government hoping to access encrypted communications has only two options: it can either go after the keymasters or go after the keys. For the former, they can pressure device manufacturers into intentionally selling products that perform faulty encryption, or mislead international standards organizations into accepting flawed encryption algorithms that contain secret access points known as “back doors.” For the latter, they can launch targeted attacks against the endpoints of the communications, the hardware and software that perform the process of encryption. Often, that means exploiting a vulnerability that they weren’t responsible for creating but merely found, and using it to hack you and steal your keys—a technique pioneered by criminals but today embraced by major state powers, even though it means knowingly preserving devastating holes in the cybersecurity of critical international infrastructure.
