In a lot of ways, this is similar to how evolution solves security problems. Antelopes don't need perfect protection against lions, and such protection would be too expensive in evolutionary terms. Instead, they accept the cost of losing the occasional antelope from the herd and increase their reproductive efficiency to compensate.

Similarly, we can never ensure perfect security against terrorism. All this talk of terrorism as an existential threat to society is nonsense. As long as terrorism is rare enough, and most people survive, society will survive. Unfortunately, it's not politically viable to come out and say that. We're collectively in a pathological state where people expect perfect protection against a host of problems—not just terrorism—and are unwilling to accept that that is not a reasonable goal.

Laws don't always have their intended effect. They can be a blunt tool, especially when it comes to violent crime and disaffected populations. There isn't a clean cause-and-effect relationship between incentives and behavior; more often than not, incentives are emotional, and are far more compelling than a rational consideration of even the most severe sanction.[11] There's a lot of research in this area, with many counterintuitive—and sometimes contradictory—results. We know that, in general, spending more money on police reduces crime somewhat. On the other hand, there are studies that demonstrate that the death penalty reduces murders as well as studies that demonstrate it doesn't. While it's easy for politicians to be “tough on crime,” it's not always obvious that that's the best solution. An increase in the severity of punishment often doesn't translate into a drop in crime; an increase in the probability of punishment often does.[12] Often the societal causes of crime are what's important, and changes in the law do very little to help.

Laws have a clearer effect on more calculating crimes. Increasing penalties against tax fraud reduces tax fraud, at least for a little while. Increasing penalties on corporate crimes reduces those crimes. In those instances, potential defectors have plenty of time to make a rational risk trade-off.[13]

It's not always possible to enforce a law. International law, for example, only matters to the extent that the countries are willing to observe it or are able to enforce it on each other. Viktor Bout was an international arms dealer for about twenty years before his arrest in 2008. He was able to ship weapons to every conflict region imaginable, even those under UN embargo. He benefited from the lack of international law addressing transnational criminal activity, deliberately slack customs enforcement in countries seeking to attract business, and nations that found it convenient to let him do their dirty work.

Laws are open to interpretation, and that interpretation process can be expensive. Earlier I talked about solving the societal dilemma of pollution with a legal security measure: allowing people downstream from the polluter to sue. This is good in theory, but can be problematic in practice. The polluter can hire a team of lawyers skilled in the art of legal delay. If the cost of the lawyers is less than the cost of cleaning up the pollution, or if the polluter can outspend his legal opponents, he can neutralize their ability to raise the cost of defecting. This kind of expensive legal defense can also work against government regulations, tying the case up in the courts until the government gives up. In the state anti-trust suits against Microsoft, almost all states settled before trial.

Laws can have loopholes. This can happen by accident, when laws are linguistically ambiguous, contain simple errors, or fail to anticipate some new technological development. It can also happen deliberately, when laws are miswritten to enable the skillful few to evade them.

Examples of accidental loopholes are the “Double Irish” and “Dutch Sandwich” loopholes that allow multinational corporations to avoid U.S.—and other—taxes.[14] It's how Google pays only 2.8% of profits in tax. One estimate claims the U.S. loses $60 billion per year in taxes this way. Another loophole allows large paper mills to claim $6 billion in tax credits per year for mixing diesel fuel in with a wood byproduct they already burn; the law with the loophole was intended to reduce the consumption of fossil fuels.[15] A variety of loopholes make video games one of the most highly subsidized industries in the U.S. And, so as not to entirely pick on the U.S., the International Whaling Commission's loophole for research that Japan exploits to hunt whales commercially is another example.

Although it's hard to prove, there are many examples of laws believed to be deliberately written with loopholes to benefit someone. The UN Convention on the Law of the Sea provisions on international fisheries are deliberately ambiguous, making much of it impossible to enforce. Also at the UN, Security Council Resolution 1441—used to justify invading Iraq—seems to have been designed to be ambiguous enough to both support and oppose the use of force.

More generally, loopholes are ways institutional pressure is subverted by defectors to do things it wasn't originally intended to do. Think of patent law, originally intended to protect inventors but now used by corporations to attack other corporations, or by patent trolls to extort money out of corporations. Or the legal profession, originally intended to serve justice but now used as an offensive weapon. Or stocks, originally intended to provide capital for companies but now used for all sorts of unintended purposes: weird derivatives, indexes, short-term trading, and so on. These are all defections. Either the law should be effective, or it shouldn't exist. A law with a loophole is the worst of both.

Laws can be applied inconsistently. If laws aren't objective, common, and universally applied, they are seen as unfair; and unfairness can exacerbate the Bad Apple Effect.

Judge Gordon Hewart put it best:

There is no doubt, as has been said in a long line of cases, that it is not merely of some importance, but of fundamental importance, that justice should both be done and be manifestly seen to be done.

Laws try to outlaw legitimate and moral behavior. Sometimes it's perfectly legitimate for someone to follow her individual self-interest, regardless of the group interest. There's no inherent dividing line, and different people—and societies—will draw it differently.

Invasive species are at best a serious problem, and at worst an ecological disaster. They also pose a societal dilemma in which even a single defector can cause the group severe harm. All it took was one farmer releasing silver carp into the natural waterways of North America for it to invade everywhere, one flight accidentally carrying a pregnant brown tree snake to decimate the ecosystem of Guam, and one boat with zebra mussel larvae in its ballast water or milfoil clinging to its hull to overwhelm a previously pristine lake. As such, there need to be some pretty strong societal pressures in place to deal with this problem.

Some invasive species are easy to define as pests, but others are not. Monk parakeets are an invasive species in the U.S., thought to have been first released by pet owners either accidentally or as an easy way to get rid of them. The main harm they cause is crop damage, although they also cause fires and blackouts by building massive, elevated nests in electrical equipment, and they outcompete indigenous birds. On the other hand, they make cute pets and a lot of people like them. This results in a legal mess: the Wild Bird Conservation Act of 1992 prohibits importing them into the U.S., but state laws vary wildly, with some states banning them, while others have no laws whatsoever.

One of the most useful things a court system does is strike a balance between polarities of interest. How should society balance my individual right to play loud music with my neighbors' right to peace and quiet? Or my right to run a tannery versus my neighbors' right to an unsmelly environment? How should society balance my individual desire to keep a parakeet as a pet with the community's need to minimize the dangers posed by feral birds? Laws that try to outlaw legitimate and moral behavior are less likely to succeed.

Laws don't affect every type of defector equally. In addition to those who can afford to fight and those who can't, there are three broad types of defectors when it comes to laws. The first are the individuals who know the law, believe the law is good (or at least that they don't want these things happening to them), and choose to break it anyway: burglars, muggers, kidnappers, murderers, speeders, and people in desperate straits. The second are individuals who know the law, believe the law is wrong, and choose to break it: pot smokers, some parakeet and ferret owners, and members of the Underground Railroad who helped escaped slaves from the American South flee to safety in Canada. There is also a third category: those who don't know they're breaking the law, or don't realize how their actions affect the group. People might speed because they legitimately didn't see the speed limit sign, or they might not realize that certain sexual practices are against the law. These three groups will react differently to different laws, sanctions, and incentives.

Sometimes and for some people, laws aren't enough. Sometimes the incentives to defect are worth the risk. That's where security technologies come in.

Chapter 10

Security Systems

Security systems are all around us, filling in the gaps where moral, reputational, and institutional pressures aren't effective enough. They include the door locks and burglar alarms in our homes, the anti-counterfeiting technologies in major world currencies, and the system of registering handguns and taking ballistic prints. They can be high-tech, like automatic face recognition systems, or low-tech, like defensive berms and castle walls. They don't even have to be physical systems; they can be procedural systems like neighborhood watches, customs interviews, and police pat-downs.

Theft of hotel towels isn't high in the hierarchy of world problems, but it can be expensive for hotels. Moral prohibitions against stealing prevent most people from stealing towels. Many hotels put their name or logo on their towels. That works as a reputational pressure system; most people don't want their friends to see obviously stolen hotel towels in their bathrooms. Sometimes, though, this has the opposite effect: making towels souvenirs of the hotel and more desirable to steal. It's against the law to steal hotel towels, of course, but with the exception of large-scale thefts, the crime will never be prosecuted.[1] The result is that the scope of defection is higher than hotels want. And large, fluffy towels from better hotels are expensive to replace.

The only thing left for hotels to do is take security into their own hands. One system that has become increasingly common is to set prices for towels and other items, and automatically charge the guest for them if they disappear from the rooms. This works with things like bathrobes, but it's too easy for the hotel to lose track of how many towels a guest has in his room, especially if piles of them are available at the pool or can easily be taken from a housekeeper's cart in the hallway.

A newer system, still not widespread, is to embed washable computer chips into the towels and track their movement around the hotel electronically. One anonymous Hawaii hotel claims they've reduced towel theft from 4,000 a month to 750, saving $16,000 monthly in replacement costs. Assuming the RFID tags are inexpensive and don't wear out too quickly, that's a pretty good security system.
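
The hotel's claimed numbers are internally consistent, implying a replacement cost of about five dollars per towel. Here's a quick back-of-the-envelope check; the theft and savings figures come from the claim above, while the tag price and inventory size are my own illustrative assumptions:

```python
# Back-of-the-envelope check of the hotel's towel-tracking claim.
# Theft and savings figures are from the claim above; the tag
# price and inventory size are illustrative assumptions.

thefts_before = 4000      # towels lost per month, before tags
thefts_after = 750        # towels lost per month, after tags
monthly_savings = 16_000  # dollars saved per month (claimed)

towels_saved = thefts_before - thefts_after
print(f"Implied cost per towel: ${monthly_savings / towels_saved:.2f}")  # ~$4.92

# Hypothetical: tagging a 10,000-towel inventory at $1 per tag
# would pay for itself in well under a month.
tag_price, inventory = 1.00, 10_000
print(f"Payback: {tag_price * inventory / monthly_savings:.2f} months")  # ~0.63
```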

Let's go back to our two prisoners. They are morally inclined not to betray each other. Their reputation in the underworld depends on them not betraying their fellow criminal. And the criminal organization they're part of has unwritten but very real sanctions against betraying other criminals to the police. That's probably enough for most criminals, but not all. And—depending on the country—the police can be very persuasive.

What some organizations do—terrorists and spies come to mind—is add a security system. They organize themselves in cells, so that each member of the criminal organization knows only a few other members: the members of his cell and maybe one or two others. There are a lot of ways to do this, and the organizational structure of the World War II French Resistance wasn't the same as Al Qaeda's. If a member is arrested or otherwise captured and interrogated, there's only so much damage he can do if he defects. This doesn't help the two captured prisoners, of course, but it does protect the rest of the criminal organization.
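
As a rough illustration, here's a toy model of this structure; the cell layout, sizes, and liaison links below are invented for the example, not drawn from any real organization. The point is that a single defector can expose only the members he actually knows:

```python
# Toy model of a cell-structured organization: each member knows
# only his own cell, plus at most one liaison in another cell.
# The layout below is invented purely for illustration.

cells = {
    "A": ["a1", "a2", "a3"],
    "B": ["b1", "b2", "b3"],
    "C": ["c1", "c2", "c3"],
}
liaisons = [("a1", "b1"), ("b2", "c1")]  # cross-cell contacts

def exposed_by(member: str) -> set[str]:
    """Everyone a single member could identify if he defected."""
    exposed = set()
    for members in cells.values():
        if member in members:
            exposed.update(members)
    for x, y in liaisons:
        if member == x:
            exposed.add(y)
        elif member == y:
            exposed.add(x)
    exposed.discard(member)
    return exposed

for m in ("a1", "a2", "c3"):
    print(m, "can expose:", sorted(exposed_by(m)))
# a1 can expose his cell plus one liaison; a2 and c3 expose only
# their own cells -- a bounded loss, not the whole organization.
```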

Societal Dilemma: Criminals testifying against each other.
Society: The criminal organization.
Group interest: Minimize the amount of jail time for the society.
Competing interest: Minimize personal jail time.
Group norm: Don't testify against each other.
Corresponding defection: Testify against each other in exchange for reduced jail time.
To encourage people to act in the group interest, the society implements a variety of trust mechanisms.

Moral: People feel bad when they let members of their group down.

Reputational: Those who testify against their fellow criminals are shunned.

Institutional: The criminal organization punishes stool pigeons.

Security: The criminal organization limits the amount of damage a defecting criminal can inflict.

Of course, there are some good reasons not to run an organization like this. Imagine how much less effective a corporate worker would be if he only knew the five people in his department, and only communicated with his supervisor using dead drops and the occasional voice-disguised conversation from constantly changing pay phone locations. But sometimes security wins out over clean lines of communication and managerial open-door policies.

In Chapter 6's Figure 8, I broke out several different types of security systems:

  • Defenses. This is what you normally think of as security: weapons, armor, door locks, bulletproof vests, guard dogs, anti-virus software, speed bumps, bicycle locks, prison walls, panic rooms, chastity belts, and traffic cones. The common aspect of all these things is they try to physically stop potential defectors from doing whatever they're trying to do.
  • Interventions. These are other security measures that happen during the defection that either make defection harder or cooperation easier. To make defection harder, think of obfuscation and misdirection measures, security cameras in casinos, guard patrols, and authentication systems. To make cooperation easier, think of automatic face-recognition systems, uniforms, those automatic road-sign radar guns that tell you what speed you're going, and road signs that inform you of the rules.
  • Detection/response systems. These include burglar alarms, sensors in smokestacks to detect pollutants, RFID tags attached to store merchandise—or hotel towels—and detectors at the doorways, intrusion-detection systems in computer networks, and a UV light to detect if your hotel's bed sheets are clean.
  • Audit/forensic systems. These are primarily enhancements to institutional societal pressure. They include fingerprint- and DNA-matching technology and the expert systems that analyze credit card spending, looking for patterns of fraud; a minimal sketch of this kind of analysis follows this list.
  • Recovery systems. These are security measures that make it easier for the victim to recover from an attack. Examples are a credit monitoring service or an insurance plan. What's interesting about these measures is that they don't directly influence the risk trade-off. If anything, they make someone more likely to defect, because he can more easily rationalize that the victim won't be hurt by his actions.
  • Preemptive interventions. These operate before the attack, and directly affect the risk trade-off. Think of things like forced castration (chemical or otherwise), mandatory drug therapy to alter the personality of a career criminal, or a frontal lobotomy. Yes, these are often punishments after an attack, but they can prevent a future attack, too. Incarceration is also a preemptive intervention as well as a punishment; there are entire categories of crimes that someone in jail simply can't commit. So is execution, for the same reason. Also in this category are predictive policing programs that increase police presence at times and places where crimes are likely to occur.
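
To make the audit/forensic category concrete, here is a minimal sketch of that kind of spending-pattern analysis. Real fraud engines use far richer models; the z-score test, the threshold, and the sample charges below are all illustrative assumptions:

```python
# Minimal sketch of spending-pattern analysis: flag charges that
# deviate sharply from a cardholder's history. The z-score test,
# threshold, and data are illustrative, not a real fraud engine.
from statistics import mean, stdev

history = [42.0, 18.5, 63.0, 25.0, 31.5, 48.0, 22.0, 55.0]  # past charges ($)
new_charges = [39.0, 610.0, 27.5]

mu, sigma = mean(history), stdev(history)
THRESHOLD = 3.0  # flag anything more than 3 standard deviations out

for amount in new_charges:
    z = (amount - mu) / sigma
    status = "FLAG for review" if abs(z) > THRESHOLD else "ok"
    print(f"${amount:7.2f}  z = {z:5.1f}  {status}")
```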

I'd be the first to admit this classification isn't perfect, and there are probably examples that don't fit neatly into my different boxes. That's okay; I'm less interested in precisely categorizing all possible security countermeasures, and more interested in looking at the breadth of security systems we use every day for societal pressures—many without even realizing it.

Security systems comprise a unique category of societal pressure. They're the last layer of defense—and the most scalable—against defection. You can view them as a way to technologically enhance natural defenses. Even if humans were complete loners and had never formed society, never worried about societal dilemmas, and never invented societal pressures, security systems could still protect individuals.

As a technological analog to natural defenses, they're the only societal pressure that actually puts physical constraints on behavior. Everything else we've discussed so far affects the risk trade-off, either directly, such as moral pressure, or through feedback, such as reputational pressure. Security can work this way as well, but it can also stop someone who decides to defect. A burglar might not have any moral qualms about breaking into a jewelry store, and he might not be worried about his reputation or getting caught—but he won't be able to steal anything unless he can pick the door lock and open the safe. Security might constrain him technically (the ability to pick the lock), financially (the cost to buy an oxyacetylene torch capable of cutting open the safe), or temporally (the time required to cut open the safe). Sometimes the constraints are relative, and sometimes they're absolute. This is what makes security systems so powerful and scalable. Security systems can work even if a defector doesn't realize that he's defecting. For example, a locked gate will stop someone who doesn't realize he's entering private property.

Also as an analog to natural defenses, security systems aren't always used as societal pressures. That distinction depends on who implements the security system and why. Think back to the sealed-bag exchange: the merchant could implement a variety of security systems to prevent his customers from shoplifting, cheating, or otherwise defrauding him. He could install security cameras and put anti-theft tags on his merchandise. He could buy a device that detects counterfeit money. He could use a service that verifies checks. All of this is the merchant's decision and the merchant's doing, and none of it is related to intra-group trust.

If a storeowner installs a camera behind his cash register, it's not societal pressure; if a city installs cameras on every street corner, it is. And if the police use all the individually installed cameras in the area to track a suspect—as was done with Timothy McVeigh's van—then it's societal pressure. If society decides to subsidize in-store cameras, that's also societal pressure.

If I carry a gun for self-defense, it's not societal pressure; if we as a society collectively arm our policemen, it is. You could argue there is no societal dilemma involved in the hotel's towel-security decision. This is certainly true, and illustrates that the boundary between individual security and security as societal pressure can be fuzzy. The same security measure—a camera, for example—might be individual in one instance and societal in another. There are also going to be security measures that are some of both. I'm less concerned with the hard-to-classify edge cases than I am with the general categories.

Even if a security system is implemented entirely by individuals, that doesn't mean it can't also serve as societal pressure. A security camera is more likely to displace crime than reduce it; a potential thief can just go to another store instead. But if enough stores install hidden cameras, potential burglars might decide that the overall risk is too great. Lojack, widely deployed, will reduce car theft (and will increase car theft in neighboring regions that don't have the same system). Various computer security systems can have a similar result. If a security system becomes prevalent enough, potential defectors might go elsewhere because the value of defection is reduced.
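
One way to see why prevalence matters is to work the thief's risk trade-off directly. All the numbers below are invented for illustration; only the shape of the result matters:

```python
# Toy risk trade-off for a car thief facing hidden trackers. The
# thief can't tell which cars are protected, so the expected value
# of any theft falls as deployment rises. All numbers are invented.

gain = 2_000               # resale value of a stolen car ($)
penalty = 20_000           # cost to the thief of arrest ($-equivalent)
p_caught_if_tracked = 0.9  # chance a tracked car leads to arrest

def expected_value(deployment_rate: float) -> float:
    p_caught = deployment_rate * p_caught_if_tracked
    return (1 - p_caught) * gain - p_caught * penalty

for rate in (0.0, 0.05, 0.10, 0.20):
    print(f"deployment {rate:4.0%}: EV = ${expected_value(rate):8.0f}")
# EV turns negative just past 10% deployment -- at that point,
# stealing any car in the region stops being worth the risk.
```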

Of course, society often limits what sort of security systems someone can implement. It may be illegal for a store to install security cameras in dressing rooms, even if it would reduce shoplifting. And I'm not allowed to bury land mines in my front yard, even if I think it would deter burglars.

Our security systems are also limited by our own abilities. Carrying a gun for self-defense makes less sense if you don't know how to use one. And I don't have the time to test every piece of food I eat for poison, even if I wanted to. A more realistic example: a store might have a policy to test if large bills are counterfeit, but not bother with smaller bills. (Of course, defectors take advantage of this: it's why $20 bills are counterfeited more often than $100 bills.)

Security systems are both a category of societal pressure in their own right and an augmentation of the other three categories, allowing them to scale better. Quite a lot of the societal pressures we've talked about in the previous three chapters have a security component. Examples include:

  • Security-augmented moral pressure. Something as simple as a sign stating “Employees must wash hands after using the restroom” can be viewed as a security system. Measures that make compliance easier are another way to enhance morals, such as the electronic completion and filing of tax returns, photography to put a human face on victims and potential victims, and recycling bins in prominent locations. Other, more modern, technologies directly affect moral societal pressures: psychiatric therapies, personality-altering drugs, and brain-altering surgeries.
  • Security-augmented reputational pressure. The eBay feedback mechanism is a reputational system that requires security to ensure the system can't be hacked and manipulated by unscrupulous merchants; a sketch of one such safeguard follows this list. Other examples are letters of introduction, tribal clothing, employee background checks, sex offender databases, diplomas posted on walls, and U.S. State Department travel advisories. Informal online reviews of doctors allow us to trust people we don't know anything about with our health. Online reputational systems allow us to trust unknown products on Amazon, unknown commenters on Slashdot, and unknown “friends” on Facebook. Credit-rating systems codify reputation. In online games, security systems are less of an enhancement to, and more of a replacement of, moral and reputational pressures for ensuring game fairness.
  • Security-augmented institutional pressure. A community might install cameras to help enforce speed limits. Or a government might use correlation software to analyze millions of tax returns, looking for evidence of cheating. Other examples include alarm systems that summon the police, surveillance systems that allow the police to track suspects, and forensic technologies that help prove guilt. Also time-lock safes, anti-shoplifting tags, cash register tapes, hard-to-forge currency, time cards and time clocks, credit card PIN pads, formal licensing of doctors, and the entire legal profession.
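
As a sketch of why reputational systems need this kind of security (a generic illustration, not eBay's actual mechanism): a feedback score is only trustworthy if each rating can be tied to a real, completed transaction, which blocks strangers and shills from stuffing the ballot:

```python
# Generic sketch of a transaction-gated feedback system. This is
# an illustration of the security a reputational system needs,
# not a description of eBay's actual mechanism.

completed_sales: set[tuple[str, str]] = set()  # (buyer, seller) pairs
scores: dict[str, list[int]] = {}

def record_sale(buyer: str, seller: str) -> None:
    completed_sales.add((buyer, seller))

def leave_feedback(buyer: str, seller: str, rating: int) -> bool:
    # Only a party to a completed sale may rate the seller, and
    # each sale supports at most one rating.
    if (buyer, seller) not in completed_sales:
        return False
    scores.setdefault(seller, []).append(rating)
    completed_sales.remove((buyer, seller))
    return True

record_sale("alice", "bob")
print(leave_feedback("alice", "bob", 5))    # True: legitimate rating
print(leave_feedback("mallory", "bob", 5))  # False: no transaction
print(scores)                               # {'bob': [5]}
```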
