Cybersecurity and Cyberwar
by Peter W. Singer and Allan Friedman

The Facebook team learned this from their own self-testing experience. They saw how an effective response required cooperation between teams that ranged from the development team for the Facebook commercial product to the internal information security team responsible for the company network. The prior testing didn't just help them plug vulnerabilities, it yielded lessons in cooperation that proved critical when the war game came to life in an actual attack. As the director of incident response Ryan McGeehan said, “We're very well prepared now and I attribute that to the drill.”

Build Cybersecurity Incentives: Why Should I Do What You Want?

“If the partnership doesn't make tangible and demonstrable progress, other initiatives are going to take their place.”

This is what DHS cybersecurity chief Amit Yoran told a gathering of security and technology executives. It was meant as an ultimatum to the private sector: get its act together soon on enhancing cybersecurity through voluntary public-private partnerships, or face the alternative of massive involuntary regulation. The problem is that Yoran made this statement in 2003. Over a decade later, the situation remains essentially the same, showing the emptiness of these kinds of threats.

Cyberspace may be a realm of public concern, but many, if not most, of the decisions to secure it are and will continue to be made by private actors. Our security depends on so many variables: individuals deciding whether or not to click a link; the companies these individuals work for deciding whether and how to invest in security; technology vendors and the creators they buy from owning up to vulnerabilities and issuing patches for what they have already put out there (and customers downloading those patches when available); and so on.

Unfortunately, we've seen again and again how the incentives to make good decisions are often misaligned. The average user doesn't bear the full consequence of ignoring basic computer hygiene. Managers often don't see any return on money thrown at security solutions. Software developers are compensated for speed and new features, not for making their code more secure. Why does this market fail?

As we've repeatedly seen, the basic question is one of incentives. In the language of economics, security is an externality. Externalities are costs or benefits from an action that fall to someone other than that actor. Pollution is a classic negative externality, where the firm benefits from production, but those benefits are countered by the public harm of toxic chemicals in the environment.
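
To make the externality concrete, here is a minimal arithmetic sketch with invented numbers (none of them from the text): a firm that weighs only its private costs can rationally skimp on security even when society as a whole loses.

```python
# Minimal sketch of a security externality, with invented numbers.
# The firm weighs only its private costs, not the harm its insecurity
# imposes on everyone else.

patching_cost = 10_000       # hypothetical annual cost of diligent patching
private_breach_loss = 4_000  # expected loss the firm itself bears if compromised
external_harm = 50_000       # expected harm to third parties (botnet traffic, fraud)

# The firm's private calculus: skipping security "saves" money...
private_gain = patching_cost - private_breach_loss
print(f"Private gain from skipping security: {private_gain:+,}")  # +6,000

# ...but once the harm to others is counted, society is worse off.
social_gain = private_gain - external_harm
print(f"Social gain from skipping security:  {social_gain:+,}")   # -44,000
```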

The predicament of insecurity in cyber today is that it has many of the same characteristics of such negative externalities. The owner gets the benefits of using a system but doesn't bear a huge cost for the vulnerabilities it introduces. Sometimes the owner is not even aware that their own system is causing harm to others, such as when it is part of a botnet. The FBI has repeatedly found cases where companies have been infiltrated for months or years and only learned of their insecurity from a federal investigation. If a small utility company can lower the costs of maintenance by controlling remote transformers over the Internet, and no one will blame it if an attack disrupts service, it is likely to do so. It may not be the most responsible thing from a public cybersecurity perspective, but it is the most rational action from its own private profit-seeking perspective.

The problem is that individual bad security decisions make many others worse off. When you fail to update your personal computer's defenses, its compromised security could add it to a botnet that is attacking the wider Internet. When a company fails to come clean about an attack, it allows attackers to go free, the vulnerabilities they targeted to be exploited elsewhere, and the security of partners and clients that connect to that company's system to be endangered. And at a wider level, it harms the economy by limiting investors' information about company risk and liability.

Understanding incentives is not always intuitive. Banks spend a great deal of time and money trying to combat phishing websites that try to steal their customers' banking and financial credentials. This is rational since those banks will have to absorb much of the costs of fraud. However, the same banks ignore other parts of the problem. The thieves using banking credentials will have to extract the money through a network of “money mules.” The attackers transfer the money through a network of accounts owned by the mules, until they can safely extract it out of the financial system without being traced. So these mule networks are a key part of the phishing network. And making them even more ripe for takedown, these mules are often recruited via public websites. But the financial industry is not as concerned with removing the websites that recruit money mules. The reason is that unlike the phishing sites, the mule networks do not have the banks' individual brand names on them.

In other situations, when there are too many players involved at different steps, the market structure simply does not have a natural equilibrium to assign incentives to one party. This can happen even when all the players want security solutions. Android is a mobile phone operating system developed and distributed for free by Google. Google specifies the system's architecture and writes the high-level code. However, different Android phones are produced by different manufacturers that use different hardware. The operating system, which instructs programs on how to use the hardware, is customized by each manufacturer, often for each phone. The phones are then sold to mobile carriers, which sell them to consumers. New versions of the Android OS are released by Google regularly, on average about once a year.

None of these groups wants their systems to be cracked, but when a vulnerability is discovered in an operating system, it's often unclear who has the responsibility to inform the consumer and issue a patch. As a result, fewer patches are actually made and there is no mechanism to get security updates out to older phones. In 2012, a technical study estimated that over half of Android devices have unpatched vulnerabilities.
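
The patch pipeline described above can be pictured as a chain of handoffs, each of which can drop a fix. Below is a toy model with invented pass-through rates (none of them from the study); the point is only that coverage is the product of every step's rate, so even reasonable per-step behavior leaves many devices exposed.

```python
# Toy model (invented rates) of why fixes rarely reach Android handsets:
# a patch must survive every handoff in the chain, so device coverage is
# the product of the per-step pass-through probabilities.

pass_through = {
    "Google ships the fix in a new OS release":       0.95,
    "Manufacturer ports the fix to its custom build": 0.50,
    "Carrier approves and pushes the update":         0.40,
    "User actually installs the update":              0.70,
}

coverage = 1.0
for step, p in pass_through.items():
    coverage *= p
    print(f"{step:<50} -> cumulative coverage {coverage:.0%}")

# With these assumed rates only ~13% of devices end up patched --
# consistent in spirit with the "over half unpatched" finding.
```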

Occasionally, incentives actually make things worse. TRUSTe is a company that certifies websites that post a privacy policy and commit to following it. The idea is that consumers can see the TRUSTe seal on a website and feel more comfortable using that site. This, in turn, gives websites an incentive to pay TRUSTe for the certification process. Unfortunately, since TRUSTe gets paid for every seal, it has an incentive to attest to the trustworthiness of websites even when they are less than pure. One study compared “certified” sites with the rest of the Web and found that sites with the TRUSTe seal were actually more likely to be rated as “untrustworthy” by an automated security tool.

It is possible, however, to get the incentive structure right. In the United States, consumer protection laws limit the liability of credit card customers to $50 for unauthorized transactions. These laws were passed in the 1970s, just as the credit card industry was taking off. Liability for charges on a stolen credit card was then decided between the customer's bank and the merchant's bank, depending on the circumstances, and the merchant's bank often passed the responsibility to the merchant. For in-person transactions, where the main risk of fraud was a stolen card, this aligned incentives. The merchant was in the best position to verify that the user of the card was its legitimate owner. This process imposed some cost on the merchant in terms of efficiency and customer relations. The merchant balanced the risk of fraud with the costs of fraud prevention and could make a rational decision.

The arrival of web commerce dramatically increased the amount of fraud, since scammers could use stolen credit card numbers remotely. It also increased the number of legitimate transactions where the merchants did not have access to the physical card. To rebalance the risk assignment, card issuers added an extra secret to the card, the card verification value (CVV). This value was printed only on the back of the card itself and known to the card-issuing bank, so it could serve as a shared secret to verify that the user of the card had access to that card. Merchants who ask for the CVV are not liable for fraudulent transactions, although there are severe penalties if they store these values.
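
To illustrate the shared-secret logic, here is a minimal hypothetical sketch. Real issuers derive the CVV cryptographically from card data rather than storing it in a lookup table, so this shows only the incentive design the text describes: the merchant collects the value to prove card access, forwards it for verification, and never stores it.

```python
# Hypothetical sketch of the CVV check as a shared secret between the
# cardholder and the issuing bank. Real issuers derive the CVV from card
# data with cryptographic keys; the table below is purely illustrative.

import hmac

ISSUER_CVV_DB = {"4111111111111111": "123"}  # invented card -> expected CVV

def issuer_verify(card_number: str, submitted_cvv: str) -> bool:
    """Issuing bank confirms the shopper could see the physical card."""
    expected = ISSUER_CVV_DB.get(card_number)
    if expected is None:
        return False
    # Constant-time comparison avoids leaking the secret via timing.
    return hmac.compare_digest(expected, submitted_cvv)

def merchant_charge(card_number: str, cvv: str, amount_cents: int) -> bool:
    """Merchant forwards the CVV for verification but never records it;
    PCI rules impose severe penalties for storing it after authorization."""
    approved = issuer_verify(card_number, cvv)
    print(f"Charge of ${amount_cents / 100:.2f}: "
          f"{'approved' if approved else 'declined'}")
    return approved  # note: cvv was never written to any record here

merchant_charge("4111111111111111", "123", 4999)  # approved
merchant_charge("4111111111111111", "999", 4999)  # declined
```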

Now the card-issuing bank needed to detect mass use of stolen credit card numbers without the CVV. This job fell to the credit card networks (such as Visa and American Express), which are in the best position since they manage the exchange of payment data. The final step was to safeguard the CVV data to minimize its theft and abuse. The credit card companies worked together to develop the Payment Card Industry Data Security Standard (PCI DSS), which enforces a set of security rules to minimize collection and leakage of data that can lead to fraud. The result wasn't a magic solution to all credit card fraud, but it did create a marketplace where a smart law properly aligned incentives to drive sensible security investments.

By contrast, in looking at the cyber realm more broadly today, one can see how the incentives are broken for the user, the enterprise, and the system developer, each presenting their own challenges and opportunities. Individual users may be much maligned for not caring about security, but they really are just trying to use the Internet to do their jobs, or at least to watch videos of cute cats playing a keyboard when they are supposed to be doing their jobs. Users are also human, so they will usually follow the path of least resistance and seldom deviate from system defaults. Make it easy for them and they will follow. This has been proven in areas that range from retirement planning to organ donation (countries with default organ donation have a participation rate almost five times higher than those with an opt-in model).
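
That same default-following behavior is why “secure by default” matters in software design. The sketch below is purely hypothetical (the function and its parameters are invented for illustration, not drawn from any real library): the safe path costs the user nothing, while the unsafe one demands an explicit, visible choice.

```python
# Hypothetical sketch of secure-by-default API design: the protective
# behavior is the default, and opting out must be explicit and loud.

def connect(host: str, *, verify_tls: bool = True) -> None:
    """Open a connection; certificate checking is on unless the caller
    deliberately turns it off."""
    if not verify_tls:
        print(f"WARNING: TLS verification disabled for {host}")
    print(f"Connecting to {host} (verify_tls={verify_tls})")

connect("example.com")                     # the path of least resistance is safe
connect("dev.internal", verify_tls=False)  # insecure use must be spelled out
```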

Businesses equally act rationally. To justify an investment in security, a profit-minded organization (or even nonprofits, which are resource-constrained) must see some justification. Every dollar or man-hour spent on security is not spent on the organization's actual goal. There are a number of models for calculating some return on investment, but all depend on having some value for the harms of a security incident and the reduced likelihood or smaller impact of an incident that a specific security tool might have. As we discussed above, for severe incidents such as the theft of competitive data, both the harms and the defense are poorly understood.
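
One widely cited family of such models combines annualized loss expectancy (ALE) with a return-on-security-investment (ROSI) ratio. The sketch below uses invented figures; it shows both the calculation and its fragility, since every output hinges on the estimated harms and likelihoods the text calls poorly understood.

```python
# One widely cited security-ROI formulation (the "ROSI" model), sketched
# with invented numbers. ALE = annual rate of occurrence x single loss
# expectancy; the result is only as good as those estimates.

def ale(annual_rate: float, single_loss: float) -> float:
    """Annualized Loss Expectancy for one class of incident."""
    return annual_rate * single_loss

ale_before = ale(annual_rate=0.30, single_loss=500_000)  # $150,000/yr expected loss
ale_after  = ale(annual_rate=0.10, single_loss=500_000)  # tool cuts likelihood: $50,000/yr
tool_cost  = 60_000                                      # annual cost of the control

rosi = (ale_before - ale_after - tool_cost) / tool_cost
print(f"ROSI: {rosi:.0%}")  # 67% -- but shift the estimates and the case collapses
```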

The drive for incentivizing these kinds of investments will have to come from three likely sources. The first is internal. There is a maxim in business that “if you can't measure it, you can't manage it.” The Department of Homeland Security has found that organizations that are able to aggregate and analyze cyber data end up changing how they understand their incentives. They begin to see “how investments in cyber health can reduce operating costs, improve business agility, or avoid extensive mitigation costs.” The more leaders in organizations understand cybersecurity and its near- and long-term benefits, the more likely they are to invest in it.

The second source is external. Companies and other organizations exist in a marketplace of competitors and consumers. If industry norms highlight cybersecurity investments as requisite, companies will fall into line, so as not to fall behind their competitors. In turn, DHS has found that the more customers understand about cybersecurity, the stronger the market pull: “Such insights would likely strengthen consumer demand for healthy products and services and reduce risks to participants.” Security can be a virtuous cycle, with awareness driving a market response that, in turn, drives more awareness.

Finally, there may be situations where the incentives have to come from a source beyond the market. As we explored earlier, government regulation is a stopgap that can often set standards and remedy situations where the market just isn't able or willing to respond on its own. It isn't by any means a silver-bullet solution, but examples like the aforementioned Top 20 Critical Security Controls (put together by a joint team of US government and private-sector security experts, so both public and private interests were represented) can establish a set of baseline best practices. Just like safe building or fire codes, the Top 20 Controls lay out minimal requirements that any government agency or entity operating in an important business area should follow. They range from conducting an inventory of all authorized and unauthorized devices to controlling the use of administrative privileges that act as keys to the kingdom for hackers. Rather than onerous regulations, these imposed incentives should be easy to adapt and easy to verify, as they are often just measures that smart companies are already taking. That they are easy, though, doesn't mean that even the most minimal norms won't have a major impact. The Top 20 Controls, for example, were found to stop 94 percent of security risks in one study.

With the three forces of organizations, markets, and governments in play, we can then get creative to shape change further. For instance, incentives can be introduced to add security by creating new markets. Selling knowledge of “zero-day” vulnerabilities was once the domain of researchers, who tried to find holes and tell companies about them before criminals could exploit them. As cybersecurity has taken off, these “vulnerability markets” have become a big business.

Such new markets must be carefully monitored, though. With a looming cyber-industrial complex, the buyer side is also evolving to include many new actors beyond the software manufacturers looking to figure out how to protect themselves and their customers. As one report from WatchGuard found,

Vulnerability markets or auctions are a new trend in information security, allowing so-called “security” companies to sell zero-day software vulnerabilities to the highest bidder. While they claim to “vet” their customers and only sell to NATO governments and legitimate companies, there are few safeguards in place to prevent nefarious entities from taking advantage.
