By Bruce Schneier
In many industries, the options you’re offered, the price you pay, and the service
you receive depend on information about you: bank loans, auto insurance, credit cards,
and so on. Internet surveillance facilitates a fine-tuning of this practice. Online
merchants already show you different prices and options based on your history and
what they know about you. Depending on who you are, you might see a picture of a red
convertible or a picture of a minivan in online car ads, and be offered different
options for financing and discounting when you visit dealer websites. According to
a 2010
Wall Street Journal
article, the price you pay on the Staples website depends on where you are located,
and how close a competitor’s store is to you. The article states that other companies,
like Rosetta Stone and Home Depot, are also adjusting prices on the basis of information
about the individual user.
More broadly, we all have a customer score, assigned to us by data brokers. It's like
a credit score, except that it's not a single number, and it focuses on what you buy.
It's compiled from things like purchasing data from retailers, personal financial information, survey
data, warranty card registrations, social media interactions, loyalty card data, public
records, website interactions, charity donor lists, online and offline subscriptions,
and health and fitness information. All of this is used to determine what ads and
offers you see when you browse the Internet.
In 2011, the US Army created a series of recruiting ads showing soldiers of different
genders and racial backgrounds. It partnered with a
cable company to deliver those ads according to the demographics of the people living
in the house.
There are other ways to discriminate. In 2012, Orbitz highlighted different prices
for hotel rooms depending on whether viewers were using Mac or Windows. Other travel
sites showed people different offers based on their browsing history. Many sites estimate
your income level, and show you different pages based on that. Much of this is subtle.
It’s not that you can’t see certain airfares or hotel rooms, it’s just that they’re
ordered so that the ones the site wants to show you are easier to see and click on.
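To make the mechanics of this kind of steering concrete, here is a minimal sketch in Python of how a site might re-rank otherwise identical offers based on a guessed profile of the visitor. The profile fields, weights, and numbers are invented assumptions for illustration, not any particular company's code.

```python
# Hypothetical sketch of profile-based re-ranking. Every offer is still shown;
# only the ordering changes, so the steering is hard for the customer to notice.
from dataclasses import dataclass

@dataclass
class Offer:
    name: str
    price: float
    margin: float  # what the site earns if this offer is booked

def rank_offers(offers, profile):
    """Order offers so the ones the site prefers appear first."""
    # Guess willingness to pay from coarse signals (operating system, estimated income).
    willingness = 1.2 if profile.get("os") == "mac" else 1.0
    willingness *= 1.3 if profile.get("estimated_income") == "high" else 1.0

    def score(offer):
        # Favor high-margin offers, scaled by how much this visitor will tolerate.
        return offer.margin * willingness - 0.01 * offer.price

    return sorted(offers, key=score, reverse=True)

offers = [Offer("Budget room", 89.0, margin=5.0), Offer("Deluxe room", 219.0, margin=40.0)]
print([o.name for o in rank_offers(offers, {"os": "mac", "estimated_income": "high"})])
```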
We saw in Chapter 3 how data about us can be used to predict age, gender, race, sexual
preference, relationship status, and many other things. This gives corporations a
greater advantage over consumers, and as they amass more data, both on individuals
and on classes of people, that edge will only increase. For example, marketers know
that women feel less attractive on Mondays, and that that’s the best time to advertise
cosmetics to them. And they know that different ages and genders respond better to
different ads. In the future, they might know enough about specific individuals to
know that you're not very susceptible to offers at 8:00 a.m. because you haven't had your
coffee yet and are grouchy, that you become more susceptible around 9:30 once you're fully
caffeinated, and that you're less susceptible again by 11:00 because your blood sugar
is low just before lunch.
People are also judged by their social networks. Lenddo is a Philippine company that
assesses people’s credit risk by looking at the creditworthiness of the people they
interact with frequently on Facebook. In another weblining example, American Express
has reduced people’s credit limits based on the types of stores they shop at.
University of Pennsylvania law professor Oscar Gandy presciently described all this
in 1993 as the “panoptic sort”: “The collection, processing, and sharing of information
about individuals and groups that is generated through their daily lives as citizens,
employees, and consumers and is used to coordinate and control their access to the
goods and services that define life in the modern capitalist economy.” Those who have
this capability have enormous power indeed. It’s the power to use discriminatory criteria
to dole out different opportunities, access, eligibility, prices (mostly in terms
of special offers and discounts), attention (both positive and negative), and exposure.
This practice can get very intrusive. High-end restaurants are starting to Google
their customers, to better personalize their dining experiences. They can’t give people
menus with different prices, but they can certainly hand them the wine list with either
the cheaper side up or the more expensive side up. Automobile insurance companies
are experimenting with usage-based insurance. If you allow your insurance company
to monitor when, how far, and how fast you drive, you could get a lower insurance
rate.
The potential for intrusiveness increases considerably when it’s an employer–employee
relationship. At least one company negotiated a significant reduction in its health
insurance costs by distributing Fitbits to its employees, which gave the insurance
company an unprecedented view into its subscribers’ health habits. Similarly, several
schools are requiring students to wear smart heart rate monitors in gym class; there’s
no word about what happens to that data afterwards. In 2011, Hewlett-Packard analyzed
employee data to predict who was likely to leave the company, then informed their
managers.
Workplace surveillance is another area of enormous potential harm. For many of us,
our employer is the most dangerous power that has us under surveillance. Employees
who are regularly surveilled include call center workers, truck drivers, manufacturing
workers, sales teams, retail workers, and others. More and more of us have our corporate electronic
communications constantly monitored. A lot of this comes from a new field called “workplace
analytics,” which is basically surveillance-driven human resources management. If
you use a corporate computer or cell phone, you have almost certainly given your employer
the right to monitor everything you do on those devices. Some of this is legitimate;
employers have a right to make sure you’re not playing Farmville on your computer
all day. But you probably use those devices on your own time as well, for personal
as well as work communications.
Any time we’re monitored and profiled, there’s the potential for getting it wrong.
You are already familiar with this; just think of all the irrelevant advertisements
you’ve been shown on the Internet, on the basis of some algorithm misinterpreting
your interests. For some people, that’s okay; for others, there’s low-level psychological
harm from being categorized, whether correctly or incorrectly. The opportunity for
harm rises as the judging becomes more important: our credit ratings
depend on algorithms; how we’re treated at airport security depends partly on corporate-collected
data.
There are chilling effects as well. For example, people are refraining from looking
up information about diseases they might have because they’re afraid their insurance
companies will drop them.
It’s true that a lot of corporate profiling starts from good intentions. Some people
might be denied a bank loan because of their deadbeat Facebook friends, but Lenddo’s
system is designed to enable banks to give loans to people without credit ratings.
If their friends have good credit ratings, that's a mark in their favor. Using
personal data to determine insurance rates or credit card spending limits might cause
some people to get a worse deal than they otherwise would have, but it also gives
many people a better deal than they otherwise would have.
In general, however, surveillance data is being used by powerful corporations to increase
their profits at the expense of consumers. Customers don’t like this, but as long
as (1) sellers are competing with each other for our money, (2) software systems make
price discrimination easier, and (3) the discrimination can be hidden from customers,
it is going to be hard for corporations to resist doing it.
SURVEILLANCE-BASED MANIPULATION
Someone who knows things about us has some measure of control over us, and someone
who knows everything about us has a lot of control over us. Surveillance facilitates
control.
Manipulation doesn’t have to involve overt advertising. It can be product placement
ensuring you see pictures that have a certain brand of car in the background. Or just
an increase in how often you see that car. This is, essentially, the business model
of search engines. In their early days, there was talk about how an advertiser could
pay for better placement in search results. After public outcry and subsequent guidance
from the FTC, search engines started visually differentiating between “natural” results
generated by their algorithms and paid ones. So now paid search results in Google are framed
in yellow, and paid results in Bing are framed in pale blue. This worked for a while, but
recently the trend has shifted back. Google is now accepting money to insert particular
URLs into search results, and not just in the
separate advertising areas. We don’t know how extensive this is, but the FTC is again
taking an interest.
When you’re scrolling through your Facebook feed, you don’t see every post by every
friend; what you see has been selected by an automatic algorithm that’s not made public.
But people can pay to increase the likelihood that their friends or fans will see
their posts. Payments for placement represent a significant portion of Facebook’s
income. Similarly, a lot of those links to additional articles at the bottom of news
pages are paid placements.
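As a rough illustration of how paid placement can shape what you see, here is a short hypothetical Python sketch in which a sponsored boost is simply added to a post's ranking score. The formula, weights, and field names are assumptions made for this example; they are not Facebook's actual algorithm.

```python
# Hypothetical feed-ranking sketch: paid "boosts" raise a post's score.
# The weights and fields are invented; real ranking systems are far more complex.

def feed_score(post):
    engagement = 2.0 * post["likes"] + 3.0 * post["comments"]
    recency = 10.0 / (1.0 + post["hours_old"])
    boost = 50.0 if post["sponsored"] else 0.0  # paid placement
    return engagement + recency + boost

posts = [
    {"author": "friend", "likes": 12, "comments": 4, "hours_old": 2, "sponsored": False},
    {"author": "brand", "likes": 1, "comments": 0, "hours_old": 8, "sponsored": True},
]

# You see only the top of this ordering, not every post from every friend.
for post in sorted(posts, key=feed_score, reverse=True):
    print(post["author"], round(feed_score(post), 1))
```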
The potential for manipulation here is enormous. Here’s one example. During the 2012
election, Facebook users had the opportunity to post an “I Voted” icon, much like
the real stickers many of us get at polling places after voting. There is a documented
bandwagon effect with respect to voting; you are more likely to vote if you believe
your friends are voting, too. This manipulation had the effect of increasing voter
turnout by 0.4% nationwide. So far, so good. But now imagine if Facebook manipulated
the visibility of the “I Voted” icon on the basis of either party affiliation or some
decent proxy of it: ZIP code of residence, blogs linked to, URLs liked, and so on.
It didn’t, but if it had, it would have skewed voter turnout toward one party.
It would be hard to detect, and it wouldn’t even be illegal. Facebook
could easily tilt a close election by selectively manipulating what posts its users
see. Google might do something similar with its search results.
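To see why selective nudging matters, here is a back-of-the-envelope calculation in Python. Only the 0.4% turnout effect comes from the text above; the number of targeted users is a hypothetical assumption.

```python
# Back-of-the-envelope: selectively showing the "I Voted" nudge to one side.
# All inputs except the 0.4% turnout effect are hypothetical.

turnout_effect = 0.004             # 0.4% bump among users shown the icon (from the text)
users_leaning_party_a = 5_000_000  # hypothetical users in one state, leaning toward party A
extra_votes_for_a = users_leaning_party_a * turnout_effect

print(f"Extra votes for party A: {extra_votes_for_a:,.0f}")
# ~20,000 extra votes -- more than the margin of victory in many close statewide races.
```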
A truly sinister social networking platform could manipulate public opinion even more
effectively. By amplifying the voices of people it agrees with, and dampening those
of people it disagrees with, it could profoundly distort public discourse. China does
this with its 50 Cent Party: people hired by the government to post comments on social
networking sites supporting party positions and to challenge comments opposing them.
Samsung has done much the same thing.
Many companies manipulate what you see according to your user profile: Google search,
Yahoo News, even online newspapers like the New York Times. This is a big deal. The
first listing in a Google search result gets a third of
the clicks, and if you’re not on the first page, you might as well not exist. The
result is that the Internet you see is increasingly tailored to what your profile
indicates your interests are. This leads to a phenomenon that
political activist Eli Pariser has called the “filter bubble”: an Internet optimized
to your preferences, where you never have to encounter an opinion you don’t agree
with. You might think that’s not too bad, but on a large scale it’s harmful. We don’t
want to live in a society where everybody only ever reads things that reinforce their
existing opinions, where we never have spontaneous encounters that enliven, confound,
confront, and teach us.
In 2012, Facebook ran an experiment in control. It selectively manipulated the newsfeeds
of 680,000 users, showing them either happier or sadder status updates. Because Facebook
constantly monitors its users—that’s how it turns its users into advertising revenue—it
could easily monitor the experimental subjects and collect the results. It found that
people who saw happier posts tended to write happier posts, and vice versa. I don’t
want to make too much of this result. Facebook only did this for a week, and the effect
was small. But once sites like Facebook figure out how to do this effectively, they
will be able to monetize this. Not only do women feel less attractive on Mondays;
they also feel less attractive when they feel depressed. We’re already seeing the
beginnings of systems that analyze people’s voices and body language to determine
mood; companies want to better determine when customers are getting frustrated, and
when they can be most profitably upsold. Manipulating those emotions to market products
better is the sort of thing that’s acceptable in the advertising world, even if it
sounds pretty horrible to us.
Manipulation is made easier because of the centralized architecture of so many of
our systems. Companies like Google and Facebook sit at the center of our communications.
That gives them enormous power to manipulate and control.
Unique harms can arise from the use of surveillance data in politics. Election politics
is very much a type of marketing, and politicians are starting to use personalized
marketing’s capability to discriminate as a way to track voting patterns and better
“sell” a candidate or policy position. Candidates and advocacy groups can create ads
and fund-raising appeals targeted to particular categories: people who earn more than
$100,000 a year, gun owners, people who have read news articles on one side of a particular
issue, unemployed veterans . . . anything you can think of. They can target outraged
ads to one group of people, and thoughtful policy-based ads to another. They can also
fine-tune their get-out-the-vote campaigns on Election Day, and more efficiently gerrymander
districts between elections. Such use of data will likely have fundamental effects
on democracy and voting.
Psychological manipulation—based both on personal information and on control of the
underlying systems—will get better and better. Even worse, it will become so good
that we won’t know we’re being manipulated. This is a hard reality for us to accept,
because we all like to believe we are too smart to fall for any such ploy. We’re not.