The Naked Future
Patrick Tucker

In the years ahead, if more managers begin to experiment with telemetric solutions and if those experiments prove successful, we may become more accustomed to the idea of digging deeper into the secret signals inside our private communication. Not everyone will want to monitor their relationship signals for warning signs, but for those who do, the process will become easier and cheaper. More of our talking, chatting, and signaling is taking place online and will be retrievable later at lower cost.

The last predictor of relationship longevity, according to Finkel, could be called the stress test. The late psychologist Reuben Hill first proposed it in 1949 after surveying couples who had been separated during World War II. He found that how a couple deals with unexpected emergencies, such as the loss of a child, a critical injury, or a sudden drop in wealth, strongly predicts that couple's future. In much the same way that reactions to stressful events predict future health states for individuals, couples who were able to survive high-stress events grew more stable and became less likely to break up.
30

Stress is the fiery crucible in which truly stable marriages are formed. We've long known this, but today the effects of stress on husbands and wives—or potential husbands and wives—can be modeled or run through a simulation, in the same way we simulate hurricanes, floods, and credit defaults to test the resilience of infrastructure and institutions. Some of the most promising work in treating post-traumatic stress disorder (PTSD) right now is based on running simulations of the traumatic events. There's no reason why a couple who was really serious about forging the strongest long-term relationship possible couldn't run simulations or game traumatic events beforehand to see how such an event would impact their relationship. If current experiments treating PTSD with simulations continue to prove effective, marriage counselors could recommend traumatic event role-playing as a means to better ensure relationship health.

Currently, scenario testing traumatic events is not something people consider when planning a future with someone. There just never seems to be a good time to tell the person you are with, "Before we take this any further, let's do a few virtual reality natural disaster simulations!" In other words, there is no science or data yet on the effects of stress testing on marital longevity. That's just not the way we think of love, but we know that stress tests and simulations in business and engineering are effective at finding problems before those problems blow up. No one would drive onto a bridge marked with a sign reading THIS BRIDGE HAS NEVER BEEN LOAD-TESTED. Yet we carry on for years in relationships to which we've never applied any sort of objective strength test outside a Cosmo quiz. When we learn to approach personal relationship decisions with the same seriousness that we collectively apply to issues of public safety, all of us will experience fewer relationship disasters. The idea might seem far-fetched, but a few years ago so did the notion that a majority of singles would turn to the Internet to find love before heading out to a local bar. As stress tests and virtual reality simulations prove their utility in other areas of life, we will eventually get around to applying them to love, and then the last component of the formula will be in place. We can finally create our soul-mate predictor app.

The Love Machine

So what is the future of love? We know that personality profiles can help you predict who will or won't be a great date; sociometrics can tell you how well a date will go. A data set of sociometric scores going back for years will reveal how your personality, and that of someone else, might interact over a period of years. Trauma simulation can even give you a sense of how your marriage will weather life's big storms. An ensemble of these scores won't tell you if you're in love, but you can predict arguments and resolve them in advance. You can get a window into the future of your relationship with someone. You can find a mate who is indeed scientifically suited to you. More important, you can use science to make your relationship stronger.

The first step toward a better science of dating is getting customers to give up more data about themselves and how they date, beyond simple information about the sort of person whom they're interested in.

What Yagan wants is a lot more data from his users: not just their answers to questions about what they're looking for in a relationship but also Amazon reviews that suggest why some people find some products superior to others, Facebook and Foursquare information about comings and goings, and Fitbit data to measure the beating heart. These signals, formerly inaudible but now detectable online, make up what Yagan calls "true identity," and leveraging true identity "broaches a line that no site has managed to cross."

Sandy Pentland reached a similar realization early in his work on sociometric data: that "by adjusting for personal characteristics and the history of previous interactions, we can dramatically increase our ability to predict people's behavior."
31
With enough data a naked future emerges, a profile that is more living and thus predictive than any survey questionnaire because it is assembled from action, because it changes, as do you and I.

But would we dare call this love?

Perhaps we need to change our definition of what love is. We tend to view it as something we own and thus can lose, something we want and are entitled to, and something we lend in the hope of getting it back. Perhaps love is more fluid, less connected to who we are and more firmly attached to what we do. Love is more than dopamine (at first) and oxytocin (later). It's a decision that, if we are lucky, we are called upon to make over and over again. We make hundreds of decisions in our relationships every day. If we could develop the ability to pick up just a few more of the signals that the person we love sends out continuously, that decision making would improve. Love would become less work.

Though the Brahmins understood little about the makeup of the universe compared with what we know today, they understood that idea well enough.

After describing why Vedic astrology is an expert practice worthy of admiration, Paramahansa Yogananda, in his Autobiography of a Yogi, effectively devalues the entire endeavor to predict the future and launches an eloquent defense of free will: "The message boldly blazoned across the heavens at the moment of birth is not meant to emphasize fate, the result of past good and evil, but to arouse man's will to escape from his universal thralldom. What he has done, he can undo. None other than himself was the instigator of the causes of whatever effects are now prevalent in his life. He can overcome any limitation, because he created it by his own actions in the first place."
32

CHAPTER 9

Crime Prediction: The Where and the When

WHEN crack first got to Pittsburgh, Pennsylvania, in the late 1980s, police attacked the problem the only way they knew how: busting dealers who were working out in the open, running sting operations, and planting patrols on blocks where they had disrupted drug traffic before. Getting a dealer off a particular block was a big victory. If you're a Pittsburgh crack pusher, you can't just send a letter to your clients with your new address. And naturally, as anyone who has ever seen The Wire knows, whenever a dealer is forced to relocate to a new block he runs the risk of encroaching on territory that belongs to another dealer, which can lead to . . . disagreement. The police understood that clearing and holding blocks was crucial to slowing the spread of crack, but they didn't have the resources to clear and hold every neighborhood. Some dealers were going to relocate. Finding a new market that isn't yet occupied buys a drug dealer a lot of time to set up. If the police could anticipate which neighborhoods were the most conducive to drug dealing and why, they could theoretically predict where the dealers were going to set up next.

How to figure this out? The most well-established approach to predicting which neighborhoods were going to experience an uptick in crime was broken-windows theory. In 1982 researchers James Q. Wilson and George L. Kelling observed a correlation between signs of neighborhood dereliction (broken windows, vandalism, vacancy, small lifestyle crimes like prostitution and panhandling) and broad increases in crime. To this day, the theory remains the basis for zero-tolerance police efforts in places such as New York under the Giuliani administration and, to a lesser extent, Baltimore under former mayor Martin O'Malley. While it offered an effective if controversial approach for mayors looking to appear tough on crime, it was a lousy tool for predicting what sorts of crimes were going to take place and where and when they were likely to occur. Exactly how many windows have to be broken in your neighborhood before a crack dealer sets up on your corner?
1

Enter Andreas Olligschlaeger, a systems researcher and public policy scholar at Carnegie Mellon University in downtown Pittsburgh. He knew that certain variables make a neighborhood attractive to a drug dealer looking for new turf. One factor was the presence of commercially zoned space: passersby are much less likely to call the police on potential drug dealing near a factory or warehouse than near their own homes (and there are also fewer people around at night). Another factor was seasonality. Drug dealing is mostly an outdoor activity and tends, like cherry trees, to blossom in the spring and flourish in the summer.

Olligschlaeger also knew that the number of 911 calls related to weapons (shots fired), robberies, and assaults provided an indication of an emerging drug-dealing neighborhood. All these elements were clues as to where the pushers were going to go. The question became how to weight those variables. Was the presence of a potential competitor in one neighborhood more or less of a factor than a lot of residents hanging around? Exactly how big a role did seasonality play? And were the variables dependent or independent? Did an assault in one neighborhood affect the attractiveness of another neighborhood as a new drug-dealing spot, or did it not matter? At the time, Pittsburgh had a computer system called DMAP that allowed for the tracking of crimes across geographical space. But a straight averaging of these factors would likely result in a model that treated all the variables too equally. It would overfit.

Classical statistics doesn't lend itself well to modeling chaotic interactions with lots of moving parts, but artificial neural networks, which were a relatively recent innovation in 1991, were showing some interesting promise in the field of high-energy physics. An artificial neural network (aka neural net) is a mathematical program modeled on the way neurons exchange signals in the human brain. One of the core features of a neural net is that the weighting of variables changes as the system processes the problem repeatedly. In the same way that a kid shooting free throws from the same spot eventually becomes a great free-throw shooter, or the novice artist who does a thousand different sketches of hands becomes a better artist, neural nets learn by applying a particular set of tools to a particular problem over and over again. Though he was specializing in public policy at the time, Olligschlaeger is also the sort of guy who reads physics journals, which turned out to be a good thing.
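The learning loop this paragraph describes can be sketched in a few lines of Python. This is not Olligschlaeger's system, just a single artificial neuron trained by repeated passes over toy data; the features here (commercial zoning, summer season) and the numbers are invented for illustration:

```python
import math

def train_neuron(samples, passes=1000, lr=0.5):
    """Train one logistic neuron by repeated passes over the data.

    Each pass nudges the weights to reduce prediction error -- the
    'learning by repetition' the text compares to free-throw practice.
    """
    n = len(samples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(passes):
        for features, target in samples:
            z = bias + sum(w * x for w, x in zip(weights, features))
            pred = 1.0 / (1.0 + math.exp(-z))   # squash to a 0..1 probability
            err = pred - target                  # how wrong this guess was
            # Gradient step: shift each weight against its share of the error.
            weights = [w - lr * err * x for w, x in zip(weights, features)]
            bias -= lr * err
    return weights, bias

# Toy data: (commercial zoning, summer season) -> drug-call uptick (1) or not (0)
data = [([1, 1], 1), ([1, 0], 1), ([0, 1], 0), ([0, 0], 0)]
w, b = train_neuron(data)
z = b + sum(wi * xi for wi, xi in zip(w, [1, 1]))
print(1.0 / (1.0 + math.exp(-z)))  # high probability for a commercial block in summer
```

After enough passes the weights themselves reveal which variables mattered: here the zoning weight grows large while the season weight stays near zero, because in this toy data zoning alone predicts the outcome.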

The movement of drug dealers around Pittsburgh had to be subject to mathematical laws just like the movement of particles. Olligschlaeger trained a neural net system on every 911 call related to assault, robbery, and weapons from 1990 to 1992, as well as six other variables (for a total of nine), and then ran fifteen thousand simulations. The system came up with a series of predictions for which 2,150-square-foot sections of the city would see an uptick in drug-related 911 calls. At the end of August 1992 Olligschlaeger made three maps: one showed the predictions made by the straight statistical model (regression forecasts); the other two were neural net based. The neural net model delivered a clear 54 percent improvement over the straight averaging model.
2

The models all performed differently, and none predicted the actual number of calls perfectly. But the straight statistical regression model greatly overestimated the number of calls that occurred, so if the PD had used that model, they would have sent a lot of cops to quiet neighborhoods in anticipation of calls that would never come, which means they wouldn't have covered the problem areas as well. The neural net, conversely, missed the relatively few calls that occurred in the southwestern portion of the city. Yet compared with the statistical regression model, it was the model least likely to send police to a place where there was definitely not going to be any action. It did a much better job predicting not only where crime would not be but also the number of drug-related calls that would occur in each map cell where they did happen. It provided far better value than straight guessing or even traditional statistical analysis. Unfortunately, that wasn't good enough to convince the city of Pittsburgh, which never adopted neural nets as a crime-fighting tool.
3
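The trade-off described above, total forecast error versus false alarms that waste patrols, can be made concrete with a small sketch. The per-cell call counts below are invented for illustration, not Olligschlaeger's data:

```python
def compare_forecasts(actual, model_a, model_b, threshold=1):
    """Score two per-cell forecasts the way the text compares the models:
    total absolute error, plus how often each model would send cops to a
    cell where nothing happened."""
    def score(pred):
        abs_err = sum(abs(p - a) for p, a in zip(pred, actual))
        false_alarms = sum(1 for p, a in zip(pred, actual)
                           if p >= threshold and a == 0)
        return abs_err, false_alarms
    return {"regression": score(model_a), "neural_net": score(model_b)}

# Invented drug-related call counts for six map cells.
actual     = [0, 0, 3, 5, 0, 2]
regression = [2, 1, 4, 6, 2, 3]   # overestimates: patrols sent to quiet cells
neural_net = [0, 0, 3, 4, 0, 0]   # misses some calls but raises no false alarms
print(compare_forecasts(actual, regression, neural_net))
# -> {'regression': (8, 3), 'neural_net': (3, 0)}
```

On these made-up numbers the regression racks up both more total error and three wasted deployments, while the neural net's errors are all undercounts in cells that genuinely had calls, mirroring the pattern the chapter reports.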

You can't blame the city hall bureaucrats for not buying the neural net concept. The connection between the input (data) and the output (prediction) was too opaque. Even though the predictions themselves were good, the lack of transparency as to how the net reached its conclusion made the entire system unattractive from a policy standpoint. In his paper on the subject, Olligschlaeger himself admitted this: “One disadvantage of neural networks is that there currently are no tests of statistical significance for the estimated weight structures. However, if the main goal of a model is to provide good forecasts rather than to analyze relationships between dependent and independent variables, then this should not be an issue.”

Though the use of neural nets did not become standard practice, Olligschlaeger's study represents a key evolutionary moment in what is today called predictive policing: the use of computational databases and statistics to identify emergent crime patterns and deploy cops preemptively.

Skip ahead to 1994, when newly appointed New York City police commissioner William Bratton instituted what he called a "strategic re-engineering" of the city's police department. The use of up-to-the-minute data, citywide crime statistics, and crime mapping went on to bring down the city's crime rate by 37 percent in three years. Bratton's re-engineering became another important victory for predictive policing, but not a decisive one, because stats were only one portion of Bratton's overall strategy. Today, many scholars credit tougher zero-tolerance and stop-and-frisk policies, coupled with the use of crime mapping, for bringing down New York City's crime rate in the 1990s. These measures were not without controversy. New York's aggressive law enforcement strategies under Bratton led to charges of harassment and overly aggressive tactics, particularly the stop-and-frisk provision, which targeted mainly minority youth.
4

The first unequivocal victory for predictive policing in practice occurred in 2003 in Richmond, Virginia. Criminologist Colleen McCue was using IBM's Statistical Package for the Social Sciences (SPSS) software as part of her research into crime patterns. She realized that incidents of random gunfire around New Year's Day in Richmond happened within a very specific time period, between 10 P.M. on New Year's Eve and 2 A.M. on New Year's Day. And these incidents occurred in very particular neighborhoods and under unique conditions. With these variables she built a model showing that on New Year's Eve the department could dramatically cut down on gunfire complaints, nab a lot of illegal firearms, and do it all with far fewer officers than they had used for patrol the year before, by placing police where the gunfire was most likely to occur.
5
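The book doesn't reproduce McCue's actual SPSS model, but the underlying idea, counting historical complaints that fall inside the risky time window and ranking neighborhoods by that count, might be sketched like this (the neighborhood names and records are invented):

```python
from collections import Counter
from datetime import datetime

def newyear_hotspots(complaints, top_n=3):
    """Rank neighborhoods by historical gunfire complaints falling in the
    10 P.M.-2 A.M. New Year's window, to decide where to station officers.
    A sketch of the correlation-based idea, not McCue's actual model."""
    counts = Counter()
    for neighborhood, when in complaints:
        t = datetime.strptime(when, "%Y-%m-%d %H:%M")
        in_window = (t.month == 12 and t.day == 31 and t.hour >= 22) or \
                    (t.month == 1 and t.day == 1 and t.hour < 2)
        if in_window:
            counts[neighborhood] += 1
    return [n for n, _ in counts.most_common(top_n)]

# Invented complaint records: (neighborhood, timestamp)
history = [
    ("Southside", "2001-12-31 23:10"), ("Southside", "2002-01-01 00:45"),
    ("Southside", "2001-12-31 22:30"), ("Eastview", "2002-01-01 01:15"),
    ("Northgate", "2001-07-04 23:00"),  # outside the window: ignored
]
print(newyear_hotspots(history))  # -> ['Southside', 'Eastview']
```

The point of the sketch is that nothing exotic is required: a filter on time plus a count per place is enough to concentrate officers where the complaints historically cluster.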

Most police departments are run like regular businesses. Cops have precincts to report to and are scheduled in regular shifts. Cops go out on patrol to look for crimes in progress, but most of the job is responding to complaints and calls that have come in. The idea of sticking a lot of cops in one spot, in one time window, in advance of something that might happen was pretty revolutionary in 2003, but the department followed her lead.

When the initiative, dubbed Project Exile, was concluded, gunfire complaints were down by half compared to the previous year, gun seizures were up 246 percent, and the department had saved $15,000 in New Year's overtime pay for officers. Complaints were down, guns came off the streets in droves, and more cops got the night off. It was a triple score.
6

Both Project Exile and the neural nets showed that they could get results. Yet where Olligschlaeger found resistance from city officials in Pittsburgh, Richmond police were eager to embrace Project Exile. The reason why says a lot about the way city governments work. Because Exile didn't involve a neural net or any outrageously sophisticated modeling technique and was instead a straight statistical regression, the political decision makers could understand it. Neural nets are sometimes referred to as black box systems. It's extremely difficult to see exactly how they reach the conclusions that they reach. What was a fascinating system scientifically was unusable as a decision-making tool for a lawmaker or police representative, someone who had to be able to show how and why he arrived at a particular decision, almost regardless of whether the decision was right or wrong. Yes, Exile proved extremely effective when applied to the problem of random gunfire but the challenge of identifying emerging drug neighborhoods was rather more difficult and potentially of greater long-term significance.

Project Exile simply capitalized on better record-keeping techniques. It worked on correlation. Using data to predict crime on the basis of cause was a much more important test. It would occur a few years later in Memphis.

The Red Dot of Crime

Over the last several decades, Memphis has followed the same path—straight down—as many formerly prosperous U.S. metro regions. Property values and college graduation rates are abysmal. Poverty is high. Throughout the early 2000s, Memphis was consistently ranked one of the top five worst U.S. cities for violent crime.

Between 2006 and 2010, in spite of all of the above, crime went down 31 percent.

The demographics of Memphis didn't change in that time. The approximately twenty-three hundred men and women on the police force at that time were the same sort you find in any town where there's too much to do and too few to do it. Here's what changed: the department began handling its information differently thanks to Dr. Richard Janikowski, an associate professor in the Department of Criminology and Criminal Justice at the University of Memphis.

Janikowski convinced local police head Larry Godwin to allow him to study the department's arrest records. But Janikowski wasn't looking for biographical sketches of the perpetrators; he was looking for marginalia, the circumstances behind each arrest, the where and when of crime.

The biggest single finding, and by far the most controversial, was that the rising crime rate was closely connected to Section 8 housing, federally subsidized housing for qualified individuals below a certain income level. When Janikowski and his wife, housing expert Phyllis Betts, took a crime hot-spot map and layered it on top of the map of Section 8 housing, the pattern was unmistakable. Hanna Rosin, in her 2008 Atlantic article on Janikowski, described it this way: "On the merged map, dense violent-crime areas are shaded dark blue, and Section 8 addresses are represented by little red dots. All of the dark-blue areas are covered in little red dots, like bursts of gunfire. The rest of the city has almost no dots."
7

When I asked Janikowski about it, he pointed out that the blue-area, red-dot analysis omitted some important data. "You know, the stuff didn't overlap perfectly. There were high levels of correlation with it. Section 8 housing was part of what you see there. But it was also just heavy levels—a big concentration of poverty. And that's a complex relationship that was occurring."

Today, we know with more certainty that the connection between Section 8 housing and rising crime is correlative, not causative. People who live in this housing are not more likely to commit crimes so much as they are more likely to move to low-rent neighborhoods where the probability of a crime rise is already high.
8

This relationship between the likelihood of being a crime victim, being a crime assailant, and living in Section 8 was particularly complicated in Memphis, says Janikowski, where many traditional Section 8 units were in terrible shape and others were being torn down. “You've got lots of demolition that was occurring in what was the traditional inner city for various reasons. So you had movement there. You had a lot of movement of at-risk populations. And they all tended to cluster because, again, the price of housing.”
