Everything Bad Is Good for You
THE INTERNET

VIEWERS WHO GET LOST in 24's social network have a resource available to them that Dallas viewers lacked: the numerous online sites and communities that share information about popular television shows. Just as Apprentice viewers mulled Troy's shady business ethics in excruciating detail, 24 fans exhaustively document and debate every passing glance and brief allusion in the series, building detailed episode guides and lists of Frequently Asked Questions. One Yahoo! site featured at the time of this writing more than forty thousand individual posts from ordinary viewers, contributing their own analysis of last night's episode, posting questions about plot twists, or speculating on the upcoming season. As the shows have complexified, the resources for making sense of that complexity have multiplied as well. If you're lost in 24's social network, you can always get your bearings online.

All of which brings us to another crucial piece in the puzzle of the Sleeper Curve: the Internet. Not just because the online world offers resources that help sustain more complex programming in other media, but because the process of acclimating to the new reality of networked communications has had a salutary effect on our minds. We do well to remind ourselves how quickly the industrialized world has embraced the many forms of participatory electronic media—from e-mail to hypertext to instant messages and blogging. Popular audiences embraced television and the cinema in comparable time frames, but neither required the learning curve of e-mail or the Web. It's one thing to adapt your lifestyle to include time for sitting around watching a moving image on a screen; it's quite another to learn a whole new language of communication and a small army of software tools along with it. It seems almost absurd to think of this now, but when the idea of hypertext documents first entered the popular domain in the early nineties, it was a distinctly avant-garde idea, promoted by an experimentalist literary fringe looking to explode the restrictions of the linear sentence and the page-bound book. Fast-forward less than a decade, and something extraordinary occurs: exploring nonlinear document structures becomes as second nature as dialing a phone for hundreds of millions—if not billions—of people. The mass embrace of hypertext is like the Seinfeld “Betrayal” episode: a cultural form that was once exclusively limited to avant-garde sensibilities, now happily enjoyed by grandmothers and third-graders worldwide.

I won't dwell on this point, because the premise that increased interactivity is good for the brain is not a new one. (A number of insightful critics—Kevin Kelly, Douglas Rushkoff, Janet Murray, Howard Rheingold, Henry Jenkins—have made variations on this argument over the past decade or so.) But let me say this much: The rise of the Internet has challenged our minds in three fundamental and related ways: by virtue of being participatory, by forcing users to learn new interfaces, and by creating new channels for social interaction.

Almost all forms of sustained online activity are participatory in nature: writing e-mails, sending IMs, creating photo logs, posting two-page analyses of last night's Apprentice episode. Steve Jobs likes to describe the difference between television and the Web as the difference between lean-back and sit-forward media. The networked computer makes you lean in, focus, engage, while television encourages you to zone out. (Though not as much as it used to, of course.) This is the familiar interactivity-is-good-for-you argument, and it's proof that the conventional wisdom is, every now and then, actually wise.

There was a point several years ago, during the first wave of Internet cheerleading, when it was still possible to be a skeptic about how participatory the new medium would turn out to be. Everyone recognized that the practices of composing e-mail and clicking on hyperlinks were going to be mainstream activities, but how many people out there were ultimately going to be interested in publishing more extensive material online? And if that turned out to be a small number—if the Web turned out to be a medium where most of the content was created by professional writers and editors—was it ultimately all that different from the previous order of things?

The tremendous expansion of the blogging world over the past two years has convincingly silenced this objection. According to a 2004 study by the Pew Charitable Trusts, more than 8 million Americans report that they have a personal weblog or online diary. The wonderful blog-tracking service Technorati reports that roughly 275,000 blog entries are published in the average day—a tiny fraction of them authored by professional writers. After only two years of media hype, the number of active bloggers in the United States alone has reached the audience size of prime-time network television.

So why were the skeptics so wrong about the demand for self-publishing? Their primary mistake was to assume that the content produced in this new era would look like old-school journalism: op-ed pieces, film reviews, cultural commentary. There's plenty of armchair journalism out there, of course, but the great bulk of personal publishing is just that, personal: the online diary is the dominant discursive mode in the blogosphere. People are using these new tools not to opine about social security privatization; they're using the tools to talk about their lives. A decade ago Douglas Rushkoff coined the phrase “screenagers” to describe the first generation that grew up with the assumption that the images on a television screen were supposed to be manipulated; that they weren't just there for passive consumption. The next generation is carrying that logic to a new extreme: the screen is not just something you manipulate, but something you project your identity onto, a place to work through the story of your life as it unfolds.

To be sure, that projection can create some awkward or unhealthy situations, given the public intimacy of the online diary, and the potential for identity fraud. But every new technology can be exploited or misused to nefarious ends. For the vast majority of those 8 million bloggers, these new venues for self-expression have been a wonderful addition to their lives. There's no denying that the content of your average online diary can be juvenile. These diaries are, after all, frequently created by juveniles. But thirty years ago those juveniles weren't writing novels or composing sonnets in their spare time; they were watching Laverne & Shirley. Better to have minds actively composing the soap opera of their own lives than zoning out in front of someone else's.

The Net has actually had a positive lateral effect on the tube as well, in that it has liberated television from attempting tasks that the medium wasn't innately well suited to perform. As a vehicle for narrative and first-person intimacy, television can be a delightful medium, capable of conveying remarkably complex experiences. But as a source of information, it has its limitations. The rise of the Web has enabled television to offload some of its information-sharing responsibilities to a platform that was designed specifically for the purposes of sharing information. This passage from Postman's Amusing Ourselves to Death showcases exactly how much has changed over the past twenty years:

Television…encompasses all forms of discourse. No one goes to a movie to find out about government policy or the latest scientific advance. No one buys a record to find out the baseball scores or the weather or the latest murder…. But everyone goes to television for all these things and more, which is why television resonates so powerfully throughout the culture. Television is our culture's principal mode of knowing about itself.

No doubt in total hours television remains the dominant medium in American life, but there is also no doubt that the Net has been gaining on it with extraordinary speed. If the early adopters are any indication, that dominance won't last for long. And for the types of knowledge-based queries that Postman describes—looking up government policy or sports scores—the Net has become the first place that people consult. Google is our culture's principal way of knowing about itself.

The second way in which the rise of the Net has challenged the mind runs parallel to the evolving rule systems of video games: the accelerating pace of new platforms and software applications forces users to probe and master new environments. Your mind is engaged by the interactive content of networked media—posting a response to an article online, maintaining three separate IM conversations at the same time—but you're also exercising cognitive muscles interacting with the form of the media as well: learning the tricks of a new e-mail client, configuring the video chat software properly, getting your bearings after installing a new operating system. This type of problem-solving can be challenging in an unpleasant way, of course, but the same can be said for calculus. Just because you don't like troubleshooting your system when your browser crashes doesn't mean you aren't exercising your logic skills in finding a solution. This extra layer of cognitive involvement derives largely from the increased prominence of the interface in digital technology. When new tools arrive, you have to learn what they're good for, but you also have to learn the rules that govern their use. To be an accomplished telephone user, you needed to grasp the essential utility of being able to have real-time conversations with people physically removed from you, and you had to master the interface of the telephone device itself. That same principle holds true for digital technologies, only the interfaces have expanded dramatically in depth and complexity. There's only so much cognitive challenge at stake in learning the rules of a rotary dial phone. But you could lose a week exploring all the nooks and crannies of Microsoft Outlook.

Just as we saw in the world of games, learning the intricacies of a new interface can be a genuine pleasure. This is a story that is not often enough told in describing our evolving relationship with software. There is a kind of exploratory wonder in downloading a new application, and meandering through its commands and dialog boxes, learning its tricks by feel. I've often found certain applications are more fun to explore the first time than they actually are to use—because in the initial exploration, you can delight in features that are clever without being terribly helpful. This sounds like something only a hardened tech geek would say, but I suspect the feeling has become much more mainstream over the past few years. Think of the millions of ordinary music fans who downloaded Apple's iTunes software: I'm sure many of them enjoyed their first walk through the application, seeing all the tools that would revolutionize the way they listened to music. Many of them, I suspect, eschewed the manual altogether, choosing to probe the application the way gamers investigate their virtual worlds: from the inside. That probing is a powerful form of intellectual activity—you're learning the rules of a complex system without a guide, after all. And it's all the more powerful for being fun.

Then there is the matter of social connection. The other concern that Net skeptics voiced a decade ago revolved around a withdrawal from public space: yes, the Internet might connect us to a new world of information, but it would come at a terrible social cost, by confining us in front of barren computer monitors, away from the vitality of genuine communities. In fact, nearly all of the most hyped developments on the Web in the past few years have been tools for augmenting social connection: online personals, social and business network sites such as Friendster, the Meetup.com service so central to the political organization of the 2004 campaign, the many tools designed to enhance conversation between bloggers—not to mention all the handheld devices that we now use to coordinate new kinds of real-world encounters. Some of these tools create new modes of communication that are entirely digital in nature (the cross-linked conversations of bloggers). Others use the networked computer to facilitate a face-to-face encounter (as in Meetup). Others involve a hybrid dance of real and virtual encounters, as in the personals world, where flesh-and-blood dates usually follow weeks of online flirting. Tools like Google have fulfilled the original dream of digital machines becoming extensions of our memory, but the new social networking applications have done something that the visionaries never imagined: they are augmenting our people skills as well, widening our social networks, and creating new possibilities for strangers to share ideas and experiences.

Television and automobile society locked people up in their living rooms, away from the clash and vitality of public space, but the Net has reversed that long-term trend. After a half-century of technological isolation, we're finally learning new ways to connect.

FILM

HAVE THE MOVIES UNDERGONE an equivalent transformation? The answer to that is, I believe, a qualified yes. The obvious way in which popular film has grown more complex is visual and technological: the mesmerizing special effects; the quicksilver editing. That's an interesting development, and an entertaining one, but not one that is likely to have a beneficial effect on our minds. Do we see the same growing narrative complexity, the same audience “filling in” that we see in television shows today? At the very top of the box office list, there is some evidence of the Sleeper Curve at work. For a nice apples-to-apples comparison, contrast the epic scale and intricate plotting of the Lord of the Rings trilogy to the original Star Wars trilogy. Lucas borrowed some of the structure for Star Wars from Tolkien's novels, but in translating them into a blockbuster space epic, he simplified the narrative cosmology dramatically. Both share a clash between darkness and light, of course, and the general structure of the quest epic. But the particulars are radically different. By each crucial measure of complexity—how many narrative threads you're forced to follow, how much background information you need to interpret on the fly—Lord of the Rings is several times more challenging than Star Wars. The easiest way to grasp this is simply to review the number of characters who have active threads associated with them, characters who affect the plot in some important way, and who possess a biographical story that the film conveys. Star Wars contains roughly ten:

 

Luke Skywalker

Han Solo

Princess Leia Organa

Grand Moff Tarkin

Ben Obi-Wan Kenobi

C-3PO

R2-D2

Chewbacca

Darth Vader

 

Lord of the Rings, on the other hand, forces you to track almost three times as many:

 

Everard Proudfoot

Sam Gamgee

Sauron

Boromir

Galadriel

Legolas Greenleaf

Pippin

Celeborn

Gil-galad

Bilbo Baggins

Gandalf

Saruman

Lurtz

Elendil

Aragorn

Haldir

Gimli

Gollum

Arwen

Elrond

Frodo Baggins

 

The cinematic Sleeper Curve is most pronounced in the genre of children's films. The megahits of the past ten years—Toy Story; Shrek; Monsters, Inc.; and the all-time moneymaking champ, Finding Nemo—follow far more intricate narrative paths than earlier films like The Lion King, Mary Poppins, or Bambi. Much has been written about the dexterity with which the creators of these recent films build distinct layers of information into their plots, dialogue, and visual effects, creating a kind of hybrid form that dazzles children without boring the grownups. (Toy Story, for instance, harbors an armada of visual references to other movies—Raiders of the Lost Ark, The Right Stuff, Jurassic Park—that wouldn't be out of place in a Simpsons episode.) But the most significant change in these recent films is structural.

Take as a representative comparison the plots of Bambi (1942), Mary Poppins (1964), and Finding Nemo (2003). Set aside the question of the life lessons imparted by these films—they are all laudable, of course—and focus instead on the number of distinct characters in each film who play an integral role in the plot, characters who are presented with some biographical information, who develop or change over the course of the film. (Characters with a “story arc,” as screenwriting jargon has it.) All three films contain a family unit at their core: Bambi and Flower, the Bankses, Nemo and his widowed father. They also feature one or two main sidekicks who complement the family unit: Thumper, Mary Poppins and Bert, the amnesiac Dory. But beyond those shared characteristics, the plots diverge dramatically. Bambi's plot revolves almost exclusively around those central three individuals; Mary Poppins introduces about five additional characters who possess distinct story arcs and biographical information (Bert the chimney sweep, the laughing uncle, the bank president). To follow Nemo's plot, however, you have to keep track of almost twenty unique personalities: Nemo's three school chums and their teacher; the three recovering sharks, including Bruce, who “never had a father”; the six fish in the aquarium, led by Gill, whose scarred right side bonds him to Nemo with his weak left fin; Crush, the surfer-dude turtle; Nigel the pelican; the aquarium-owning dentist and his evil niece. Add to that a parade of about ten oceanographic cameos: whales, lobsters, jellyfish—all of which play instrumental roles in the narratives without having clearly defined personalities. As the father of a three-year-old, I can testify personally that you can watch Nemo dozens of times and still detect new information with each viewing, precisely because the narrative floats so many distinct story arcs at the same time. And where the child's mind is concerned, each viewing is training him or her to hold those multiple threads in consciousness, a kind of mental calisthenics.

To see the other real explosion in cinematic complexity, you have to look to the mid-list successes, where you will find significant growth in films built around fiendishly complex plots, demanding intense audience focus and analysis just to figure out what's happening on the screen. I think of this as a new microgenre of sorts: the mind-bender, a film designed specifically to disorient you, to mess with your head. The list includes Being John Malkovich, Pulp Fiction, L.A. Confidential, The Usual Suspects, Memento, Eternal Sunshine of the Spotless Mind, Run Lola Run, Twelve Monkeys, Adaptation, Magnolia, and Big Fish. (You might add The Matrix to this list, since its genius lay in cleverly implanting the mind-bender structure within a big-budget action picture.)

Some of these films challenge the mind by creating a thick network of intersecting plotlines; some challenge by withholding crucial information from the audience; some by inventing new temporal schemes that invert traditional relationships of cause and effect; some by deliberately blurring the line between fact and fiction. (All of these are classic techniques of the old cinematic avant-garde, by the way.) There are antecedents in the film canon, of course: some of the seventies conspiracy films, some of Hitchcock's psychological thrillers. But the mind-benders have truly flowered as a genre in the past ten years—and done remarkably well at the box office too. Most of the films cited above made more than $50 million from box-office receipts alone, and all of them made money for their creators—despite their reliance on narrative devices that might have had them consigned to the art house thirty years ago.

But elsewhere in the world of film, the trends are less dramatic. At the top of the box office charts, I think it's fair to say that Independence Day is no more complex than E.T.; nor is The Sixth Sense more challenging than The Exorcist. Hollywood still churns out a steady diet of junk films targeted at teens that are just as simple and formulaic as they were twenty years ago. Why, then, does the Sleeper Curve level off in the world of film?

I suspect the answer is twofold. First, narrative film is an older genre than television or games. The great explosion of cinematic complexity happened in the first half of the twentieth century, in the steady march from the trompe l'oeil and vaudeville diversions of the first movies through Birth of a Nation and The Jazz Singer all the way to Citizen Kane and Ben-Hur. As narrative cinema evolved as a genre, and as audiences grew comfortable with that evolution, the form grew increasingly adventurous in the cognitive demands it made on its audience—just as television and games have done over the past thirty years. But film has historically confronted a ceiling that has reined in its complexity, because its narratives are limited to two to three hours. The television dramas we examined tell stories that unfold over multiple seasons, each with more than a dozen episodes. The temporal scale for a successful television drama can be more than a hundred hours, which gives the storylines time to complexify, and gives the audience time to become familiar with the many characters and their multiple interactions. Similarly, the average video game takes about forty hours to play, the complexity of the puzzles and objectives growing steadily over time as the game progresses. By this standard, your average two-hour Hollywood film is the equivalent of a television pilot or the opening training sequence of a video game: there are only so many threads and subtleties you can introduce in that time frame. It's no accident that the most complex blockbuster of our era—the Lord of the Rings trilogy—lasts more than ten hours in its uncut DVD version. In the recipe for the Sleeper Curve, the most crucial ingredient is also the simplest one: time.

 

THE SLEEPER CURVE charts a trend in the culture: popular entertainment and media growing more complex over time. But I want to be clear about one thing: The Sleeper Curve does not mean that Survivor will someday be viewed as our Heart of Darkness, or Finding Nemo our Moby-Dick. The conventional wisdom the Sleeper Curve undermines is not the premise that mass culture pales in comparison with High Art in its aesthetic and intellectual riches. Some of the long-form television dramas of recent years may well find their way into some kind of canon years from now, along with a few of the mind-benders. Games will no doubt develop their own canon, if they haven't already. But that is another debate. The conventional wisdom that the Sleeper Curve does undermine is the belief that things are getting worse: the pop culture is on a race to the bottom, where the cheapest thrill wins out every time. That's why it's important to point out that even the worst of today's television—a show like The Apprentice, say—doesn't look so bad when measured against the dregs of television past. If you assume there will always be a market for pulp, at least the pulp on The Apprentice has some connection to people's real lives: their interoffice rivalries, their battles with the shifting ethics and sexual politics of the corporate world. It's not the most profound subject matter in the history of entertainment, but compared with the pabulum of past megahits—compared with Mork & Mindy or Who's the Boss?—it's pure gold.

But in making this comparative argument, some might say I have set the bar too low. Perhaps the general public's appetite for pulp entertainment is not a sociological constant. If you think that the ecosystem of television will always serve up shows that exist on a spectrum of quality—some trash and some classics, and quite a bit in the middle—then it's a good sign when the trash seems to be getting more mentally challenging as the medium evolves. But if it's possible to avoid the trash altogether—a nation of PBS viewers—then we shouldn't be thankful for programs whose saving grace is solely that they aren't quite as dumb as the shows used to be.

When people hold out the possibility of such a cultural utopia, they often point to the literary best-seller lists of yesteryear, which allegedly show the masses devouring works of great intricacy and artistic merit. The classic case of highbrow erudition matched with popular success is Charles Dickens, who for a stretch of time in the middle of the nineteenth century was the most popular author writing in the English language, and also (with the possible exception of George Eliot) the most innovative. If the Victorians were willing to line up en masse to read Bleak House—with its thousand pages and byzantine plot twists, not to mention its artistic genius—why should we settle for The Apprentice?

It is true that Dickens's brilliance lay at least partially in his ability to expand the formal range of the novel while simultaneously building a mass audience eager to follow along. Indeed, Dickens helped to invent some of the essential conventions of mass entertainment—large groups of strangers united by a shared interest in a serialized narrative—that we now take for granted. That he managed to create enduring works of art along the way is one of the miracles of literary history, though of course it took the Cultural Authorities nearly a century to make him an uncontested member of the literary canon, partially because his novels had been tainted by their commercial success, and partially because Dickens's comic style made his novels appear less serious than those of his contemporaries.
