The End of Absence: Reclaiming What We've Lost in a World of Constant Connection
Michael Harris
Without such engineered absences (a weekend without texting, a night without screens), our children suffer as surely as do kids with endless access to fast food. The result is a digital native population that’s less well rounded than we know they could be. In 2012, Elon University worked with the Pew Internet and American Life Project to release a report that compiled the opinions of 1,021 critics, experts, and stakeholders, asking for their thoughts on digital natives.
Their boiled-down message was that young people now count on the Internet as “their external brain” and have become skillful decision makers—even while they also “thirst for instant gratification and often make quick, shallow choices.” Some of those experts were optimistic about the future brains of the young. Susan Price, CEO and chief Web strategist at San Antonio’s Firecat Studio, suggested that “those who bemoan the perceived decline in deep thinking . . . fail to appreciate the need to evolve our processes and behaviors to suit the new realities and opportunities.”
Price promises that the young (and those who are young at heart) are developing new skills and standards better suited to their own reality than to the outmoded reality of, say, 1992.
Those “new standards” may, one presumes, place a priority on the processing of information rather than the actual absorption of information. In Socrates’ terms, we’re talking about reminiscence instead of memory, and the appearance of omniscience. Meanwhile, the report’s coauthor, Janna Anderson, noted that while many respondents were enthusiastic about the future of such minds, there was a clear dissenting voice:
Some said they are already witnessing deficiencies in young people’s abilities to focus their attention, be patient and think deeply. Some experts expressed concerns that trends are leading to a future in which most people become shallow consumers of information, endangering society.
Several respondents took the opportunity, in fact, to cite George Orwell’s dystopian fantasy 1984. Citizens are always manipulated by some authority or other, but an Orwellian future all but wipes out consciousness of (and criticism of) the subjugation of the masses. In order to keep youths noticing those manipulations in our own pseudo-Orwellian world, we first need to teach them how our technologies evolved.
• • • • •
Charles Darwin’s The Origin of Species may have outlined, back in 1859, an idea that explains our children’s relationship with iPhones and Facebook. Here’s the elevator-pitch version of his book: If you have something that copies itself with slight variations, and if that something exists in a competitive environment that will weed out those less suited to the given environment, then you must get what the American philosopher Daniel Dennett has called “design out of chaos without the aid of mind.” Evolution is not, then, some magical occurrence, but a mathematical certainty. Given an item’s ability to copy itself with variation, and given a competitive environment, you must have evolution.
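The recipe is mechanical enough to run. Below is a minimal sketch of my own (not anything from the book) in Python: replicators copy themselves with slight random variation, the environment culls the less suited, and "design" accumulates with no mind directing it.

```python
import random

def copy_with_variation(quality, mutation=0.1):
    """Imperfect replication: each copy differs slightly from its parent."""
    return quality + random.gauss(0, mutation)

def evolve(generations=200, population_size=100):
    # Start from "chaos": every replicator begins with zero design quality.
    population = [0.0] * population_size
    for _ in range(generations):
        # Replication with variation.
        offspring = [copy_with_variation(q) for q in population]
        # Competition: only the best-suited half survives to replicate again.
        population = sorted(population + offspring, reverse=True)[:population_size]
    return sum(population) / population_size

if __name__ == "__main__":
    random.seed(0)
    print(f"Average 'design quality' after selection: {evolve():.2f}")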
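```

Run it and the average climbs steadily from nothing, which is Dennett's point in miniature: no designer anywhere, only copying, variation, and culling.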
So is the goop of our DNA the only thing in the universe that can meet Darwin’s requirements for evolution? The English evolutionary biologist Richard Dawkins took the next logical step in 1976 and coined one of the most important, misunderstood, and bandied-about terms of our age: the meme.
The “meme” (from the ancient Greek mimeme, which means “that which is imitated”) is an extension of Darwin’s Big Idea past the boundaries of genetics. A meme, put simply, is a cultural product that is copied. A tune is one; so is a corporate logo, a style of dress, or a literary cliché like “the hero’s journey.” We humans are enamored of imitation and so become the ultimate “meme machines.” The young are best of all: Twerking videos and sleepover selfies are memes par excellence. Memes—pieces of culture—copy themselves through history and enjoy a kind of evolution of their own, and they do so riding on the backs of successful genes: ours.
Our genes and memes have been working to shape us since humans first started copying one another’s raw vocalizations. But now we may be witness to a third kind of evolution, one played out by our technologies. This new evolution is posited not by a Silicon Valley teen, but by a sixty-one-year-old woman in rural England named Susan Blackmore. Just as Darwinism submits that genes good at replicating will naturally become the most prevalent, Blackmore submits that technologies with a knack for replication will obviously rise to dominance. These “temes,” as she’s called these new replicators, could be copied, varied, and selected as digital information—thus establishing a new evolutionary process (and one far speedier than our genetic model). Evolutionary theory holds that given a million technological efforts, some are bound to be better at making us addicted to them, and these give rise, organically (as it were), to more and more addictive technologies, leaving each generation of humans increasingly in service to, and in thrall to, inanimate entities. Until we end up . . . well, where we are.
Blackmore’s work offers a fascinating explanation for why each generation seems less and less capable of managing that solitude, less likely to opt for technological disengagement. She suggests that technology-based memes—temes—are a different kind of replicator from the basic memes of everyday material culture, the ones Dawkins was describing. What sort of difference is there? I wanted to know. “The most important difference is the fidelity of copying,” she told me. This is important because a meme’s ability to propagate grows as its fidelity rate increases. “Most memes . . . we forget how often we get them wrong.” (Oral traditions of storytelling, for example, were characterized by constant twists in the tale.) “But with digital machines the fidelity is almost 100 percent. As it is, indeed, with our genes.” This is a startling thought, though a simple enough one: By delivering to the world technologies capable of replicating information with the same accuracy as DNA, we are playing a grand game indeed. The fidelity of our earliest memetic acts would have improved significantly with the advent of writing, of course, and then again thanks to the printing press, which might (like us) be called a meme machine. But we now have near perfect replication online.
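Blackmore's fidelity point can be put in numbers. If each act of copying preserves a meme intact with some fixed probability, the chance that it survives a long chain of copies unchanged is that probability raised to the number of copies. The sketch below uses illustrative fidelity values of my own choosing (not figures from Blackmore) to show why near-perfect digital copying is such a break from oral transmission.

```python
def survival_probability(fidelity: float, copies: int) -> float:
    """Chance a meme comes through a chain of copies unchanged,
    assuming each copy is faithful with independent probability `fidelity`."""
    return fidelity ** copies

# Illustrative (made-up) fidelity rates for two kinds of transmission.
for label, fidelity in [("oral retelling", 0.95), ("digital copy", 0.9999)]:
    p = survival_probability(fidelity, 1000)
    print(f"{label:15s} fidelity {fidelity}: intact after 1,000 copies "
          f"with probability {p:.3g}")
```

With these toy numbers the oral meme is essentially certain to have mutated after a thousand retellings, while the digital one survives intact about nine times out of ten.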
We are now becoming, by Blackmore’s estimation, teme machines—servants to the evolution of our own technologies. The power shifts very quickly from the spark of human intention to the absorption of human will by a technology that seems to have intentions of its own.
Kevin Kelly takes this notion to the nth degree in his 2010 book, What Technology Wants, where he anthropomorphizes technologies and asks what they would like us to do. “The evolution of technology converges in much the same manner as biological evolution,” he argues. He sees parallels to bioevolution in the fact that the number of lines of code in Microsoft Windows, for example, has multiplied tenfold since 1993, becoming more complex as time goes on just as biological organisms tend to do.
But viewed in the clear light of morning, we’ll likely find there was no robotic villain behind the curtain. Your iPhone does not “want” anything in the way that we perceive “want” to exist. Instead of animal “want,” we will confront only the cool, unthinking intelligence of evolution’s law. And, to be sure, our own capitalist drive pushes these technologies, these temes, to evolve (if that’s what they’re doing). Consider the fact that Google tested forty-one shades of blue on its toolbar to see which elicited the most favorable response. We push the technology down an evolutionary path that results in the most addictive possible outcome. Yet even as we do this, it doesn’t feel as though we have any control. It feels, instead, like a destined outcome—a fate.
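The forty-one-shades experiment is selection in miniature: generate variants, expose them to users, ship whichever one the environment rewards. Here is a rough sketch of that loop, with invented click-through rates standing in for whatever Google actually measured.

```python
import random

# Hypothetical variants and their (unknown) true appeal to users.
# In a real A/B test these rates are exactly what the experiment estimates.
true_click_rates = {f"blue_{i:02d}": random.uniform(0.02, 0.04) for i in range(41)}

def run_trial(variant: str, visitors: int = 10_000) -> float:
    """Show one shade to a batch of visitors and record its observed click rate."""
    clicks = sum(random.random() < true_click_rates[variant] for _ in range(visitors))
    return clicks / visitors

# Selection: whichever shade performs best in the trial is the one that ships.
observed = {variant: run_trial(variant) for variant in true_click_rates}
winner = max(observed, key=observed.get)
print(f"shipped variant: {winner} (observed click rate {observed[winner]:.2%})")
```

Nothing in this loop "wants" anything; it simply keeps whatever users respond to most, generation after generation.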
• • • • •
Blackmore’s conception, if thrilling, is also harrowing. Genes must cooperate with us to get copied into the next generation, and they produce animals that cooperate with one another. And temes (being bits of information, not sentient creatures) need humans to build the factories and servers that allow them to replicate, and they need us to supply the power that runs the machines. But as temes evolve, they could demand more than a few servers from future generations of humans. Blackmore continued:
What really scares me is that the accelerating evolution of temes and their machinery requires vast amounts of energy and material resources. We will go on supplying these as long as we want to use the technology, and it will adapt to provide us what we want while massively expanding of its own accord. Destruction of the climate and of earth’s ecosystems is the inevitable outlook. It is this that worries me—not whether they are amoral or not.
Blackmore’s vision for our children’s technological future may seem nightmarish to the point of fantasy, especially since she is essentially forecasting the future, suggesting eventualities that cannot be verified the way we would like.
Yet when I think now of all that Blackmore told me, and of the eerie promise of decoded neurofeedback, when I think of the advancing multitudes of youths (and adults, too) who seem so perilously entranced by inanimate tools, I do sometimes lose heart. Then again, I want to counter all that with an opposing vision.
The best way I can describe this optimistic alternative is to call up a scene from that 1999 gem of a movie The Matrix. In the Wachowski siblings’ film, a population enslaved by computers has been turned into a warehouse of mere battery cells, kept complacent by a mass delusion, the Matrix, which is fed into human brains and keeps them thinking they are living their own lives, freely, on the sunny streets of the world. In fact, as they dream out their false experiences, their physical bodies are held in subterranean caverns, each sleeping human jacked into a womblike pod. In my favorite scene, Neo, our hero, is torn from this dreamworld and awakens in that great dark chamber. Gasping for air, the first real air he has ever breathed, Neo stares out with stunned eyes and sees the raw world at last.
The Matrix is a technologically derived web of illusions, a computer-generated dreamworld that’s been built to keep us under control. The people in its thrall are literally suspended, helpless in their servitude to a larger technological intelligence. The solution to this very real human problem is the same solution presented by Buddhism and Gnosticism—we must, like Neo, awaken.
• • • • •
It’s becoming more and more obvious. I live on the edge of a Matrix-style sleep, as do we all. On one side: a bright future where we are always connected to our friends and lovers, never without an aid for reminiscence or a reminder of our social connections. On the other side: the twilight of our pre-Internet youths. And wasn’t there something . . . ? Some quality . . . ?
I began this chapter lamenting little Benjamin’s confusion over the difference between a touch-sensitive iPad screen and a hard copy of Vanity Fair. But now I have a confession to make. I’m not much better off. This is not a youth-only phenomenon.
A 2013 study from the University of Michigan found that those of us in our late thirties have now reached the point of having as many electronic interactions as we have face-to-face interactions. What a dubious honor that is—to be the first generation in history to have as many exchanges with avatars as with people. I wonder, sometimes, if this means I’ll start to treat friends and family as avatars in person. Occasionally, I’m hit with how weirdly consistent a group of people appears during a dinner party—how weird it is that they aren’t changing or scrolling like thumbnail portraits on a Twitter feed, being replaced, or flicking off. I’m suffering the same brain slips that young Benjamin suffered when he tried to use a hard-copy magazine as a digital interface. The only difference is that I’m more freaked out.
Increasingly, I notice small moments when I treat hard-copy material as though it were digital. I’ve seen my fingers reach instinctively to zoom in to a printed photo or flick across a paper page as though to advance the progress of an e-book. These slips are deeply disturbing, a little like early signs of dementia. And they crop up in more meaningful scenarios, too. Just the other day, while discussing a particularly dreadful acquaintance with a friend of mine, I actually said, “Ugh, unfollow,” using Twitter’s term for removing an avatar from one’s ranks. And it wasn’t a semantic joke, is the thing. I clicked a button in my head and felt that jerk’s swift removal from my mental address book.
There is one key difference here between young Benjamin and me. I am aware of my own confusion and can confront it. I can still recall my analog youth.
In the quiet suburb where I was raised, there was a green hill near our house, a place where no one ever went. It was an easy trek, over the backyard fence and up a dirt path, and I would go there on weekends with a book if I wanted to escape the company of family or merely remove myself from the stultifying order of a household. Children do need moments of solitude as well as moments of healthy interaction. (How else would they learn that the mind makes its own happiness?) But too often these moments of solitude are only stumbled upon by children, whereas socialization is constantly arranged. I remember—I was nine years old—I remember lying on the green hill and reading my book or merely staring for a long, long time at the sky. There would be a crush of childish thoughts that would eventually dissipate, piece by piece, until I was left alone with my bare consciousness, an experience that felt as close to religious rapture as I ever had. I could feel the chilled sunlight on my face and was only slightly awake to the faraway hum of traffic. This will sound more than a little fey, but that young boy on the hillside did press daisies into his book of poetry. And just the other day, when I took that book down from its dusty post on my shelf, the same pressed flowers fell out of its pages (after a quarter century of stillness) and dropped onto my bare toes. There was a deep sense memory, then, that returned me to that hushed state of mind on the lost green hill, a state that I have so rarely known since. And to think: That same year, a British computer scientist at CERN called Tim Berners-Lee was writing the code for the World Wide Web. I’m writing these words on the quarter-century anniversary of his invention.