
According to neural Darwinism, synapse elimination works in tandem with creation to store memories. Likewise, we'd expect the creation of neurons to be accompanied by a parallel process of elimination. This pattern holds true for many types of cells, which die throughout the body during development. Such death is said to be “programmed,” because it resembles suicide. Cells naturally contain self-destruct mechanisms and can initiate them when triggered by the appropriate stimuli.

You might think that your hand grew fingers by adding cells. No—actually, cell death etched away at your embryonic hand to create spaces between your fingers. If this process fails to happen properly, a baby is born with fingers fused together, a minor birth defect that can be corrected by surgery. So cell death acts like a sculptor, chiseling away material rather than adding it.

This is the case for the brain as well as the body. Roughly as many of your neurons died as survived while you floated in the womb. It may seem wasteful to create so many neurons and then kill them off. But if “survival of the fittest” was an effective way of dealing with synapses, it might also work well for neurons. Perhaps the developing nervous system refines itself through survival of neurons that make the “right” connections, coupled with elimination of those that don't. This Darwinian interpretation has been proposed not only for development but also for creation and elimination of neurons in adulthood, which I'll call regeneration.

If regeneration is so great for learning, why doesn't the neocortex do it? Perhaps this structure needs more stability to retain what has already been learned, and must settle for less plasticity in order to achieve that. But Gould's report of new neocortical neurons is not alone in the literature; similar studies have been published sporadically since the 1960s. Perhaps these scattered papers contain some grain of truth that's contrary to the current thinking among neuroscientists.

We could resolve the controversy by hypothesizing that the degree of neocortical plasticity depends on the nature of the animal's environment. Plasticity might well plummet in captivity, for confinement in small cages must be dull compared with life in the wild, and presumably demands little learning. The brain could respond by minimizing the creation of neurons, and most of those created might not survive elimination for long. In this scenario, new neurons indeed exist, but in small and fluctuating numbers that are hard to see, which would explain why researchers are split. It's entirely possible that more natural living conditions would foster learning and plasticity, and new neurons would become more numerous.

You might not be convinced by this speculation, but it illustrates a general moral of the Rakic–Gould story: We should be cautious about blanket denials of regeneration, rewiring, or other types of connectome change. A denial has to be accompanied by qualifications if it's to be taken seriously. Furthermore, the denial may well cease to be valid under some other conditions.

As neuroscientists have learned more about regeneration, simply counting the number of new neurons has become too crude. We'd like to know why certain neurons survive while others are eliminated. In the Darwinian theory, the survivors are the ones that manage to integrate into the network of old neurons by making the right connections. But we have little idea what “right” means, and there is little prospect of finding out unless we can see connections. That's why connectomics will be important for figuring out whether and how regeneration serves learning.

 

I've talked about four types of connectome change—reweighting, reconnection, rewiring, and regeneration. The four R's play a large role in improving “normal” brains and healing diseased or injured ones. Realizing the full potential of the four R's is arguably the most important goal of neuroscience. Denials of one or more of them were the basis of past claims of connectome determinism. We now know that such claims are too simplistic to be true, unless they come with qualifications.

Furthermore, the potential of the four R's is not fixed. Earlier I mentioned that the brain can increase axonal growth after injury. In addition, damage to the neocortex is known to attract newly born neurons, which migrate into the zone of injury and become another exception to the “no new neurons” rule. These effects of injury are mediated by molecules that are currently being researched. In principle we should be able to promote the four R's through artificial means, by manipulating such molecules. That's the way genes exert their influence on connectomes, and future drugs will do the same. But the four R's are also guided by experiences, so finer control will be achieved by supplementing molecular manipulations with training regimens.

This agenda for a neuroscience of change sounds exciting, but will it really put us on the right track? It rests on certain important assumptions that are plausible but still largely unverified. Most crucially, is it true that changing minds is ultimately about changing connectomes? That's the obvious implication of theories that reduce perception, thought, and other mental phenomena to patterns of spiking generated by patterns of neural connections. Testing these theories would tell us whether connectionism really makes sense. It's a fact that the four R's of connectome change exist in the brain, but right now we can only speculate about how they're involved in learning. In the Darwinian view, synapses, branches, and neurons are created to endow the brain with new potential to learn. Some of this potential is actualized by Hebbian strengthening, which enables certain synapses, branches, and neurons to survive. The rest are eliminated to clear away unused potential. Without careful scrutiny of these theories, it's unlikely that we'll be able to harness the power of the four R's effectively.
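
To make that Darwinian picture concrete, here is a minimal toy simulation (my own sketch, not from the book): candidate synapses are created at random with weak weights, a Hebbian rule strengthens those whose two neurons are active together, and synapses that stay weak are eliminated. The network size, learning rate, and pruning threshold are all assumed for illustration.

    # Toy illustration, not from the book: "Darwinian" synapse turnover.
    # New synapses sprout at random with weak weights, a Hebbian rule
    # strengthens those whose neurons fire together, and synapses that
    # stay weak are eliminated. All parameter values are assumptions.
    import random

    N = 20                # neurons in the toy network
    ETA = 0.1             # Hebbian learning rate (assumed)
    PRUNE_BELOW = 0.05    # synapses weaker than this are eliminated
    NEW_PER_STEP = 5      # candidate synapses created each step

    synapses = {}         # (pre, post) -> weight

    def step(activity):
        """One round of creation, Hebbian reweighting, and elimination."""
        # Creation: sprout a few weak candidate synapses at random.
        for _ in range(NEW_PER_STEP):
            pre, post = random.randrange(N), random.randrange(N)
            if pre != post:
                synapses.setdefault((pre, post), 0.02)
        # Reweighting: strengthen a synapse when pre and post are co-active.
        for (pre, post), w in synapses.items():
            synapses[(pre, post)] = w + ETA * activity[pre] * activity[post]
        # Elimination: clear away the synapses that were never strengthened.
        for key in [k for k, w in synapses.items() if w < PRUNE_BELOW]:
            del synapses[key]

    # Neurons 0-4 tend to fire together; only synapses among them survive.
    for _ in range(50):
        group_active = random.random() < 0.5
        activity = [1.0 if (i < 5 and group_active) else 0.0 for i in range(N)]
        step(activity)

    print(len(synapses), "surviving synapses, all within the co-active group")

In this sketch the connections that survive are exactly the ones that were used, a crude stand-in for the “survival of the fittest” among synapses described above.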

To critically examine the ideas of connectionism, we must subject them to empirical investigation. Neuroscientists have danced around this challenge for over a century without having truly taken it on. The problem is that the doctrine's central quantity—the connectome—has been unobservable. It has been difficult or impossible to study the connections between neurons, because the methods of neuroanatomy have only been up to the coarser task of mapping the connections between brain regions.

We're getting there—but we have to speed up the process radically. It took over a dozen years to find the connectome of the worm C. elegans, and finding connectomes in brains more like our own is of course much more difficult. In the next part of this book I'll explore the advanced technologies being invented for finding connectomes and consider how they'll be deployed in the new science of connectomics.

Part IV: Connectomics
8. Seeing Is Believing

Smelling whets the appetite, and listening saves relationships, but seeing is believing. More than any other sense, we trust our eyes to tell us what is real. Is this just a biological accident, the result of the particular way in which our sense organs and brains happened to evolve? If our dogs could share their thoughts by more than a bark or a wag of the tail, would they tell us that smelling is believing? As a bat dines on an insect, captured in the darkness of night by following the echoes of ultrasonic chirps, does it pause to think that hearing is believing?

Or perhaps our preference for vision is more fundamental than biology, based instead on the laws of physics. The straight lines of light rays, bent in an orderly fashion by a lens, preserve spatial relationships between the parts of an object. And images contain so much information that—until the development of computers—they could not easily be manipulated to create forgeries.

Whatever the reason, seeing has always been central to our beliefs. In the lives of many Christian saints, visions of God—apocalyptic or serene—often triggered the conversion of pagans into believers. Unlike religion, science is supposed to employ a method based on the formulation and empirical testing of hypotheses. But science, too, can be propelled by visual revelations, the sudden and simple sight of something amazing. Sometimes science is just seeing.

In this chapter I'll explore the instruments that neuroscientists have created to uncover a hidden reality. This might seem like a distraction from the real subject at hand—the brain—but I hope to convince you otherwise. Military historians dwell on the cunning gambits of daring generals, and the uneasy dance of soldiers and statesmen. Yet in the grand scheme of things, such tales may matter less than the backstory of technological innovation. Through the invention of the gun, the fighter plane, and the atomic bomb, weapon makers have repeatedly transformed the face of war more than any general ever did.

Historians of science likewise glorify great thinkers and their conceptual breakthroughs. Less heralded are the makers of scientific instruments, but their influence may be more profound. Many of the most important scientific discoveries followed directly on the heels of inventions. In the seventeenth century Galileo Galilei pioneered telescope design, increasing magnifying power from 3× to 30×. When he pointed his telescope at the planet Jupiter, he discovered moons orbiting around it, which overturned the conventional wisdom that all heavenly bodies circled the Earth.

In 1912 the physicist Lawrence Bragg showed how to use x-rays to determine the arrangement of atoms in a crystal, and three years later, at the tender age of twenty-five, he won the Nobel Prize for his work. Later on, x-ray crystallography enabled Rosalind Franklin, James Watson, and Francis Crick to discover the double-helix structure of DNA.

Have you heard the joke about two economists walking down the street? “Hey, there's a twenty-dollar bill lying on the sidewalk!” one of them says. “Don't be silly,” says the other. “If there were, someone would have picked it up.” The joke makes fun of the efficient market hypothesis (EMH), the controversial claim that there exists no fair and certain method of investment that can outperform the average return for a financial market. (Bear with me—you'll see the relevance soon.)

Of course, there are uncertain ways of beating the market. You can glance at a news story about a company, buy stock, and gloat when it goes up. But this is no more certain than a good night in Vegas. And there are unfair ways of beating the market. If you work for a pharmaceutical company, you might be the first to know that a drug is succeeding in clinical trials. But if you buy stock in your company based on such nonpublic information, you could be prosecuted for insider trading.

Neither of these methods fulfills the “fair” and “certain” criteria of the EMH, which makes the strong claim that no such method exists. Professional investors hate this claim, preferring to think they succeed by being smart. The EMH says that either they're lucky or they're unscrupulous.

The empirical evidence for and against the EMH is complex, but the theoretical justification is simple: If new information indicates that a stock will appreciate in value, then the first investors to know that information will bid the price up. And thus, says the EMH, there are no good investment opportunities available, just as there are never (well, almost never) twenty-dollar bills lying on the sidewalk.

What does this have to do with neuroscience? Here's another joke: “Hey, I just thought of a great experiment!” one scientist says. “Don't be silly,” says the other. “If it were a great experiment, someone would already have done it.” There's an element of truth to this exchange. The world of science is full of smart, hard-working people. Great experiments are like twenty-dollar bills on the sidewalk: With so many scientists on the prowl, there aren't many left. To formalize this claim, I'd like to propose the efficient science hypothesis (ESH): There exists no fair and certain method of doing science that can outperform the average.

How can a scientist make a truly great discovery? Alexander Fleming discovered and named penicillin after finding that one of his bacterial cultures had accidentally become contaminated by the fungus that produces the antibiotic. Breakthroughs like this are serendipitous. If you want a more reliable method, it might be better to search for an “unfair” advantage. Technologies for observation and measurement might do the trick.

After hearing rumors of the invention of the telescope in Holland, Galileo quickly built one of his own. He experimented with different lenses, learning how to grind glass himself, and eventually managed to make the best telescopes in the world. These activities uniquely positioned him to make astronomical discoveries, because he could examine the heavens using a device others didn't have. If you're a scientist who purchases instruments, you could strive for better ones than your rivals by excelling at fundraising. But you'd gain a more decisive advantage by building an instrument that money can't buy.

Suppose you think of a great experiment. Has it already been done? Check the literature to find out. If no one has done it, you'd better think hard about why not. Maybe it's not such a great idea after all. But maybe it hasn't been done because the necessary technologies did not exist. If you happen to have access to the right machines, you might be able to do the experiment before anyone else.

My ESH explains why some scientists spend the bulk of their time developing new technologies rather than relying on those that they can purchase: They are trying to build their unfair advantage. In his 1620 treatise the New Organon, Francis Bacon wrote:

 

It would be an unsound fancy and self-contradictory to expect that things which have never yet been done can be done except by means which have never yet been tried.

I would strengthen this dictum to:

 

Worthwhile things that have never yet been done can only be done by means that have never yet existed.

It's at those moments when new means exist—when new technologies have been invented—that we see revolutions in science.
