Authors: Sebastian Seung
The Romans used the phrase tabula rasa to refer to the wax tablets mentioned by Plato. It's traditionally translated as “blank slate,” since little chalkboards replaced wax tablets in the eighteenth and nineteenth centuries. In “An Essay Concerning Human Understanding,” the associationist philosopher John Locke resorted to yet another metaphor:
Let us then suppose the mind to be, as we say, white paper, void of all characters, without any ideas. How comes it to be furnished? Whence comes it by that vast store which the busy and boundless fancy of man has painted on it with an almost endless variety? Whence has it all the materials of reason and knowledge? To this I answer, in one word, from experience.
A sheet of white paper contains zero information but unlimited potential. Locke argued that the mind of a newborn baby is like white paper, ready to be written on by experience. In our theory of memory storage, we assumed that all neurons started out connected to all other neurons. The synapses were weak, ready to be “written on” by Hebbian strengthening. Since all possible connections existed, any cell assembly could be created. The network had unlimited potential, like Locke's white paper.
Unfortunately for the theory, the assumption of all-to-all connectivity is flagrantly wrong. The brain is actually at the opposite extreme of sparse connectivity: only a tiny fraction of all possible connections actually exist. A typical neuron is estimated to have tens of thousands of synapses, far fewer than the brain's total of 100 billion neurons. There's a very good reason for this: Synapses take up space, as do the neurites they connect. If every neuron were connected to every other neuron, your brain would swell to a fantastic volume.
So the brain has to make do with a limited number of connections. This could present a serious problem when you are learning associations. What if your Brad and Angelina neurons had not been connected at all? When you started seeing them together, Hebbian plasticity could not have succeeded in linking the neurons into a cell assembly. There is no potential to learn an association unless the right connections already exist.
Especially if you think a lot about Brad and Angelina, it's likely that each is represented by many neurons in your brain, rather than just one. (In Chapter 4 I argued that this “small percentage” model is more plausible than the “one and only” model.) With so many neurons available, it's likely that a few of your Brad neurons happen to be connected to a few of your Angelina neurons. That might be enough to create a cell assembly in which activity can spread from Brad neurons to Angelina neurons during recollection, or vice versa. In other words, if every idea is redundantly represented by many neurons, Hebbian learning can work in spite of sparse connectivity.
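To see why redundancy helps, here is a minimal simulation; the connection probability and group sizes are invented purely for illustration. Assuming each possible Brad-to-Angelina synapse exists independently with a small probability, the chance that at least one link exists grows rapidly with the number of neurons per idea:

```python
import random

def prob_linked(n_per_idea, p_connect, trials=10_000):
    """Estimate the chance that at least one synapse links the 'Brad'
    group to the 'Angelina' group, assuming each possible connection
    exists independently with probability p_connect (an illustrative
    assumption, not a measured number)."""
    hits = 0
    for _ in range(trials):
        n_possible = n_per_idea * n_per_idea
        if any(random.random() < p_connect for _ in range(n_possible)):
            hits += 1
    return hits / trials

random.seed(0)
print(prob_linked(1, 0.05))    # about 0.05: a single pair is rarely linked
print(prob_linked(10, 0.05))   # near 1.0: redundancy makes a link near-certain
```

Analytically the probability is 1 - (1 - p)^(n*n), which the simulation approximates; with ten neurons per idea and a 5 percent connection rate, at least one link is all but guaranteed.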
Similarly, a synaptic chain can be created by Hebbian plasticity even if some connections are missing. Imagine removing the connection represented by the dashed arrow shown in Figure 24. This would break some pathways, but there would still be others extending from the beginning to the end, so the synaptic chain could still function. Each idea in the sequence is represented by only two neurons in the diagram, but adding more neurons would make the chain even more able to withstand missing connections. Again, a redundant representation enables learning to establish associations in spite of sparse connectivity.
Figure 24. Elimination of a redundant connection in a synaptic chain
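The chain's tolerance for missing connections can be checked the same way. In this sketch (the layer names and sizes are invented, loosely following the figure's two-neurons-per-idea layout), a breadth-first search tests whether activity can still flow from the first idea to the last after a synapse is deleted:

```python
from collections import deque

def chain_works(layers, edges):
    """Return True if activity can flow from the first layer to the
    last over the remaining synapses (a breadth-first search)."""
    start, goal = set(layers[0]), set(layers[-1])
    seen, queue = set(start), deque(start)
    while queue:
        node = queue.popleft()
        if node in goal:
            return True
        for pre, post in edges:
            if pre == node and post not in seen:
                seen.add(post)
                queue.append(post)
    return False

# Hypothetical chain: three ideas, two neurons each.
layers = [("a1", "a2"), ("b1", "b2"), ("c1", "c2")]
edges = [(pre, post) for x, y in zip(layers, layers[1:])
         for pre in x for post in y]

edges.remove(("a1", "b1"))          # delete one connection (the dashed arrow)
print(chain_works(layers, edges))   # True: other pathways still span the chain
```

Deleting every connection out of the first layer, by contrast, would break the chain; with more neurons per idea, ever more deletions can be tolerated.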
The ancients already knew the paradoxical fact that remembering more information is often easier than remembering less. Orators and poets exploited this fact in a mnemonic technique called the method of loci. To memorize a list of items, they imagined walking through a series of rooms in a house and finding each item in a different room. The method may have worked by increasing the redundancy of each item's representation.
So sparse connectivity could be a major reason why we have difficulty memorizing information. Because the required connections don't exist, Hebbian plasticity can't store the information. Redundancy solves this problem somewhat, but could there be some other solution?
Why not create new synapses “on demand,” whenever a new memory needs to be stored? We could imagine a variant of Hebb's rule of plasticity: “If neurons are repeatedly activated simultaneously, then new connections are created between them.”
Indeed this rule would create cell assemblies, but it conflicts with a basic fact about neurons: There is negligible crosstalk between electrical signals in different neurites. Let's consider a pair of neurons that contact each other without a synapse. They could create one, but it's implausible that this event could be triggered by simultaneous activity. Because there is no synapse, the neurons can't “hear” each other or “know” they are spiking simultaneously. By similar arguments, the “on-demand” theory of creation doesn't seem plausible for synaptic chains either.
So let's consider another possibility: Perhaps synapse creation is a random process. Recall that neurons are connected to only a subset of the neurons that they contact. Perhaps every now and then a neuron randomly chooses a new partner from its neighbors and creates a synapse. This may seem counterintuitive, but think about the process of making friends. Before you speak with someone, it's almost impossible to know whether you should be friends. The initial encounter might as well be random: at a cocktail party, in the gym, or even on the street. Once you start to talk, you develop a sense of whether your relationship could strengthen into friendship. This process isn't random, as it depends on compatibility. In my experience, people with the richest sets of friends are open to chance meetings but also very skilled at recognizing new people with whom they “click.” The random and unpredictable nature of friendship is a large part of its magic.
Similarly, the random creation of synapses allows new pairs of neurons to “talk” with each other. Some pairs turn out to be “compatible,” because they are activated simultaneously or sequentially as the brain attempts to store memories. Their synapses are strengthened by Hebbian plasticity to create cell assemblies or synaptic chains. In this way, the synapses for learning an association can be created even if they don't initially exist. We may eventually succeed at learning after failing at first, because our brains are continually gaining new potential to learn.
Synapse creation alone, however, would eventually lead to a network that is wasteful. In order to economize, our brains would need to eliminate the new synapses that aren't used for learning. Perhaps these synapses first become weaker by the mechanisms discussed earlier (recall what happens when you are unlearning the Brad-Jen connection), and the weakening eventually causes the synapses to be eliminated.
You could think of this as a kind of “survival of the fittest” for synapses. Those involved in memories are the “fittest,” and get stronger. Those not involved get weaker, and are finally eliminated. New synapses are continually created to replenish the supply, so that the overall number stays constant. Versions of this theory, known as neural Darwinism, have been developed by a number of researchers, including Gerald Edelman and Jean-Pierre Changeux.
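A toy version of this synaptic “survival of the fittest” can be written in a few lines. Everything here is an invented illustration, not a model from the literature: twenty neurons, two pairs that experience repeatedly co-activates, random synapse creation at a weak initial strength, Hebbian strengthening for the co-active pairs, and gradual decay followed by elimination for everything else.

```python
import random

random.seed(1)
neurons = range(20)
coactive = {(2, 7), (3, 11)}   # pairs our "experiences" activate together
weights = {}                   # current synapses: pair -> strength

for step in range(5000):
    # Random creation: wire up one new pair at a weak initial strength.
    pair = tuple(sorted(random.sample(neurons, 2)))
    weights.setdefault(pair, 0.1)
    # Hebbian reweighting: co-activated pairs strengthen, the rest decay.
    for p in list(weights):
        if p in coactive:
            weights[p] = min(1.0, weights[p] + 0.05)
        else:
            weights[p] -= 0.01
            if weights[p] <= 0:        # elimination: "unfit" synapses
                del weights[p]         # disappear entirely

fit = sorted(p for p in weights if p in coactive)
print(fit)   # the co-activated pairs survive at full strength
```

After a few thousand steps, only the co-activated synapses remain strong; the randomly created ones decay away within a handful of steps, while creation keeps replenishing the supply, just as the theory proposes.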
The theory argues that learning is analogous to evolution. Over time, a species changes in ways that might seem intelligently designed by God. But Darwin argued that changes are actually generated randomly. We end up noticing only the good changes, because the bad ones are eliminated by natural selection, the “survival of the fittest.” Similarly, if neural Darwinism is correct, it might seem that synapses are “intelligently” created, that they are generated “on demand” only if needed for cell assemblies or synaptic chains. But in fact synapses are created randomly, and then the unnecessary ones are eliminated.
In other words, synapse creation is a “dumb,” random process that endows the brain only with the potential for learning. By itself, the process is not learning, contrary to the neo-phrenological theory mentioned earlier. This is why a drug that increases synapse creation might be ineffective for improving memorization, unless the brain also succeeds at eliminating the larger number of unnecessary synapses.
Neural Darwinism is still speculative. The most extensive studies of synapse elimination are by Jeff Lichtman, who has focused on the synapses from nerves to muscles. Early in development, connectivity starts out indiscriminate, with each fiber in a muscle receiving synapses from many axons. Over time, synapses are eliminated until each fiber receives synapses from just a single axon. In this case, synapse elimination refines connectivity, making it much more specific. Motivated to see this phenomenon more clearly, Lichtman has become a major proponent of superior imaging technologies, a topic I'll return to in later chapters.
Reconnection has also been studied in the cortex, through the images of dendritic spines shown earlier in Figure 23. The researchers showed that most new spines disappear within a few days, but a larger fraction survives when the mouse is placed in an enriched cage like the ones Rosenzweig used. Both observations are consistent with the idea of “survival of the fittest”: new synapses survive only if they are used to store memories. The evidence is far from conclusive, however. It's an important challenge for connectomics to reveal the exact conditions under which a new synapse survives or is eliminated.
***
We've seen that the brain may fail to store memories if the required connections don't exist. That means reweighting has limited capacity for storing information in connectivity that is fixed and sparse. Neural Darwinism proposes that the brain gets around this problem by randomly creating new synapses to continually renew its potential for learning, while eliminating the synapses that aren't useful. Reconnection and reweighting are not independent processes; they interact with each other. New synapses provide the substrate for Hebbian strengthening, and elimination is triggered by progressive weakening. Reconnection provides added capacity for information storage, compared with reweighting alone.
A further advantage of reconnection is that it may stabilize memories. For a clearer understanding of stability it's helpful to broaden the discussion. So far I've focused on the idea that synapses retain memories. I should mention, however, that there is evidence for another retention mechanism based on spiking. Suppose that Jennifer Aniston is represented not by a single neuron but by a group of neurons organized into a cell assembly. Once the stimulus of Jen causes these neurons to spike, they can continue to excite each other through their synapses. The spiking of the cell assembly is self-sustaining, persisting even after the stimulus is gone. The Spanish neuroscientist Rafael Lorente de Nó called this “reverberating activity,” because of its similarity to a sound that persists by echoing in a canyon or cathedral. Persistent spiking could explain how you can remember what you have just seen.
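Reverberation is easy to sketch with a toy threshold model (the neuron names and the firing threshold are invented for illustration): once a stimulus ignites a mutually connected assembly, each neuron keeps receiving enough input from its partners to go on spiking after the stimulus is gone, while a single neuron firing alone cannot sustain itself.

```python
def step(active, synapses, threshold=2):
    """One time step: a neuron spikes if enough active partners excite it."""
    drive = {}
    for pre, post in synapses:
        if pre in active:
            drive[post] = drive.get(post, 0) + 1
    return {n for n, total in drive.items() if total >= threshold}

# Hypothetical cell assembly: three mutually connected "Jen" neurons.
assembly = ["jen1", "jen2", "jen3"]
synapses = [(a, b) for a in assembly for b in assembly if a != b]

active = set(assembly)        # the stimulus ignites the whole assembly...
for t in range(10):           # ...and is then removed
    active = step(active, synapses)
print(sorted(active))         # ['jen1', 'jen2', 'jen3']: self-sustaining

lone = step({"jen1"}, synapses)
print(sorted(lone))           # []: one neuron alone cannot reverberate
```

The full assembly echoes indefinitely because every member gets two excitatory inputs per step, above the threshold; a lone spiking neuron delivers only one input to each partner, so the activity dies out.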
Judging from many experiments, such persistent spiking appears to retain information over time periods of seconds. There is good evidence, however, that retention of memories over long periods does not require neural activity. Some victims of drowning in icy water have been resuscitated after being effectively dead for tens of minutes. Even though their hearts had stopped pumping blood, the icy cold prevented permanent brain damage. The lucky ones recovered with little or no memory loss, despite the complete inactivity of their neurons while their brains were chilled. Any memories that were retained through such a harrowing experience cannot depend on neural activity.
Amazingly, neurosurgeons sometimes chill the body and brain intentionally. In a dramatic medical procedure called Profound Hypothermia and Circulatory Arrest (PHCA), the heart is stopped and the entire body is cooled below 18 degrees Celsius, slowing life's processes to a glacial pace. PHCA is so risky that it's used only when surgery is required to correct a life-threatening condition. But the success rate is quite high, and patients usually survive with memories intact, even though their brains were effectively shut down during the procedure.
The success of PHCA supports a doctrine known as the “dual-trace” theory of memory. Persistent spiking is the trace of short-term memory, while persistent connections are the trace of long-term memory. To store information for long periods, the brain transfers it from activity to connections. To recall the information, the brain transfers it back from connections to activity.
The dual-trace theory explains why long-term memories can be retained without neural activity. Once activity induces Hebbian synaptic plasticity, the information is retained by the connections between the neurons in a cell assembly or synaptic chain. During recollection later on, the neurons are activated. But during the period between storage and recall, the activity pattern can be latent in the connections without actually being expressed.
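The storage-and-recall cycle of the dual-trace theory can be illustrated with a classic Hopfield-style associative network; this is my own sketch of the general idea, not a model taken from the text. Hebbian storage writes a spiking pattern into the connection weights, and recall reads it back out from a partial cue.

```python
import numpy as np

# Hypothetical cell assembly stored as a +/-1 activity pattern over
# ten neurons (the pattern itself is arbitrary).
pattern = np.array([1, 1, 1, -1, -1, 1, -1, -1, 1, -1])

# Storage: activity is transferred into connections by a Hebbian
# outer-product rule; neurons active together get positive weights.
W = np.outer(pattern, pattern)
np.fill_diagonal(W, 0)             # no self-synapses

# Between storage and recall, no activity is needed: the memory is
# latent in W. Recall: a partial cue (half the neurons silenced) is
# completed by letting activity flow back out of the connections.
cue = pattern.copy()
cue[5:] = 0
recalled = np.sign(W @ cue)
print(np.array_equal(recalled, pattern))   # True: the memory is restored
```

The weight matrix plays the role of the long-term trace, and the spiking pattern the short-term one; the memory survives the silent interval because it lives in the connections, not in the activity.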