Neurosurgery is notable for the extreme variety of “approaches” and the attention paid to them. When it comes to major surgery within the abdomen, for example, a nice long vertical incision right down the middle works fine for many of the internal organs. Once the abdomen is open, you can see and get to just about anything. A modified opening can be made depending on the target organ—a smaller one in the lower right-hand corner for the appendix—but it’s still your basic linear incision, just of a different size, angle, and location along the topographically uninteresting surface of the abdomen.
Brain surgery is different. You can’t gain access to everything through a single generic midline opening, splitting the scalp and skull down the middle. The brain (and the blood-filled superior sagittal sinus) would be in the way most of the time. Many of our targets are hidden underneath the brain, not within it or on its surface. Furthermore, the head is round and irregular and there are critical surface structures—like the eyes and ears—that you have to avoid even if the target of interest is located underneath them. As a result, a panoply of creative and curvy incisions has been devised over the decades in order to sneak around or underneath.
The forehead is a key zone that we try not to violate, for cosmetic reasons, which forces us to make ridiculously long incisions (sometimes ear to ear) behind the hairline so that we can reflect and fold the forehead forward in order to reach, say, a small tumor just underneath it in the frontal lobe. In special cases, such as for big craniofacial reconstructions in kids, we may collaborate with plastic surgeons. They tend to be even more thoughtful than we are in fashioning a nice scalp incision. For example, rather than cutting across the scalp in a straight line behind the hairline, they may zigzag it all the way, giving it the appearance of having been cut with large pinking shears. When I first saw this, I wondered why they would go to the trouble. It takes longer and is more difficult to close at the end.
The attending plastic surgeon responded to my question as if the answer were obvious: What does the kid’s hair look like when he gets out of a pool? If the incision simply goes straight across, the scar may be revealed with the linear parting of the wet hair. With the zigzag approach, the wet hair tends to cover the scar in a more natural pattern, not parting sharply along a dividing line, and the other kids at the pool may not notice it at all. Neurosurgeons don’t think as much about poolside life, but I was touched by the concern, thinking back to my own awkward years as a kid, awkward enough without a large scalp incision and reconstructed cranium underneath.
The only real downside to collaborating with the scalp-friendly plastic surgeons is that we tend to worry a bit if we have to leave them alone in the room with the brain exposed. Our common refrain at times like these: be gentle; treat the brain as you would treat the skin. We all have our own nitpicky concerns.
Getting back to approaches, creativity is sometimes appreciated and sometimes not. The big buzzword these days is “minimally invasive,” and some surgeons will go to great lengths to attract patients with claims that they are more minimally invasive than others. My husband and I were on a shuttle bus a few years ago, riding between an airport and the rental car place. We both independently took note of a businessman sitting across from us.
When we got off the bus, out of earshot of this gentleman, we turned to each other and said, quietly and simultaneously: “That guy got the ‘batwing’ incision.” We felt terrible for him. The batwing incision is a rarely used approach to getting underneath the frontal lobes, just above the eyes. Rather than creating the aforementioned lengthy incision behind the hairline and freeing up the entire forehead all the way to the brow line (not very “minimally invasive”), alternative approaches are promoted from time to time by individual neurosurgeons hoping to push the frontiers.
The batwing incision, one such novel approach—again, almost never used, which added to our intrigue on the shuttle bus—involves creating a much smaller incision (more “minimally invasive”) but one that is cosmetically challenging, within and between the eyebrows. Theoretically, most of the incision should be hidden once the eyebrows grow back, but it tends not to heal so seamlessly, leaving the patient with unusual looking eyebrows. Plus there’s no way to hide the segment that crosses above the bridge of the nose. So, although it may be minimally invasive in one sense, it’s maximally invasive in another. You have to be careful. A bigger incision can be the better choice, depending on the circumstances.
Long, complex, invasive skull base operations are the kind that always impress medical students. There’s tons of interesting anatomy open to the air, so they get to see things they’ve only seen before in textbooks or a cadaver. It can be hard to watch at times, though, if you’re not used to it. I once took a Ph.D. research colleague on a tour of our operating rooms. He had never observed any surgery and expressed a strong interest in catching a glimpse of a real brain, having studied brain function and brain images for years without ever seeing one. In one of the rooms, a skull base case was being performed, a combined effort between neurosurgery and otolaryngology (ear, nose, and throat, or ENT). We walked into the room just as one of the surgeons was gently tapping a chisel into bone with a mallet, just above and between the eyes, which were not covered by the drapes. The eyelids had been gently sutured shut to protect the eyes during surgery.
At the end of the tour I asked his impressions. He had been uncharacteristically quiet. He told me: “I could go outside right now and get hit by a bus, and I’d still be having a better day than that guy we just saw.”
True, being in and around neurosurgery does have a way of inspiring a newfound appreciation for one’s own health and luck. Still, his impression was amusing and I couldn’t help laughing. All he saw was the height of the gore, the dramatic part, figuring that a pedestrian hit by a bus would be better off. He didn’t get to see the guy a few days later, sitting up and asking when breakfast was served. You’d be amazed at how much can be done to the human body when necessary, with the human inside the body triumphing despite it all. Surgery is trauma, but it’s intelligent, controlled trauma, and it’s done in the patient’s best interest, despite appearances.
Similarly, I love when a family is thoroughly shocked and impressed upon seeing their loved one awake and talking immediately after brain surgery, as if they had expected the patient to emerge from surgery completely mute. Such low expectations have a way of fueling happier outcomes, even when things turn out just fine but no better than the surgeon had expected. These shocked responses, although amusing, are somehow endearing, and I certainly don’t want to discourage them.
Going through my neurosurgery residency, I saw the gamut of options, from the most minimally invasive to the most maximally invasive. The neat and clean Gamma Knife approach certainly maintained its allure, but I decided that I wouldn’t end up specializing in it. There wasn’t any one particular niche within neurosurgery, in fact, that I could see myself focusing on exclusively, a reality that would become a small source of frustration later on. I just couldn’t see myself super-specializing (as is customary in academia), narrowing things down so far in an already small world, confining myself to an even tighter box, even though that might have been the ideal career move.
Although I still sometimes dream of a pure white and glass house or even a perfectly traditional Japanese one (and who knows, I may still get one at some point), I am quite satisfied with the one we have now. It’s over a hundred years old but newly restored on the inside. We’ve decorated it in an eclectic—but we think sophisticated—style, combining traditional colonial pieces from Connecticut with modern steel pieces from SoHo, and a thirty-dollar concrete Buddha head that appears to be ancient and expensive. We figured that if we couldn’t commit to just one style, we would mix them together, carefully and selectively. The concept may sound messy but it works. And we always keep it neat, of course.
FIFTEEN
Traces of Thought
Four years into my neurosurgery residency, I started to get a little frustrated. I was dealing with the brain day in and day out, which is what I had asked for, true, but something was missing. I didn’t have enough time or energy to really sink my teeth into the mind. A fascination with the mind is what got me interested in going down this road in the first place, but I was too busy sliding catheters through the cortex, drilling off bone flaps, and picking at tumors to think much about this ethereal by-product of the organ I had otherwise gotten to know so well.
Sure, in saving a brain we’re also saving a mind. That’s obvious. But the attention that we pay to the mind, per se, can be pretty superficial: this guy was in a coma and now he’s awake and following commands; that guy is aphasic; this guy’s memory is shot. Don’t get me wrong; neurosurgeons are intelligent professionals who are good at what they do. It’s mainly an issue of time management and focus. We just can’t spend a lot of time mulling over the intricate nuances of the mind, dissecting apart all the different types of memory that a brain is capable of. Plus, it’s not what we’re the most expert at, except for a select few academics among us. We’re not neuropsychologists.
The brain as a physical organ is what demands our attention most. Maybe I should have known better, but what other path could I have chosen? I knew I didn’t want to be a psychiatrist or a neurologist or a basic scientist and the choices weren’t infinite, at least not from the vantage point of a medical student traveling down a defined path. Neurosurgery seemed a better, although not perfect, career choice for me.
Luckily, as a part of our seven years of neurosurgery residency training, we are granted a full two years of research time. In my program, these were designated as the fifth and sixth years, just prior to the final year as chief resident. I was determined to remedy this lopsided physical/manual focus, and I became a sort of neurosurgeon-in-residence at the Center for Cognitive Brain Imaging at Carnegie Mellon University for one year. (I spent the second research year as a fellow in the small super-specialized field of epilepsy surgery, the subspecialty most directly concerned with cognition.)
The two codirectors of the center at the time, Dr. Marcel Just and Dr. Patricia Carpenter (a brilliant Ph.D. couple), focused their careers on studying higher-level cognition, including mental abilities that people value most highly, such as speech, comprehension, visual-spatial skills, memory, and decision making. They also studied diseases of thought, diseases that neurosurgeons never deal with, like autism. That year seemed almost a guilty pleasure for me, not just because of the sensible hours and the fact that I never missed lunch, but more because I had the time to browse through journals—including non-neurosurgery journals—and to chat about ideas, and ideas are what get me fired up.
I was inspired to join their group after reading an article of theirs that was published in Science.¹
They were curious as to what it means for the brain to work harder. What’s actually happening in the brain, for example, when you go from reading simple material to more complex material? In other words, how does the mind compensate for an increased workload? That was a recurring theme of their work. We pretty much understand how muscles respond to increasingly heavy weights or more repetitions, but how does the brain respond to a tougher workout?
They used a sophisticated brain imaging method, functional MRI (or fMRI), to look into this question. Functional MRI picks up on subtle changes in oxygen level in brain tissue. When a certain area of the brain is active, slightly more blood flows to that region, changing the oxygen level by just a couple percentage points. Those tiny differences in oxygen level reveal which parts of the brain are active or inactive at any given time. It’s a reflection of the mind at work.
Normal research subjects or patients are asked to perform various mental tasks, like reading from a screen, while they are in the scanner. All sorts of clever tests can be devised to study all manner of thought or other types of brain activity, like coordinating movement. You’ve probably seen fMRI images before—in Time or Newsweek or The New York Times—with colorful splotches superimposed on anatomically detailed brain images. They’ve become quite popular in the press over the past few years, partly, I think, because they look so nice.
Their findings were intriguing. First, though, as background, the left hemisphere is well known to be dominant for speech (except in a small percentage of left-handers). Two of the major nodes in the speech network are Broca’s area (in the left frontal lobe) and Wernicke’s area (in the left temporal lobe). Most people don’t give the right hemisphere much thought when it comes to speech. This paper shows that we should.
What they found was that speech areas were recruited in a graded fashion depending upon the complexity of the task. In reading the simplest sentences, activation was seen in Broca’s and Wernicke’s areas. With increasingly complex sentences—dependent clauses, advanced vocabulary—a higher-volume activation of Broca’s and Wernicke’s was called upon. Then, for the most complex sentences, the mirror-image right-sided regions of Broca’s and Wernicke’s were recruited into action. In other words, the most complex reading called not only for a greater volume of brain use overall, but also for the recruitment of additional nodes in the network.
If you think about it, there are some interesting implications of this work. Most people who develop an aphasia (a speech or comprehension disturbance) after a stroke will eventually recover well enough to speak and read again. The brain does have an amazing ability to recover, at least partially. But what happens to their reserves? Are they capable of equally complex comprehension and equally complex speech compared to what they were capable of before the stroke? With a certain defined volume of their brain tissue rendered nonfunctional (i.e., dead), does recruitment falter with increased workload?
These sorts of insights inspire me to keep my mind as well tuned as possible, to use all the areas that can be used, so the reserves remain strong. I hate the thought of a particular region lying dormant for too long. Who knows how long you have before you turn the key and it doesn’t start up again? I’m most at risk for this happening with my math skills, which were never that stellar to begin with, except for trigonometry in high school, which was more visual. I’ve become overly lazy with the calculator, to the point where it can take me longer than it should to figure out, in my mind, how much change I should be getting back at Starbucks when I get a tall iced chai. This is embarrassing. I clearly need more exercise.
I’ll never forget my amazement at watching college-age research assistants at this center navigate through images of the brain on their computer screens. Their job was to help define the exact location of activation along the cortex, the convoluted surface of the brain. (By the way, the cortex is not always on the visible superficial surface of the brain. It folds in on itself in certain areas and also dives deep along the inner surface between the two hemispheres.) What I couldn’t believe was how these students could point out and name any individual convolution (gyrus) or crevice (sulcus) so quickly—just as you might identify familiar streets on a road map of your hometown and think nothing of it.
My conclusion? These students were actually more facile with the detailed map of the cortical surface than most neurosurgeons. I, for one, had never before learned (or been taught) the names of every single sulcus, for example. I know the most important ones, of course—the most “eloquent” ones that we avoid violating at all costs—but not all the ones that we feel to be of lesser importance. They laughed when I told them this, and they clearly didn’t believe me. I left it at that.
As far as I can tell, the lay public has conspired in telling the same joke when it comes to brain surgery. It goes something like this:
So, if you slip with the knife, is it like—there goes fourth grade?
Based on my rudimentary knowledge of how the brain/mind works, let me clarify a few things. First, we usually don’t use any knives on the brain, except sometimes—if you really want to know—to prick the pia, which is the very thin, nearly invisible membrane adherent to the surface. Second, fourth grade (or any other particular grade for that matter) is not stored in any one spot. Memories aren’t really “stored” in such a location-specific way, neatly and chronologically along a gyrus.
Memories are called up when needed via activation of the entire memory network across multiple regions of the brain, and if this process seems a bit mysterious and almost unbelievable, that’s because it is. The more you know about how memory works and how much it is tied to the frontal lobes, emotion, and other such complexities (and how unlike a passive recording device the system is), the easier it is to see how memories can sometimes be faulty and how, for example, humans aren’t always the most reliable witnesses in court or the most accurate in their autobiographies.
My husband and I can waste inordinate amounts of time in arguing over which one of us came up with a certain brilliant idea, exchanging accusations of faulty memory. On a trip to Europe I might say something like: “Aren’t you glad I thought of visiting this little town?” Rather than a “yes” and maybe a “thank you” he’ll counter with: “I like your revisionist history, but I’m actually the one who came up with the idea, remember?” Then I’ll say, “But I’m the one who read about it.” And he’ll claim, “But the only reason you read about it in the first place was because I told you about it.” And so on. Neither of us gives in. This makes me think that the memory network must also be tied strongly to whatever cortical regions control ego, but I haven’t seen any studies on this.
Mysteries of the mind and brain abound, of course, and some may never be answered, but others are slowly unraveling with the help of functional imaging (pictures of the brain that reveal function, like functional MRI). Take the issue of blindness, and what goes on in the mind of a blind person. I’ve always been incredibly impressed by people who are completely blind, partly because of their Braille-reading skills and partly because some maintain the confidence and skill to maneuver around town on their own. I always wonder if I’d actually be able to do that, travel around on my own. Sometimes I conclude that the answer would be no.
One strike against me is that I have little inherent sense of direction. (Don’t give me east-west-north-south directions unless I’m in a grid type of location that I’m familiar with, like the Upper East Side of Manhattan.) This deficit of mine is fertile grounds for teasing, again, by my husband, whose sense of direction rivals that of migratory birds. He marvels over how I can step out of a store in New York City, out onto the sidewalk, and start walking in the wrong direction. He likes to let me go for a while, walking alongside me, until he cracks up at my puzzled look as I pass shops that we had just gone into. I often rely more on recognizing landmarks than on a sense of direction to get me where I’m going. (Oh, there’s that Japanese noodle place where we had dinner last month, now I know where we are.) As a blind person I wouldn’t be able to rely on recognizing restaurants, and that’s why I’d be in trouble.
I used to think my poor sense of direction was a real liability until I read that Harvey Cushing, the father of neurosurgery, shared the same deficit. It didn’t stop him from operating on the brain and pioneering a whole new surgical specialty. I don’t know, though, whether he worried about his potential as a blind person. Regardless, my husband probably would have had no problem laughing at his directional deficits either.
We used to live on the same street as a man who was blind. We never met him but we would see him walk down the sidewalk from time to time with his red-tipped white cane, no guide dog. One afternoon, during a heavy rainstorm, we decided to step outside our front door to watch the downpour. Just as we stepped out, we noticed that this man was making his way home, drenched, walking and tapping his cane back and forth at a faster clip than we had ever seen, turning sharp corners—onto our street and then onto his driveway—with rapid military-like precision. If he had moved any faster he would have been running. He must have had such confidence in the one-to-one correspondence between the well-traveled map in his mind and the external reality of the streets that he seemed to walk with a more self-assured stride than most sighted people.
Consider an interesting question: What happens to the visual cortex of a person who is blind? The visual cortex, a part of the brain in the occipital lobes (in the back of the head), receives visual input from the eyes. That’s how it is stimulated. If there is no visual input, does that area remain completely fallow, nonfunctional, as logic would have you predict? Is it like a solar panel, of one purpose and useless without the sun?
This question has been studied, as I discovered in my browsing. One study looked at a group of people who had been blind since early in life, with no memory of vision.²
They were asked to read Braille while in the scanner. For comparison, because reading Braille involves the tactile sense, they were also asked to perform other tactile tasks, some of which required fine discrimination (matching raised angles on a piece of paper) and others that required no specific discrimination skills (feeling a rough surface).
They found that the visual cortex in these blind individuals was strongly activated by Braille reading. It was also activated, but less so, by the other tactile tasks that required fine discrimination. It wasn’t activated at all by the simple stimulus of feeling a rough surface.
A skeptic might interpret the data like this: Well, maybe the visual cortex is not as strictly specific for visual input as we thought; maybe it’s involved in other sorts of fine discrimination, visual or not, and we just never looked for it. Maybe it’s irrelevant that the subjects were blind. These scientists were smart, though, so they addressed the skeptic’s question before he had a chance to ask it. They studied normal sighted subjects as well, using the same discrimination and nondiscrimination tasks (but not Braille reading, of course, because they didn’t know how to read Braille). In these normal research subjects, the visual cortex did not “light up” at all, for any of these tasks. In fact, the visual cortex actually showed a subtle decrease in baseline activity as attention (and blood flow) was shunted to the area of the brain specifically in charge of touch sensations, known as the somatosensory cortex, in the front of the parietal lobe.