Being able to regenerate hair cells is one of the most important future goals of auditory neuroscience. With 600,000 to 800,000 functionally deaf people in the United States and about 6–8 million with severe hearing impairment, being able to restore hearing is a major clinical goal. While cochlear implants are important and successful ways to do this, only about 250,000 of these 8 million are good candidates for the procedure, not to
mention the fact that the surgeries cost about $60,000 per ear. The idea of being able to regrow sensory hair cells lost to disease, injury, developmental issues, chronic noise exposure, or just getting older is one that keeps hundreds of auditory and developmental neuroscientists trying to figure out exactly why almost every other class of vertebrates is able to do this while mammals are not. The mammalian inner ear split away from the plan used by our other vertebrate relatives several hundred million years ago, and the adaptations that have given us a much wider range of hearing have come with a cost. The mammalian cochlea goes through a much more complex morphological maturation process than is seen in any other vertebrate, leaving less room for cell-cycle reentry to create new hair cells or to allow transdifferentiation of preexisting supporting cells into functioning hair cells. In addition, wherever hair cells have died due to injury or aging, the underlying tissues form scars that prevent the endolymph, the potassium-rich fluid of the inner ear, from leaking into cellular spaces where it could depolarize and damage other cells. But this scar formation also prevents any chance of regeneration of hair cells by natural processes. The loss of hair cells also leads to a loss of input to the sensory neurons in the spiral ganglion of the cochlea, which causes those neurons to retract and eventually die.
But if we can grow functional mouse hair cells in a petri dish, does this mean that in a few years we’ll be able to transplant working human hair cells back into the cochlea and restore lost hearing? Well, no. In more than a few years? Maybe. First, the hair cells grown in culture were derived from mice, and mice, while mammals, have radically different functional and developmental pathways than humans. Mice mature in a few weeks and start showing signs of age-dependent hearing loss at anywhere
from three months to a year, depending on the strain. Humans who develop normally and escape noise-induced damage start showing age-dependent hair cell loss at about forty years of age. This means that since we don’t normally regenerate them, our hair cells must have some unknown mechanism for maintaining themselves at least forty times longer than those of mice.
Several studies have suggested that these protective mechanisms might be exactly what prevents us from growing new replacement hair cells in the first place, and could easily prevent transplanted ones from taking. In addition, because of the tonotopic layout of the cochlea, we would not only have to grow human or human-compatible hair cells but surgically place them extremely precisely in the damaged area. The cochlea is one of the most complicated neural and sensory structures in the body, and the type of microsurgery required for such precise implantation in a living ear without damaging its other parts has not yet been invented. Lastly, even if we replaced the hair cells, we would then have to restore the spiral ganglion connection that carries their signals to the cochlear nucleus via the auditory nerve. While some animals, like frogs, can regrow appropriate connections after damage to the auditory nerve, this is another thing that mammals are not too good at.
So if we’re not likely to transplant actual hair cells, what about their progenitors, stem cells? The idea behind using stem cells is that they are pluripotent—in other words, an appropriately chosen stem cell could, in the proper biochemical environment, be transplanted into a damaged area and differentiate into the needed tissue. Researchers have been able to isolate cells from adult vestibular tissue (the balance part of the inner ear) and coax them into growing and differentiating into elements
found in the inner ear such as neurons and glia, as well as showing some hair cell protein markers. Other studies have shown that by providing the proper culture conditions you can get embryonic (as opposed to somatic) stem cells to differentiate into hair-cell-like structures. But for either type of cell, the results from these studies have contributed more to the understanding of natural growth and differentiation than to the development of a functional therapy. The few studies that have tried actually transplanting stem cells into cochlear tissue have yielded little or no success. So despite jaw-dropping progress, the ability to biologically restore normal hearing is definitely part of future history.
So given that we’re probably going to have to wait another few decades for stem cell transplants to become effective treatment techniques, what are some other possible directions for biological hearing restoration? If you read the “future directions” segments of scientific papers on the subject, you’ll find projections based on current research, often giving you an accurate pathway for the next few years. These papers are usually written either by grad students who are putting together massive compilations of the most contemporary research in order to determine which way their own careers should go (while fulfilling publication requirements of their training labs) or by very senior researchers who have led the direction of a field for decades and are laying out the questions remaining to be addressed by current or future colleagues. In either case, they are limited by a few factors: the ideas have to directly build from current findings, they have to fit within current funding parameters, and they usually focus on things that the researchers or their immediate circle of colleagues are skilled at.
I’ve found one of the richest sources of futuristic ideas for hearing (or anything else) to be the informal gatherings after a conference, usually held over drinks and run by undergrads, grad students, and postdocs (as long as the lab directors and funding agency representatives don’t show up and spoil the fun). The last such auditory fest I attended after sneaking away from other members of my lab and buying a few rounds had some amazing suggestions for what comes next. One idea was to transplant an entire fetal anlage, the collection of cells destined to become the whole inner ear, directly into a damaged cochlea, where, when provided with intravenous culture medium, it would be able to grow an entirely new ear, using the damaged one as a template. When someone pointed out the ethical and political issues involved with harvesting an entire human embryonic structure, let alone some undifferentiated stem cells, another person suggested implanting not a human auditory anlage but one from another creature such as a frog. The argument here was that xenotransplantation—the transplanting of organs from another species—has been going on for more than a century, ranging from the early twentieth-century transplant of goat testicles into a male human scrotum to help “low male urges” through the accepted contemporary practice of using pig heart valves as replacements for damaged human ones. This surgery would almost certainly be rejected by the human host’s immune system, but still—it was a great example of innovative thinking.
Another suggestion was to use temporarily implanted microinjectors to introduce promoters that would reactivate some of the identified genes underlying normal cochlear structural development, along with cell-death promoters. By judiciously applying one at one end of the cochlea and the other at the other, you could theoretically grow a new cochlea while simultaneously digesting
the old one. The problem with this approach was that while many genes have been identified that help lay down structural axes and the way in which body parts grow out of them in a regular fashion, actually triggering them in a controlled way to grow a functional body part is still far down the road. My contribution was to examine mammals that live abnormally long lives, such as bats and naked mole rats (both of which live three to five times longer than they should by any current metabolic models) and see if they have any protective mechanisms that enable them to keep hearing longer.
Bats in particular would be an interesting model because they are much more closely related to us than are mice, and because they are absolutely dependent on hearing to survive—a deaf bat will starve to death if it doesn’t kill itself by flying into a tree at the wrong moment. In addition, bats’ hearing is all high-end (as far as we know). Understanding how they preserve their 20 kHz hearing at thirty-five years of age might at least give us some insights into the nature of cochlear protection. None of these ideas are things you are likely to see written about as a scientific success story in the next few years, but these kind-of-out-there ideas are the inspiration for the next generation of auditory neuroscientists, hopefully even after they sober up.
One neuroengineering grad student was adamant about the fact that biological experiments always take longer to perfect than technological ones, and that we should be focusing on technological adjuncts to our ears rather than trying to improve on 300 million years of mammalian evolution. There are indeed advantages to working with technology rather than biology, not the least of which is that if you mess up an electronics experiment, you don’t have to stanch bleeding.
Technological applications in the field of miniature and biologically compatible electronics have undergone staggering progress in the last ten years, often leading people who were on the cutting edge five years ago to wonder what happened. For example, shortly after finishing my Ph.D., I was invited to attend a Defense Advanced Research Projects Agency (DARPA) conference on acoustic microsensors. The driving idea behind it was a battlefield intelligence application: identify things in an area based just on their sound and vibration and upload the information to a remote server. The proposed system would use semi-independent acoustic modules small enough that hundreds could be dumped out of a low-flying aircraft and, upon hitting the ground, be able to form a network that could report on acoustic events. The acoustic basis for the modules was the newly created Knowles subminiature microphone, a few millimeters on a side, capable of picking up a wide range of sounds. Bioacousticians who studied animal hearing were called in because animals, including ourselves, are excellent at identifying sounds and figuring out where they are coming from. Based on the projected network parameters, remote listeners would be able to identify the type of sounds using frequency and amplitude analysis and know the location of a sound based on differences in amplitude and phase between the microphones.
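To make the localization idea concrete: with two microphones a known distance apart, the difference in a sound’s arrival time (recoverable by cross-correlating the two recordings) pins down its bearing. Here is a minimal sketch of that calculation; the function name and parameters are my own inventions for illustration, not anything from the DARPA program, and it assumes a single far-field source in air.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def bearing_from_pair(sig_a, sig_b, mic_spacing_m, sample_rate_hz):
    """Estimate a sound source's bearing from two microphone recordings.

    Cross-correlate the recordings to find the delay (in samples) that
    best aligns them, convert that delay to a path-length difference,
    and recover the angle of arrival relative to the microphone axis.
    """
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag_samples = np.argmax(corr) - (len(sig_b) - 1)
    delay_s = lag_samples / sample_rate_hz
    path_difference_m = delay_s * SPEED_OF_SOUND
    # Clamp for numerical safety before taking the arccosine.
    cos_theta = np.clip(path_difference_m / mic_spacing_m, -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))
```

With more than two microphones, the same pairwise delays can be intersected to yield a position rather than just a bearing, which is the geometry behind the proposed network’s ability to report where a sound came from.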
The hope was that if you spread enough of these around an area, you could pick out individual events, such as the low-frequency rumble of approaching soldiers. The idea was a fascinating one, but it suffered from a couple of problems. One was that in 1998 there weren’t readily available batteries of the right specs or tiny low-power networkable broadcast devices, but these were “just engineering issues,” as scientists like to say when they
aren’t the ones who have to solve the problem. The bigger issue was the choice of invitees to the conference. The attending bioacoustic scientists were among the cream of the crop for cutting-edge animal hearing science, but most of the animals they studied were mammals—in other words, animals that predominantly hear high frequencies. Mice, chinchillas, bats, and cats, the species that were the research focus of most of the attending scientists, are great at detecting and locating airborne sounds, but not so good at identifying vibrations that travel through the ground. What was needed were experts in frogs, scorpions, and naked mole rats, animals that are much more sensitive to low-frequency ground-based sounds.
That project, like many DARPA projects, never came up with a practical solution to the question at hand, but it did provide many of the participating labs with access to amazing technology. Direct and indirect spin-offs from the program eventually yielded things such as subminiature microphone arrays capable of localizing sounds a significant distance away to within a meter or so of accuracy (which was unheard of at the time), a plethora of automatic sound categorization algorithms, and the recent development of a biomimetic cochlea-like chip that picks up radio frequencies in a manner similar to that of the mammalian ear. The miniaturization of microphones continues, and the 2 mm microphone of 1998 is now down to a barely visible 0.7 mm, with equally reduced power requirements. Specialized audio transducers such as ultrasonic sonar emitters and detectors and underwater hydrophones have likewise dropped in size and price.
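The front end of an automatic sound categorizer typically starts exactly where the conference discussion did: with frequency and amplitude. A toy sketch of such a front end follows, reducing a clip to coarse per-band energies that a classifier could consume; the function and the choice of eight log-spaced bands are mine, not those of any particular spin-off system.

```python
import numpy as np

def band_energies(signal, sample_rate_hz, n_bands=8):
    """Summarize a sound clip as coarse energy per frequency band.

    Take the power spectrum, split it into a few log-spaced bands
    between 20 Hz and the Nyquist frequency, and report the total
    energy in each band as a crude acoustic 'fingerprint'.
    """
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate_hz)
    edges = np.geomspace(20.0, sample_rate_hz / 2.0, n_bands + 1)
    return np.array([
        spectrum[(freqs >= lo) & (freqs < hi)].sum()
        for lo, hi in zip(edges[:-1], edges[1:])
    ])
```

Feed fingerprints like these to almost any classifier and you have the skeleton of the kind of categorization algorithm the program helped spawn: low-frequency rumble and high-frequency chatter land in visibly different bands.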
So what can you do with these tiny sound devices? The market for miniature microphones for use in personal electronics alone is staggering. Almost 700 million were sold in 2010 for
use in everything from cell phones to personal computers, military communication devices to industrial robotics. And as reliability has increased and the price has dropped, they show up in more and more applications. One application demonstrated as a proof of concept in 2002 was the idea of an implantable cell phone. The idea was that a small radio-frequency chip powered by a miniature battery could use a subminiature microphone/speaker implanted at the tooth/bone interface to both send signals to the inner ear and pick up spoken words via a bone conduction pathway through the jaw. You probably would not want to let your teenager get one of these when they become available (what are you going to do, take his jawbone away when he goes over his minutes?), but embedding such a device into a removable cap or bridge could be extremely useful for coordination and communication between members of search-and-rescue teams, police, or military, not to mention the people being searched for or protestors on the other side. At some level, the embedded cell phone is the culmination of a few decades of miniaturizing consumer electronics, which we’ve been seeing since the advent of the Walkman.
Another piece of acoustic technology that has benefited from miniaturization is the ultrasonic transducer, the basis of sonar. Even just ten years ago, most ultrasonic transducers, capable of emitting and picking up signals above 20 kHz, were either very simple single-frequency devices such as the inch-wide Polaroid units used to focus cameras or very expensive and delicate research-grade devices used in underwater sonar applications. But in the last five years, it’s become easy to find devices much smaller and more powerful than these research units in almost any electronics or robotics store, most attached to inexpensive amplifier and detector circuits that will let a homemade robotic
toy avoid objects just a few inches across up to 10 feet away. We’ve started seeing small sonar devices in cars to keep you from running into the back of your garage, sonar “tape measures,”
and even wearable miniature sonar platforms that can be built into clothing or hats to aid the visually handicapped. The miniaturization of ultrasonic transducers has also allowed them to be used for better non-invasive medical imaging of ever smaller structures, and even for engineering applications such as detecting flow through tiny pipes and tubing to find leaks. It’s not too far a stretch to imagine that within a few decades (or much sooner) wearable high-definition sonar units will be able to fit into headbands to allow people to carry out search-and-rescue operations in the dark, or that surgeons will be able to wear micro-miniature sonar units on their fingertips or deploy them on scalpel heads to generate three-dimensional views of the surgical area projected onto a heads-up display, making surgery safer. And coupled with advances in neural prostheses, it likely will not be too much further down the road before we will be able to take sonar data and convert it into signals that the human visual cortex can understand, giving us a bat- or dolphin-like ability to see in the dark and into each other’s bodies.
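Every one of these gadgets, from the robot toy’s obstacle detector to the sonar tape measure to the imagined fingertip unit, rests on the same time-of-flight arithmetic: time the echo, halve the round trip, multiply by the speed of sound. A minimal sketch of that calculation (the function name is mine, not any device’s API):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at room temperature

def echo_distance_m(round_trip_seconds):
    """Distance to a target from a sonar echo's round-trip time.

    The ping travels out and back, so the one-way distance is half
    the round-trip time multiplied by the speed of sound.
    """
    return SPEED_OF_SOUND * round_trip_seconds / 2.0

# An object 10 feet (~3 m) away returns an echo in roughly 18 ms:
print(echo_distance_m(0.0178))  # ~3.05 meters
```

The engineering progress described above lies almost entirely in the transducers and signal conditioning around this one line of arithmetic, which is why shrinking the hardware shrinks the whole device.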