The Universal Sense
Author: Seth Horowitz
At about 19 seconds, the truck noise, a much wider band of noise at high amplitude, blocks or masks the insect sounds. This masking is not just a result of the truck being louder than the insect at this distance.
Since the truck puts out a much broader range of frequencies, the combined noise effectively blocks the narrow channel of sound that the insect uses for its song.
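The idea can be sketched numerically. The values below are illustrative stand-ins of my own (a 5 kHz tone for the insect's narrow song, white noise for the truck), not measurements from the recording; the point is that a broadband source puts enough energy into the insect's narrow channel to bury the song, even though the tone by itself is perfectly clear.

```python
import numpy as np

fs = 44100                      # sample rate, Hz
t = np.arange(fs) / fs          # one second of time samples

# Hypothetical stand-ins: the insect "song" as a quiet narrowband
# 5 kHz tone, the passing truck as loud broadband noise.
insect = 0.1 * np.sin(2 * np.pi * 5000 * t)
truck = np.random.default_rng(0).normal(0.0, 1.0, fs)

def band_power(x, lo, hi):
    """Spectral power of signal x between lo and hi Hz."""
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    in_band = (freqs >= lo) & (freqs <= hi)
    return spectrum[in_band].sum()

# Signal-to-noise ratio inside the insect's narrow channel:
snr_db = 10 * np.log10(band_power(insect, 4900, 5100) /
                       band_power(truck, 4900, 5100))
```

Only a tiny fraction of the truck's total power falls inside the 4,900–5,100 Hz band, yet with these made-up levels that fraction is already comparable to the tone's entire power, so the song disappears into the noise floor of its own channel.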
At about 24 seconds you can see another example of a bicycle passing, with several gear shifts, but at about 25 seconds there is a slight increase in the amplitude of the oscillogram and a
series of alternating harmonic lines that are just observable in the low-frequency region and continue up to about 4,000 Hz, louder on the left than on the right. These harmonic bands are simpler than those observed in the human voices seen in the beginning and have a sort of staircase appearance. These are notes being played by a saxophone up on a slight hill to our left. The sax player had been audible for quite a while (starting at about 20 seconds), but once again it was not until we were closest to him that the higher-frequency harmonics appeared and the lower notes were evident in the band of background noise. You can compare these sounds easily to the harmonic bands from the woman’s voice appearing a few seconds later as she approached from behind and to the left of us.
The last thing to notice starts at about 30 seconds and continues through the end of this recording: quiet spectral bands very far down in the background noise, alternating between about 200 and 500 Hz. Although at this scale they seem continuous, there are short gaps in the frequency bands, and the two bands in fact alternate. These are the footsteps of a runner approaching from behind and to our right, passing us at about 34 seconds. While it may seem odd that there would be such a difference between the left (lower-frequency) and right (higher-frequency) footfalls, this gives another interesting insight into acoustical recognition. If the runner had been running with perfectly symmetrical strides and feet perfectly aligned front to back, there would have been very little difference between them. However, I happened to notice as he passed us that the runner had a very distinct outward turn of his right foot and landed strongly on his left heel. The heel strike and full-foot roll off the left foot created less of a slap than the turned-out right foot (which had less surface area to work with and hence put more
energy into a smaller footprint, generating a slightly louder sound at a higher frequency). Many years ago, there was a student in my lab who was interested in the question of whether you could identify someone just from the sound of his or her footsteps. She carried out a very neat little experiment where she had people of similar weights and heights wearing similar footwear walk and run down a hall while she recorded them. She then played these recordings back to listeners who had heard the subjects run previously. She found that people were remarkably good at identifying individuals just using these simple sounds—another example of your brain being able to carry out extremely complicated identification and analyses based on subtle acoustic cues. And if you think this is just an interesting academic exercise, bear it in mind the next time you hear footsteps behind you on a dark street, and realize that you too probably would not have to turn around to determine whether you are being chased by a stranger or by your roommate looking for the keys.
Chapter 3
Listeners of the Low End: Fish and Frogs
Just as every place has its own acoustic signature, every listener has its own plan for hearing what it needs to. There are about fifty thousand kinds of listeners in the vertebrate world, each with its own solution to the problem of what to listen to and usually very closely tied to the acoustics of its normal environment. Of all these, maybe one hundred have been explored scientifically (and most data are drawn from about a dozen, including zebrafish, goldfish, toadfish, bullfrogs, clawed toads, mice, rats, gerbils, cats, bats, dolphins, and humans).
At one level this is okay. Hearing in all vertebrates is based on using hair cells in some configuration to detect changes in pressure or particle motion and converting this into useful perceptions to help guide behavior. Once you get past the ears, vertebrate brains derive from a similar general plan—hindbrain receiving and sending much of the raw sensorimotor information, midbrain integrating both incoming and outgoing information, thalamus acting as a relay center to forward brain regions, and forebrain governing intentional behavior. But on the other hand, every species has developed its own solution to
what it should hear. And to make it worse, every individual shows differences from that species’s version of “normal,” not only through genetic variation but also by what it has been exposed to over the course of its own life. So trying to understand hearing and all its variations from such a small sample of species can be very frustrating.
But people who study hearing ultimately want to understand their subject from a human perspective. We humans are primarily concerned with the human experience. Even though we are vastly outnumbered by all the other living things on Earth, humans are the ones who build the sound level meters, write (and hopefully adhere to) noise abatement laws, and have opposable thumbs to turn down the volume or switch the song. So we tend to view other species as systems whose features overlap human performance or interests. This limitation has become increasingly stringent in the last twenty years or so, as most research funding now goes to “translational research”—studies that can be applied to human biomedical or technological problems. So we tend to focus on certain species that are currently perceived as being “useful.” This is why you don’t see too many papers on hearing in platypuses, star-nosed moles, or giraffes (although these animals are beloved in electroreception, touch sensitivity, and yawning research, respectively).
But the fact that humans share an evolutionary heritage with all vertebrates and the proven technological usefulness of biomimetics (copying nature’s engineering) provide a lot of leeway in how we study hearing in animals. Even though we can’t study all fifty thousand species of listeners, by looking at success stories—
animals who have been around a long time and do some things very well—we not only learn about our own hearing but push the boundaries of what we can do with technology and biomedicine in the near future. In this chapter we’ll start in the shallow end of the pool with listeners at the low end of hearing, fish and frogs.
Life on Earth came from the sea, where life-forms took hundreds of millions of years to experiment with sticking their heads out of the water and risking the arid world of the air. Today most organisms still live underwater. Hearing underwater seems very complicated to us terrestrial types, as human ears have evolved to pick up pressure changes in sound in air, a rather low-density medium. These pressure changes vibrate our eardrum. The eardrum’s vibration is then amplified by three small hearing bones, or ossicles—the malleus, incus, and stapes (or hammer, anvil, and stirrup, for the Latin-challenged)—which in turn vibrate the oval window, the portal to the fluid-filled cochlea of the human inner ear, where our hair cells convert these vibrations into usable signals.
But try sticking your head underwater while in a bath or swimming. One of the first things you notice is how odd everything sounds—the best description I ever heard was from a diver friend of mine who said, “Everything is simultaneously louder and softer and everywhere.” Part of this is because human ears have evolved to translate vibrations in low-density air to the high-density fluid of the inner ear. Once you fill your ear canals with water, you’ve upset the system: your ear canal is now full of water, but your middle ear, housing the ossicles, is still full of air and hence passes along distorted signals.
Water is about eight hundred times denser than air (and similar in density to your inner ear fluids and the rest of your body’s tissues). This difference in density means that water has a higher impedance than air—it takes more energy to start a sound underwater, but once it gets moving it travels more than four times faster, confusing everything from our ability to identify sounds to our capacity to figure out where they’re coming from. This is why divers without expensive communications gear rely on waterproof whiteboards or just banging on another diver’s tank to get his or her attention. It also explains why no matter how loud you play your radio near the bathtub, once your head is underwater, the sounds in air just bounce off the surface due to the impedance mismatch, filling the bathroom with sound but leaving you to listen only to the slow plonking of drips from the faucet as they hit the water’s surface.
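To put rough numbers on that mismatch: the characteristic acoustic impedance of a medium is its density times its sound speed, and the fraction of sound intensity reflected at a flat boundary follows from the two impedances. A minimal sketch using textbook round-number values (my assumptions, not figures from the text):

```python
# Characteristic acoustic impedance Z = rho * c (density x sound speed).
# Approximate textbook values:
rho_air, c_air = 1.2, 343.0         # kg/m^3, m/s
rho_water, c_water = 1000.0, 1480.0

z_air = rho_air * c_air             # roughly 400 rayl
z_water = rho_water * c_water       # roughly 1.5 million rayl

# Fraction of sound *intensity* reflected at a flat air-water boundary:
r = ((z_water - z_air) / (z_water + z_air)) ** 2
```

With these values, more than 99.8 percent of airborne sound intensity bounces off the water’s surface—which is why the radio by the bathtub goes quiet the moment your head goes under, and also why the water-filled ear canal and air-filled middle ear make such a poor match.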
Yet fish have been hearing underwater for hundreds of millions of years, despite lacking any external or middle ears and despite having an acoustic impedance almost exactly the same as that of the water surrounding them. Based on simple physics, the sound should basically pass right through them undetected. But certain adaptations have created enough of a difference to allow the vibrations to be captured by a fish’s relatively simple inner ear. Fish pick up sound using the saccule, a hair-cell-laden sensory organ with an unusual structure. The saccule is oriented vertically in the inner ear, with hair cells that extend outward. The tips of the hair cells are embedded in a mucus-like mass that is full of dense crystals of calcium carbonate, like tiny chips of bone. This structure, called an otolith (literally “ear stone”), is much denser than the surrounding tissue. When sound pressure waves strike the fish, most of the energy passes through its body, vibrating the fish along with the sound, but the dense mass of the otolith has a higher impedance than the rest of the tissue. The difference in motion between the fish and the otoliths on each side of its head bends the tips of the hair cells, changing their voltage and sending signals via the auditory nerve to code the characteristics of the sound. The best description of this I have heard comes from Dick Fay of Loyola University, who explains that “in terrestrial vertebrates the animal holds still and the sound shakes the ear. In fish, the ear holds still and the sound shakes the fish.”
There are many fish, such as sharks, rays, and skates, that get by just fine with this simple auditory arrangement. These cartilaginous fish tend to have relatively limited hearing, responding only to fairly loud sounds and a limited range of frequencies. But quite a few of the evolutionarily more modern bony fish have an adaptation called a swim bladder, an air-filled sac that is an evolutionary precursor to our lungs. The presence of a large pocket of air in the fish creates an impedance mismatch for sound, and many freshwater fish have evolved a modification to their vertebrae, called Weberian ossicles, that connects the swim bladder to the inner ear. These fish, which include goldfish, have quite good underwater hearing, with low thresholds for sounds up to about 4 kHz, and hence are considered “hearing specialists.”
Clupeiform fish (herring, sardines, shad, and their relatives) take this one step further: an extension of their swim bladder projects into the skull and directly stimulates the inner ear, and work by Art Popper of the University of Maryland has demonstrated that some of these fish can hear up into the ultrasonic range. So some fish have brought a bit of the atmosphere into their own bodies as a first step toward hearing the world beyond the sea. Which brings us to frogs.
I met my oldest friend, Greg, when we both jumped to catch a bullfrog at the age of eight (we landed on each other, the frog got away). When I was ten, Pablo the bullfrog (a temporary pet who did not get away from me in the pond but did manage to get out of his terrarium every night) cheerfully greeted my mother at the top of the stairs every night and serenaded her. Then there was Francesca, a subadult bullfrog whom I unsuccessfully tried to condition to turn her head to the left whenever I played B-flat on my synthesizer.
When I first applied to Brown University’s graduate program, I went to the meet-and-greet for potential incoming students and met my soon-to-be graduate advisor, Andrea Simmons, and her husband (and my eventual postdoctoral advisor), Jim Simmons. Andrea at the time was specializing in how bullfrogs detect pitch. Jim was and is studying bat echolocation. After I’d been talking with them for a while, Jim said, “Frogs own the low end. Bats own the high end. Between the two you can figure out almost everything in hearing.” I’ve spent about twenty years studying hearing in these (and a few other) species, and I still haven’t figured out almost everything about hearing, but the point still sticks with me and drives my interests. It’s led to everything from hauling a hundred pounds of recording gear into mosquito- and snapping-turtle-infested swamps to gene-screening injured frogs to try to identify the molecular basis for their ability to regrow their brains. I’ve had my hand stuck in the mouth of a male bullfrog intent on swallowing me whole,
and had to hit the guano-covered floor of a bat-infested attic as nursing mother bats dive-bombed me with their babies hanging from their nipples. According to my doctor, I have developed the world’s only recorded allergy to bullfrog urine.
One of the things that fascinated me about frogs is that, as amphibians, they are representative of some of the earliest forms that successfully ventured forth out of the water and onto the land. The fossil record shows that anatomically modern-looking frogs have been around for about 200 million years. This has created a perception that frogs are “simple” or primitive organisms and that by examining them we can learn only the basics of hearing. For example, for many years frog hearing was thought of as a simple mating call detector—that is, narrowly tuned to hear only the sounds of fellow frogs. At first this seems to make sense—if your social behavior depends on calling and hearing other frogs of your species, why waste brain resources on extraneous noise? But frogs, like all animals, are not machines designed for a specific task but complex organisms in a complex ecology. To quote Dick Fay again: “The problem with hearing only your own species is that you’ll probably get eaten by the first predator that makes noise outside your calling range.” And as anyone who has tried to sneak up on a bunch of frogs knows, Fay is right—a chorus of bullfrogs filling the night with their low-pitched calls at a headache-inducing 100 dB will suddenly fall silent at the first footstep within 20 meters. Their calls may span audible frequencies from 200 to 4,000 Hz, but they can detect ground-borne seismic vibrations orders of magnitude lower in pitch and amplitude than our ears can pick up.