
Dorman's goal has been to put as many people as possible with either two cochlear implants (“bilaterals”) or with an implant and a hearing aid (“bimodals”) through the torture chamber of the eight-loudspeaker array to look for patterns in their responses, both to determine if two really are better than one and, if so, to better understand how and why.

For John Ayers, Cook doesn't play the restaurant noise as loud as it would be in real life. With the click of a computer mouse, she can adjust the signal-to-noise ratio—the relative intensity of the thing you are trying to hear (the signal) versus all the distracting din in the background (the noise). She makes the noise ten decibels quieter than the talker, even though the difference would probably be only two decibels in a truly noisy restaurant. She needs first to establish a level at which Ayers will be able to have some success but not too much, so as to allow room for improvement. Noise that's so loud he can't make out a word or so quiet he gets everything from the start doesn't tell the researchers much. Eventually, Cook settles on a level that is six decibels quieter than the signal. Ayers repeats the test with one implant, then the other, then both together—each time trying different noise conditions, with the noise coming from just one loudspeaker or from all of them. From the computers where I sit, it's hard to see him through the observation window, so he's a disembodied voice saying things like, “He was letting Joe go,” when it should have been, “He went sledding down the hill.”
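Because decibels are logarithmic, the signal-to-noise ratio Cook dials in is just the difference between the two levels. A minimal sketch of that arithmetic, with a presentation level that is my own illustrative assumption rather than the lab's actual setting:

```python
# Signal-to-noise ratio (SNR) in decibels is the signal level minus the
# noise level, since decibel scales are already logarithmic.
def noise_level_for_snr(signal_db: float, snr_db: float) -> float:
    """Return the noise level (in dB) that yields the target SNR."""
    return signal_db - snr_db

talker = 65.0  # assumed presentation level in dB SPL, for illustration only
print(noise_level_for_snr(talker, 10.0))  # 55.0 dB: the starting point, noise 10 dB down
print(noise_level_for_snr(talker, 6.0))   # 59.0 dB: the level Cook settles on
print(noise_level_for_snr(talker, 2.0))   # 63.0 dB: closer to a truly noisy restaurant
```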

The sentences Cook asks Ayers to repeat were created in this very lab in an effort to improve testing by providing multiple sentence lists of equivalent difficulty. Known as the AzBio sentences, there are one thousand in all, recorded by Dorman and three other people from the lab. They're widely used, which meant that, back home in New York City, I could still hear Dorman's deep, sonorous voice speaking to me when I observed a test session in Mario Svirsky's laboratory. To relieve the tedium of the sound booth, Dorman and colleagues intentionally made some of the sentences amusing.

“Stay positive and it will all be over.” Ayers got that one.

“You deserve a break today.” Ayers heard it as: “You decided to fight today.”

“The pet monkey wore a diaper.” A pause and then Ayers says, incredulously: “Put the monkey in a diaper?”

Cook scores the sentences based on how many words Ayers gets right out of the total. With only one implant, Ayers scored between 30 and 50 percent correct. With both implants together, he scored as high as 80 percent.
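The scoring is straightforward arithmetic: words repeated correctly out of total words presented. A rough sketch of how such a tally could be computed; the function is my illustration, not the lab's actual scoring software:

```python
def percent_words_correct(target: str, response: str) -> float:
    """Score a repeated sentence: target words heard correctly, out of the total."""
    target_words = target.lower().split()
    response_words = set(response.lower().split())
    correct = sum(1 for word in target_words if word in response_words)
    return 100.0 * correct / len(target_words)

# "He went sledding down the hill" heard as "He was letting Joe go": only "he" matches.
print(f"{percent_words_correct('He went sledding down the hill', 'He was letting Joe go'):.0f}%")  # 17%
print(f"{percent_words_correct('Stay positive and it will all be over', 'Stay positive and it will all be over'):.0f}%")  # 100%
```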

Dorman and Cook use the same loudspeaker array to test the ability of cochlear implant users to localize sound, an ability two implants restore only to a degree, since implants can convey intensity cues but not timing cues. Hearing aids, on the other hand, can handle timing cues, since the residual hearing they amplify is usually in the low frequencies. The average hearing person can find the source of a sound to within seven degrees of error. Bilateral implant patients can do it to about twenty degrees. “In the real world, that's fine,” says Dorman. It works because the bilateral patients have been given the gift of the head shadow effect. “If you have two implants, you'll always have one ear where the noise is being attenuated by the head,” says Dorman. He sees patients improve by 30 to 50 percent.
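The timing cue implants can't convey is tiny: a sound off to one side reaches the nearer ear only a few hundred microseconds sooner. A standard textbook approximation of that interaural time difference (Woodworth's spherical-head model, my addition rather than anything from Dorman's lab) makes the scale concrete:

```python
import math

def itd_seconds(azimuth_deg: float, head_radius_m: float = 0.0875,
                speed_of_sound: float = 343.0) -> float:
    """Woodworth's spherical-head approximation of interaural time difference."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

# A source 20 degrees off-center arrives at the nearer ear ~176 microseconds sooner.
print(f"{itd_seconds(20) * 1e6:.0f} microseconds")
```

Hearing aids preserve these microsecond-scale timing differences in the amplified sound; implant processors mostly pass along the level difference between the ears instead.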

With both a hearing aid and a cochlear implant, Alex uses two ears, too, so it seemed he ought to have had an easier time localizing sound than he did. During my visit to Arizona, I finally understood why localizing was still so hard for him. Bimodal patients—those with an implant and a hearing aid—do better than people with just one usable ear, who can't localize at all, but the tricks that the brain uses to analyze sound coming into two different ears require something bimodal patients don't have: two of the same kind of ears. “Either will do,” says Dorman. “For this job of localizing, you need two ears with either good temporal cues or good intensity cues.” A hearing aid gives you the first, an implant gives you the second, but the listener with one of each is comparing apples to oranges.

The work with bilateral and bimodal patients is a sign of the times. The basic technology of implants hasn't actually changed much in twenty years, since the invention of CIS processing. Absent further improvements in the processing program or solutions to the problem of spreading electrical current, the biggest developments today have less to do with how implants work and more to do with who gets them, how many, and when. Just because the breakthroughs are less dramatic these days, says Dorman, that doesn't mean they don't matter. He has faith in the possibilities of science and says, “You have to believe that if we can keep adding up the little gains, we get someplace.” One of the projects he is most excited about is a new method that uses modulation discrimination to determine if someone like Alex would do better with a hearing aid or a second implant. “It allows you to assess the ability of the remaining hearing to resolve the speech signal. So far, it's more useful than the audiogram.” The project is still in development, so it won't be in clinical use for several years, but the day they realized how well the strategy worked was a happy one. “You keep playing twenty questions with Mother Nature and you usually lose,” says Dorman. “Every once in a while, you get a little piece of the answer, steal the secret. That's a good day.”

25

BEETHOVEN'S NIGHTMARE

Alex waved with delight, thrilled to see me in the middle of a school day. Head tilted, lips pressed together, big brown eyes bright, he wore his trademark expression, equal parts silly and shy. His body wiggled with excitement. I waved back, trying to look equally happy. But I was nervous. Alex and the other kindergartners at Berkeley Carroll were going to demonstrate to their parents what they were doing in music. Three kindergarten classes had joined forces, so there were nearly sixty children on the floor and at least as many parents filling the bleachers of the gym, which doubled as a performance space.

It had been almost exactly three years since Alex's implant surgery. Now he was one of this group of happy children about to show their parents what they knew about pitch, rhythm, tempo, and so on. Implants, however, are designed to help users make sense of speech. Depending on your perspective, music is either an afterthought or the last frontier. Or was. Some of the same ideas that could improve hearing in noise might also make it possible for implant users to have music in their lives. I was thrilled to know that people were out there working on this, but they couldn't help Alex get through kindergarten. Music appreciation and an understanding of its basic elements were among the many pieces of knowledge he and the other children were expected to acquire. I feared—even assumed—music was one area where his hearing loss made the playing field too uneven.

Music is much more difficult than speech for the implant's processor to accurately translate for the brain. As a result, many implant recipients don't enjoy listening to music. In her account of receiving her own implant, Wired for Sound, Beverly Biderman noted that for some recipients, music sounded like marbles rolling around in a dryer. After she was implanted, Biderman was determined to enjoy music and worked hard at it. (Training does help, studies show.) For every twenty recordings Biderman took out of the library to try, eighteen or nineteen sounded “awful,” but one or two were beautiful and repaid her effort.

Speech and music do consist of the same basic elements unfolding over time to convey a message. Words and sentences can be short or long, spaced close together or with big gaps in between—in music we call that rhythm. The sound waves of spoken consonants and vowels have different frequencies and so do musical notes—that's pitch. Both spoken and musical sounds have what is known as “tonal color,” something of a catchall category to describe what's left after rhythm and pitch—timbre, the quality that allows us to recognize a voice or to distinguish between a trumpet and a clarinet.

But music is far more acoustically complex than speech, and its message is abstract. “There's a big difference between what music expects of hearing and what speech requires,” acoustic scientist Charles Limb of Johns Hopkins University explains to me. “Speech is redundant. It's all within a certain frequency range.” So speech doesn't require as much information to make sense of it. The words themselves are a handy clue, as is the context in which they are used. Musical sounds have a lot more going on within them, and if there are no lyrics to serve as guideposts, it gets even harder. Classical music is generally much harder to follow than pop, for instance.

In one of his papers, Limb compared “Happy Birthday to You” when spoken, sung, or played on the piano. Represented in waveform, the spoken words are distinct, narrow bands. When sung or played on the piano, those same bands begin to spread out and “smear” like a squirt of ketchup after it's been mushed into a hot dog roll. For all of us, then, the sound waves of music are smeared already. A cochlear implant smears the sound further and the result can be a muddle.

There are a handful of people, however, who defy expectations. In Melbourne, Australia, there's a young woman who lost her hearing suddenly around the age of thirty. Before that, she played the piano very regularly. Within eighteen months of going deaf, she got a cochlear implant and now has two. “She still continues to play the piano a few hours a day,” says Peter Blamey, who has been working with cochlear implant recipients since he joined Graeme Clark's team in 1979. Now a deputy director at the Bionics Institute, a research organization founded (as the Bionic Ear Institute) by Clark, Blamey says of this woman, “She has pitch perception that's as good as mine. She can do things like rating consonance and dissonance of chords and notes played in succession and do things that are generally not accepted as being possible with a cochlear implant.” Blamey and his colleagues are still studying her, but they are guessing that in her case training has been the important factor. “The difference seems to be that six hours a day that she spends playing the piano, and maybe the learning that took place before she went deaf. They're both central things,” he says. “We're looking for things that are going to be more generally applicable for improving music perception for people even if they don't have six hours a day to devote to it.”

Researchers at the Bionics Institute are also exploring whether the same cues that Andrew Oxenham described to me as useful for hearing in noise can make listening to music not just more accurate but more enjoyable for people with implants. In their research, “we just have simple musical melodies that repeat over and over that the brain can learn easily,” explains research scientist Hamish Innes-Brown. “Then we vary those notes in loudness, pitch, location, etc. We try to get people to detect that modification. We want to change the signal as little as possible—get the most streaming bang for your perceptual buck.” They also recently commissioned composers to create music specifically for cochlear implant users. The musicians spent nine months or so visiting the institute and learning about implants. One named Natasha Anderson impressed Innes-Brown by asking for the center frequencies of all the implants. “She's actually tailored this sound to fit all the exact frequencies.”

 • • • 

By the day of the open kindergarten music class, I had spent a lot of time considering what words Alex could understand and say, but I hadn't thought deeply about music and how he experienced it, except that he loved it. Maybe he liked the sound of marbles in the dryer—the little rascal wasn't above putting marbles in the dryer. But more likely, that's not what it sounded like to him. Music and dancing were such favorites that at Clarke his end-of-year gift had been a book called Song and Dance Man. He has a hard time keeping up with song lyrics, but his favorite family activity for years was a “dance party,” which entailed putting on music and having all of us boogie around the living room.

“Does he respond to music?” was one of the questions the doctor asked when we were trying to figure out why he couldn't talk.

“Yes,” we had to say, “with gusto.”

When I began to look into research on music and cochlear implants, the mystery was explained. There was one group that consistently reported greater enjoyment of music: those who still had some hearing in the non-implanted ear and used a hearing aid. Furthermore, even profoundly deaf people enjoy the vibration and beat of music. The all-deaf rock band Beethoven's Nightmare was featured in the PBS documentary Through Deaf Eyes. The group's drummer, Bob Hiltermann, said he depended on both vibrations and his hearing and that the band played “really, really loud” so that the musicians could hear themselves. He joked about attending a rock concert: “It was really too loud for the regular hearing person,” he said. “They're going to become deaf themselves. But we already are, so it's perfect.” In a lecture on how to listen to music with your whole body, deaf percussionist Dame Evelyn Glennie remembered what she said to her first teacher when he asked how she would hear the music: “I hear it through my hands, through my arms, my cheekbones, my scalp, my tummy, my chest, my legs, and so on.” Glennie even plays without shoes, the better to hear the music with her feet.

That, I suspect, was some of what Alex experienced when he danced around the living room. He didn't hear everything that we heard, but he loved what he did hear. He listened with his whole body. Music brought him joy, just as it had millions of other people over the centuries. But dancing around the living room is not the same thing as performing music in class. At school, there was an audience. The children would have to demonstrate knowledge and skill. Would performing music in this way kill the joy for Alex? Was I setting him up to fail? At a demoralizing dance performance, he was once the unfortunate child whose partner didn't show up, and though he gamely went through the routine with the rest of the class, he was behind from the start and never caught up.

I read through the music teacher's three-page description of what the children were going to do like a doctor searching a medical history for signs of trouble. Trouble showed itself quickly in the section about the importance of “inner hearing”:

Inner hearing in music is an essential tool needed to sing in tune, read music, create musical compositions and to improvise melodies. The Kindergartners are able to tap beats while singing melodies in their heads. Honing the inner voice is not only essential for the professional musician but for the lifelong lover of music.

Was my child able to do that?

And what about this part about how “once children are able to clap the rhythm of the words to the song, transferring the rhythm to a percussion instrument is a given.”

Was it a given for Alex?

Maybe. Hands down, researchers have found, rhythm is easiest for implant recipients to perceive. They do nearly as well in listening tests as those with normal hearing. That's because the information that has to be processed occurs over seconds, not milliseconds, and it doesn't require the fine-tuning that other musical elements do. With each beat, an electrode will fire. Discerning the exact frequency doesn't matter. This is why percussive instruments such as drums and piano are easiest for implant recipients to appreciate. (And probably why, when Beverly Biderman embarked on her own musical study experiment, the first music that sounded good to her was Glenn Gould's recording of The Goldberg Variations on the piano.)
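The timescales here are easy to put numbers on. At a typical tempo, beats arrive about half a second apart, while the acoustic details that distinguish one consonant from another change over mere tens of milliseconds. A quick sketch, with an illustrative tempo of my choosing:

```python
def beat_interval_ms(bpm: float) -> float:
    """Milliseconds between beats at a given tempo."""
    return 60_000.0 / bpm

print(beat_interval_ms(120))  # 500.0 ms between beats: ample time for an electrode to fire
# Compare with the formant transitions that distinguish consonants,
# which unfold on the order of 10 to 50 ms.
```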

Pitch and timbre are much more difficult. Is one pitch higher or lower than another? Some implant users can answer that question, but some can't identify even an octave change. Pitch is essential for following a melody, but not nearly so important in understanding a sentence. It will tell me whether I'm listening to a man, woman, or child, but I don't absolutely need to know who is speaking to know that he or she said “Excuse me” or “May I have some water?” In music, however, the only way I can differentiate between “Twinkle, Twinkle Little Star” and “Mary Had a Little Lamb” is by following the changes in pitch. Pitch is essential to melody and melody is essential to appreciating music.
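The pitch relationships at stake are ratios of frequency: an octave is a doubling, and each of the twelve equal-tempered semitones within it multiplies frequency by the twelfth root of two. A short sketch of that standard music math (nothing implant-specific):

```python
A4 = 440.0  # concert A, in hertz

def semitones_up(freq_hz: float, n: int) -> float:
    """Shift a frequency up by n equal-tempered semitones."""
    return freq_hz * 2 ** (n / 12)

print(semitones_up(A4, 12))           # 880.0 Hz: one octave is a doubling of frequency
print(round(semitones_up(A4, 7), 1))  # 659.3 Hz: the leap on the third note of "Twinkle, Twinkle"
```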

Like localization, pitch is an area where Alex's two modes of hearing might not always be helpful. His low-frequency hearing in the left ear probably conveys some pitches fairly accurately, but his implant may tell him something completely different about the same note—the frequency shift that Mario Svirsky had explained to me. If you compare the sound frequency spectrum to a rainbow, an implant recipient can see the ROYGBIV colors—red, orange, yellow, green, blue, indigo, and violet—but he or she cannot see any of the colors in between, Charles Limb explained. Furthermore, the information they get may not be accurate—the equivalent of seeing orange as red.
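Limb's rainbow analogy maps naturally onto the way a processor divides the spectrum into a fixed number of bands, one per electrode. The sketch below quantizes frequency into logarithmically spaced bands; the frequency range and electrode count are my own illustrative choices, not any manufacturer's actual filterbank:

```python
import math

LOW, HIGH, N_BANDS = 200.0, 8000.0, 22  # illustrative range and electrode count

def electrode_for(freq_hz: float) -> int:
    """Quantize a frequency to one of N_BANDS logarithmically spaced bands."""
    frac = (math.log(freq_hz) - math.log(LOW)) / (math.log(HIGH) - math.log(LOW))
    return min(N_BANDS - 1, max(0, int(frac * N_BANDS)))

# With these band edges, G4 (392 Hz) and A4 (440 Hz), a whole step apart,
# land on the same electrode: both lines of output print band 4.
print(electrode_for(392.0), electrode_for(440.0))
```

With band edges like these, notes a whole step apart can be indistinguishable, which is the orange-seen-as-red problem in miniature.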

In studies of timbre, less than half of implant recipients were able to correctly identify musical instruments. Non-musicians make mistakes, too, but their mistakes are less frequent and tend to be within the same instrumental family, such as confusing an oboe with a clarinet. Implant recipients might mistake a flute for a trumpet, a violin for an organ.

At least I knew that Alex should have no problem with the drums in the music class. They were arrayed in rows on the floor of the gym. The xylophones were lined up behind them. Looking around the gym, a space I had been in many times over several years, I noticed how small movements in the bleachers rang out loudly and how the noise of the children as they settled into their spots on the floor was like a low roar. Static or feedback in the public address system was suddenly not just annoying but worrisome. (The year before, I'd been at a memorial service where the microphone didn't work well and two family friends with hearing aids—men in their seventies—told me they couldn't understand a word that was said.)

After some discussion on the importance of being quiet in music class—something for which I had new appreciation—the group started in on the traditional standard “Engine, Engine Number Nine.” With sixty-some kindergartners chanting at different tempos—fast and slow—the gym qualified as an “in noise” condition. Just as Usha Goswami had recommended, the children were playing with rhythm and rhyme and having a great time doing it. Alex seemed as happy as anyone.

Next, the kids took turns at easels where sheets of pictures—a frog, for instance, repeated in rows and columns—helped them tap the beat. The teacher had devised a game in which children tried to tap with the chant and end on the last picture at exactly the last beat of the song. I wondered if Alex could distinguish the separate words of the song to help him distinguish the beats. In the same way that other children think “LMNO” is one letter when they are first learning the alphabet, he often has trouble separating sounds. In general, though, tapping a beat like this should be quite possible for Alex. And it was.

After a few variations of this and some drumming and xylophone playing, we were nearly at the end of the class. I was beginning to think I shouldn't have worried so much. For the last exercise, the teacher mentioned the inner hearing he had written about in the handout. He was going to sing a melody. The children had to listen and then play it back on a two-tone wood block, improvising as they decided which tone went with which beat. Hands shot up around the circle of children to volunteer, Alex's among them. At least he's enthusiastic, I thought. The teacher picked a little girl from the right side of the circle, and then, from the left side, he picked Alex.
