
Monday, May 25, 2020

Food & Paper: Auditory deviance detection in the human insula: An intracranial EEG study (Blenkmann)

reposted from
https://www.uio.no/ritmo/english/news-and-events/events/food-and-paper/2020/alejandro-omar-blenkmann/index.html



RITMO researcher Alejandro Omar Blenkmann will give a talk on his latest paper.

Abstract

The human insula is rarely accessible to direct recording because it lies hidden behind the frontal and temporal lobes. Evidence from previous studies indicates that it is involved in auditory processing, but knowledge about its precise functional role and the underlying electrophysiology is limited. We therefore assessed its role in automatic auditory deviance detection, the brain's fundamental capacity to detect a novel stimulus within a sequence of regular stimuli.
We analyzed the electrophysiological activity from 90 intracranial EEG channels implanted in the insular cortex across 16 patients undergoing pre-surgical monitoring for epilepsy treatment. Subjects passively listened to a stream of standard and deviant tones differing in four physical dimensions: intensity, frequency, location, or time. Responses to auditory stimuli were found in different areas of the insular cortex (the short and long gyri, and the anterior, superior, and inferior segments of the circular sulcus). Only a well-localized subset of channels (in the inferior segment of the circular sulcus) showed deviance detection responses. These results provide evidence that the human insula is engaged during auditory deviance detection. 
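
For readers who want a concrete picture of what "deviance detection responses" means computationally, the sketch below compares post-stimulus response amplitudes to deviant versus standard tones, channel by channel. Everything in it is illustrative: the synthetic data, the 90-channel layout, the response window, and the simple t-test with Bonferroni correction stand in for whatever preprocessing and statistics the paper actually used.

    import numpy as np
    from scipy import stats

    # Illustrative dimensions: 90 channels, epochs of ~0.6 s at ~512 Hz.
    # In a real study these arrays would come from preprocessed iEEG epochs.
    rng = np.random.default_rng(0)
    n_channels, n_std, n_dev, n_times = 90, 400, 100, 307

    standard = rng.normal(size=(n_std, n_channels, n_times))  # epochs x ch x time
    deviant = rng.normal(size=(n_dev, n_channels, n_times))
    deviant[:, 42, 150:200] += 0.8  # toy "deviance response" on one channel

    # Mean response in a post-stimulus window, per epoch and channel.
    win = slice(150, 250)
    std_amp = standard[:, :, win].mean(axis=2)
    dev_amp = deviant[:, :, win].mean(axis=2)

    # Channel-wise two-sample t-test, Bonferroni-corrected across channels.
    t, p = stats.ttest_ind(dev_amp, std_amp, axis=0)
    significant = p * n_channels < 0.05
    print("channels with deviance responses:", np.flatnonzero(significant))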

Bio

My major interest is how our brains predict future events. Is the brain, to some extent, a prediction machine? Predictions are omnipresent in our lives: when reading this text, our brains are predicting the following words, and when playing tennis, we expect the ball to bounce off the ground in a precise way. However, we know little about how these predictions are implemented in our brains at the neurophysiological level. In my research, I use experiments in the auditory domain to characterize the neuronal networks that are active when we make predictions and, more interestingly, when unexpected events violate those predictions.
I'm interested in understanding the roles of different brain areas during predictive processes and how these areas communicate with each other. I work mainly with intracranial recordings obtained from epilepsy patients implanted for medical reasons with grids (ECoG) or depth electrodes (SEEG). These recordings allow us to observe brain activity with unique spatio-temporal resolution. I also study patients with frontal lobe lesions to better understand the role of the frontal lobe in the prediction network.
Additionally, I'm interested in methods for localizing intracranial electrodes. In this vein, I developed iElectrodes, an open-source MATLAB® toolbox for localizing intracranial electrodes using MRI and CT images.
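
iElectrodes itself is MATLAB code, so the snippet below is not taken from the toolbox; it is a minimal Python sketch of the underlying idea, namely that implanted electrodes show up in a post-implantation CT as small, very bright clusters that can be found by thresholding and connected-component labeling. The volume, intensity values, and threshold are all made up for illustration.

    import numpy as np
    from scipy import ndimage

    # Synthetic CT volume: electrodes are small bright blobs on a noisy background.
    rng = np.random.default_rng(1)
    ct = rng.normal(0, 50, size=(64, 64, 64))
    for center in [(20, 20, 20), (40, 30, 25), (30, 45, 40)]:
        ct[tuple(slice(c - 1, c + 2) for c in center)] = 3000  # metal artifact

    # Threshold well above tissue intensity, then label connected components.
    mask = ct > 2000
    labels, n_found = ndimage.label(mask)

    # Centroid of each component approximates an electrode coordinate (in voxels);
    # a real pipeline would then transform these into the MRI/anatomical space.
    centroids = ndimage.center_of_mass(mask, labels, range(1, n_found + 1))
    for i, c in enumerate(centroids, 1):
        print(f"electrode {i}: voxel {tuple(round(x) for x in c)}")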

Friday, March 10, 2017

Infographic: Mapping Musicality

reposted from


Huge areas of the brain respond to any sort of auditory stimulus, making it difficult for scientists to nail down regions that are important for music processing.
By  | March 1, 2017
© CATHERINE DELPHIA

Functional magnetic resonance imaging (fMRI) studies have taken diverse approaches to pinpointing areas involved in musical perception, providing “musical” stimuli ranging from human singing to synthesized piano melodies and other computer-generated sounds, and yielding equally varied results. Despite these hurdles, research is beginning to offer some clues about the regions of the brain involved in musical perception.

Music specificity

Based on Cortex, 59:126-37, 2014
Music activates diverse areas of the brain, from the primary auditory cortex to the amygdala. But the degree to which certain areas are specifically geared to processing music, as opposed to other sounds, is unclear. By comparing activation patterns in the brain while people listened to nonmusical human vocalizations, such as speech or laughter, or to instrumental music, researchers found that certain regions responded more strongly to one type of auditory stimulus than the other. For example, parts of the superior temporal gyrus (STG), the superior temporal sulcus (STS), and the inferior frontal gyrus (IFG) showed stronger responses to vocalizations than to music (orange), while other areas such as the planum polare (part of the anterior STG) showed stronger responses to music than to vocalizations (blue).
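
In computational terms, this kind of result comes from a voxelwise contrast: for every voxel, test across subjects whether responses to one stimulus class reliably exceed responses to the other. Here is a toy sketch with fabricated response estimates; the subject and voxel counts, the paired t-test, and the Bonferroni threshold are illustrative choices, not the methods of the cited study.

    import numpy as np
    from scipy import stats

    # Fabricated per-subject response estimates (betas): subjects x voxels,
    # one matrix per condition. Real values would come from a first-level GLM.
    rng = np.random.default_rng(2)
    n_subjects, n_voxels = 16, 5000
    beta_vocal = rng.normal(size=(n_subjects, n_voxels))
    beta_music = rng.normal(size=(n_subjects, n_voxels))
    beta_music[:, :50] += 1.0  # toy "music-preferring" voxels

    # Paired t-test per voxel: music vs. vocalization within subject.
    t, p = stats.ttest_rel(beta_music, beta_vocal, axis=0)

    # Voxels surviving a (crude) Bonferroni threshold, split by contrast sign.
    sig = p * n_voxels < 0.05
    print("music > vocal voxels:", np.sum(sig & (t > 0)))
    print("vocal > music voxels:", np.sum(sig & (t < 0)))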

Beat and pitch

Based on Cereb Cortex, 24:836-43, 2014 and Philos Trans R Soc Lond B Biol Sci, 370:20140093, 2015 (left); Front Psychol, 3:76, 2012 (right)
Some fMRI studies have focused on identifying the brain circuitry underlying specific components of auditory perception. For example, the primary auditory cortex (located in the STG) and the thalamus are thought to play prominent roles in beat perception for both music and speech, and trained musicians may recruit extra language-processing areas such as the supramarginal gyrus (SMG) when listening to complex rhythms. In addition, several regions considered to be part of the motor system have been associated with beat perception, including the supplementary motor area (SMA) and the premotor cortex (PMC), suggesting an important link between perceiving a rhythm and synchronizing movement to it.
Studies of pitch processing, meanwhile, have repeatedly highlighted a role for the auditory cortex, although evidence for the overlap between speech and music in this and other areas is mixed. Some regions, however, including the intraparietal sulcus (IPS, located on the parietal lobe), appear to be activated more by pitch in sung words than by pitch in spoken words. Additional observations revealed differential lateralized activity for song and speech: the left inferior frontal gyrus (IFG), for example, dominates in pitch processing for speech, while the right IFG takes over for song.

Understanding the Roots of Human Musicality

Researchers are using multiple methods to study the origins of humans’ capacity to process and produce music, and there’s no shortage of debate about the results.
By  | March 1, 2017
© ISTOCK.COM/LIUDMYLA SUPNYSKA

Getting to Santa María, Bolivia, is no easy feat. Home to a farming and foraging society, the village is located deep in the Amazon rainforest and is accessible only by river. The area lacks electricity and running water, and the Tsimane’ people who live there make contact with the outside world only occasionally, during trips to neighboring towns. But for auditory researcher Josh McDermott, this remoteness was central to the community’s scientific appeal.
In 2015, the MIT scientist loaded a laptop, headphones, and a gasoline generator into a canoe and pushed off from the Amazonian town of San Borja, some 50 kilometers downriver from Santa María. Together with collaborator Ricardo Godoy, an anthropologist at Brandeis University, McDermott planned to carry out experiments to test whether the Tsimane’ could discern certain combinations of musical tones, and whether they preferred some over others. The pair wanted to address a long-standing question in music research: Are the features of musical perception seen across cultures innate, or do similarities in preferences observed around the world mirror the spread of Western culture and its (much-better-studied) music?
“Particular musical intervals are used in Western music and in other cultures,” McDermott says. “They don’t appear to be random—some are used more commonly than others. The question is: What’s the explanation for that?”
TSIMANE’ TESTS: Ricardo Godoy of Brandeis University tests the musical preferences of a Tsimane’ woman in Santa María, Bolivia. JOSH MCDERMOTT

Ethnomusicologists and composers have tended to favor the idea that these musical tendencies are entirely the product of culture. But in recent years, scientific interest in the evolutionary basis for humans’ musicality—our capacity to process and produce music—has been on the rise. With it has come growing enthusiasm for the idea that our preference for consonant intervals—tonal combinations considered pleasant to Western ears, such as a perfect fifth or a major third—over less pleasant-sounding, dissonant ones is hardwired into our biology. As people with minimal exposure to Western influence, the Tsimane’ offered a novel opportunity to explore these ideas.
If these properties are absent in some cultures, they can’t be strictly determined by something in the biology.—Josh McDermott, MIT
Making use of the basic auditory equipment they’d brought by canoe, McDermott and his colleagues carried out a series of tests to investigate how members of this community responded to various sounds and musical patterns. The team found that although the Tsimane’ could distinguish consonance from dissonance, they apparently had no preference for one over the other. McDermott interprets the results as evidence against a strong biological basis for preference.1 “If these properties are absent in some cultures, they can’t be strictly determined by something in the biology—on the assumption that the biology in these people is the same as it is in us,” he says.
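
The acoustic side of such a test is straightforward to sketch. A consonant interval like the perfect fifth sits at a simple 3:2 frequency ratio, while a dissonant minor second (16:15) produces slow beating between its components, one classic acoustic correlate of roughness. The code below synthesizes such tone pairs; the base frequency, duration, and interval choices are textbook illustrations, not the team's actual stimuli.

    import numpy as np

    def dyad(f0, ratio, duration=1.0, sr=44100):
        """Two simultaneous sine tones at f0 and f0 * ratio."""
        t = np.arange(int(duration * sr)) / sr
        return np.sin(2 * np.pi * f0 * t) + np.sin(2 * np.pi * f0 * ratio * t)

    base = 262.0  # roughly middle C, in Hz
    consonant = dyad(base, 3 / 2)    # perfect fifth (3:2)
    dissonant = dyad(base, 16 / 15)  # minor second (16:15)

    # Beating occurs at the difference between the component frequencies;
    # the minor second's slow beating is heard as roughness, while the
    # fifth's components are too far apart to beat audibly.
    for name, ratio in [("perfect fifth", 3 / 2), ("minor second", 16 / 15)]:
        print(f"{name}: components beat at ~{base * ratio - base:.1f} Hz")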
But the authors’ publication of their results proved controversial. While some took the findings to imply that culture, not biology, is responsible for people’s musical preferences, others argued that the dichotomy was a false one. Just because there’s variation in perception, it doesn’t mean there’s no biological basis, says Tecumseh Fitch, an evolutionary biologist and cognitive scientist at the University of Vienna. “Almost everything has a biological basis and an environmental and cultural dimension,” he says. “The idea that those are in conflict with one another, this ‘nature versus nurture,’ is just one of the most consistently unhelpful ideas in biology.”
Identifying the biological and cultural influences on humans’ musicality is one of various thorny issues that researchers working on the cognitive science of music are currently tackling. The field has exploded in recent years, and while many answers have yet to materialize, “the questions have been clarified,” says Fitch, who was one of more than 20 authors contributing to a special issue of Philosophical Transactions B on the subject in 2015. For example, “rather than talking about the evolution of music, we’re talking now about the evolution of musicality—a general trait of our species. That avoids a lot of confusion.”
Researchers are beginning to break this trait into various components such as pitch processing and beat synchronization (see Glossary); addressing the function and evolution of each of these tasks could inform the broader question of where humans’ musicality came from. But as illustrated by the discussions following McDermott’s recent publication, it’s clear just how much remains mysterious about the biological origins of this trait. So for now, the debates continue.

A mind for music?

MAPPING MUSIC: Huge areas of the brain respond to any sort of auditory stimulus, making it difficult for scientists to nail down regions that are important specifically for music processing; see the infographic above. © CATHERINE DELPHIA
Musical faculties don’t fossilize, so there’s little direct evidence of our musical past (see Time Signatures). But researchers may find clues in the much older study of another complex cognitive trait: speech perception. “Music and language are both sound ordered in time; they both have hierarchical structure; they’re in all cultures; and they’re very complex human activities,” says Fred Lerdahl, a composer and music theorist at Columbia University. “A lot of people, including me, think that music and language have, in some respects, a common origin.”
Numerous lines of evidence have supported this view. For example, Tufts University psychologist Ani Patel and colleagues showed a few years ago that patients with congenital amusia, a neurodevelopmental disorder of musical perception commonly known as tone deafness, also had difficulty perceiving intonation in speech.2 (See “Caterwauling for Science.”) And fMRI scans of normally hearing volunteers listening to recordings have revealed that large areas of the brain’s temporal lobes—regions involved in auditory processing—show heightened activation in response to both music and speech, compared with nonvocal sounds or silence.3 For many, these findings hint at the possibility of common neural circuitry for the processing of speech and music.
But other research points to dissociated processing for at least some components of music and language, suggesting that certain parts of the brain specialized in musicality during our evolution. Lesion studies, for example, show that brain damage can disrupt the processing of pitch in music without disrupting pitch processing in speech.4 And multivariate neuroimaging analyses with higher sensitivity than traditional methods indicate that, despite stimulating overlapping regions of the cortex, recordings of music and speech activate different neural networks.5 “People may take localization of activity as evidence for sharing,” notes Isabelle Peretz, a neuropsychologist at the University of Montreal. But given the low resolution of most current methods, “that’s nonsense, of course.”
McDermott’s lab recently reported more extreme dissociation. Using a novel approach to analyze fMRI data from people listening to more than 150 recordings of speech, music, nonverbal vocalizations, or nonvocal sounds, the team identified anatomically distinct pathways in the auditory cortex for speech and for music, along with other regions of the brain that responded selectively to each.6 “We find that they’re largely anatomically segregated,” McDermott says. “Speech selectivity seems to be located primarily lateral to primary auditory cortex, while music [selectivity] is localized mostly anterior to it.”
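
The decomposition in that study was a custom, hypothesis-free factorization of a voxel-by-sound response matrix. As a loose stand-in, off-the-shelf non-negative matrix factorization conveys the flavor: factor the matrix into a handful of component response profiles plus per-voxel weights, with no anatomical assumptions built in. The data below are fabricated, and sklearn's NMF is explicitly a substitute for the paper's own method.

    import numpy as np
    from sklearn.decomposition import NMF

    # Fabricated non-negative response matrix: voxels x sounds. Real input
    # would be each voxel's response to each of ~165 natural sound clips.
    rng = np.random.default_rng(3)
    n_voxels, n_sounds, n_components = 2000, 165, 6
    true_profiles = rng.gamma(2.0, size=(n_components, n_sounds))
    true_weights = rng.gamma(0.5, size=(n_voxels, n_components))
    responses = true_weights @ true_profiles + rng.gamma(1.0, size=(n_voxels, n_sounds))

    # Factor into component response profiles (over sounds) and voxel weights.
    model = NMF(n_components=n_components, init="nndsvda", max_iter=500, random_state=0)
    voxel_weights = model.fit_transform(responses)  # voxels x components
    profiles = model.components_                    # components x sounds

    # Mapping each component's voxel weights back onto the cortex is what
    # reveals, e.g., a music-selective region anterior to primary auditory cortex.
    print("reconstruction error:", round(model.reconstruction_err_, 1))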
The neural processing mechanisms themselves remain elusive, but studies like McDermott’s “clearly demonstrate that you can separate the representations for speech and music,” says Peretz. All the same, she notes, with current research continuing to present evidence both for and against a shared neural basis for music and speech perception, “the debate is still on.”
Another way researchers hope to throw more light on how the human brain has become tuned for musical perception is by looking at people’s DNA. “For me, [genetics] is the only way to study the evolutionary roots of musicality,” says Irma Järvelä, a medical geneticist at the University of Helsinki. In recent years, Järvelä’s group has researched genome-wide association patterns in Finnish families. In a preliminary study published last year, the team used standard music-listening tests to characterize participants as having either high or low musical aptitude, and identified at least 46 genomic regions associated with this variation.7 “We asked, what are the genes in these regions, and are these genes related to auditory perception?” she explains. In addition to homologs of genes associated with song processing and production in songbirds, the researchers identified genes previously linked with language development and hearing.
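
Stripped to its core, a genome-wide association scan of this kind repeats a simple test at each variant: do allele counts differ between the high- and low-aptitude groups? The numbers below are fabricated, and a real analysis would add covariates, relatedness corrections, and a genome-wide significance threshold rather than this single toy contingency table.

    from scipy.stats import chi2_contingency

    # Fabricated allele counts at one variant: rows = aptitude group,
    # columns = counts of the two alleles (A, a) among group members.
    table = [
        [180, 120],  # high musical aptitude: 180 A alleles, 120 a alleles
        [130, 170],  # low musical aptitude
    ]

    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, p = {p:.2g}")
    # Repeating this across millions of variants (with a genome-wide
    # threshold, conventionally p < 5e-8) flags candidate regions.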
Further clues about musicality’s genetic basis could come from the study of amusia. In 2007, Peretz and colleagues reported that congenital amusia runs in families.8 And recent descriptions of high amusia incidence in patients with genetic diseases such as Williams-Beuren syndrome, a condition associated with deletion of up to 28 genes on chromosome 7, may lead researchers to additional musicality-linked genes.9 “We are making progress along these lines, but there’s a lot more to be done,” says Peretz. “It’s really hard to do, and more expensive than neuroimaging. So we have to be patient.” But it’s progress worth waiting for, she adds, as an understanding of the genetics contributing to particular musical—or amusical—phenotypes could offer an entirely new perspective on the biological basis for musicality.
Music’s universality in humans, combined with its fundamental social and cultural roles, is convincing evidence to some that our musicality is adaptive.
Meanwhile, some researchers advocate looking to related species to answer questions about the origins of human musicality. Although nonhuman primates share our ability to distinguish between consonance and dissonance, many apes and monkeys have surprisingly different auditory processing. “Things that are fundamental to music that people thought would be ancient, general aspects of how animals process sound turn out not to be, and potentially reflect specialization in our brains,” says Patel. For example, the ability to synchronize movement to a beat, a capacity central to music, “doesn’t come naturally to our closest living relatives,” says Patel, though he adds that “it does come quite naturally to some other species,” including parrots, seals, and elephants. (See “John Iversen: Brain Beats.”)
Similarly, vocal learning—potentially a requirement for musicality—is known to be prevalent in several taxa, including some species of songbirds, parrots, whales, seals, bats, and elephants, but it is not well documented in any primate other than humans. (See “Singing in the Brain.”) “It raises the question of why,” Patel says. “What basic features of music perception are shared with other species, and what does that tell us about the evolution of those features?”

Why music?

© ISTOCK.COM/PEOPLEIMAGES

As researchers continue to probe how humans have evolved to process music, many scientists, and the public, have been increasingly drawn to another question concerning musicality’s origins: Why did it evolve at all? For some, music’s universality in humans, combined with its fundamental social and cultural roles, is persuasive evidence that our musicality is adaptive. “Music is so common in all societies,” says Helsinki’s Järvelä. “There must be favorable alleles; it must be beneficial to humans.”
But just what this benefit might be, and whether it did indeed influence our evolution, have been the objects of what Patel calls “one of the oldest debates in the book.” In the late 1990s, cognitive psychologist Steven Pinker famously dubbed music “auditory cheesecake”—pleasant, but hardly essential—and argued that musicality was nothing more than a by-product of neural circuitry evolved to process language and other auditory inputs. It’s become the argument to beat for researchers looking for ultimate explanations of musicality’s evolution in humans, Fitch says. “Everybody seems to want to prove that Pinker’s cheesecake argument is wrong,” he notes. “But it’s just the null hypothesis.”
One adaptationist viewpoint, which traces its roots to Darwin, is that human musicality, like birdsong, is a sexually selected trait—albeit an unusual one, prevalent as it is in both sexes. Musicality is a reliable and visible indicator of cognitive ability, the argument goes, and so informs a potential mate of an individual’s genetic quality. Some researchers have tried to generate testable predictions from this idea, but so far there’s been little evidence in its favor. One recent study went as far as assessing the self-reported sexual success—based on indicators including the number of sex partners and age at first intercourse—of more than 10,000 pairs of Swedish twins.10 The researchers found no association between musical ability and sexual success, but cautioned against being quick to draw conclusions about the sexual relationships of our evolutionary ancestors from modern society.
Other hypotheses arise from research on music’s far more complex and still poorly understood effects on human emotion and social bonding. University of Toronto psychologist Sandra Trehub notes, for example, that babies and young children are particularly sensitive to musical communication, and that singing comes naturally to adults interacting with them. “Caregivers around the world sing to infants,” she says. “It’s not a Western phenomenon, nor a class-based phenomenon. It seems to be important for caregiving everywhere.”
She and her colleagues recently showed that recordings of singing, more so than speech, could delay the time it took for an infant to become distressed when unable to see another person.11 And in 2014, research led by Laurel Trainor at McMaster University found that when babies just over a year old were bounced to music, they became more helpful towards a researcher standing opposite them who had been bopping along in rhythm (handing back “accidentally” dropped objects) than to people who had been bouncing asynchronously.12
Are musical tendencies the product of culture, or have they evolved along with our abilities to produce and process music?
These and related findings have led some to propose that parent-infant bonding, or social cohesion in general, provided a selective pressure that favored the evolution of musicality in early humans, though Trehub herself says she does not subscribe to this rather speculative view. “I have no difficulty imagining a time when music-like things would have been very important in communicating global notions and managing interpersonal relationships,” she says. “But it’s pretty hard, based on anything we look at now, to relate it to conditions in ancient times and the functions it would have served.”
Indeed, the inherent challenge of studying ancient hominin behavior, combined with the complexity of the trait itself, makes explanations for musicality’s evolution particularly vulnerable to “just-so” stories, says Trainor. “When you look at the effect that music has on people, it’s easy to think it must have been an evolutionary adaptation. Of course, it’s very difficult, if not impossible, to prove that something is an evolutionary adaptation.”
This intractability has led some researchers to view adaptation-based lines of inquiry into human musicality as something of a distraction. “I don’t think it’s a particularly useful question at all,” says Fitch. “It’s an unhealthy preoccupation, given how little we know.” Others have argued for a subtler view of musicality’s evolution that avoids the search for simple answers. “The evolutionary process isn’t a one-shot thing,” says Trainor. “It has many nuanced stages.”
Her work, for example, addresses how aspects of auditory scene analysis—the process by which animals locate the source of sounds in space—could have led to features currently viewed as critical for musicality in modern humans. But that doesn’t mean that music didn’t provide its own benefits once it arose. “I think parts of the long road to our becoming musical beings were driven by evolutionary pressures [for music itself],” says Trainor, “and other parts of it were driven by evolutionary pressures for things other than music that music now uses.”
But most researchers agree that understanding our musical evolution will require studying musicality in more-focused and biologically relevant ways. For example, instead of asking why musicality evolved, Fitch suggests researchers investigate why humans evolved to synchronize their movements to a beat. This approach “is what’s really important,” says Patel. “We’ve had hundreds of years of speculation. Now, I think, the real advances are being made by thinking about the individual components of music cognition and looking at them in an evolutionary framework.” 
Time Signatures

© SASCHA SCHUERMANN/AFP/GETTY IMAGES; © ISTOCK.COM/ANGEAL

Without physical evidence of ancient humans’ musical perception, researchers look for signs of our capacity to produce music to approximate the timescale of musicality’s evolution. One way to do this is through archaeology. The oldest undisputed musical instruments are bone flutes found in caves in Germany that have been dated as more than 40,000 years old (J Hum Evo, 62:664-76, 2012). But many researchers argue that the use of the voice as an instrument likely came much earlier than that.
To put an upper limit on the age of vocal musicality, some have turned to human anatomy. Producing complex vocalizations requires both a powerful brain and specialized vocal machinery. During hominin evolution, for example, the thorax became more innervated, a change that allowed humans (and Neanderthals) to more effectively control the pitch and intensity of their vocalizations. The fossil record indicates that the first hominins with breath control like ours lived a maximum of 1.6 million years ago, which some suggest marks the first time our lineage would have been physically capable of producing vocalizations resembling singing (Am J Phys Anthropol, 109:341-63, 1999).
Genetics might also help researchers pin down when certain components of musicality appeared in our ancestors, if parts of our DNA can be linked to our capacity for perceiving and processing music. For now, however, the question of when humans first produced something we might recognize as music remains open to speculation.

References

  1. J.H. McDermott et al., “Indifference to dissonance in native Amazonians reveals cultural variation in music perception,” Nature, 535:547-50, 2016.
  2. F. Liu et al., “Intonation processing in congenital amusia: Discrimination, identification and imitation,” Brain, 133:1682-93, 2010.
  3. I. Peretz et al., “Neural overlap in processing music and speech,” Philos Trans R Soc B, doi:10.1098/rstb.2014.0090, 2015.
  4. I. Peretz et al., “Functional dissociations following bilateral lesions of auditory cortex,” Brain, 117:1283-301, 1994.
  5. C. Rogalsky et al., “Functional anatomy of language and music perception: Temporal and structural factors investigated using functional magnetic resonance imaging,” J Neurosci, 31:3843-52, 2011.
  6. S. Norman-Haignere et al., “Distinct cortical pathways for music and speech revealed by hypothesis-free voxel decomposition,” Neuron, 88:1281-96, 2015.
  7. X. Liu et al., “Detecting signatures of positive selection associated with musical aptitude in the human genome,” Sci Rep, 6:21198, 2016.
  8. I. Peretz et al., “The genetics of congenital amusia (tone deafness): A family-aggregation study,” Am J Hum Genet, 81:582-88, 2007.
  9. M.D. Lense et al., “(A)musicality in Williams syndrome: Examining relationships among auditory perception, musical skill, and emotional responsiveness to music,” Front Psychol, 4:525, 2013.
  10. M.A. Mosing et al., “Did sexual selection shape human music? Testing predictions from the sexual selection hypothesis of music evolution using a large genetically informative sample of over 10,000 twins,” Evol Hum Behav, 36:359-66, 2015.
  11. M. Corbeil et al., “Singing delays the onset of infant distress,” Infancy, 21:373-91, 2015.
  12. L.K. Cirelli et al., “Interpersonal synchrony increases prosocial behavior in infants,” Dev Sci, 17:1003-11, 2014.

Wednesday, September 9, 2015

The Sounds of Silence: Science-based tinnitus therapeutics are finally coming into their own.

reposted from here

By  | September 1, 2015
STOP THE RINGING: Tinnitus can manifest early in auditory perception, as damage to the inner ear, or in the brain where sounds are processed. Researchers developing treatments for the condition are targeting various points along this pathway. © MARI SCHMITT/SCIENCE SOURCE; © ENCYCLOPEDIA BRITANNICA/UIG/GETTY IMAGES
It often starts off with a bang. Many a soldier, construction worker, concertgoer, or innocent passerby exposed to a loud noise walks away with the telltale symptom of tinnitus, a persistent ringing in the ears. The condition can also arise from other ear traumas, such as middle-ear infections or exposure to high pressure while scuba diving, and begins with damage to the hair cells in the cochlea of the inner ear or to the auditory nerve. Until recently, such damage was thought to be the cause of the phantom sounds that plague tinnitus sufferers. Now, researchers are realizing that it’s much more complex than that.
“Damage to hair cells and auditory nerve fibers sets the stage for the development of tinnitus,” says Jennifer Melcher of the Massachusetts Eye and Ear Infirmary. But the true culprit is really the brain, which eventually begins to compensate for the loss of input from the ear by “turning up the volume” on the sound signals it is trying to pick up, she adds. Navzer Engineer, chief scientific officer of Dallas-based MicroTransponder, which is developing a neurostimulative treatment for tinnitus, agrees: “Cells in the brain don’t stay dormant” even though they have lost input from the ear, he says.
It’s unclear when the condition transitions from the ear to the brain. Researchers also do not yet know whether the brain or peripheral nerves are primarily responsible for amplifying the spontaneous neural activity in the auditory pathway. But in the end the effect is the same: the brain begins to perceive sounds of its own creation. “The pathology is in the ear . . . but the sounds are generated by the brain,” says Engineer.
The University of Regensburg’s Berthold Langguth, chairman of the executive committee of the Tinnitus Research Initiative, likens the compensatory sound to the phantom limb sensation experienced by amputees. And like the phenomenon of phantom limbs, there’s not just a single brain region at fault. In addition to the auditory cortex, the limbic cortex—particularly the amygdala, the brain’s emotional center—as well as the temporal, parietal, and sensorimotor cortex areas have all been implicated in tinnitus perception (Curr Biol, 25:1208-14, 2015; eLife, 4:e06576, 2015).
A better scientific understanding of tinnitus could be key to developing an effective treatment. One in five Americans has tinnitus, including more than a million veterans who experienced loud noises in the line of duty, and many suffer a severe form of the disorder. Yet treatment options are largely limited to cognitive behavioral therapy to learn to tune out the sound and physical exercises such as contracting the head and neck muscles (clenching the jaw, for example) to adjust the rogue sound’s pitch or loudness. For those who continue to suffer significant psychological and emotional consequences of tinnitus, there has been no pharmaceutical treatment or cure. “It’s a very desperate group,” Engineer says.

Inside the ear

The most advanced treatment in development for tinnitus targets the auditory neurons that connect the hair cells of the inner ear to the auditory cortex. In the mid-1990s, researchers at Inserm in Montpellier, France, found that chemically inducing tinnitus in rats was associated with upregulated N-methyl-D-aspartate (NMDA) receptors on the animals’ cochlear neurons (J Neurosci, 23:3944-52, 2003). NMDA receptors play a role in forming new synapses at these neurons, and regulate the levels of other neuronal receptors. In 2003, teaming up with Swiss entrepreneur Thomas Meyer and his company Auris Medical, the Inserm researchers also observed such increased levels of NMDA receptors in rodents suffering from noise-induced tinnitus. Prior to noise trauma, the animals had been trained to jump onto a pole in response to a sound, and after trauma, rodents with tinnitus continued these behaviors, even in the absence of an external tone.
To treat the condition, the group set about designing a drug that would block NMDA receptors. These days, Auris is testing the small-molecule drug S-ketamine in two Phase 3 trials of trauma-induced tinnitus patients. The treatment, delivered directly into the inner ear via three injections over three days, must catch the disorder while the problem is still within the ear, before the brain has begun overcompensating for the loss of hearing. Once that happens, no amount of adjustment to the receptors on the auditory nerves will do any good.
Because it is not known when that transition from ear to brain occurs, one of the current trials, of 300 European patients, is specifically testing tinnitus sufferers who have developed the condition no more than three months prior to treatment. The other, a study of 330 North American patients, is investigating a therapy within one year post-trauma. Preliminary results suggest that S-ketamine is effective beyond three months, but declines in effectiveness within a year of the initial trauma, so later stages of the trial are being refocused on the four- to six-month time frame. The trials will be completed at the end of this year, and Auris hopes to submit to the US Food and Drug Administration (FDA) for approval in the summer of 2016.
“[The hope is] that this might show benefits and might become the first drug to be approved for the treatment of tinnitus,” says Langguth, who is not affiliated with Auris. Because the therapeutic is delivered directly into the ear, he thinks that it will be particularly useful for patients who also suffer from hearing loss, an extremely common comorbidity of tinnitus.
S-ketamine will probably not work for all tinnitus sufferers, however, says Meyer. “We feel it’s important to get started and then see what else can be done with this.”

Chemically modifying neurons

Meanwhile, other researchers are developing therapies that target the brain to treat patients whose tinnitus has progressed to the auditory cortex. One strategy currently under investigation is the manipulation of the potassium channels found throughout the auditory pathway. “[Using] potassium channel modulators, the activity in the central auditory pathway can be changed,” Langguth says.
In tinnitus, the auditory maps in the brain rewire themselves without external stimulation.
U.K.-based Autifony Therapeutics began in 2011 as an outgrowth of GlaxoSmithKline’s investigation of potassium channels in the auditory system. Autifony CEO Charles Large and his colleague Giuseppe Alvaro are focusing on the previously unexamined Kv3 potassium channels, which exist throughout the brain and in high abundance on the auditory nerve and in the auditory cortex, where they allow neurons to signal rapidly. After exposure to loud noises, these channels can be damaged and fail to properly conduct ions, making them an ideal drug target for the treatment of tinnitus.
Working with academic collaborators, Autifony researchers developed a small-molecule drug that enhances the function of the Kv3 channels. In rodent models, the drug reduced the spontaneous neural activity in the midbrain auditory system associated with tinnitus. “We’re dampening down a spurious activity that is believed to give rise to the phantom perception,” says Large. “We have a lot of confidence from our preclinical work that we should see some interesting effects in people with tinnitus.”
Autifony researchers are currently recruiting patients for Phase 2 trials in the U.K. In contrast to Auris Medical’s target patient population, Autifony focuses on people whose tinnitus is established in the brain and who have had the disorder for at least six months (but no more than 18 months). The treatment is currently taken as a daily oral pill for 28 days, although the length of the treatment course is still under investigation.
“Autifony is really quite unique in having a drug treatment that’s been rationally designed around the idea that we can dampen down the hyperexcitability that we see in the nervous system,” Large says.

Retraining the brain

For patients with chronic tinnitus beyond the 18-month window being targeted by Autifony, a third potential treatment is making its way through clinical trials. MicroTransponder’s therapy is a riff on a decades-old treatment for epilepsy and depression called vagus-nerve stimulation. More than 90,000 patients have undergone such treatment.
MicroTransponder was started out of Michael Kilgard’s lab at the University of Texas, Dallas, where Engineer conducted his postdoctoral research. In 1998, Kilgard’s group published a rat study demonstrating that direct stimulation of the nucleus basalis of the forebrain could be paired with the playing of a particular tone to change how sounds map to the brain’s auditory cortex (Science, 279:1714-18, 1998). The researchers were later able to accomplish the same sound remapping in the rat brain by stimulating the more-accessible vagus nerve, which projects to the nucleus basalis (Nature, 470:101-04, 2011).
The auditory maps in the brains of tinnitus sufferers rewire themselves without external stimulation. In the human inner ear, the cochlea contains more than 3,500 inner hair cells, each of which is tuned to a single frequency. As these cells are damaged by loud noise, infection, or other insults, the brain is deprived of normal input from the ear at particular frequencies. As a result, neurons that represent adjacent frequencies expand their range to include the missing frequencies. These neighboring neurons begin to fire spontaneously, sending phantom signals to create the perceived sound of tinnitus. Kilgard’s work suggests that retraining the auditory cortex by pairing tones with electrical stimulation could correct such abnormal firing. “There was the idea that maybe there could be specific forms of auditory stimulation which could have a beneficial effect,” Langguth says.
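
A toy simulation makes this remapping story concrete: assign each model neuron a preferred frequency, delete peripheral input across a damaged band, and let the deafferented neurons adopt the nearest surviving frequency, over-representing the edge of the loss. This is a deliberate cartoon of the mechanism described above, not a biophysical model, and every number in it is invented.

    import numpy as np

    # Tonotopic axis: 100 cortical units with preferred frequencies
    # log-spaced between 100 Hz and 8 kHz.
    preferred = np.geomspace(100, 8000, 100)

    # Noise damage knocks out peripheral input between 3 and 5 kHz.
    damaged = (preferred > 3000) & (preferred < 5000)

    # Deafferented units "retune" to the nearest surviving frequency,
    # over-representing the edges of the damaged band.
    surviving = preferred[~damaged]
    retuned = preferred.copy()
    for i in np.flatnonzero(damaged):
        retuned[i] = surviving[np.argmin(np.abs(surviving - preferred[i]))]

    # In this cartoon the over-represented edge frequencies fire without
    # acoustic input, one story for why tinnitus pitch tracks the loss edge.
    values, counts = np.unique(np.round(retuned, -2), return_counts=True)
    print("most over-represented frequency (Hz):", values[np.argmax(counts)])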
Engineer’s stimulation therapy has successfully stemmed tinnitus in a rat model, in which the animals were exposed to a loud noise that impaired their hearing. The treatment, now in human trials, involves two incisions in the neck and chest wall to insert a helical electrode, which winds around the left vagus nerve in the neck, and wires to connect the electrode to a pacemaker-like pulse generator in the chest. The researchers determine the pitch of a patient’s tinnitus by playing various tones until the patient reports a match with the perceived sound, then pair tones near but not at the tinnitus pitch with vagus-nerve stimulation in half-second pulses. The idea is to train the brain regions that have begun to fire spontaneously—and cause tinnitus—to respond only to the non-tinnitus frequencies that the ear actually hears. “[It] actually reverts the auditory cortex map down to normal,” Engineer says. Vagus-nerve stimulation or the tones by themselves don’t work, he notes. “The key is the pairing.” The course of treatment is a 2.5-hour daily listening session for six weeks.
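
The tone-selection rule, frequencies near but never at the matched tinnitus pitch, is simple enough to sketch. The audible range, half-octave exclusion band, and tone count below are guesses for illustration, not MicroTransponder's clinical protocol.

    import numpy as np

    def pairing_tones(tinnitus_hz, n_tones=8, exclusion_octaves=0.5):
        """Candidate pairing tones spanning a wide frequency range, skipping
        a half-octave band around the matched tinnitus pitch (all values
        illustrative, not a clinical protocol)."""
        candidates = np.geomspace(500, 16000, n_tones * 4)
        distance = np.abs(np.log2(candidates / tinnitus_hz))
        keep = candidates[distance > exclusion_octaves]
        return np.random.default_rng(0).choice(keep, size=n_tones, replace=False)

    # Example: a patient whose tinnitus matches an 8 kHz tone.
    for f in sorted(pairing_tones(8000)):
        print(f"{f:.0f} Hz")  # each tone would be paired with a 0.5 s VNS pulse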
In a preliminary 10-patient study in Belgium, about half of patients with chronic tinnitus improved (Neuromodulation, 17:170-79, 2014). However, the researchers noted decreased efficacy if the patients were on antidepressants. Stimulating the vagus nerve causes the release of the neurotransmitters norepinephrine and acetylcholine. Antidepressant medications can interfere with this release, suggesting that these natural chemicals are required for the vagus-nerve stimulation treatment for tinnitus to work. The proof-of-concept trial was followed up by a larger-scale study of 30 patients at four sites in the U.S. that concluded this April. The most common side effect was a hoarse voice, but otherwise the treatment is considered safe. Results from the trial will be published this autumn, but Engineer says that the data look promising.

Looking ahead

While there is still no approved drug to treat tinnitus, Meyer of Auris Medical is optimistic that the future for patients suffering from the disorder is bright. “We have learned a tremendous amount over the last few years. We know things we absolutely had no idea about 10 years ago,” he says. In addition to the therapies currently in trials for acute tinnitus, “I believe that long-term there will be also solutions for chronic tinnitus,” he adds.
Meanwhile, further research into the pathophysiology of the disease will be critical to develop targeted treatments. “There’s not one tinnitus,” Langguth says. “There are probably many forms, which differ in their mechanisms and differ in their best possible treatment.” Studies that help scientists better delineate these different forms of tinnitus into clinically meaningful subgroups will likely inform future drug targets, he adds.
“The hearing space is where ophthalmology was 10 or 12 years ago,” says Autifony executive Barbara Domayne-Hayman. At that time, the basic research community was not that interested in certain eye disorders, “whereas now it’s an extremely hot and active space. We think that hearing is going to go in exactly the same way,” she adds.