The Effect of Musical Training and Working Memory in Adverse Listening Situations Objectives: Speech-in-noise (SIN) perception is essential for everyday communication. In most communication situations, the listener requires the ability to process simultaneous complex auditory signals to understand the target speech or target sound. As the listening situation becomes more difficult, the ability to distinguish between speech and noise becomes dependent on recruiting additional cognitive resources, such as working memory (WM). Previous studies have explored correlations between WM and SIN perception in musicians and nonmusicians, with mixed findings. However, no study to date has examined the speech perception abilities of musicians and nonmusicians with similar WM capacity. The objectives of this study were to investigate (1) whether musical experience results in improved listening in adverse listening situations, and (2) whether the benefit of musical experience can be separated from the effect of greater WM capacity. Design: Forty-nine young musicians and nonmusicians were assigned to subgroups of high versus low WM, based on their performance on the backward digit span test. To investigate the effects of music training and WM on SIN perception, performance was assessed on clinical tests of speech perception in background noise. Listening effort (LE) was assessed in a dual-task paradigm and via self-report. We hypothesized that musicians would have an advantage when listening to SIN, at least in terms of reduced LE. Results: There was no statistically significant difference between musicians and nonmusicians, and no significant interaction between music training and WM on any of the outcome measures used in this study. However, a significant effect of WM on SIN ability was found on both the Quick Speech-In-Noise (QuickSIN) test and the Hearing in Noise Test (HINT).
Conclusion: The results of this experiment suggest that music training does not provide an advantage in adverse listening situations either in terms of improved speech understanding or reduced LE. While musicians have been shown to have heightened basic auditory abilities, the effect on SIN performance may be more subtle. Our results also show that regardless of prior music training, listeners with high WM capacity are able to perform significantly better on speech-in-noise tasks. The authors have no conflicts of interest to disclose. Received July 25, 2018; accepted May 8, 2019. Address for correspondence: Jillian Escobar, 5981 Andover Drive West, Hanover Park, IL 60133, USA. E-mail: jillian.escobar95@gmail.com Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved. |
Electro-Tactile Stimulation Enhances Cochlear-Implant Melody Recognition: Effects of Rhythm and Musical Training Objectives: Electro-acoustic stimulation (EAS) enhances speech and music perception in cochlear-implant (CI) users who have residual low-frequency acoustic hearing. For CI users who do not have low-frequency acoustic hearing, tactile stimulation may be used in a similar fashion as residual low-frequency acoustic hearing to enhance CI performance. Previous studies showed that electro-tactile stimulation (ETS) enhanced speech recognition in noise and tonal language perception for CI listeners. Here, we examined the effect of ETS on melody recognition in both musician and nonmusician CI users. Design: Nine musician and eight nonmusician CI users were tested in a melody recognition task with or without rhythmic cues in three testing conditions: CI only (E), tactile only (T), and combined CI and tactile stimulation (ETS). Results: Overall, the combined electrical and tactile stimulation enhanced the melody recognition performance in CI users by 9 percentage points. Two additional findings were observed. First, musician CI users outperformed nonmusician CI users in melody recognition, but the size of the enhancement effect was similar between the two groups. Second, the ETS enhancement was significantly higher with nonrhythmic melodies than rhythmic melodies in both groups. Conclusions: These findings suggest that, independent of musical experience, the size of the ETS enhancement depends on integration efficiency between tactile and auditory stimulation, and that the mechanism of the ETS enhancement is improved electric pitch perception. The present study supports the hypothesis that tactile stimulation can be used to improve pitch perception in CI users. ACKNOWLEDGMENTS: We thank our CI participants for their passionate participation in this study. J.H. designed and performed experiments, analyzed data, and wrote the article. B.S. 
created the testing software, processed testing stimuli, and edited the article. T.L. processed testing stimuli and edited the article. F.G.Z. designed the experiments and provided critical revision. All authors discussed the results and implications and commented on the article at all stages. This work was supported by NIH Grants DC015587 (F.Z.) and DC014503 (J.H.). The authors have no conflicts of interest to disclose. Received July 6, 2018; accepted April 19, 2019. Address for correspondence: Juan Huang, PhD, Department of Biomedical Engineering, Johns Hopkins University, Clark 106, 3400 N. Charles Street, Baltimore, MD 21218, USA. E-mail: jhuang7@jhu.edu Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved. |
Long-Term Sensorineural Hearing Loss in Patients With Blast-Induced Tympanic Membrane Perforations Objective: To describe characteristics of sensorineural hearing loss (SNHL) in patients with blast-induced tympanic membrane (TM) perforations that required surgery. Design: A retrospective review of hearing outcomes in those who had tympanoplasty for combat blast-induced TM perforations. These were sequential cases from one military otolaryngologist from 2007 to 2012. A total of 87 patients were reviewed, and of those, 49 who had appropriate preinjury, preoperative, and long-term audiograms were included. Those with pre-existing hearing loss were excluded. Preinjury audiograms were used to assess how sensorineural thresholds changed in the ruptured ears, and in the contralateral ear in those with unilateral perforations. Results: The mean time from injury to the final postoperative audiogram was 522 days. In the ears with TM perforations, 70% had SNHLs of 10 dB or less (by bone conduction pure tone averages). Meanwhile, approximately 8% had threshold shifts >30 dB, averaging 50 dB. The strongest predictor of severe or profound hearing loss was ossicular discontinuity. Thresholds also correlated with bilateral injury and perforation size. In those with unilateral perforations, the SNHL was almost always larger on the side with the perforation. Those with SNHL often had a low-to-mid frequency threshold shift and, in general, audiograms that were flatter across frequencies than those of a typical population of military personnel with similar levels of overall hearing loss. Conclusions: There is a bimodal distribution of hearing loss in those who experience a blast exposure severe enough to perforate at least one TM. Most ears recover close to their preinjury thresholds, but a minority experience much larger sensorineural threshold shifts. Blast-exposed ears also tend to have a flatter audiogram than most service members with similar levels of hearing loss. 
ACKNOWLEDGMENTS: The authors thank Shankar Sridhara and Sungjin Song for their previous work on this set of patients. The views expressed in this article are those of the authors and do not reflect the official policy of the Department of Army/Navy/Air Force, Department of Defense, or U.S. Government. The authors have no conflicts of interest to disclose. Received December 17, 2018; accepted April 23, 2019. Address for correspondence: Douglas Brungart, Walter Reed National Military Medical Center, Department of Audiology, 8901 Wisconsin Avenue, Bethesda, MD 20889, USA. E-mail: douglas.s.brungart.civ@mail.mil Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved. |
The Association Between Physiological Noise Levels and Speech Understanding in Noise Objectives: Traditionally, elevated hearing thresholds have been considered to be the main contributors to difficulty understanding speech in noise; yet, patients will often report difficulties with speech understanding in noise despite having audiometrically normal hearing. The purpose of this cross-sectional study was to critically evaluate the relationship between various metrics of auditory function (behavioral thresholds and otoacoustic emissions) and speech understanding in noise in a large sample of audiometrically normal-hearing individuals. Design: Behavioral hearing thresholds, distortion product otoacoustic emission (DPOAE) levels, stimulus-frequency otoacoustic emission levels, and physiological noise (quantified using OAE noise floors) were measured from 921 individuals between 10 and 68 years of age with normal pure-tone averages. The Quick Speech-in-Noise (QuickSIN) test outcome, quantified as the signal-to-noise ratio (SNR) loss, was used as the metric of speech understanding in noise. Principal component analysis (PCA) and linear regression modeling were used to evaluate the relationship between the measures of auditory function and speech in noise performance. Results: Over 25% of participants exhibited mild or worse degree of SNR loss. PCA revealed DPOAE levels at 12.5 to 16 kHz to be significantly correlated with the variation in QuickSIN scores, although correlations were weak (R2 = 0.017). Out of all the metrics evaluated, higher levels of self-generated physiological noise accounted for the most variance in QuickSIN performance (R2 = 0.077). Conclusions: Higher levels of physiological noise were associated with worse QuickSIN performance in listeners with normal hearing sensitivity. 
We propose that elevated physiological noise levels in poorer speech in noise performers could diminish the effective SNR, thereby negatively impacting performance as seen by poorer QuickSIN scores. ACKNOWLEDGMENTS: The authors thank Drs. Uzma Wilson, Niall Klyn, Courtney Glavin, Jungmee Lee, Gayla Poling, and Ms. Vickie Hellyer along with many other collaborators for assistance with recruitment, data collection, and analysis. This work was supported by NIH/NIDCD grants no. R01DC008420 (to S. D. and J. S.) and Northwestern University. The authors have no conflicts of interest to disclose. Received May 11, 2018; accepted May 3, 2019. Address for correspondence: Samantha Stiepan, Northwestern University, Room 1–246, Francis Searle Building, 2240 Campus Drive, Evanston, IL 60208, USA. E-mail: smstiepan@u.northwestern.edu Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved. |
Prediction Model for Audiological Outcomes in Patients With GJB2 Mutations Objectives: Recessive mutations in GJB2 are the most common genetic cause of sensorineural hearing impairment (SNHI) in humans. SNHI related to GJB2 mutations demonstrates a wide variation in audiological features, and there has been no reliable prediction model for hearing outcomes until now. The objectives of this study were to clarify the predominant factors determining hearing outcome and to establish a predictive model for SNHI in patients with GJB2 mutations. Design: A total of 434 patients confirmed to have biallelic GJB2 mutations were enrolled and divided into three groups according to their GJB2 genotypes. Audiological data, including hearing levels and audiogram configurations, were compared between patients with different genotypes. Univariate and multivariate generalized estimating equation (GEE) analyses were performed to analyze longitudinal data of patients with multiple audiological records. Results: Of the 434 patients, 346 (79.7%) were homozygous for the GJB2 p.V37I mutation, 55 (12.7%) were compound heterozygous for p.V37I and another GJB2 mutation, and 33 (7.6%) had biallelic GJB2 mutations other than p.V37I. There was a significant difference in hearing level and the distribution of audiogram configurations between the three groups. Multivariate GEE analyses on 707 audiological records of 227 patients revealed that the baseline hearing level and the duration of follow-up were the predominant predictors of hearing outcome, and that hearing levels in patients with GJB2 mutations could be estimated based on these two parameters: (Predicted Hearing Level [dBHL]) = 3.78 + 0.96 × (Baseline Hearing Level [dBHL]) + 0.55 × (Duration of Follow-Up [y]). Conclusion: The baseline hearing level and the duration of follow-up are the main prognostic factors for outcome of GJB2-related SNHI. 
These findings may have important clinical implications in guiding follow-up protocols and designing treatment plans in patients with GJB2 mutations. Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and text of this article on the journal's Web site (www.ear-hearing.com). ACKNOWLEDGMENTS: We thank all the patients and their parents for participating in the study. P.-Y. C. and C.-C. W. ascertained clinical data, analyzed data, and wrote the article. Y.-H.L. and Y.-H.L. performed genetic examinations and analyses. L.-H.T. collected and analyzed audiological data. T.-H.Y. collected clinical data. P.-L.C. supervised the genetic examinations and analyses. T.-C.L. and C.-J.H. supervised the whole study and provided critical revision. This work was supported by research grants from the National Health Research Institute (NHRI-EX106-10414PC to C.-C.W.), the Ministry of Science and Technology of Taiwan (MOST 103-2628-B-002-009-MY4 to C.-C.W.), and National Taiwan University Hospital Yunlin Branch (NTUHYL106.N005 to P.-Y.C.). The authors have no conflicts of interest to disclose. Received May 23, 2018; accepted March 8, 2019. Address for correspondence: Chen-Chi Wu, Department of Otolaryngology, National Taiwan University Hospital, 7, Chung-Shan South Road, Taipei, Taiwan. E-mail: chenchiwu@ntuh.gov.tw Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved. |
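The prediction equation reported in the abstract above can be expressed as a small function. This is an illustrative sketch of the published GEE model using only the coefficients quoted in the abstract; the function name is ours, and it is not a clinical tool.

```python
def predict_hearing_level(baseline_dbhl: float, follow_up_years: float) -> float:
    """Predicted hearing level (dB HL) per the GEE model in the abstract.

    Published equation: predicted = 3.78 + 0.96 x baseline hearing level
    (dB HL) + 0.55 x duration of follow-up (years). Illustrative only.
    """
    return 3.78 + 0.96 * baseline_dbhl + 0.55 * follow_up_years

# Example: a 40 dB HL baseline re-evaluated after 5 years of follow-up
print(round(predict_hearing_level(40.0, 5.0), 2))  # 3.78 + 38.4 + 2.75 = 44.93
```

As the abstract notes, the model implies that hearing deteriorates gradually with follow-up duration (0.55 dB HL per year) on top of a near-unity weighting of the baseline level.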
Human Frequency Following Responses to Vocoded Speech: Amplitude Modulation Versus Amplitude Plus Frequency Modulation Objectives: The most commonly employed speech processing strategies in cochlear implants (CIs) only extract and encode amplitude modulation (AM) in a limited number of frequency channels. Zeng et al. (2005) proposed a novel speech processing strategy that encodes both frequency modulation (FM) and AM to improve CI performance. Using behavioral tests, they reported better speech, speaker, and tone recognition with this novel strategy than with the AM-alone strategy. Here, we used the scalp-recorded human frequency following responses (FFRs) to examine the differences in the neural representation of vocoded speech sounds with AM alone and AM + FM as the spectral and temporal cues were varied. Specifically, we were interested in determining whether the addition of FM to AM improved the neural representation of envelope periodicity (FFRENV) and temporal fine structure (FFRTFS), as reflected in the temporal pattern of the phase-locked neural activity generating the FFR. Design: FFRs were recorded from 13 normal-hearing, adult listeners in response to the original unprocessed stimulus (a synthetic diphthong /au/ with a 110-Hz fundamental frequency or F0 and a 250-msec duration) and the 2-, 4-, 8- and 16-channel sine vocoded versions of /au/ with AM alone and AM + FM. Temporal waveforms, autocorrelation analyses, fast Fourier Transform, and stimulus-response spectral correlations were used to analyze both the strength and fidelity of the neural representation of envelope periodicity (F0) and TFS (formant structure). Results: The periodicity strength in the FFRENV decreased more for the AM stimuli than for the relatively resilient AM + FM stimuli as the number of channels was increased. Regardless of the number of channels, a clear spectral peak of FFRENV was consistently observed at the stimulus F0 for all the AM + FM stimuli but not for the AM stimuli. 
Neural representation as revealed by the spectral correlation of FFRTFS was better for the AM + FM stimuli when compared to the AM stimuli. Neural representation of the time-varying formant-related harmonics as revealed by the spectral correlation was also better for the AM + FM stimuli as compared to the AM stimuli. Conclusions: These results are consistent with previously reported behavioral results and suggest that the AM + FM processing strategy elicited brainstem neural activity that better preserved periodicity, temporal fine structure, and time-varying spectral information than the AM processing strategy. The relatively more robust neural representation of AM + FM stimuli observed here likely contributes to the superior performance on speech, speaker, and tone recognition with the AM + FM processing strategy. Taken together, these results suggest that neural information preserved in the FFR may be used to evaluate signal processing strategies considered for CIs. ACKNOWLEDGMENTS: The authors thank Dr. Jackson Gandour for his assistance with statistical analysis. This work was supported by the National Institutes of Health (NIH), R01 DC008549 (A. K.) and the Department of Speech, Language, and Hearing Sciences, Purdue University. The authors have no conflicts of interest to disclose. Received January 24, 2019; accepted May 15, 2019. Address for correspondence: Ananthanarayan Krishnan, PhD, Department of Speech, Language, and Hearing Sciences, Purdue University, 715 Clinic Dr, Rm 3060, West Lafayette, IN 47907, USA. E-mail: rkrish@purdue.edu Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved. |
The Ototoxic Potential of Cobalt From Metal-on-Metal Hip Implants: Objective Auditory and Vestibular Outcome Objectives: During the past decade, the initial popularity of metal-on-metal (MoM) hip implants has shown a progressive decline due to increasingly reported implant failure and revision surgeries. Local as well as systemic toxic side effects have been associated with excessive metal ion release from implants, in which cobalt (Co) plays an important role. The rare condition of systemic cobaltism seems to manifest as a clinical syndrome with cardiac, endocrine, and neurological symptoms, including hearing loss, tinnitus, and imbalance. In most cases described in the literature, revision surgery and the subsequent drop in blood Co level led to (partial) alleviation of the symptoms, suggesting a causal relationship with Co exposure. Moreover, the ototoxic potential of Co has recently been demonstrated in animal experiments. Since its ototoxic potential in humans is merely based on anecdotal case reports, the current study aimed to prospectively and objectively examine the auditory and vestibular function in patients implanted with a MoM hip prosthesis. Design: Twenty patients (15 males and 5 females, aged between 33 and 65 years) implanted with a primary MoM hip prosthesis were matched for age, gender, and noise exposure to 20 non-implanted control subjects. Each participant was subjected to an extensive auditory (conventional and high-frequency pure tone audiometry, transient evoked and distortion product otoacoustic emissions [TEOAEs and DPOAEs], auditory brainstem responses [ABR]) and vestibular test battery (cervical and ocular vestibular evoked myogenic potentials [cVEMPs and oVEMPs], rotatory test, caloric test, video head impulse test [vHIT]), supplemented with a blood sample collection to determine the plasma Co concentration. 
Results: The median [interquartile range] plasma Co concentration was 1.40 [0.70, 6.30] µg/L in the MoM patient group and 0.19 [0.09, 0.34] µg/L in the control group. Within the auditory test battery, a clear trend was observed toward higher audiometric thresholds (11.2 to 16 kHz), lower DPOAE (between 4 and 8 kHz), and total TEOAE (1 to 4 kHz) amplitudes, and a higher interaural latency difference for wave V of the ABR in the patient versus control group (0.01 ≤ p < 0.05). Within the vestibular test battery, considerably longer cVEMP P1 latencies, higher oVEMP amplitudes (0.01 ≤ p < 0.05), and lower asymmetry ratio of the vHIT gain (p < 0.01) were found in the MoM patients. In the patient group, no suggestive association was observed between the plasma Co level and the auditory or vestibular outcome parameters. Conclusions: The auditory results seem to reflect signs of Co-induced damage to the hearing function in the high frequencies. This corresponds to previous findings on drug-induced ototoxicity and the recent animal experiments with Co, which identified the basal cochlear outer hair cells as primary targets and indicated that the cellular mechanisms underlying the toxicity might be similar. The vestibular outcomes of the current study are inconclusive and require further elaboration, especially with respect to animal studies. The lack of a clear dose–response relationship may question the clinical relevance of our results, but recent findings in MoM hip implant patients have confirmed that this relationship can be complicated by many patient-specific factors. Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and text of this article on the journal's Web site (www.ear-hearing.com). ACKNOWLEDGMENTS: L.L. has received funding from the Special Research Fund of Ghent University (BOF) (grant no. 
01D33015) and is currently receiving funding from the Research Foundation Flanders (FWO) (grant no. 1170718N), as a predoctoral research fellow. L.L. performed the experiments, analyzed the data, and wrote the article. L.M., B.V., S.D.G., and R.V. assisted with the data analysis and critically reviewed the article during the complete writing process. C.V.D.S. and K.D.S. were responsible for the recruitment of patients with a metal-on-metal implant and reviewed the article during the complete writing process. I.D., H.K., R.L., and F.L.W. critically reviewed the article during the complete writing process. The authors have no conflict of interest to disclose. Received December 5, 2018; accepted March 31, 2019. Address for correspondence: Laura Leyssens, MSc, Department of Rehabilitation Sciences, University of Ghent, Ghent University Hospital, Corneel Heymanslaan 10, 9000 Ghent, Belgium. E-mail: Laura.Leyssens@UGent.be Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved. |
Test-Retest Variability in the Characteristics of Envelope Following Responses Evoked by Speech Stimuli Objectives: The objective of the present study was to evaluate the between-session test-retest variability in the characteristics of envelope following responses (EFRs) evoked by modified natural speech stimuli in young normal hearing adults. Design: EFRs from 22 adults were recorded in two sessions, 1 to 12 days apart. EFRs were evoked by the token /susa∫ i/ (2.05 sec) presented at 65 dB SPL and recorded from the vertex referenced to the neck. The token /susa∫ i/, spoken by a male with an average fundamental frequency [f0] of 98.53 Hz, was of interest because of its potential utility as an objective hearing aid outcome measure. Each vowel was modified to elicit two EFRs simultaneously by lowering the f0 in the first formant while maintaining the original f0 in the higher formants. Fricatives were amplitude-modulated at 93.02 Hz and elicited one EFR each. EFRs evoked by vowels and fricatives were estimated using a Fourier analyzer and a discrete Fourier transform, respectively. Detection of EFRs was determined by an F-test. Test-retest variability in EFR amplitude and phase coherence were quantified using correlation, repeated-measures analysis of variance, and the repeatability coefficient. The repeatability coefficient, computed as twice the standard deviation (SD) of test-retest differences, represents the ±95% limits of test-retest variation around the mean difference. Test-retest variability of EFR amplitude and phase coherence were compared using the coefficient of variation, a normalized metric, which represents the ratio of the SD of repeat measurements to its mean. Consistency in EFR detection outcomes was assessed using the test of proportions. Results: EFR amplitude and phase coherence did not vary significantly between sessions, and were significantly correlated across repeat measurements. 
The repeatability coefficient for EFR amplitude ranged from 38.5 nV to 45.6 nV for all stimuli, except for /∫/ (71.6 nV). For any given stimulus, the test-retest differences in EFR amplitude of individual participants were not correlated with their test-retest differences in noise amplitude. However, across stimuli, higher repeatability coefficients of EFR amplitude tended to occur when the group mean noise amplitude and the repeatability coefficient of noise amplitude were higher. The test-retest variability of phase coherence was comparable to that of EFR amplitude in terms of the coefficient of variation, and the repeatability coefficient varied from 0.1 to 0.2, with the highest value of 0.2 for /∫/. Mismatches in EFR detection outcomes occurred in 11 of 176 measurements. For each stimulus, the tests of proportions revealed a significantly higher proportion of matched detection outcomes compared to mismatches. Conclusions: Speech-evoked EFRs demonstrated reasonable repeatability across sessions. Of the eight stimuli, the shortest stimulus /∫/ demonstrated the largest variability in EFR amplitude and phase coherence. The test-retest variability in EFR amplitude could not be explained by test-retest differences in noise amplitude for any of the stimuli. This lack of explanation argues for other sources of variability, one possibility being the modulation of cortical contributions imposed on brainstem-generated EFRs. ACKNOWLEDGMENTS: This study was funded by a Collaborative Health Research Project grant from the Canadian Institutes of Health Research and the Natural Sciences and Engineering Research Council of Canada (grant no. 493836-2016). V.E. designed the study, performed the experiment, analyzed data, and wrote the article. D.P. discussed study design, helped with response analysis, and edited the article. S.S. and S.A. discussed results, and edited the article. The authors have no conflicts of interest to disclose. 
Received October 20, 2018; accepted March 8, 2019. Address for correspondence: Vijayalakshmi Easwar, 541 Waisman Centre, The University of Wisconsin-Madison, 1500 Highland Ave, Madison, WI 53705, USA. E-mail: veaswar@wisc.edu Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved. |
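The two repeatability statistics defined in the abstract above (the repeatability coefficient as twice the SD of test-retest differences, and the coefficient of variation as the ratio of the SD of repeat measurements to their mean) can be sketched as follows. The function names and the sample EFR amplitudes are invented for illustration; they are not data from the study.

```python
import statistics

def repeatability_coefficient(session1, session2):
    """Twice the SD of test-retest differences: the +/-95% limits of
    test-retest variation around the mean difference."""
    diffs = [a - b for a, b in zip(session1, session2)]
    return 2 * statistics.stdev(diffs)

def coefficient_of_variation(values):
    """Ratio of the SD of repeat measurements to their mean
    (a normalized, unitless measure of variability)."""
    return statistics.stdev(values) / statistics.mean(values)

# Hypothetical EFR amplitudes (nV) for five listeners across two sessions
s1 = [120.0, 95.0, 110.0, 130.0, 105.0]
s2 = [115.0, 100.0, 108.0, 124.0, 110.0]
print(repeatability_coefficient(s1, s2))   # nV, same units as the input
print(coefficient_of_variation(s1 + s2))   # unitless
```

Because the coefficient of variation is normalized by the mean, it allows the variability of amplitude (in nV) and phase coherence (unitless, 0 to 1) to be compared on a common footing, which is how the abstract uses it.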
The Revised Hearing Handicap Inventory and Screening Tool Based on Psychometric Reevaluation of the Hearing Handicap Inventories for the Elderly and Adults Objectives: The present study evaluates the items of the Hearing Handicap Inventory for the Elderly and Hearing Handicap Inventory for Adults (HHIE/A) using Mokken scale analysis (MSA), a type of nonparametric item response theory, and develops updated tools with optimal psychometric properties. Design: In a longitudinal study of age-related hearing loss, 1447 adults completed the HHIE/A and audiometric testing at baseline. Discriminant validity of the emotional consequences and social/situational effects subscales of the HHIE/A was assessed, and nonparametric item response theory was used to explore dimensionality of the items of the HHIE/A and to refine the scales. Results: The HHIE/A items form strong unidimensional scales measuring self-perceived hearing handicap, but with a lack of discriminant validity of the two distinct subscales. Two revised scales, the 18-item Revised Hearing Handicap Inventory and the 10-item Revised Hearing Handicap Inventory—Screening, were developed from the common items of the original HHIE/A that met the assumptions of MSA. The items on both of the revised scales can be ordered in terms of increasing difficulty. Conclusions: The results of the present study suggest that the newly developed Revised Hearing Handicap Inventory and Revised Hearing Handicap Inventory—Screening are strong unidimensional, clinically informative measures of self-perceived hearing handicap that can be used for adults of all ages. The real-data example also demonstrates that MSA is a valuable alternative to classical psychometric analysis. Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and text of this article on the journal's Web site (www.ear-hearing.com). 
ACKNOWLEDGMENTS: The authors thank Jayne Ahlstrom for editorial assistance. The authors thank the subjects who participated in this study. This work was supported (in part) by research grant P50 DC000422 from NIH/NIDCD and by the South Carolina Clinical and Translational Research (SCTR) Institute, with an academic home at the Medical University of South Carolina, NIH/NCATS Grant number UL1 TR001450. This investigation was conducted in a facility constructed with support from Research Facilities Improvement Program Grant Number C06 RR14516 from the NIH/NCRR. Portions of this article were presented at the Hearing Across the Lifespan 2018 conference, Cernobbio, Lake Como, Italy, June 7, 2018. The authors have no conflicts of interest to declare. Received December 31, 2018; accepted March 25, 2019. Address for correspondence: Christy Cassarly, Department of Public Health Sciences, Medical University of South Carolina, 135 Cannon St., Ste 303, MSC 835, Charleston, SC 29425, USA. E-mail: cassarly@musc.edu Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved. |
The Effects of GJB2 or SLC26A4 Gene Mutations on Neural Response of the Electrically Stimulated Auditory Nerve in Children Objectives: This study aimed to (1) investigate the effect of GJB2 and SLC26A4 gene mutations on auditory nerve function in pediatric cochlear implant users and (2) compare their results with those measured in implanted children with idiopathic hearing loss. Design: Participants included 20 children with biallelic GJB2 mutations, 16 children with biallelic SLC26A4 mutations, and 19 children with idiopathic hearing loss. All subjects except for two in the SLC26A4 group had concurrent Mondini malformation and enlarged vestibular aqueduct. All subjects used Cochlear Nucleus devices in their test ears. For each subject, electrophysiological measures of the electrically evoked compound action potential (eCAP) were recorded using both anodic- and cathodic-leading biphasic pulses. Dependent variables (DVs) of interest included slope of eCAP input/output (I/O) function, the eCAP threshold, and eCAP amplitude measured at the maximum comfortable level (C level) of the anodic-leading stimulus (i.e., the anodic C level). Slopes of eCAP I/O functions were estimated using statistical modeling with a linear regression function. These DVs were measured at three electrode locations across the electrode array. Generalized linear mixed effect models were used to evaluate the effects of study group, stimulus polarity, and electrode location on each DV. Results: Steeper slopes of eCAP I/O function, lower eCAP thresholds, and larger eCAP amplitude at the anodic C level were measured for the anodic-leading stimulus compared with the cathodic-leading stimulus in all subject groups. Children with GJB2 mutations showed steeper slopes of eCAP I/O function and larger eCAP amplitudes at the anodic C level than children with SLC26A4 mutations and children with idiopathic hearing loss for both the anodic- and cathodic-leading stimuli. 
In addition, children with GJB2 mutations showed a smaller increase in eCAP amplitude when the stimulus changed from the cathodic-leading pulse to the anodic-leading pulse (i.e., smaller polarity effect) than children with idiopathic hearing loss. There was no statistically significant difference in slope of eCAP I/O function, eCAP amplitude at the anodic C level, or the size of polarity effect on all three DVs between children with SLC26A4 mutations and children with idiopathic hearing loss. These results suggested that better auditory nerve function was associated with GJB2 but not with SLC26A4 mutations when compared with idiopathic hearing loss. In addition, significant effects of electrode location were observed for slope of eCAP I/O function and the eCAP threshold. Conclusions: GJB2 and SLC26A4 gene mutations did not alter polarity sensitivity of auditory nerve fibers to electrical stimulation. The anodic-leading stimulus was generally more effective in activating auditory nerve fibers than the cathodic-leading stimulus, despite the presence of GJB2 or SLC26A4 mutations. Patients with GJB2 mutations appeared to have better functional status of the auditory nerve than patients with SLC26A4 mutations who had concurrent Mondini malformation and enlarged vestibular aqueduct and patients with idiopathic hearing loss. ACKNOWLEDGMENTS: We gratefully thank all subjects and their parents for participating in this study. J.L. participated in data collection and patient testing, prepared the initial draft of this article, provided critical comments, and approved the final version of this article. L.X., X.C., and R.W. participated in the data collection and patient testing, provided critical comments, and approved the final version of this article. X.B. conducted genetic tests in all study participants, provided critical comments, and approved the final version of this article. A.P. and Z.F. provided critical comments and approved the final version of this article. H.W. 
participated in designing this study, provided critical comments, and approved the final version of this article. S.H. designed the study, participated in data collection and patient testing, and drafted and approved the final version of this article. The authors have no conflicts of interest to declare. Received September 18, 2018; accepted March 17, 2019. Address for correspondence: Haibo Wang, Department of Otolaryngology—Head and Neck Surgery, Shandong Provincial Hospital Affiliated to Shandong University, Duanxing West Road, Huaiyin, Jinan 250022, Shandong, People's Republic of China. E-mail: whboto11@163.com or Shuman He, Eye and Ear Institute, Department of Otolaryngology—Head and Neck Surgery, The Ohio State University, 915 Olentangy River Road, Columbus, OH 43212, USA. E-mail: shuman.he@osumc.edu Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved. |