Alexandros Sfakianakis
Otorhinolaryngologist
Anapafseos 5, Agios Nikolaos
Crete 72100
00302841026182
00306932607174
alsfakia@gmail.com

Wednesday, 21 October 2020

Speech and Language

1
J Speech Lang Hear Res. 2019 Sep 20;62(9):3545-3553. doi: 10.1044/2019_JSLHR-H-18-0307. Epub 2019 Aug 21.
Relationship Between Working Memory and Speech-in-Noise Recognition in Young and Older Adult Listeners With Age-Appropriate Hearing
Katrien Vermeire, Allart Knoop, Marleen De Sloovere, Peggy Bosch, Maurits van den Noort
PMID: 31433720
DOI: 10.1044/2019_JSLHR-H-18-0307

Abstract


Purpose: The purpose of this study was to investigate the relationship between working memory (WM) capacity and speech recognition in noise in both a group of young adults and a group of older adults.

Method: Thirty-three older adults with a mean age of 71.0 (range: 60.4-82.7) years and 27 young adults with a mean age of 21.7 (range: 19.1-25.0) years participated in the study. All participants had age-appropriate hearing and no history of central nervous system dysfunction. WM capacity was measured using the van den Noort version of the Reading Span Test, and recognition of sentences in the presence of a stationary speech-shaped noise was measured as the speech reception threshold for 50% correct identification by using the Leuven Intelligibility Sentence Test.

Results: The older adults had significantly worse WM capacity scores, t(58) = 8.266, p < .001, and significantly more difficulty understanding sentences in noise than the younger adults, t(58) = -6.068, p < .001. In the group of older adults, a correlation was found (r = -.488, n = 33, p = .004) between the results of the WM capacity test (Reading Span Test) and the results of the speech-recognition-in-noise test (Leuven Intelligibility Sentence Test): the higher the WM performance, the better the speech recognition in noise. No such correlation was found in the young normal-hearing listeners.

Conclusions: This study shows deleterious effects of age on both WM capacity and speech recognition in noise. Interestingly, only in the group of older adults was a significant relation found between WM capacity and speech recognition in noise. The current results caution against the assumption that WM necessarily supports speech-in-noise identification independently of the age and hearing status of the listener.
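The group comparisons and the within-group correlation reported above use standard formulas that can be reproduced from raw scores. A minimal sketch in Python (standard library only; any sample data you feed in are your own, not the study's):

```python
from statistics import mean, stdev

def pooled_t(x, y):
    """Independent-samples t test with pooled variance (df = n1 + n2 - 2)."""
    nx, ny = len(x), len(y)
    sp2 = ((nx - 1) * stdev(x) ** 2 + (ny - 1) * stdev(y) ** 2) / (nx + ny - 2)
    t = (mean(x) - mean(y)) / (sp2 * (1 / nx + 1 / ny)) ** 0.5
    return t, nx + ny - 2

def pearson_r(x, y):
    """Pearson product-moment correlation, as reported for WM vs. SRT."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den
```

Note that a negative r, as in the study, means higher WM span goes with a lower (i.e., better) speech reception threshold.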
2
J Exp Psychol Learn Mem Cogn. 2020 May;46(5):968-979. doi: 10.1037/xlm0000767. Epub 2019 Oct 3.
Speech-in-speech perception, nonverbal selective attention, and musical training
Adam Tierney, Stuart Rosen, Fred Dick
PMID: 31580123
DOI: 10.1037/xlm0000767

Abstract


Speech is more difficult to understand when it is presented concurrently with a distractor speech stream. One source of this difficulty is that competing speech can act as an attentional lure, requiring listeners to exert attentional control to ensure that attention does not drift away from the target. Stronger attentional control may enable listeners to more successfully ignore distracting speech, and so individual differences in selective attention may be one factor driving the ability to perceive speech in complex environments. However, the lack of a paradigm for measuring nonverbal sustained selective attention to sound has made this hypothesis difficult to test. Here we find that individuals who are better able to attend to a stream of tones and respond to occasional repeated sequences while ignoring a distractor tone stream are also better able to perceive speech masked by a single distractor talker. We also find that participants who have undergone more musical training show better performance on both verbal and nonverbal selective attention tasks, and this musician advantage is greater in older participants. This suggests that one source of a potential musician advantage for speech perception in complex environments may be experience or skill in directing and maintaining attention to a single auditory object. (PsycInfo Database Record (c) 2020 APA, all rights reserved).

Cited by 1 article
3
J Speech Lang Hear Res. 2019 Sep 20;62(9):3443-3461. doi: 10.1044/2019_JSLHR-L-19-0089. Epub 2019 Sep 13.
Specific Language Impairment in African American English and Southern White English: Measures of Tense and Agreement With Dialect-Informed Probes and Strategic Scoring
Janna B Oetting, Jessica R Berry, Kyomi D Gregory, Andrew M Rivière, Janet McDonald
PMID: 31525131
PMCID: PMC6808338
DOI: 10.1044/2019_JSLHR-L-19-0089 (free PMC article)

Abstract


Purpose: In African American English and Southern White English, we examined whether children with specific language impairment (SLI) overtly mark tense and agreement structures at lower percentages than typically developing (TD) controls, while also examining the effects of dialect, structure, and scoring approach.

Method: One hundred six kindergartners completed 4 dialect-informed probes targeting 8 tense and agreement structures. The 3 scoring approaches varied in the treatment of nonmainstream English forms and responses coded as Other (i.e., those not obligating the target structure). The unmodified approach counted as correct only mainstream overt forms out of all responses, the modified approach counted as correct all mainstream and nonmainstream overt forms and zero forms out of all responses, and the strategic approach counted as correct all mainstream and nonmainstream overt forms out of all responses except those coded as Other.

Results: With the probes combined and separated, the unmodified and strategic scoring approaches showed lower percentages of overt marking by the SLI groups than by the TD groups; this was not always the case for the modified scoring approach. With strategic scoring and dialect-specific cut scores, classification accuracy (SLI vs. TD) was highest for the 8 individual structures considered together, the past tense probe, and the past tense probe irregular items. Dialect and structure effects and dialect differences in classification accuracy also existed.

Conclusions: African American English- and Southern White English-speaking kindergartners with SLI overtly mark tense and agreement at lower percentages than same dialect-speaking TD controls. Strategic scoring of dialect-informed probes targeting tense and agreement should be pursued in research and clinical practice.

76 references
5 figures
4
J Am Acad Audiol. 2019 Jul/Aug;30(7):619-633. doi: 10.3766/jaaa.17140. Epub 2018 Nov 1.
Risk Assessment of Recreational Noise-Induced Hearing Loss from Exposure through a Personal Audio System-iPod Touch
Kamakshi V Gopal, Liana E Mills, Bryce S Phillips, Rajesh Nandy
PMID: 30395532
DOI: 10.3766/jaaa.17140

Abstract


Background: Recreational noise-induced hearing loss (RNIHL) is a major health issue and presents a huge economic burden on society. Exposure to loud music is not considered hazardous in our society because music is thought to be a source of relaxation and entertainment. However, there is evidence that regardless of the sound source, frequent exposure to loud music, including through personal audio systems (PAS), can lead to hearing loss, tinnitus, difficulty processing speech, and increased susceptibility to age-related hearing loss.

Purpose: Several studies have documented temporary threshold shifts (TTS) (a risk indicator of future permanent impairment) in subjects who listen to loud music through their PAS. However, there is not enough information regarding volume settings that may be considered safe. As a primary step toward quantifying the risk of RNIHL through PAS, we assessed changes in auditory test measures before and after exposure to music through the popular iPod Touch device set at various volume levels.

Research design: This project incorporated both between-subjects and within-subjects factors and used repeated measures to analyze individual groups.

Study sample: A total of 40 adults aged 18-31 years with normal hearing were recruited and randomly distributed to four groups. Each group consisted of five males and five females.

Data collection and analysis: Subjects underwent two rounds of testing (pre- and postmusic exposure), with a 30-min interval, where they listened to a playlist consisting of popular songs through an iPod at 100%, 75%, 50%, or 0% volume (no music). Based on our analysis using the Knowles Electronic Manikin for Acoustic Research with a standardized 711 coupler, it was determined that listening to the playlist for 30 min through standard earbuds resulted in an average level of 97.0 dBC at 100% volume, 83.3 dBC at 75% volume, and 65.6 dBC at 50% volume. Pure-tone thresholds from 500 to 8000 Hz, extended high-frequency pure tones between 9 and 12.5 kHz, and distortion product otoacoustic emissions (DPOAE) were obtained before and after the 30-min music exposure. Analysis of variance (ANOVA) was performed with two between-subjects factors (volume and gender) and one within-subjects factor (frequency). Change (shift) in auditory test measures was used as the outcome for the ANOVA.
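One detail worth unpacking: an "average level" for a 30-min exposure is an energy average (an equivalent continuous level), not an arithmetic mean of dB values, because decibels are logarithmic. A minimal sketch of that computation (the sample levels below are made up for illustration, not the study's measurements):

```python
import math

def leq(levels_db):
    """Equivalent continuous level: average the intensities, then convert back to dB."""
    mean_intensity = sum(10 ** (l / 10) for l in levels_db) / len(levels_db)
    return 10 * math.log10(mean_intensity)

# Hypothetical per-segment levels: the louder passage dominates the average.
print(round(leq([90, 100]), 1))  # 97.4, not the arithmetic mean 95.0
```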

Results: Results indicated significant worsening of pure-tone thresholds following music exposure only in the group that was exposed to 100% volume at the following frequencies: 2, 3, 4, 6 and 8 kHz. DPOAEs showed significant decrease at 2000 and 2822 Hz, also only for the 100% volume condition. No significant changes were found between pre- and postmusic exposure measures in groups exposed to 75%, 50%, or 0% volume conditions. Follow-up evaluations conducted a week later indicated that pure-tone thresholds had returned to the premusic exposure levels.

Conclusions: These results provide quantifiable information regarding safe volume control settings on the iPod Touch with standard earbuds. Listening to music using the iPod Touch at 100% volume setting for as little as 30 min leads to TTS and worsening of otoacoustic emissions, a risk for permanent auditory damage.

American Academy of Audiology.

Cited by 2 articles
5
J Speech Lang Hear Res. 2019 Sep 20;62(9):3470-3492. doi: 10.1044/2019_JSLHR-L-19-0076. Epub 2019 Sep 4.
Analysis of Amount and Style of Oral Interaction Related to Language Outcomes in Children With Hearing Loss: A Systematic Review (2006-2016)
Nuzhat Sultana, Lena L N Wong, Suzanne C Purdy
PMID: 31479621
DOI: 10.1044/2019_JSLHR-L-19-0076

Abstract


Purpose: This systematic review summarizes the evidence for differences in the amount of language input between children with and without hearing loss (HL). Of interest to this review is evaluating the associations between language input and language outcomes (receptive and expressive) in children with HL in order to enhance insight regarding what oral language input is associated with good communication outcomes.

Method: A systematic review was conducted using keywords in 3 electronic databases: Scopus, PubMed, and Google Scholar. Keywords were related to language input, language outcomes, and HL. Titles and abstracts were screened independently, and full-text manuscripts meeting inclusion criteria were extracted. An appraisal checklist was used to evaluate the methodological quality of studies as poor, good, or excellent.

Results: After removing duplicates, 1,545 study results were extracted, with 27 eligible for full-text review. After the appraisal, 8 studies were included in this systematic review. Differences in the amount of language input between children with and without HL were noted. Conversational exchanges, open-ended questions, expansions, recast, and parallel talk were positively associated with stronger receptive and expressive language scores. The quality of evidence was not assessed as excellent for any of the included studies.

Conclusions: This systematic review reveals low-level evidence from 8 studies that specific language inputs (amount and style) are optimal for oral language outcomes in children with HL. Limitations were identified as sample selection bias, lack of information on control of confounders and assessment protocols, and limited duration of observation/recordings. Future research should address these limitations.

Cited by 1 article
6
Observational Study
J Pain Symptom Manage. 2019 Dec;58(6):949-958.e2. doi: 10.1016/j.jpainsymman.2019.06.030. Epub 2019 Aug 22.
Dysphagia Prevalence and Predictors in Cancers Outside the Head, Neck, and Upper Gastrointestinal Tract
Ciarán Kenny, Julie Regan, Lucy Balding, Stephen Higgins, Norma O'Leary, Fergal Kelleher, Ray McDermott, John Armstrong, Alina Mihai, Eoin Tiernan, Jennifer Westrup, Pierre Thirion, Declan Walsh
PMID: 31445137
DOI: 10.1016/j.jpainsymman.2019.06.030

Abstract


Context: Dysphagia is usually associated with malignancies of the head, neck, and upper gastrointestinal tract but also occurs in those with tumors outside anatomic swallow regions. It can lead to aspiration pneumonia, malnutrition, reduced quality of life, and psychosocial distress. No studies have yet reliably described dysphagia prevalence in those with malignancies outside anatomic swallow regions.

Objective: To establish the prevalence and predictors of dysphagia in adults with solid malignancies outside the head, neck, and upper gastrointestinal tract.

Methods: A cross-sectional, observational study using consecutive sampling was conducted. There were 385 participants (mean age 66 ± 12 years) with 21 different primary cancer sites from two acute hospitals and one hospice. Locoregional disease was present in 33%, metastatic in 67%. Dysphagia was screened by empirical questionnaire and confirmed through swallow evaluation. Demographic and clinical predictors were determined by univariate and multivariate binary regression.

Results: Dysphagia occurred in 19% of those with malignancies outside anatomic swallow regions. Prevalence was 30% in palliative care and 32% in hospice care. Dysphagia was most strongly associated with cough, nausea, and worse performance status. It was also associated with lower quality of life and nutritional difficulties.
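A point prevalence such as the 19% reported here is usually read alongside a confidence interval reflecting the sample size. A minimal sketch of a Wilson score interval (the 73/385 count is back-calculated from the reported 19% and sample size, so treat it as illustrative, not as the authors' exact figure):

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score 95% CI for a binomial proportion k successes out of n."""
    p = k / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - half, centre + half

lo, hi = wilson_ci(73, 385)  # roughly 15% to 23%
```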

Conclusion: Dysphagia was common and usually undiagnosed before study participation. It occurred at all disease stages but coincided with functional decline. It may therefore represent a cancer frailty marker. Oncology and palliative care services should routinely screen for this symptom. Timely dysphagia identification and management may improve patient well-being and prevent adverse effects like aspiration pneumonia and weight loss.

Keywords: Cancer; deglutition disorders; dysphagia; hospice and palliative care nursing; neoplasms.

Copyright © 2019 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.
7
J Speech Lang Hear Res. 2019 Sep 20;62(9):3381-3396. doi: 10.1044/2019_JSLHR-L-19-0001. Epub 2019 Aug 19.
Using Polygenic Profiles to Predict Variation in Language and Psychosocial Outcomes in Early and Middle Childhood
Dianne F Newbury, Jenny L Gibson, Gina Conti-Ramsden, Andrew Pickles, Kevin Durkin, Umar Toseeb
PMID: 31425657
PMCID: PMC6808346
DOI: 10.1044/2019_JSLHR-L-19-0001 (free PMC article)

Abstract


Purpose: Children with poor language tend to have worse psychosocial outcomes compared to their typically developing peers. The most common explanations for such adversities focus on developmental psychological processes whereby poor language triggers psychosocial difficulties. Here, we investigate the possibility of shared biological effects by considering whether the same genetic variants, which are thought to influence language development, are also predictors of elevated psychosocial difficulties during childhood.

Method: Using data from the U.K.-based Avon Longitudinal Study of Parents and Children, we created a number of multi-single-nucleotide polymorphism polygenic profile scores, based on language and reading candidate genes (ATP2C2, CMIP, CNTNAP2, DCDC2, FOXP2, and KIAA0319; 1,229 single-nucleotide polymorphisms) in a sample of 5,435 children.

Results: A polygenic profile score for expressive language (8 years) that was created in a discovery sample (n = 2,718) predicted not only expressive language (8 years) but also peer problems (11 years) in a replication sample (n = 2,717).

Conclusions: These findings provide a proof of concept for the use of such a polygenic approach in child language research when larger data sets become available. Our indicative findings suggest consideration should be given to concurrent intervention targeting both linguistic and psychosocial development, as early language interventions may not stave off later psychosocial difficulties in children.
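At its core, a polygenic profile score of the kind described is a weighted sum of per-SNP risk-allele counts, with weights taken from the discovery sample. A minimal sketch (the SNP IDs and weights below are hypothetical placeholders, not values from this study):

```python
def polygenic_score(genotypes, weights):
    """Weighted sum over SNPs of allele counts (0, 1, or 2 copies per SNP)."""
    return sum(weights[snp] * count for snp, count in genotypes.items())

# Hypothetical genotypes and discovery-sample effect weights.
child = {"rs0001": 2, "rs0002": 1, "rs0003": 0}
weights = {"rs0001": 0.5, "rs0002": -0.2, "rs0003": 0.1}
print(polygenic_score(child, weights))  # 0.8
```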

Cited by 1 article
58 references
2 figures
8
J Speech Lang Hear Res. 2019 Sep 20;62(9):3160-3182. doi: 10.1044/2019_JSLHR-S-18-0212. Epub 2019 Aug 19.
Bang for Your Buck: A Single-Case Experimental Design Study of Practice Amount and Distribution in Treatment for Childhood Apraxia of Speech
Edwin Maas, Christina Gildersleeve-Neumann, Kathy Jakielski, Nicolette Kovacs, Ruth Stoeckel, Helen Vradelis, Mackenzie Welsh
PMID: 31425660
DOI: 10.1044/2019_JSLHR-S-18-0212

Abstract


Purpose: The aim of this study was to examine 2 aspects of treatment intensity in treatment for childhood apraxia of speech (CAS): practice amount and practice distribution.

Method: Using an alternating-treatments single-subject design with multiple baselines, we compared high versus low amount of practice, and massed versus distributed practice, in 6 children with CAS. Conditions were manipulated in the context of integral stimulation treatment. Changes in perceptual accuracy, scored by blinded analysts, were quantified with effect sizes.

Results: Four children showed an advantage for high amount of practice, 1 showed an opposite effect, and 1 showed no condition difference. For distribution, 4 children showed a clear advantage for massed over distributed practice post treatment; 1 showed an opposite pattern, and 1 showed no clear difference. Follow-up revealed a similar pattern. All children demonstrated treatment effects (larger gains for treated than untreated items).

Conclusions: High practice amount and massed practice were associated with more robust speech motor learning in most children with CAS, compared to low amount and distributed practice, respectively. Variation in effects across children warrants further research to determine factors that predict optimal treatment conditions. Finally, this study adds to the evidence base supporting the efficacy of integral stimulation treatment for CAS.

Supplemental Material: https://doi.org/10.23641/asha.9630599
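The abstract says perceptual-accuracy changes were "quantified with effect sizes" but does not give the formula. One common single-case variant standardizes the treatment-baseline mean difference by the baseline standard deviation; the sketch below uses that variant with made-up scores, purely to illustrate the idea:

```python
from statistics import mean, stdev

def scd_effect_size(baseline, treatment):
    """Standardized mean difference: (treatment mean - baseline mean) / baseline SD."""
    return (mean(treatment) - mean(baseline)) / stdev(baseline)

# Hypothetical percent-correct scores across probe sessions.
print(scd_effect_size([10, 20, 30], [40, 50, 60]))  # 3.0
```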
9
Sci Rep. 2019 Jul 15;9(1):10185. doi: 10.1038/s41598-019-46641-7.
Effects of Auditory Distraction on Face Memory
Raoul Bell, Laura Mieth, Jan Philipp Röer, Axel Buchner
PMID: 31308413
PMCID: PMC6629691
DOI: 10.1038/s41598-019-46641-7 (free PMC article)

Abstract


Effects of auditory distraction by task-irrelevant background speech on the immediate serial recall of verbal material are well established. Less is known about the influence of background speech on memory for visual configural information. A recent study demonstrated that face learning is disrupted by joyful music relative to soothing violin music and quiet. This pattern is parallel to findings in the serial-recall paradigm showing that auditory distraction is primarily caused by auditory changes. Here we connect these two streams of research by testing whether face learning is impaired by irrelevant speech. Participants learned faces either in quiet or while ignoring auditory changing-state sequences (sentential speech) or steady-state sequences (word repetitions). Face recognition was impaired by irrelevant speech relative to quiet. Furthermore, changing-state speech disrupted performance more than steady-state speech. The results were replicated in a second study using reversed speech, suggesting that the disruptive potential of the background speech does not depend on its semantic content. These findings thus demonstrate robust effects of auditory distraction on face learning. Theoretical explanations and applied implications are discussed.

Conflict of interest statement


The authors declare no competing interests.

71 references
3 figures
10
Comparative Study
J Speech Lang Hear Res. 2019 Sep 20;62(9):3462-3469. doi: 10.1044/2019_JSLHR-L-18-0331. Epub 2019 Sep 12.
Not All Nonverbal Tasks Are Equally Nonverbal: Comparing Two Tasks in Bilingual Kindergartners With and Without Developmental Language Disorder
Kathleen Durant, Elizabeth Peña, Anna Peña, Lisa M Bedore, María R Muñoz
PMID: 31518170
PMCID: PMC6808348
DOI: 10.1044/2019_JSLHR-L-18-0331 (free PMC article)

Abstract


Purpose: This study investigates the interaction of language ability status, cultural experience, and nonverbal cognitive skill performance in Spanish-English bilinguals with typical development (TD) and developmental language disorder (DLD).

Method: One hundred sixty-nine Spanish-English bilingual kindergartners' scores on the Symbolic Memory and Cube Design subtests from the Universal Nonverbal Intelligence Test (Bracken & McCallum, 1998) were analyzed by language ability (TD vs. DLD).

Results: t tests and analysis of variance showed bilingual children with TD and DLD performed comparably to the Universal Nonverbal Intelligence Test norming sample on the cube design task, while children with DLD had significantly lower performance on the symbolic memory task.

Conclusion: These results suggest that cultural experience minimally impacted performance for bilingual children with typically developing language. Bilingual children with DLD were differentially impacted on symbolic memory, a task that is verbally mediated despite nonverbal administration and performance. Findings are discussed within the Cattell-Horn-Carroll theory of cognitive abilities.
11
J Speech Lang Hear Res. 2019 Sep 20;62(9):3431-3442. doi: 10.1044/2019_JSLHR-L-18-0400. Epub 2019 Sep 3.
Comprehension and Inference: Relationships Between Oral and Written Modalities in Good and Poor Comprehenders During Adolescence
Anna Potocki, Virginie Laval
PMID: 31479285
DOI: 10.1044/2019_JSLHR-L-18-0400

Abstract


Purpose: We investigated the relationships between text reading comprehension and oral idiom comprehension in adolescents. We also examined the more specific relationships between inference in text comprehension and inference in idiom comprehension.

Method: We selected participants from an initial sample of 140 students aged 13-15 years to form 2 groups, according to their decoding and reading comprehension abilities: 1 group of good comprehenders/good decoders (n = 49) and 1 group of less skilled comprehenders but with adequate decoding skills (n = 20). The reading comprehension task comprised both literal and inferential (text-based and knowledge-based) questions. These 2 groups were then compared on an idiom comprehension task. In this task, idioms were presented orally, and students were placed in a situation that simulated a real-life oral interaction. The idioms were novel for the students (translated from a foreign language), either transparent or opaque, and presented either with a supportive context or without any context.

Results: Good reading comprehenders outperformed less skilled ones on the idiom task. Both groups benefited from the supportive context, especially the good comprehenders. Knowledge-based inferences in written text comprehension were related to contextual inferences for opaque idioms, while semantic inferences for transparent idioms were related to literal text comprehension, but not to text-connecting inferences.

Conclusion: These results are discussed both theoretically, in terms of cross-modal comprehension processes, and practically, in terms of implications for remediation.
12
Randomized Controlled Trial
Laryngoscope. 2020 Jul;130(7):1750-1755. doi: 10.1002/lary.28287. Epub 2019 Sep 9.
Role of voice rest following laser resection of vocal fold lesions: A randomized controlled trial
Sandeep S Dhaliwal, Philip C Doyle, Sebastiano Failla, Sarah Hawkins, Kevin Fung
PMID: 31498467
DOI: 10.1002/lary.28287

Abstract


Objectives/hypothesis: Voice rest is prescribed by most surgeons following phonosurgery, despite limited empirical evidence to support the practice. This study assessed the effect of postphonosurgery voice rest on vocal outcomes.

Study design: Prospective, randomized controlled trial.

Methods: Patients with unilateral vocal fold lesions undergoing CO2 laser excision were recruited in a prospective manner and randomized into one of two groups: 1) an experimental arm consisting of 7 days of absolute voice rest, or 2) a control arm consisting of no voice rest. The primary outcome measure was the Voice Handicap Index-10 (VHI-10) questionnaire. Secondary outcomes included aerodynamic measurements (maximum phonation time), acoustic measures (fundamental frequency, jitter, shimmer, and harmonic-to-noise ratio), and auditory-perceptual measures. Primary and secondary outcomes were assessed preoperatively and reassessed postoperatively at the 1- and 3-month follow-up. Patient compliance with voice rest instructions was controlled for using subjective and objective parameters.

Results: Thirty patients were enrolled with 15 randomized to each arm of the study. Statistical analysis for the entire cohort showed a significant improvement in the mean preoperative VHI-10 compared to postoperative assessments at 1-month (19.0 vs. 7.3, P < .05) and 3-month (19.0 vs. 6.2, P < .05) follow-up. However, between-group comparisons showed no significant difference in postoperative VHI-10 at either time point. Similarly, secondary outcome measures yielded no significant difference in between-group comparisons.

Conclusions: Our study shows no significant benefit to voice rest on postoperative voice outcomes as determined by patient self-perception, acoustic variables, and auditory-perceptual analysis.

Level of evidence: 1b.

Clinical trial number: NCT02788435 (clinicaltrials.gov).

Keywords: Voice rest; phonosurgery; vocal cord surgery.

© 2019 The American Laryngological, Rhinological and Otological Society, Inc.

24 references
13
J Speech Lang Hear Res. 2019 Sep 20;62(9):3204-3219. doi: 10.1044/2019_JSLHR-S-18-0493. Epub 2019 Sep 3.
Effects of Encouraging the Use of Gestures on Speech
Alice Cravotta, M Grazia Busà, Pilar Prieto
PMID: 31479385
DOI: 10.1044/2019_JSLHR-S-18-0493

Abstract


Purpose: Previous studies have investigated the effects of the inability to produce hand gestures on speakers' prosodic features of speech; however, the potential effects of encouraging speakers to gesture have received less attention, especially in naturalistic settings. This study aims at investigating the effects of encouraging the production of hand gestures on the following speech correlates: speech discourse length (number of words and discourse length in seconds), disfluencies (filled pauses, self-corrections, repetitions, insertions, interruptions, speech rate), and prosodic properties (measures of fundamental frequency [F0] and intensity).

Method: Twenty native Italian speakers took part in a narration task in which they had to describe the content of short comic strips to a confederate listener in 1 of the following 2 conditions: (a) nonencouraging condition (N), that is, no instructions about gesturing were given, and (b) encouraging condition (E), that is, the participants were instructed to gesture while telling the story.

Results: Instructing speakers to gesture led effectively to higher gesture rate and salience. Significant differences were found for (a) discourse length (e.g., the narratives had more words in E than in N) and (b) acoustic measures (F0 maximum, maximum intensity, and mean intensity metrics were higher in E than in N).

Conclusion: The study shows that asking speakers to use their hands while describing a story can have an effect on narration length and can also impact F0 and intensity metrics. By showing that enhancing the gesture stream could affect speech prosody, this study provides further evidence that gestures and prosody interact in the process of speech production.
14
Observational Study
Cognition. 2019 Dec;193:104025. doi: 10.1016/j.cognition.2019.104025. Epub 2019 Jul 17.
Keep trying!: Parental language predicts infants' persistence
Kelsey Lucca, Rachel Horton, Jessica A Sommerville
PMID: 31325720
DOI: 10.1016/j.cognition.2019.104025

Abstract


Infants' persistence in the face of challenges predicts their learning across domains. In older children, linguistic input is an important predictor of persistence: when children are praised for their efforts, as opposed to fixed traits, they try harder on future endeavors. Yet, little is known about the impact of linguistic input as individual differences in persistence are first emerging, during infancy. Based on a preliminary investigation of the CHILDES database, which revealed that language surrounding persistence is an early-emerging feature of children's language environment, we conducted an observational study to test how linguistic input in the form of praise and persistence-focused language more broadly impacts infants' persistence. In Study 1, 18-month-olds and their caregivers participated in two tasks: a free-play task (a gear stacker) and a joint-book reading task. We measured parental language and infants' persistent gear stacking. Findings revealed that infants whose parents spent more time praising their efforts and hard work (process praise), and used more persistence-focused language in general, were more persistent than infants whose parents used this language less often. Study 2 extended these findings by examining whether the effects of parental language on persistence carry over to contexts in which parents are uninvolved. The findings revealed that parental use of process praise predicted infants' persistence even in the absence of parental support. Critically, these findings could not be explained by caregivers' reporting on their own persistence. Together, these findings suggest that as early as 18 months, linguistic input is a key predictor of persistence.

Keywords: Cognitive development; Infancy; Language; Learning; Motivation; Parent-child interactions.

Copyright © 2019 Elsevier B.V. All rights reserved.

15
Sci Rep




. 2019 Jul 2;9(1):9531. doi: 10.1038/s41598-019-45836-2.
Talking matters - evaluative and motivational inner speech use predicts performance in conflict tasks
Miriam Gade 1 2, Marko Paelecke 3
PMID: 31266985
PMCID: PMC6606602
DOI: 10.1038/s41598-019-45836-2 (free PMC article)

Abstract


Conflict between response tendencies is ubiquitous in everyday performance. Capabilities that resolve such conflicts are therefore mandatory for successful goal achievement. The present study investigates the potential of evaluative and motivational inner speech to support conflict resolution. In our study we assessed six tasks commonly used to measure conflict resolution capabilities and cognitive flexibility in 163 participants. Participants additionally answered questionnaires concerned with their habitual use of inner speech, such as silently rehearsing task instructions and evaluating performance. We found reduced conflict effects in tasks using symbolic, non-verbal stimuli for participants with higher self-reported use of evaluative and motivational inner speech. Overall, our findings suggest that silent self-talk and performance monitoring are beneficial for conflict resolution over and above constructs such as intelligence and working memory capacity that account for mean RT differences among participants.

Conflict of interest statement


The authors declare no competing interests.


16
J Speech Lang Hear Res




. 2019 Sep 20;62(9):3413-3430. doi: 10.1044/2019_JSLHR-L-18-0207. Epub 2019 Aug 22.
Vocabulary Growth From 18 to 24 Months of Age in Children With and Without Repaired Cleft Palate
Marziye Eshghi 1, Reuben Adatorwovor 2, John S Preisser 2, Elizabeth R Crais 3, David J Zajac 4
PMID: 31437085
PMCID: PMC6808344
DOI: 10.1044/2019_JSLHR-L-18-0207 (free PMC article)

Abstract


Purpose This study investigated vocabulary growth from 18 to 24 months of age in young children with repaired cleft palate (CP), children with otitis media, and typically developing (TD) children. In addition, the contributions of factors such as hearing level, middle ear status, size of consonant inventory, maternal education level, and gender to the development of expressive vocabulary were explored. Method Vocabulary size of 40 children with repaired CP, 29 children with otitis media, and 25 TD children was measured using the parent report on the MacArthur-Bates Communicative Development Inventories: Words and Sentences (Fenson et al., 2007) at 18 and 24 months of age. All participants underwent sound field audiometry at 12 months of age and tympanometry at 18 months of age. A multiple linear regression with and without covariates was used to model vocabulary growth from 18 to 24 months of age across the 3 groups. Results Children with CP produced a significantly smaller number of words at 24 months of age and showed a significantly slower rate of vocabulary growth from 18 to 24 months of age when compared to TD children (p < .05). Although middle ear status was found to predict vocabulary growth from 18 to 24 months of age across the 3 groups (p < .05), the confidence interval was large, suggesting the effect should be interpreted with caution. Conclusions Children with CP showed slower expressive vocabulary growth relative to their age-matched TD peers. Middle ear status may be associated with development of vocabulary skills for some children.


17
Int J Psychophysiol




. 2020 Jan;147:72-82. doi: 10.1016/j.ijpsycho.2019.11.005. Epub 2019 Nov 16.
Listen-and-repeat training improves perception of second language vowel duration: Evidence from mismatch negativity (MMN) and N1 responses and behavioral discrimination
Antti Saloranta 1, Paavo Alku 2, Maija S Peltola 3
PMID: 31743699
DOI: 10.1016/j.ijpsycho.2019.11.005

Abstract


The purpose of this study was to examine the efficacy of three days of listen-and-repeat training on the perception and production of vowel duration contrasts. Generalization to an untrained vowel and a non-linguistic sound was also examined. Twelve adults underwent four sessions of listen-and-repeat training over two days with the pseudoword contrast /tite/-/ti:te/. Generalization effects were examined with another vowel contrast, /tote/-/to:te/, and a sinusoidal tone pair as a non-linguistic stimulus. Learning effects were measured with psychophysiological (EEG) event-related potentials (mismatch negativity and N1), behavioral discrimination tasks, and production tasks. The results showed clear improvement in all perception measurements for the trained stimuli. The training effects also extended to the untrained vowel, eliciting an N1 response, and to the behavioral perception of the non-linguistic stimuli. The MMN response for the untrained linguistic stimuli, however, did not increase. These findings suggest that the training was able to increase the sensitivity of preattentive auditory duration discrimination, but that phoneme-specific spectral information may also be needed to shape the neural representation of phoneme categories.

Keywords: EEG; Event-related potential; Mismatch negativity; Production training; Second language acquisition.

Copyright © 2019 Elsevier B.V. All rights reserved.

18
Int J Psychophysiol




. 2020 Jan;147:137-146. doi: 10.1016/j.ijpsycho.2019.10.013. Epub 2019 Nov 20.
Seeking neurophysiological manifestations of speech production: An ERP study
Adithya Chandregowda 1, Yael Arbel 2, Emanuel Donchin 3
PMID: 31756406
DOI: 10.1016/j.ijpsycho.2019.10.013

Abstract


The aim of this study was to examine the neurophysiological correlates of speech production by elucidating pertinent ERP components. Such examination can pave the way for investigations of typical and atypical speech neuromotor control. Participants completed a speech task by saying a specific word (speaking condition) or withholding the verbal response (non-speaking condition) based on the color of a frame placed around a fixation cross displayed on a computer screen. They also completed a simple hand motor task by pressing a button with the right or left index finger based on the color of a frame. The hand motor task was administered to verify that neural activity specific to motor preparation was detectable. Two ERP components emerged from the multichannel principal component analysis (PCA) as distinguishing between the speaking and non-speaking conditions: a posterior negative component and a left-lateralized positive component. The morphology of the posterior negative component, as well as the correlation between its magnitude and mean response time, suggests that this component is closely associated with speech motor control. The left-lateralized component was interpreted as reflecting a process possibly mediated by the speech-dominant left hemisphere.

Copyright © 2019. Published by Elsevier B.V.

19
J Speech Lang Hear Res




. 2019 Sep 20;62(9):3500-3515. doi: 10.1044/2019_JSLHR-H-18-0361. Epub 2019 Sep 15.
Behavioral Hearing Thresholds and Distortion Product Otoacoustic Emissions in Cannabis Smokers
Samantha Brumbach 1, Shawn S Goodman 2, Rachael R Baiduc 1
PMID: 31525116
DOI: 10.1044/2019_JSLHR-H-18-0361

Abstract


Purpose Cannabis is a widely used drug both medically and recreationally. The aim of this study was to determine if cannabis smoking is associated with changes in auditory function, as measured by behavioral hearing thresholds and/or distortion product otoacoustic emissions (DPOAEs). Method We investigated hearing thresholds and 2f1-f2 DPOAEs in 20 cannabis smokers and 20 nonsmokers between 18 and 28 years old. Behavioral thresholds were obtained from 0.25 to 16 kHz. DPOAEs were measured using discrete tones with f2 between 0.5 and 19.03 kHz, an f2/f1 ratio of 1.22, and L1/L2 = 65/55 dB SPL. Thresholds and DPOAE amplitudes were compared between groups using linear mixed-effects models with sex and frequency as predictors. Results Behavioral thresholds did not differ significantly between smokers and nonsmokers (all ps > .05). Although the difference was not significant, long-term smokers exhibited poorer thresholds than short-term smokers and nonsmokers. Smokers generally exhibited lower DPOAE amplitudes than nonsmokers, although the differences were not significant. Male smokers had significantly poorer DPOAE amplitudes than male nonsmokers in the low frequencies (f2 ≤ 2 kHz; p = .0245). Conclusion Results indicate that smoking cannabis may negatively alter the function of outer hair cells in young men. This subtle cochleopathology is evident in the absence of measurable differences in behavioral hearing thresholds between cannabis smokers and nonsmokers.
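For orientation, the primary frequency f1 and the 2f1-f2 distortion product follow directly from f2 and the f2/f1 ratio used above; a small illustrative sketch (arithmetic only, not the study's analysis code):

```python
def dpoae_frequencies(f2_hz, ratio=1.22):
    """Given f2 and the f2/f1 ratio, return f1 and the frequency of the
    cubic (2f1 - f2) distortion product recorded in DPOAE measurements."""
    f1_hz = f2_hz / ratio
    return f1_hz, 2.0 * f1_hz - f2_hz
```

For f2 = 2 kHz this places f1 near 1.64 kHz and the distortion product near 1.28 kHz, i.e., below both primaries.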

20
Sci Rep




. 2019 Aug 13;9(1):11773. doi: 10.1038/s41598-019-48033-3.
Syllables are Retrieved before Segments in the Spoken Production of Mandarin Chinese: An ERP Study
Chen Feng 1, Yuan Yue 1, Qingfang Zhang 2
PMID: 31409830
PMCID: PMC6692332
DOI: 10.1038/s41598-019-48033-3 (free PMC article)

Abstract


Languages may differ in terms of the functional units of word-form encoding used in spoken word production. It is widely accepted that segments are the primary units used in Indo-European languages. However, it is controversial what the functional units (syllables or segments) in Chinese spoken word production are. In the present study, Mandarin Chinese speakers named pictures while ignoring simultaneously presented distractor words that shared atonal syllables, bodies, or rhymes with the names of the target pictures, or were unrelated to them. Behavioral results showed that naming latencies in the 3 phonologically related conditions were significantly shorter than those in the unrelated condition. EEG data indicated that the syllable-related condition modulated event-related potentials (ERPs) in a time window of 320-500 ms, the body-related condition modulated ERPs from 370-420 ms, while the rhyme-related condition modulated ERPs from 400-450 ms. The starting points for evident syllable, body, and rhyme priming effects were 322 ms, 368 ms, and 408 ms (by the Guthrie & Buchwald method) or 340 ms, 372 ms, and 403 ms (by the jackknife procedure), respectively. Our findings provide a relative temporal course of syllable and segment encoding in Chinese spoken naming: Syllables are retrieved before segments, and constitute the primary processing units during the early stage of word-form encoding. Furthermore, segments and their order are retrieved incrementally from left to right when producing Chinese spoken words.
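The jackknife procedure cited above estimates an effect-onset latency from leave-one-out grand averages rather than from noisy single-participant waveforms; a minimal sketch with a simple fixed-threshold onset criterion (the study's actual criterion and data are not reproduced here):

```python
def onset_index(wave, threshold):
    """Index of the first sample at or above threshold, or None if never reached."""
    for i, value in enumerate(wave):
        if value >= threshold:
            return i
    return None

def jackknife_onsets(trials, threshold):
    """Leave one participant out at a time, average the rest into a
    grand-average waveform, and measure the onset on each subsample.
    The spread of the returned onsets feeds the jackknife variance
    estimate (after the usual (n - 1) correction)."""
    onsets = []
    for leave_out in range(len(trials)):
        kept = [t for j, t in enumerate(trials) if j != leave_out]
        grand = [sum(col) / len(kept) for col in zip(*kept)]
        onsets.append(onset_index(grand, threshold))
    return onsets
```

Because each subsample is an average of n - 1 participants, single noisy waveforms cannot dominate the onset estimate, which is why the jackknife and the single-subject (Guthrie & Buchwald) approaches can give slightly different latencies, as reported above.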

Conflict of interest statement


The authors declare no competing interests.


21
Trends Hear




. Jan-Dec 2019;23:2331216519886688. doi: 10.1177/2331216519886688.
Age-Related Temporal Processing Deficits in Word Segments in Adult Cochlear-Implant Users
Zilong Xie 1, Casey R Gaskins 1, Maureen J Shader 1, Sandra Gordon-Salant 1, Samira Anderson 1, Matthew J Goupell 1
PMID: 31808373
PMCID: PMC6900735
DOI: 10.1177/2331216519886688 (free PMC article)

Abstract


Aging may limit speech understanding outcomes in cochlear-implant (CI) users. Here, we examined age-related declines in auditory temporal processing as a potential mechanism that underlies speech understanding deficits associated with aging in CI users. Auditory temporal processing was assessed with a categorization task for the words dish and ditch (i.e., identify each token as the word dish or ditch) on a continuum of speech tokens with varying silence duration (0 to 60 ms) prior to the final fricative. In Experiments 1 and 2, younger CI (YCI), middle-aged CI (MCI), and older CI (OCI) users participated in the categorization task across a range of presentation levels (25 to 85 dB). Relative to YCI users, OCI users required longer silence durations to identify ditch and exhibited a reduced ability to distinguish the words dish and ditch (shallower slopes in the categorization function). Critically, we observed age-related performance differences only at higher presentation levels. This contrasted with findings from normal-hearing listeners in Experiment 3, which demonstrated age-related performance differences independent of presentation level. In summary, aging in CI users appears to degrade the ability to utilize brief temporal cues in word identification, particularly at high levels. Age-specific CI programming may potentially improve clinical outcomes for speech understanding performance by older CI listeners.
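A categorization function of the kind described (proportion of "ditch" responses versus silence duration) is commonly modeled as a logistic whose slope indexes discrimination ability; a sketch with hypothetical parameter values (not taken from the study):

```python
import math

def p_ditch(silence_ms, boundary_ms=30.0, spread_ms=8.0):
    """Logistic psychometric function: probability of a 'ditch' response
    given the silence duration before the final fricative. A larger
    spread_ms means a shallower slope, i.e., poorer dish/ditch
    discrimination, the pattern reported for the older CI users."""
    return 1.0 / (1.0 + math.exp(-(silence_ms - boundary_ms) / spread_ms))
```

At the category boundary the function returns 0.5; a shallower slope pulls responses at the continuum endpoints back toward chance, which is what "reduced ability to distinguish the words" means operationally.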

Keywords: aging; cochlear implant; presentation level; temporal processing.


22
Trends Hear




. Jan-Dec 2019;23:2331216519885568. doi: 10.1177/2331216519885568.
A Set of Time-and-Frequency-Localized Short-Duration Speech-Like Stimuli for Assessing Hearing-Aid Performance via Cortical Auditory-Evoked Potentials
Michael A Stone 1 2, Anisa Visram 1 2, James M Harte 3, Kevin J Munro 1 2
PMID: 31858885
PMCID: PMC6967206
DOI: 10.1177/2331216519885568 (free PMC article)

Abstract


Short-duration speech-like stimuli, for example, excised from running speech, can be used in the clinical setting to assess the integrity of the human auditory pathway at the level of the cortex. Modeling of the cochlear response to these stimuli demonstrated an imprecision in the location of the spectrotemporal energy, giving rise to uncertainty as to which part of a stimulus, and at what time, caused any evoked electrophysiological response. This article reports the development and assessment of four short-duration, limited-bandwidth stimuli centered at low, mid, mid-high, and high frequencies, suitable for free-field delivery and, in addition, reproduction via hearing aids. The durations were determined by the British Society of Audiology recommended procedure for measuring Cortical Auditory-Evoked Potentials. The levels and bandwidths were chosen via a computational model to produce uniform cochlear excitation over a width exceeding that likely in a worst-case hearing-impaired listener. These parameters produce robustness against errors in insertion gains, and variation in frequency responses, due to transducer imperfections, room modes, and age-related variation in meatal resonances. The parameter choice predicts large spectral separation between adjacent stimuli on the cochlea. Analysis of the signals processed by examples of recent digital hearing aids mostly showed similar levels of gain applied to each stimulus, independent of whether the stimulus was presented in isolation, in bursts, continuously, or embedded in continuous speech. These stimuli seem to be suitable for measuring hearing-aided Cortical Auditory-Evoked Potentials and have the potential to be of benefit in the clinical setting.

Keywords: Cortical Auditory-Evoked Potential; auditory late response; auditory perception; hearing aids; modulations.


23
Int J Environ Res Public Health




. 2020 Apr 26;17(9):3015. doi: 10.3390/ijerph17093015.
Receptive and Expressive Vocabulary Skills and Their Correlates in Mandarin-Speaking Infants with Unrepaired Cleft Lip and/or Palate
Si-Wei Ma 1 2 3, Li Lu 4, Ting-Ting Zhang 5, Dan-Tong Zhao 6, Bin-Ting Yang 1, Yan-Yan Yang 1, Jian-Min Gao 6
PMID: 32357522
PMCID: PMC7246725
DOI: 10.3390/ijerph17093015 (free PMC article)

Abstract


Background: Vocabulary skills in infants with cleft lip and/or palate (CL/P) are related to various factors but remain underexplored among Mandarin-speaking infants with CL/P. This study identified receptive and expressive vocabulary skills among Mandarin-speaking infants with unrepaired CL/P prior to cleft palate surgery and their associated factors.

Methods: This is a cross-sectional study involving patients at the Cleft Lip and Palate Center of the Stomatological Hospital of Xi'an Jiaotong University between July 2017 and December 2018. The Putonghua Communicative Development Inventories-Short Form (PCDI-SF) was used to assess early vocabulary skills.

Results: A total of 134 children aged 9-16 months prior to cleft palate surgery were included in the study. The prevalences of delays in receptive and expressive vocabulary skills were 72.39% (95% CI: 64.00-79.76%) and 85.07% (95% CI: 77.89-90.64%), respectively. Multiple logistic regression showed that children aged 11-13 months (OR = 6.46, 95% CI: 1.76-23.76) and 14-16 months (OR = 24.32, 95% CI: 3.86-153.05), and those with hard/soft cleft palate and soft cleft palate (HSCP/SCP) (OR = 5.63, 95% CI: 1.02-31.01), were more likely to be delayed in receptive vocabulary skills.
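Odds ratios and confidence intervals of this kind come from the standard transformation of logistic-regression coefficients; a minimal sketch using hypothetical coefficient values (the study's fitted coefficients are not reported in the abstract):

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Convert a logistic-regression coefficient and its standard error
    into an odds ratio with a 95% Wald confidence interval:
    OR = exp(beta), CI = (exp(beta - z*se), exp(beta + z*se))."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)
```

Because the interval is symmetric on the log-odds scale, it becomes asymmetric after exponentiation, which is why intervals like 3.86-153.05 stretch far above their point estimate.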

Conclusions: Delays in vocabulary skills were common among Mandarin-speaking CL/P infants, and older age was associated with greater vocabulary delay. The findings suggest the necessity and importance of early and effective identification of CL/P; early intervention programs and effective treatment are recommended for Chinese CL/P infants.

Keywords: China; Mandarin; cleft lip and/or palate; expressive vocabulary; infants; receptive vocabulary; vocabulary skills.

Conflict of interest statement


The authors report no conflicts of interest.


24
Can J Exp Psychol




. 2020 Mar;74(1):35-43. doi: 10.1037/cep0000185. Epub 2019 Aug 8.
Production of picture names improves picture recognition
Kathleen L Hourihan 1, Landon A Churchill 1
PMID: 31393155
DOI: 10.1037/cep0000185

Abstract


Words read aloud are later recalled and recognized better than words read silently: the production effect. Previous research (Fawcett, Quinlan, & Taylor, 2012) has demonstrated a production effect in old/new recognition of line drawings. The current study examined whether production at encoding can improve memory for the visual details of a picture, or whether it is primarily memory for the picture's verbal label that benefits from production. Participants studied a list of photographs of nameable objects by naming half of the objects aloud and half silently. In Experiment 1, a control group completed a free recall test for the object names while the experimental group completed a 4-alternative forced-choice recognition test for the studied pictures and provided confidence judgments in their recognition decisions. Both groups showed a significant production effect. Experiment 2 obtained image typicality ratings and naming data for use in Experiment 3. In Experiment 3, studied items were tested after a 1-week delay in one of three different types of 2-alternative forced-choice recognition test: versus a different picture exemplar of the same item; versus a different picture; or as a verbal label versus a different verbal label. Results showed a significant production effect in all testing conditions, with the magnitude of the effect similar across conditions. Production improves memory for both the visual details and verbal label of pictures. (PsycINFO Database Record (c) 2020 APA, all rights reserved).

25
Randomized Controlled Trial
J Speech Lang Hear Res




. 2019 Sep 20;62(9):3183-3203. doi: 10.1044/2019_JSLHR-S-18-0288. Epub 2019 Sep 3.
An N-of-1 Randomized Controlled Trial of Interventions for Children With Inconsistent Speech Sound Errors
Susan Rvachew 1, Tanya Matthews 1
PMID: 31479383
DOI: 10.1044/2019_JSLHR-S-18-0288

Abstract


Purpose The aim of this study was to test the hypothesis that children with inconsistent speech errors would respond differentially to 1 of 3 specific interventions depending on their primary underlying impairment: Children with deficient motor planning were expected to respond best to an auditory-motor integration (AMI) intervention, and children with deficient phonological planning were expected to respond best to a phonological memory and planning (PMP) intervention. Method Twelve participants were diagnosed with a motor planning (n = 7) or phonological planning (n = 5) deficit based on a comprehensive assessment, which included the Syllable Repetition Task as an important source of diagnostic evidence. An N-of-1 randomized controlled trial was used. Each child experienced all 3 interventions: AMI, PMP, and control (CTL); however, these interventions were randomly allocated to sessions within weeks (3 sessions per week × 6 weeks for 18 sessions). The AMI intervention procedures targeted knowledge of the acoustic-phonetic target and integration of auditory and somatosensory feedback during speech practice. The PMP intervention procedures targeted segmenting and recompiling the phonological plan for each word. The CTL intervention was standard drill practice. The child was taught 5 pseudowords in a meaningful context in each intervention condition. Results Same-day (SD) probes assessed transfer from taught pseudowords to untaught real words, and next-day (ND) probes assessed retention of that learning. Nonparametric resampling tests with pooling of p values across children with the same diagnosis were used to assess the results. Pooled p values indicated a significant benefit of AMI over PMP for the group with a motor planning deficit (p = 2.01E-04 for SD probes and 2.97E-03 for ND probes) and a significant benefit of PMP over AMI for the group with a phonological planning deficit (p = 1.22E-02 for SD probes and 1.32E-02 for ND probes). Response to the CTL intervention was variable within groups. Conclusion In this study, the child's underlying psycholinguistic deficit helped to predict response to intervention.
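The "nonparametric resampling tests with pooling of p values" can be approximated, under simplifying assumptions (a plain one-sided permutation test per child and Fisher's method for pooling; neither detail is specified in the abstract), as:

```python
import math
import random

def perm_pvalue(a, b, n_perm=2000, seed=1):
    """One-sided permutation test of mean(a) > mean(b): shuffle the
    pooled scores and count differences at least as large as observed."""
    rng = random.Random(seed)
    observed = sum(a) / len(a) - sum(b) / len(b)
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = sum(pooled[:len(a)]) / len(a) - sum(pooled[len(a):]) / len(b)
        if diff >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one correction keeps p > 0

def fisher_pool(pvals):
    """Fisher's method: X = -2 * sum(ln p) is chi-square with 2k df
    under the joint null; the survival function has a closed form
    for even degrees of freedom."""
    x = -2.0 * sum(math.log(p) for p in pvals)
    term, tail = 1.0, 0.0
    for i in range(len(pvals)):          # sum_{i=0}^{k-1} (x/2)^i / i!
        tail += term
        term *= (x / 2.0) / (i + 1)
    return math.exp(-x / 2.0) * tail
```

Pooling lets several modestly sized single-case (N-of-1) results support one group-level conclusion, which is the role it plays in the design above.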

26
Cereb Cortex




. 2019 Dec 17;29(11):4803-4817. doi: 10.1093/cercor/bhz014.
A Double Dissociation in Sensitivity to Verb and Noun Semantics Across Cortical Networks
Giulia V Elli 1, Connor Lane 2, Marina Bedny 1
PMID: 30767007
PMCID: PMC6917520 (available on 2020-12-17)
DOI: 10.1093/cercor/bhz014

Abstract


What is the neural organization of the mental lexicon? Previous research suggests that partially distinct cortical networks are active during verb and noun processing, but what information do these networks represent? We used multivoxel pattern analysis (MVPA) to investigate whether these networks are sensitive to lexicosemantic distinctions among verbs and among nouns and, if so, whether they are more sensitive to distinctions among words in their preferred grammatical class. Participants heard 4 types of verbs (light emission, sound emission, hand-related actions, mouth-related actions) and 4 types of nouns (birds, mammals, manmade places, natural places). As previously shown, the left posterior middle temporal gyrus (LMTG+), and inferior frontal gyrus (LIFG) responded more to verbs, whereas the inferior parietal lobule (LIP), precuneus (LPC), and inferior temporal (LIT) cortex responded more to nouns. MVPA revealed a double-dissociation in lexicosemantic sensitivity: classification was more accurate among verbs than nouns in the LMTG+, and among nouns than verbs in the LIP, LPC, and LIT. However, classification was similar for verbs and nouns in the LIFG, and above chance for the nonpreferred category in all regions. These results suggest that the lexicosemantic information about verbs and nouns is represented in partially nonoverlapping networks.

Keywords: fMRI; lexicosemantic representations; multivoxel pattern analysis; nouns; verbs.

© The Author(s) 2019. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.


27
Clin Neuropsychol




. 2020 May;34(4):775-796. doi: 10.1080/13854046.2019.1704436. Epub 2019 Dec 23.
Walking, talking, and suppressing: Executive functioning mediates the relationship between higher expressive suppression and slower dual-task walking among older adults
Madison A Niermeyer 1, Yana Suchy 1
PMID: 31868093
DOI: 10.1080/13854046.2019.1704436

Abstract


Objective: Dual-task walking, which is related to fall risk, has also been shown to relate to executive functioning (EF). EF is known to be vulnerable to the effects of an emotion-regulation strategy known as expressive suppression, such that higher engagement in expressive suppression is related to subsequent decrements in EF. However, it is unknown whether expressive suppression is also associated with slower dual-task walking. In addition, if such an association exists, it is unknown whether EF mediates the relationship between expressive suppression and dual-task gait speed.

Methods: Ninety-five community-dwelling older adults completed tasks of EF and lower-order component processes from the Delis-Kaplan Executive Function System (D-KEFS), as well as self-report measures of expressive suppression use in the 24 hours prior to testing and a measure of depressive symptoms.

Results: Higher self-reported expressive suppression related not only to poorer EF but also to slower dual-task walking beyond age and depressive symptoms; however, these results did not hold when individuals with possible undiagnosed MCI were excluded. EF mediated the relationship between expressive suppression and dual-task walking speed.

Conclusion: Expressive suppression appears to weaken EF, which in turn impacts executive aspects of motor functioning (such as walking under cognitive load) for cognitively vulnerable individuals. Quantifying and accounting for the taxing effect of effortful emotion regulation may improve the accuracy of EF assessment. Expressive suppression represents a potentially modifiable target to help reduce EF lapses and motor failings among older adults.

Keywords: Executive functions; aging; dual-task gait; expressive suppression; walking.

28
Cognition




. 2019 Dec;193:103991. doi: 10.1016/j.cognition.2019.06.003. Epub 2019 Sep 14.
Underspecification in toddlers' and adults' lexical representations
Jie Ren 1, Uriel Cohen Priva 2, James L Morgan 2
PMID: 31525643
PMCID: PMC7134210 (available on 2020-12-01)
DOI: 10.1016/j.cognition.2019.06.003

Abstract


Recent research has shown that toddlers' lexical representations are phonologically detailed, quantitatively much like those of adults. Studies in this article explore whether toddlers' and adults' lexical representations are qualitatively similar. Psycholinguistic claims (Lahiri & Marslen-Wilson, 1991; Lahiri & Reetz, 2002, 2010) based on underspecification (Kiparsky, 1982 et seq.) predict asymmetrical judgments in lexical processing tasks; these have been supported in some psycholinguistic research showing that participants are more sensitive to noncoronal-to-coronal (pop → top) than to coronal-to-noncoronal (top → pop) changes or mispronunciations. Three experiments using on-line visual world procedures showed that 19-month-olds and adults displayed sensitivities to both noncoronal-to-coronal and coronal-to-noncoronal mispronunciations of familiar words. No hints of any asymmetries were observed for either age group. There thus appears to be considerable developmental continuity in the nature of early and mature lexical representations. Discrepancies between the current findings and those of previous studies appear to be due to methodological differences that cast doubt on the validity of claims of psycholinguistic support for lexical underspecification.

Keywords: Developmental continuity; Lexical representation; Mispronunciation processing; Phonological details; Underspecification.

Copyright © 2019 Elsevier B.V. All rights reserved.

29
J Speech Lang Hear Res




. 2019 Sep 20;62(9):3248-3264. doi: 10.1044/2019_JSLHR-S-19-0058. Epub 2019 Aug 21.
The Effect of Tongue-Jaw Coupling on Phonetic Distinctiveness of Vowels in Amyotrophic Lateral Sclerosis
Panying Rong 1
PMID: 31433712
DOI: 10.1044/2019_JSLHR-S-19-0058

Abstract


Purpose The aim of this study was to determine the relation of tongue-jaw coupling to phonetic distinctiveness of vowels in persons at different stages (i.e., early, middle, late) of bulbar motor involvement in amyotrophic lateral sclerosis (ALS) and healthy controls. Method The pattern of spatial tongue-jaw coupling was derived from 11 individuals with ALS and 11 healthy controls using the parallel factor analysis. Two articulatory components, which correspond to tongue displacement independent of the jaw (iTongue) and jaw contribution to tongue displacement (cJaw), were extracted from the composite tongue-jaw displacement. These articulatory components were correlated with F1 (i.e., height) and F2-F1 (i.e., advancement) of 4 vowels (/i/, /u/, /æ/, and /ɔ/) across all participants in each group. In addition, a comprehensive index of functional tongue-jaw coupling was derived as the ratio of cJaw/(iTongue + cJaw), and an acoustic index of vowel distortion (VowelDis) was derived to quantify the overall disease-related changes in phonetic distinctiveness of vowels. Based on these indices, disease-related changes in tongue-jaw coupling and phonetic distinctiveness of vowels were examined in individuals at the early, middle, and late stages of the disease. Results For healthy controls, both iTongue and cJaw contributed to F2-F1, while only cJaw contributed to F1. For individuals with ALS, both iTongue and cJaw contributed to F1, whereas only cJaw contributed to F2-F1. Disease-related changes in tongue-jaw coupling included (a) an overall decrease of the percent contribution of the tongue to the composite tongue-jaw displacement accompanied by an increase of the percent contribution of the jaw and (b) several changes in the direction of tongue and jaw displacements that occurred at different stages of the disease. These disease-related changes in tongue-jaw coupling had various impacts on phonetic distinctiveness of vowels, resulting in (a) a backward shift of front vowels and reduced front-back vowel contrasts, which occurred early and throughout the disease stages; (b) raising of all vowels during the middle stage of the disease; and (c) reduced high-low vowel contrasts during the late stage of the disease. Overall, phonetic distinctiveness of vowels deteriorated progressively throughout the disease course. Conclusions Unlike healthy controls, who established optimal functional coupling between the tongue and the jaw during vowel productions, individuals at the early-to-middle stages of bulbar ALS showed various adaptive changes in tongue-jaw coupling in response to the disease-related biomechanical and muscular changes in the articulators (particularly in the tongue). These adaptive changes in tongue-jaw coupling were found to be partially effective in mitigating the negative effect of articulatory involvement on phonetic distinctiveness of vowels. As the disease progressed to the late stage, such adaptations appeared to be no longer evident, resulting in a substantial overall reduction of vowel contrasts.
Int J Pediatr Otorhinolaryngol




. 2020 Jun;133:110003. doi: 10.1016/j.ijporl.2020.110003. Epub 2020 Mar 13.
Data logging variables and speech perception in prelingually deafened pediatric cochlear implant users
Sıdıka Cesur 1, Mustafa Yüksel 2, Ayça Çiprut 3
PMID: 32203760
DOI: 10.1016/j.ijporl.2020.110003

Abstract


Objectives: To investigate the relationship among objectively gathered data logging measurements, patient-related variables, and the speech recognition performance of pediatric cochlear implant (CI) users.

Methods and materials: Thirty-two prelingually implanted children who were able to perform a word discrimination test were included in this study. To reveal the relationship between speech perception abilities and auditory exposure, seven data logging variables were analyzed: "on-air," "off-air," "coil-off," "speech," "speech in noise," "music," and "noise." In addition, implantation age (months) and CI usage duration (months) were taken into account. Finally, differences between unilateral, sequential bilateral, and simultaneous bilateral CI users were examined across all study variables.

Results: The average on-air time ranged between 10.52 and 12.30 across the groups. With sequential implantation, lower on-air and higher coil-off values were observed for the second CI. With simultaneous bilateral implantation, data logging measurements were almost the same in both implants. Word recognition score (WRS) was significantly correlated (p < 0.05) with on-air time (r = 0.62), coil-off count (r = -0.48), chronological age (r = 0.48), and CI duration (r = 0.44). A multiple linear regression model was fit to predict WRS, with on-air time, CI duration, and chronological age as predictors.

Conclusions: The critical importance of early intervention and long-term use of CI is well established in the literature and is corroborated by our findings. However, the key finding of the present study is that consistent CI use and the quality of the daily listening environment also exerted a major positive effect on the speech recognition performance of pediatric CI users. Therefore, when monitoring pediatric CI recipients, it is important to review device usage data in order to detect problems early after implantation.

Keywords: Cochlear implant; Data logging; Listening environment; Speech recognition.

Copyright © 2020 Elsevier B.V. All rights reserved.
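The multiple linear regression described above can be sketched with ordinary least squares. All data and coefficients below are fabricated for illustration (the abstract does not report the fitted model), so only the method, not the numbers, reflects the study:

```python
import numpy as np

# Fabricated example: predict word recognition score (WRS) from daily
# on-air time, CI use duration, and chronological age, mirroring the
# predictors named in the abstract. Units and coefficients are invented.
rng = np.random.default_rng(0)
n = 32
on_air = rng.uniform(10.5, 12.3, n)      # daily on-air time
ci_duration = rng.uniform(12, 120, n)    # months of CI use
age = rng.uniform(36, 180, n)            # chronological age in months
# Generate WRS from an assumed linear relationship plus noise.
wrs = 10 + 4.0 * on_air + 0.15 * ci_duration + 0.05 * age + rng.normal(0, 2, n)

# Ordinary least squares: WRS ~ intercept + on_air + ci_duration + age.
X = np.column_stack([np.ones(n), on_air, ci_duration, age])
coef, *_ = np.linalg.lstsq(X, wrs, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((wrs - pred) ** 2) / np.sum((wrs - wrs.mean()) ** 2)
print("coefficients:", coef, "R^2:", r2)
```

In practice one would also inspect coefficient significance and collinearity among predictors (age and CI duration are likely correlated), which plain least squares does not report.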
J Exp Psychol Learn Mem Cogn




. 2020 May;46(5):872-893. doi: 10.1037/xlm0000762. Epub 2019 Sep 30.
Chunking and redintegration in verbal short-term memory
Dennis Norris 1, Kristjan Kalm 1, Jane Hall 1
PMID: 31566390
PMCID: PMC7144498
DOI: 10.1037/xlm0000762

Abstract


Memory for verbal material improves when words form familiar chunks. But how does the improvement due to chunking come about? Two possible explanations are that the input might be actively recoded into chunks, each of which takes up less memory capacity than items not forming part of a chunk (a form of data compression), or that chunking is based on redintegration. If chunking is achieved by redintegration, representations of chunks exist only in long-term memory (LTM) and help to reconstruct degraded traces in short-term memory (STM). In 6 experiments using 2-alternative forced-choice recognition and immediate serial recall, we find that when chunks are small (2 words) they display a pattern suggestive of redintegration, whereas larger chunks (3 words) show a pattern consistent with data compression. This concurs with previous data showing that there is a cost involved in recoding material into chunks in STM. With smaller chunks, this cost seems to outweigh the benefits of recoding words into chunks. (PsycInfo Database Record (c) 2020 APA, all rights reserved).

Cereb Cortex




. 2019 Sep 13;29(10):4077-4089. doi: 10.1093/cercor/bhy289.
The Role of the Human Auditory Corticostriatal Network in Speech Learning
Gangyi Feng 1 2, Han Gyol Yi 3, Bharath Chandrasekaran 4
PMID: 30535138
PMCID: PMC6931274
DOI: 10.1093/cercor/bhy289

Abstract


We establish a mechanistic account of how the mature human brain functionally reorganizes to acquire and represent new speech sounds. Native speakers of English learned to categorize Mandarin lexical tone categories produced by multiple talkers using trial-by-trial feedback. We hypothesized that the corticostriatal system is a key intermediary in mediating temporal lobe plasticity and the acquisition of new speech categories in adulthood. We conducted a functional magnetic resonance imaging experiment in which participants underwent a sound-to-category mapping task. Diffusion tensor imaging data were collected, and probabilistic fiber tracking analysis was employed to assay the auditory corticostriatal pathways. Multivariate pattern analysis showed that talker-invariant novel tone category representations emerged in the left superior temporal gyrus (LSTG) within a few hundred training trials. Univariate analysis showed that the putamen, a subregion of the striatum, was sensitive to positive feedback in correctly categorized trials. With learning, functional coupling between the putamen and LSTG increased during error processing. Furthermore, fiber tractography demonstrated robust structural connectivity between the feedback-sensitive striatal regions and the LSTG regions that represent the newly learned tone categories. Our convergent findings highlight a critical role for the auditory corticostriatal circuitry in mediating the acquisition of new speech categories.

Keywords: MVPA; corticostriatal system; multi-modal imaging; speech category learning.

© The Author(s) 2018. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

J Psycholinguist Res




. 2020 Feb;49(1):73-97. doi: 10.1007/s10936-019-09672-9.
The Impact of a Human Figure in a Scene on Spatial Descriptions in Speech, Gesture, and Gesture Alone
Fey Parrill 1, Alexsis Blocton 2, Paige Veta 2, Mary Lowery 2, Ava Schneider 2
PMID: 31529372
DOI: 10.1007/s10936-019-09672-9

Abstract


The presence of a human figure in a scene appears to change how people describe it. About 20% of participants take the human figure's viewpoint (Tversky and Hard in Cognition 110:124-129, 2009. http://doi.org/10.1016/j.cognition.2008.10.008). Five exploratory studies compare descriptions of a scene with no person to descriptions of a scene with a person. About 20% of participants are predicted to use the person's point of view in the "person" conditions. Study 1 replicates the original pattern. Study 2 shows that the pattern holds when object/scene are changed, and that the figure's gaze towards/away from the object does not change the pattern. Studies 3 and 4 show the pattern holds when the object has different positions and when it is moving. Study 5 shows the pattern holds when the describer is talking to an interlocutor, in both speech and co-speech gesture, and when the person is using gesture alone. The presence of a human figure in a scene appears to be a robust variable in shaping spatial descriptions.

Keywords: Gesture; Perspective taking; Spatial description; Viewpoint.

Cereb Cortex




. 2019 Dec 17;29(11):4743-4752. doi: 10.1093/cercor/bhz007.
The Dynamic Associations Between Cortical Thickness and General Intelligence are Genetically Mediated
J Eric Schmitt 1, Armin Raznahan 2, Liv S Clasen 2, Greg L Wallace 3, Joshua N Pritikin 4, Nancy Raitano Lee 5, Jay N Giedd 6, Michael C Neale 7
PMID: 30715232
PMCID: PMC6917515 (available on 2020-12-17)
DOI: 10.1093/cercor/bhz007

Abstract


The neural substrates of intelligence represent a fundamental but largely uncharted topic in human developmental neuroscience. Prior neuroimaging studies have identified modest but highly dynamic associations between intelligence and cortical thickness (CT) in childhood and adolescence. In a separate thread of research, quantitative genetic studies have repeatedly demonstrated that most measures of intelligence are highly heritable, as are many brain regions associated with intelligence. In the current study, we integrate these 2 streams of prior work by examining the genetic contributions to CT-intelligence relationships using a genetically informative longitudinal sample of 813 typically developing youth, imaged with high-resolution MRI and assessed with Wechsler Intelligence Scales (IQ). In addition to replicating the phenotypic association between multimodal association cortex and language centers with IQ, we find that CT-IQ covariance is nearly entirely genetically mediated. Moreover, shared genetic factors drive the rapidly evolving landscape of CT-IQ relationships in the developing brain.

Trial registration: ClinicalTrials.gov NCT00001246.

Keywords: MRI; cortical thickness; genetics; intelligence; neurodevelopment.

© The Author(s) 2019. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

J Exp Psychol Learn Mem Cogn




. 2020 May;46(5):894-906. doi: 10.1037/xlm0000765. Epub 2019 Oct 17.
How in-group bias influences the level of detail of speaker-specific information encoded in novel lexical representations
Sara Iacozza 1, Antje S Meyer 1, Shiri Lev-Ari 1
PMID: 31621359
DOI: 10.1037/xlm0000765

Abstract


An important issue in theories of word learning is how abstract or context-specific representations of novel words are. One aspect of this broad issue is how well learners maintain information about the source of novel words. We investigated whether listeners' source memory was better for words learned from members of their in-group (students of their own university) than it is for words learned from members of an out-group (students from another institution). In the first session, participants saw 6 faces and learned which of the depicted students attended either their own or a different university. In the second session, they learned competing labels (e.g., citrus-peller and citrus-schiller; in English, lemon peeler and lemon stripper) for novel gadgets, produced by the in-group and out-group speakers. Participants were then tested for source memory of these labels and for the strength of their in-group bias, that is, for how much they preferentially process in-group over out-group information. Analyses of source memory accuracy demonstrated an interaction between speaker group membership status and participants' in-group bias: Stronger in-group bias was associated with less accurate source memory for out-group labels than in-group labels. These results add to the growing body of evidence on the importance of social variables for adult word learning. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
Cereb Cortex




. 2019 Sep 13;29(10):4017-4034. doi: 10.1093/cercor/bhy282.
Propagation of Information Along the Cortical Hierarchy as a Function of Attention While Reading and Listening to Stories
Mor Regev 1 2 3, Erez Simony 4 5, Katherine Lee 6, Kean Ming Tan 7, Janice Chen 8, Uri Hasson 1 2
PMID: 30395174
PMCID: PMC6735257
DOI: 10.1093/cercor/bhy282

Abstract


How does attention route information from sensory to high-order areas as a function of task, within the relatively fixed topology of the brain? In this study, participants were simultaneously presented with 2 unrelated stories-one spoken and one written-and asked to attend one while ignoring the other. We used fMRI and a novel intersubject correlation analysis to track the spread of information along the processing hierarchy as a function of task. Processing the unattended spoken (written) information was confined to auditory (visual) cortices. In contrast, attending to the spoken (written) story enhanced the stimulus-selective responses in sensory regions and allowed it to spread into higher-order areas. Surprisingly, we found that the story-specific spoken (written) responses for the attended story also reached secondary visual (auditory) regions of the unattended sensory modality. These results demonstrate how attention enhances the processing of attended input and allows it to propagate across brain areas.

Keywords: attention; fMRI; information propagation; intersubject functional correlation; language.

© The Author(s) 2018. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

Sci Rep




. 2019 Apr 5;9(1):5686. doi: 10.1038/s41598-019-41794-x.
Neural correlates of abnormal auditory feedback processing during speech production in Alzheimer's disease
Kamalini G Ranasinghe 1, Hardik Kothare 2 3 4, Naomi Kort 2 3, Leighton B Hinkley 2 3, Alexander J Beagle 5, Danielle Mizuiri 3, Susanne M Honma 3, Richard Lee 5, Bruce L Miller 5, Maria Luisa Gorno-Tempini 5, Keith A Vossel 5 6, John F Houde 2, Srikantan S Nagarajan 2 3
PMID: 30952883
PMCID: PMC6450891
DOI: 10.1038/s41598-019-41794-x

Abstract


Accurate integration of sensory inputs and motor commands is essential to achieve successful behavioral goals. A robust model of sensorimotor integration is the pitch perturbation response, in which speakers respond rapidly to shifts of the pitch in their auditory feedback. In a previous study, we demonstrated abnormal sensorimotor integration in patients with Alzheimer's disease (AD) with an abnormally enhanced behavioral response to pitch perturbation. Here we examine the neural correlates of the abnormal pitch perturbation response in AD patients, using magnetoencephalographic imaging. The participants phonated the vowel /α/ while a real-time signal processor briefly perturbed the pitch (100 cents, 400 ms) of their auditory feedback. We examined the high-gamma band (65-150 Hz) responses during this task. AD patients showed significantly reduced left prefrontal activity during the early phase of perturbation and increased right middle temporal activity during the later phase of perturbation, compared to controls. Activity in these brain regions significantly correlated with the behavioral response. These results demonstrate that impaired prefrontal modulation of speech-motor-control network and additional recruitment of right temporal regions are significant mediators of aberrant sensorimotor integration in patients with AD. The abnormal neural integration mechanisms signify the contribution of cortical network dysfunction to cognitive and behavioral deficits in AD.

Conflict of interest statement


K.G.R., H.K., N.K., L.B.H., A.J.B., S.H., D.M., R.L., M.L.G., K.A.V., S.S.N. and J.F.H. declare no competing interests relevant to this work. B.L.M. has the following disclosures: serves as Medical Director for the John Douglas French Foundation; Scientific Director for the Tau Consortium; Director/Medical Advisory Board of the Larry L. Hillblom Foundation; and past president of the International Society of Frontotemporal Dementia (ISFTD).

Multicenter Study
J Head Trauma Rehabil




. Sep/Oct 2019;34(5):326-339. doi: 10.1097/HTR.0000000000000528.
Development and Psychometric Characteristics of the TBI-QOL Communication Item Bank
Matthew L Cohen 1, Pamela A Kisala, Aaron J Boulton, Noelle E Carlozzi, Christine V Cook, David S Tulsky
PMID: 31498231
DOI: 10.1097/HTR.0000000000000528

Abstract


Objective: To develop an item response theory (IRT)-based patient-reported outcome measure of functional communication for adults with traumatic brain injury (TBI).

Setting: Five medical centers that were TBI Model Systems sites.

Participants: A total of 569 adults with TBI (28% complicated-mild; 13% moderate; and 58% severe).

Design: Grounded theory-based qualitative item development, large-scale item calibration testing, confirmatory factor analyses, psychometric analyses with graded response model IRT.

Main measure: Traumatic Brain Injury-Quality of Life (TBI-QOL) Communication Item Bank, version 1.0.

Results: From an initial pool of 48 items, 31 items were retained in the final instrument based on adequate fit to a unidimensional model and absence of bias across several demographic and clinical subgroupings. The TBI-QOL Communication Item Bank demonstrated excellent score precision (reliability ≥ 0.95) across a wide range of communication impairment levels, particularly for individuals with more severe difficulties. The TBI-QOL Communication Item Bank is available as a full item bank, fixed-length short form, and as a computerized adaptive test.

Conclusions: The TBI-QOL Communication Item Bank permits precise measurement of patient-reported functional communication after TBI. Future development will validate the instrument against performance-based, clinician-reported, and surrogate-reported assessments.
Biol Lett




. 2020 May;16(5):20200232. doi: 10.1098/rsbl.2020.0232. Epub 2020 May 27.
Chimpanzee lip-smacks confirm primate continuity for speech-rhythm evolution
André S Pereira 1 2, Eithne Kavanagh 3, Catherine Hobaiter 1, Katie E Slocombe 3, Adriano R Lameira 1 4
PMID: 32453963
PMCID: PMC7280036 (available on 2021-05-01)
DOI: 10.1098/rsbl.2020.0232

Abstract


Speech is a human hallmark, but its evolutionary origins continue to defy scientific explanation. Recently, the open-close mouth rhythm of 2-7 Hz (cycles/second) characteristic of all spoken languages has been identified in the orofacial signals of several nonhuman primate genera, including orangutans, but evidence from any of the African apes remained missing. Evolutionary continuity for the emergence of speech is, thus, still inconclusive. To address this empirical gap, we investigated the rhythm of chimpanzee lip-smacks across four populations (two captive and two wild). We found that lip-smacks exhibit a speech-like rhythm at approximately 4 Hz, closing a gap in the evidence for the evolution of speech-rhythm within the primate order. We observed sizeable rhythmic variation within and between chimpanzee populations, with differences of over 2 Hz at each level. This variation did not result, however, in systematic group differences within our sample. To further explore the phylogenetic and evolutionary perspective on this variability, inter-individual and inter-population analyses will be necessary across primate species producing mouth signals at speech-like rhythm. Our findings support the hypothesis that speech recruited ancient primate rhythmic signals and suggest that multi-site studies may still reveal new windows of understanding about these signals' use and production along the evolutionary timeline of speech.

Keywords: chimpanzees; great apes; lip-smacks; speech evolution; speech-like rhythm.

Conflict of interest statement


The authors declare that they have no conflict of interest.
Case Reports
J Appl Behav Anal




. 2019 Jul;52(3):746-755. doi: 10.1002/jaba.569. Epub 2019 Apr 29.
Awareness training reduces college students' speech disfluencies in public speaking
Christina C Montes 1, Megan R Heinicke 1, Danielle M Geierman 1
PMID: 31032933
DOI: 10.1002/jaba.569

Abstract


Recent research suggests that a modified habit reversal procedure, including awareness training alone or combined with competing response training, is effective in decreasing speech disfluencies for college students. However, these procedures are potentially lengthy, sometimes require additional booster sessions, and could result in covariation of untargeted speaker behavior. We extended prior investigations by evaluating awareness training as a sole intervention while also measuring collateral effects of treatment on untargeted filler words and rate of speech. We found awareness training was effective for all participants without the use of booster sessions, and covariation between targeted filler words and secondary dependent variables was idiosyncratic across participants.

Keywords: awareness training; covariation; filler words; habit reversal; public speaking; speech disfluencies.

© 2019 Society for the Experimental Analysis of Behavior.
J Craniofac Surg




. 2020 Jun;31(4):1125-1128. doi: 10.1097/SCS.0000000000006275.
An Alternative Internal Le Fort I Distractor: Early Results With a New Trans-Nasal Device
Michael Lypka 1, Heather Hendricks
PMID: 32118665
DOI: 10.1097/SCS.0000000000006275

Abstract


Purpose: To report the early experience using a new internal trans-nasal Le Fort I distractor in patients with cleft lip and palate.

Methods: Patients with cleft lip and palate and severe maxillary deficiency, who were treated with the trans-nasal Le Fort I distractor, were retrospectively reviewed. Cephalometric images were evaluated preoperatively and at least 6 months postoperatively. Speech outcomes were measured before and at least 6 months after surgery. Patient experience with the device was documented and complications were recorded.

Results: Five male patients with bilateral cleft lip and palate (ages 11-19) underwent the maximum advancement allowed by the device (25 mm). Follow-up averaged 2 years. Average SNA changed from 75.5° preoperatively to 84.6° postoperatively. Average ANB angle changed from -2.8° to 7.4°, indicating a tendency toward Class II overcorrection. There was an overall increase in upper anterior facial height by 7.5 mm. All patients achieved acceptable postoperative occlusions. Two patients with borderline velopharyngeal function preoperatively developed velopharyngeal insufficiency postoperatively that had not resolved by 6 months, necessitating further surgery. Families reported ease of turning with minimal discomfort reported by patients. All patients maintained normal mouth opening during and after the distraction phase. Two of the patients developed localized pin site infections after the distraction phase that were treated successfully with oral antibiotics.

Conclusion: The trans-nasal Le Fort I distractor can be an effective device to advance the deficient maxilla and is well tolerated by patients.
J Psycholinguist Res




. 2020 Feb;49(1):163-174. doi: 10.1007/s10936-019-09676-5.
Analysis of Articulation Errors in Dysarthric Speech
Upashana Goswami 1, S R Nirmala 2 3, C M Vikram 4, Sishir Kalita 4, S R M Prasanna 4
PMID: 31659578
DOI: 10.1007/s10936-019-09676-5

Abstract


Imprecise articulation is the major issue reported in various types of dysarthria, and detection of articulation errors can aid diagnosis. Cues derived from both the burst and the formant transitions contribute to discriminating the place of articulation of stops. Any acoustic deviation in stops due to articulation error can therefore be analyzed by deriving features around the burst and the voicing onsets, and the derived features can be used to discriminate normal from dysarthric speech. In this work, a method is proposed to differentiate voiceless stops produced by normal speakers from those produced by speakers with dysarthria, using spectral moments, the two-dimensional discrete cosine transform of the linear prediction spectrum, and Mel-frequency cepstral coefficient features. These features and a cosine distance-based classifier are used to classify normal and dysarthric speech.

Keywords: Articulation errors; Dysarthric speech; Mel-frequency cepstral coefficients; Spectral moments; Stops; Two-dimensional discrete cosine transform.
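The cosine-distance classification step mentioned in the abstract can be sketched as follows. This assumes feature vectors (e.g., spectral moments or MFCCs around the burst and voicing onsets) have already been extracted; the feature values and class templates below are fabricated for illustration, not taken from the paper:

```python
import numpy as np

def cosine_distance(a, b):
    """Cosine distance: 1 minus the cosine of the angle between vectors."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def classify(features, normal_template, dysarthric_template):
    """Label a token by whichever class template it is closer to."""
    d_norm = cosine_distance(features, normal_template)
    d_dys = cosine_distance(features, dysarthric_template)
    return "normal" if d_norm < d_dys else "dysarthric"

# Hypothetical class templates (e.g., mean feature vectors per class).
normal_mean = np.array([1.2, 0.8, 3.1, 0.4])
dysarthric_mean = np.array([0.6, 1.5, 2.0, 1.1])
print(classify([1.1, 0.9, 3.0, 0.5], normal_mean, dysarthric_mean))
```

A nearest-template cosine rule like this is only one simple instance of distance-based classification; the paper's exact classifier design is not detailed in the abstract.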

Complement Ther Clin Pract




. 2020 May;39:101162. doi: 10.1016/j.ctcp.2020.101162. Epub 2020 Apr 7.
Can music influence cardiac autonomic system? A systematic review and narrative synthesis to evaluate its impact on heart rate variability
Helia Mojtabavi 1, Amene Saghazadeh 2, Vitor Engrácia Valenti 3, Nima Rezaei 4
PMID: 32379689
DOI: 10.1016/j.ctcp.2020.101162

Abstract


Background and purpose: The impact of music on the human body extends beyond an emotional response. Music can benefit the cardiovascular system by influencing heart rate variability (HRV), a well-accepted measure for analyzing the oscillations of the intervals between successive heartbeats and investigating the cardiovascular autonomic nervous system (ANS). This study is a systematic review examining the effect of musical interventions on HRV.

Methods: We conducted a systematic search in PubMed, Scopus, Web of Science, and Cochrane, and identified additional studies by hand-searching the reference lists of relevant articles.

Results: 29 original articles (24 pre-post intervention studies and five randomized controlled trials) with a total of 1368 subjects were eligible for inclusion in the systematic review. Of these, only three studies reported no significant impact of music on HRV, which might be due to small sample sizes and very short durations of music administration. The remaining studies suggested a positive impact of music on HRV at the 0.05 level of significance.

Conclusion: This systematic review supports music as a stimulus acting on the cardiac ANS that increases parasympathetic activity and HRV. The effects are, however, associated with a high risk of bias. Therefore, further studies are necessary to compare the impact of individualized music therapy with passive listening and preferred soundtracks.

Keywords: Autonomic nervous system; Heart rate variability; Music therapy; Systematic review.

Copyright © 2020 Elsevier Ltd. All rights reserved.
Eur Child Adolesc Psychiatry




. 2020 Sep;29(9):1217-1229. doi: 10.1007/s00787-019-01432-3. Epub 2019 Nov 8.
Toddlers' diurnal cortisol levels affected by out-of-home, center-based childcare and at-home, guardian-supervised childcare: comparison between different caregiving contexts
Katja Tervahartiala 1, Linnea Karlsson 2 3, Juho Pelto 2, Susanna Kortesluoma 2 4, Sirpa Hyttinen 5, Annarilla Ahtola 6, Niina Junttila 7, Hasse Karlsson 2 8
PMID: 31705206
PMCID: PMC7497366
DOI: 10.1007/s00787-019-01432-3

Abstract


Previous research suggests that attending non-parental out-of-home childcare is associated with elevated cortisol levels for some children. We aimed to compare diurnal saliva cortisol levels between children in out-of-home, center-based childcare and those in at-home, guardian-supervised childcare in Finland. A total of 213 children, aged 2.1 years (SD = 0.6), were drawn from the ongoing Finnish birth cohort study. Saliva samples were collected over 2 consecutive days (Sunday and Monday), with four samples drawn during each day: 30 min after waking up in the morning, at 10 am, between 2 and 3 pm, and in the evening before sleep. The results suggest that the shapes of the diurnal cortisol profiles were similar in both childcare groups, following a typical circadian rhythm. However, the overall cortisol levels were on average 30% higher (95% CI: [9%, 54%], p = .004) in the at-home childcare group than in the out-of-home childcare group. Furthermore, a slight increase in the diurnal cortisol pattern was noticed in both groups and on both measurement days during the afternoon. This increase was 27% higher ([2%, 57%], p = .031) in the out-of-home childcare group during the out-of-home childcare day in comparison with the at-home childcare day. The elevated afternoon cortisol levels were partly explained by afternoon naps, but other factors probably also contributed to the cortisol rise during the afternoon hours. Further research is needed to define how a child's individual characteristics as well as environmental factors are associated with cortisol secretion patterns in different caregiving contexts.

Keywords: At-home, guardian-supervised childcare; Diurnal cortisol levels; Early childhood education and care (ECEC); Hypothalamus–pituitary–adrenal (HPA) axis; Out-of-home, center-based childcare.

Conflict of interest statement


On behalf of all authors, the corresponding author states that there is no conflict of interest.

J Speech Lang Hear Res




. 2019 Sep 20;62(9):3397-3412. doi: 10.1044/2019_JSLHR-L-17-0305. Epub 2019 Sep 13.
A Pilot Study of Early Storybook Reading With Babies With Hearing Loss
Michelle I Brown 1, David Trembath 1, Marleen F Westerveld 1, Gail T Gillon 2
PMID: 31518512
DOI: 10.1044/2019_JSLHR-L-17-0305

Abstract


Purpose This pilot study explored the effectiveness of an early storybook reading (ESR) intervention for parents with babies with hearing loss (HL) for improving (a) parents' book selection skills, (b) parent-child eye contact, and (c) parent-child turn-taking. Advancing research into ESR, this study examined whether the benefits from an ESR intervention reported for babies without HL were also observed in babies with HL. Method Four mother-baby dyads participated in a multiple baseline single-case experimental design across behaviors. Treatment effects for parents' book selection skills, parent-child eye contact, and parent-child turn-taking were examined using visual analysis and Tau-U analysis. Results Statistically significant increases, with large to very large effect sizes, were observed for all 4 participants for parent-child eye contact and parent-child turn-taking. Limited improvements with ceiling effects were observed for parents' book selection skills. Conclusion The findings provide preliminary evidence for the effectiveness of an ESR intervention for babies with HL for promoting parent-child interactions through eye contact and turn-taking.
Mem Cognit




. 2020 Jan;48(1):111-126. doi: 10.3758/s13421-019-00966-w.
Forward and backward recall: Different visuospatial processes when you know what's coming
Dominic Guitard 1, Jean Saint-Aubin 2, Marie Poirier 3, Leonie M Miller 4, Anne Tolan 5
Affiliations expand
PMID: 31346926
DOI: 10.3758/s13421-019-00966-w

Abstract


In an immediate memory task, when participants are asked to recall list items in reverse order, benchmark memory phenomena found with more typical forward recall are not consistently reproduced. These inconsistencies have been attributed to the greater involvement of visuospatial representations in backward than in forward recall at the point of retrieval. In the present study, we tested this hypothesis with a dual-task paradigm in which manual-spatial tapping and dynamic visual noise were used as the interfering tasks. The interfering task was performed during list presentation or at recall. In the first four experiments, recall direction was only communicated at the point of recall. In Experiments 1 and 2, fewer words were recalled with manual tapping than in the control condition. However, the detrimental effect of manual tapping did not vary as a function of recall direction or processing stage. In Experiment 3, dynamic visual noise did not influence recall performance. In Experiment 4, articulatory suppression was performed on all trials and manual tapping was added on half of them. As in the first two experiments, manual tapping disrupted forward and backward recall to the same extent. In Experiment 5, recall direction was known before list presentation. As predicted by the visuospatial hypothesis, when manual tapping was performed during recall, its detrimental effect was limited to backward recall. Overall, the results can be explained by a modified version of the visuospatial hypothesis.

Keywords: Backward recall; Short-term memory; Visuospatial hypothesis.

47
J Craniofac Surg




. Jul-Aug 2020;31(5):1395-1399. doi: 10.1097/SCS.0000000000006483.
Abnormal Acoustic Features Following Pharyngeal Flap Surgery in Patients Aged Six Years and Older
Haiyan Zhou 1 2, Jingwei Lu 3, Chuhan Zhang 4, Xiao Li 1 2, Yuru Li 5
Affiliations expand
PMID: 32371713
DOI: 10.1097/SCS.0000000000006483

Abstract


In this study, older patients with velopharyngeal insufficiency were defined as those older than 6 years of age. The study aimed to evaluate the abnormal acoustic features of these older patients before and after posterior pharyngeal flap surgery. A retrospective medical record review was conducted for patients aged 6 years and older who underwent posterior pharyngeal flap surgery between November 2011 and March 2015. The patients' audio recordings were evaluated before and after surgery. Spectral analysis was conducted with the Computer Speech Lab (CSL)-4150B acoustic system using the following input data: the vowel /i/, the unaspirated plosive /b/, the aspirated plosive /p/, the aspirated fricatives /s/ and /x/, the unaspirated affricates /j/ and /z/, and the aspirated affricates /c/ and /q/. The patients were followed up for 3 months. Speech outcome was evaluated by comparing the postoperative phonetic data with the preoperative data. Subjective and objective analyses showed significant differences in the sonogram, formants, and speech articulation before and after posterior pharyngeal flap surgery. However, the sampled patients could not be considered to have high speech articulation (<85%), as the normal value is at or above 96%. Our results showed that pharyngeal flap surgery can correct the speech function of older patients with velopharyngeal insufficiency to some extent. Owing to the original errors in pronunciation patterns, pathological speech articulation persisted, and continued speech therapy is required.
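The CSL-4150B is a proprietary analysis system, but the sonogram view such systems produce is essentially a short-time Fourier transform magnitude. A minimal NumPy sketch on a synthetic signal; the signal parameters and component frequencies below are illustrative only, not values from the study:

```python
import numpy as np

def spectrogram(signal, frame_len=512, hop=256):
    """Short-time Fourier transform magnitude, the basis of the
    sonogram view used in acoustic analysis. Returns an array of
    shape (n_frames, frame_len // 2 + 1)."""
    window = np.hanning(frame_len)
    frames = [signal[i:i + frame_len] * window
              for i in range(0, len(signal) - frame_len + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1))

# Synthetic "vowel": two steady sine components standing in for
# the first two formants (frequencies chosen for illustration).
fs = 16000
t = np.arange(fs) / fs  # one second of samples
x = np.sin(2 * np.pi * 300 * t) + 0.5 * np.sin(2 * np.pi * 2300 * t)

S = spectrogram(x)
peak_bin = S.mean(axis=0).argmax()
print(peak_bin * fs / 512)  # peak near 300 Hz, the stronger component
```

Formant estimation in clinical tools typically goes further (e.g., linear predictive coding rather than raw spectral peaks), but the time-frequency decomposition above is the common first step.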
48
Cognition




. 2019 Dec;193:104008. doi: 10.1016/j.cognition.2019.104008. Epub 2019 Jun 25.
False recognition modality effects in short-term memory: Reversing the auditory advantage
Lionel C L Lim 1, Winston D Goh 2
Affiliations expand
PMID: 31252074
DOI: 10.1016/j.cognition.2019.104008

Abstract


The auditory advantage in short-term false recognition, that is, reduced false memories for auditory compared with visually presented words (Olszewska, Reuter-Lorenz, Munier, & Bendler, 2015), has been attributed to greater item distinctiveness in auditory than in visual memory traces. If so, varying the distinctiveness of auditory traces should influence false recognition rates. Phonologically and semantically related words were presented visually or aurally. The auditory advantage for semantic lists was replicated, but a reversal was observed for phonological lists. Reducing modality-specific acoustic and phonological distinctiveness by increasing phonological similarity led to increased false memory. The findings are consistent with a framework positing the generation of input-dependent memory traces and a role for relative distinctiveness in influencing short-term memory.

Keywords: Distinctiveness; False memory; Modality effects; Phonological similarity; Short-term memory.

Copyright © 2019 Elsevier B.V. All rights reserved.
49
Int J Pediatr Otorhinolaryngol




. 2020 Jun;133:110009. doi: 10.1016/j.ijporl.2020.110009. Epub 2020 Mar 16.
Language therapy outcomes in deaf children with cochlear implant using a new developed program: A pilot study
Nasibe Soltaninejad 1, Nahid Jalilevand 2, Mohammad Kamali 3, Reyhane Mohamadi 4
Affiliations expand
PMID: 32203758
DOI: 10.1016/j.ijporl.2020.110009

Abstract


Background: Children with cochlear implants (CI) have problems in most aspects of language, particularly grammar. Considering the lack of studies on grammar treatment in CI children, and bearing in mind that CI children have the potential to develop language, the aim of the present study was to investigate the effect of treating grammar in CI children using a grammar treatment program.

Methodology: First, the literature on grammar was reviewed to extract the grammatical components for the treatment program, to construct sentences for each element, and to compile a manual for its implementation. Second, the validity of the sentences was examined using the Delphi method. Third, grammar treatment was administered to five CI children. Persian Developmental Sentence Scoring (PDSS) and Mean Length of Utterance (MLU) were used to evaluate the children before and after treatment.

Results: Five grammatical classes were extracted, and the grammatical elements were classified within each category according to age. In total, 2,076 sentences were constructed for the grammatical items. After the Delphi method was applied, 1,936 sentences remained, with a Kendall's coefficient of concordance (W) of 71%. Using this program, grammar treatment was effective in all five children. The PDSS and MLU increased in all five children during the treatment phase, which was confirmed by Percentage of Non-overlapping Data (PND) and Improvement Rate Difference (IRD) analyses. During the follow-up period, the children maintained the trained components.
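Kendall's coefficient of concordance (W), used in the Delphi rounds to quantify expert agreement, can be computed directly from a raters-by-items matrix of ranks. A minimal sketch with hypothetical ratings, not the study's data:

```python
import numpy as np

def kendalls_w(ranks):
    """Kendall's coefficient of concordance for an (m raters x n items)
    rank matrix, without tie correction: W = 12*S / (m^2 * (n^3 - n)),
    where S is the sum of squared deviations of the item rank sums
    from their mean. W = 1 means perfect agreement."""
    ranks = np.asarray(ranks, dtype=float)
    m, n = ranks.shape
    rank_sums = ranks.sum(axis=0)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12 * s / (m ** 2 * (n ** 3 - n))

# Hypothetical example: three experts ranking four candidate sentences.
ranks = [[1, 2, 3, 4],
         [1, 3, 2, 4],
         [1, 2, 4, 3]]
print(round(kendalls_w(ranks), 2))  # about 0.78: strong concordance
```

With tied ratings, a tie-corrected denominator is needed; the formula above assumes each rater produces a strict ranking.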

Conclusion: Children with cochlear implants have the potential to learn language skills, and the present study confirms their ability to learn grammar using a comprehensive grammar treatment program.

Keywords: Cochlear implant; Grammar; Language disorder; MLU; PDSS; Persian; Treatment program.

Copyright © 2020 Elsevier B.V. All rights reserved.

Conflict of interest statement


Declaration of competing interest There is no conflict of interest.
50
Acta Psychol (Amst)




. 2020 Jul;208:103094. doi: 10.1016/j.actpsy.2020.103094. Epub 2020 Jun 7.
Listeners are better at predicting speakers similar to themselves
Lauren V Hadley 1, Nina K Fisher 2, Martin J Pickering 2
Affiliations expand
PMID: 32521301
PMCID: PMC7408002
DOI: 10.1016/j.actpsy.2020.103094
Free PMC article

Abstract


Although it takes several hundred milliseconds to prepare a spoken contribution, gaps between turns in conversation tend to be much shorter. To produce these short gaps, it appears that interlocutors predict the end of their partner's turn. The theory of prediction-by-simulation proposes that individuals use their own motor system to model a partner's upcoming actions by referring to prior production experience. In this study we investigate the role of motor experience for both predicting a turn-end and producing a spoken response by manipulating the similarity of heard speech to participants' own production style. We hypothesised that they would be better at predicting, and initiating responses to, speech produced in the style they speak themselves. Participants recorded a series of questions in two sessions, and several months later they listened to their own speech and that of a stylistically similar and a stylistically dissimilar participant (as assessed by independent raters). Participants predicted the end of 60 of these questions by pressing a button, and for the remaining 60 questions, by producing a spoken response. An analysis of response times showed that participants' button-press responses were faster for utterances spoken by themselves and by a stylistically similar partner, than for utterances spoken by a stylistically dissimilar partner. We conclude that simulation facilitates prediction of similar speakers.

Keywords: Conversation; Prediction; Simulation; Speech style; Turn-taking.

Copyright © 2020 The Authors. Published by Elsevier B.V. All rights reserved.

Conflict of interest statement


Declaration of competing interest None.
