The purpose of this qualitative study was to expand upon the findings of Pfeiffer et al.'s (2025) study of the perceptions and experiences of assistant professors in communication sciences and disorders (CSD) related to open science (OS) by examining those of associate and full professors. Thirty-one faculty in CSD (15 associate professors and 16 full professors) each participated in one of four 1-hr virtual focus groups conducted via Zoom videoconferencing software. The researchers used both deductive and inductive coding methods to analyze the focus group data and developed five categories, with subcategories, summarizing the discussions: (a) a desire to learn more about various OS practices and how to implement them by learning with and from others through a variety of formats; (b) OS practices have the potential to positively impact their research process and products, their careers, and the research communities they serve (e.g., clinicians, clinical populations); (c) OS practices could enhance the quality and credibility of research in CSD and reduce the research-to-practice gap by engaging both clinicians and researchers; (d) identification of both individual-level and systemic-level factors that could act as barriers or serve as facilitators to their use of OS practices; and (e) recommendations for a cultural shift to reduce barriers to engaging in OS practices in CSD. Associate and full professors in CSD perceive many of the same barriers and facilitators to engaging in OS as assistant professors; however, they uniquely highlighted the need for a cultural shift from the ways they were trained to enhance implementation of OS practices.
This shift includes embedding education about OS early in academic training, clearly outlining benefits and incentives for engaging in OS, and providing opportunities for clinicians to partner with researchers in learning about and implementing these practices. https://doi.org/10.23641/asha.31418480.
This study aimed to advance the understanding of consonant acquisition with quantitative and qualitative evidence from various groups of Chinese-speaking children. Normative patterns of phonological development of consonants were affirmed by utilizing phoneme transcription and perceptual judgment of a single-word normative data set, followed by analyses of comparable characteristics of a multiword data set of hearing and deaf/hard of hearing children. The single-word normative data set comprised 798 typically developing Chinese-speaking children, whereas the multiword data set consisted of 79 normal hearing and 45 deaf/hard of hearing children. The percentage of consonants correct (PCC) was derived from phonemes transcribed by automatic alignment and human verification. Perceptual acceptability/intelligibility ratings included the percentage of correctly produced words (AccWord) in the normative data set and the intelligibility scores (IntScore) in the multiword data set. Distribution and correlation of PCC and AccWord/IntScore, as well as consonant error patterns, were examined and compared. Developmental patterns and phonological aspects of consonant acquisition in Chinese-speaking children were thoroughly reported. PCC was significantly correlated with AccWord/IntScore across all subject groups in both single-word and multiword data sets. This finding suggested that PCC can indicate speech performance above the phoneme level. In all subject groups, stopping errors occurred more frequently than frication errors, the accuracy rates of retroflex sounds were low, and there was a mixed use of /n, l, ʐ/. The current study featured developmental growth curves, error analysis, and possible clinical applications of a wordlist-based normative data set as reference standards. The fact that PCC is correlated with acceptability/intelligibility ratings across data sets and subject groups supports its efficacy as a quantitative indicator of child speech assessment.
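The PCC metric central to the abstract above is a simple ratio of correctly produced target consonants to total target consonants. A minimal sketch follows; the scoring conventions (what counts as "correct" after automatic alignment and human verification) are the study's own and are assumed here, not reproduced:

```python
def percent_consonants_correct(correct, total):
    """Percentage of consonants correct (PCC): correctly produced
    target consonants divided by total target consonants, times 100."""
    if total <= 0:
        raise ValueError("total target consonants must be positive")
    return 100.0 * correct / total

# e.g., 45 of 50 target consonants judged correct
print(percent_consonants_correct(45, 50))  # 90.0
```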
Hearing aid use in older adults has been suggested to reduce cognitive load, thereby improving performance on auditory-based cognitive tasks. However, there is limited research regarding how hearing aids impact performance during cognitively demanding auditory and visual tasks. The purpose of this study was to investigate the impact of advanced-level hearing aid use during auditory (on-domain) and visual (off-domain) cognitive tasks in background noise. Thirty-one older adults aged 60-87 years participated in the study. All participants were experienced and satisfied hearing aid users. Participants were fitted with a study hearing aid to ensure consistent signal processing characteristics. A series of six cognitive tasks were completed in quiet and background noise with and without hearing aids. The visual tasks included the Trail Making Test, Stroop Color Word Test, and Size Comparison Span Test. The auditory tasks were the Oral Trail Making Test, Auditory Stroop Task, and Word Auditory Recognition and Recall Measure. Results indicated that, for measures of inhibition, executive function, and attention, there was no significant benefit to the use of hearing aids. In contrast, hearing aid use resulted in better performance on working memory tasks. Results indicated that the benefit from hearing aids for auditory (on-domain) and visual (off-domain) cognitive task performance was mixed. Older adults performed better in quiet than in noise with and without hearing aids. Furthermore, hearing aids were beneficial in quiet environments when the working memory task was auditory (on-domain). The findings from the current study suggest that hearing aid use improves access to working memory in both quiet and noisy conditions, which may ultimately improve speech understanding.
The aim of this study was to evaluate the effect of hearing devices for adults with mild-to-severe hearing losses. Specifically, we assessed the magnitude of change across outcome domains, identified measurement tools used, and reported adverse effects associated with device use. We conducted a systematic review and meta-analysis following Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. Searches were performed in PubMed, CINAHL, and Embase. Included studies were randomized controlled trials (RCTs) involving adults (≥ 18 years of age) with mild-to-severe hearing loss, comparing any air-conduction hearing device to passive or active controls. Effect sizes were calculated as Hedges's g, and random-effects models estimated pooled effects. Thirty-three RCTs (N = 4,471 participants) met the inclusion criteria, although pooled estimates could be derived from only a subset of trials due to limited reporting. Hearing aids demonstrated moderate-to-large benefits on hearing-related self-report outcomes compared with no-intervention or waitlist controls; however, pooled meta-analytic estimates could not be generated for this comparison because of insufficient data across trials. Compared with placebo, hearing aids yielded a small pooled effect (g ≈ 0.37), driven largely by trials including participants with comorbid Alzheimer's disease. Personal sound amplification products (PSAPs) showed a pooled medium effect compared with no intervention (g ≈ 0.42), with benefits primarily observed for hearing-specific self-report outcomes and selected behavioral measures. In head-to-head comparisons, hearing aids showed a large pooled advantage over other hearing devices, including smartphone hearing aid applications (SHAAs) and extended-wear hearing aids (EWHAs; g ≈ 0.88), based on data from two trials. 
Across the included studies, most outcomes were self-reported (≈ 81%) and behavioral (≈ 45%), with very limited assessment of cognitive or neurophysiological domains. Nine studies reported adverse events, with only one device-related incident. Heterogeneity was high (I² > 80%), but no publication bias was detected. Hearing aids provide substantial benefit for hearing-related self-reported outcomes in comparison to PSAPs, SHAAs, EWHAs, and placebo. However, high heterogeneity prevents reliable conclusions based on pooled estimates. There also remains limited evidence on cognitive, neurophysiological, and long-term behavioral outcomes, underscoring the need for more rigorous, domain-diverse RCTs in this field. https://doi.org/10.23641/asha.32086299.
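The effect-size machinery named in the meta-analysis above (Hedges's g, random-effects pooled estimates, I² heterogeneity) follows standard formulas. The sketch below is a generic illustration of those formulas under the common DerSimonian-Laird approach, not the review's actual analysis code:

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Hedges's g: Cohen's d scaled by the small-sample correction J."""
    # pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    j = 1 - 3 / (4 * (n1 + n2 - 2) - 1)  # correction factor J
    return j * d

def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooled effect and I² (in %)."""
    w = [1 / v for v in variances]                       # fixed-effect weights
    fixed = sum(wi * g for wi, g in zip(w, effects)) / sum(w)
    q = sum(wi * (g - fixed) ** 2 for wi, g in zip(w, effects))
    df = len(effects) - 1
    # between-study variance estimate (truncated at zero)
    tau2 = max(0.0, (q - df) / (sum(w) - sum(wi**2 for wi in w) / sum(w)))
    w_star = [1 / (v + tau2) for v in variances]         # random-effects weights
    pooled = sum(wi * g for wi, g in zip(w_star, effects)) / sum(w_star)
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, i2
```

With equal group sizes and standard deviations, g is just the raw mean difference over the pooled SD, slightly shrunk by J; I² expresses the share of variability in effects due to between-study heterogeneity rather than sampling error.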
Young autistic children have a range of language and cognitive abilities and, as a result, may differentially benefit from interventions supporting skills in these and related domains. Although studies have previously examined the extent to which participant characteristics interact with intervention effects, they have primarily restricted the analyses to a single intervention approach. In the present study, we drew on data from a comprehensive meta-analysis of group-design, nonpharmacological intervention studies for young autistic children to test these effects. Specifically, we conducted a secondary meta-regression analysis to examine whether cognitive and language standard scores and age equivalents at study entry significantly moderated intervention effects across intervention type on adaptive, cognitive, language, and social communication outcomes and separately across outcome type for behavioral, developmental, naturalistic developmental behavioral, and technology-based interventions. Cognitive and language ability was quantified using reported or estimated standard scores, and cognitive and language level was quantified using reported or estimated age-equivalent scores. Analyses within outcome type were conducted using a data set of 1,911 effect sizes from 202 independent samples, and analyses within intervention type were conducted using a data set of 2,137 effect sizes from 144 independent samples. Few studies reported standard scores and/or age equivalents for participant language. None of the putative moderators significantly predicted intervention effects by outcome domain (i.e., adaptive, cognitive, language, and social communication). Both cognitive standard and age-equivalent scores positively and significantly predicted effects of technology-based interventions exclusively, but we did not find robust evidence that language standard or age-equivalent scores significantly predicted effects by intervention type.
These findings are exploratory and warrant cautious interpretation. Future intervention researchers should extensively characterize participant samples in terms of their language and cognitive ability to aid meta-analytic investigation. The field would benefit from additional high-quality randomized controlled trials testing whether intervention effects vary by participant characteristics, using preplanned moderator analyses, valid measures, and large representative samples. https://doi.org/10.23641/asha.31967844.
The purpose of this study was to explore, among first-time hearing aid users, (a) modifiable predictors of self-reported hearing aid satisfaction and benefit, (b) how hearing aid satisfaction and benefit progress with time throughout the first 24 weeks after fitting, and (c) motivation (intention and self-efficacy) and volition (action planning and coping planning) for hearing aid use and their changes postprovision. Fifty-four first-time hearing aid users completed questionnaires on various aspects before and after fitting. Before fitting, assessments included personality, lifestyle, expectations, reason for help-seeking, importance of hearing improvement, as well as motivation and volition. After fitting, participants periodically evaluated hearing aid benefit, satisfaction, motivation, and volition over 24 weeks. The importance of improving hearing was the primary modifiable predictor of hearing aid outcomes. Participants reported moderate to high satisfaction and benefit at 2 weeks. Benefit levels remained stable over 24 weeks, while satisfaction showed slight improvements. The intention to use hearing aids was high before fitting, while action planning and coping planning were lower and remained unchanged postfitting. The findings indicate that the intrinsic value of improving hearing is crucial for positive outcomes. This, combined with the lower scores for planning and coping with challenges of consistent hearing aid use, highlights the need for awareness and educational tools. https://doi.org/10.23641/asha.32012565.
Solid food stimuli (e.g., crackers) are commonly used in videofluoroscopic swallowing studies (VFSS). A variety of additional food textures may be included when exploring the benefit of compensatory strategies. However, interpretation regarding these textures is hindered by a lack of data outlining expected values for measures of swallowing safety, kinematics, timing, and efficiency. We report preliminary data for quantitative VFSS measures with International Dysphagia Diet Standardisation Initiative (IDDSI) food levels MM5 (minced and moist), SB6 (soft-and-bite-sized), and RG7 (regular) in healthy young adults. VFSS were performed at 30 frames per second in 20 participants (10 men, 10 women; Mage = 28 years, range: 23-55) who swallowed two boluses each of MM5 (teaspoon), SB6 (1.5-cm³ cube), and RG7 (bite) barium stimuli. Blinded duplicate rating identified key frames on the initial swallow of each bolus, from which timing measures were derived relative to hyoid burst (HYB) and end of aggregation (EOA). Safety was scored using the 8-point Penetration-Aspiration Scale. Anatomically normalized pixel-based measures of pharyngeal area at maximum pharyngeal constriction (PhAMPC), upper esophageal sphincter maximum distention (UESMAX), and residue were obtained. Intraclass correlations were calculated for reliability; discrepancies were resolved by consensus. Friedman's tests explored differences by texture. The results were compared to previously reported reference data for teaspoons of EX4/PU4 using one-sample t tests. Timing measures relative to HYB increased across textures: MM5 < SB6 < RG7 and were significantly longer than reported values for EX4/PU4. Timing measures calculated relative to EOA did not differ by texture. UES opening duration was significantly longer for MM5 than SB6, RG7, and EX4/PU4. UESMAX distention was significantly smaller for RG7 and EX4/PU4 than MM5 and SB6. PhAMPC was larger for MM5, SB6, and RG7 than EX4/PU4.
Total pharyngeal residue was significantly greater for MM5 and RG7 than for SB6 and EX4/PU4. These data suggest that variations in some pharyngeal phase parameters should be expected across IDDSI levels EX4/PU4, MM5, SB6, and RG7. Additional research is needed to elucidate interactions of bolus size and cohesiveness with these texture-related differences. Teaspoons of EX4/PU4 stimuli are insufficient to predict swallowing physiology with higher food texture levels. RG7 stimuli are recommended to test pharyngeal constriction and swallowing efficiency. SB6 stimuli can provide additional insights regarding the benefits of altering bolus properties in terms of swallowing efficiency. https://doi.org/10.23641/asha.31592143.
This study was divided into two parts. Study 1 aimed to investigate how dual-task and select background noise conditions impact language production of neurologically healthy adults (NHA). Study 2 aimed to use the sample from Study 1 to identify whether four people with mild aphasia perform at an expected level when compared with their NHA peer group. Study 1 examined the spoken language production of NHA in sustained, selective, and divided attention conditions during a story retell task. NHA participant groups consisted of 21 young and midlife adults (26-54 years), 19 early older adults (55-69 years), and 20 late older adults (70-85 years). Study 2 used a case series approach to investigate how the language production of four people with aphasia (PWA) compared to their respective NHA group. All participants retold stories in a silent baseline condition, three background noise conditions (cocktail party, conversation, phone call), and one dual-task condition (tone discrimination). Language production measures (language informativeness, lexical diversity, lexical-phonological errors, speech rate, disfluent verbalizations), tone discrimination accuracy and response time, and perceived effort and stress were compared across groups and conditions. Study 1 revealed that the language of late older adults was significantly less efficient than the other two groups, that both late and early older adults produced more disfluent verbalizations than young and midlife adults, and that late older adults demonstrated more lexical diversity than early older adults. The tone discrimination accuracy and response time of late older adults were also significantly lower than those of young and midlife adults. Across groups, language informativeness decreased and lexical-phonological errors increased during the dual-task condition, and lexical diversity decreased while lexical-phonological errors and disfluent verbalizations increased during the phone call condition. 
Costs to tone discrimination accuracy, tone discrimination response time, perceived effort, and perceived stress were found in the dual-task condition across groups. In Study 2, four PWA showed impaired language production when compared with their age-matched NHA group across multiple dependent variables with somewhat unique responses for each participant. Ultimately, three of the four showed some degree of interference in the attentionally demanding conditions, whereas one showed some degree of benefit. The findings of Study 1 suggest that some, but not all, measures of spoken language production are impacted by aging, and that selective and divided attention interfere with spoken language production for NHA. Study 2 suggests that although attentional demands may disproportionately affect error production for many PWA, some may also experience benefits to their spoken language during attentionally demanding conditions. These findings emphasize the importance of individualized evaluation of the impact of everyday communication environments for PWA. https://doi.org/10.23641/asha.31804207.
People with aphasia have a wide range of needs due to their language impairment and its resulting impact on everyday life. Aphasia can be compounded by environments and contexts that are not aphasia friendly. This calls for a range of speech and language interventions targeting the language impairment and its consequences, as modeled by the International Classification of Functioning, Disability and Health. Intensive Comprehensive Aphasia Programs (ICAPs) aim to tackle this issue by providing a range of interventions in a time-limited schedule. However, when this service delivery model was developed, the rationale and evidence base for each component of the model was not clearly defined or mapped out. Applying theory of change (TOC) may be helpful in detailing how the therapeutic input is hypothesized to produce a desired change. A TOC is coconstructed with key stakeholders, people with aphasia in this instance. This process can be mapped on a logic model (LM), and potential negative or adverse outcomes (dark logic) can also be considered. This article provides an overview of ICAPs and key gaps in the literature and offers a methodological example of how TOC, logic modeling, and dark logic can be applied to an ICAP, despite some limitations with the approach. An extensive scoping of the literature and discussion with aphasia researchers produced an initial TOC, which was then refined by people with aphasia (n = 8) using focus group methodology. The focus group explored potential adverse outcomes of an ICAP using dark logic modeling. The TOC was mapped onto an LM. A provisional TOC and LM with dark logic for an ICAP were produced, though inclusion of other stakeholder groups is required for thorough application of a TOC to ICAPs. There are challenges in applying TOC, LM, and dark logic modeling to a service delivery model.
However, this approach was useful in mapping an ICAP in a methodological manner and in identifying how the theoretical underpinning, design, outcome measurement, and evaluation of an ICAP including a consideration of risks might be enhanced. https://doi.org/10.23641/asha.31478488.
Intensive and Comprehensive Aphasia Programs (ICAPs) have been implemented in various settings in English-speaking regions, demonstrating beneficial effects on participants' communication abilities, participation levels, and overall well-being. Considering the limited number of ICAP studies involving Chinese- or Cantonese-speaking people with aphasia (PWA) and the existing gaps in local rehabilitation services in Hong Kong, our research aimed to develop a culturally and linguistically specific ICAP for Cantonese-speaking PWA. We first outlined the logistical constructs for the Hong Kong Intensive and Comprehensive Aphasia Program (HK-ICAP). Then, we examined the effects of the ICAP on language recovery and quality of life among PWA. Our research team developed the HK-ICAP construct based on the knowledge shared in published ICAP research studies. Subsequently, we adapted evidence-based treatment approaches to our language and developed culturally tailored treatment stimuli for this purpose. Twenty-eight right-handed adults with chronic aphasia were provided with a 2.5-week, 39-hr ICAP intervention in Hong Kong between 2023 and 2025. Linguistic and quality of life-related measurements were taken at baseline, immediately posttreatment, and at 1-month follow-up. Data were analyzed at both group and individual levels. At the group level, significant improvements were observed in all linguistic and quality of life measures at posttreatment, and most of the gains were maintained at 1-month follow-up. At the individual level, the Minimum Detectable Change (MDC90) was used to identify therapeutic gains across various linguistic measures. The findings demonstrated that 33%-42% of the participants achieved therapeutic gains in each corresponding measure. A post hoc analysis of individual performance revealed that 90% of the participants (i.e., 25 of 28) achieved at least one therapeutic improvement in at least one measure at posttreatment.
The findings indicate that an ICAP is a feasible intervention model in culturally and linguistically diverse settings. This study provides robust evidence supporting the application of this intervention model among Cantonese- or Chinese-speaking populations. https://doi.org/10.23641/asha.31934634.
Oral and oropharyngeal cancer and its treatment can have a devastating impact on speech. The goal of this study is to characterize the changes in English sibilant /s/ production associated with resection site and the sex and age of the patients following surgical removal of oral and oropharyngeal tumors. The acoustics of 4,371 productions of /s/ from read continuous speech of 89 patients (66 men, 23 women) with a mean age of 58.2 years (range: 22-82) were analyzed before and after surgery for oral and/or oropharyngeal cancer. The center of gravity (COG) of the fricative power spectrum was analyzed with a linear mixed-effects model with assessment time (preoperative and 1, 6, and 12 months postoperative), age, sex, and proportion of resections (%) within oral and pharyngeal structures as fixed effects and random intercepts for speaker and phonetic context. Before surgery, male sex and older age were associated with lower COG. After surgery, COG was reduced with partial recovery at 1 year and dropped more for females than males. Overall, recovery was better among those who did not have radiation. At 1 year, the COG of /s/ was most impacted by resections to the tongue (without radiation), followed by resections to the velopharyngeal mechanism (with radiation). The additional effect of radiation treatment was modulated by age. The results suggest partial recovery of speech function at 1 year. Recovery differed by sex, with females remaining further from pretreatment values after surgery than males. https://doi.org/10.23641/asha.31953024.
Pupillometry has been frequently used to examine the influence of auditory task demand on listening effort. However, the intelligibility effect on the pupil dilation response might be altered under high memory load. We assessed the effects of signal-to-noise ratio (SNR; auditory demand), memory load, and stimulus rehearsal on the pupil dilation response. Twenty-four participants with normal hearing were included (Mage = 22 years, 16 women). Sequences of four or six digits were presented in stationary noise at two auditory demand levels. For either 20% or 80% of the trials, digits were rehearsed. Participants rated listening effort, task difficulty, performance, and tendency to give up. Linear mixed-model analyses indicated that intelligibility was higher for four digits compared to six digits and for lower auditory demand compared to higher auditory demand. The mean pupil dilation was larger for lower auditory demand during listening. In the repetition interval, the peak and mean pupil dilations were larger for lower auditory demand compared to higher auditory demand, for six digits compared to four digits, and for 80% compared to 20% stimulus rehearsal. Subjective listening effort and task difficulty were higher for higher auditory demand than for lower auditory demand and for six digits than for four digits. A lower auditory demand also resulted in higher performance ratings and lower tendency to give up compared to higher auditory demand. The established decrease in the pupil dilation response with decreasing auditory demand (higher SNR) can be altered in tasks with relatively high memory demands. It is important to consider the memory demands imposed by the listening task when assessing the pupil dilation response. https://doi.org/10.23641/asha.31974978.
This study investigated decontextualized talk produced by preschoolers with language impairment. Although the literature has highlighted the pivotal transition from using "here and now" to using "there and then" language in typically developing children entering preschool age, there is limited understanding of this phenomenon in children with developmental disorders. This study analyzed play-based conversations between four preservice speech-language pathologists (SLPs) and two participant groups: children with autism spectrum disorder (ASD; n = 7) and children with developmental language disorder (DLD; n = 9). Data collection included video-recorded conversation samples and standardized assessments of child receptive language development. For dyadic measures, the study examined group differences in total conversation turns, turn-taking rates, and proportion of decontextualized turns. For child and adult language measures, the study analyzed their speech samples, including mean length of utterance (MLU) in words and number of different words (NDW) per minute, and preservice SLPs' use of language facilitation techniques during play-based conversations. Our findings revealed that dyads from both groups (DLD, ASD) engaged in decontextualized talk (narratives, explanations). Children's standardized language measures correlated with decontextualized talk and conversation turns, although no group differences were observed in these measures. Analysis of preservice SLPs' language behaviors showed equivalent linguistic input (syntactic length, lexical diversity) and comparable use of language facilitation techniques across diagnostic groups. Preservice SLPs' use of repetition showed strong positive correlations with children's immediate language outcomes (MLU in words, NDW per minute). When controlling for both child age and adult facilitation technique use, group differences in dyadic measures remained nonsignificant.
This lack of group differences may be attributed to the unique features of language elicited from child-led free-play, a small sample size, and the heterogeneity of language profiles even within two diagnostic groups. Results provide new information about play-based verbal interactions of children with DLD and ASD, suggesting potential for clinicians to incorporate decontextualized talk into interventions. Future studies can examine the effects of decontextualized talk strategies, such as engaging children in narratives and explanations in more structured activities, on their language outcomes.
Individuals with hearing impairment can perceive speech sounds with the help of cochlear implants or hearing aids. Recent studies have shown that orofacial somatosensory inputs may modify speech perception. This study explored the potential role of such somatosensory inputs in speech perception in individuals with hearing impairment. Twenty-one native French-speaking participants with various profiles of bilateral hearing impairment and wearing hearing aids and/or cochlear implants were tested to explore the extent to which an orofacial somatosensory stimulation, consisting of facial skin stretch provided by a robotic device, would affect their speech perception performance in a vowel identification task, in comparison with 25 participants with normal hearing. This potential somatosensory effect was evaluated in relation to their hearing ability assessed by both their hearing threshold of digits in acoustic noise and their audiological profile, and with their production ability assessed by variability in a vowel production task. The somatosensory effects varied depending on participants' hearing ability, with three different profiles. A first group of eight participants, who were able to identify the vowels auditorily though with thresholds in noise higher than for individuals with normal hearing, did not show any somatosensory effect, with identical vowel identification scores without and with facial deformation. The other 13 participants were unable to auditorily identify the vowels and had high thresholds in noise. Among them, eight participants who had used a hearing device for a long duration (≥ 18 years) showed a strong somatosensory bias modifying vowel identification when facial deformation was present. These participants also showed greater variability in vowel production. The last five participants in this second group showed no somatosensory effect at all.
These three profiles were related to the role of auditory experience and hearing ability in the development of auditory-somatosensory integration.
The overall aims of this study were to (a) examine how vocal fold kinematics differ across typical, pressed, and breathy phonation in vocally healthy adults and (b) investigate the relationships between high-speed videoendoscopic-derived kinematic measures and acoustic measures of cepstral peak prominence (CPP) and the amplitude difference between the first two spectral harmonics (H1-H2) and whether the relationships vary by phonation type. Forty vocally healthy adults (32 female, 8 male, with a mean age of 26 years) underwent simultaneous transoral rigid high-speed videoendoscopy (HSV; 4,000 frames per second) and acoustic recording during sustained /i:/ in three phonation types: typical, pressed, and breathy. Primary HSV parameters included closing quotient (ClQ), speed index (SI), amplitude-to-length ratio (ALR), stiffness index (STI), and normalized maximum area declination rate (MADRn). Primary acoustic measures were CPP and H1-H2. Mixed analyses of variance were conducted for phonation type differences in HSV parameters with main effects of phonation type, sex, and their interaction. Then, multiple regression models with phonation type interactions were conducted to assess the relationships between HSV and acoustic measures. Relative to typical phonation, simulated pressed phonation showed lower values of ClQ, higher MADRn, and higher STI with large effects, whereas simulated breathy phonation demonstrated higher ClQ and lower MADRn with medium effects. CPP was significantly negatively correlated with ClQ and positively correlated with MADRn, SI, and STI. H1-H2 was significantly positively correlated with ClQ and ALR and negatively correlated with MADRn, SI, and STI. There was a significant phonation type interaction with the correlations between H1-H2 and MADRn, SI, and STI; in each, breathy phonation had a strong, negative relationship and pressed phonation had a small or negligible relationship.
ClQ consistently correlated with both acoustic measures across all phonation types. Vibratory patterns in pressed phonation were suggestive of increased vocal fold contact stress, as lower ClQ and higher MADRn values indicate more abrupt, faster glottal closure. CPP and H1-H2 can reflect underlying glottal physiology, but in most cases their predictive value depends on phonation type. Findings suggest, however, that ClQ could be a robust physiological parameter with stable acoustic correlates regardless of phonation type.
The purpose of this study was to examine the validity of measures obtained from the Sentence Diversity Priming Task (SDPT), a structured elicitation protocol for assessing sentence development under supported conditions. We compared measures across the SDPT and a play-based language sample and between late-talking (LT) toddlers and typically developing (TD) peers. We evaluated differences between the two sampling contexts and examined how measures obtained from the two contexts were related. A sample of 60 LT toddlers and 77 TD toddlers between 30 and 38 months of age was drawn from the Midwest When to Worry study. Toddlers completed the SDPT and a 10-min parent-child language sample delivered and/or recorded through remote video chat platforms. Samples were analyzed for the number of complete and intelligible utterances, mean length of utterance (MLU), number of different words, verb diversity, and third-person (3P) subject diversity. We used repeated-measures analyses of variance to examine differences in measures across sampling context and LT language status as well as Pearson correlations to examine associations between measures. The SDPT elicited longer utterances with more diverse 3P subjects and verbs in fewer utterances than the play samples. Measures obtained from the SDPT also differentiated the LT and TD groups, with a significant Group × Sampling Context interaction for MLU and 3P subject diversity. Measures across the SDPT and play sample were also significantly associated. These findings support the validity of the SDPT as an efficient tool for assessing sentence diversity with young children. Potential uses of the measures derived from the SDPT to distinguish toddlers most at risk for developmental language disorder are discussed. Demonstrating discriminative utility will be an important next step. https://doi.org/10.23641/asha.32065587.
We aimed to explore the rates of bound morpheme production at two time points (T1 and T2) by deaf and hard of hearing (DHH) preschoolers and their typically hearing (TH) peers. We further sought to describe the rates and types of unscorable responses children produced. Sixty-four DHH preschoolers and 66 TH preschoolers participated as part of a larger, ongoing longitudinal study. Children were given the Test of Early Grammatical Impairment (TEGI) screener, which elicits productions of third-person singular present-tense and past-tense morphemes. The TEGI screener was administered twice, 6 months apart. TH children produced significantly more third-person singular present-tense and regular past-tense morphemes than cochlear implant (CI)-using children at both time points; hearing aid-using children were not significantly different from TH or CI users. All children were more accurate with the regular past tense at T2 than at T1. No interactions were significant. Examining the types of unscorable responses indicated that the DHH children were more likely to echo the prompt than TH children, particularly at T1. Assessments that elicit bound morpheme productions may not best capture DHH children's morphological sensitivity. When language samples are not feasible, receptive tasks may be a good alternative to probe children's knowledge.
We aimed to explore with a Bayesian network how clinically important factors related to feeding infants with complex congenital heart disease (CCHD) are associated with each other and influence feeding outcomes. Our goal was to raise questions for further study. This descriptive study included data from 19 infants on severity of neonatal illness, early oral-motor (OM) and swallowing skills, feeding patterns, liquid and solid intake, and weight-for-age at 2 and 6 months. Bayesian network analysis was used to estimate the conditional probabilities of these variables in relation to each other and predict the 6-month weight-for-age z score and feeding outcomes (volume of liquid and solid food consumed) in the context of the other variables in the model. In clinically oriented scenarios, we set the probability of specific predictor variables to 100% to examine the network effect on outcome variables. Descriptive analyses revealed feeding and growth patterns consistent with prior literature. Bayesian network modeling identified three key themes: (a) feeding profiles may support risk stratification and guide targeted intervention, (b) OM skill development emerged as a foundational predictor of feeding and growth outcomes, and (c) clinical stability may obscure underlying feeding vulnerabilities. Bayesian network analysis provided insights into the conditional relationships among multiple factors, demonstrating a method that could support clinical decision making. Further study with a larger, more diverse sample is needed to explore whether closer monitoring of intake and growth would promote better feeding outcomes, particularly for infants with less severe CCHD. https://doi.org/10.23641/asha.31856314.
This study investigates the semantic processes underlying how children acquire and use nouns and predicates (verbs, adjectives/adverbs), focusing on age and cross-linguistic differences in these naming strategies. Ninety-two children aged 23-25 months (53 English and 39 Italian) and 115 children aged 29-31 months (69 English and 46 Italian) took part in a picture-naming task to assess their acquisition of nouns and predicates. We investigated the types of responses (correct, incorrect, no response, and unintelligible) and the distribution of incorrect responses (semantic errors, visual errors, and other errors) across two ages and two languages. Response accuracy increased significantly from 24 to 30 months across both lexical categories and both languages. At 30 months, children produced fewer no responses, incorrect responses, and unintelligible responses for nouns and fewer no responses for predicates. Italian children showed a higher frequency of unintelligible responses for nouns, while English children produced more no responses for predicates. The distribution of semantically incorrect responses also varied with age: Compared to 24-month-olds, 30-month-olds produced fewer semantic associative errors and onomatopoeic responses for nouns but more semantic coordinate errors for predicates. English children produced more semantic coordinate and subordinate errors for nouns and fewer semantic associative and onomatopoeic errors for predicates than Italian children. Data are discussed in the context of cross-linguistic comparisons of semantic representations underlying noun and predicate acquisition at 2-3 years.
This study examined how bilingualism influences early linguistic and pragmatic alterations in idiopathic Parkinson's disease (IPD), integrating group-based and factorial analyses to identify early communicative markers. Sixty-five participants (13 bilingual IPD, 14 monolingual IPD, 14 bilingual healthy, 24 monolingual healthy) produced Turkish narratives based on Frog, Where Are You? Group comparisons (Kruskal-Wallis H and Mann-Whitney U tests) were performed across the four groups for microstructural indices (mean length of utterance in morphemes [MLU-M], type-token ratio [TTR], morphological errors, verbal fragmentations) and pragmatic markers (enrichment, exclamation, uncertainty, metaphor, emotional terms). Supplementary 2 × 2 factorial analyses (disease: IPD vs. healthy; bilingualism: bilingual vs. monolingual) were conducted to examine main and interaction effects, with acoustic parameters (fundamental frequency [F0] and intensity ranges) included for prosodic evaluation. Group comparisons revealed that bilingual IPD speakers exhibited the lowest MLU-M (p = .012), highest morphological error rate (p = .036), and greatest verbal fragmentation (p < .001). Pragmatically, they produced fewer enrichment expressions (p = .041) but more exclamations than monolingual IPD participants (p < .001). Acoustic analysis showed reduced but still broader F0 and intensity ranges in bilingual IPD speakers relative to monolingual IPD speakers (p = .012, p = .047). The 2 × 2 factorial analysis confirmed significant main effects of disease on MLU-M and TTR (p < .05) and Disease × Bilingualism interactions for morphological errors and enrichment (p < .05), demonstrating that bilingualism amplified morphosyntactic instability but mitigated prosodic flattening. Early-stage IPD involves concurrent microstructural and pragmatic decline, with bilingualism exerting both protective and burdening effects.
Crucially, the reduction of enrichment expressions (p < .05) emerged as an early and sensitive indicator of pragmatic deterioration in bilingual Parkinson's disease, linking executive-control demands with sociopragmatic incompleteness. Discourse-level analyses combining group-based and factorial approaches thus provide a refined framework for identifying subclinical linguistic-pragmatic changes beyond conventional motor or lexical measures. https://doi.org/10.23641/asha.31999344.