Children and young people (CYP) presenting in mental health crisis to acute paediatric settings represent a growing clinical and system-level challenge. These environments, primarily designed for physical healthcare, frequently lack tailored, evidence-based digital tools to support risk mitigation and clinical decision-making in safety-critical situations. In response, we co-developed a prototype digital risk mitigation pathway, designed as a clinical decision support system, to enhance safety and care quality for CYP in crisis. This study examined the feasibility of implementing this pathway across diverse general hospital contexts. We conducted a multi-site, exploratory mixed-methods study using a multiple case study design across three acute paediatric hospitals in England. Data were collected between October 2022 and March 2023 through organisational surveys, documentary analysis, semi-structured interviews and focus groups. Qualitative data were analysed inductively, with within-case analysis followed by cross-case synthesis to identify patterns of barriers and enablers influencing implementation feasibility. Three organisational surveys were completed, 13 organisational documents analysed, and 30 healthcare professionals participated in interviews and focus groups. Five overarching themes were identified as key determinants of implementation: digital infrastructure; information and communications technology training and support; communication; information governance; and healthcare professional attributes. Marked variation in digital maturity was observed across sites. Feasibility was strongly shaped by the alignment of digital readiness with clinical safety requirements, device availability, workflow integration, governance processes and workforce support in time-pressured crisis care. 
This study provides novel, contextually grounded insights into the organisational and workforce determinants shaping the feasibility of implementing digital decision support in acute paediatric care. Our findings highlight the central importance of aligning digital readiness with clinical safety priorities and addressing multi-level implementation challenges. These insights offer actionable evidence to inform the design, deployment and scaling of contextually responsive digital health interventions for CYP admitted in mental health crisis.
Visual impairment affects approximately 2.2 billion people worldwide and has significant impacts on various aspects of life, including physical, social, economic, and emotional domains. Assessing the quality of life of these individuals is essential for identifying their needs and guiding health promotion strategies. However, no studies were found that systematically cataloged the instruments used for this evaluation specifically for people with visual impairment. This study aims to systematically map the scientific evidence regarding the instruments used to assess quality of life in individuals with visual impairment at any health care level. The population, concept, and context framework guided the development of the research question: What instruments are available in the scientific literature to assess the quality of life of people with visual impairment across health care levels? Data will be collected from major databases and gray literature, with duplicates managed in Mendeley and screening conducted independently by 2 reviewers using Rayyan. Full texts will be assessed based on eligibility criteria, and data will be synthesized in Microsoft Excel and reported using a flowchart and narrative summary, following PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews) guidelines. This protocol was registered on the Open Science Framework platform on July 28, 2025. The results of this study will be disseminated through publication in a peer-reviewed scientific journal. It is expected that the findings will provide valuable support for the development and advancement of a broader research project. Identifying and evaluating instruments used to assess the quality of life in individuals with visual impairment are crucial to ensure the use of reliable and scientifically sound tools. 
This process not only advances scientific knowledge but also informs public health policies aimed at promoting equity, inclusion, and improved living conditions for this population.
This dataset provides full metabolomic data for root and stem samples of Sophora flavescens, a commonly used Chinese medicine. Metabolites were extracted separately from root and stem samples and subjected to high-resolution LC-MS in positive and negative modes. Raw spectral data were used for peak detection, alignment, normalization, and metabolite annotation using established metabolomic pipelines and public databases. Principal component analysis and partial least-squares discriminant analysis were used to obtain tissue-specific metabolic profiles. Differential metabolite analysis identified metabolites enriched or depleted in roots relative to stems, and pathway enrichment analysis identified the metabolic pathways associated with these metabolites. The dataset contains raw LC-MS files, processed feature tables, annotated metabolite lists, and statistical analysis results in standard formats. These data provide an important resource for studying the tissue-specific metabolic differences of S. flavescens, for investigating the differential distribution of medicinal constituents, and for research on biosynthetic routes and quality assessment of medicinal plants. They will also be useful for comparative metabolomics, integrative multi-omics, and natural product research.
Mobility-related disabilities are the most prevalent disabilities in Saudi Arabia, significantly affecting quality of life. While mobility-assistive technologies (MATs) help mitigate impairments, unmet needs persist with a critical knowledge gap regarding barriers to their provision. This study investigated the barriers to the provision of MAT in the healthcare system of Saudi Arabia. This mixed methods study combined a cross-sectional survey with a qualitative embedded multiple case study across three public hospitals in two Saudi regions. The survey assessed MAT availability among 67 stakeholders and facilitated recruitment for the qualitative investigation. Semi-structured interviews were conducted with 33 participants (healthcare professionals [HCPs], managers and leaders), and six policy documents were analysed. The Consolidated Framework for Implementation Research (CFIR) guided data collection and analysis. The survey demonstrated that, while basic MATs were generally accessible, advanced devices were not and there were substantial regional disparities. The qualitative analysis identified 52 barriers across all CFIR domains, including dependence on imported devices, high costs and procurement delays, restrictive policies and regulatory ambiguities, insufficient professional healthcare training, inadequate strategic planning and clinical leadership, infrastructural constraints and sociocultural factors that influence patient acceptance. This study is the first comprehensive investigation of barriers to MAT provision in Saudi Arabia. These findings provide policymakers and practitioners with an evidence-based foundation for designing contextually relevant implementation strategies to enhance equitable MAT provision. 
Rehabilitation professionals require targeted education on MAT assessment and prescription to address identified knowledge gaps. Culturally sensitive education strategies are needed to address stigma regarding device use. Clear protocols defining roles and responsibilities across rehabilitation, administrative and procurement departments are essential to reduce coordination delays in MAT provision. Comprehensive services encompassing assessment, environmental evaluation, patient training and maintenance support are needed.
Use of administrative health data to identify chronic disease cases can cause misclassification bias. Reclassification-based exit rules may reduce misclassification bias. Manitoban administrative health data (1995-2022) were used to ascertain multiple sclerosis (MS) and "juvenile diabetes" (JD) prevalence. We constructed multivariable logistic regression model-based algorithms and used a model-predicted probability exit rule to reclassify JD and MS case status annually. Sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV) and reclassification rates were estimated. Linear regression tested for differences in prevalence estimates for the model-based algorithm with an exit rule and an existing Canadian Chronic Disease Surveillance System (CCDSS) algorithm without an exit rule. The MS cohort included 60 228 individuals (608 cases, 59 620 non-cases) and the JD cohort 44 125 individuals (2506 cases, 41 619 non-cases). Model-based algorithm sensitivity was 0.62 to 0.85 for MS and 0.87 to 0.95 for JD. PPV for MS was 0.21 to 0.60 and for JD was 0.92 to 0.95. Specificity and NPV were consistently high (0.98-1.00). Non-cases were frequently misclassified; reclassification rates for non-cases were higher than for cases for MS (0.22-0.33 vs. 0.14-0.28) and JD (0.18-0.65 vs. 0.13-0.15). The model-based algorithm with an exit rule for MS, but not for JD, had a slower increase in prevalence than the CCDSS algorithm. Case ascertainment algorithms with an exit rule can address misclassification bias when estimating chronic disease prevalence using administrative health data. Improvements are disease dependent.
The performance of the algorithms that identify cases of multiple sclerosis in individuals 20 years and older and of diabetes (both type 1 and type 2) in individuals 18 years and younger in administrative health data was better when using health care covariates based on a higher number of years. An exit rule that uses probabilities to reclassify case status annually found that non-cases had a higher reclassification rate than cases. Prevalence trends for multiple sclerosis obtained using a model-based algorithm with an exit rule had a slower increase than the current algorithm used by the Canadian Chronic Disease Surveillance System.
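The validity metrics and annual exit-rule reclassification described above can be sketched in a few lines. This is an illustrative sketch, not the study's code: the confusion-matrix counts, the probabilities, and the 0.5 exit threshold are all hypothetical.

```python
# Hypothetical sketch of case-ascertainment validity metrics plus a
# model-predicted probability "exit rule". All numbers are invented.

def validity_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

def apply_exit_rule(case_status, predicted_prob, threshold=0.5):
    """Reclassify a flagged case as a non-case whenever the model-predicted
    probability falls below the exit threshold (illustrative rule)."""
    return [status and (p >= threshold)
            for status, p in zip(case_status, predicted_prob)]

metrics = validity_metrics(tp=520, fp=350, fn=90, tn=59270)
status = apply_exit_rule([True, True, False], [0.9, 0.2, 0.1])
```

In this toy example the second flagged case exits (its predicted probability 0.2 falls below the threshold), which is the mechanism by which an exit rule slows artificial growth in prevalence.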
As remote-access thyroidectomy (RAT) becomes more widely used, evidence on patient-reported outcomes, particularly health-related quality of life (HRQoL), remains scarce and inconsistent. This meta-analysis compared postoperative HRQoL between thyroid cancer (TC) patients undergoing RAT and open thyroidectomy (OT) and assessed changes over time. A comprehensive search of five major databases was conducted from inception to August 2025. Studies reporting HRQoL after RAT or OT were included. Outcomes were stratified across distinct postoperative timepoints to calculate pooled standardized mean differences (SMDs) or mean differences (MDs). Heterogeneity was explored through subgroup analyses encompassing surgical modalities, countries, and assessment instruments. Forty-one studies met the inclusion criteria, and 29 records were included in the quantitative synthesis. RAT demonstrated early advantages in comprehensive quality of life at 1 month and 3 months, but these advantages dissipated in the long term. Pain trajectories exhibited a biphasic pattern: RAT was associated with lower pain scores on postoperative day 1 but paradoxically higher scores during the 1-2 week period, with subsequent convergence. Cosmetic satisfaction and swallowing function consistently favored RAT from 1-2 weeks through 6 months, while voice outcomes showed no discernible differences. RAT appears to confer selected short- to medium-term patient-reported advantages over OT, particularly in cosmetic satisfaction and swallowing function. However, these benefits are heterogeneous and not consistently maintained across all domains or timepoints. Future studies should standardize cross-culturally validated patient-reported outcome (PRO) instruments and adopt harmonized follow-up intervals and reporting guidelines to clarify the patient-centered value of RAT.
Treatment for exocrine pancreatic insufficiency (EPI) includes pancreatic enzyme replacement therapy (PERT). Although PERT may improve symptoms, there are currently no EPI symptom measures to monitor PERT treatment. This analysis evaluated the utility of the Pancreatic Exocrine Insufficiency Questionnaire (PEI-Q) in patients with chronic pancreatitis and EPI treated with pancrelipase. This prospective, observational study (NCT04949828) included adult patients with chronic pancreatitis and EPI who were recommended pancrelipase 72,000 lipase units per meal and 36,000 lipase units per snack by an independent physician. Outcomes included the change from baseline to 1 month and 3 months after treatment initiation in PEI-Q symptom score, PEI-Q symptom severity categories, and health-related quality of life (HRQoL) as measured by the PEI-Q impact domain score. The per-protocol population included 32 and 29 patients with data at 1 and 3 months after pancrelipase initiation, respectively. A significant reduction in mean PEI-Q symptom scores was observed from baseline to 1 month (-1.0; p < 0.001) and 3 months (-1.1; p < 0.001). Despite all patients having moderate/severe EPI symptoms at baseline, 62.5% and 62.1% of patients reported mild/no EPI symptoms after 1 and 3 months, respectively. There was also a significant reduction in mean PEI-Q impact scores from baseline to 1 month (-1.0; p < 0.001) and 3 months (-1.2; p < 0.001), indicating improved HRQoL. Pancrelipase improved patient-reported EPI symptoms and HRQoL based on the PEI-Q, which may be a useful tool for monitoring PERT treatment in patients with chronic pancreatitis and EPI.
Musculoskeletal (MSK) conditions like lower limb deformities significantly contribute to the global burden of disease. Low-income countries (LICs) and lower-middle-income countries (LMICs) are disproportionately affected, yet specifics surrounding MSK conditions in these regions remain largely unquantified. This scoping review seeks to (1) assess the epidemiologic information currently available for lower limb deformities in LICs and LMICs, including etiology and treatment, as well as (2) examine the individual, societal, and socioeconomic impact of these conditions. A scoping review following Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews guidelines searched PubMed, Embase, and Web of Science databases for studies published between 2012 and 2022 in LICs and LMICs containing statistics and epidemiological data quantifying the prevalence of lower limb deformities and/or providing sociological or economic analysis of the burden of those deformities. Forty-two studies of the original 3476 screened were included in this review. The studies, spanning 51 different LICs and LMICs, were primarily cross-sectional (n = 19) in design. Most studies had level IV evidence (n = 27, 64%). Trauma was the most common reported etiology of lower limb deformity (60%), clubfoot the most often reported congenital condition (75%), and amputation the most common reported treatment (67%). Less than half of included studies (n = 18, 43%) investigated disease burden utilizing either quality-adjusted life years or disability-adjusted life years, employment level, functional outcomes, psychological/interpersonal impact, and/or access to care. Despite the high prevalence of lower limb deformities in LICs and LMICs, associated epidemiological and disease burden data remain limited. Available studies broadly categorized sequelae of MSK injuries with limited information on treatment outcomes. 
Amputation was the most commonly mentioned surgical treatment despite the potential societal stigma regarding amputees. Even fewer studies explored the burden of MSK conditions impacting quality of life. A deeper understanding of the prevalence and impact of specific MSK conditions in LICs and LMICs will provide actionable data to inform a more comprehensive, locally relevant, and sustainable approach to managing lower limb deformities in resource-challenged environments. (1) Lower limb deformities from a variety of congenital and acquired etiologies can impact patients from birth, including their physical and psychosocial well-being and function in society. (2) Lower limb deformities are especially impactful in the setting of low-income countries (LICs)/lower-middle-income countries (LMICs), where adaptive, surgical, and prosthetic options can be limited, and there is a dearth of data on the exact epidemiology of these deformities that limits the ability to calculate specific prevalence or incidence by country. (3) More granular and standardized data collection and distribution on the epidemiology and burden of lower limb deformities in LICs/LMICs is needed to inform policies aimed at increasing access to care and at preventing and treating such deformities, as well as to better allow for analysis and quantification of the burden of disease at a socioeconomic and individual level.
Digital interventions are increasingly promoted as scalable options for reducing the treatment gap in eating disorders, with the evidence base expanding in recent years to include new populations, delivery formats, and therapeutic approaches. A comprehensive, up-to-date synthesis is needed to clarify the current evidence for digital treatment delivery formats in eating disorders. To evaluate the association of digital interventions for eating disorders with core and transdiagnostic symptom outcomes in the acute and longer-term phases. MEDLINE, PsycINFO, Web of Science, and Scopus were searched (October 2025) using terms related to eating disorder, digital health, and randomized clinical trials. Randomized clinical trials evaluating a digital intervention for threshold or subthreshold eating disorders were eligible. Interventions had to be delivered via digital technologies (eg, websites, applications, chatbots), with or without support, and compared against a control. Two reviewers extracted data. Risk of bias was assessed using 4 Cochrane risk of bias criteria. Meta-analyses were conducted using random-effects models, calculating Hedges g for continuous outcomes and odds ratios for symptom abstinence. Primary outcomes included core eating disorder symptoms (global eating disorder psychopathology, binge eating frequency, compensatory behaviors, abstinence, and symptom-specific subscales). Secondary outcomes included comorbid mental health symptoms (depression, anxiety, general distress) and general well-being (quality of life, clinical impairment, self-esteem). A total of 36 trials were included. At posttreatment assessment, digital interventions compared with controls produced significant improvements in primary eating disorder psychopathology (Hedges g = 0.49; 95% CI, 0.38-0.60) and objective binge eating (Hedges g = 0.37; 95% CI, 0.24-0.51) outcomes, as well as other symptom-specific and comorbid mental health outcomes. 
Effect sizes largely remained significant when adjusting for various sources of biases. Significant benefits were mostly observed across specific clinical populations (eg, bulimia nervosa, binge-eating disorder). Effect sizes were largest for trials that used a waiting list relative to other controls. At follow-up, digital interventions produced weaker but statistically significant sustained improvements for 7 of 9 outcomes. In this study, digital interventions were associated with consistent and durable benefits across numerous symptom-specific and transdiagnostic outcomes. These results highlight their potential to expand access to evidence-based support and to inform future clinical implementation efforts.
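The pooling approach reported above, Hedges g per trial combined under a random-effects model, can be sketched as follows. This is a minimal illustration, not the review's analysis code: the trial summary statistics are invented, and the tau-squared estimator shown (DerSimonian-Laird) is one common choice of random-effects method.

```python
import math

# Illustrative sketch: Hedges g per trial, then a DerSimonian-Laird
# random-effects pooled estimate. Trial numbers below are made up.

def hedges_g(m1, m2, sd1, sd2, n1, n2):
    """Standardized mean difference with small-sample (Hedges) correction."""
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    j = 1 - 3 / (4 * (n1 + n2) - 9)  # small-sample correction factor
    return j * d

def variance_g(g, n1, n2):
    """Approximate sampling variance of Hedges g."""
    return (n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2))

def pool_random_effects(gs, vs):
    """DerSimonian-Laird tau^2, then inverse-variance pooled effect."""
    w = [1 / v for v in vs]
    fixed = sum(wi * gi for wi, gi in zip(w, gs)) / sum(w)
    q = sum(wi * (gi - fixed)**2 for wi, gi in zip(w, gs))
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(gs) - 1)) / c)
    w_star = [1 / (v + tau2) for v in vs]
    return sum(wi * gi for wi, gi in zip(w_star, gs)) / sum(w_star)

# (mean_tx, mean_ctrl, sd_tx, sd_ctrl, n_tx, n_ctrl) -- hypothetical trials
trials = [(1.2, 0.5, 1.0, 1.0, 40, 42), (0.9, 0.4, 1.1, 1.0, 60, 58)]
gs = [hedges_g(*t) for t in trials]
vs = [variance_g(g, t[4], t[5]) for g, t in zip(gs, trials)]
pooled = pool_random_effects(gs, vs)
```

When between-trial heterogeneity (tau^2) is zero, as in this toy pair of trials, the random-effects estimate reduces to the fixed-effect inverse-variance average.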
This meta-analysis aimed to investigate the incidence proportion of obesity and its influencing factors in patients with schizophrenia (SCH). A comprehensive literature search was conducted using a combination of MeSH subject headings and free-text terms across multiple databases, including PubMed, Web of Science, Embase, Cochrane Library, CNKI, Wanfang Data, VIP Database, and SinoMed. The search covered all publications up to May 26, 2025. Meta-analyses were performed using RevMan 5.4 and Stata 18.0 software. A total of 18 studies involving 9351 patients with SCH were included in the analysis. This meta-analysis revealed that the incidence proportion of obesity in patients with SCH was 33.0%. Additionally, the following factors were significantly associated with an elevated risk of obesity in this population (P < 0.05): female sex (OR = 1.14, 95% CI = 1.10-1.20), elevated fasting glucose (FG; OR = 1.08, 95% CI = 1.04-1.12), diabetes (OR = 2.36, 95% CI = 1.79-3.10), elevated triglycerides (TG; OR = 1.13, 95% CI = 1.08-1.18), high low-density lipoprotein (LDL; OR = 1.88, 95% CI = 1.45-2.44), olanzapine use (OR = 7.40, 95% CI = 4.98-11.00), combined antipsychotic therapy (OR = 3.19, 95% CI = 2.31-4.41), use of typical antipsychotics (OR = 1.46, 95% CI = 1.18-1.82), use of atypical antipsychotics (OR = 1.70, 95% CI = 1.42-2.03), high red blood cell count (OR = 2.80, 95% CI = 1.60-4.90), and low high-density lipoprotein (HDL; OR = 1.75, 95% CI = 1.41-2.16). Current evidence indicates that the major risk factors for obesity in patients with SCH include female sex, elevated FG, diabetes, high TG levels, elevated LDL, use of olanzapine, combined antipsychotic treatment, use of typical antipsychotics, use of atypical antipsychotics, higher red blood cell count, and reduced HDL levels. Clinicians should implement early screening and targeted interventions to mitigate the development of obesity in this population, thereby improving their overall quality of life. https://www.crd.york.ac.uk/PROSPERO/, identifier CRD420251121298.
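Odds ratios like those above are typically pooled on the log scale: each study's 2x2 table yields a log odds ratio and its variance, which are combined by inverse-variance weighting. The sketch below is illustrative only (fixed-effect pooling with invented counts), not the analysis actually run in RevMan or Stata.

```python
import math

# Hedged sketch: fixed-effect inverse-variance pooling of odds ratios.
# The 2x2 counts are invented for illustration.

def log_or_and_var(a, b, c, d):
    """Log odds ratio and its variance from a 2x2 table
    (a = exposed cases, b = exposed non-cases,
     c = unexposed cases, d = unexposed non-cases)."""
    return math.log((a * d) / (b * c)), 1/a + 1/b + 1/c + 1/d

def pool_log_or(tables):
    """Inverse-variance pooled OR with a 95% confidence interval."""
    pairs = [log_or_and_var(*t) for t in tables]
    w = [1 / v for _, v in pairs]
    pooled = sum(wi * lo for wi, (lo, _) in zip(w, pairs)) / sum(w)
    se = math.sqrt(1 / sum(w))
    return (math.exp(pooled),
            (math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se)))

or_pooled, ci = pool_log_or([(30, 70, 20, 80), (45, 55, 30, 70)])
```

Working on the log scale keeps the sampling distribution approximately normal, which is why the confidence interval is built there and only exponentiated at the end.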
Unhealthy alcohol and drug use have significant health-related sequelae. Given racial and ethnic disparities in complications of substance use, successful screening and medication prescribing for addictions are important in community health settings serving diverse populations. To evaluate alcohol and drug use screening and prescribing of medications for addiction treatment in adults by race, ethnicity, and language preference. This cohort study included US adults seen between 2012 and 2020 in a multistate electronic health record (EHR) network (1394 primary care clinics). Analyses were completed October 2024. Race and ethnicity with language preference groups were: non-Hispanic White, non-Hispanic Black, Latino with Spanish language preferred, and Latino with English language preferred. Multivariable logistic regression estimated covariate-adjusted odds ratios (aORs) of receipt of alcohol and drug screening and EHR-documented prescription of medication for alcohol use disorder (AUD) or opioid use disorder (OUD). There were 2 191 945 patients across 25 states (mean [SD] age, 41.3 [15.2] years; 1 236 818 [56.4%] female); 416 607 (19.0%) identified as non-Hispanic Black, 1 015 066 (46.3%) non-Hispanic White, 474 389 (21.6%) Latino with Spanish-language preference, and 285 883 (13.0%) Latino with English-language preference. Over the study period, 869 609 (39.7%) had documented completed alcohol screening and 862 263 (39.3%) completed drug screening; 113 629 (5.2%) had a diagnosis of AUD and 247 530 (11.3%) had an OUD diagnosis. Spanish-preferring Latino patients had 59% increased odds of screening compared with non-Hispanic White patients (aOR, 1.59; 95% CI, 1.31-1.93). All minoritized race and ethnicity with language preference groups had lower odds of prescribed medications for addiction treatment compared with non-Hispanic White patients; non-Hispanic Black patients had the lowest odds of any group (AUD: aOR, 0.55; 95% CI, 0.43-0.69; OUD: aOR, 0.38; 95% CI, 0.31-0.46).
In this cohort study, there was an overall low likelihood of completed screening for alcohol and drug use among all minoritized race and ethnicity with language preference groups. All minoritized groups had lower odds of receipt of medications for addiction treatment compared with the non-Hispanic White group. Improving screening and addressing this emerging treatment inequity should be prioritized.
Emotion dysregulation (ED) is a hallmark of attention-deficit/hyperactivity disorder (ADHD); however, routine retrospective evaluations neglect its dynamic nature. To address this limitation, ecological momentary assessment (EMA) provides a useful methodology for collecting data in real time and in participants' natural surroundings. The current systematic review synthesizes findings from EMA studies on emotion dysregulation in ADHD. Systematic searches were conducted through April 2025 across three databases (PubMed, Scopus, and Web of Science). We included 33 studies, with approximately 2678 participants. Findings demonstrate that EMA successfully captures variation in emotion dysregulation while relating it to negative affect and functional impairment, regardless of age. Compliance rates (60-99%) confirm feasibility; however, studies with larger sample sizes and longitudinal data, particularly conducted with adolescents, are required. By providing real-time insight into the everyday dynamics of ADHD, the evidence compiled in this review offers a strong basis for the clinical use of EMA in patient management.
Alzheimer's disease and related dementias (ADRD) are progressive neurodegenerative conditions where early detection is critical for timely intervention and care planning. However, current diagnostic methods are often inaccessible, costly, and delayed, especially for underserved populations. There is a growing need for scalable, noninvasive tools that can support timely diagnosis. Spontaneous speech contains rich acoustic and linguistic features that can serve as noninvasive behavioral markers of cognitive decline. Foundation models, pretrained on large-scale audio or text data, generate high-dimensional embeddings that encode rich contextual and acoustic information. This study benchmarks open-source foundation language and speech models to evaluate their effectiveness in detecting ADRD from spontaneous speech as a potential solution for early, noninvasive, and scalable ADRD detection. In this study, we used the Pioneering Research for Early Prediction of Alzheimer's and Related Dementias EUREKA (PREPARE) Challenge dataset, which consists of audio recordings from over 1600 participants across 3 distinct categories of cognitive status: healthy control (HC), mild cognitive impairment (MCI), and Alzheimer's disease (AD). We further excluded samples that were non-English, contained nonspontaneous speech, or were of poor quality. Our final sample included 703 (59.13%) HC, 81 (6.81%) MCI, and 405 (34.06%) AD cases. We systematically benchmarked 18 open-source foundation speech and language models to classify cognitive status into these 3 categories (HC, MCI, or AD). Post hoc interpretability analysis was performed for the best-performing model using Shapley additive explanations, linking high-dimensional embeddings with explainable acoustic and linguistic markers.
The Whisper-medium model achieved the highest performance among speech models, with 0.731 accuracy and 0.802 area under the curve (AUC), while Bidirectional Encoder Representations from Transformers (BERT) with pause annotation achieved the top performance among language models, with 0.662 accuracy and 0.744 AUC. Overall, ADRD detection based on audio embeddings generated by state-of-the-art automatic speech recognition models outperformed other models, and the inclusion of nonsemantic information, such as pause patterns, consistently improved the classification performance of text-embedding-based models. Our work presents a comprehensive comparative evaluation of state-of-the-art speech and language models for AD and MCI detection on a large, clinically relevant dataset. Embeddings derived from acoustic models, which capture both semantic and acoustic information, show promising performance and highlight the potential for developing a more scalable, noninvasive, and cost-effective early detection tool for ADRD.
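The area-under-the-curve figures reported above can be made concrete via the Mann-Whitney interpretation of AUC: the probability that a randomly chosen positive case receives a higher classifier score than a randomly chosen negative one, with ties counted as one half. The scores below are hypothetical, not model outputs from the study.

```python
# Minimal sketch (not the authors' pipeline): AUC from raw classifier
# scores using its Mann-Whitney (rank) interpretation.

def auc(scores_pos, scores_neg):
    """Probability a positive outscores a negative; ties count 1/2."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in scores_pos
        for n in scores_neg
    )
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical model scores for AD-positive vs healthy-control recordings
value = auc([0.9, 0.8, 0.6], [0.7, 0.4, 0.3])
```

Because it depends only on the ranking of scores, AUC is insensitive to the classification threshold, which is why it complements the accuracy figures quoted above.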
Osteoporosis is a metabolic bone disease characterized by reduced bone mass and deterioration of bone microarchitecture, in which impaired osteogenic differentiation of bone marrow-derived mesenchymal stem cells (BMSCs) represents a central pathological mechanism. In recent years, ferroptosis, a newly recognized form of regulated cell death, has been demonstrated to play an important role in the initiation and progression of osteoporosis. Mitochondrial dysfunction can exacerbate oxidative stress and disrupt iron metabolism, thereby triggering ferroptosis in BMSCs and ultimately inhibiting osteogenic differentiation. The present study aimed to identify and validate key genes associated with mitochondrial homeostasis and osteoporosis, with a particular focus on the role of glial cell line-derived neurotrophic factor (GDNF) in regulating mitochondrial function, suppressing ferroptosis, and promoting osteogenic differentiation of BMSCs. Gene expression data from normal human bone tissues and osteoporotic bone tissues were obtained from public transcriptomic datasets in the Gene Expression Omnibus (GEO) database. Through differential expression analysis, GDNF was identified as a candidate gene. In vitro experiments demonstrated that GDNF markedly improved mitochondrial membrane potential, reduced intracellular reactive oxygen species (ROS) levels, and restored GPX4 expression, thereby promoting osteogenic differentiation of BMSCs. Furthermore, animal experiments confirmed that GDNF intervention effectively increased bone mineral density and improved trabecular microarchitecture in ovariectomized (OVX) mice. In conclusion, this study suggests that GDNF may suppress ferroptosis by maintaining mitochondrial homeostasis in BMSCs, thereby enhancing osteogenic differentiation and alleviating osteoporosis, and provides a potential theoretical basis for molecular targeted therapy in osteoporosis.
Universal Credit (UC) is a major UK welfare reform that consolidates six means-tested benefits into a single monthly payment, aiming to simplify benefits delivery and incentivize labor market participation. However, concerns have emerged regarding its potential adverse consequences on recipients' mental and physical well-being. Existing evidence is limited by methodological weaknesses, short follow-up time, and a narrow focus on psychological distress. Applying the heterogeneous difference-in-differences approach developed by Callaway and Sant'Anna, we used waves 6-14 of the UK Household Longitudinal Survey (UKHLS), focusing on working-age individuals receiving social benefits, to evaluate the short- and long-term effects of this welfare reform on psychological distress (GHQ-12), mental functioning (SF-12 MCS), and physical functioning (SF-12 PCS), as well as employment, perceived financial outlook, benefits income, and total income. Transitioning to UC significantly increased GHQ-12 scores by 1.20 points (95% CI: 0.33 to 2.07) and decreased SF-12 MCS scores by 2.19 points (95% CI: -3.79 to -0.59), indicating deteriorating mental health. No significant effect was observed for SF-12 PCS. UC was also associated with a £93.05 reduction in monthly benefit income, a £222 decrease in total income, and an 8-percentage-point decrease in perceived financial optimism. No significant effect on employment status was detected. Our findings suggest that the transition to UC adversely affected mental health and financial well-being, while yielding limited employment benefits. These adverse impacts reflect both implementation challenges, such as payment delays and benefit deductions, and structural design flaws, including rigid conditionality and reduced income security for vulnerable groups. The results underscore the need for welfare reforms that integrate health considerations and provide more flexible, targeted support to mitigate unintended harms.
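The Callaway and Sant'Anna approach estimates group-time average treatment effects ATT(g, t): outcome changes from a pre-treatment base period (g-1) to period t for units first treated in period g, netted against the same change in an untreated comparison group. A minimal sketch of the never-treated-comparison version on a noiseless simulated panel (the full estimator also supports not-yet-treated controls and doubly robust covariate adjustment, omitted here):

```python
import numpy as np
import pandas as pd

def att_gt(df, g, t):
    """Group-time ATT(g, t) using never-treated units (group == 0) as controls.
    Compares long differences in y from base period g-1 to period t."""
    base = df[df.period == g - 1].set_index("unit")["y"]
    post = df[df.period == t].set_index("unit")["y"]
    change = post - base                              # per-unit long difference
    treated = df.loc[df.group == g, "unit"].unique()
    never = df.loc[df.group == 0, "unit"].unique()
    return change.loc[treated].mean() - change.loc[never].mean()

# Simulated panel: group = first treated period (0 = never treated).
rows = []
tau = 1.2  # true effect, echoing the 1.20-point GHQ-12 rise (illustrative only)
for unit, group in enumerate([2, 2, 3, 3, 0, 0]):
    alpha = 0.5 * unit                # unit fixed effect (cancels in differences)
    for t in range(1, 5):
        lam = 0.3 * t                 # common time effect (cancels across groups)
        y = alpha + lam + (tau if group and t >= group else 0.0)
        rows.append({"unit": unit, "period": t, "group": group, "y": y})
df = pd.DataFrame(rows)
print(att_gt(df, g=2, t=3))  # recovers tau in this noiseless panel
```

Because each cohort keeps its own base period, the estimator avoids the "forbidden comparisons" of already-treated units that bias two-way fixed-effects estimates under staggered adoption.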
Timely, individually tailored support for family caregivers of cancer patients is increasingly emphasised, reinforcing the importance of implementing screening tools in clinical practice. This study evaluated the validity and reliability of the Swedish CancerSupportSource-Caregiver among 145 Swedish family caregivers of persons diagnosed with cancer, who responded to the CancerSupportSource-Caregiver, sociodemographic questions, and the Hospital Anxiety and Depression Scale. Psychometric analyses were performed using descriptive statistics and classical test theory to evaluate data quality, targeting, scaling assumptions, and internal validity. Construct validity was assessed through confirmatory factor analysis; criterion validity through concurrent validity; and reliability through internal consistency. Overall, evaluations demonstrated generally satisfactory psychometric properties with respect to data quality, targeting, and scaling assumptions. The hypothesized five-domain model showed an acceptable fit to the data, although fit indices suggested it could be improved. Item loadings were generally high, supporting the proposed construct structure. Further, assessments of criterion validity were satisfactory. However, the evaluations of internal validity and internal consistency indicated redundancy, mainly within the emotional well-being domain. The Swedish CancerSupportSource-Caregiver demonstrated preliminary satisfactory abilities to screen for support needs and psychological distress among Swedish family caregivers of persons diagnosed with cancer. Further evaluations in larger samples, using Rasch measurement theory, could provide a deeper understanding of the functioning of items and response options.
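Internal consistency of a questionnaire domain is typically summarised with Cronbach's alpha, where very high values (above roughly 0.9) can signal the kind of item redundancy this abstract reports. A minimal sketch on made-up item scores (not the study's data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Illustrative 5-item domain scored 1-5 by 6 respondents (invented data with
# near-duplicate items, mimicking redundancy in an emotional well-being domain).
scores = np.array([
    [1, 2, 1, 2, 1],
    [2, 2, 2, 3, 2],
    [3, 3, 3, 3, 3],
    [4, 4, 3, 4, 4],
    [4, 5, 4, 5, 4],
    [5, 5, 5, 5, 5],
])
print(round(cronbach_alpha(scores), 3))  # well above 0.9: consistent, possibly redundant
```

Alpha rewards inter-item correlation, so near-duplicate items inflate it; that is why a very high alpha is read here as redundancy rather than simply as good reliability.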
New graduate nurses experience transition shock during the first year of their professional lives. Few prior studies have examined how transition shock affects newly graduated nurses' ability to provide care, and further evidence related to missed care is required. This study evaluated the relationship between transition shock and missed nursing care among new graduate nurses. This descriptive, correlational study involved 277 new graduate nurses working in four hospitals. Data were collected from December 2023 to February 2024 using two standardised scales: the MISSCARE Survey and the Nursing Transition Shock Scale. The data were analysed using Pearson correlation and multiple regression. Transition shock was significantly associated with missed nursing care practices and with the causes of missed care, including human resources, material resources and communication. These results showed that transition shock significantly predicted missed nursing care practices and their causes. The study highlighted that the transition shock of new graduate nurses is associated with missed nursing care. To prevent missed care by new graduate nurses, these determinants should be considered when providing nursing care. The study's conclusions suggest that supporting recently graduated nurses with continuing education and mentoring may help prevent missed care. Reporting adhered to the STROBE guidelines. There was no patient or public contribution.
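The analysis pairs Pearson correlation with multiple regression. A minimal sketch of both on invented, noiseless scores (the variable names `shock`, `staffing` and `missed` are illustrative, not the study's measures):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    return float(np.corrcoef(np.asarray(x, float), np.asarray(y, float))[0, 1])

def ols_coefs(X, y):
    """Multiple regression coefficients (intercept first) via least squares."""
    X = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X, np.asarray(y, float), rcond=None)
    return beta

# Hypothetical predictor scores and a noiseless outcome, so the regression
# recovers the constructed coefficients exactly.
shock = np.array([2.0, 2.5, 3.0, 3.5, 4.0, 4.5])
staffing = np.array([1.0, 1.0, 2.0, 2.0, 3.0, 3.0])
missed = 0.5 + 0.8 * shock + 0.3 * staffing
print(pearson_r(shock, missed))                            # bivariate association
print(ols_coefs(np.column_stack([shock, staffing]), missed))  # adjusted effects
```

The contrast is the point: the correlation mixes the two predictors' contributions, while the regression separates the effect of transition shock from the co-varying resource measure.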
Sleep regularity may represent a modifiable risk factor affecting osteoporosis susceptibility, but epidemiological evidence remains scarce. This research sought to analyze the link between sleep regularity parameters and incident osteoporosis, its interaction with genetic risk, and the potential for improved sleep regularity to mitigate risk in individuals with different genetic predispositions within a population-based cohort. A longitudinal analysis was conducted using data from the UK Biobank, which included 87,231 participants without osteoporosis at the time of accelerometer data collection in 2013-2015, with follow-up until June 30, 2023. Sleep regularity parameters were determined by calculating the within-person standard deviation (SD) of 7-day accelerometer-tracked sleep parameters (including sleep duration, onset time, wake-up time, and midpoint). We investigated the association between these four sleep regularity parameters and osteoporosis risk, and assessed the potential reduction of osteoporosis occurrence by enhancing sleep regularity. Additionally, subgroup and sensitivity analyses were executed. Across a median follow-up period of 8.6 years, 2035 new osteoporosis cases were recorded. Participants in the top quartile of SD for sleep duration, onset time, and midpoint had higher osteoporosis risk compared to those in the bottom quartile (fully adjusted HRs ranged from 1.17 to 1.21). Sleep duration SD showed the highest population attributable fraction (PAF). Moreover, a statistically significant additive gene-sleep interaction was identified. When considering both sleep regularity and osteoporosis polygenic risk score (PRS), the group with the highest risk nearly doubled their osteoporosis risk compared to the group with the lowest risk (fully adjusted HRs ranged from 2.22 to 2.24). 
Importantly, improving sleep regularity mitigated the PRS effect on osteoporosis, with the greatest absolute risk reduction observed in individuals with intermediate PRS (1.6 to 1.7 times that of those with high PRS). Irregular sleep patterns were associated with an increased risk of developing osteoporosis, with exploratory analyses suggesting potential variation across levels of genetic susceptibility. These findings underscore the potential importance of maintaining stable sleep patterns for bone health and suggest that sleep regularity may represent a modifiable behavioral factor for osteoporosis prevention, warranting further investigation.
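Two quantities in this abstract are easy to make concrete: the within-person SD used to quantify sleep regularity, and a population attributable fraction (PAF). The sketch below uses Levin's formula with the hazard ratio standing in for the relative risk; the 7-day durations are invented, and the study's own PAF method may differ:

```python
import numpy as np

def within_person_sd(durations):
    """Within-person SD of 7 days of accelerometer-derived sleep duration;
    larger values indicate more irregular sleep."""
    return float(np.std(np.asarray(durations, float), ddof=1))

def levin_paf(prevalence, rr):
    """Levin's population attributable fraction for a binary exposure,
    treating the hazard ratio as an approximation to the relative risk:
    PAF = p(RR-1) / (1 + p(RR-1))."""
    return prevalence * (rr - 1) / (1 + prevalence * (rr - 1))

week = [6.5, 7.0, 8.5, 5.5, 7.5, 6.0, 9.0]  # hours slept each night, illustrative
print(round(within_person_sd(week), 2))      # this person's duration SD

# Top-quartile exposure (prevalence 0.25) at the upper reported HR of 1.21:
print(round(levin_paf(0.25, 1.21), 3))
```

Note that onset and midpoint times wrap around midnight, so their SDs are computed on a circular scale in practice; the plain SD above applies only to durations.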
Elexacaftor/tezacaftor/ivacaftor (ETI) triple therapy has been introduced as causal therapy for people with cystic fibrosis (pwCF) carrying at least one F508del allele. Data on the real-world effect of ETI in pwCF without any F508del allele are limited. In this observational cohort study, data from the German CF Registry for pwCF who received off-label ETI for up to 365 days were used to study effects on lung function, nutritional status and sweat chloride concentration (SCC). 616 pwCF (mean±sd age 28.8±13.9 years) were registered with ETI treatment within compassionate use (n=509; at least one F508del allele present) or off-label use (carrying non-F508del mutations likely to respond to ETI (n=81) or not (n=26)). After 1 year, mean percentage predicted forced expiratory volume in 1 s increased by 10.3±11.0 (p<0.0001) and 9.0±10.7 (p<0.0001) percentage points compared to baseline in the at least one F508del and non-F508del predicted responsive groups, and remained stable in the non-F508del predicted non-responsive group (p=0.31). Body mass index percentile increased in the at least one F508del and non-F508del predicted responsive groups (11.4±15.2; p<0.0001 and 5.0±12.3; p<0.05, respectively), and remained stable in the non-F508del predicted non-responsive group (p=0.14). Mean SCC after 3 months decreased by 43.9±24.6 mmol·L⁻¹ (p<0.0001) and 34.4±22.9 mmol·L⁻¹ (p<0.0001), respectively, in the at least one F508del and non-F508del predicted responsive groups, and increased in the non-F508del predicted non-responsive group (14.8±9.5 mmol·L⁻¹; p<0.05). Real-world data align with the results of randomised clinical trials for pwCF carrying at least one F508del allele. Clinical improvements in pwCF with a range of CF transmembrane conductance regulator mutations treated off-label with ETI provide compelling support for the recent ETI label extension in Europe.
Advances in cancer therapy have improved survival but increased the risk of treatment-related cardiotoxicity, which remains difficult to detect early with existing biomarkers. [¹⁸F]F-AraG is a PET tracer that targets cells with active mitochondrial biogenesis, including cardiomyocytes and activated T cells, and may enable concurrent assessment of cardiac involvement and therapy-associated immune activity. This study evaluated whether [¹⁸F]F-AraG PET can serve as an early imaging biomarker of cardiac effects across different cancer therapies. Twenty-six healthy subjects underwent [¹⁸F]F-AraG PET to establish baseline myocardial uptake. Seven patients with stage III melanoma and ten with advanced non-small cell lung cancer were imaged before and after immunotherapy. Myocardial uptake (SUVmax, SUVmean, SUVtotal) was quantified in the left (LV) and right (RV) ventricles, with LV regional uptake analyzed using a 17-segment model. Myocardial uptake was examined in relation to abnormal cardiac status in a subset of patients with available electrocardiogram (ECG) data. Associations between cardiac uptake, mitochondrial content, and PGC-1α expression were evaluated. Healthy myocardium demonstrated consistent and spatially uniform [¹⁸F]F-AraG uptake across age and sex, with higher uptake in the LV than RV. Conventional therapy was associated with increased global myocardial uptake, whereas immunotherapy was associated with additional heterogeneous and focal myocardial uptake. Altered myocardial uptake patterns were observed in patients with ECG abnormalities. [¹⁸F]F-AraG PET detects therapy-associated changes in myocardial tracer uptake following cancer treatment. These findings support its potential utility as a noninvasive imaging approach for early evaluation of cardiac effects in patients receiving cancer therapies.