Preterm infants, often gavage-fed due to immaturity, may benefit from interventions that hasten the move to breastfeeding. Olfactory stimulation may increase appetite and speed up the transition to full oral feeding, potentially shortening hospital stay and improving outcomes for both infants and families. Our objective was to evaluate the benefits and harms of olfactory stimulation for reducing morbidity and promoting development in hospitalized preterm infants. We searched MEDLINE, Embase, CENTRAL, CINAHL, Epistemonikos, two trial registries, and conference abstracts up to 2 April 2025. We checked the reference lists of included studies and systematic reviews on olfactory or sensory stimulation. We included randomized controlled trials (RCTs) and quasi-RCTs evaluating olfactory stimulation with different odorants (maternal breast milk, food-associated odors, or non-food-associated odors) in preterm infants (born before 37 weeks' gestation). Eligible controls were no intervention, placebo, or standard care (considered together) or another odorant. We excluded studies combining olfactory stimulation with taste, as another review will focus on combined sensory stimulation. Our critical outcomes were apnea, intermittent hypoxemia, duration of hospital stay, time to full oral feeding, and exclusive breastfeeding. Our important outcomes included blindness and sensorineural deafness requiring amplification. We used the Cochrane risk of bias tool (RoB 2) to assess risk of bias in the included studies. We conducted meta-analyses using fixed-effect models to calculate risk ratios (RRs) for dichotomous data and mean differences (MDs) for continuous data, each with its 95% confidence interval (CI). We assessed the certainty of evidence using GRADE. We included 14 trials enrolling 1087 neonates.
The types of olfactory stimulation under investigation were maternal breast milk (9 studies); food-associated odors such as cinnamon, vanilla, or anise (5 studies); and non-food-associated odors such as rose or parents' scent (3 studies). Three studies evaluated two different odorants. Gestational age and bodyweight varied widely. The comparators were placebo, no intervention, and standard care. Eleven studies aimed to assess the effect of olfactory stimulation on infant feeding outcomes such as time to full oral feeding, weight gain, length of hospital stay, or a combination of these outcomes. Three studies aimed to assess the effect of olfactory stimulation on apnea prevention, oxygen saturation, or both. We identified five ongoing trials. No studies reported apnea as a dichotomous outcome, intermittent hypoxemia, exclusive breastfeeding, or major neurodevelopmental disability for any of the comparisons. We downgraded the certainty of evidence for limitations in study design, imprecision, and indirectness. Olfactory stimulation with maternal breast milk versus no intervention, placebo, or standard care: The evidence is very uncertain about the effect of olfactory stimulation with maternal breast milk on the mean number of daily apnea episodes (MD -0.50, 95% CI -1.27 to 0.27; 1 study, 26 participants; very low-certainty evidence) and duration of hospital stay in days (MD -0.18, 95% CI -0.64 to 0.27; I² = 0%; 4 studies, 270 participants; very low-certainty evidence). Olfactory stimulation with maternal breast milk may result in a slight reduction in time to full oral feeding in days (MD -1.68, 95% CI -3.25 to -0.11; I² = 30%; 3 studies, 204 participants; low-certainty evidence).
Olfactory stimulation with food-associated odors versus no intervention, placebo, or standard care: Olfactory stimulation with food-associated odors may result in a slight reduction in the mean number of daily apnea episodes (MD -1.99, 95% CI -2.69 to -1.29; I² = 58%; 2 studies, 62 participants; low-certainty evidence). The evidence is very uncertain about the effect of olfactory stimulation with food-associated odors on duration of hospital stay in days (MD -2.65, 95% CI -6.18 to 0.89; I² = 20%; 3 studies, 185 participants; very low-certainty evidence) and time to full oral feeding in days (MD -2.06, 95% CI -5.16 to 1.04; I² = 36%; 3 studies, 185 participants; very low-certainty evidence). Olfactory stimulation with non-food-associated odors versus no intervention, placebo, or standard care: Olfactory stimulation with non-food-associated odors may result in a slight reduction in duration of hospital stay in days (MD -3.23, 95% CI -5.50 to -0.97; I² = 0%; 2 studies, 94 participants; low-certainty evidence). The evidence is very uncertain about the effect of olfactory stimulation with non-food-associated odors on the mean number of daily apnea episodes (MD -2.13, 95% CI -2.28 to -1.98; 1 study, 60 participants; very low-certainty evidence) and time to full oral feeding (mean 21 (standard deviation 3.1) days in the intervention group and mean 21 (standard deviation 15.6) days in the control group; 1 study, 27 participants; very low-certainty evidence). Olfactory stimulation with maternal breast milk compared to no intervention, placebo, or standard care may result in a slight reduction in time to full oral feeding, but the evidence is very uncertain about its effect on frequency of apnea episodes and duration of hospital stay.
Olfactory stimulation with food-associated odors compared to no intervention, placebo, or standard care may result in a slight reduction in frequency of apnea episodes, but the evidence is very uncertain about its effect on duration of hospital stay and time to full oral feeding. Olfactory stimulation with non-food-associated odors compared to no intervention, placebo, or standard care may result in a slight reduction in duration of hospital stay, but the evidence is very uncertain about its effect on frequency of apnea episodes and time to full oral feeding. Future studies should be more rigorous in their design, report using TIDieR (Template for Intervention Description and Replication) checklists, have larger sample sizes, and measure outcomes such as apnea (number of infants with ≥ 1 episode), intermittent hypoxemia (number of infants with ≥ 1 episode), exclusive breastfeeding, and major neurodevelopmental disabilities. Dedicated funding for this review can be found in the 'Sources of support' section. Protocol: https://doi.org/10.1002/14651858.CD016074.
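The review above pools continuous outcomes as mean differences with an inverse-variance fixed-effect model. A minimal sketch of that pooling step, using hypothetical trial data rather than the review's own, might look like:

```python
import math

def fixed_effect_md(studies):
    """Inverse-variance fixed-effect pooling of mean differences.

    studies: list of (md, se) pairs, one per trial.
    Returns (pooled_md, ci_low, ci_high) with a 95% CI.
    """
    weights = [1.0 / se ** 2 for _, se in studies]
    pooled = sum(w * md for (md, _), w in zip(studies, weights)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se

# Hypothetical trials (MD in days, SE) -- NOT the review's actual data.
trials = [(-1.2, 1.0), (-2.5, 1.5), (-1.0, 2.0)]
md, lo, hi = fixed_effect_md(trials)
```

Trials with smaller standard errors receive larger weights, which is why one large precise trial can dominate a fixed-effect pooled estimate.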
Bladder cancer (BC) is a prevalent malignancy with evolving treatment strategies and an increasingly aging patient population, resulting in a growing and complex burden of hospitalizations that extends beyond urological care and remains insufficiently characterized in real-world Internal Medicine settings. This study aimed to analyze the clinical data and outcomes for patients with BC admitted to the medicine ward. Additionally, this research presents three cases of fever of unknown origin, which all exhibited identical clinical and laboratory findings but ultimately led to different diagnoses. This retrospective case-series study included all adult patients with BC admitted to the Internal Medicine ward of a tertiary referral hospital between 1 January 2020 and 31 December 2024. Data acquisition was performed through a systematic search of electronic discharge records using the ICD-10 code C67. Data recording involved detailed review of electronic medical records to collect demographic characteristics, clinical history, cancer-related treatments, causes of hospitalization, and outcomes. Three patients previously treated with intravesical Bacillus Calmette-Guérin (iBCG) who presented with fever of unknown origin were analyzed in detail. Data analysis comprised descriptive statistics and comparative testing using Fisher's exact test and unpaired two-tailed Student's t-test, with p < 0.05 considered statistically significant. We identified 77 hospitalizations among 67 BC patients who were predominantly male, with a mean age of 75.2 years. A high prevalence of metabolic syndrome comorbidities and chronic obstructive pulmonary disease was documented. In addition, 31.1% of patients had metastatic BC, 22.9% had a second malignancy, 49.2% had undergone urological surgeries, and 38% had received chemotherapy or immunotherapy other than iBCG.
The most common causes of hospitalization were infections, anemia/transfusions, newly diagnosed metastatic disease, and acute renal failure. Mortality in this cohort was high (17%), with infection again the leading cause of death. Among patients who had previously received BCG immunotherapy, three cases of fever of unknown origin were identified; despite identical clinical presentations, they proved to have different diseases [metastatic disease, infection caused by Bacillus Calmette-Guérin (BCGitis), and Hodgkin's lymphoma], necessitating individualized treatment. BC patients in the Internal Medicine unit are generally older adults, often dealing with several chronic conditions and a considerable cancer burden. They are predominantly admitted due to infections, which points to the urgent need for effective infection prevention strategies for this vulnerable population. BC patients with fever persisting more than seven days after BCG instillation (the maximum duration expected for self-limited adverse events), regardless of whether an antibiotic regimen has been prescribed, should be referred to an internal medicine department for further evaluation.
Spinal cord injury (SCI) causes persistent physical and psychological impairments and is associated with reduced quality of life. Telemedicine may improve rehabilitation access and follow-up care, but its effectiveness across multiple outcome domains in SCI remains uncertain. This study aimed to evaluate the effects of telemedicine interventions on psychological health, quality of life, sleep, functional independence and participation, and pain intensity in individuals with SCI. We searched PubMed, Web of Science, Embase, Ovid MEDLINE, and Cochrane CENTRAL until 17 February 2026. We included English-language randomized controlled trials (RCTs) of telemedicine interventions in individuals with SCI. Two reviewers independently screened studies, extracted data, and assessed risk of bias using the Risk of Bias 2 (RoB 2; Cochrane) tool. Random-effects meta-analyses used the Hartung-Knapp-Sidik-Jonkman method with restricted maximum likelihood estimation of between-study variance. Effects were summarized as standardized mean differences (SMD) or mean differences (MD) with 95% CIs. For the main meta-analyses, 95% prediction intervals were reported when at least 5 studies were available; they were not reported for analyses with fewer than 5 studies or for subgroup meta-analyses. Certainty of evidence was assessed using GRADE (Grading of Recommendations Assessment, Development, and Evaluation). We included 33 studies (35 reports). Telemedicine improved the World Health Organization Quality of Life-BREF (WHOQOL-BREF) social domain (MD 3.27, 95% CI 0.64 to 5.89; P=.03) and sleep quality at 3 months (MD -2.24, 95% CI -3.82 to -0.67; P=.04). Depressive symptoms also improved in the >3-≤6 months follow-up subgroup (SMD -0.31, 95% CI -0.57 to -0.04; P=.03).
Overall effects for depressive symptoms were not significant (SMD -0.11, 95% CI -0.26 to 0.05; prediction interval -0.37 to 0.15; P=.16; I²=36.3%), while findings for anxiety, other WHOQOL-BREF domains, sleep quality at 1 month, functional outcomes, and pain intensity generally favored telemedicine but did not reach statistical significance. Approximately half of the studies were rated as low risk overall on RoB 2, with most remaining studies rated as having some concerns and a smaller subset rated as high risk. GRADE certainty was high for the >3-≤6-month depressive-symptoms subgroup, moderate for the WHOQOL-BREF social domain, Pittsburgh Sleep Quality Index (PSQI), and Spinal Cord Independence Measure (SCIM), and low for depressive symptoms overall, anxiety, and pain intensity. Telemedicine may improve selected outcomes in SCI, with the most consistent evidence for social aspects of quality of life, sleep after sustained intervention exposure, and a more favorable effect on depressive symptoms in midterm follow-up subgroup analyses. These results suggest telemedicine as a practical adjunct for extending SCI rehabilitation access and continuity. Further trials should focus on optimizing intervention components, intensity, and patient targeting.
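The prediction intervals reported above come from a random-effects model; the standard 95% prediction interval widens the confidence interval by the between-study variance, using a t quantile on k-2 degrees of freedom. A sketch of that calculation, using the reported pooled SMD (-0.11) but with an assumed SE, tau², and study count (none of these three taken from the review):

```python
import math

def prediction_interval_95(mu, se_mu, tau2, t_crit):
    """95% prediction interval for a random-effects summary effect:
    mu +/- t_crit * sqrt(tau^2 + SE(mu)^2), with t_crit on k-2 df."""
    half = t_crit * math.sqrt(tau2 + se_mu ** 2)
    return mu - half, mu + half

# Assumed inputs for illustration: SE 0.08, tau^2 = 0.01,
# k = 10 studies, so t with 8 df at two-sided 95% is about 2.306.
lo, hi = prediction_interval_95(-0.11, 0.08, 0.01, 2.306)
```

Because tau² enters directly, a prediction interval can straddle zero even when the confidence interval does not, which is why it describes where the effect in a new setting might fall rather than the precision of the average effect.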
Objective: To investigate the clinical efficacy, toxicity, and prognosis of the SOX regimen (oxaliplatin plus S-1) combined with pembrolizumab in the treatment of patients with metastatic gastric cancer. Methods: The clinical data of 107 patients with advanced metastatic gastric cancer admitted to Wuxi Branch of Ruijin Hospital Shanghai Jiao Tong University School of Medicine from May 2020 to May 2024 were retrospectively collected. According to the treatment received, patients in the chemotherapy group (n=56) received SOX chemotherapy alone, and patients in the combination group (n=51) received SOX chemotherapy combined with pembrolizumab. Objective response rate (ORR), median progression-free survival (PFS), median overall survival (OS), grade Ⅲ-Ⅳ toxicity, and quality-of-life improvement rate were compared between the two groups. Results: According to the RECIST 1.1 efficacy evaluation criteria, there were 1 case of complete response (CR) and 21 cases of partial response (PR) in the chemotherapy group, and 2 cases of CR and 31 cases of PR in the combination group. The objective response rate (CR+PR) in the combination group was significantly higher than that in the chemotherapy group (64.7% vs 39.3%, P=0.015). Stratified analysis showed that in patients with a PD-L1 combined positive score (CPS) ≥1, the ORR of the combination group increased further to 75.9% (22/29), compared with 38.9% (14/36) in the chemotherapy group (P=0.004). Survival analysis showed that among all enrolled patients, median PFS and median OS in the combination group were 9.3 months and 17.4 months respectively, significantly longer than the 8.4 and 13.1 months in the chemotherapy group (PFS: P=0.020; OS: P=0.011).
In addition, among patients with PD-L1 CPS ≥1, median PFS and OS in the combination group were prolonged to a greater extent relative to the chemotherapy group (median PFS: 10.7 months vs 8.2 months, P=0.003; median OS: 19.0 months vs 12.8 months, P=0.005). Among all enrolled patients, quality of life improved in 14 cases in the chemotherapy group and 23 cases in the combination group; the improvement rate was significantly higher in the combination group (45.1% vs 25.0%, P=0.029). There was no statistically significant difference in the incidence of grade Ⅲ-Ⅳ toxicity between the two groups [67.9% (38/56) in the chemotherapy group vs 78.4% (40/51) in the combination group; P=0.219]. Conclusion: In all enrolled patients, and especially in those with PD-L1 CPS ≥1, the SOX regimen combined with pembrolizumab significantly improved ORR, prolonged PFS and OS, and improved prognosis and quality of life compared with chemotherapy alone, without a significant increase in toxicity.
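The abstract does not state which test produced P=0.015 for the ORR comparison. As an illustration only, a two-sided Fisher's exact test on the reported responder counts (33/51 in the combination group vs 22/56 in the chemotherapy group) can be computed directly from the hypergeometric distribution:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Sums the probabilities of all tables with the same margins that are
    no more likely than the observed table.
    """
    n = a + b + c + d
    row1, col1 = a + b, a + c

    def hyper(x):  # P(cell a == x) under fixed margins
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    lo, hi = max(0, row1 + col1 - n), min(row1, col1)
    probs = [hyper(x) for x in range(lo, hi + 1)]
    p_obs = hyper(a)
    return sum(p for p in probs if p <= p_obs * (1 + 1e-9))

# Responders vs non-responders from the abstract:
# combination 33 of 51, chemotherapy 22 of 56.
p_val = fisher_exact_two_sided(33, 51 - 33, 22, 56 - 22)
```

With these counts the exact p-value falls below 0.05, consistent with the abstract's conclusion of a significant ORR difference, though the reported P=0.015 may come from a chi-square test instead.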
Objective: To investigate the efficacy and safety of 36 000 IU recombinant human erythropoietin (rhEPO) in the treatment of cancer-related anemia (CRA) and to evaluate whether 36 000 IU rhEPO can serve as a rational "reduced-dose alternative" to the 40 000 IU rhEPO regimen. Methods: This multicenter, open-label, non-inferiority, randomized controlled trial was conducted from March 2023 to July 2024 across 12 hospitals in China, including Liaoning Cancer Hospital. A total of 119 patients with CRA were enrolled and randomly assigned to receive 36 000 IU rhEPO (n=61) or 40 000 IU rhEPO (n=58). The primary efficacy endpoint was the change in hemoglobin (Hb) level from baseline at weeks 9-13. Secondary efficacy endpoints included hematologic and other biochemical parameters, transfusion requirements, quality of life (QOL), assessed by QOL scores and Karnofsky performance status (KPS) scores, and overall survival. Safety was evaluated by the incidence of treatment-emergent adverse events (TEAEs). Results: The least-squares mean changes in Hb from baseline to weeks 9-13 were (12.9±2.3) g/L in the 36 000 IU group and (13.4±2.4) g/L in the 40 000 IU group. Analysis of covariance showed no statistically significant difference between groups (F=-0.21, P=0.836), with a between-group difference of (-0.5±2.5) g/L (95% CI: -5.4 g/L, 4.4 g/L). The lower limit of the 95% CI (-5.4 g/L) exceeded the predefined non-inferiority margin of -10 g/L, indicating non-inferiority of the 36 000 IU dose compared with the 40 000 IU dose. At week 13, the proportions of patients with an Hb increase ≥10 g/L were 82.0% (50/61) in the 36 000 IU group and 86.2% (50/58) in the 40 000 IU group, with no significant difference between groups (Qmh=0.40, P=0.527). The average weekly transfusion rate was 2.0% in both groups.
No statistically significant differences were observed between the two groups in terms of changes in hematocrit, reticulocyte percentage, folate, vitamin B12, albumin, iron metabolism markers, QOL scores, KPS scores, or overall survival (all P>0.05). Regarding safety, the incidence of TEAEs was 80.3% (49/61) in the 36 000 IU group and 84.5% (49/58) in the 40 000 IU group, with nausea, fever, and fatigue being the most common symptoms (incidence >5%). No drug-related serious adverse events were reported, and there were no significant differences between the groups (P>0.05). Conclusions: The 36 000 IU dose of rhEPO is non-inferior to the 40 000 IU dose in terms of efficacy and has a favorable safety profile for the treatment of CRA. These findings support the use of 36 000 IU rhEPO as a reasonable clinical option for managing CRA.
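The non-inferiority conclusion above rests on a single comparison: the lower bound of the 95% CI for the between-group Hb difference must lie above the prespecified margin. A minimal check using the values reported in the abstract:

```python
def non_inferior(ci_lower, margin):
    """Conclude non-inferiority when the lower bound of the 95% CI for
    (test minus reference) lies above the prespecified margin."""
    return ci_lower > margin

# Reported values: between-group Hb difference -0.5 g/L,
# 95% CI (-5.4, 4.4) g/L, non-inferiority margin -10 g/L.
result = non_inferior(-5.4, -10.0)  # True: 36 000 IU is non-inferior
```

Note that a non-significant superiority test (here P=0.836) does not by itself establish non-inferiority; it is the CI-versus-margin comparison that carries the conclusion.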
The incidence of oropharyngeal squamous cell carcinoma (OPSCC) is increasing, and human papillomavirus (HPV)-associated OPSCC is contributing substantially to this trend. Salivary HPV testing enables early detection of HPV-driven OPSCC. The GeneXpert system (Cepheid) is an automated polymerase chain reaction (PCR)-based diagnostic platform designed primarily for cervical cancer screening using vaginal swabs. Our goal was to adapt this platform for HPV testing using salivary oral rinse (SOR) samples to enable early detection of HPV-driven OPSCC. Patients suspected of having any type of head and neck cancer and people at higher risk of acquiring HPV infections were recruited (n = 67). A volume of 5 to 10 mL of each SOR sample was tested using the GeneXpert system. Genomic DNA was extracted from the same SOR samples and analyzed using quantitative real-time PCR for human papillomavirus type 16 (HPV-16). HPV-16 detection status was used to evaluate the efficiency of HPV detection by the GeneXpert system. The GeneXpert system had excellent specificity of 100% and sensitivity of 72.09%, with substantial agreement with quantitative real-time PCR results (κ = .649). This confirms the suitability of SOR as a sample medium on the GeneXpert system to detect oral HPV with minimal bias. The GeneXpert system can effectively detect HPV in SOR samples. It offers rapid HPV detection and helps identify people at high risk of developing HPV-associated OPSCC, thereby holding promise for the early detection of HPV-driven OPSCC. These results show the feasibility of using SOR samples on the GeneXpert platform as a scalable, noninvasive screening approach for the early detection of patients at higher risk of developing HPV-associated OPSCC.
Delirium is a multifactorial and potentially life-threatening syndrome that remains underdiagnosed and undertreated in nursing homes, despite residents' high vulnerability due to advanced age, multimorbidity and cognitive impairment. To date, little is known about how healthcare professionals perceive and manage delirium in these settings, particularly regarding prevention, diagnosis, therapy and interdisciplinary collaboration, as well as differentiation from other neurodegenerative diseases. This study explores the perspectives of nurses and general practitioners (GPs) on the quality of delirium care in German nursing homes in order to identify barriers and opportunities for improvement. An exploratory qualitative design was employed. A total of 30 semi-structured interviews were conducted with 15 nurses and 15 GPs in Germany. Participants were recruited using a criterion-based purposive sampling strategy to ensure their direct involvement in nursing home care. Data were collected using collaboratively developed interview guides and analyzed using qualitative content analysis with a deductive-inductive approach. Both nurses and GPs reported uncertainty and variability in the understanding, recognition and management of delirium in nursing homes. While preventive and other non-pharmacological measures were applied intuitively, they were rarely identified as delirium-specific. Both professions highlighted limited knowledge and training, unclear responsibilities and the absence of standardized tools as major barriers to effective care. Diagnostic practices were largely based on clinical impression rather than structured assessments. Interprofessional and interdisciplinary cooperation was considered essential but was often hindered by organizational factors and individual attitudes. Participants also emphasized the value of involving relatives and other significant others in the care process but noted that this was inconsistent.
Delirium care in German nursing homes is non-standardized and marked by substantial variability in practice and outcomes. Although individual nurses and GPs recognize the challenges and intuitively apply some effective routines, care remains insufficiently systematic and is rarely guided by standardized strategies. Addressing knowledge gaps, improving interprofessional communication and implementing structured care pathways are crucial steps toward enhancing the prevention, diagnosis and therapy of delirium in these settings.
The use of CT has increased markedly over recent decades, and this trend is also observed among pregnant women. According to current clinical guidelines, fetal radiation exposure below a threshold of 100 to 200 mGy does not appear to increase the risk of congenital malformations. Although the estimated fetal dose from a maternal abdominopelvic CT examination is generally lower than this range, evidence regarding the risks associated with such relatively low-dose diagnostic exposures during pregnancy remains scarce. This study aims to evaluate the association between abdominopelvic CT exposure during the first trimester of pregnancy and the risk of congenital malformations in infants. We will conduct a nationwide population-based cohort study using the National Health Insurance Service (NHIS) database of South Korea. All live births from 2011 through 2023 will be identified and linked to their mothers. Exposure will be defined as maternal abdominopelvic CT during the first trimester of pregnancy. The primary outcome is major congenital malformations in infants, defined according to the standardised classification system of the European Surveillance of Congenital Anomalies. Secondary outcomes include organ-specific malformations and congenital malformations requiring neonatal intensive care unit admission or corrective surgery. A propensity score-based fine stratification weighting approach will be used to adjust for covariates as potential confounders, and relative risks will be estimated. Prespecified sensitivity analyses will be conducted to assess robustness of the findings. This study has been approved by the Institutional Review Board of Seoul National University Bundang Hospital (Institutional Review Board No. X-2508-993-902). Informed consent was waived because only anonymised administrative claims data will be used. All analyses will be conducted within the secure NHIS research environment, and no individually identifiable data will be released. 
The study findings will be disseminated through peer-reviewed publications, scientific conference presentations and communication with regulatory authorities, clinicians and policymakers.
Sepsis is a significant cause of global mortality and most frequently first presents in the emergency department. Dynamic platelet changes play a key pathophysiological role in sepsis, yet their comprehensive prognostic value, particularly from a multi-dimensional perspective that includes value, magnitude, and timing, remains underexplored. In this retrospective cohort study, 363 episodes of sepsis first diagnosed in the emergency department of Beijing Aviation General Hospital between April 2020 and January 2025 were included. We systematically analyzed three dimensions of platelet counts: 1) nadir value; 2) magnitude of platelet decline; 3) dynamic timing (time to nadir and to maximum decline). Multivariable Cox regression models were used to assess their independent associations with 28-day all-cause mortality, adjusting for demographics, comorbidities, infection characteristics, and disease severity. Among 363 patients (median age 81 years, 54.5% male), the 28-day mortality rate was 39.9%. Compared to survivors, non-survivors had a lower platelet nadir (70.0 vs. 134 × 10⁹/L, p < 0.001) and a greater magnitude of platelet decline (62.8% vs. 31.5%, p < 0.001). Multivariable analysis revealed dose-response relationships for platelet nadir (moderate [50-99 × 10⁹/L]: aHR = 2.04, p = 0.014; severe [20-49 × 10⁹/L]: aHR = 2.98, p < 0.001; profound [<20 × 10⁹/L]: aHR = 3.46, p = 0.003) and magnitude of decline (moderate [30-50%]: aHR = 1.79, p = 0.044; severe [>50%]: aHR = 2.43, p < 0.001). Timing analysis demonstrated that intermediate and late platelet nadirs were associated with higher mortality risk (intermediate: aHR = 4.39-7.28, all p < 0.001; late: aHR = 5.85-9.58, all p < 0.001), while initial and early nadirs showed no significant association. Similarly, both intermediate and late maximum platelet decline were associated with increased mortality (intermediate: aHR = 2.70-2.98, all p < 0.001; late: aHR = 6.33-6.87, all p < 0.001).
Through stepwise analysis, we demonstrate that dynamic platelet parameters (magnitude and timing of decline) independently predict 28-day mortality in sepsis. The magnitude applies to all patients but is static, while timing adds a critical dimension: the same degree of decline has different prognostic implications depending on when it occurs. Delayed decline and an intermediate or late nadir indicate poor prognosis, whereas an early nadir does not. These findings underscore the importance of longitudinal platelet monitoring in the emergency setting, as the trajectory, integrating both magnitude and timing, identifies at-risk patients, including those without thrombocytopenia.
This study aims to translate, culturally adapt, and validate the Chinese version of the King's Stool Chart among patients receiving enteral nutrition for use in nursing practice. This is a descriptive, cross-sectional study. This study was conducted in the intensive care unit (ICU) of a tertiary hospital in Henan Province, China. A total of 144 patients receiving enteral nutrition were included. This study was conducted in two phases. Phase I involved the translation and cultural adaptation of the King's Stool Chart using established methodologies, including forward translation, synthesis, back-translation and expert review. Phase II comprised psychometric evaluation with 144 patients receiving enteral nutrition. The Chinese version of the King's Stool Chart was used to assess stool frequency, consistency, weight and the daily total score. Validity was tested through content, construct and concurrent validity, while inter-rater reliability was assessed using the kappa coefficient. The Chinese version of the King's Stool Chart demonstrated excellent content validity, with item-level indices ranging from 0.95 to 1.00. Construct validity was supported by the ability of the Chinese version of the King's Stool Chart to differentiate between clinical subgroups with varying stool characteristics. Sensitivity rates for stool weight categorisation were above 89%, and substantial inter-rater reliability (kappa=0.735) was observed. The daily total score was effective in identifying patients at risk for diarrhoea, with significant differences observed among clinical subgroups. Diarrhoea classification, using a threshold of ≥15 points, showed strong construct validity. Within the scope of this single-centre sample, the Chinese version of the King's Stool Chart demonstrates acceptable validity and substantial to excellent inter-rater reliability for assessing stool frequency, consistency, weight and diarrhoea classification in enterally fed ICU patients in China. 
These psychometric properties provide preliminary support for its use in routine nursing practice for gastrointestinal function monitoring in ICU settings. Further multicentre and large-sample studies are required to verify its external validity and generalisability to broader Chinese-speaking populations and non-ICU clinical settings.
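Inter-rater reliability in the study above is quantified with the kappa coefficient, which corrects observed agreement for the agreement expected by chance. A minimal sketch of Cohen's kappa from a two-rater agreement table (the counts below are hypothetical, not the study's raw data):

```python
def cohens_kappa(table):
    """Cohen's kappa for two raters from a square agreement table.

    table[i][j]: number of cases where rater A chose category i
    and rater B chose category j.
    """
    k = len(table)
    n = sum(sum(row) for row in table)
    p_obs = sum(table[i][i] for i in range(k)) / n          # observed agreement
    row_m = [sum(row) / n for row in table]                  # rater A margins
    col_m = [sum(table[i][j] for i in range(k)) / n for j in range(k)]
    p_exp = sum(r * c for r, c in zip(row_m, col_m))         # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical two-rater stool-consistency ratings (3 categories).
table = [[40, 5, 1],
         [6, 50, 4],
         [0, 3, 35]]
kappa = cohens_kappa(table)
```

Values in the 0.61-0.80 range are conventionally read as "substantial" agreement, which is the band the study's reported kappa of 0.735 falls into.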
Several anticancer drugs can cause chemotherapy-induced peripheral neurotoxicity (CIPN) in approximately 70% of patients; of these, 30% continue to have chronic symptoms hampering quality of life. As more people live with and beyond cancer therapy, it is becoming increasingly important for clinicians to assess those at higher risk of CIPN and provide care for managing CIPN severity. Biomarkers could be appropriate in this regard. However, an initial step towards biomarker-informed care is to advance scientific knowledge regarding the potential of current biomarkers such as neurofilament light chain (NfL). Here, outcomes of a CIPN biomarker workshop and discussions among clinical and scientific experts held during the MASCC 2024 annual conference are reported. The aims of this work are to (1) identify knowledge gaps regarding the potential for serum NfL (sNfL) as a predictive CIPN biomarker, (2) provide guidance for future biomarker research, and (3) explore whether sNfL is ready for use in clinical practice. Expert consensus was reached using the Nominal Group Technique. A two-stage iterative approach was used: Stage 1 involved a literature review and an in-person workshop; Stage 2 involved a virtual Zoom meeting to confirm understanding and priorities following the workshop. Eleven identified studies reported that increases in sNfL after 2-3 chemotherapy doses are associated with post-treatment CIPN severity. Despite preliminary evidence suggesting positive correlations between sNfL and objective and subjective CIPN measures, wide variation in sNfL levels, the prescribed neurotoxic chemotherapy agent, dose and schedule, and symptom assessment time points makes it difficult to use sNfL as a predictor of CIPN severity. While there is potential for sNfL to be a useful biomarker, larger-scale research exploring its predictive value is needed before sNfL can be used to inform clinical decision-making.
Identifying links with validated clinical assessments could enhance biomarker accuracy and help address the multiple pathological mechanisms of neurotoxicity that arise from the differing chemotherapies. Moreover, workshop participants identified significant barriers to biomarker implementation (e.g. costs, long processing times, and unknown cut-off points).
Cancer in women represents a significant disease burden, posing challenges for prevention, treatment, and caregiving. This study aimed to analyze the epidemiological trends of the women's cancer burden and the main influencing factors in the Group of Twenty (G20) from 1990 to 2023. Incidence, prevalence, mortality, and disability-adjusted life years (DALYs) for breast, cervical, uterine, and ovarian cancers, as well as fertility rates for the G20 and its 98 locations, were sourced from the Global Burden of Disease Study 2023. Age-standardized rates (ASRs), quality of care index (QCI), and 5-year relative survival of integrated women's cancers were calculated. Average annual percent changes (AAPCs) were used to determine the temporal trends by age and region. Decomposition analysis identified drivers of changes in case numbers, linear regression assessed the associations with DALY rate changes, and dominance analysis identified dominant predictors. In 2023, the incidence, prevalence, mortality, and DALYs from women's cancers in the G20 were 3.29 [95% uncertainty interval (UI) 2.60-4.14], 26.71 (95% UI 21.99-32.40), 1.16 (95% UI 0.91-1.45), and 36.58 million (95% UI 28.40-46.32), respectively, with ASRs of 87.63/100,000 (95% UI 65.12-115.85), 706.16/100,000 (95% UI 555.75-890.02), 30.03/100,000 (95% UI 22.10-39.58), and 994.79/100,000 (95% UI 728.43-1328.81). The QCI was 75.13 [95% confidence interval (CI) 73.67-76.59], and the 5-year relative survival was 65.74% (95% CI 65.53-65.95). From 1990 to 2023, there was a significant increase in incidence, prevalence, mortality, and DALYs in the G20, primarily driven by population growth. Age-standardized incidence rate, QCI, and 5-year relative survival increased, while age-standardized mortality and DALY rates decreased. 
Changes in prevalence rates of breast cancer and cervical cancer for women aged 15-49 years were positively associated with changes in DALY rates of women's cancers, whereas changes in the total fertility rate were negatively associated. Dominance analysis confirmed these three factors consistently as dominant predictors between 1990 and 2023. Reducing the prevalence of breast and cervical cancers and increasing fertility among women aged 15-49 years could lower the overall DALY burden attributable to women's cancer. The incidence, prevalence, mortality, and DALYs of women's cancers in G20 have increased substantially from 1990 to 2023. Tailored prevention strategies should consider age and cancer type, emphasizing reproductive health for women of reproductive age.
Tuberous sclerosis complex (TSC) is a rare genetic disorder caused by pathogenic variants in the TSC1 or TSC2 genes. Apart from multisystem physical manifestations, most individuals with TSC experience TSC-associated neuropsychiatric disorders (TAND). Little is known about how TAND severity changes over time and what factors may predict these changes. Preliminary data suggest the presence of differential TAND severity trajectories. Caregiver well-being may act as a mediator of TAND severity, and a well-being intervention designed for caregivers of children with developmental disabilities may improve caregiver well-being. The study aims are to (1) examine longitudinal trajectories of TAND severity in a large sample of individuals with TSC and to examine potential predictors of differential trajectories, (2) evaluate the association between caregiver well-being characteristics, TAND severity, and severity trajectories, and (3) adapt and evaluate the feasibility, acceptability, and potential efficacy of a brief, online group-based well-being intervention for family caregivers. For the first 2 aims, 500 individuals with TSC or their caregivers will be recruited in an accelerated longitudinal design to document TAND severity at 5 time points over 12 months via a web-based app. At each time point, participants will complete demographic, TSC characteristics, intervention, and well-being questionnaires. Data will be analyzed using latent class mixed and multinomial regression modeling (aim 1) and structural equation and mediation modeling (aim 2). Participatory methods will be used to adapt an existing caregiver well-being intervention for the TSC community (aim 3). Thirty caregivers will be invited to participate in the adapted group-based online well-being intervention. 
This study was funded in July 2024 (HT94252410790 and HT94252410791), and ethics approvals were obtained from the University of Cape Town (July 2024), Vrije Universiteit Brussel (November 2024), and the Department of Defense Office of Human Research Oversight (December 2024). The TAND Toolkit app was adapted for longitudinal data collection (aims 1 and 2). Recruitment started in December 2025 and will continue until 500 participants are enrolled (anticipated December 2026). Primary outputs are expected by July 2028. For aim 3, experiential and adaptation workshops were completed in June 2025, the pilot intervention was delivered in November 2025, and data collection will continue until May 2026. Outputs are expected by December 2026. Identification of differential longitudinal TAND trajectories and their correlates will stimulate research in TSC and generate evidence for the self-report quantified TAND checklist as a clinical outcome measure. Understanding the association between caregiver well-being and TAND severity will provide support for targeted well-being interventions. A successful pilot trial will provide preliminary data for larger-scale clinical trials, with the potential to support caregivers and improve TAND outcomes. Together, the findings from the study will help close the gap in interventions for TAND. ClinicalTrials.gov NCT06879665; https://clinicaltrials.gov/study/NCT06879665. DERR1-10.2196/91726.
Nonsteroidal anti-inflammatory drugs (NSAIDs) are widely used as part of multimodal analgesia, but their association with acute kidney injury (AKI) in patients undergoing elective surgery remains uncertain. We conducted a retrospective, propensity score-matched study at a single-center academic hospital (ChiCTR2300076725). The primary outcome was the incidence of AKI, defined by the Kidney Disease: Improving Global Outcomes (KDIGO) serum creatinine criteria, compared between patients who did and did not receive NSAIDs during surgery. Secondary outcomes included AKI severity staging and prolonged postoperative hospitalization (≥7 days). A total of 11,139 patients were included, with 3,361 in the NSAIDs group and 7,778 in the control group. After propensity score matching (PSM), 3,361 matched pairs were obtained. The incidence of postoperative AKI was comparable between the NSAIDs and control groups after matching (4.2% vs. 4.2%, p > 0.999). The distribution of AKI stages (p = 0.830) and the median duration of postoperative hospital stay were also comparable between the two groups (both 6 days, p = 0.290). Sequentially adjusted logistic regression models consistently showed no significant association between NSAIDs and AKI, AKI staging, or prolonged hospitalization, both before and after PSM. Stratified analyses by NSAID type and dose-response analyses revealed no significant association with AKI. A sensitivity analysis using complete cases without imputation (n = 9,385) yielded consistent results. In conclusion, intraoperative NSAID administration was not independently associated with an increased risk of postoperative AKI in patients undergoing major noncardiac surgery.
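The 1:1 propensity score matching reported above (3,361 matched pairs) is commonly implemented as greedy nearest-neighbor matching within a caliper on the estimated propensity score. The study does not publish its matching code, so the following is only an illustrative sketch of that general technique; all names and the caliper value are invented for the example.

```python
def greedy_nn_match(treated_ps, control_ps, caliper=0.1):
    """Greedy 1:1 nearest-neighbor matching on propensity scores.

    Pairs each treated unit with the closest still-unmatched control
    whose score lies within `caliper`; treated units with no eligible
    control are dropped. Returns (treated_index, control_index) pairs.
    """
    # Process treated units from highest to lowest score, a common
    # heuristic because high-score units have the fewest candidate controls.
    order = sorted(range(len(treated_ps)), key=lambda t: -treated_ps[t])
    available = set(range(len(control_ps)))
    pairs = []
    for t in order:
        best, best_d = None, caliper
        for c in available:
            d = abs(treated_ps[t] - control_ps[c])
            if d <= best_d:
                best, best_d = c, d
        if best is not None:
            available.discard(best)  # each control is used at most once
            pairs.append((t, best))
    return sorted(pairs)
```

After matching, outcomes such as AKI incidence would be compared only within the matched pairs, which is what makes the 4.2% vs. 4.2% comparison interpretable.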
Patients with disorders of consciousness (DoC) resulting from severe traumatic brain injury (TBI) may recover consciousness and independence years later. There is a prevailing belief that recovery, when limited to the restoration of independence in activities of daily living, will be accompanied by poor self-reported quality of life (QOL) and psychological health. This perception may influence early clinical decision-making related to the provision of life-sustaining treatment and access to specialized rehabilitation. In this observational study, we utilized data from the multisite TBI Model Systems (TBIMS) to evaluate the outcomes of QOL (Satisfaction With Life Scale [SWLS]), anxiety (Generalized Anxiety Disorder-7 Scale [GAD-7]), and depression (Patient Health Questionnaire-9 [PHQ-9]) in participants who were admitted to inpatient rehabilitation with DoC and recovered the ability to provide self-report on these measures by 1 year post-TBI. Among 887 TBIMS participants admitted to inpatient rehabilitation with DoC (defined as the absence of command-following; 74% male; mean [standard deviation, SD] age = 36.82 [17.87] years; days post-injury on rehabilitation admission = 33.63 [22.51]), 50% regained the capacity to respond to questions on self-report measures at the 1-year follow-up time point. The mean (SD) total scores were as follows: SWLS = 20.38 (7.81), GAD-7 = 4.00 (5.66), and PHQ-9 = 5.22 (5.04). A minority of patients endorsed dissatisfaction (15%) or extreme dissatisfaction (9%) with life, and similarly, only 14% and 16%, respectively, reported anxiety and depression symptoms above the clinical cutoff points. The results were similar at the 2- and 5-year follow-up time points. In summary, at the group level, QOL and psychological health in persons who recover from DoC are similar to those of individuals with less severe brain injuries and to the general population. 
These findings challenge the assumption that recovery from DoC is associated with poor QOL and psychological health. Clinicians should be aware that patients with a broad range of residual disability after DoC are unlikely to report dissatisfaction with life or have significant anxiety and depression up to 5 years post-TBI.
Acute kidney injury (AKI) is common in patients with acute coronary syndrome (ACS). However, understanding of its prevalence, risk factors and prognosis remains incomplete. We identified 21,328 patients admitted for chest pain to a regional hospital in 2021; 6,685 had confirmed ACS. AKI episodes were identified by the serum creatinine criteria of the Kidney Disease: Improving Global Outcomes (KDIGO) AKI guideline. The rates of recovery and inpatient, 30-day and 90-day mortality rates were analyzed. Multi-variable analysis showed that ACS was independently associated with AKI (adjusted odds ratio [OR] 2.327; 95% confidence interval [CI] 2.130-2.542; p < 0.0001). Subgroup analysis showed that ACS was independently associated with AKI on admission (adjusted OR 2.516, 95% CI 2.305-2.746, p < 0.0001) but not with new-onset AKI during hospitalization. Other factors associated with AKI were similar between patients with and without ACS. Patients with ACS who developed AKI had a similar rate of recovery to those without ACS (p = 0.4). Multi-variable logistic regression showed that both ACS types in AKI were associated with higher inpatient, 30-day and 90-day mortality rates, and they had a synergistic effect. Other factors associated with the inpatient, 30-day and 90-day mortality rates of patients with AKI were similar between patients with and without ACS. We conclude that ACS was associated with a higher incidence of AKI, and the risk was mainly associated with AKI on admission. The recovery rates from AKI were similar between patients with and without ACS, but the presence of AKI and ACS synergistically increased the inpatient, 30-day and 90-day mortality rates. 
What is known: Acute kidney injury (AKI) is common in patients with acute coronary syndrome (ACS). What this study adds: ACS was mainly associated with AKI on admission, and the presence of AKI and ACS synergistically increased mortality rates. Potential impact: Identification of AKI on admission is important for the risk stratification of patients admitted for ACS.
Multiple sclerosis (MS) is a chronic neurological disorder, prevalent in young adults. MS leads to disability accrual, thus affecting patients' overall quality of life. Moreover, the management of MS poses a significant burden on health systems worldwide. The present study examines the impact of different healthcare settings (e.g. private office/clinic vs. specialized MS center) on the effectiveness of MS management, as well as on patient-reported outcomes related to quality of life (Axis A). Moreover, the study addresses their relative effectiveness in a health crisis, such as the COVID-19 pandemic (Axis B). Data were collected via questionnaires administered to people with MS (pwMS). During the pandemic, before COVID-19 vaccines became available, all data were collected via online questionnaires. Since March 2021, data have been collected both online and in person. Overall, 776 pwMS participated in the study and answered the Axis B questionnaire. Of those, 215 additionally answered the Axis A questionnaire. Regarding Axis A, disease management by a specialized MS center was associated with increased access to healthcare professionals (p < 0.001) and/or MRI examinations (p < 0.001) and was also linked to a shorter time to diagnosis following symptom onset, compared with disease management in a private office/clinic (p < 0.001). Regarding Axis B, specialized MS centers demonstrated remarkable adaptability during the pandemic, swiftly implementing remote care solutions to ensure continuity of care. These findings suggest that care delivered in specialized MS centers is associated with improved access to healthcare services and better patient-reported outcomes, both under routine care conditions and during healthcare crises.
Digital emergency care applications offer potential to reduce delays, enhance triage, and improve care coordination, yet evidence remains limited on their real-world implementation at scale. Maccabi Healthcare Services developed Maccabi-RED, a mobile application allowing patients to request urgent community-based care as an alternative to hospital emergency department visits. This study examines the implementation and utilization of Maccabi-RED during 2020-2023, aiming to describe the demographic and clinical characteristics of patients initiating emergency care requests, identify factors associated with request approval and successful routing to community-based care, and examine healthcare utilization patterns following app-initiated requests. This retrospective study analyzed de-identified electronic health record data from Maccabi Healthcare Services, including all patient-initiated emergency care requests through the Maccabi-RED application between January 2020 and December 2023. The study included 94,795 requests from 77,508 patients. We extracted demographic and clinical variables and examined patterns of subsequent healthcare utilization in the week following app-initiated emergency care requests, comparing approved versus non-approved requests. During the study period, 51.6% of requests were approved, resulting in urgent community clinic appointments. Service utilization increased substantially, from 11,058 requests in 2020 to 36,532 in 2023. Approved requests were more common among older patients and those with chronic conditions. Emergency type strongly influenced approval rates, with foreign body cases showing substantially higher approval odds than orthopedic cases. Geographic, ethnic, and socioeconomic disparities in approval rates were observed. 
In adjusted analyses, approved requests were associated with lower 7-day healthcare utilization, including fewer primary care physician visits and reduced odds of hospital emergency department and emergency medical center visits. The Maccabi-RED application demonstrates feasibility of scaling patient-initiated digital emergency routing, with potential to reduce downstream acute care utilization. However, observed approval disparities across age groups, geographic regions, and socioeconomic strata indicate that digital maturity alone does not guarantee equitable access. These findings underscore the importance of embedding equity considerations in system design, monitoring protocols, and capacity planning. Future development, including artificial intelligence-enabled decision support, should prioritize transparency and algorithmic fairness to improve performance without amplifying existing health inequities.
To evaluate the predictive value of the systemic immune-inflammation index (SII) for clinical efficacy of acupuncture combined with exercise rehabilitation in patients with knee osteoarthritis (KOA). In this retrospective observational study, 151 patients with radiographically confirmed Kellgren-Lawrence Grade II or III KOA who completed an 8-week standardized protocol of acupuncture and exercise rehabilitation were enrolled. Clinical efficacy was defined as the attainment of minimal clinically important improvement (MCII) in both the 11-point Numeric Rating Scale (NRS) for pain (≥ 2-point reduction) and the 68-point WOMAC function subscale (≥ 6-point improvement, Likert version 3.1). Baseline demographic, clinical, and hematological data were collected, and SII was subsequently calculated from baseline hematological parameters. Uni- and multivariate logistic regression analyses were used to identify independent predictors of treatment response. Receiver operating characteristic (ROC) curves were constructed to assess the discriminative ability of SII, body mass index (BMI), and lymphocyte count, individually and in combination. Of the 151 patients, 73 were classified as responders and 78 as nonresponders. Baseline BMI and SII were significantly lower, while lymphocyte count was higher, in the responder group (all p < 0.001). Multivariate logistic regression identified BMI (OR = 0.596, 95% CI: 0.472-0.753), lymphocyte count (OR = 34.597, 95% CI: 3.234-370.131), and SII (OR = 0.912, 95% CI: 0.873-0.953) as independent predictors of treatment response. ROC analysis showed that SII demonstrated only moderate predictive power (AUC = 0.749, 95% CI: 0.670-0.828), with improved but still moderate accuracy when combined with BMI and lymphocyte count (AUC = 0.858, 95% CI: 0.799-0.917). Although SII is an independent and accessible biomarker for predicting clinical efficacy, its discriminative ability is moderate. 
A combined model incorporating SII, BMI, and lymphocyte count may provide better, though not strong, predictive performance and could support clinical decision-making and individualized treatment planning. However, the retrospective single-center design may introduce selection bias and limit generalizability. Residual confounding from unmeasured variables cannot be excluded, and the absence of longitudinal SII measurements limits evaluation of temporal changes. Future multicenter prospective studies are warranted to validate these findings.
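The AUC values reported for SII (0.749 alone, 0.858 combined with BMI and lymphocyte count) quantify discrimination as the probability that a randomly chosen responder ranks above a randomly chosen nonresponder, which is the Mann-Whitney formulation of the ROC area. A minimal illustrative sketch of that equivalence (not the authors' analysis code; note that because lower SII predicted response, SII would be entered with its sign flipped, e.g. as -SII, so that higher scores indicate response):

```python
def auc_mann_whitney(scores, labels):
    """AUC computed as the Mann-Whitney probability that a randomly chosen
    responder (label 1) has a higher score than a randomly chosen
    nonresponder (label 0); tied scores count as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.5 corresponds to chance-level discrimination and 1.0 to perfect separation, which is why the reported 0.749 is characterized as only moderate.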
Across the globe, relapse into substance use after treatment is a common problem, and Botswana is no exception. Youth substance use relapse remains a significant issue, harming young people's social, spiritual, financial, and overall health. The study aimed to explore and describe the perceptions of psychiatric nurses on the prevention of substance use relapse in Lobatse, Botswana. A qualitative, exploratory, and descriptive design was used. Semi-structured interviews were conducted with 15 psychiatric nurses working at a referral psychiatric hospital in Lobatse. Purposive sampling was employed to select participants. Thematic data analysis was used. The study followed the Consolidated Criteria for Reporting Qualitative Research (COREQ) guidelines. Data analysis yielded four main themes and several subthemes. The study revealed that psychiatric nurses perceived youths' relapse into substance use to be due to psychological triggers, family problems, and societal issues. To reduce the likelihood of relapse, the participants proposed that interventions should address the individual, family, and community aspects of a person's life. Additionally, healthcare interventions to prevent relapse were proposed. Substance use relapse is a serious challenge for the youth. This study presents the participants' perceptions of substance use relapse among youth and of preventive interventions. The relapse prevention strategies identified can be used in nursing practice to curb relapse into substance use; for example, the findings can inform the development of relapse prevention programs.