To identify the continuous dose-response relationship between the duration of premature rupture of membranes (PROM) and the probability of neonatal and maternal infectious morbidity. This systematic review and meta-analysis synthesised data from 15 studies worldwide involving more than 70,000 mother-neonate pairs. A two-step random-effects model treating PROM duration as a continuous dose, using restricted cubic splines, was used to estimate specific risk thresholds. The analysis established a progressive, non-linear escalation of risk. Statistically significant risk first emerged at 16 hours for early-onset pneumonia (adjusted OR 1.86, 95% CI: 1.15-2.99). At 18 hours of latency, the incidence of culture-proven neonatal sepsis was 4.0%, and the odds of maternal fever were significantly higher (AOR 36.6). ROC curve analysis revealed a critical inflection point at 37 hours, after which complications escalated exponentially. Latency greater than 48 hours was the most significant independent predictor of culture-proven sepsis, with an adjusted odds ratio of 8.2 (p < 0.001). Histologic chorioamnionitis was detected in 39% of mothers and was clinically silent in many cases. Considerable heterogeneity (I² > 60%) was mainly driven by gestational-age disparities between extremely preterm and term cohorts. PROM latency risk is not a fixed threshold but accelerates with time. Although 18 hours remains an acceptable early-warning level, the 37-to-48-hour window is a high-risk period requiring aggressive management. International guidelines should be revisited to reflect this non-linear trend, especially in pregnancies where the risks of delivery are low compared with the rising risk of continued latency.
To explore the risk factors and predictive value of Mycoplasma pneumoniae (MP) infection in children. A total of 2042 children with suspected MP infection who were treated for the first time at Civil Aviation General Hospital from October 2023 to December 2023 were selected as the study subjects. Among them, 1637 cases were confirmed as MP-infected and were included in the MP infection group, while the remaining 405 cases were not MP-infected and were included in the non-MP infection group. The clinical data of the 2 groups of children (including gender, age, initial symptoms, laboratory indicators, etc.) were compared, the risk factors for MP infection in children were analyzed, and receiver operating characteristic curve analysis was performed. This study showed that the percentage of neutrophils in the non-MP infection group was significantly lower than that in the MP infection group (P < .001). Lymphocyte percentage (LY) and hemoglobin were both lower in the MP infection group than in the non-MP infection group, and the differences were statistically significant (P < .05). Logistic regression analysis revealed that white blood cell count and the neutrophil-to-lymphocyte ratio (NLR) might be valuable markers for predicting MP infection. Spearman correlation indicated that LY was collinear with the occurrence of MP infection, and Least Absolute Shrinkage and Selection Operator (LASSO) regression analysis demonstrated that both LY and NLR might be valuable markers for predicting MP infection (P < .05). Receiver operating characteristic curve analysis revealed that the area under the curve of the NLR for diagnosing MP infection was 0.624, with a cutoff value of 1.36 (sensitivity of 0.798 and specificity of 0.558).
In the diagnosis of MP infection, the consistency between the RNA method and the immunogold colloidal method was poor (Kappa = 0.108, P < .05). Both white blood cell count and NLR are valuable markers for predicting MP infection.
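The two statistics above, the Kappa agreement coefficient and the NLR cutoff classification, can be illustrated with a minimal sketch. All counts and blood values below are hypothetical, not the study's data.

```python
# Minimal sketch: Cohen's kappa from a 2x2 agreement table, plus an NLR
# cutoff rule using the reported threshold of 1.36. Counts are hypothetical.

def cohens_kappa(table):
    """table[i][j] = count where method A gave result i and method B gave j (0/1)."""
    n = sum(sum(row) for row in table)
    observed = (table[0][0] + table[1][1]) / n          # raw agreement
    row = [sum(table[i]) / n for i in range(2)]
    col = [sum(table[i][j] for i in range(2)) / n for j in range(2)]
    expected = row[0] * col[0] + row[1] * col[1]        # chance agreement
    return (observed - expected) / (1 - expected)

def predict_mp(neutrophils, lymphocytes, cutoff=1.36):
    """Flag possible MP infection when NLR exceeds the reported cutoff."""
    nlr = neutrophils / lymphocytes
    return nlr, nlr > cutoff

# Hypothetical 2x2 agreement table for two diagnostic methods
kappa = cohens_kappa([[70, 10], [15, 5]])   # a low kappa signals poor agreement
nlr, flagged = predict_mp(5.2, 3.0)         # hypothetical counts, 10^9/L
print(round(kappa, 3), round(nlr, 2), flagged)
```

A kappa near 0 means the two methods agree little beyond chance, which is why the reported 0.108 is interpreted as poor consistency despite nominally high raw agreement.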
Circadian rhythms affect cardiovascular function and influence the timing and severity of stroke and myocardial infarction (commonly known as a heart attack). Afternoon cardiac surgery may improve outcomes by reducing ischaemia-reperfusion injury, i.e. the tissue damage caused when blood supply returns to tissue (reperfusion) after a period of oxygen deprivation (ischaemia). However, the evidence is conflicting. This systematic review assessed the impact of surgical timing on clinical outcomes after cardiac surgery. To assess the effects of early versus late surgical start times for on-pump cardiac surgery on mortality, cardiac outcomes, and quality of life. We searched CENTRAL, MEDLINE, Embase, and Web of Science Conference Proceedings Citation Index - Science, along with ClinicalTrials.gov and the World Health Organization International Clinical Trials Registry Platform trials registers. We also conducted reference checking, citation searching, and contacted study authors to identify studies for inclusion. The latest search date was 26 January 2025. We included randomised controlled trials (RCTs) in adults undergoing cardiac surgery comparing late with early surgical start times. We excluded non-randomised studies and studies in children. Our critical outcomes were short-term mortality (≤ 30 days postoperative), long-term mortality (> 30 days postoperative), and perioperative myocardial infarction. Other important outcomes were perioperative myocardial injury, postoperative atrial fibrillation, left ventricular ejection fraction, lengths of intensive care unit (ICU) and hospital stays, and quality of life. We used the Cochrane Risk of Bias 2 tool to assess bias in the included RCTs. As only one study met the inclusion criteria, we did not perform meta-analysis. We synthesised results descriptively, and used GRADE to assess the certainty of the evidence for specified outcomes. We included one study with 88 participants.
The included study was conducted in France, and reported on differences in outcomes between morning and afternoon on-pump elective aortic valve replacement in adults. Critical outcomes: No study reported short-term or long-term mortality data for early versus late surgical start times for on-pump cardiac surgery. In the included study, there was no evidence of a difference regarding in-hospital mortality between groups, with no deaths in either group (risk ratio (RR) and 95% confidence interval (CI) not estimable; 1 study, 88 participants). The evidence is very uncertain about the effect of early versus late surgery on perioperative myocardial infarction (RR 0.29, 95% CI 0.06 to 1.30; 1 study, 88 participants, very low-certainty evidence). Important outcomes: There was evidence of lower perioperative myocardial injury as measured by cumulative troponin release over 72 hours in those undergoing late surgery compared to early surgery (MD -46 ng/L × 72 h, 95% CI -79 to -13; 1 study, 88 participants). In the included study, there was no evidence of a difference in new-onset postoperative atrial fibrillation during hospital stay between groups (RR 0.75, 95% CI 0.40 to 1.40; 1 study, 88 participants). No study reported differences in left ventricular ejection fraction at discharge as a continuous variable for early versus late surgical start times for on-pump cardiac surgery. In the included study, there was no evidence of a difference in left ventricular ejection fraction < 45% at discharge between groups (RR 0.40, 95% CI 0.08 to 1.95; 1 study, 88 participants). No study reported differences in length of ICU admission for early versus late surgical start times for on-pump cardiac surgery. There was no evidence of a difference in need for inotropic support between groups in the included study (RR 0.25, 95% CI 0.03 to 2.15; 1 study, 88 participants).
The evidence is very uncertain about the effect of late surgery on length of hospital stay (MD 0.00, 95% CI -1.48 to 1.48; 1 study, 88 participants, very low-certainty evidence). No study reported on the outcome of quality of life. The evidence is very uncertain about the effects of early versus late surgical start time for the outcomes of perioperative myocardial infarction and length of hospital stay. We found no data for the outcomes of short-term or long-term mortality, left ventricular ejection fraction, length of ICU stay, or quality of life. Late surgical start time could reduce the risk of perioperative myocardial injury as estimated by cumulative troponin release over 72 hours. More research is needed to determine whether scheduling heart surgery later in the day improves patient outcomes. This Cochrane review had no dedicated funding. Protocol (2022) DOI: 10.1002/14651858.CD014901.
The aim of this study was to estimate the economic burden of radiotherapy (RT) from the perspectives of payers, the healthcare system, patients, and society, and to assess associated quality-of-life (QoL) outcomes. The analysis examined direct medical and non-medical costs, as well as QoL, before, during, and up to six months after RT. Given the inclusion of multiple cancer types, the study reflects a heterogeneous real-world population. An exploratory comparison across RT techniques was also conducted to provide contextual economic insight. This analysis included data from 301 cancer patients undergoing RT using various techniques, including two-dimensional radiotherapy (2D), 3D conformal radiotherapy (3D-CRT), volumetric-modulated arc therapy (VMAT), and intensity-modulated radiotherapy (IMRT), at the University General Hospital of Heraklion, Crete, Greece. Clinical and cost data were collected retrospectively, while QoL data were collected prospectively using validated instruments at baseline, end of treatment, and six months post-treatment. Quality-adjusted life years (QALYs) were estimated. The primary analysis compared RT with a hypothetical "no RT" comparator derived from published evidence, while comparisons across RT techniques were conducted as exploratory analyses. Costs and QALYs were evaluated over a 6-month time horizon; therefore, discounting was not applied. Incremental cost-effectiveness ratios (ICERs) were calculated, and probabilistic sensitivity analysis was performed to account for parameter uncertainty. The cost per QALY gained with RT compared with the hypothetical "no RT" comparator varied substantially across techniques and cancer types. In the primary analysis, 2D radiotherapy yielded the lowest ICER (€13,043.27/QALY), while VMAT demonstrated an ICER of €29,945.12/QALY. 
In contrast, IMRT was associated with a substantially higher ICER (€135,529.51/QALY), suggesting limited cost-effectiveness under commonly accepted willingness-to-pay thresholds, whereas 3D-CRT was found to be dominant. Subgroup analyses revealed marked heterogeneity, with ICERs ranging from €3234.45 to €30,232.50 per QALY gained across cancer types. In certain subgroups, RT was either cost-saving or dominant, particularly in breast cancer (cost-saving with similar QALYs) and in skin cancer and sarcoma (dominant strategies). Sensitivity analyses highlighted considerable uncertainty, especially for 2D radiotherapy, primarily driven by small sample sizes and variability in QALY estimates. This study provides short-term, real-world evidence on the cost-effectiveness and quality-of-life outcomes of radiotherapy in a Greek healthcare setting. While simpler techniques such as 2D radiotherapy may appear economically favorable, their limited effectiveness and substantial uncertainty may reduce their overall value. In contrast, advanced techniques, particularly VMAT, demonstrate a more consistent balance between cost and clinical outcomes, supporting their role within value-based, patient-centered oncology care. However, the findings should be interpreted with caution due to population heterogeneity, small subgroup sizes, the short (6-month) time horizon, and the use of a hypothetical comparator. Further research with longer follow-up and disease-specific analyses is warranted.
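The ICER arithmetic behind these comparisons, including the "dominant" and "cost-saving" cases, can be sketched briefly. The figures below are hypothetical illustrations, not the study's estimates.

```python
# Hedged sketch of the incremental cost-effectiveness ratio (ICER):
# incremental cost divided by incremental QALYs versus a comparator.
# A strategy is "dominant" when it costs less and yields more QALYs,
# and "dominated" when it costs more for no QALY gain.

def icer(cost_new, qaly_new, cost_ref, qaly_ref):
    d_cost = cost_new - cost_ref
    d_qaly = qaly_new - qaly_ref
    if d_cost <= 0 and d_qaly > 0:
        return "dominant"            # cheaper and more effective
    if d_cost > 0 and d_qaly <= 0:
        return "dominated"           # dearer and no more effective
    return d_cost / d_qaly           # euros per QALY gained

# Hypothetical technique vs. a hypothetical "no RT" comparator
print(icer(12_000, 0.80, 6_000, 0.60))   # ≈ €30,000 per QALY gained
print(icer(5_500, 0.75, 6_000, 0.60))    # dominant: cheaper, more QALYs
```

An ICER is then judged against a willingness-to-pay threshold; dominant strategies need no threshold because they save money while improving outcomes.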
Sickle cell disease (SCD) is a severe, inherited hemoglobin disorder characterized by chronic hemolysis, vaso-occlusive crises (VOCs), and systemic inflammation. Hydroxyurea is the standard conventional pharmacotherapy for SCD, but it has certain limitations, necessitating the exploration of other safe and effective treatment options. Ayurveda interventions, with anti-inflammatory, immunomodulatory, and hematopoietic properties, offer a potential therapeutic approach complementary to conventional medicine for SCD management. This randomized controlled trial will evaluate the efficacy and safety of an Ayurvedic therapeutic regimen as an adjunct to hydroxyurea in SCD management, assessing its impact on hematological parameters, inflammatory biomarkers, VOC frequency, and overall quality of life. A PROBE (Prospective, Randomized, Open-Label, Blinded End Point) study will be conducted on individuals of any gender aged 18 years or older and diagnosed with SCD (with hemoglobin S levels more than 60% and a history of at least 1 VOC per year over the past 3 years). Individuals with acute VOC or any severe infection requiring hospitalization, a history of significant comorbidities, or hematopoietic stem cell transplantation will not be considered. The study will be conducted at the All India Institute of Medical Sciences, Bhopal, India. A total of 100 participants will undergo random assignment in a 1:1 ratio to receive either an Ayurveda regimen (Dadimadi Ghrita, Punarnavadi Mandura, and Vasaguduchyadi Kwatha) as an add-on to hydroxyurea or hydroxyurea alone for 8 months. The primary outcome will be the change in hemoglobin electrophoresis parameters (hemoglobin S, fetal hemoglobin, and adult hemoglobin) and the frequency of VOC episodes over 8 months.
The secondary outcome measures include changes in the levels of proinflammatory markers (interleukin-6, interleukin-8, C-reactive protein, and transforming growth factor-β) and lactate dehydrogenase, frequency of hospitalization for VOCs and blood transfusions, and health-related quality of life (Short Form-8 Health Survey questionnaire). Safety will be evaluated by recording the incidence of adverse events and changes in liver and kidney function tests from baseline. Recruitment of study participants began on November 1, 2023. By the second week of February 2025, 83 participants had been enrolled. The study is expected to be completed by December 31, 2025. We will begin analysis of the study outcomes in February 2026, and publication of the final results is expected by August 2026. This randomized controlled trial protocol outlines a rigorous study design aimed at exploring the potential benefits of an integrated therapeutic regimen comprising Ayurveda interventions and standard conventional care in the long-term management of SCD through validated clinical and laboratory parameters. The outcomes of this study can address the needs and challenges associated with SCD management and inform future management protocols.
Quantitative detection of human immunodeficiency virus-type 1 (HIV-1) and hepatitis C virus (HCV) RNA plays a crucial role in diagnosis, monitoring of therapy and evaluation of treatment response. The ELITe BeGenius® platform (ELITechGroup, Turin, Italy) is a fully automated sample-to-result molecular system integrating extraction, amplification and detection within a single workflow. The HIV-1 ELITe MGB® and HCV ELITe MGB® assays are real-time polymerase chain reaction tests designed for plasma viral-load quantification. This study aimed to verify their analytical performance under routine clinical laboratory conditions. Verification included assessments of accuracy, intra- and inter-assay precision, linearity and method correlation. A total of 70 plasma samples for HIV-1 RNA and 52 for HCV RNA were analyzed using previously tested and stored patient specimens, reference materials, and external quality controls. Results were compared with established reference assays used in accredited laboratories. Statistical analyses included positive, negative, and overall percent agreement (PPA, NPA, OPA), coefficients of variation (CV%), correlation and regression analyses and Bland-Altman bias estimation. For HIV-1 RNA, 19 of 20 positive and all 20 negative plasma samples were correctly identified by the ELITe MGB® assay, yielding a PPA of 95.0%, NPA of 100.0% and OPA of 97.5% (κ= 0.95). Intra-assay precision showed strong repeatability, with CVs of <1-3.9% for low-positive, 0.4-6.4% for medium-positive and <2% for high-positive specimens. Inter-assay reproducibility was consistent, with CVs of 12.8% at low, 2.4% at medium, and 1.4% at high viral loads. Correlation analysis showed excellent concordance with the reference assay (r= 0.975, p< 0.001; R²= 0.95) and a mean bias of -0.40 log10 copies/mL in Bland-Altman analysis. Linearity was strong (R²= 0.97), confirming accurate quantification across the dynamic range with minor underestimation at higher dilutions.
For HCV RNA, all 14 positive and 14 negative samples were correctly classified (PPA, NPA, and OPA= 100%; κ= 1.00). Intra-assay precision was excellent, with CVs around 2% for both low- and medium-positive samples, confirming consistent repeatability within a single run. Inter-assay reproducibility was equally robust, with CVs of 0.4-3.3% for low positives, 1.0-2.5% for medium and <1.1% for high-titer specimens. Correlation with the comparator method was strong (r= 0.956, p< 0.001; R²= 0.91) with a mean bias of -0.38 log10 IU/mL. Linearity analysis confirmed high proportionality between expected and measured concentrations (R²= 0.96). Deviations were negligible at low titers and slightly elevated at high loads but remained within acceptable limits. The HIV-1 and HCV ELITe MGB® assays on the BeGenius® platform demonstrated high accuracy, reproducibility and linearity, showing excellent correlation with reference methods. These results confirm that the ELITe BeGenius® system provides reliable and clinically valid viral-load measurements suitable for routine diagnostic use. Comprehensive laboratory verification of molecular assays under real-world conditions is crucial to ensure consistent performance, cross-platform comparability and reliable viral-load monitoring.
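The two verification statistics used throughout this report, intra-assay CV% and Bland-Altman mean bias on the log10 scale, reduce to simple arithmetic. The sketch below uses hypothetical viral-load values, not the study's measurements.

```python
# Minimal sketch of two assay-verification statistics: coefficient of
# variation (SD/mean of replicate measurements, as a percentage) and
# Bland-Altman mean bias between candidate and reference assays,
# computed on the log10 scale as is conventional for viral loads.
import math
import statistics

def cv_percent(replicates):
    """Intra-assay CV%: spread of replicate measurements relative to their mean."""
    return statistics.stdev(replicates) / statistics.mean(replicates) * 100

def mean_bias_log10(candidate, reference):
    """Mean per-sample difference, candidate minus reference, in log10 units."""
    diffs = [math.log10(c) - math.log10(r) for c, r in zip(candidate, reference)]
    return statistics.mean(diffs)

reps = [1020, 980, 1015, 995]                 # hypothetical replicates, IU/mL
print(round(cv_percent(reps), 2))             # low CV% = good repeatability
print(round(mean_bias_log10([800, 1500], [2000, 3800]), 2))  # negative = underestimation
```

A negative mean bias, as reported here (-0.38 to -0.40 log10), indicates the candidate assay reads systematically lower than the reference, by a factor of roughly 10^0.4 ≈ 2.5-fold in linear units.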
Early parent-child interactions are crucial for child development. Existing assessment scales have several limitations: intensive training often exceeding 40 hours, administration time up to two hours, and unbalanced distribution of items to the detriment of the dyadic dimension of interactions. Our study aims to develop and validate the MIPPE (Measure of Early Parent-Child Interactions), a scale adapted for daily clinical practice that assesses the quality of early interactions while integrating the dyadic dimension. The MIPPE scale was developed within the PERL program in Eastern France. After several revisions, the final version includes 9 items scored from 0 to 3. Validation is based on the analysis of 228 parent-child interaction videos from 123 dyads at 4 and 24 months, divided between intervention (N = 62) and control groups (N = 61). The MIPPE demonstrates acceptable internal consistency (Cronbach's α = 0.92, McDonald's ω = 0.93). Exploratory factor analysis reveals a unidimensional structure explaining 66% of the variance, confirmed by confirmatory factor analysis (CFI = 1.000, TLI = 1.002, RMSEA = 0.000). Inter-item correlations range from 0.34 to 0.67, indicating satisfactory cohesion. Clinical thresholds have been established: 15 and 20 (optimal sensitivity 87%). The MIPPE constitutes an accessible, rapid, and psychometrically validated assessment tool for early childhood professionals. It facilitates early screening of interactional difficulties and referral to interventions, thus contributing to the prevention of developmental disorders and parental support.
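The internal-consistency figure reported for the MIPPE (Cronbach's α) is the ratio of shared item variance to total-score variance. The sketch below computes it from hypothetical 0-3 item ratings, not the study's data.

```python
# Compact sketch of Cronbach's alpha for a k-item scale:
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores)).
# Ratings below are hypothetical 0-3 scores on a few MIPPE-style items.
import statistics

def cronbach_alpha(items):
    """items: one inner list of scores per item, aligned across respondents."""
    k = len(items)
    item_vars = sum(statistics.variance(scores) for scores in items)
    totals = [sum(col) for col in zip(*items)]     # per-respondent total score
    total_var = statistics.variance(totals)
    return k / (k - 1) * (1 - item_vars / total_var)

# 3 hypothetical items scored 0-3 across 5 dyads
items = [
    [3, 2, 1, 3, 0],
    [3, 1, 1, 2, 0],
    [2, 2, 0, 3, 1],
]
print(round(cronbach_alpha(items), 2))
```

Alpha approaches 1 when items rise and fall together across respondents, which is what the reported α = 0.92 indicates for the 9 MIPPE items.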
Crimean-Congo hemorrhagic fever (CCHF) is a zoonotic viral hemorrhagic disease with a wide clinical spectrum ranging from mild clinical presentations to fatal outcomes. Reported case-fatality rates vary between 3% and 30%. Therefore, early identification of high-risk patients and referral to appropriate centers are of critical importance in reducing mortality. In this study, we aimed to evaluate the performance of the CURB-65+B score in predicting mortality in patients with CCHF and to compare it with the Acute Physiology and Chronic Health Evaluation II (APACHE II), Sequential Organ Failure Assessment (SOFA), Severity Grading System (SGS) and Severity Scoring Index (SSI) scoring systems. Data from 254 adult CCHF patients who were followed in a tertiary care center and whose diagnoses were confirmed by reverse transcription polymerase chain reaction and/or IgM positivity were retrospectively analyzed. Demographic, clinical and laboratory data were recorded, and CURB-65, CURB-65+B, SOFA, APACHE II, SGS and SSI scores were calculated at admission. Survivors and non-survivors were compared; the predictive performance of each scoring system for mortality was assessed using receiver operating characteristic (ROC) analysis, and multivariable logistic regression analysis was performed. The overall mortality rate was 2.8% (n= 7). Non-survivors were older and more frequently presented at admission with confusion, hypotension, tachycardia, tachypnea and bleeding. In laboratory analyses, platelet counts and fibrinogen levels were lower in non-survivors compared with survivors, whereas aspartate aminotransferase, alanine transaminase, lactate dehydrogenase, activated partial thromboplastin time, international normalized ratio, D-dimer and C-reactive protein (CRP) levels were higher.
In ROC analysis, the most powerful laboratory predictors of mortality were CRP [>14.6 mg/L; area under curve (AUC)= 0.938], prothrombin time (>13.5 s; AUC= 0.996), creatinine (>1.2 mg/dL; AUC= 0.882), and D-dimer (>4770 µg/L; AUC= 0.662). The highest overall accuracy, however, was obtained with the CURB-65+B score (AUC= 0.997; 100% sensitivity; 98.8% specificity; p< 0.001). In multivariable analysis, only the CURB-65+B score was identified as an independent predictor of mortality (Odds ratio≈ 14; 95% confidence interval= 2.7-72.6; p= 0.002). Moreover, mortality was markedly higher in patients with a CURB-65+B score ≥3, whereas no deaths were observed among patients with a score <3. With its simple and easily applicable structure, the CURB-65+B score represents a powerful tool for predicting mortality in CCHF. It may facilitate the rapid identification of high-risk patients, recognition of those requiring intensive care, and timely referral to appropriate centers in emergency departments and endemic settings. Compared with existing mortality scoring systems, CURB-65+B appears to be more practical and distinctive, and to possess higher predictive accuracy; it may thus fill an important gap in the clinical management of CCHF.
Human adenoviruses (HAdVs) are responsible for a broad range of pediatric illnesses. Although many cases are self-limiting, certain children may develop complications requiring pediatric intensive care unit (PICU) admission. To address the role of HAdVs in children, we aimed to characterize the 10-year clinical spectrum of pediatric HAdV infections and to identify factors associated with hospitalization and severe disease. This retrospective observational cohort study analyzed the clinical course of children (< 18 years) with laboratory-confirmed HAdV infection episodes treated at a tertiary pediatric center between 2014 and 2024. Demographic features, comorbidities, laboratory results, respiratory support requirements, PICU admission, and clinical outcomes were collected. Severe outcome was defined as a composite endpoint including PICU admission, invasive mechanical ventilation, and/or 30-day mortality. A total of 877 HAdV infection episodes were included, of which 276 (31.5%) resulted in hospitalization. Among hospitalized patients, 32.2% required respiratory support, 18.8% required PICU care, and the 30-day mortality was 2.5%. Overall, 104 children (11.9%) developed severe disease. Underlying medical conditions were significantly more frequent among children with severe disease, while those without underlying disease were more common in the non-severe group (69.8% vs. 22.1%; p < 0.001). Viral coinfections were detected in 35.8% of patients but were not associated with increased hospitalization or severity.
In multivariable analysis, male sex (OR 2.30, 95% CI 1.23-4.29; p = 0.009), neurologic disease (OR 2.12, 95% CI 1.01-4.42; p = 0.045), cardiac disease (OR 3.27, 95% CI 1.49-7.17; p = 0.003), and hematopoietic stem cell transplantation (HSCT) (OR 4.32, 95% CI 1.36-13.7; p = 0.013) were independently associated with severe outcome. Conclusion: Human adenovirus infection represents a notable clinical burden in children, with a considerable proportion requiring hospitalization and developing severe disease. Underlying neurologic and cardiac diseases, as well as HSCT, were key risk factors for severe outcome, whereas viral coinfections were common but not associated with worse clinical outcome.
Speech audiometry is widely used in routine clinical settings to assess auditory function in children. Appropriate test materials are available in languages such as English or German; however, formally validated translations do not exist in less widely spoken languages such as Hungarian. This protocol aims to describe an evidence-based process of translation, cultural adaptation and validation of a paediatric speech audiometry test from its original language into another. The Mainzer Audiometric Test for Children (MATCH) is a picture-pointing speech audiometry test for children aged 3-6 years. It will be translated from the original German into Hungarian in six phases: (1) identification of MATCH test items and validation of MATCH picture recognisability among children; (2) confirmation of linguistic conformity by comparing phoneme distribution of Hungarian test vocabulary to spontaneous Hungarian speech reference data; (3) recording of Hungarian speech material in a sound-treated environment complying with ISO 8253-3:2022 standards; (4) evaluation of the homogeneity of the intelligibility of the recorded Hungarian test items through speech recognition testing in adults; (5) standardisation of the Hungarian MATCH test on a cohort of normal-hearing children aged 3-6 years whose dominant language is Hungarian and (6) assessment of the diagnostic validity of the Hungarian version of the MATCH by comparing MATCH speech recognition thresholds to pure-tone audiometry results. To determine sensitivity, specificity and optimal cut-off points for the Hungarian test in detecting hearing loss among children aged 3-6 years, receiver operating characteristic analysis will be used. The protocol has been registered on ClinicalTrials.gov (NCT07156825), complies with the Declaration of Helsinki, and has been approved by the medical ethics committee at Semmelweis University Budapest (Ref. nr.: SE RKEB 312/2021). The study using the speech test is currently in progress. 
The results and conclusions will be shared with the scientific community through publication in a peer-reviewed journal. NCT07156825.
Primary hepatic cancer (PHC) remains a significant burden worldwide, and diagnosis, treatment response monitoring, and prognostication using laboratory biomarkers remain a challenge. This study analyzed multi-laboratory parameters pre- and post-treatment and their associations with clinical and pathological features and clinical decisions in PHC patients to evaluate their clinical usefulness. Laboratory data were compared between pre- and post-treatment, and the efficacy of commonly used tumor markers and their associations with clinical and pathological features were assessed. Data from 441 PHC patients were analyzed. Serum alpha-fetoprotein (AFP), alpha-L-fucosidase (AFU), alanine aminotransferase (ALT), aspartate aminotransferase (AST), gamma-glutamyltransferase (γ-GT), indirect bilirubin (IBIL), and hepatitis B virus DNA (HBV DNA) were reduced after treatment; AFP, carcinoembryonic antigen (CEA), and cancer antigen 19-9 (CA-199) showed value in assisting the judgment of surgical resection; serum ferritin increased significantly as stage advanced. AFP, CEA, CA-199, and ferritin were lower in surgical resection patients than in non-surgical patients. A significant association was displayed between AFP levels and multiple biomarkers and clinical features when the AFP cut-off was adjusted to 140 ng/mL. AFP was increased in PHC patients with chronic hepatitis B infection and liver cirrhosis, while CA-199 was decreased in both situations. The tumor markers AFP, CEA, and CA-199; the serum chemistry panel of AFU, ALT, AST, γ-GT, bilirubin, and protein; and HBV DNA are useful indicators for monitoring therapeutic response and disease progression, and may assist in the judgment of surgical resection in PHC.
Patients with head and neck cancer (HNC) frequently experience functional impairments and psychological distress following surgery or radiotherapy. While mobile health (mHealth) interventions are increasingly integrated into clinical care to support patient self-management and home-based recovery, evidence of their effectiveness in HNC remains inconsistent. This study aims to evaluate the effectiveness of mHealth interventions on quality of life (QoL), psychological symptoms, physical symptoms, and functional recovery among HNC survivors. This systematic review and meta-analysis followed the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. Overall, 12 electronic databases were searched for randomized controlled trials published up to December 1, 2025. Two reviewers independently performed study selection, data extraction, and risk of bias assessment using the Cochrane Risk of Bias 2.0 tool. Certainty of evidence was evaluated using the GRADE (Grading of Recommendations, Assessment, Development, and Evaluation) approach. Pooled effects were calculated as standardized mean differences (SMDs) with 95% CIs. The primary outcome was QoL; secondary outcomes included anxiety, depression, fatigue, pain, swallowing function, and maximal interincisal opening (MIO). A total of 26 randomized controlled trials involving 2385 participants were included. mHealth interventions significantly improved QoL (SMD 0.64, 95% CI 0.41-0.88; P<.001) and reduced anxiety (SMD -0.75, 95% CI -1.42 to -0.08; P=.03), depression (SMD -0.89, 95% CI -1.37 to -0.40; P<.001), and fatigue (SMD -0.85, 95% CI -1.19 to -0.51; P<.001). Pain showed a small reduction (SMD -0.37, 95% CI -0.49 to -0.24; P<.001). However, the improvement in swallowing function reached only borderline significance (SMD 0.66, 95% CI 0.28-1.04; P=.04), suggesting that current evidence for this outcome is fragile. No significant effect was found for MIO (P=.68).
Subgroup analysis revealed that interventions featuring home practice support, self-monitoring, and shorter durations (<3 months) yielded stronger clinical effects. The overall certainty of evidence ranged from low to very low. mHealth interventions effectively enhance QoL and alleviate psychosocial distress in patients. However, evidence for improving swallowing function and MIO remains insufficient. Future research should prioritize standardized protocols and high-quality trials to validate long-term clinical value.
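The pooled SMDs above come from inverse-variance random-effects weighting. A minimal sketch of that machinery, using the common DerSimonian-Laird estimator of between-study variance and hypothetical trial data, looks like this:

```python
# Hedged sketch of inverse-variance random-effects pooling of SMDs.
# Between-study variance (tau^2) is estimated with DerSimonian-Laird and
# added to each study's variance before weighting. Inputs are hypothetical.
import math

def pool_random_effects(effects, variances):
    w = [1 / v for v in variances]                  # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sw
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - df) / c)                   # DerSimonian-Laird tau^2
    w_star = [1 / (v + tau2) for v in variances]    # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Three hypothetical trials reporting SMDs for QoL with their variances
smd, ci = pool_random_effects([0.5, 0.9, 0.4], [0.04, 0.06, 0.05])
print(round(smd, 2), tuple(round(x, 2) for x in ci))
```

When tau² > 0, the random-effects weights are flatter than the fixed-effect weights, so heterogeneous studies pull the pooled estimate less than their precision alone would suggest.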
Background: Chronic spontaneous urticaria (CSU) presents with unpredictable, often debilitating symptoms, yet detailed clinical characterization, particularly in pediatric populations, remains limited. Improved understanding of clinical profiles and comorbidities may guide management strategies. Objective: The objective was to characterize clinical presentation, comorbid type 2 inflammatory diseases, and treatment responses in a large cohort of pediatric and adult patients with CSU across multiple allergy practices. Methods: A retrospective observational cohort study was conducted across four U.S. allergy clinics, including two pediatric- and two adult-focused practices. Medical records of 400 patients with CSU (189 pediatric patients, 211 adults) with ≥ 6 months of follow-up between 2013 and 2023 were reviewed for demographic data, comorbidities, clinical presentation, clinician-documented disease severity, and treatment outcomes. Results: Disease severity spanned mild (24%), moderate (28%), and severe (48%) disease; adults more often presented with severe disease (65% versus 32% in pediatric patients). Atopic comorbidities were common, including allergic rhinitis (72%), asthma (38%), and atopic dermatitis (17%), with 43% of patients having two or more type 2 inflammatory comorbidities. H1-antihistamines were the most frequently used therapies, with cetirizine being most common. Omalizumab showed effectiveness in most patients, although ∼31% experienced partial or no response. Systemic corticosteroids and cyclosporine were used infrequently. Sparse documentation of physical findings and laboratory analysis limited further objective analysis. Conclusion: This U.S. multicenter study highlights the substantial burden of atopic disease and refractory CSU in both pediatric and adult populations. 
These findings emphasize the need for prospective studies with standardized clinical documentation to better understand CSU phenotypes and optimize individualized treatment strategies.
Mediastinal and/or hilar lymphadenopathy (MHL) has become more frequently identified. Despite advances in endobronchial ultrasound (EBUS)-guided sampling, evidence on the optimal sampling modality is lacking. This systematic review and network meta-analysis aimed to compare the diagnostic performance and safety of different EBUS-guided sampling methods in MHL. The analysis included published studies reporting the diagnostic performance and complications of EBUS-guided transbronchial needle aspiration (EBUS-TBNA), EBUS-guided transbronchial mediastinal cryobiopsy (EBUS-TBMC), and EBUS-guided transbronchial forceps biopsy (EBUS-TBFB) for sampling MHL. The outcomes included diagnostic yield, diagnostic sensitivity, negative predictive value, and complications. A frequentist random-effects network meta-analysis was used to rank diagnostic performance and safety. Subgroup analyses were stratified by disease. Twenty-two studies with 2357 patients were included. EBUS-TBMC had the highest diagnostic yield, at 88.0%, surpassing EBUS-TBFB (77.1%) and EBUS-TBNA (67.7%). For lymphoma, EBUS-TBMC achieved a diagnostic sensitivity of 94.1%, substantially higher than EBUS-TBNA (40.8%). For benign conditions, EBUS-TBMC (87.9%) also outperformed EBUS-TBNA (55.2%). In the network meta-analysis, the combination of EBUS-TBMC with EBUS-TBNA was the most effective approach and was significantly better than EBUS-TBNA alone. EBUS-TBMC was the best single method for diagnosing sarcoidosis. EBUS-guided sampling can be considered acceptably safe, with the vast majority of complications being grade 1-2 bleeding. EBUS-TBNA remains the established standard for diagnosing lung cancer in patients with MHL. EBUS-TBMC demonstrates superior overall performance, particularly in the diagnosis of benign entities and rare malignancies. EBUS-TBFB plays a supplementary role. All of the techniques exhibit an acceptable safety profile.
While sustained remission is the treatment goal for rheumatoid arthritis (RA), long-term remission remains challenging. This real-world study aimed to assess the role of clinical deep remission (CliDR) in sustained remission and the risk of relapse, as well as its interaction with drug tapering. Of the 541 patients enrolled in the cohort, 145 who achieved remission by the Disease Activity Score in 28 joints using C-reactive protein (DAS28-CRP) and had 5 years of follow-up after remission were included in this study. Participants were stratified by CliDR status (defined as no tender/swollen joints with normal CRP/erythrocyte sedimentation rate). Kaplan-Meier analysis was used to compare sustained remission rates. Multivariable Cox regression with time-dependent covariates was performed to identify independent predictors and to assess the interaction between CliDR and drug tapering. The sustained remission rate declined over time but was significantly higher in the CliDR group (62.5%; 95% CI 45.3% to 77.1%) than in the non-CliDR group (38.1%; 95% CI 29.6% to 47.3%) at 5 years (p=0.014). In the non-CliDR group, tapering was associated with an 8.47-fold increase in the hazard of relapse (HR=8.47, 95% CI 5.01 to 14.32, p<0.001). The interaction between group and tapering was statistically significant (HR=0.26, 95% CI 0.07 to 0.98, p=0.046), indicating that the effect of tapering on relapse risk was significantly attenuated in the CliDR group compared with the non-CliDR group. Our findings suggest that achieving CliDR is associated with significantly higher sustained remission rates in RA and that the safety of treatment tapering is contingent on prior achievement of CliDR.
Influenza viruses are among the key pathogens causing acute respiratory infections in humans and impose a significant economic burden. From August 2023 to May 2024, hospitalized severe acute respiratory infection (SARI) cases were recruited from Shouguang People's Hospital in Weifang, Shandong Province. Reverse transcription-polymerase chain reaction was used to detect influenza virus nucleic acids, including influenza A and B viruses. Clinical information and itemized expense bills of these hospitalized SARI cases were collected for analyses of demographics and clinical characteristics. In addition, cost analysis and investigation of factors influencing the economic burden of influenza were carried out from the perspectives of third-party payers and society. A total of 1,772 SARI cases were included in the study, with an influenza positivity rate of 7.5%. Among these cases, patients aged 70 years and above accounted for the highest proportion of hospitalizations. Influenza viruses were more likely to result in severe illness or death, while influenza vaccination significantly reduced the influenza positivity rate. The total medical insurance expenditure was $65,844.62, accounting for 64.3% of the total economic burden. For influenza-positive cases, the average direct medical burden was $783.95 per person, and the average out-of-pocket expenditure per patient was $288.88. Laboratory testing fees and medication fees accounted for the highest proportions of costs. The study found that age, length of hospital stay, and vaccination status were factors influencing the direct medical economic burden of influenza. Influenza vaccination was significantly associated with the influenza positivity rate.
Coronary artery disease (CAD) remains one of the leading causes of death worldwide. This study aimed to evaluate the predictive value of combining the serum triglyceride to high-density lipoprotein cholesterol ratio (TG/HDL-C) with the Controlling Nutritional Status (CONUT) score for assessing lesion severity and short-term prognosis in CAD patients. A total of 138 patients undergoing their first coronary angiography were enrolled and divided into CAD (n=88) and non-CAD (n=50) groups. Baseline characteristics and laboratory parameters were collected, and the TG/HDL-C ratio and CONUT score were calculated. Lesion severity was assessed using the Gensini score, and CAD patients were followed for 6 months. Multivariate logistic regression was performed to identify independent risk factors, and receiver operating characteristic (ROC) analysis was used to evaluate predictive performance. Both TG/HDL-C ratio (odds ratio [OR]=3.58) and CONUT score (OR=1.70) were independent predictors of CAD severity and showed significant positive correlations with Gensini scores. ROC analysis demonstrated that the combined model of TG/HDL-C and CONUT score achieved an area under the curve (AUC) of 0.86 for predicting CAD severity, outperforming individual markers. The combined model also exhibited stable predictive value for short-term prognosis (AUC=0.80). The serum TG/HDL-C ratio and CONUT score are independent and effective biomarkers for evaluating lesion severity and short-term prognosis in CAD patients. The combined predictive model enhances accuracy and may serve as a clinical tool for risk assessment and intervention decisions.
Cardiovascular disease (CVD) remains the leading cause of morbidity and mortality worldwide. Ectodysplasin A2 receptor (EDA2R), a newly identified member of the tumor necrosis factor receptor superfamily, has been implicated in metabolic and inflammatory processes, but its role in CVD is unknown. We aimed to examine the associations of plasma EDA2R levels with incident CVD and all-cause mortality. A total of 45,305 UK Biobank participants with baseline plasma proteomics measured by Olink were included. EDA2R expression levels in the UK Biobank were converted to normalized protein expression (NPX) values. Cox proportional hazards models were used to assess the relationships between EDA2R and CVD, as well as all-cause mortality. Temporal trajectories of EDA2R before events were examined using a nested case-control design with LOESS smoothing. Causal mediation and GO enrichment analyses were performed to identify mediating proteins and underlying pathways. Over a median follow-up of 15 years, 8667 participants (19.1%) developed CVD, and 3988 (8.8%) died. After full adjustment, each 1-NPX increase in plasma EDA2R was associated with a 74% higher risk of CVD and a 177% higher risk of all-cause mortality, with risks increasing monotonically across the entire distribution. Mediation analysis identified 302 mediating proteins for CVD and 482 for mortality, enriched in death receptor activity, tumor necrosis factor receptor activity, cytokine and growth factor binding, and immune receptor activity. Elevated plasma EDA2R is strongly associated with long-term risks of CVD and all-cause mortality, suggesting its potential as a novel prognostic biomarker and therapeutic target.
The objectives of the study are to evaluate whether quantitative NS1 antigen and IgM/IgG antibody titres predict clinical severity in paediatric dengue and to determine whether the IgM/IgG ratio reliably differentiates primary from secondary infection. A 1-year cross-sectional study was conducted at a tertiary care centre in South India, enrolling children aged 1 month to 18 years with laboratory-confirmed dengue. Quantitative NS1 antigen, IgM, and IgG titres were measured at point-of-care and correlated with clinical outcomes. Severity indices included bleeding manifestations, third spacing, thrombocytopenia, hepatic and renal involvement, shock, and duration of hospitalization. Statistical analysis was performed using chi-square and Mann-Whitney U tests, with receiver operating characteristic (ROC) and area under the ROC curve (AUROC) analyses using IBM SPSS version 29. High NS1 titres (60-100 and ≥ 100) were significantly associated with bleeding, renal dysfunction, elevated SGOT, haemoconcentration, and thrombocytopenia. IgM titres < 4 correlated strongly with third spacing (82.1%, p = 0.019), while IgG titres ≥ 35 predicted third spacing (75%), thrombocytopenia, and elevated transaminases. An IgM/IgG ratio < 0.85 differentiated secondary dengue with superior sensitivity and specificity compared with higher cutoffs (AUROC 0.793). Infantile dengue demonstrated greater severity, with hypoalbuminemia (62%) and central nervous system involvement (25%), despite lower NS1 detection rates, and showed reversal of the typical SGOT > SGPT pattern.  Quantitative NS1 antigen and IgM/IgG antibody titres provide early severity-predictive information in paediatric dengue. An IgM/IgG ratio < 0.85 reliably identifies secondary infection. Incorporating these markers into routine clinical evaluation may enhance early risk stratification, particularly in resource-limited settings. 
• NS1 antigen and IgM/IgG antibody titres are widely used for diagnosis, and the IgM/IgG ratio is commonly used to classify primary versus secondary dengue. • Dengue severity is higher in secondary infection and is usually assessed using haemoconcentration, thrombocytopenia, and clinical warning signs, which often appear late. • Quantitative titres: high NS1 and low IgM antibody titres predict paediatric dengue severity early, allowing identification of high-risk children at presentation. • Infants show lower NS1 titres but higher severity; IgG titres in secondary dengue do not correlate with day of illness; an IgM/IgG ratio < 0.85 outperforms higher cutoffs (1.59) in identifying secondary dengue.
This protocol outlines a systematic review with meta-analysis to evaluate the evidence that hearing aid use versus non-use enhances the health-related quality of life (HRQoL) of adults with sensorineural hearing loss (SNHL). The study has three aims: (1) to provide a graded recommendation regarding hearing aid use and its impact on HRQoL in adults with SNHL; (2) to assess the effects of hearing aid use on mental health, cognition, and balance within this group; and (3) to examine, through subgroup analyses, whether outcomes are influenced by variables such as patient age, degree of hearing loss, sex, amplification type, duration since fitting, or verification method. Participants will be adults over 18 years with mild to profound SNHL. Only those living independently or in assisted living facilities will be included; individuals in acute care, living in skilled nursing facilities, or who are incarcerated will be excluded. Amplification options include the following: hearing aid styles (behind the ear, in the ear, etc.), power sources (battery, rechargeable, solar), signal processing (analog or digital), microphone type (omnidirectional or directional), fitting (monaural or binaural), service delivery (audiologist fit or direct to consumer), and payment type (self-pay or free). The following databases will be searched: CINAHL (via EBSCOhost), Cochrane Library, EMBASE (via Ovid SP), MEDLINE (via Ovid SP), PubMed, Scopus, Citation Indexes of Web of Science, ISRCTN Registry, ClinicalTrials.gov, and WHO International Clinical Trials Registry Platform. The Covidence online platform will be utilized for evidence synthesis, including the selection of studies, quality assessment (using Cochrane's Revised Risk-of-Bias Tool for Randomized Trials [RoB 2]), and data extraction for the meta-analyses. No ethical issues are expected.
Systematic reviews do not require Institutional Review Board (IRB) approval because data for meta-analyses are available to the public and included studies have already gone through an IRB review. The protocol and findings of the systematic review will be presented at professional meetings and published in scientific journals.