Skin neglected tropical diseases (NTDs) such as cutaneous leishmaniasis, lymphatic filariasis, mycetoma, and podoconiosis affect millions in endemic regions, but are under-recorded despite causing significant burdens. Predictive modelling has been used to estimate the distribution and prevalence of some of these diseases, and predictions may be useful for identifying at-risk populations and guiding interventions. This review synthesises the literature on modelling approaches to predict skin NTD distributions, aiming to identify prevalent methodologies, evaluate their strengths and limitations, highlight research gaps, and provide recommendations for enhancing their utility. We conducted a systematic literature review across three databases, including studies published between 2000 and 2024. Studies were included if they employed statistical models or machine learning algorithms to predict the distribution of skin NTDs. Two independent reviewers screened titles, abstracts, and full texts. Data extracted included disease, study region, source of epidemiological data, model types, and predictors. Of 2,870 retrieved records, 68 met the inclusion criteria. The most modelled skin NTDs were cutaneous leishmaniasis (n = 26) and lymphatic filariasis (n = 18). Geostatistical modelling was the most common approach, followed by ecological niche modelling, with MaxEnt and generalised linear models constituting the predominant model types. Common environmental covariates included climate, land cover and land use, elevation, and soil data. The types of epidemiological data varied, with many studies relying on passive surveillance and pseudo-absence data. The risk of bias was high among ecological niche models. Environmental and geostatistical models can inform targeted interventions for skin NTDs, aiding efficient resource allocation and public health planning.
However, data limitations, especially the absence of true absence data, underreporting and variations in surveillance sensitivity, can reduce model accuracy and undermine decision-makers' confidence. Future studies should focus on incorporating information about case identification into modelling frameworks, including a broader spectrum of environmental and socio-economic determinants, and ensuring validation across diverse geographic regions.
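The generalised linear models the review identifies can be illustrated with a minimal logistic-regression sketch fitted to presence/pseudo-absence records. Everything below is a synthetic assumption for illustration (covariates, effect sizes, sample size), not data or code from any included study.

```python
import numpy as np

# Hedged sketch of a species-distribution GLM: presence (1) vs.
# pseudo-absence (0) regressed on environmental covariates.
rng = np.random.default_rng(0)
n = 500
temp = rng.normal(25.0, 3.0, n)     # mean annual temperature (deg C), assumed
elev = rng.normal(400.0, 150.0, n)  # elevation (m), assumed

# Assumed "true" suitability: warmer, lower-lying sites are more suitable.
logit = 0.4 * (temp - 25.0) - 0.005 * (elev - 400.0)
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

# Standardise covariates, then fit by plain gradient ascent
# on the logistic log-likelihood.
X = np.column_stack([
    np.ones(n),
    (temp - temp.mean()) / temp.std(),
    (elev - elev.mean()) / elev.std(),
])
beta = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    beta += 0.5 * X.T @ (y - p) / n

print(beta)  # intercept, temperature effect, elevation effect
```

Fitted to real surveillance data, the predicted probabilities from such a model would be mapped over a covariate raster to produce the risk surfaces the review describes; MaxEnt plays an analogous role when only presence records are available.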
High-risk corneal transplantation in patients with glaucoma, keratoconus, or prior graft failures is frequently associated with suboptimal outcomes. Although individual determinants such as age, gender, diabetes, and intraocular pressure (IOP) are recognized risk factors, their collective impact on graft survival has not been systematically quantified. This meta-analysis aims to clarify the role of glaucoma, keratoconus, and key clinical factors in shaping the prognosis of high-risk corneal transplantation. A systematic search of PubMed, Cochrane, Embase, and Web of Science databases was conducted to identify studies on high-risk corneal transplantation involving glaucoma and keratoconus. Data from 11 observational studies (10,558 cases) were analyzed. Across the included studies, transplant success was generally defined as graft survival with preserved corneal clarity. Subgroup analyses were performed to evaluate the impact of demographic and clinical determinants on transplant success. Glaucoma was found to significantly impact transplant success, with an odds ratio (OR) of 1.43 (95% CI [1.26, 1.63], p < 0.001), while keratoconus also represented a risk factor (OR = 1.22, 95% CI [1.13, 1.31], p < 0.001). Subgroup analysis for glaucoma patients indicated that younger age (<60 years), female gender, absence of diabetes, and preoperative IOP ≤25 mmHg were favorable factors for transplant success. In keratoconus patients, those under 30 years old and female had better transplant outcomes. Sensitivity analysis confirmed the robustness of these results, and minimal publication bias was observed. This study demonstrates that advanced age, male gender, diabetes, and elevated preoperative IOP significantly compromise the success of high-risk corneal transplantation. These findings provide robust evidence to guide patient selection, preoperative optimization, and surgical decision-making in high-risk populations. 
Incorporating these determinants into clinical practice may enhance graft survival and visual outcomes, while informing the development of tailored management strategies and future clinical guidelines.
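Pooled odds ratios like those reported above are conventionally obtained by inverse-variance weighting of log odds ratios. A minimal fixed-effect sketch with made-up study results (not the meta-analysis's actual data):

```python
import math

# Each tuple: (odds ratio, lower 95% CI, upper 95% CI) - illustrative only.
studies = [(1.50, 1.20, 1.88), (1.35, 1.05, 1.74), (1.48, 1.22, 1.80)]

weights, weighted_logs = [], []
for or_, lo, hi in studies:
    log_or = math.log(or_)
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE recovered from the CI
    w = 1.0 / se**2                                   # inverse-variance weight
    weights.append(w)
    weighted_logs.append(w * log_or)

pooled_log = sum(weighted_logs) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))
pooled_or = math.exp(pooled_log)
ci = (math.exp(pooled_log - 1.96 * pooled_se),
      math.exp(pooled_log + 1.96 * pooled_se))
print(round(pooled_or, 2), tuple(round(c, 2) for c in ci))
```

A random-effects model would additionally estimate between-study variance and widen the weights accordingly; the fixed-effect version above shows the core weighting step.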
Chikungunya virus (CHIKV) and dengue virus (DENV) frequently co-occur in Myanmar and present with overlapping symptoms, complicating diagnosis. During the 2019 dengue outbreak in Yangon, Myanmar, molecular data on CHIKV were limited among dengue-suspected patients, and there were no publicly available CHIKV genome sequences from Yangon in international databases. To address this gap and the potential diagnostic overlap, we investigated the prevalence of CHIKV infection and described the genomic characteristics of detected strains. Serum samples from 267 dengue-suspected patients collected in 2019 were screened for anti-CHIKV IgM and IgG by in-house ELISA, and 211 samples with sufficient remaining volume were further analyzed by RT-qPCR, virus isolation, and whole-genome sequencing for mutation analysis. CHIKV antibodies were found in 24.7% (66/267) of samples (IgM 3.4%, IgG 21.3%), and viral RNA was detected in 10.9% (23/211) of samples. Fifteen viral isolates were successfully obtained (7.1% of those tested), including two co-detections with DENV-2 by RT-PCR. All isolates belonged to the East/Central/South African genotype, Indian Ocean Lineage (ECSA-IOL), and clustered with strains from Thailand, China, and Mandalay, Myanmar. Whole-genome analysis identified 33 non-synonymous mutations across nonstructural and structural proteins, including mutations previously reported in regional ECSA-IOL strains such as E1:K211E and E2:V264A, with 11 amino acid changes not previously reported in available Myanmar reference sequences. Serological and molecular findings indicate CHIKV circulation during the 2019 dengue outbreak in Yangon and highlight the limitations of single-target testing. Serological evidence indicates the presence of anti-CHIKV IgM and IgG antibodies, reflecting CHIKV exposure within the study population.
Notably, all RNA-positive cases were seronegative for both IgM and IgG, a pattern consistent with the temporal dynamics of infection and the inherent constraints of serological detection in co-endemic settings. Molecular co-detection with DENV-2 and genomic findings highlight the potential value of multiplex diagnostic approaches in co-endemic settings. This study documents CHIKV detection and genomic characterization in dengue-suspected patients in Yangon and highlights the potential value of multiplex diagnostic approaches and continued genomic surveillance as broader public health considerations for arboviral detection in Myanmar.
Simultaneous ipsilateral elbow and shoulder dislocations without associated fracture or other injury are extremely rare. To date, a review of the PubMed database identified no recorded injuries of this kind involving a posterior shoulder dislocation. This case involves an otherwise healthy 28-year-old male who sustained a simultaneous posterior left elbow dislocation and posterior left shoulder dislocation, without any other associated injury or fracture, in a softball incident. Both dislocations were managed conservatively with bedside reduction of the elbow, followed by the shoulder, and subsequent sling application. Given that pain from the elbow can mask shoulder injuries, we recommend a high index of suspicion for concomitant injuries of adjacent structures when patients present with an elbow dislocation.
Background: Detecting breast cancer, especially identifying microcalcifications in mammograms, is challenging due to the need for high sensitivity and efficient processing. This study presents a novel algorithm, Sigmoidal Slope Analysis and Aspect Ratio Evaluation (SAAR), designed for real-time application on edge devices. By employing a multi-step adaptive process with sigmoidal functions, SAAR enhances intensity contrast and prioritizes regions of interest, enabling fast, accurate detection of microcalcifications. Objective: This study aims to develop and validate an efficient, edge-device-compatible method for detecting microcalcifications in mammographic images. The goal is to provide a tool that enhances diagnostic efficiency through real-time processing, thereby supporting early breast cancer detection in both clinical and remote settings. Methods: The SAAR algorithm utilizes an adaptive slope detection technique based on the sigmoid function, dynamically adjusting to local intensity features. This approach allows for greater adaptability to image variations. The algorithm prioritizes regions of interest through a multi-step adaptive process, enhancing intensity differences to focus on potential microcalcifications. Results: Testing on established mammography databases, such as MIAS, demonstrates the algorithm's effectiveness, with improved sensitivity compared to conventional methods. Designed for edge devices, the algorithm leverages their real-time processing capabilities, offering lower latency and enhanced privacy. Conclusions: The integration of SAAR with edge devices represents a promising advancement in breast cancer detection. The adaptive nature of SAAR, coupled with the real-time processing capabilities of edge devices, provides a robust solution for enhancing microcalcification detection efficiency and sensitivity in mammography.
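The sigmoidal intensity-contrast step the abstract describes can be illustrated with a generic logistic stretch. This is not the SAAR algorithm itself; the `cutoff` and `gain` parameters and the toy image patch are assumptions for illustration only.

```python
import numpy as np

def sigmoid_contrast(image, cutoff=0.5, gain=10.0):
    """Map intensities in [0, 1] through a logistic curve centred on `cutoff`.

    Pixels near `cutoff` are spread apart (higher local contrast);
    pixels far from it are compressed toward 0 or 1.
    """
    img = np.asarray(image, dtype=float)
    return 1.0 / (1.0 + np.exp(-gain * (img - cutoff)))

# A bright speck (candidate microcalcification) on a mid-grey background.
patch = np.full((5, 5), 0.45)
patch[2, 2] = 0.65
enhanced = sigmoid_contrast(patch, cutoff=0.5, gain=10.0)
# The speck-to-background difference grows after the stretch.
print(enhanced[2, 2] - enhanced[0, 0], patch[2, 2] - patch[0, 0])
```

In a full pipeline, such a stretch would precede region-of-interest ranking, which is where SAAR's slope and aspect-ratio analysis would come in.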
Crimean-Congo haemorrhagic fever (CCHF) is a severe, widespread, tick-borne viral zoonotic infection caused by an orthonairovirus. Sero-epidemiological studies in humans and livestock are valuable indicators of viral circulation and infection risk. This study aimed to investigate the seroprevalence of, and factors associated with, CCHF virus exposure in humans and livestock in mixed crop-livestock farming households in rural Burkina Faso. A cross-sectional animal-human linked study was conducted in 149 rural households across 16 randomly selected villages in two administrative regions of Burkina Faso. Human socio-demographic data, livestock biodata, and serum samples were collected from household members and their livestock (cattle, sheep, and goats). Additional ecological and climatic data were extracted from online databases and merged with the field data. Serological testing was performed on human and animal samples using the ID Screen® CCHF Double Antigen Multi-species ELISA (IDvet, Grabels, France). Descriptive statistics and multivariable multilevel analyses were used to assess factors associated with exposure of cattle and small ruminants to CCHF virus, while Fisher's exact test was applied to assess the risk factors for human exposure. The study included 717 livestock farmers and 2,295 animals, comprising 666 cattle, 659 sheep, and 970 goats. The overall CCHF virus (CCHFV) seroprevalence was 3.1% (95% CI: 1.9-4.6) in humans and 54% (95% CI: 50.2-57.7) in cattle. In small ruminants, the overall seroprevalence was 5.2% (95% CI: 4.2-6.4), with 9.1% (95% CI: 7.1-11.5) in sheep and 2.5% (95% CI: 1.7-3.8) in goats. Farmers with inadequate livestock management-related biosecurity behaviour exhibited higher seroprevalence rates and an increased risk of CCHFV seropositivity. In cattle, seropositivity was positively associated with older age, female sex, longer grazing distances, and tick infestation.
Seropositivity in small ruminants was associated with older age, being of the sheep species, and longer grazing distances. Ecological factors, including a higher aridity index in both cattle and small ruminants, and steeper slopes in cattle, were significant in univariate and multivariable analysis, respectively. The seroprevalence in both cattle and small ruminants showed significant clustering within households, with intra-cluster correlation (ICC) rates of 39% and 62%, respectively. This study highlighted that CCHFV is circulating among humans and their livestock in rural Burkina Faso. Individual and household-related risk factors, including socio-demographic, livestock management practices, and ecological characteristics, were identified. These findings provide valuable insights for designing tailored public health interventions towards strengthening CCHF surveillance and prevention among rural households.
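For multilevel logistic models like the one above, the intra-cluster correlation (ICC) is conventionally computed on the latent scale, treating the individual-level residual variance as that of a standard logistic distribution (π²/3). The household variances below are hypothetical values chosen only to reproduce ICCs near the reported 39% and 62%:

```python
import math

def latent_icc(var_household):
    """Latent-scale ICC for a random-intercept logistic model:
    ICC = var_household / (var_household + pi^2 / 3)."""
    return var_household / (var_household + math.pi**2 / 3)

# Hypothetical household-level variances (not the study's estimates).
print(round(latent_icc(2.10), 2))  # cattle-like clustering (~0.39)
print(round(latent_icc(5.37), 2))  # small-ruminant-like clustering (~0.62)
```

A high ICC, as reported here, means most of the variation in seropositivity sits between households rather than between animals within a household, which is why household-targeted interventions are a natural recommendation.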
Renin-angiotensin system inhibitors (RASIs) have been reported to exert anticancer effects. However, there is still a lack of persuasive evidence for their role in improving postoperative long-term oncologic outcomes of colorectal cancer. This retrospective cohort study emulated a hypothetical randomized controlled trial to evaluate the efficacy of RASIs in improving postoperative long-term oncologic outcomes of patients with stage II/III colon cancer and hypertension. Patients were consecutively enrolled from multicenter databases containing data from medical centers in Shanghai, China. Eligibility criteria were adults with radically resected stage II or III colon adenocarcinoma and hypertension. Eligible patients were classified into the RASI group or the no-RASI group, and propensity score matching was performed at a 1:1 ratio. The primary outcome was the 3-year disease-free survival (DFS) rate. Of 2,640 eligible patients, 2,292 were included in the primary analysis after matching: 1,146 in the RASI group and 1,146 in the no-RASI group. The median follow-up time was 46.3 months. The RASI group had a higher 3-year DFS rate (83.4% v 78.3%; P = .001; hazard ratio [HR], 0.736 [95% CI, 0.617 to 0.878]). The RASI group also had a lower 3-year distant metastasis rate (15.6% v 20.5%; P < .001; HR, 0.717 [95% CI, 0.596 to 0.863]) and a higher 3-year overall survival rate (92.4% v 89.6%; P = .001; HR, 0.682 [95% CI, 0.539 to 0.864]). RASIs may improve the postoperative long-term oncologic outcomes of patients with stage II/III colon cancer and hypertension.
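The 1:1 propensity score matching used above is often implemented as greedy nearest-neighbour matching without replacement within a caliper. The sketch below illustrates that idea only; the synthetic scores, group sizes, and 0.05 caliper are assumptions, not the study's procedure.

```python
import numpy as np

rng = np.random.default_rng(42)
treated = rng.uniform(0.3, 0.8, 50)   # propensity scores, exposed group (synthetic)
control = rng.uniform(0.1, 0.9, 200)  # propensity scores, unexposed pool (synthetic)

pairs, used = [], set()
for i, ps in enumerate(treated):
    # Nearest unused control within a 0.05 caliper on the propensity score.
    for j in np.argsort(np.abs(control - ps)):
        j = int(j)
        if j not in used and abs(control[j] - ps) <= 0.05:
            used.add(j)
            pairs.append((i, j))
            break

print(len(pairs), "matched pairs")
```

Treated patients with no eligible control inside the caliper simply go unmatched, which is how matched analyses like this one end up with fewer patients than the eligible cohort.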
Healthy aging has emerged as a global priority. However, older adults' participation in health promotion programs remains low, and traditional health promotion models have achieved limited success in fostering sustained engagement among this population. Mobile health (mHealth)-based gamification interventions offer a promising way to address these challenges. However, no published reviews support or oppose the use of mHealth-based gamification interventions as health promotion strategies in older adults. The study aimed to identify mHealth interventions using gamification to promote health among older adults. Our scoping review was conducted following the Joanna Briggs Institute recommendations for scoping reviews and Arksey and O'Malley's framework. The process followed PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews) guidelines and PRISMA-S (Preferred Reporting Items for Systematic Reviews and Meta-Analyses Literature Search Extension) checklist. A comprehensive literature search was conducted across 8 databases: PubMed, Scopus, Web of Science, Embase, Cochrane Library, CINAHL, PsycARTICLES, and IEEE Xplore Digital Library, from their inception to December 10, 2025. Two reviewers independently screened titles, abstracts, and full texts via Rayyan, with disagreements resolved by a third reviewer. This scoping review identified 11 studies. Only 1 article was published before 2022. The interventions were found to improve enjoyment and motivation (n=5), cognitive function (n=3), physical activity (n=2), and digital literacy (n=2). Individual studies also reported improvements in mental health (n=1) and adherence (n=1), a reduction in suicidal ideation (n=1), improvements in physical function (n=1), the promotion of social engagement (n=1), and the identification of mild cognitive impairment (n=1). 
Game elements used were ranked by frequency as progress, challenges, goals, levels, reward, sensation, storytelling or narration, leaderboard, surprise, and avatar. No research was found to use the game element of "social sharing." mHealth types included augmented and virtual reality-based training systems, wearable devices, mobile phones, tablets, and Windows platforms and devices. Notably, only 4 studies applied theoretical frameworks, and 3 omitted the concrete approach to gamification. As the first scoping review to identify and map mHealth-based gamification interventions for older adults, this study highlights their potential as an innovative approach to health promotion. By systematically synthesizing evidence regarding intervention designs, gamification strategies, and preliminary health outcomes, it establishes a foundation for future inquiry. However, this review is limited by the small number of included studies, precluding broad generalizations. Future research should assess long-term impacts, integrate theoretical frameworks, establish reporting guidelines, design personalized social-interactive interventions, and expand to broader health domains. Ultimately, these insights provide targeted guidance for developing age-appropriate digital health solutions, contributing to the realization of active aging.
Effective pest management requires accurate and continuous monitoring. This monitoring helps assess population dynamics and guides the development of integrated pest management strategies. Insect traps are a monitoring alternative applied to various crops. However, the identification and manual counting of specimens are time-consuming, require taxonomic knowledge, and depend on the expertise of specialists. Automation could reduce costs, increase accuracy, and enable scalable analyses. Current computer vision and artificial intelligence techniques can quickly and accurately identify objects in digital images. This study presents a systematic review of literature retrieved from multidisciplinary and specialized databases (Scopus, ACM, Web of Science, IET, DBLP, Springer, and ScienceDirect), focusing on the intersections of agriculture, ecology, and computer science. We found 284 studies published between 2020 and 2025. Among them, 57 fulfilled the eligibility criteria, considering applied computing solutions for insect identification and counting using digital images of specimens collected via traps or photographed in situ on plants, in both field and laboratory settings. The findings highlight the use of electronic traps for real-time data collection and improvements in convolutional neural networks, with vision transformers and attention mechanisms for multi-species and fine-grained recognition. They also indicate opportunities to leverage microscopy resources, overcome limitations in the large-scale deployment and integration of electronic trap networks, and integrate real-time monitoring data with forecasting models using weather predictions to promote early warning systems for integrated pest management.
The COVID-19 pandemic necessitated unprecedented changes to clinical education and assessment. Restrictions on in-person clinical encounters led to rapid adoption of digital assessment modalities, raising concerns about graduate preparedness and the validity of pandemic-adapted assessment approaches (Wojniusz [40]). This scoping review aimed to: (1) map approaches to assessing clinical competence in medical students during COVID-19; and (2) evaluate the validity, reliability, feasibility, and acceptability of pandemic-adapted assessments. Following Arksey and O'Malley's framework, we systematically searched six databases (PubMed/MEDLINE, Scopus, Web of Science, EMBASE, ERIC, Google Scholar) for peer-reviewed empirical studies published March 2020-March 2023 examining clinical competence assessment in final-year medical students. Studies were analyzed using Van der Vleuten's utility framework (validity, reliability, feasibility, acceptability) and Miller's Pyramid to categorize competency levels assessed. From 1,247 references, 13 studies met inclusion criteria, representing 8 countries. Virtual OSCEs were the predominant assessment method (n = 8, 62%). Study designs were predominantly weak: surveys (n = 6), evaluative case studies (n = 4), and observational studies (n = 2), with only three cohort studies providing stronger evidence. Most studies (92%) focused on feasibility and acceptability rather than validity or psychometric properties. Physical examination and procedural skills assessment remained a critical unresolved limitation across all digital modalities. No studies employed generalizability theory or Kane's validity framework. A paucity of rigorous research on pandemic-adapted assessment validity exists. Published studies employed weak designs producing non-generalizable findings focused on feasibility over validity.
Urgent research priorities include: longitudinal cohort studies comparing pandemic-trained graduates to pre-pandemic cohorts; psychometric validation of hybrid assessment models now institutionalized; application of contemporary validity frameworks to digital assessments; and equity analysis of differential impacts across student populations. The field requires systematic validity evidence before permanently adopting pandemic-adapted approaches.
Pure autonomic failure (PAF) can be the prodromal presentation of Parkinson disease (PD), dementia with Lewy bodies (DLB), and multiple system atrophy (MSA), although phenoconversion rates and predictors have not been systematically reported. To estimate phenoconversion rates for MSA, PD, and DLB separately and grouped as central α-synucleinopathies and identify clinical predictors of phenoconversion in patients with PAF. PubMed and Embase databases from inception to June 2025. Longitudinal studies including patients with confirmed PAF reporting data on incidence and/or predictors of phenoconversion. Studies were screened and data extracted by 2 independent investigators according to PRISMA guidelines. A meta-analysis was performed using generic inverse-variance random-effects models. PD, DLB, MSA, and central α-synucleinopathy phenoconversion incidence rates as per 100 person-years were the main outcomes. Incidence rates were log transformed and pooled using a random-effects meta-analysis. Clinical predictors of phenoconversion were reported as secondary outcomes. Prediction intervals and meta-regression explored study-level moderators. A total of 9 studies comprising 900 individuals with PAF (mean [SD] age at onset, 63.1 [4.3] years; 63.8% male) were included. During the mean (SD) 6.4 (2.0) years of follow-up, 270 of 900 individuals with PAF (30%) experienced phenoconversion to a central α-synucleinopathy (12% to MSA, 11% to DLB, 7% to PD) with a pooled incidence rate of 5.09 per 100 person-years (95% CI, 3.79-6.85; approximately 5% per year). Phenoconversion rates for MSA (pooled incidence rate, 1.96; 95% CI, 1.29-2.99) were highest in the first years of follow-up, whereas Lewy body disorders showed more constant phenoconversion rates (DLB pooled incidence rate, 1.56; 95% CI, 0.94-2.61; PD pooled incidence rate, 1.35; 95% CI, 0.75-2.41). 
Hyposmia was the only predictor with diagnostic value for distinguishing phenoconversion to PD and DLB (hyposmia pooled risk ratio, 1.88; 95% CI, 1.26-2.97) from phenoconversion to MSA, although rapid eye movement sleep behavior disorder (RBD) and subtle motor signs were consistent predictors of phenoconversion to any central α-synucleinopathy. Heterogeneity was partly explained by follow-up duration. Findings of this systematic review and meta-analysis suggest that PAF may be a prodromal presentation of PD, DLB, or MSA, with phenoconversion incidence rates similar to those of RBD. A combination of clinical markers (RBD, subtle motor signs, hyposmia) and in-development biomarkers may help refine the phenoconversion trajectories of people with PAF, providing an invaluable opportunity for early diagnosis and intervention.
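The generic inverse-variance random-effects pooling of log-transformed incidence rates described above can be sketched with a DerSimonian-Laird estimator. The study counts below are synthetic illustrations, not the review's data; the variance of a log incidence rate is approximated as 1/events (Poisson assumption).

```python
import math

# (events, person-years) per hypothetical study.
studies = [(45, 900.0), (30, 520.0), (52, 1100.0)]

logs = [math.log(e / py * 100) for e, py in studies]  # log rate per 100 py
variances = [1.0 / e for e, _ in studies]             # Poisson approximation

# Fixed-effect step, then DerSimonian-Laird between-study variance tau^2.
w = [1.0 / v for v in variances]
fixed = sum(wi * li for wi, li in zip(w, logs)) / sum(w)
q = sum(wi * (li - fixed) ** 2 for wi, li in zip(w, logs))
c = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(studies) - 1)) / c)

# Random-effects weights and pooled rate, back-transformed from the log scale.
w_star = [1.0 / (v + tau2) for v in variances]
pooled_log = sum(wi * li for wi, li in zip(w_star, logs)) / sum(w_star)
pooled_rate = math.exp(pooled_log)
print(round(pooled_rate, 2), "per 100 person-years")
```

When between-study heterogeneity is negligible (τ² = 0), the random-effects estimate collapses to the fixed-effect one; with real heterogeneity, τ² > 0 widens the pooled confidence interval, as reflected in the prediction intervals the review reports.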
The systemic inflammation response index (SIRI) is an emerging inflammation-immune indicator calculated from counts of neutrophils, monocytes, and lymphocytes. The potential prognostic value of SIRI in various tumors has been reported in several studies. However, updated and comprehensive evidence regarding its prognostic value in lung cancer (LC) remains insufficient. Through a systematic review and meta-analysis, this study comprehensively analyzes the relationship of SIRI with overall survival (OS) and progression-free survival (PFS) among individuals with LC. Cochrane Library, Web of Science, Embase, and PubMed were systematically searched from database inception to October 2025. Cohort studies reporting the relation of SIRI with OS or PFS were included. Hazard ratios (HRs) alongside their 95% confidence intervals (CIs) were obtained, and a random-effects model was employed for data synthesis. Subgroup analysis, sensitivity analysis, and assessment of publication bias were conducted to evaluate the robustness of the findings. In total, 27 studies involving 6,195 patients with LC were incorporated into this meta-analysis, which revealed that high SIRI levels were significantly associated with poorer OS (HR = 1.83, 95% CI 1.58-2.13) and PFS (HR = 1.53, 95% CI 1.31-1.79). Differences in age, region, and cutoff value for SIRI were identified as the primary sources of heterogeneity. The sensitivity analysis indicated stable results, while Egger's test demonstrated publication bias. A higher SIRI level is significantly associated with adverse survival outcomes among individuals with LC. As a simple and low-cost hematological marker, SIRI could be applied in the risk stratification and prognosis assessment of LC. In the future, prospective multicenter studies with larger sample sizes and a unified cutoff standard for SIRI are warranted to further corroborate its clinical utility.
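SIRI is computed from a routine blood count as neutrophils × monocytes / lymphocytes (all in 10⁹ cells/L). The example counts and any cutoff implied below are purely illustrative; as the abstract notes, cutoff values varied across the included studies.

```python
def siri(neutrophils, monocytes, lymphocytes):
    """Systemic inflammation response index:
    SIRI = neutrophil count * monocyte count / lymphocyte count,
    with all counts in 10^9 cells/L."""
    return neutrophils * monocytes / lymphocytes

# Example counts (10^9/L): neutrophils 4.2, monocytes 0.6, lymphocytes 1.8.
value = siri(4.2, 0.6, 1.8)
print(round(value, 2))  # 1.4
```

Because the index rises with neutrophil and monocyte counts and falls with lymphocyte count, higher values capture a pro-inflammatory, immune-suppressed profile, which is the rationale for its prognostic use.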
Chimeric antigen receptor (CAR) T-cell therapy has emerged as one of the most substantial advances in modern cancer treatment, providing sustained responses in patients with otherwise resistant blood cancers. The approach reprograms patient-derived or donor T lymphocytes to recognize tumor-associated antigens independently of major histocompatibility complex presentation, thereby circumventing a key limitation of natural immune surveillance. The approval of CD19- and BCMA-targeted therapies demonstrated remarkable clinical impact and validated the approach. Over successive generations, CAR constructs have been refined with additional costimulatory elements, cytokine support, and multifunctional signaling domains, improving both their persistence and therapeutic activity. Despite such progress, important challenges remain, including risks of relapse, toxicities such as neurotoxicity and cytokine release syndrome, and limited efficacy in solid tumors. Current research focuses on strategies such as armored CARs, gene editing, and combination therapies to expand clinical benefit. A comprehensive literature search was conducted using PubMed, Scopus, and Web of Science databases, covering publications from 2000 to 2026. Relevant peer-reviewed articles were selected based on their relevance to CAR T-cell therapy, including preclinical and clinical studies. The detailed search strategy, inclusion criteria, and screening methods are described in the main manuscript. This review explores the evolution, applications, and future outlook of CAR T-cell therapy. CAR T-cell therapy in cancer: training the body’s immune system to fight cancer more effectively. Cancer is a major cause of illness and death worldwide. Traditional treatments such as chemotherapy, radiation, and surgery can be effective, but they often have serious side effects and may not work for all patients.
This has led to the development of newer treatments that help the body’s own immune system fight cancer. CAR T-cell therapy is one such advanced treatment. It works by collecting a patient’s immune cells (called T-cells) and modifying them in a laboratory so they can better recognize and attack cancer cells. These modified cells are then multiplied and returned to the patient’s body, where they seek out and destroy cancer cells. This therapy has shown very promising results, especially in certain types of blood cancers that are difficult to treat with standard therapies. Some patients have experienced long-lasting responses. However, there are still challenges. These include side effects caused by an overactive immune response, the risk of cancer returning, and limited success in treating solid tumors such as lung or breast cancer. Researchers are working to improve this therapy by making it safer, more effective, and suitable for more types of cancer. New approaches include combining CAR T-cell therapy with other treatments and using advanced technologies like gene editing. Overall, CAR T-cell therapy is a rapidly developing and promising approach that could improve outcomes for many cancer patients in the future.
Europe faces the dual challenges of population ageing and increasing migration, resulting in a growing demographic of older immigrants with complex healthcare needs. Despite extensive research on ageing and migration, regional evidence on healthcare provision for older immigrants remains fragmented. Participatory approaches that integrate the voices and experiences of older immigrants can improve cultural sensitivity, accessibility, and health equity, ultimately leading to better outcomes. This scoping review seeks to contribute to filling the existing knowledge gap by systematically mapping the literature on healthcare provision using participatory approaches for older immigrants in Europe. This scoping review followed Arksey and O'Malley's methodological framework and the PRISMA-ScR reporting guidelines. A comprehensive search of five electronic databases was conducted in February 2025. Eligible studies included empirical research focusing on immigrants aged 60 years and older in Europe that used participatory approaches to healthcare provision. Data were charted and synthesized thematically to identify barriers and facilitators of healthcare utilization discussed in the context of participatory approaches, as well as gaps in the literature. From 2,411 records, 23 studies published between 2011 and 2025 met the inclusion criteria. Most were conducted in the United Kingdom, Denmark, and Sweden, employing diverse qualitative, quantitative, and mixed-methods designs. Common participatory strategies included bilingual/bicultural staff, partnerships with community organizations, and the involvement of peer researchers. These approaches enhanced trust, relevance, and access to healthcare services. Key barriers were language and communication difficulties, cultural stigma, and distrust of services. Enablers included culturally adapted interventions, continuity of care, and trusted community engagement. 
However, many studies reported the use of superficial participatory methods, underrepresented certain migrant groups, and rarely assessed long-term outcomes or compared participatory versus non-participatory models. Participatory approaches demonstrate strong potential to enhance healthcare provision for older immigrants in Europe by improving cultural competence, accessibility, and trust. To achieve equity, participatory practices must be embedded into mainstream healthcare systems through sustainable funding, workforce training, and policy reforms. Future research should prioritize comparative evaluations, long-term impact assessments, and inclusion of underrepresented immigrant populations.
Few systematic reviews or meta-analyses have addressed the incidence of Bacille Calmette-Guérin (BCG) lymphadenitis in healthy children. A systematic literature search was independently conducted by two researchers using four English and Korean databases (PubMed, Scopus, KMbase, RISS). The risk of bias was assessed. A total of 52 peer-reviewed journal articles (about 1.8 million children) were included in meta-analyses. The incidence proportion of BCG lymphadenitis ranged from 0.0% to 25.6%. The pooled incidence proportion was 2.5% (95% CI, 1.6-3.6; I² = 99%). Studies employing active (vs. passive) surveillance reported a significantly higher incidence proportion of lymphadenitis, as did randomized controlled trials (vs. prospective and retrospective cohort studies). The incidence also differed significantly according to study quality, publication period, and BCG strain. This study highlights substantial variability in the incidence of BCG lymphadenitis among diverse studies and underscores the influence of surveillance methods, study design, and vaccine strain on reported outcomes.
Background: There is an evident interrelationship between stroke and Alzheimer's disease (AD). Post-stroke cognitive impairment (PSCI) is a frequently encountered and potentially disabling outcome of stroke. Memory impairment is an important component of the post-stroke cognitive syndrome, and high-frequency repetitive transcranial magnetic stimulation (HF-rTMS) has been widely used for memory in patients with PSCI. Objective: In this study, we systematically evaluated the therapeutic effects of HF-rTMS on memory function in patients with PSCI, offering insights that may also inform the treatment of AD. Methods: All relevant publications in Chinese and English were systematically searched from ten databases up to March 20, 2025. Retrieved articles were carefully screened. The quality of the included studies was assessed using the Cochrane Collaboration's risk of bias tool. Review Manager 5.4 software was used for meta-analysis. Results: Twenty-one studies of 1746 participants with PSCI were included. Meta-analysis revealed that HF-rTMS ameliorated memory of PSCI patients according to several outcome indicators: Rivermead Behavioural Memory Test [mean difference (MD) = 2.59, 95% confidence interval (CI) (2.08, 3.11), p < 0.00001], forward digit span [MD = 1.79, 95% CI (1.36, 2.22), p < 0.00001] and backward digit span [MD = 1.18, 95% CI (0.77, 1.59), p < 0.00001] of the digit span test, and Delayed Recall of the Montreal Cognitive Assessment [MD = 0.53, 95% CI (0.47, 0.59), p < 0.00001]. Conclusions: HF-rTMS might enhance memory in patients with PSCI, with the left dorsolateral prefrontal cortex being the most common stimulation site.
To analyze the lexicon related to nursing care for pregnant women with syphilis, based on the scientific literature. A scoping review conducted according to the framework proposed by Arksey and O'Malley and the PRISMA-ScR guidelines. Searches were performed in national and international databases (PubMed, CINAHL, Web of Science, LILACS, SciELO, CAPES Theses and Dissertations Portal, Open Access Scientific Repository of Portugal - RCAAP, National ETD Portal, and Theses Canada), with no restrictions on language or year. Primary studies and gray literature available in full text that addressed the study objective were included; editorials, opinion pieces, and reviews were excluded. The selected studies were submitted to lexical analysis using the IRaMuTeQ software, supported by Roy's Adaptation Theory. A total of 10,875 records were identified, of which only 11 met the study objective and comprised the final corpus, totaling 641 segments analyzed. Seven classes emerged: Policy and Access; Management and Service; Community Network; Reception and Bond; Maternal-Child Care; Early Detection; and Diagnosis and Treatment. These categories highlight the complexity and interdependence among the clinical, social, and organizational dimensions of care. Lexical analysis revealed a representative vocabulary for nursing care of pregnant women with syphilis.
Effective diagnosis and treatment of rare genetic disorders require the interpretation of a patient's genetic variants of unknown significance (VUSs). Today, clinical decision-making is primarily guided by gene-phenotype association databases and DNA-based scoring methods. Our web-accessible variant analysis pipeline, VUStruct, supplements these established approaches by deeply analyzing the downstream molecular impact of variation in the context of 3D protein structure. VUStruct's growing impact is fueled by the co-proliferation of protein 3D structural models, gene sequencing, compute power, and artificial intelligence. Contextualizing VUSs in protein 3D structural models also illuminates longitudinal genomics studies and biochemical bench research focused on VUSs, and we created VUStruct for clinicians and researchers alike. We now introduce VUStruct to the broad scientific community as a mature, web-facing, extensible, High-Performance Computing (HPC) software pipeline. VUStruct maps missense variants onto automatically selected protein structures and launches a broad range of analyses. These include energy-based assessments of protein folding and stability, pathogenicity prediction through spatial clustering analysis, and machine learning (ML) predictors of binding surface disruptions and nearby post-translational modification sites. The pipeline also considers the entire input set of VUSs and identifies genes potentially involved in digenic disease. VUStruct's utility in clinical rare disease genome interpretation has been demonstrated through its analysis of over 175 Undiagnosed Diseases Network (UDN) patient cases. VUStruct-leveraged hypotheses have often informed clinicians in their consideration of additional patient testing, and we report here details from two cases where VUStruct was key to their solution.
We also note successes with academic research collaborators, for whom VUStruct has informed research directions in both computational genomics and wet lab studies.
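VUStruct's spatial clustering analysis is not described in implementation detail above. As a loose illustration of the underlying idea only, the sketch below flags variant residues whose C-alpha atoms fall near one another on a structure; the residue numbers, coordinates, and 8 Å cutoff are all hypothetical and do not reflect VUStruct's actual method.

```python
import math
from itertools import combinations

def clustered_variants(ca_coords, cutoff=8.0):
    """Flag variant residues whose C-alpha atoms lie within `cutoff`
    angstroms of at least one other variant -- a crude proxy for the
    spatial-clustering signal used in pathogenicity prediction.

    `ca_coords` maps residue number -> (x, y, z) in angstroms.
    """
    close = set()
    for (r1, c1), (r2, c2) in combinations(ca_coords.items(), 2):
        if math.dist(c1, c2) <= cutoff:
            close.update((r1, r2))
    return sorted(close)

# Hypothetical variant residues mapped onto a model structure
coords = {42: (1.0, 0.0, 0.0), 87: (3.5, 2.0, 1.0), 201: (40.0, 35.0, 12.0)}
clustered = clustered_variants(coords)
```

Here residues 42 and 87 sit close in 3D space despite being distant in sequence, which is exactly the kind of signal sequence-only scoring can miss.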
Diabetes mellitus (DM) and cancer are major global health challenges that increasingly coexist due to shared risk factors including aging, obesity, sedentary behavior, and chronic low-grade inflammation. Beyond being a common comorbidity, DM, particularly type 2 diabetes, has emerged as an important modifier of cancer risk, progression, treatment tolerance, and survival. Epidemiological studies consistently associate DM with a higher incidence of several malignancies, including pancreatic, liver, colorectal, breast, and endometrial cancers, as well as increased cancer-specific and overall mortality. The biological link between dysglycemia and cancer is complex and multifactorial. Chronic hyperglycemia, hyperinsulinemia, and insulin resistance promote tumor development and progression through altered cellular metabolism (Warburg effect), activation of insulin and insulin-like growth factor pathways, systemic inflammation, oxidative stress, immune dysfunction, and changes in the tumor microenvironment and gut microbiota. This review summarizes current evidence on the interplay between dysglycemia and cancer and explores how integrating continuous glucose monitoring (CGM)-based strategies into multidisciplinary oncology care may improve both metabolic and oncologic outcomes. A comprehensive search of online databases, including PubMed, ISI Web of Science, and Scopus, was conducted to identify studies assessing the impact of glycemic disturbances and glycemic control on cancer outcomes. Poor glycemic control and increased glucose variability are associated with worse oncologic outcomes, higher rates of treatment-related complications, reduced adherence to therapy, and diminished efficacy of chemotherapy, targeted agents, and immune checkpoint inhibitors. Severe hypoglycemia has also emerged as an independent predictor of poor prognosis.
Although HbA1c has long been the cornerstone of glycemic assessment, it incompletely captures the dynamic glucose fluctuations commonly observed during cancer therapy. CGM provides a more comprehensive and clinically meaningful assessment of glycemic control, with the potential to reduce hypoglycemia, improve glycemic stability, and enhance tolerance and adherence to anticancer treatments. Current evidence indicates that diabetes and dysglycemia are key modifiers of cancer risk, progression, treatment tolerance, and survival. Optimizing glycemic control may therefore contribute to improved cancer outcomes. CGM represents a promising tool for personalizing diabetes management in oncology settings.
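To make the contrast with single-value HbA1c concrete, here is a minimal sketch of the kind of CGM-derived summary metrics discussed above (time in range, glycemic variability). The glucose trace is hypothetical; the 70-180 mg/dL target range follows common consensus convention, but the metric definitions are deliberately simplified.

```python
def cgm_summary(glucose_mgdl, low=70, high=180):
    """Summarize a CGM trace: mean glucose, percent time in range
    (70-180 mg/dL by convention), and coefficient of variation (CV)
    as a simple glycemic-variability measure.
    """
    n = len(glucose_mgdl)
    mean = sum(glucose_mgdl) / n
    var = sum((g - mean) ** 2 for g in glucose_mgdl) / n
    cv = (var ** 0.5) / mean * 100
    tir = sum(low <= g <= high for g in glucose_mgdl) / n * 100
    return {"mean": mean, "cv_percent": cv, "time_in_range_percent": tir}

# Hypothetical readings sampled at 5-minute intervals (mg/dL)
trace = [95, 110, 150, 190, 210, 170, 130, 100, 80, 65]
summary = cgm_summary(trace)
```

Two traces with identical means (and hence similar HbA1c) can differ sharply in CV and time in range, which is the fluctuation information the review argues HbA1c misses.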
Diabetic ketoacidosis (DKA) increasingly occurs in patients with end-stage renal disease (ESRD), in whom standard management strategies may not be appropriate. Prior studies evaluating outcomes of DKA in ESRD are limited and yield inconsistent results. We compared in-hospital outcomes and healthcare utilization among patients hospitalized with DKA with and without ESRD using a national database. We performed a retrospective cohort study using the National Inpatient Sample (2016-2022) of adult hospitalizations with a primary diagnosis of DKA, comparing patients with and without ESRD. Propensity score matching was used to balance demographics and comorbidities; multivariable regression was used to estimate adjusted odds ratios (aORs) for in-hospital mortality, major in-hospital complications, length of stay, and inflation-adjusted hospitalization costs. After propensity score matching, 78,470 hospitalizations were included (39,235 with ESRD and 39,235 without ESRD). In-hospital mortality was similar between patients with and without ESRD (0.9% vs. 1.0%; aOR 0.90, 95% CI 0.65-1.24; p = 0.524). However, ESRD was associated with significantly higher odds of vasopressor use (aOR 1.56), invasive mechanical ventilation (aOR 1.74), non-invasive ventilation (aOR 1.62), septic shock (aOR 1.71), seizures (aOR 1.67), and sudden cardiac arrest (aOR 1.65) (all p < 0.05). ESRD was also associated with longer hospital length of stay (+1.42 days) and higher inflation-adjusted hospitalization costs (+$24,686) compared with matched non-ESRD patients. Among patients hospitalized with DKA, ESRD was not associated with increased in-hospital mortality after adjustment but was linked to substantially greater morbidity and healthcare resource utilization. These findings highlight the need for ESRD-adapted DKA management strategies aimed at reducing complications rather than mortality alone.
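The study's balancing step relied on propensity score matching; its exact specification is not reproduced above. As an illustration of the general mechanic only, the sketch below performs greedy 1:1 nearest-neighbor matching on synthetic scores. In practice the scores would first be estimated, typically with logistic regression of exposure on the covariates.

```python
def greedy_match(treated_ps, control_ps, caliper=0.05):
    """Greedy 1:1 nearest-neighbor matching on propensity scores.

    Each treated unit is paired with the closest still-unmatched
    control whose score differs by at most `caliper`; unmatched
    treated units are simply skipped in this simplified version.
    """
    available = dict(control_ps)              # control id -> score
    pairs = {}
    for tid, ps in sorted(treated_ps.items(), key=lambda kv: kv[1]):
        if not available:
            break
        cid = min(available, key=lambda c: abs(available[c] - ps))
        if abs(available[cid] - ps) <= caliper:
            pairs[tid] = cid                  # accept the match
            del available[cid]                # match without replacement
    return pairs

# Synthetic scores (probability of ESRD given covariates); hypothetical
treated = {"t1": 0.62, "t2": 0.35}
controls = {"c1": 0.60, "c2": 0.36, "c3": 0.90}
matched = greedy_match(treated, controls)
```

The caliper prevents pairing units with dissimilar scores, which is why matched cohorts (like the 39,235-pair cohort above) can be smaller than the raw samples.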