Optimal control theory in epidemiology has been used to establish the most effective intervention strategies for managing and mitigating the spread of infectious diseases while considering constraints and costs. Using Pontryagin's Maximum Principle, indirect methods provide necessary optimality conditions by transforming the control problem into a two-point boundary value problem. However, these approaches are often sensitive to initial guesses and can be computationally challenging, especially when dealing with complex constraints. In contrast, direct methods, which discretise the optimal control problem into a nonlinear programming (NLP) formulation, hold potential for automation and could offer suitable, adaptable solutions for real-time decision-making. Despite this potential, the widespread adoption of these techniques has been limited. Several factors may contribute to this, including limited access to specialised software, a perception of high computational costs, or a general unfamiliarity with these methods. This study investigates the feasibility, robustness, and potential of direct optimal control methods using nonlinear programming solvers on compartmental models described by ordinary differential equations to determine the best application of various interventions, including non-pharmaceutical interventions (NPIs) and vaccination strategies. Through case studies, we demonstrate the use of NLP solvers to determine the optimal application of interventions based on single objectives, such as minimising total infections, "flattening the curve", or reducing peak infection levels, as well as multi-objective optimisation to achieve the best combination of interventions. While indirect methods provide useful theoretical insights, direct approaches may be a better fit for the fast-evolving challenges of real-world epidemiology.
By integrating newly available data more quickly, direct methods can enhance the ability to make informed and timely decisions for managing outbreaks effectively.
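The direct-transcription approach described above can be sketched in a few lines: the control trajectory becomes the NLP decision vector and the compartmental ODE is rolled out inside the objective. The SIR parameters, grid size, cost weight, and solver choice below are illustrative assumptions, not values from the study.

```python
# Hedged sketch: direct transcription of a single-objective SIR optimal
# control problem into an NLP, solved with SciPy's SLSQP solver.
# All parameter values here are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

N_STEPS, T = 40, 100.0                 # coarse control grid (days)
dt = T / N_STEPS
beta, gamma = 0.3, 0.1                 # assumed transmission/recovery rates
s0, i0 = 0.99, 0.01                    # initial susceptible/infected fractions
w = 0.05                               # weight on intervention cost

def simulate(u):
    """Forward-Euler rollout of SIR under piecewise-constant NPI control u,
    where u[k] is the fractional reduction in transmission on step k."""
    s, i, total_inf = s0, i0, 0.0
    for k in range(N_STEPS):
        new_inf = (1.0 - u[k]) * beta * s * i * dt
        s -= new_inf
        i += new_inf - gamma * i * dt
        total_inf += new_inf
    return total_inf

def objective(u):
    # single objective: total infections plus a quadratic control cost
    return simulate(u) + w * dt * np.sum(np.asarray(u) ** 2)

res = minimize(objective, x0=np.full(N_STEPS, 0.2),
               bounds=[(0.0, 0.8)] * N_STEPS, method="SLSQP")
optimal_u = res.x                      # discretised intervention schedule
```

Multi-objective variants can reuse the same transcription by adding weighted terms (e.g. a peak-infection penalty) to `objective`, which is one reason direct methods adapt readily to changing policy goals.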
No universally accepted model exists for predicting bleeding risk in patients receiving low-molecular-weight heparin or fondaparinux. This study leveraged seven machine learning algorithms to build a short-term bleeding risk prediction platform for this population. This retrospective real-world observational study included hospitalized patients who received low-molecular-weight heparin or fondaparinux between January 2022 and December 2023. After applying predefined criteria, the cohort was randomly split into training (70%) and validation (30%) sets. Predictors were identified using LASSO regression. Seven machine learning models, including Logistic Regression (LR), Support Vector Machine (SVM), Gradient Boosting Machine (GBM), Neural Network (NN), extreme gradient boosting (XGBoost), adaptive boosting (AdaBoost), and CatBoost, were developed and evaluated. The best-performing model was implemented as an internal web-based bleeding risk prediction tool. Among 1,691 hospitalized patients receiving low-molecular-weight heparin or fondaparinux, 126 (7.5%) experienced bleeding events. The cohort was randomly split into training (n = 1,184) and validation (n = 507) sets. LASSO regression identified 12 predictors, including surgical site, pre-medication INR, hemoglobin, platelet count, renal function, body mass index (BMI), indication, and comorbidities. In the validation cohort, CatBoost achieved the best discrimination (AUC = 0.659), followed by XGBoost (AUC = 0.651) and LR (AUC = 0.622). CatBoost also demonstrated the highest accuracy (86.0%) and F1 score (0.297), with strong specificity (89.2%) but limited sensitivity (42.9%). Although all models showed robust negative predictive performance (PR-AUC > 0.93), positive predictive capacity was modest (PR-AUC < 0.20) in validation. Based on its overall performance, CatBoost was deployed as an internal web-based bleeding risk calculator.
CatBoost emerged as the optimal model among those tested for predicting bleeding risk in patients receiving low-molecular-weight heparin or fondaparinux, demonstrating modest but superior discrimination, acceptable calibration, and favorable clinical utility. However, the model had limited ability to correctly identify patients who experienced bleeding, as indicated by low positive predictive performance. Given its high negative predictive value, it was better suited for ruling out rather than confirming bleeding risk. A web-based risk calculator based on CatBoost has been developed for internal use. Nevertheless, prospective multicenter validation is required before clinical implementation.
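The LASSO step in the pipeline above can be illustrated with an L1-penalised logistic regression on synthetic data; the feature count, penalty strength, and data-generating effects are assumptions for demonstration, not the study's actual predictors.

```python
# Hedged sketch of LASSO-style predictor selection for a binary bleeding
# outcome, using L1-penalised logistic regression on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n, p = 1000, 20                      # assumed cohort size / candidate count
X = rng.normal(size=(n, p))
# synthetic outcome driven by only 3 of the 20 candidate predictors
logits = 1.5 * X[:, 0] - 1.0 * X[:, 1] + 0.8 * X[:, 2] - 2.5
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

Xs = StandardScaler().fit_transform(X)
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
lasso.fit(Xs, y)
selected = np.flatnonzero(lasso.coef_[0])   # indices of retained predictors
```

In a pipeline like the one described, the retained indices would then feed the seven downstream classifiers rather than the full candidate set.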
Recent studies have applied machine learning (ML)-based limited sampling strategies (LSS) to predict drug exposure (AUC), achieving low prediction error and performance comparable to or better than multiple linear regression and population pharmacokinetic LSS. This study aimed to develop and validate a machine learning-based limited sampling strategy capable of predicting raltegravir (RAL) exposure. Four machine learning algorithms (XGBoost, Random Forest, GLMNet, and SVM) were trained using pharmacokinetic profiles generated via Monte Carlo simulation from a population pharmacokinetic (POPPK) model. Data were divided into training (75%) and test (25%) sets. All possible pair and triplet combinations of steady-state sampling times up to 12 h post-dose were evaluated. Model performance was assessed by the lowest root mean square error (RMSE) in cross-validation, and the best-performing model was evaluated in the test set and externally validated using simulated PK profiles from an independent POPPK model and patient data from a clinical study. XGBoost trained with concentrations at 0.5, 2, and 4 h showed the best predictive performance. The model achieved excellent accuracy in the test set (bias/RMSE: 0.8%/8.7%) and in the independent simulation (1.9%/14.3%). Performance decreased in real patient data (5.0%/24.1%), highlighting the need for caution when extrapolating predictions to populations whose characteristics differ from those represented in the training datasets. A machine learning model using only three sampling timepoints has been developed and validated in different datasets, enabling accurate estimation of RAL AUC₀-₁₂. This approach provides a tool for pharmacokinetic and PK/PD studies and reduces the need for intensive sampling in clinical settings.
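The exhaustive search over sampling-time pairs and triplets can be sketched as follows. For a self-contained example, a generic one-compartment profile simulator and plain linear regression stand in for the study's POPPK simulations and ML models; all pharmacokinetic parameters are assumed.

```python
# Hedged sketch: choosing the best limited-sampling triplet by test-set
# RMSE. One-compartment profiles and linear regression are stand-ins for
# the study's POPPK simulations and ML models; all parameters assumed.
import itertools
import numpy as np

rng = np.random.default_rng(1)
n = 300
ka = rng.lognormal(np.log(1.0), 0.3, n)    # absorption rate (1/h), assumed
ke = rng.lognormal(np.log(0.25), 0.3, n)   # elimination rate (1/h), assumed
ke = np.minimum(ke, 0.8 * ka)              # keep ka > ke for the formula
f_v = rng.lognormal(np.log(10.0), 0.2, n)  # dose/volume scale, assumed

def profiles(times):
    """Concentrations for every subject at the given post-dose times."""
    t = np.asarray(times, float)[None, :]
    k_e, k_a, fv = ke[:, None], ka[:, None], f_v[:, None]
    return fv * k_a / (k_a - k_e) * (np.exp(-k_e * t) - np.exp(-k_a * t))

# reference AUC0-12 by trapezoidal integration on a fine grid
grid = np.linspace(0.01, 12.0, 500)
prof = profiles(grid)
auc = ((prof[:, 1:] + prof[:, :-1]) / 2 * np.diff(grid)).sum(axis=1)

candidates = [0.5, 1, 2, 4, 6, 8, 12]
train, test = slice(0, 225), slice(225, 300)     # 75% / 25% split
best = None
for combo in itertools.combinations(candidates, 3):
    X = np.column_stack([profiles(list(combo)), np.ones(n)])
    coef, *_ = np.linalg.lstsq(X[train], auc[train], rcond=None)
    rmse = np.sqrt(np.mean((X[test] @ coef - auc[test]) ** 2))
    if best is None or rmse < best[0]:
        best = (rmse, combo)
best_rmse, best_times = best
```

The same loop generalises to pairs by changing the combination size, and to any regressor by swapping the least-squares fit for a fitted model.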
Harmful algal blooms (HABs) pose growing risks to drinking-water supplies and ecosystems, yet routine monitoring remains dependent on manual light microscopy. Deep learning offers a potential rapid alternative for automating microscopy, but progress has been constrained by idealised datasets and limited evaluation under real-world conditions. To address these gaps, this study introduces a multi-region microscopy dataset comprising 105 microalgae/cyanobacteria taxa collected from three Australian regions, capturing the diversity and challenges of operational monitoring. Given the dataset's complexity, a state-of-the-art object detection model (YOLOv12) was adopted, and a structured data-centric workflow was developed on one regional dataset, providing a systematic evaluation of dataset design choices in automated microscopy under real-world conditions. The best-performing configuration used an 80/10/10 train/validation/test split, genus-level taxonomic granularity, inclusion of priority taxa, static geometric augmentation, and an input resolution of 640, achieving a mean Average Precision at 0.5 Intersection-over-Union (mAP@0.5) of 0.73 in in-domain evaluation. To assess transferability, the best-performing model was evaluated across regions without retraining, where performance declined substantially (mAP@0.5 ≤ 0.10). Limited target-domain fine-tuning provided partial improvement, but performance remained substantially below in-domain levels, indicating cross-region generalisation challenges. To mitigate this, new models were trained on merged datasets from multiple regions, recovering performance to mAP@0.5 of 0.62-0.67 depending on the regions included. Collectively, these results demonstrate that a structured data-centric workflow can substantially enhance automated microscopy, yet domain generalisation remains a critical bottleneck for deployment.
This study provides both methodological innovation and empirical insight, advancing progress toward scalable AI systems for rapid HAB monitoring.
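The "static geometric augmentation" configuration can be illustrated with deterministic flips and rotations applied offline; the exact transform set used in the study is not specified here, so the list below is an assumption.

```python
# Hedged sketch of static (deterministic, offline) geometric augmentation
# for micrographs. For object detection, bounding boxes would need the
# matching coordinate transforms, which are omitted here.
import numpy as np

def static_geometric_augment(img: np.ndarray) -> list:
    """Return a fixed set of geometric variants of one image:
    identity, horizontal/vertical flips, and 90/180/270-degree rotations."""
    return [
        img,
        np.fliplr(img),
        np.flipud(img),
        np.rot90(img, 1),
        np.rot90(img, 2),
        np.rot90(img, 3),
    ]
```

Applied to each annotated frame before training, this yields six training images per original, with no randomness between epochs, which is what distinguishes "static" from on-the-fly augmentation.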
Water is an essential component of life; thus, access to clean and safe water is crucial for human consumption, agriculture, and other life-sustaining activities. However, water contamination remains a major global concern. Among various pollutants, heavy metals pose a significant threat to the environment and human health due to their high toxicity and carcinogenic nature, which has attracted considerable attention from researchers. In this study, CeO2/coconut shell nanocomposites were synthesized and characterized using X-ray diffraction, scanning electron microscopy, energy-dispersive X-ray spectroscopy, Fourier-transform infrared spectroscopy, TGA, and UV-Visible spectroscopy to investigate their structural, morphological, and chemical properties. The objective of this study is to investigate the efficiency of the synthesized nanocomposites as an economical and environmentally friendly adsorbent for eliminating Cr(VI) from water. Adsorption experiments in batch mode were conducted to determine the effect of key parameters, including initial Cr(VI) concentration, pH, adsorbent dosage, and contact time. The analysis of Cr(VI) was performed using a UV-Visible double-beam spectrophotometer. The synthesized nanocomposites exhibited significant Cr(VI) elimination efficiency. Equilibrium adsorption data were best described by the Langmuir isotherm model, indicating monolayer adsorption on a relatively homogeneous surface, while the Freundlich model suggested the presence of limited surface heterogeneity. To understand the adsorption behavior, five kinetic models (pseudo-first-order, pseudo-second-order, intraparticle diffusion, fractional power, and Elovich) were used to analyze the experimental results. Among these, the pseudo-second-order model showed the best correlation with the experimental data (R² = 0.96088), indicating that chemisorption was the dominant mechanism governing Cr(VI) uptake.
The Elovich model also demonstrated a reasonably high correlation (R² = 0.88706), further supporting the presence of heterogeneous surface interactions and activation energy barriers. The findings suggest that CeO2/coconut shell nanocomposites offer an efficient, eco-friendly, and cost-effective solution for Cr(VI) removal from water.
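The pseudo-second-order fit reported above follows the standard form qt = k2·qe²·t / (1 + k2·qe·t). The sketch below fits it with SciPy on illustrative uptake data, not the study's measurements.

```python
# Hedged sketch: fitting the pseudo-second-order kinetic model to
# illustrative adsorption data (NOT the study's measurements).
import numpy as np
from scipy.optimize import curve_fit

def pso(t, qe, k2):
    """Pseudo-second-order kinetics: qt = k2*qe^2*t / (1 + k2*qe*t)."""
    return (k2 * qe**2 * t) / (1 + k2 * qe * t)

t = np.array([5, 10, 20, 40, 60, 90, 120], float)     # contact time (min)
qt = np.array([3.1, 5.2, 7.6, 9.4, 10.1, 10.6, 10.8])  # uptake (mg/g)

(qe_fit, k2_fit), _ = curve_fit(pso, t, qt, p0=[11.0, 0.01])
ss_res = np.sum((qt - pso(t, qe_fit, k2_fit)) ** 2)
ss_tot = np.sum((qt - qt.mean()) ** 2)
r2 = 1 - ss_res / ss_tot   # coefficient of determination of the fit
```

The same `curve_fit` pattern accommodates the other four kinetic models by swapping in their rate equations and comparing the resulting R² values.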
The estimation of chronological age based on bone mineral density (BMD) metrics for specific anatomical sites is a critical task in forensic anthropology. Although dual-energy X-ray absorptiometry (DXA) scans of the distal 1/3 of radius and ulna are widely used in large-scale osteoporosis screenings, forensic studies leveraging such data remain scarce. This study utilized a retrospective dataset (spanning ages 12-96) of 5,134 DXA scans from the distal 1/3 radius and ulna. We analyzed these DXA scans with metadata, including sex, body mass index (BMI), and osteoporosis diagnoses, to train machine learning models. Linear regression (LR), support vector regression (SVR), random forest regression (RFR), XGBoost (XGB), and LightGBM (LGBM) models were optimized via Bayesian cross-validation. Results indicate that the simplest model, constructed solely from BMD data + Diagnoses, showed good performance with a mean absolute error (MAE) of 2.40 years. The best-performing model was the RFR model built using the combination of Female + Diagnoses, with an MAE of 2.18 years. When only considering BMI, the best model was the RFR model for the Normal weight + Diagnoses combination, with an MAE of 2.54 years. These models have been integrated into the AgeMiner tool (https://github.com/Rarapie/AgeMiner), allowing forensic users to select the optimal model according to the metadata of the tested person, thereby enabling fast and end-to-end chronological age estimation. In summary, AgeMiner and its integrated ML models provide an efficient, accurate, and customizable tool for forensic age estimation in adults and the elderly.
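A minimal sketch of the BMD + diagnosis setup: a random forest regressor evaluated by cross-validated MAE. The synthetic ageing curve and noise level are assumptions, so the resulting error will be far from the study's 2.18-2.54-year range.

```python
# Hedged sketch: age regression from BMD-like features with cross-validated
# MAE. Data are synthetic; the ageing curve and noise are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n = 800
age = rng.uniform(12, 96, n)
# synthetic BMD declining with age plus noise; diagnosis as a coarse label
bmd = 0.75 - 0.004 * (age - 30).clip(0) + rng.normal(0, 0.05, n)
diagnosis = (bmd < 0.55).astype(float)   # crude osteoporosis proxy
X = np.column_stack([bmd, diagnosis])

rfr = RandomForestRegressor(n_estimators=100, random_state=0)
mae = -cross_val_score(rfr, X, age, cv=5,
                       scoring="neg_mean_absolute_error").mean()
```

Hyperparameter tuning in the study used Bayesian cross-validation; the sketch keeps defaults for brevity, but the evaluation scaffold is the same.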
The 210Pb dating technique is widely applied for reconstructing sediment accumulation in aquatic environments. However, its reliability depends strongly on the choice of an appropriate age model and on independent validation. In this study, previously reported sedimentation rates for the highly dynamic coastal environment of Sorsogon Bay, Philippines, derived using the Constant Initial Concentration (CIC) model, were reassessed to validate and improve chronological robustness. Guided by statistical and physical assumptions, and supported by the results of the CF:CS and CRS age models and by the evaluation of regression intervals of excess 210Pb activity-concentration profiles, best-fit average sedimentation rates were determined for the three sediment cores SO-01 (CAS), SO-03 (CAD), and SO-07 (SAM). Results of the Mann-Whitney U test revealed the influence of natural disturbances, such as eruptions of Mt. Bulusan and major typhoon events that hit the Bicol Region, on sediment characteristics, particularly dry bulk density (DBD) and calculated mass accumulation rate (MAR). This further strengthens the qualitative association of DBD and MAR peaks with the documented natural disturbances. This study demonstrates the importance of multi-proxy validation and the Mann-Whitney U test in strengthening 210Pb-derived chronologies, which provide a more robust foundation for investigating land-use change, coastal evolution, pollution histories, climate variability, and other natural and anthropogenic drivers of sediment dynamics over the past century.
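The CF:CS check used above rests on a log-linear relation: under constant flux and constant sedimentation, excess 210Pb declines as A(z) = A0·exp(-λz/s), so the regression slope of ln A versus depth gives s = -λ/slope. A minimal sketch on a synthetic profile (activity values assumed):

```python
# Hedged sketch: CF:CS sedimentation rate from an excess-210Pb profile.
# The synthetic activity profile below is illustrative, not core data.
import numpy as np

LAMBDA_PB210 = np.log(2) / 22.3   # 210Pb decay constant (1/yr)

def cfcs_rate(depth_cm, excess_pb210):
    """CF:CS model: ln(excess 210Pb) declines linearly with depth;
    the slope m gives sedimentation rate s = -lambda / m (cm/yr)."""
    m, _ = np.polyfit(depth_cm, np.log(excess_pb210), 1)
    return -LAMBDA_PB210 / m

# synthetic profile generated with s = 0.5 cm/yr
depth = np.arange(1, 21, dtype=float)
activity = 120.0 * np.exp(-LAMBDA_PB210 * depth / 0.5)
s_est = cfcs_rate(depth, activity)
```

Evaluating different regression intervals, as the study does, amounts to calling `cfcs_rate` on depth sub-ranges and comparing the resulting rates and fit quality.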
Arrhythmogenic cardiomyopathy (ACM) is a genetic form of heart failure that affects 1 in 5,000 people globally and is caused by mutations in cardiac desmosomal genes including PKP2, DSP, and DSG2. Individuals with ACM suffer from ventricular arrhythmias, sudden cardiac death, and heart failure. There are few effective treatments and heart transplantation remains the best option for many affected individuals. Here we performed single nucleus RNA sequencing and spatial transcriptomics on myocardial samples from patients with ACM and control donors. We identified disease-associated spatial niches characterized by coexistence of fibrotic and inflammatory cell types and failing cardiac myocytes. The inflammatory-fibrotic niche colocalized to areas of cardiac myocyte loss and comprised FAP (fibroblast activation protein) and POSTN (periostin) expressing fibroblasts, macrophages that expressed NLRP3, and nuclear factor κB activated genes. Using homozygous Dsg2 mutant (Dsg2mut/mut) mice, we identified analogous populations of Postn-expressing fibroblasts and inflammatory macrophage populations that co-localized within diseased areas. Detailed single nucleus RNA-sequencing analysis of inflammatory macrophage subsets that were increased in ACM samples revealed high levels of Il1b expression. To delineate the possible benefit of targeting IL1B in ACM, we treated Dsg2mut/mut mice with an anti-IL1B neutralizing antibody and observed attenuated fibrosis, reduced levels of inflammatory cytokines and chemokines, preserved cardiac function, and diminished conduction slowing and automaticity, key mechanisms of arrhythmogenesis. These results suggest that currently approved therapeutics that target IL1B or IL1 signaling may improve outcomes for patients with ACM.
To develop clinically practical, sex-specific prediction models for identifying treatment-requiring retinopathy of prematurity (TR-ROP) without relying on fundus photography, and to evaluate their generalizability, efficiency, productivity, and interpretability. We selected premature infants who were at risk of TR-ROP and received fundus examination between 2012 and 2022. A Logistic Regression (LR) model, a Random Forest-LR model, and a LASSO-LR model were constructed, and the best-performing model was chosen to predict the occurrence of TR-ROP. Among 7,235 preterm infants who received ROP screening, 408 (5.63%) developed TR-ROP. The median follow-up time was 24 months. Males and females shared some modifiable risk and protective factors but also presented independent risk factors. The sex-specific models selected by LR, based on birth weight, gestational age, hypoxic ischemic encephalopathy, multiple births, and blood transfusion (male) and on birth weight, gestational age, head circumference, cesarean delivery, and blood transfusion (female), showed promising results in predicting TR-ROP in the internal validation cohort (male: AUC 0.855-0.981, specificity 0.895; female: AUC 0.950-0.995, specificity 1.000). These sex-specific models also performed well in the external validation cohorts (male: AUC 0.806-0.951, specificity 0.824; female: AUC 0.625-0.919, specificity 0.727). The C-index showed that the sex-stratified models displayed better clinical predictive utility than the overall model. Our study provides a sex-specific clinical risk prediction tool for TR-ROP, which may help identify preterm infants' potential risk profiles, reduce unnecessary fundus examinations, and provide guidance to prevent disease progression.
A prospective observational cohort study. To determine whether machine learning models using radiomic features derived from preoperative MRI, clinical variables, or their combination can predict achievement of the minimum clinically important difference (MCID) in function and quality of life after surgery for degenerative cervical myelopathy (DCM). Predicting surgical outcomes in DCM remains challenging, as conventional MRI and clinical scores incompletely reflect spinal cord pathology. Radiomics quantifies voxel-level intensity and texture patterns from routine MRI, providing quantitative measures of tissue heterogeneity that may serve as imaging biomarkers of recovery potential. Forty-six patients with DCM underwent preoperative 3D T2-weighted MRI and surgical decompression. Spinal cord radiomic features (Shape3D, First-Order, GLCM, and GLSZM) were extracted using PyRadiomics. Baseline clinical variables included age, sex, duration of symptoms, T2 hyperintensity, and functional status assessed with baseline mJOA and SF-36 PCS scores. Three-month MCID achievement was defined using established thresholds. Predictive models were developed using radiomic features, clinical variables, or their combination. For mJOA MCID, the combined radiomics-clinical model achieved the best performance (AUC = 0.88 ± 0.13). For SF-36 PCS MCID, the combined model achieved an AUC of 0.78 ± 0.17 and an AUC-PR of 0.82 ± 0.14. SHapley Additive exPlanations identified texture-based radiomic features and age as dominant predictors for mJOA MCID, whereas first-order radiomic features and baseline SF-36 PCS were most influential for SF-36 PCS MCID. MRI-based spinal cord radiomics improves prediction of meaningful postoperative recovery beyond clinical data, supporting their potential as imaging biomarkers for individualized prognostication in DCM.
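The GLCM features named above summarise texture from gray-level co-occurrence counts. A toy, dependency-free version of one such feature (contrast) is sketched below; PyRadiomics computes these over many offsets and with its own intensity binning, so this is only illustrative.

```python
# Hedged sketch: GLCM "contrast" for one pixel offset, a toy version of
# the texture features PyRadiomics extracts from segmented spinal cord.
import numpy as np

def glcm_contrast(img, levels, offset=(0, 1)):
    """Build a co-occurrence matrix P for the given offset over a
    small integer-valued image, then return contrast = sum P(i,j)*(i-j)^2."""
    dr, dc = offset
    P = np.zeros((levels, levels))
    rows, cols = img.shape
    for r in range(rows - dr):
        for c in range(cols - dc):
            P[img[r, c], img[r + dr, c + dc]] += 1
    P /= P.sum()
    i, j = np.indices(P.shape)
    return float(np.sum(P * (i - j) ** 2))
```

High contrast indicates large local intensity jumps, which is one way tissue heterogeneity on T2-weighted MRI enters the radiomic feature vector.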
This article highlights best practices in molecular pathology, emphasizing the impact of preanalytical variables on nucleic acid quality and test performance. It covers optimal specimen selection, handling, and preservation strategies for tissue, cytology, and liquid biopsy, and discusses assay selection based on clinical context, specimen adequacy, and intended use. The importance of standardized, guideline-driven molecular reporting and variant classification is emphasized. By integrating these principles, laboratories can maximize diagnostic yield, ensure clinically meaningful results, and support precision medicine.
Crossed cerebellar diaschisis (CCD) is characterized by reduced perfusion and metabolism in the cerebellar hemisphere contralateral to a supratentorial lesion. In large-vessel occlusion acute ischemic stroke (LVO-AIS), CCD may result from hemodynamic impairment, structural injury, or both. From a blood-oxygenation-level-dependent cerebrovascular reactivity (BOLD-CVR) imaging database, we identified patients with anterior-circulation LVO-AIS who underwent BOLD-CVR MRI within 7 days of symptom onset. Patients were stratified into those with persistent occlusion (non-endovascular thrombectomy, non-EVT) and those imaged after successful reperfusion (EVT). CCD was defined by a cerebellar asymmetry index > 12%. Associations between CCD and imaging markers of structural injury (infarct lesion volume) and hemodynamic impairment (steal phenomenon volume) as well as associations with 90-day functional outcome were assessed using logistic regression models. Sensitivity analyses included multiple imputation and best-/worst-case scenarios for missing outcomes. Seventy-nine patients were included (23 EVT, 56 non-EVT). CCD was present in 35% of EVT and 41% of non-EVT patients. In non-EVT patients, CCD was independently associated with larger steal phenomenon volumes (adjusted OR 1.99; 95% CI 1.12-3.73), but not infarct size. In EVT patients, CCD was associated with larger infarct lesions (adjusted OR 5.75; 95% CI 1.41-68.92) but not steal phenomenon volume. CCD predicted poorer 90-day outcome only in non-EVT patients in complete-case analysis, but this association was not robust in sensitivity analyses. CCD in acute LVO-AIS reflects different mechanisms depending on occlusion status: hemodynamic impairment under persistent occlusion and structural injury after reperfusion. BOLD-CVR imaging provides insight into CCD, though larger studies are needed to clarify its prognostic value.
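For context, a cerebellar asymmetry index is typically a percent difference in perfusion (or BOLD-CVR signal) between hemispheres; the study's exact operational definition may differ, so the form below is an assumption.

```python
# Hedged sketch: one common form of a cerebellar asymmetry index (AI),
# applied to the >12% CCD threshold used in the abstract. The study's
# exact operational definition may differ.
def cerebellar_asymmetry_index(normal_side, affected_side):
    """Percent reduction of the hypoperfused cerebellar hemisphere
    relative to the contralateral (normal) side."""
    return (normal_side - affected_side) / normal_side * 100.0

# hypothetical hemispheric perfusion values: AI = 20%, above the 12% cutoff
has_ccd = cerebellar_asymmetry_index(50.0, 40.0) > 12.0
```

With hemisphere means extracted from the BOLD-CVR maps, this single comparison is all that the CCD stratification requires.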
Background: People living with dementia (PLWD) with advanced illness are prone to respiratory distress yet often cannot self-report dyspnea, delaying recognition and treatment. Near-field radio-frequency (NFRF) sensors offer touchless, unobtrusive cardiopulmonary monitoring that may be better tolerated than tethered devices. Objectives: To assess the feasibility and acceptability of an NFRF bed sensor for home monitoring of PLWD and to estimate machine-learning (ML) performance for detecting respiratory distress. Methods: In a 48-hour pilot study, PLWD were recruited from a geriatrics practice. A lab-designed NFRF bed sensor recorded cardiopulmonary waveforms. Recorded video enabled minute-level Respiratory Distress Observation Scale (RDOS) scoring as the reference. Feasibility outcomes included adverse events, acceptability, and percentage of usable data. ML classifiers (eg, random forest, k-nearest neighbors) were evaluated using 5-fold cross-validation, and class imbalance was addressed through data augmentation. Results: Ten patient-legally authorized representative dyads were enrolled. No adverse events were reported, and no participants intentionally removed the sensor. Usable data averaged 52% (range 34-68%). Caregivers reported minimal burden and no patient distress. With augmented data, the random forest performed best, achieving 74.6% sensitivity and 95.5% specificity in detecting respiratory distress defined by RDOS scores. Conclusions: NFRF bed sensors were feasible and acceptable in the home setting with PLWD, with promising ML-based detection of respiratory distress. Larger, longer studies with a broader range of RDOS severity are needed to validate performance and refine deployment. As this technology matures, it could provide non-invasive continuous monitoring to detect respiratory distress in PLWD in palliative care settings.
Sex-specific associations between adiposity and death from primary liver cancer (PLC) remain poorly characterized, particularly those related to emerging anthropometric indices and populations with high abdominal adiposity. In two prospective cohort studies, the sex-specific associations between 16 anthropometric indices (six traditional, 10 emerging) and PLC death were examined in 72,691 women and 59,892 men in China. Cox proportional hazards regression models with restricted cubic spline functions were applied to evaluate the associations between adiposity indices and PLC death. After a median follow-up time of 22.0 years for women and 16.1 years for men, 300 women and 485 men died from PLC. We observed distinct risk patterns: women demonstrated positive linear associations for 11 indices (five traditional, six emerging) with PLC death, with highest-quartile individuals showing a 46-82% elevated death risk compared with lowest-quartile counterparts, especially in premenopausal women. Conversely, men displayed U-shaped associations for 10 indices, indicating that death risk was minimized at moderate adiposity levels. Notably, the combined model of the Clínica Universitaria de Navarra Body Adiposity Estimator (CUN-BAE) and BMI best predicted PLC death in both sexes, highlighting the need for sex-specific adiposity management. Sex-specific differences exist in the association between adiposity and PLC death. The nonlinear pattern observed in men warrants mechanistic investigation for potential hormonal or metabolic mediators of the adiposity-PLC death relationship. This study represents the first comprehensive investigation into the sex-specific associations between 16 anthropometric indices (traditional/emerging) and liver cancer death in two Chinese cohorts. Findings show distinct patterns: positive linear in women (especially premenopausal women) and U-shaped associations in men, with the lowest death risk observed at moderate adiposity levels. 
Models incorporating CUN-BAE and BMI improved prediction of liver cancer death in both sexes. These findings highlight adiposity as a crucial modifiable risk factor for PLC, urging policymakers to adopt sex-specific adiposity strategies, encouraging healthcare professionals and researchers to investigate the underlying mechanisms of the observed nonlinear associations in men, and supporting use of the superior predictive power of combined models, such as CUN-BAE with BMI, to identify high-risk individuals.
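The CUN-BAE estimator mentioned in both paragraphs is a closed-form body-fat equation in age, sex, and BMI. The coefficients below are reproduced from the published equation as commonly cited; verify them against the original source before any reuse.

```python
# Hedged sketch of the CUN-BAE body-fat (%) equation. Coefficients are
# reproduced from the literature as commonly cited; verify against the
# original publication before reuse. Sex coding: 0 = male, 1 = female.
def cun_bae(bmi, age, female):
    s = 1.0 if female else 0.0
    return (-44.988 + 0.503 * age + 10.689 * s + 3.172 * bmi
            - 0.026 * bmi**2 + 0.181 * bmi * s
            - 0.02 * bmi * age - 0.005 * bmi**2 * s
            + 0.00021 * bmi**2 * age)
```

Because CUN-BAE folds age and sex into the adiposity estimate, combining it with raw BMI gives the model both a body-fat proxy and body size, which is consistent with the combined model performing best in both sexes.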
Lynch syndrome (LS) is a hereditary condition associated with an increased susceptibility to developing cancer, primarily colorectal and gynaecological cancer (endometrial cancer and ovarian cancer). The European Society of Gynaecological Oncology (ESGO) nominated fifteen practicing multidisciplinary clinicians with expertise in this field and ten gynaecological and oncological fellows with an interest in these topics to develop evidence-based statements, sharing and standardizing the management of LS carriers. Published evidence was integrated with clinical experience to reach Consensus Statements through anonymous voting. In this Consensus, thirty-one statements based on the best available evidence and expert agreement are offered. They focus on genetic and cancer risk counseling principles, screening procedures, risk-reducing surgical and medical strategies, and emerging topics such as reproductive issues for LS carriers, which are important in current practice. This manuscript reports the Statements that reached a consensus, their voting results, and a summary of supporting evidence.
Person-centered care is considered best practice in dementia care, emphasizing autonomy, dignity, and relationship-based individualized care. However, little is known about how person-centered dementia care (PCDC) is implemented in low-resource long-term care (LTC) settings. This study identified PCDC strategies used by staff providing care for residents with dementia in low-resource LTC settings and key facilitators supporting the use of PCDC strategies. We conducted a qualitative analysis of semi-structured interviews with 27 staff (20 direct care staff and 7 administrators) from four LTC facilities (nursing homes and assisted living) in urban Maryland and rural New Hampshire. Participants were drawn from a larger study in federally designated medically underserved areas. Template analysis was used to analyze data and identify themes related to PCDC strategies and facilitators. LTC staff described PCDC strategies for residents with dementia across three domains: communication-based interactional approaches, preserving dignity and autonomy, and tailoring care to individual preferences. Key facilitators identified included fostering communication, responsiveness to residents' needs, organizational support, and resource optimization. Despite limited resources, information-sharing systems, teamwork, engagement with care partners, positive attitudes, motivation, empowerment, adaptability, and dementia training facilitated PCDC implementation, highlighting that multilevel facilitators are key to delivering quality dementia care. Findings emphasize the importance of communication and teamwork, responsiveness to residents' needs, supportive organizational structures, and resource optimization in implementation of PCDC in low-resource settings. Future research should incorporate the perspectives of residents and care partners and examine PCDC implementation across broader contexts.
Despite the widespread use of optimization-based classification methods in medical data analysis, many existing approaches suffer from premature convergence and limited robustness when dealing with complex and heterogeneous datasets. To address these limitations, this study presents a chaos-enhanced, fox-inspired classification framework derived from the Fox Optimization Algorithm. The proposed method employs a Gauss/Mouse chaotic map to regulate the exploration-exploitation balance through the control variable, while preserving the original algorithmic structure without introducing additional parameters. The framework adopts a clustering-based classification strategy in which cluster centers are optimized using the proposed method, and class labels are assigned via distance-based nearest-neighbor analysis. The approach was evaluated on six publicly available medical datasets, including Breast Cancer Wisconsin Diagnostic, Breast Cancer Wisconsin Original, Dermatology, Thyroid, Hepatitis, and Heart, using accuracy, precision, sensitivity, and specificity as evaluation metrics. Experimental results demonstrate that the proposed framework achieves statistically significant and consistent classification performance, attaining the best overall average rank (1.16) in the Friedman test (p = 0.0012) and outperforming several baseline methods. Performance improvements over benchmark methods were observed across multiple datasets, while comparable results were obtained on others. The incorporation of chaotic dynamics effectively enhances search behavior by mitigating premature convergence. Overall, the findings indicate that the proposed framework provides stable and reproducible classification performance across benchmark medical datasets. Future studies may extend this work through external clinical validation and alternative methodological integrations.
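The Gauss/mouse map has several published variants; the sketch below uses the common fractional form x_{k+1} = frac(1/x_k) (with 0 mapped to 0) and shows how such a sequence could modulate a bounded control variable. The paper's exact variant and scaling are not specified here, so treat both as assumptions.

```python
# Hedged sketch: Gauss/mouse chaotic map in one common published form,
# used to modulate a bounded control variable. The paper's exact variant
# and scaling may differ.
import numpy as np

def gauss_mouse_map(x0, n):
    """Iterate x_{k+1} = frac(1/x_k), with 0 mapped to 0."""
    xs = [x0]
    for _ in range(n - 1):
        x = xs[-1]
        xs.append(0.0 if x == 0 else (1.0 / x) % 1.0)
    return np.array(xs)

chaos = gauss_mouse_map(0.7, 100)
# scale the chaotic sequence into an assumed control-variable range
lo, hi = 0.1, 0.9
control = lo + (hi - lo) * chaos
```

Replacing a uniform random draw with `control[k]` at iteration k is the usual way a chaotic map perturbs the exploration-exploitation balance without adding tunable parameters.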
Carlevale and retropupillary iris-claw Artisan intraocular lenses (IOLs) treat aphakia without capsular support, but their relative performance is uncertain. PubMed, Embase, and the Cochrane Library were searched to March 2025. Outcomes assessed were best-corrected visual acuity (BCVA), surgically induced astigmatism (SIA), mean absolute refractive error (MARE), and mean refractive error (MRE), together with operating time and postoperative complications. Random-effects meta-analysis with I² and incision-type subgroups was performed. Five studies comprising 631 eyes (229 Carlevale, 402 iris-claw) met inclusion criteria. Mean age was 70.1 ± 14.1 years, 61.18% were male; follow-up ranged from 1.3 to 11.5 months. BCVA did not differ between groups (-0.01 logMAR; 95% CI -0.13 to 0.11; p = 0.91; I² = 43%). Carlevale reduced SIA (-0.53 D; 95% CI -1.03 to -0.04; p = 0.03; I² = 73.7%); however, the benefit was confined to corneal-incision iris-claw comparators, not scleral-incision ones. MARE showed no overall difference, yet corneal-incision iris-claw cases were less predictable (MD -0.32; 95% CI -0.62 to 0.19; p = 0.30; I² = 81.7%). Carlevale produced a myopic shift relative to iris-claw (-0.66 D; 95% CI -0.87 to -0.46; p < 0.01; I² = 31.3%). Carlevale procedures were 11.9 min longer (95% CI 5.2-18.6; p < 0.01; I² = 80.2%). Complication rates were comparable overall except for fewer IOL dislocations with Carlevale (OR 0.16; 95% CI 0.03-0.87; p = 0.034; I² = 0%). Both lenses provide similar visual acuity and safety in aphakic eyes lacking capsular support. Carlevale confers lower dislocation risk and greater refractive predictability relative to corneal-incision iris-claw implantation, at the expense of a longer operating time. Incision-related heterogeneity highlights the need for standardised surgical and reporting frameworks.
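The pooled estimates and I² values quoted above come from random-effects meta-analysis. A compact DerSimonian-Laird sketch is shown below on illustrative effect sizes, not the review's extracted data.

```python
# Hedged sketch: DerSimonian-Laird random-effects pooling with Cochran's Q,
# tau^2, and I^2. Effect sizes and variances below are illustrative only.
import numpy as np

def dersimonian_laird(effects, variances):
    """Return (pooled random-effects estimate, tau^2, I^2 in percent)."""
    y, v = np.asarray(effects, float), np.asarray(variances, float)
    w = 1.0 / v
    mu_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - mu_fixed) ** 2)       # Cochran's Q
    df = len(y) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)             # between-study variance
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    w_re = 1.0 / (v + tau2)                   # random-effects weights
    mu_re = np.sum(w_re * y) / np.sum(w_re)
    return mu_re, tau2, i2

# illustrative SIA mean differences (D) and variances per study
mu, tau2, i2 = dersimonian_laird([-0.9, -0.3, -0.6, -0.2, -0.7],
                                 [0.04, 0.05, 0.03, 0.06, 0.04])
```

Subgroup analyses (corneal- vs scleral-incision comparators) simply apply the same pooling to each subset of studies and compare the resulting estimates.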