Robotic-assisted CABG is a minimally invasive alternative to conventional sternotomy. The purpose of this study was to evaluate the short- and mid-term outcomes of our experience and to enrich the literature with quality comparisons of these procedures. From 1 October 2020 to 1 July 2025, 927 CABG procedures were performed at a single institution. Two groups of patients were analyzed: the robotic-assisted minimally invasive CABG group (RA-CABG) and the conventional sternotomy CABG group (CS-CABG). This was a retrospective comparison of all consecutive patients undergoing conventional CABG and RA-CABG, using propensity score matching with 24 preoperative covariates. Of the 927 cases, 480 patients were matched, with 240 patients in each of the RA-CABG and CS-CABG groups. The matching eliminated all preoperative differences between the groups. There were three conversions to median sternotomy in the robotic group. The mean number of distal anastomoses per patient was 2.84 in the RA-CABG group and 3.05 in the CS-CABG group. Cross-clamp and cardiopulmonary bypass times were significantly shorter in the CS-CABG group (cross-clamp: 59 vs. 47 minutes; CPB: 95 vs. 67 minutes; p = 0.001). The RA-CABG group had significantly (p < 0.001) lower 24-h postoperative blood loss (400 mL vs. 800 mL), fewer blood transfusions, shorter mechanical ventilation time (6 vs. 10 hours), shorter ICU stay (20 vs. 28 hours), and shorter hospital stay (6 vs. 7 days). Early mortality (in-hospital and operative) rates were similar between the groups: 0.8% for RA-CABG and 1.3% for CS-CABG (p = 0.683). Likewise, there was no significant difference in mid-term mortality between the two groups (p = 0.258). RA-CABG is a safe and feasible alternative to conventional sternotomy CABG.
This method offers advantages such as avoiding sternotomy, providing greater comfort to the surgeon during LIMA harvesting, and enabling longer LIMA harvesting.
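The propensity-score matching step described above can be illustrated with a minimal sketch of greedy 1:1 nearest-neighbor matching on a precomputed propensity score. This is only one common matching strategy; studies of this kind typically use dedicated software, and all identifiers and the caliper value here are illustrative, not taken from the study.

```python
def greedy_match(treated, control, caliper=0.05):
    """Greedy 1:1 nearest-neighbor matching on precomputed propensity scores.

    treated, control: lists of (patient_id, propensity_score) tuples.
    Returns a list of (treated_id, control_id) pairs; each control is used
    at most once, and pairs farther apart than the caliper are skipped.
    """
    available = dict(control)  # id -> score, controls still unmatched
    pairs = []
    # Match hardest-to-match (extreme-score) treated patients first.
    for t_id, t_score in sorted(treated, key=lambda x: x[1], reverse=True):
        if not available:
            break
        c_id = min(available, key=lambda c: abs(available[c] - t_score))
        if abs(available[c_id] - t_score) <= caliper:
            pairs.append((t_id, c_id))
            del available[c_id]
    return pairs
```

Balance on the 24 covariates would then be checked on the matched pairs, e.g., via standardized mean differences.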
In the context of global population aging, the prevalence of knee joint disorders continues to rise. Concurrently, the integration of robotic systems and intelligent implants represents an inevitable trend in orthopedic surgery. A comprehensive evaluation of the safety and effectiveness of robot-assisted total knee arthroplasty (RA-TKA) is therefore urgently needed to inform clinical decision-making. The objective was to compare nine RA-TKA systems across eight outcome measures. A systematic literature search was conducted in the PubMed, Web of Science, Embase, Cochrane Library, CBM, CNKI, Wanfang, and VIP databases from inception to December 1, 2025. Risk of bias and methodological quality were assessed with Review Manager (version 5.4). Network meta-analysis was performed in R (version 4.4.1) via RStudio. A total of 36 studies involving 2841 patients were included. In direct comparisons, conventional TKA (C-TKA) yielded shorter operative times than MAKO, HURWA, SkyWalker, ROSA, and Brainlab Knee. CORI also had a shorter operative time than Brainlab Knee. Compared with the C-TKA, MAKO, HURWA, SkyWalker, and TiRobot groups, the ROSA group had higher KSS-knee scores. In addition, C-TKA, HURWA, and CORI had higher KSS-knee scores than SkyWalker. For KSS-function scores, C-TKA and ROSA scored higher than HURWA. C-TKA demonstrated greater postoperative ROM than HURWA. For HKA angle deviation, C-TKA resulted in greater deviation than MAKO, HURWA, SkyWalker, TiRobot, and EPMEDBOT. In the comprehensive best-probability ranking, C-TKA (93%) ranked highest for operative time. SkyWalker (87%) ranked highest for blood loss. SkyWalker (91%) ranked highest for KSS-knee scores. HURWA (87%) ranked highest for KSS-function scores. MAKO (85%) ranked highest for HSS. YUANHUA (76%) ranked highest for WOMAC. CORI (69%) ranked highest for ROM.
SkyWalker (87%) ranked highest for HKA angle deviation. Overall, RA-TKA demonstrated superior safety and effectiveness compared with C-TKA, with different robotic systems exhibiting distinct advantages across outcome measures. Nevertheless, C-TKA retains a significant advantage in reducing the operative time, highlighting an important area for further optimization of robotic-assisted TKA.
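The "best probability ranking" quoted above is typically computed from posterior draws of a Bayesian network meta-analysis: for each MCMC iteration, the treatment with the most favorable draw is recorded, and the proportion of iterations in which each treatment wins is its P(best). A minimal sketch, with made-up draws rather than the study's posterior samples:

```python
import numpy as np

def prob_best(samples, smaller_is_better=True):
    """P(best) per treatment from posterior draws.

    samples: dict mapping treatment name -> 1-D array of posterior draws of
    the outcome (e.g., mean operative time). For each draw, the treatment
    with the most favorable value "wins"; P(best) is its win fraction.
    """
    names = list(samples)
    draws = np.column_stack([samples[n] for n in names])  # (n_iter, n_trt)
    best = draws.argmin(axis=1) if smaller_is_better else draws.argmax(axis=1)
    counts = np.bincount(best, minlength=len(names))
    return dict(zip(names, counts / len(draws)))
```

SUCRA rankings generalize this by averaging over all rank positions rather than only first place.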
To compare perioperative outcomes between the 48-h short-stay pathway and traditional inpatient management for patients undergoing robot-assisted partial nephrectomy (RAPN), and to evaluate the feasibility, safety, recovery efficiency, and economic benefits of the 48-h short-stay pathway. This retrospective study included 175 patients who underwent RAPN between February 2022 and June 2024. Patients were assigned to a 48-h short-stay group (n = 60) or a traditional inpatient group (n = 115). A 1:1 propensity score matching (PSM) was conducted to balance baseline characteristics, including age, sex, BMI, comorbidities, tumor features, surgeon identity, and surgical year. Perioperative outcomes, recovery indicators, complications, and medical costs were compared. After PSM, 53 matched pairs were analyzed. The short-stay group showed significantly shorter operative time, less intraoperative blood loss, shorter warm ischemia time, earlier mobilization, earlier oral intake, faster bowel function recovery, and shorter bed rest (all P < 0.05). The short-stay group had 71.7% of patients discharged on postoperative day (POD) 1 and 100% within 48 h, while the traditional group had 22.6% on POD1, 33.96% on POD2, and 43.4% on POD ≥ 3 (P < 0.001). Both total and postoperative hospital stays were significantly shorter in the short-stay group (2.00 vs. 6.00 days, P < 0.001), with lower hospitalization costs (P < 0.001). Postoperative creatinine was lower in the short-stay group (P = 0.023), while creatinine change was comparable (P = 0.063). Complication rates, emergency department visits, and 30-day readmission rates were similar between groups (all P > 0.05). The short-stay group had a significantly lower drain placement rate (P = 0.002) without increased adverse events. The 48-h short-stay pathway for selected patients undergoing RAPN is feasible and safe. 
It accelerates postoperative recovery, shortens hospital stay, reduces medical costs, and optimizes healthcare resource utilization, without compromising safety or early oncological outcomes.
The accuracy of non-invasive prenatal testing (NIPT) for common trisomies remains unclear in pregnancies following assisted reproductive technology (ART), and the adoption of NIPT as a prenatal screening test in ART pregnancies has been cautious due to the absence of clear recommendations. The objective was to estimate the accuracy of NIPT in screening for common chromosomal abnormalities compared with conventional karyotype or microarray testing in ART pregnancies. A comprehensive search of PubMed, Scopus, and Embase was conducted. A systematic review and meta-analysis was performed, including cross-sectional and cohort studies of antenatal women who conceived following ART and opted for prenatal testing with NIPT. The outcomes were pooled sensitivity and specificity of NIPT for common chromosomal abnormalities (trisomies 21, 18, and 13). We identified a total of 548 records through electronic searches and included 13 studies in the quantitative synthesis. The pooled sensitivity and specificity for combined abnormalities in singleton ART pregnancies were 88.2% (95% confidence interval, CI 61.0-97.3%) and 99.6% (95% CI 98.4-99.9%), respectively. Similarly, for twin ART pregnancies, the pooled sensitivity and specificity for combined abnormalities were 88.2% (95% CI 66.4-96.6%) and 99.8% (95% CI 99.6-99.9%), respectively. The pooled sensitivity and specificity for trisomy 21 in singleton ART pregnancies were 87.2% (95% CI 59.0-97.0%) and 99.7% (95% CI 98.8-99.9%), respectively, while in ART twin pregnancies, the pooled sensitivity was 86.9% (95% CI 63.4-96.2%) and the specificity was 99.8% (95% CI 99.6-99.9%). While the specificity of NIPT is high in ART-conceived pregnancies, its sensitivity in detecting common fetal chromosomal aneuploidies, in particular trisomy 21, is substantially lower than in naturally conceived pregnancies.
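Pooled sensitivities like those above are usually estimated with a bivariate hierarchical model; the mechanics can nevertheless be illustrated with a simpler fixed-effect inverse-variance pooling of per-study sensitivities on the logit scale. The 2x2 counts below are hypothetical, not data from the included studies:

```python
import math

def pooled_logit(tp_fn_pairs):
    """Fixed-effect inverse-variance pooling of sensitivities on the logit
    scale, with a 0.5 continuity correction for sparse 2x2 cells.

    tp_fn_pairs: list of (true_positives, false_negatives) per study.
    Returns (pooled_sensitivity, lo95, hi95).
    """
    num = den = 0.0
    for tp, fn in tp_fn_pairs:
        tp, fn = tp + 0.5, fn + 0.5        # continuity correction
        logit = math.log(tp / fn)
        var = 1.0 / tp + 1.0 / fn          # large-sample variance of the logit
        num += logit / var
        den += 1.0 / var
    mean, se = num / den, math.sqrt(1.0 / den)
    inv = lambda x: 1.0 / (1.0 + math.exp(-x))
    return inv(mean), inv(mean - 1.96 * se), inv(mean + 1.96 * se)
```

The wide sensitivity intervals reported in the abstract reflect the small numbers of affected fetuses per study, which this variance term makes explicit.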
Paris polyphylla Smith var. yunnanensis (Franch.) Hand.-Mazz. (P. polyphylla var. yunnanensis) is a perennial herb of the genus Paris. As an important medicinal resource, P. polyphylla var. yunnanensis is facing exhaustion due to the high demand and its specific growth characteristics. To efficiently utilize its resources, the response surface methodology (RSM) was utilized to optimize the pectinase-assisted extraction process of polyphyllins from its rhizome, with the total extraction content of polyphyllin I, II, and VII as the evaluation index. The optimal conditions were as follows: extraction temperature of 52 °C, extraction time of 34 min, and solid-to-liquid ratio of 1:19 g/mL. Under these conditions, the total content of the three polyphyllins was 29.70 mg/g, which was close to the predicted value of 29.90 mg/g and represented an increase of 27.63% over the control group. The analysis of variance (ANOVA) showed that the RSM model exhibited a good fit, and the Box-Behnken design (BBD) could be applied to optimize the extraction process of polyphyllins. This study provides a theoretical basis and a reference approach for the efficient utilization of P. polyphylla var. yunnanensis resources.
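Response surface methodology as used above fits a second-order polynomial to the designed experiments and locates its stationary point. A one-factor slice of that idea can be sketched with ordinary least squares; the actual Box-Behnken model is a three-factor quadratic with interaction terms, and the data points below are hypothetical:

```python
import numpy as np

def fit_quadratic_1factor(x, y):
    """Least-squares fit of y = b0 + b1*x + b2*x^2 (a one-factor slice of an
    RSM model) and the stationary point x* = -b1 / (2*b2), which is the
    predicted optimum when b2 < 0 (a maximum)."""
    X = np.column_stack([np.ones_like(x), x, x * x])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b, -b[1] / (2 * b[2])
```

In the full three-factor BBD, the same least-squares machinery yields the fitted surface whose ANOVA lack-of-fit test is reported in the abstract.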
This study aimed to compare the accuracy and efficiency of implant-site preparation and placement between conventional serial drilling and trephine drilling, with assistance from an implant robot, on varying bone surface inclinations. A total of 80 implants were placed in a standard polyurethane model using the two drilling methods (conventional serial drilling and trephine drilling) at four bone surface inclinations (0°, 15°, 30°, 45°), resulting in eight subgroups. The implant robot was used for implant-site preparation and placement. Global coronal deviation, global apical deviation, angular deviation, and implant placement time were recorded as outcome measures. While no significant difference in placement accuracy was observed between the two methods across inclinations, trephine drilling showed significantly higher efficiency. Under the present in vitro conditions, implant robot-assisted trephine drilling for implant-site preparation did not compromise implant precision compared with conventional serial drilling and improved efficiency. In addition, it may offer the practical advantage of harvesting autogenous bone material.
Point-of-care testing (POCT) platforms frequently suffer from a fundamental bottleneck: while advances in molecular amplification improve signal intensity, the reliability of signal readout in complex clinical matrices remains poorly controlled. Here, we present an integrated biosensing framework that treats readout reliability as an explicit engineering objective rather than a post hoc correction problem. The platform integrates three complementary components: (i) a heptameric nanobody probe employed as a multivalent recognition element for target capture, (ii) a DNA-assisted clustering interface that spatially organizes gold nanoparticle reporters for robust signal amplification, and (iii) a few-shot learning module based on Prototypical Networks that enables robust classification with minimal training data while providing interpretable decision-making through metric-based reasoning. Alpha-fetoprotein was selected as the model analyte because it remains a clinically important biomarker for hepatocellular carcinoma screening and follow-up, while also representing a realistic POCT challenge in which clinically meaningful detection must be achieved with low instrumentation burden and reliable readout under matrix variability. In this setting, the system achieves a visual limit of detection of 2 ng/mL and demonstrates quantitative consistency across representative clinical serum samples. Importantly, the AI module functions as an integral system component, identifying diagnostically relevant regions and mitigating readout uncertainty arising from matrix effects and imaging variability. By jointly engineering the sensing interface and the interpretive layer, this work establishes a generalizable strategy for constructing trustworthy POCT systems in which chemical signal generation and digital interpretation are co-designed.
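The metric-based decision rule at the core of Prototypical Networks, as referenced above, is simple to state: each class is summarized by the mean of its embedded support examples (the prototype), and a query is assigned to the nearest prototype. The sketch below assumes embeddings have already been produced by a trained encoder (omitted here); the data are illustrative, not from the AFP assay:

```python
import numpy as np

def prototype_classify(support, query):
    """Few-shot classification in the style of Prototypical Networks.

    support: dict label -> (n_shot, dim) array of embedded support examples.
    query:   (dim,) embedded query.
    Each class prototype is the mean of its support embeddings; the query
    takes the label of the nearest prototype by squared Euclidean distance,
    the metric used in Prototypical Networks.
    """
    protos = {lab: emb.mean(axis=0) for lab, emb in support.items()}
    return min(protos, key=lambda lab: np.sum((query - protos[lab]) ** 2))
```

The distance to the winning prototype also provides the interpretable "metric-based reasoning" the abstract refers to: a query far from every prototype flags an uncertain readout.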
An accurate and precise analytical method was established for the determination of fenuron in carrot juice samples. Salt-assisted switchable solvent-liquid phase microextraction (SA-SS-LPME) was performed to achieve a lower detection limit. Three different calibration methods were compared to quadruple isotope dilution (ID4) in terms of accuracy and precision. Under the optimum SA-SS-LPME-GC-MS conditions, the linear range, coefficient of determination, limit of detection (LOD), and limit of quantitation (LOQ) were calculated as 0.12-2.85 mg/kg, 0.9988, 0.05 mg/kg and 0.15 mg/kg via the external standard calibration method, respectively. In addition, the internal calibration strategy was also implemented and the linear range, coefficient of determination, LOD and LOQ were found to be 0.12-5.13 mg/kg, 0.99996, 0.05 mg/kg and 0.17 mg/kg, respectively. Percent recovery results were recorded as 104.5%-118.1% (± 1.9-7.2) via the external standard calibration, 75.1%-115.0% (± 1.7-6.2) via the matrix-matching calibration, 89.7%-100.7% (± 1.8-6.3) via the internal standard calibration and 99.6%-102.0% (± 1.2-2.3) via the ID4 strategy. The recovery results obtained using the ID4 strategy demonstrate the advantage of integrating SA-SS-LPME-GC-MS with this approach.
This study developed an integrated process coupling dielectric barrier discharge (DBD) pretreatment with a membrane bioreactor (MBR) for treating tetracycline (TC)-containing wastewater. The results demonstrated that DBD pretreatment effectively degraded more than half of the TC, reduced transmembrane pressure, and decreased the average membrane fouling rate by 16%. Systematic analysis revealed that the sludge in the DBD-MBR system maintained stable mixed liquor suspended solids and sludge volume index, while the contents of extracellular polymeric substances and quorum-sensing signal molecules (C6-HSL and C8-HSL) were significantly reduced, which contributed to the mitigation of membrane fouling. Furthermore, DBD pretreatment markedly alleviated the oxidative stress induced by TC, as evidenced by decreased reactive oxygen species generation and reduced lactate dehydrogenase and superoxide dismutase activities. Microbial community analysis indicated that the DBD-MBR system maintained a more stable and diverse microbial structure than the conventional MBR. These findings confirm that the integration of DBD with MBR provides a sustainable and efficient strategy for the treatment of antibiotic wastewater by simultaneously enhancing biodegradation and controlling membrane fouling.
Chronic ankle conditions often lead to persistent functional limitations. Blood flow restriction (BFR) training is a potential adjunct to rehabilitation, but its specific efficacy for chronic ankle conditions remains to be synthesized. The aim was to systematically evaluate the effects of BFR-assisted rehabilitation on primary outcomes (dynamic balance and patient-reported ankle stability) and secondary outcomes (ankle range of motion and muscle strength) in individuals with chronic ankle conditions (including chronic ankle instability, chronic ligamentous injury, and tendinopathy). A systematic search of PubMed, Web of Science, Embase, CNKI, and Wanfang databases was conducted up to April 13, 2025. We included randomized controlled trials (RCTs) involving adults with chronic ankle instability (CAI) defined by a history of sprain and/or a Cumberland Ankle Instability Tool (CAIT) score < 24. Grey literature was excluded. The protocol was registered on PROSPERO (CRD420251249207). Methodological quality was assessed using the Cochrane Risk of Bias (RoB) 1.0 tool. Data were pooled using random- or fixed-effects models. Seven RCTs (n = 204) were included. The overall risk of bias across the included studies was generally low to moderate. Meta-analysis of post-intervention values indicated that BFR-assisted rehabilitation significantly improved the primary outcome of dynamic balance (MD = 5.75; 95% CI [2.10, 9.40]; P < 0.01; I2 = 34%, P = 0.22) compared with conventional rehabilitation. Significant improvement in the other primary outcome, CAIT scores, was also observed (MD = 3.68; 95% CI [0.26, 7.11]; P = 0.05). However, secondary outcomes for dorsiflexion and plantarflexion range of motion exhibited high heterogeneity and unstable pooled estimates, showing no significant benefit. Muscle strength data were insufficient for meta-analysis. BFR-assisted rehabilitation appears to enhance dynamic balance and perceived ankle stability in patients with chronic ankle conditions.
However, evidence regarding its effect on joint range of motion remains inconclusive because of data instability. Current evidence supports BFR as a functional intervention, though standardized protocols are needed to further validate its clinical utility.
Tanzania has adopted artificial intelligence (AI)-assisted chest X-ray screening for tuberculosis (TB), including the use of CAD4TB version 6, which is registered by the Tanzania Medicines and Medical Devices Authority (TMDA). While GeneXpert, the practical reference standard, remains the primary bacteriological confirmatory test in routine practice, there is currently no established national threshold for CAD4TB use in either active case finding (ACF) or passive case finding (PCF) settings. This study evaluates the implementation and operational use of CAD4TB version 6 within mobile TB screening units in Tanzania and highlights challenges affecting its effective use. We conducted a retrospective analysis of screening data from 11,923 individuals collected from mobile clinics equipped with digital X-ray, CAD4TB version 6, and GeneXpert systems. Comparisons were made between manual chest X-ray interpretation, CAD4TB scores, and GeneXpert results within the subset of individuals who underwent confirmatory testing. The findings reveal substantial inconsistencies in screening workflows, including non-uniform use of CAD4TB prior to GeneXpert testing, missing radiological records, and deviations from intended protocols across sites. Descriptive analysis showed that CAD4TB scores generally aligned with GeneXpert-positive cases within the tested subset; however, due to selective application of GeneXpert and incomplete data, these observations cannot be interpreted as measures of diagnostic accuracy. This study should be interpreted as an implementation and operational assessment of AI-assisted TB screening rather than a diagnostic accuracy or threshold-setting study. The findings highlight important gaps in protocol adherence, data completeness, and workflow standardization, underscoring the need for prospective, protocol-driven studies to establish validated national thresholds for CAD4TB use in Tanzania.
Rapid bacterial detection remains a critical need in clinical diagnostics, environmental monitoring, and food safety, yet conventional approaches are often limited by the small size and physiological variability of bacteria. To address this, we have developed a label-free impedance cytometry platform that combines a planar double-differential electrode configuration with upper sheath fluid-assisted vertical compression. This design actively positions bacteria toward the bottom of the microchannel, where the electric field is strongest, thereby substantially improving signal stability and detection sensitivity. Through systematic optimization, a sheath-to-sample flow rate ratio of 2:1 was established as the optimal operating condition, providing an approximately 37% enhancement in impedance amplitude for bacteria while sustaining chip stability. The platform enables comprehensive evaluation of bacterial species, thermal viability, and antimicrobial susceptibility through real-time, dual-frequency (1.5 and 9 MHz) electrical profiling at the single-cell level. It effectively discriminates Escherichia coli (E. coli) from Bacillus subtilis (B. subtilis) based on their distinct amplitude and phase opacity profiles, assesses thermal viability in E. coli by detecting an approximately 11% increase in diameter after heat treatment, and performs rapid antimicrobial susceptibility testing, capturing an approximately 15% increase in average diameter (from 1.326 μm to 1.532 μm) within 20 min of polymyxin B (PMB) exposure. These results confirm the platform's ability to deliver multi-functional and high-precision bacterial characterization, offering a versatile tool for rapid microbiological analysis in clinical diagnostics and research.
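The "opacity" discrimination mentioned above is a standard derived metric in impedance cytometry: the high-frequency impedance amplitude normalized by the low-frequency amplitude, which cancels most of the size dependence and leaves membrane and interior properties. A toy sketch, with a purely hypothetical gating threshold not taken from the study:

```python
def opacity(amp_high, amp_low):
    """Electrical opacity: high-frequency (e.g., 9 MHz) impedance amplitude
    divided by the low-frequency (e.g., 1.5 MHz) amplitude. The low-frequency
    signal scales mainly with cell volume, so the ratio is size-normalized."""
    return amp_high / amp_low

def classify(event, threshold=0.85):
    """Toy species gate on opacity; the threshold is hypothetical and would
    be calibrated from reference populations in practice."""
    return "species_A" if opacity(*event) < threshold else "species_B"
```

In practice, both amplitude and phase opacity are plotted per event, and gates are drawn around the reference clusters of each species.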
Coccidioidomycosis is a fungal infection endemic to the southwestern United States, particularly Arizona. Fine needle aspiration biopsy (FNAB) is established as effective in diagnosing infectious diseases. However, the existing literature evaluating FNAB of thoracic coccidioidomycosis remains limited. We present a large single-institution series of thoracic coccidioidomycosis cases diagnosed through FNAB. The Mayo Clinic Arizona Pathology database was searched for all FNAB cases with a diagnosis of coccidioidomycosis from January 1, 2013, to December 31, 2025. Electronic medical records were reviewed to tabulate demographics, clinical history, serology, and radiologic findings. All key cytologic, histologic, and special-stained slides were reviewed. One hundred and one FNAB samples were obtained from 100 patients: 37 cases combined endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA) and robotic-assisted bronchoscopy (RAB), 31 cases RAB alone, 20 cases EBUS-TBNA alone, 12 cases percutaneous computed tomography-guided, and 1 case endoscopic ultrasound-guided. Coccidioides organisms were identified during rapid on-site evaluation in 34 cases, saving 18 patients from unnecessary procedures. Coccidioides organisms were identified in 94.1% (95/101) of cytology slides and in 67.1% and 66.7% of tissue biopsies and cell blocks, respectively. For the 6 cases without Coccidioides organisms on cytology slides, concurrent tissue biopsies or cell blocks with or without Grocott's methenamine silver stains helped confirm the diagnosis. Of the 41 patients with a malignancy history, one had both Coccidioides and malignancy in the same specimen. FNAB is accurate in diagnosing thoracic coccidioidomycosis when combined with EBUS-TBNA and RAB. Rapid on-site evaluation is critical during interventional procedures and can eliminate the need for unnecessary procedures.
Labeo catla is a commercially important Indian Major Carp, yet its genetic improvement has been hindered by a lack of high-resolution genomic resources. We report the construction of the first high-density SNP-based genetic linkage map for this species and present the associated genomic datasets deposited in public repositories. Mapping populations derived from five full-sib families (CF1, CF5, CF7, CF8, and CF11) comprising 196 individuals (10 parents and 190 offspring) were genotyped via Genotyping-by-Sequencing (GBS), generating 133.94 GB of raw sequence data. Using 782 informative SNP markers and Kosambi's mapping function, we resolved 25 complete linkage groups (LGs), consistent with the haploid karyotype of L. catla (n = 25). The resulting map spans a total genetic distance of 1279.8 cM, achieving a high-resolution average marker interval of 1.63 cM. Beyond linkage mapping, these markers were utilized to anchor 668 previously unplaced scaffolds, effectively organizing 502.33 Mb (49.29%) of the L. catla draft genome into pseudo-chromosomes. LG2 was identified as the largest pseudo-chromosome (213.83 Mb), while LG4 was the smallest (1.8 Mb). The average chromosome size was approximately 36.39 Mb. Comparative genomic analysis confirmed extensive chromosomal colinearity and conserved synteny between L. catla and Labeo rohita, with significantly higher structural divergence observed against the Danio rerio model. This high-density map and the subsequent chromosome-level scaffold anchoring provide a critical genomic framework for positional cloning, Marker-Assisted Selection (MAS), and the identification of Quantitative Trait Loci (QTLs) for traits such as rapid growth and disease resistance, accelerating the genetic enhancement of this species in Asian aquaculture.
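Kosambi's mapping function, used above to build the linkage map, converts an observed recombination fraction r into an additive map distance while allowing for partial crossover interference. The forward and inverse transforms are standard:

```python
import math

def kosambi_cM(r):
    """Kosambi map distance in centimorgans for recombination fraction r
    (0 <= r < 0.5): d = 25 * ln((1 + 2r) / (1 - 2r)).
    For small r this approaches 100*r cM, as expected."""
    return 25.0 * math.log((1 + 2 * r) / (1 - 2 * r))

def kosambi_r(d_cM):
    """Inverse Kosambi: recombination fraction from a distance in cM,
    r = 0.5 * tanh(2d) with d expressed in Morgans."""
    return 0.5 * math.tanh(2 * d_cM / 100.0)
```

Summing such pairwise distances along each ordered linkage group yields map lengths like the 1279.8 cM total reported above.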
Ureteral stents are routinely used following endourological procedures to ensure adequate drainage and prevent obstruction. However, stent-related morbidity remains common, and optimal stent dwell time and removal methods are not well defined. This systematic review aimed to evaluate clinical and procedural factors influencing ureteral stent dwell time and the methods used for stent removal after endourological interventions. A systematic review was conducted in accordance with PRISMA guidelines and registered on PROSPERO. MEDLINE and Embase were searched from inception to October 2025. Randomized controlled trials and comparative observational studies evaluating ureteral stent dwell time and/or removal methods in adults undergoing endourological procedures were included. Risk of bias was assessed using RoB 2 and ROBINS-I tools. Thirty-two studies encompassing 4,373 patients were included. Reported stent dwell times varied widely, most commonly ranging between 10 and 14 days in uncomplicated cases, with longer durations associated with increased rates of encrustation and removal difficulty. Removal techniques included rigid cystoscopy (48.7%), flexible cystoscopy (19.9%), extraction strings (23.5%), and device-assisted methods (7.9%). Less invasive approaches, particularly flexible cystoscopy and extraction-string removal, were consistently associated with reduced pain scores and improved patient comfort, although extraction strings carried a small risk of premature dislodgement. While practice patterns vary, the evidence suggests that a 10-14 day dwell time might be the optimal window to balance healing with the prevention of encrustation. Less invasive removal approaches, particularly flexible cystoscopy and extraction-string techniques, were generally associated with lower pain scores and high procedural success rates in selected patients. 
While these methods are safe and better tolerated, extraction strings carried a small, reproducible risk of premature dislodgement. High-quality prospective studies are needed to define determinant-based, individualized stent management strategies.
Leg length inequality following total hip arthroplasty (THA) is a major cause of patient dissatisfaction. This study evaluated the accuracy and feasibility of a workflow combining computed tomography (CT)-based preoperative planning with a single intraoperative linear distance measurement for leg length restoration in primary THA. A consecutive series of 40 patients undergoing primary, mixed reality-assisted THA using a minimally invasive superior capsulotomy approach was analyzed. Preoperative three-dimensional (3D) planning calculated the distance between the prosthesis shoulder and the greater trochanter. This measurement was replicated intraoperatively to predict leg length change. Actual leg length change was assessed using pre- and postoperative standing electronic optical scan (EOS) images and compared with the intraoperative prediction. Preoperative leg length inequality averaged -2.4 ± 5.9 mm (range, -13 to 9). Predicted leg length change was 3.6 ± 3.1 mm (range, -2 to 15), whereas actual postoperative change measured 3.7 ± 3.1 mm (range, -2 to 12). This resulted in a mean prediction error of 0.1 ± 1.8 mm (range, -3.0 to 3.2). Leg length was restored within ±3 mm in 95% of patients (38 of 40) and within ±5 mm in 100% (40 of 40). Patient-specific 3D planning combined with a single intraoperative measurement achieved highly accurate leg length restoration in primary THA. This streamlined workflow may improve leg length control compared with intraoperative image assessment, conventional navigation, or robotics.
Falls are the leading cause of accidental injury among older adults: 30% of community-dwelling adults aged 65 and over fall each year, with nearly half of these falls occurring outdoors. Outdoor falls are complex, understudied, and insufficiently addressed in current age-friendly city or walkability frameworks. This study aimed to build interdisciplinary consensus on risks, preventive actions, and barriers to fall prevention in outdoor public spaces through a Delphi process. A three-phase Delphi study was conducted with 64 participants in round 1, 60 in round 2, and 49 in round 3, including four expert groups: older adults who had fallen outdoors, health and research professionals, urban planners, and decision-makers (local and regional policy-makers, elected officials, and public-space managers involved in urban planning). Phase one collected open responses on risks, preventive actions (modification of physical layout, public-space management, and behavior-related factors), and barriers to these actions. Responses were synthesized using AI-assisted analysis with systematic human validation. In phases two and three, the relevance of 124 propositions was rated on a 10-point Likert scale. Consensus was defined as ≥ 70% of ratings ≥ 7/10 and an interquartile range ≤ 2.5. Consensus was reached for key intrinsic factors such as gait and balance impairments, visual and vestibular deficits, cognitive decline, and polypharmacy, as well as for environmental factors including irregular or inappropriate surfaces, obstacles, signage problems, and crowding. Highly relevant preventive actions included integrating fall prevention into street and sidewalk design, training urban planning professionals, awareness campaigns, systematic maintenance, safer crossings, participatory co-design of public-space adaptations and urban design features with older adults and local stakeholders, and improved data monitoring through surveillance, mapping, and sharing of fall-related and environmental risk information.
Main barriers were insufficient budgets, high costs, limited integration of fall prevention into planning priorities, and lack of evaluation of the impact of implemented actions. Outdoor fall prevention is a transversal challenge requiring integration of public health and urban planning. This Delphi highlights actionable priorities to embed fall prevention in local and national strategies, in particular in rapidly aging regions.
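The consensus rule used in the Delphi rounds above (≥ 70% of ratings at 7/10 or higher and an interquartile range ≤ 2.5) can be written out directly. The quartile convention below (linear interpolation) is one of several common choices; the study's exact convention is not specified:

```python
def consensus_reached(ratings, agree_cut=7, prop_cut=0.70, iqr_cut=2.5):
    """Delphi consensus rule: >= 70% of ratings at 7/10 or higher AND an
    interquartile range <= 2.5. Quartiles use linear interpolation."""
    xs = sorted(ratings)
    prop_high = sum(r >= agree_cut for r in xs) / len(xs)

    def quantile(q):
        pos = q * (len(xs) - 1)
        lo, frac = int(pos), pos - int(pos)
        return xs[lo] if frac == 0 else xs[lo] * (1 - frac) + xs[lo + 1] * frac

    return prop_high >= prop_cut and (quantile(0.75) - quantile(0.25)) <= iqr_cut
```

Each of the 124 propositions would be passed through this check per round; propositions failing it are re-rated or dropped in the next round.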
Primary cutaneous nocardiosis is often challenging to diagnose in a timely manner due to its diverse and nonspecific clinical manifestations, compounded by the difficulty of culturing and identifying Nocardia species. This case report describes a patient with primary cutaneous nocardiosis caused by Nocardia brasiliensis, who presented with normal immune function and no significant history of trauma. The patient initially developed blood blisters on the ankle, which then spread as papules and pustules throughout the entire left lower limb. Through repeated bacterial culture of lesion secretions and matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF-MS), N. brasiliensis was identified, and the patient was subsequently treated with trimethoprim-sulfamethoxazole (TMP-SMX). This case underscores the importance of repeated microbiological testing and the use of advanced diagnostic techniques, including MALDI-TOF-MS, when confronted with atypical infectious symptoms. It also highlights the critical role of early diagnosis and treatment in improving patient outcomes.
This systematic review and meta-analysis compared the live birth rate (LBR) after assisted reproductive technology (ART) with donor semen among single women, lesbian couples and heterosexual couples. Searches of PubMed, EMBASE and the Cochrane Library up to September 2025 identified seven eligible studies including 19,457 women. Separate analyses were performed for intrauterine insemination (IUI) and IVF. For IUI, single women had a significantly lower LBR than heterosexual couples [risk ratio (RR) = 0.70, 95% CI 0.66-0.74] and lesbian couples (RR = 0.67, 95% CI 0.63-0.72), whereas no significant difference was observed between heterosexual couples and lesbian couples. The clinical pregnancy rate was lower in single women compared with lesbian couples, but the pregnancy loss rate was similar across both groups. Sensitivity analyses confirmed these findings. Importantly, meta-regressions indicated that differences in LBR were no longer significant after adjusting for female age. Results from IVF studies were consistent with those from IUI analyses. Overall, single women using donor semen for ART showed lower success rates, largely explained by older maternal age. These findings highlight the importance of age adjustment, and provide evidence-based data for counselling individuals and couples considering ART with donor semen.
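Pooled risk ratios like RR = 0.70 (95% CI 0.66-0.74) above are commonly obtained by inverse-variance weighting of per-study log risk ratios. A minimal fixed-effect sketch with hypothetical 2x2 data (the review may well have used a random-effects model, which adds a between-study variance term):

```python
import math

def pool_log_rr(studies):
    """Fixed-effect inverse-variance pooled risk ratio.

    studies: list of (events_a, n_a, events_b, n_b) 2x2 summaries.
    Returns (pooled_RR, lo95, hi95). Uses the standard large-sample
    variance of the log RR: 1/e_a - 1/n_a + 1/e_b - 1/n_b.
    """
    num = den = 0.0
    for ea, na, eb, nb in studies:
        log_rr = math.log((ea / na) / (eb / nb))
        var = 1 / ea - 1 / na + 1 / eb - 1 / nb
        num += log_rr / var
        den += 1 / var
    mean, se = num / den, math.sqrt(1 / den)
    return tuple(math.exp(mean + z * se) for z in (0.0, -1.96, 1.96))
```

The meta-regression step described above then regresses each study's log RR on mean female age, weighting by the same inverse variances.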
The complexity and rapidly evolving nature of critical patient care in Intensive Care Units underscore the importance of the accuracy and timeliness of nursing decisions, further highlighting the significance of nursing education. This study aims to examine the accuracy of four generative artificial intelligence tools (ChatGPT 5.0 Plus, ChatGPT 5.0, DeepSeek, and Google Gemini) in answering multiple-choice questions related to the intensive care nursing exam, a fundamental area in nursing education. The ChatGPT 5.0 Plus, ChatGPT 5.0, DeepSeek, and Google Gemini models were evaluated using a test data set of 55 questions, classified by difficulty as easy (n = 16), medium (n = 17), and difficult (n = 22). The models' correct response rates and their shared and unique correct/incorrect response distributions were examined. Statistical analysis used chi-square, one-way ANOVA, and post-hoc Tukey tests. The study was reported according to STROBE. The success rates of all models were similar for easy and medium-level questions (70-82%), with no statistically significant difference between them (p > 0.05). For difficult questions, however, model performance diverged, with Google Gemini achieving the highest success rate (77.27%) and DeepSeek the lowest (45.45%). Chi-square analysis revealed no statistically significant difference in the correct/incorrect distributions among the models (χ² = 3.69; p = 0.296), but at the observational level, Google Gemini produced more unique correct answers (n = 6) than the other models, and ChatGPT 5.0 made no unique errors.
In conclusion, while the AI models generally showed similar levels of success on intensive care nursing exam questions, Google Gemini demonstrated superior performance on difficult questions, and DeepSeek showed the lowest success among the models. The study provides an essential comparative framework for the usability of AI-based learning and assessment tools in nursing education and offers guidance for the future development of AI-based educational technologies.
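As a sanity check on the statistics above: a 4-model × 2-outcome (correct/incorrect) table has (4-1)(2-1) = 3 degrees of freedom (inferred here, since the abstract does not state df), and the chi-square survival function for 3 df has a closed form. Evaluating it at the reported statistic reproduces the reported p-value:

```python
import math

def chi2_sf_df3(x):
    """Survival function P(X > x) of the chi-square distribution with 3
    degrees of freedom, in closed form:
    P(X > x) = erfc(sqrt(x/2)) + sqrt(2x/pi) * exp(-x/2)."""
    return math.erfc(math.sqrt(x / 2)) + math.sqrt(2 * x / math.pi) * math.exp(-x / 2)
```

Evaluated at the reported χ² = 3.69, this gives approximately 0.296, matching the p-value quoted in the abstract and supporting the inferred degrees of freedom.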