Purpose: To evaluate the accuracy and precision of both femoral and tibial bone resections in unrestricted kinematic alignment total knee arthroplasty (uKA TKA) performed with manual instrumentation, using postoperative digital caliper measurements. Methods: A retrospective study analyzing prospectively collected data on femoral and tibial resection thickness in 73 patients undergoing primary uKA TKA. Femoral cuts were performed with manual KA-optimized instrumentation in all cases. Tibial cuts were performed manually in 58 cases and with patient-specific instrumentation (PSI) in 15; PSI tibial resections were excluded from tibial analyses. Postoperatively, resection thickness was measured using a digital vernier caliper (0.2 mm resolution) at predefined sites: distal medial femur (DMF), distal lateral femur (DLF), posterior medial femur (PMF), posterior lateral femur (PLF), medial tibial plateau (MTP), and lateral tibial plateau (LTP). Resection error was defined as measured minus target thickness (mm). Accuracy was reported as mean signed error; precision as SD of signed error; absolute errors and error class distributions were also reported. Postoperative measurements reflect the accuracy and precision of the initial manual tibial resections, excluding any subsequent corrective cuts. Results: A total of 408 measurements were analyzed (292 femoral, 116 tibial). Mean signed error across resections was low and consistently negative (-0.15 to -0.31 mm), with infra-millimetric precision (SD 0.45 to 0.73 mm). Mean absolute errors remained low across sites (0.35 to 0.62 mm). The proportion of errors outside ±0.5 mm ranged from 21.1% (PLF) to 44.4% (LTP) and those outside ±1.0 mm from 1.4% (DMF) to 18.5% (LTP). No errors exceeded ±2.0 mm. Conclusions: Manual caliper-verified unrestricted KA TKA achieved high accuracy and precision for both femoral and tibial resections. 
However, these findings do not establish superiority over other techniques and do not account for final implant position, soft-tissue balance, or clinical outcomes. This study provides quantitative data on tibial resection accuracy in uKA TKA and may serve as a benchmark for evaluating the performance of technology-assisted techniques.
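The error metrics defined in this abstract (signed error, its mean and SD, mean absolute error, and the proportions outside ±0.5 and ±1.0 mm) follow directly from the measured and target thicknesses; a minimal Python sketch using illustrative values, not the study's data:

```python
from statistics import mean, stdev

def resection_error_summary(measured, target):
    """Summarize resection errors (measured minus target, in mm):
    accuracy = mean signed error, precision = SD of signed error."""
    errors = [m - t for m, t in zip(measured, target)]
    return {
        "mean_signed_error": mean(errors),
        "sd_signed_error": stdev(errors),
        "mean_absolute_error": mean(abs(e) for e in errors),
        "pct_outside_0_5mm": 100 * sum(abs(e) > 0.5 for e in errors) / len(errors),
        "pct_outside_1_0mm": 100 * sum(abs(e) > 1.0 for e in errors) / len(errors),
    }

# Illustrative caliper readings against a 9.0 mm target cut:
summary = resection_error_summary([8.8, 9.1, 8.6, 9.0], [9.0, 9.0, 9.0, 9.0])
```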
The Nutrition Care Process (NCP) is a standardized model designed to improve the quality and consistency of nutrition care. However, its implementation remains variable across settings, influenced by factors such as time constraints, training, peer support, and technological infrastructure. This systematic review aims to synthesize the available evidence on barriers and facilitators influencing the implementation of the NCP and its associated terminology (NCPT) and to explore how different documentation formats may influence its adoption. This systematic review was conducted in accordance with PRISMA 2020 guidelines and included peer-reviewed studies published between 2009 and 2024 in English or Greek. Searches were conducted in MEDLINE, EMBASE, Scopus, CINAHL, and the Cochrane Library. Study quality was assessed using the National Heart, Lung, and Blood Institute (NHLBI) quality assessment tools. A total of 11 reports representing eight studies were included, comprising cross-sectional, cohort, qualitative, and pilot designs. The most commonly reported barriers to NCP implementation were lack of training, time constraints, and limited technological infrastructure. Key facilitators included support from national dietetic associations, peer collaboration, and access to electronic health records (EHRs). Electronic formats were more frequently described as supporting improved documentation practices, practitioner confidence, and workflow efficiency, whereas manual approaches were commonly reported as time-consuming and less structured. Digital integration of the NCP may support more consistent documentation practices and improved workflow processes; however, the current evidence is largely observational and heterogeneous. Evidence regarding patient-level outcomes remains limited, and definitive conclusions regarding the comparative effectiveness of implementation formats cannot be drawn. Further high-quality research is needed to evaluate the long-term clinical impact of NCP implementation.
Post-market surveillance (PMS) under the European Union In Vitro Diagnostic Regulation (IVDR) demands proactive, literature-based evidence, but mature assays like QuantiFERON-TB Gold Plus (QFT-Plus) generate volumes of peer-reviewed and other literature that can strain manual workflows. We conducted a comparative study of an AI-enabled literature-surveillance platform (developed jointly with Huma.ai; hereafter, the Huma.ai Platform) versus manual search for QFT-Plus PMS. PubMed and PubMed Central were queried for publications in 2024; human studies published in English underwent duplicate screening and full-text appraisal. Outcomes were yield, precision, overlap/unique entries, and reviewer time. The Huma.ai Platform retrieved 673 records, with 661 relevant to screening (98.21% precision). Manual searching retrieved 111, with 106 relevant to screening (95.50% precision); there were 103 shared and three manual-only items (metadata gaps). The Huma.ai Platform contributed 561 unique papers, 5 of which were excluded after full-text appraisal. In total, 664 articles were evaluated; no new safety signals were identified. Screening time averaged ∼16 s per article with the Huma.ai Platform versus ∼60 s manually; full-text time (∼15 min per article) was similar. AI-assisted surveillance substantially increases coverage and reduces screening effort while maintaining high precision, thus supporting efficient, reproducible PMS for QFT-Plus.
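The precision figures reported here follow directly from the retrieval counts (records relevant to screening divided by total retrieved); a minimal sketch reproducing them:

```python
def screening_precision(retrieved, relevant):
    """Precision of a literature search, as a percentage:
    records relevant to screening / total records retrieved."""
    return 100.0 * relevant / retrieved

# Counts reported in the study:
ai_precision = screening_precision(673, 661)      # ≈ 98.2%
manual_precision = screening_precision(111, 106)  # ≈ 95.5%
```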
Efficient phenotyping is essential for accelerating genetic improvement in turfgrass breeding, where manual measurements are labor-intensive. This study evaluated hyperspectral imaging (HSI) as a high-throughput tool for assessing Zoysia spp. breeding populations consisting of 464 genotypes. HSI data (400-1000 nm) were processed through a user-in-the-loop hybrid segmentation pipeline integrating UMAP dimensionality reduction, DBSCAN clustering, Random Forest classification, and pseudo-RGB refinement. To independently assess vegetation classification performance, 10,000 manually annotated reference points from 50 pseudo-RGB images were compared with the automated module, yielding an overall accuracy of 0.9697, a precision of 0.8830, a recall of 0.9240, a specificity of 0.9779, an F1-score of 0.9030, and Cohen's kappa of 0.8851. A Combined Ranking Score (CRS) integrating five vegetation indices and vegetation pixel count was significantly associated with aerial shoot count (r = -0.445, p < 0.001) and runner count (r = -0.207, p < 0.001). The highest-ranked genotype showed a 9370.3-pixel increase in vegetation area between 6 and 16 weeks after transplanting, compared with 1417.7 pixels for the lowest-ranked genotype. Classification performance declined under high-coverage conditions, indicating increased mixed-pixel ambiguity in dense canopies. These results suggest that HSI-based CRS can support rapid, objective, and non-destructive relative ranking of density-related vegetative growth in turfgrass breeding. Because the study was conducted at a single location and season and correlations with manual traits were moderate, the framework is best interpreted as a screening and ranking tool rather than a direct predictive model.
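The classification metrics reported for the vegetation/background module (accuracy, precision, recall, specificity, F1, Cohen's kappa) all derive from a binary confusion matrix; a self-contained sketch using illustrative counts, not the study's annotation data:

```python
def binary_metrics(tp, fp, fn, tn):
    """Standard metrics for a binary classifier (e.g., vegetation vs. background)
    given true/false positive and negative counts."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)              # a.k.a. sensitivity
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    # Cohen's kappa: observed agreement corrected for chance agreement
    p_pos = ((tp + fp) / total) * ((tp + fn) / total)   # chance agreement on positives
    p_neg = ((fn + tn) / total) * ((fp + tn) / total)   # chance agreement on negatives
    pe = p_pos + p_neg
    kappa = (accuracy - pe) / (1 - pe)
    return accuracy, precision, recall, specificity, f1, kappa
```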
Attention-deficit/hyperactivity disorder (ADHD) and borderline personality disorder (BPD) frequently co-occur. However, evidence on the clinical effects of stimulant treatment in ADHD-BPD comorbidity remains limited. This prospective study aimed to investigate the longitudinal effects of methylphenidate (MPH) on borderline personality features in adults with ADHD-BPD. Thirty-six adults diagnosed with ADHD who also met the Diagnostic and Statistical Manual of Mental Disorders, 5th Edition (DSM-5) criteria for BPD were treated with MPH and followed for at least 16 weeks. Clinical ratings of the DSM-5 BPD criteria and psychometric measures assessing symptom severity and personality functioning were obtained at baseline and follow-up. Twenty-four participants (66.7%) completed the follow-up. Follow-up duration ranged from 3.9 to 12.3 months, with a mean duration of 7.8 ± 2.48 months. The number of BPD criteria significantly decreased after treatment (r = 0.82, P < 0.001). Nineteen participants no longer met the diagnostic threshold for BPD, and 10 achieved remission (≤2 BPD criteria). Baseline anger dysregulation (P = 0.009) and mood stabilizer use (P = 0.029) were associated with continued MPH treatment. Our findings preliminarily suggest that MPH, especially combined with mood stabilizers, may be associated with clinical benefits and acceptable tolerability in patients with comorbid ADHD-BPD. While causal conclusions cannot be drawn, replication in randomized controlled trials is warranted.
Background: Progesterone receptor (PR) status plays an important role in guiding hormone therapy decisions in breast cancer. In current practice, PR expression is assessed manually from immunohistochemistry (IHC) slides, which can be time-consuming and may vary between pathologists. This study aims to develop an automated and interpretable framework for PR-IHC analysis to improve consistency and efficiency. Methods: In this work, we developed an AI-assisted pipeline that combines nuclei segmentation, classification, and scoring for PR-IHC images. A fine-tuned Cellpose model was used to segment individual nuclei. The segmented nuclei were then analyzed using a DAB intensity-based approach to classify them into four categories: negative, weak, moderate, and strong. These results were further combined to generate Allred scores. The system was evaluated on 250 PR-IHC images with annotations provided by expert pathologists. Results: The framework achieved strong segmentation performance (F1-score = 0.85, IoU = 0.74) and high classification accuracy (macro F1-score = 0.95). The method also performed well when applied to ER-IHC images without additional retraining. Conclusions: The proposed framework provides a reliable and interpretable approach for automated PR-IHC scoring. It helps reduce manual effort, improves consistency in evaluation, and shows potential for practical use in digital pathology settings.
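For context, Allred scoring conventionally sums a proportion score (0-5, from the fraction of positively stained tumor nuclei) and an intensity score (0-3); a hedged sketch of that convention, where `positive_fraction` and `mean_intensity` are hypothetical pre-computed inputs from a nuclei-classification step like the one described:

```python
def allred_score(positive_fraction, intensity_score):
    """Allred score = proportion score (0-5) + intensity score (0-3).
    positive_fraction: fraction of stained tumor nuclei, 0.0-1.0.
    intensity_score: 0 = none, 1 = weak, 2 = moderate, 3 = strong.
    Cut-points follow the commonly cited Allred convention."""
    if positive_fraction == 0:
        ps = 0
    elif positive_fraction < 0.01:
        ps = 1
    elif positive_fraction <= 0.10:
        ps = 2
    elif positive_fraction <= 1 / 3:
        ps = 3
    elif positive_fraction <= 2 / 3:
        ps = 4
    else:
        ps = 5
    return ps + intensity_score
```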
Accurate characterization of fibrous caps in coronary arteries allows more precise estimations of coronary plaque rupture risk. While intravascular imaging techniques offer high-resolution imaging, limitations in contrast and reliance on manual interpretation hinder large-scale, volumetric assessments of cap morphology. In this study, we present CapSeg, a fully automated computational pipeline for in vivo fibrous cap segmentation and thickness measurement, leveraging intravascular polarization-sensitive OCT (IV-PS-OCT). By incorporating reflectance, birefringence, and depolarization metrics, CapSeg differentiates fibrous caps from lipid-rich cores, enabling automated extraction of minimum cap thickness and polarization properties across entire vessel segments. Using a dataset of 200 cross-sectional coronary IV-PS-OCT images, CapSeg's minimum cap thickness measurements were validated against manual annotations from two expert observers. Automated cap thickness measurements showed strong agreement with manual assessments (mean ± SD: CapSeg 131 ± 80 µm; Observer 1: 137 ± 84 µm; Observer 2: 144 ± 83 µm), demonstrating comparable limits of agreement relative to the inter-observer variability. The pipeline was applied to volumetric IV-PS-OCT data of 38 coronary lesions from patients with acute coronary syndrome (ACS, n = 23) or chronic coronary syndrome (n = 15). This analysis revealed decreased birefringence (3.7·10⁻⁴ vs. 4.5·10⁻⁴) and increased depolarization (9.6·10⁻² vs. 8.6·10⁻²) in the fibrous caps of patients with acute disease. Overall, CapSeg enables fast, reproducible, and fully automated fibrous cap evaluation, laying the foundation for large-scale clinical studies and real-time intravascular imaging applications.
Optical coherence tomography (OCT) plays a crucial role in diagnosing retinal diseases, such as diabetic retinopathy (DR) and age-related macular degeneration (AMD), as well as in identifying neurodegenerative biomarkers. Despite advancements in U-Net-based convolutional networks for OCT image segmentation, there is a lack of systematic reviews comparing their performance with expert manual segmentations. This review aims to assess the efficacy of these automated networks in segmenting retinal fluid and pathology in OCT images. We conducted this systematic review and meta-analysis by searching three databases (PubMed, Web of Science, and Scopus) for studies from the past five years, drawing on data from 16 diagnostic-accuracy studies. Study quality was assessed using the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool. The analysis used mean and standard deviation for the continuous outcomes and employed a random-effects model. Analyses were performed using Review Manager software version 5.4 (The Cochrane Collaboration, London, UK, 2020). Artificial intelligence (AI) and human Dice scores did not differ significantly (standardized mean difference (SMD) = -0.08; 95% CI: -1.16 to 0.99; p = 0.88), nor did intraclass correlation coefficient (ICC) values (SMD = -0.13; 95% CI: -5.70 to 5.45; p = 0.96). However, very high heterogeneity (I² > 90%) limits the reliability of these pooled estimates. AI achieved expert-level Dice scores for subretinal fluid (0.88-0.96) and geographic atrophy (0.94). Intraretinal fluid was more challenging (Dice 0.79-0.89). Volumetric reliability was strong (ICCs > 0.94). Device-dependent variability was substantial; kappa was 0.37 for ZEISS versus 0.73 for Spectralis, indicating a need for device-specific optimization. Volumetric analyses revealed minor systematic overestimation (mean difference: -0.05 mm²). 
Processing times ranged from 100 milliseconds per B-scan to several seconds per volume, representing substantial time savings versus manual segmentation. Fully automated U-Net pipelines reach expert-level accuracy for subretinal fluid and geographic atrophy but remain limited for intraretinal fluid and show marked device-dependent variability. Clinical translation requires four priorities: standardized multi-device benchmarks, domain adaptation for cross-platform robustness, hybrid AI-human workflows pairing automated pre-segmentation with expert oversight, and prospective clinical trials. These steps are needed to move AI segmentation from a research tool to a clinical decision-support system.
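The Dice score used throughout these comparisons is straightforward to compute from a pair of segmentation masks; an illustrative sketch:

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|) for boolean masks.
    Returns 1.0 by convention when both masks are empty."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0
```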
To address the key limitations of traditional automated landslide detection methods (namely, reliance on large training datasets, insufficient detection accuracy, and high false positive rates), this study proposes an InSAR-based automated landslide detection approach integrating multi-weight factor coupling, referred to as an Improved Hot Spot Analysis (IHSA) method. Built upon InSAR-derived surface deformation data, the proposed method optimizes the hotspot detection model through a spatial weighting matrix that incorporates multi-feature fusion. Morphological processing is further applied to refine landslide boundaries. Validation against manually interpreted ground truth data demonstrates that the proposed method achieves a precision of 90.20%, representing an improvement of 53.61 percentage points over the conventional hotspot analysis method, while maintaining a stable recall rate of 92.00%. The extracted landslide boundaries exhibit high consistency with manual interpretation results, effectively overcoming common issues in traditional approaches such as fragmented outputs and internal voids. This study provides an efficient, training-free solution for large-scale early identification of potential landslides, offering critical methodological support and data foundations for regional landslide detection and hazard mitigation.
Background/Objectives: Coronal Plane Alignment of the Knee (CPAK) classification enables individualized alignment assessment in total knee arthroplasty (TKA), yet manual evaluation is time-consuming and lacks preoperative-to-postoperative transition analysis. Methods: This retrospective, single-center study aimed to develop and validate a fully automated deep learning-based CPAK classification system using internal validation on a held-out test set (n = 92) and to investigate individual-level transition patterns and their association with short-term clinical outcomes using paired radiographic data from a large Chinese cohort. A total of 919 knee osteoarthritis (KOA) patients undergoing TKA were analyzed. A keypoint detection model (HRNet-W32) was developed to automatically calculate the medial proximal tibial angle (MPTA), lateral distal femoral angle (LDFA), arithmetic hip-knee-ankle angle (aHKA), and joint line obliquity (JLO), from which CPAK types were derived. Results: On the validation set (92 cases), the model achieved a Mean Radial Error of 1.22 ± 0.43 mm for keypoint detection; mean absolute errors for MPTA and LDFA were ≤0.74°, while for aHKA and JLO they were 0.91° and 1.12°, respectively, with intraclass correlation coefficients ≥0.96 compared to manual annotations. Automatic CPAK classification accuracy was 80.98% (kappa = 0.767). Transition matrix analysis showed that only 9.36% of all patients maintained their original type postoperatively, with most shifting to types IV, V, or VII. After inverse probability weighting, no significant differences in clinical outcomes were observed among transition groups (all adjusted p > 0.05). Conclusions: These results demonstrate that the proposed automated system enables efficient CPAK assessment, revealing substantial postoperative alignment transitions that were not associated with differential short-term outcomes, thereby supporting AI-assisted individualized alignment planning in TKA.
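As a sketch of how CPAK types follow from the two measured angles, here is the commonly published derivation (aHKA = MPTA − LDFA with ±2° cut-offs; JLO = MPTA + LDFA with 177°/183° cut-offs); these boundaries are an assumption taken from the published CPAK classification, not verbatim from this study's methods:

```python
def cpak_type(mpta, ldfa):
    """Derive the CPAK type (I-IX) from MPTA and LDFA in degrees,
    using the standard published boundaries (assumed here)."""
    ahka = mpta - ldfa   # arithmetic HKA: <-2° varus, -2° to 2° neutral, >2° valgus
    jlo = mpta + ldfa    # joint line obliquity: <177° apex distal, >183° apex proximal
    col = 0 if ahka < -2 else (1 if ahka <= 2 else 2)    # varus / neutral / valgus
    row = 0 if jlo < 177 else (1 if jlo <= 183 else 2)   # distal / neutral / proximal
    return ["I", "II", "III", "IV", "V", "VI", "VII", "VIII", "IX"][3 * row + col]
```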
Mine disasters require urgent lifeline setup in confined tunnels, but manual rescue in unstable accident zones carries major safety risks. Coal mine rescue robots (CMRRs) have become key equipment to replace manual rescue. However, traditional remote-controlled CMRRs suffer from low autonomy and weak environmental perception capability, which have become critical bottlenecks for field application. As an emerging technology in the mining field, the digital twin enables high-precision virtual-real mapping and on-site operation guidance, providing a novel solution to these problems. To realize autonomous navigation and digital twin visualization of the CMRR, this paper first carries out targeted hardware retrofits on the CMRR platform, upgrading the environmental perception, communication, and motion control modules to provide a solid hardware foundation for subsequent algorithm design and system implementation. Aiming at the complex post-disaster underground environment, a digital twin-integrated CMRR system is constructed. For intelligent autonomous navigation, this study investigates a 3D point cloud-based autonomous navigation framework and proposes a slope-fitting method as well as a maximum arrival probability obstacle avoidance method based on Bézier curve trajectories. For environmental visualization, a digital twin interactive interface is built to monitor gas and other environmental parameters in real time and accurately reconstruct underground roadway structures from point cloud data. This design not only ensures the robot's autonomous obstacle avoidance but also helps rescuers grasp underground conditions in advance. 
Field tests in a simulated post-disaster mine with complex terrain show that the system can stably complete autonomous navigation tasks, maintain stable motion control under dynamic interference, and provide accurate and reliable environmental data for rescue decisions, verifying its feasibility and effectiveness in harsh mine rescue scenarios.
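The Bézier-curve trajectories mentioned for obstacle avoidance can be evaluated with De Casteljau's algorithm; a generic, illustrative sketch (not the authors' implementation):

```python
def bezier_point(ctrl, t):
    """Evaluate a planar Bézier curve at parameter t in [0, 1]
    via De Casteljau's algorithm (repeated linear interpolation).
    ctrl: list of (x, y) control points defining the trajectory."""
    pts = [tuple(p) for p in ctrl]
    while len(pts) > 1:
        pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return pts[0]

# A quadratic curve bending around an obstacle between (0,0) and (2,0):
midpoint = bezier_point([(0, 0), (1, 2), (2, 0)], 0.5)  # (1.0, 1.0)
```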
Military occupational blast and impulse exposure (MOBE) is a potential risk factor for increased Anger, Aggression, or Violence (AAV). The objective of this study was to assess the association between MOBE and AAV-related content in clinical text notes in Veterans Health Administration (VHA) data. This matched cohort study investigated AAV-related content in clinical text data from Veterans across high and low-risk MOBE occupations. Veterans with documentation of high-risk MOBE occupations were sampled from a VHA population database and matched 1:1 with low-risk MOBE controls on age, sex, and race/ethnicity. An algorithm leveraging semantic similarity and large language models (LLMs) identified AAV content in millions of VHA clinical text notes. Model performance was assessed by manual review. Veteran outcomes were classified as AAV-positive or AAV-negative based on the content of their medical records. Logistic regression was used to estimate the association between MOBE and AAV. Among the MOBE cohort (n = 5,000) and matched controls (n = 5,000), 3.64 million clinical notes (Mean: 364 notes/person) were classified using an LLM pipeline that achieved 96% classification accuracy in manual review. Raw group differences were significant, with 17.2% of the MOBE cohort meeting AAV criteria, compared to 12.0% of matched controls (unadjusted Odds Ratio [OR]: 1.53 [1.37-1.71]). In adjusted models, the association between MOBE and AAV remained significant (OR: 1.22 [1.08-1.38]). Combat exposure (OR: 1.32 [1.11-1.58]) and traumatic brain injury (TBI) (OR: 1.47 [1.29-1.67]) were associated with increased AAV, while female sex was protective (OR: 0.33 [0.24-0.45]). In nested models, the OR for AAV ranged from 1.53 to 1.16 depending on the covariates considered, and posttraumatic stress disorder (PTSD) was found to be a significant confounder of the MOBE-AAV association. 
This matched cohort study found that individuals who served in occupations at high risk for MOBE were significantly more likely to have evidence of AAV in clinical text data. Neurological and affective changes potentially linked to MOBE may be interconnected with other military health factors, such as combat exposure, TBI, and PTSD.
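The unadjusted odds ratio above can be reproduced from the reported group sizes and AAV-positive proportions (17.2% and 12.0% of 5,000 each); a minimal sketch (the small difference from the reported 1.53 reflects rounding of the published percentages):

```python
def odds_ratio(exposed_cases, exposed_total, control_cases, control_total):
    """Unadjusted odds ratio for a 2x2 cohort table."""
    a, b = exposed_cases, exposed_total - exposed_cases    # exposed: cases, non-cases
    c, d = control_cases, control_total - control_cases    # control: cases, non-cases
    return (a * d) / (b * c)

# AAV-positive counts implied by the reported percentages:
or_unadjusted = odds_ratio(860, 5000, 600, 5000)  # ≈ 1.52
```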
Background/Objectives: This numerical (finite element analysis/FEA) study aimed to analyze the internal stress distribution patterns caused by a 4 N orthodontic force during intrusion, extrusion, rotation, tipping, and translation, using four common failure criteria, in an intact periodontium. Additionally, based on these stress patterns, the study sought to establish correlations between these failure criteria to determine the most appropriate one (brittle-like or ductile-like). Orthodontically induced internal resorption was also assessed, along with the influence of orthodontic movements on the topography of the resorptive processes. Methods: A total of 180 numerical simulations on nine 3D anatomically accurate models containing the second lower premolar (manually reconstructed, CBCT-based) were performed. The brittle-like Maximum Principal and Minimum Principal criteria and the ductile-like Von Mises and Tresca criteria were employed for the numerical analyses. Results: Translation and rotation more frequently cause internal pulp chamber resorption (vestibular, occlusal, and lingual-mesial walls). In rotation, the stress was directly caused by the force applied to the bracket, while in translation, the stress originated from the lingual third cervical area. Intrusion and extrusion movements are most likely to cause resorption in the root canal's cervical and middle thirds (vestibular and proximal walls) due to high stresses induced by movement at the external cervical vestibular region. Tipping seems to be least prone to internal resorption. Conclusions: A 4 N orthodontic force can induce internal resorption in the pulp chamber and in the middle and cervical thirds of the root canals. The ductile-like failure criteria appear to provide a more accurate assessment of internal orthodontically induced resorption than the brittle-like criteria.
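For principal stresses, the two ductile-like criteria named above have simple closed forms; an illustrative sketch (units arbitrary, not tied to this study's FEA models):

```python
from math import sqrt

def von_mises(s1, s2, s3):
    """Von Mises (ductile-like) equivalent stress from the three
    principal stresses: sqrt(((s1-s2)^2 + (s2-s3)^2 + (s3-s1)^2) / 2)."""
    return sqrt(((s1 - s2) ** 2 + (s2 - s3) ** 2 + (s3 - s1) ** 2) / 2)

def tresca(s1, s2, s3):
    """Tresca (maximum shear) equivalent stress: the largest
    difference between principal stresses."""
    return max(s1, s2, s3) - min(s1, s2, s3)
```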
Accurate lung cancer subtyping from CT images is essential for treatment planning, but manual interpretation suffers from inter-observer variability across morphologically similar subtypes. To overcome this limitation, we propose a dual-branch cross-attention fusion network integrating ConvNeXt-Small and Swin Transformer-Small: the architecture captures both local textures and global structural representations, and a learnable cross-attention module fuses these streams into a 512-dimensional unified descriptor. Trained on a four-class dataset, the network achieves 98.46% accuracy and a 98.45% F1-score on the test set, significantly outperforming state-of-the-art baselines. The proposed framework demonstrates considerable clinical potential for non-invasive subtyping, offering a robust tool for personalised treatment planning while reducing diagnostic subjectivity.
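A single-head cross-attention step of the kind described (one branch's features attending to the other's) can be sketched in a few lines; this is a generic NumPy illustration with random projection weights, not the authors' architecture:

```python
import numpy as np

def cross_attention(q_feats, kv_feats, d_k=64, seed=0):
    """Minimal single-head cross-attention: queries from one branch
    (e.g., CNN tokens) attend to keys/values from the other branch
    (e.g., transformer tokens). Projection weights are random here."""
    rng = np.random.default_rng(seed)
    dq, dkv = q_feats.shape[-1], kv_feats.shape[-1]
    Wq = rng.standard_normal((dq, d_k)) / np.sqrt(dq)
    Wk = rng.standard_normal((dkv, d_k)) / np.sqrt(dkv)
    Wv = rng.standard_normal((dkv, d_k)) / np.sqrt(dkv)
    Q, K, V = q_feats @ Wq, kv_feats @ Wk, kv_feats @ Wv
    scores = Q @ K.T / np.sqrt(d_k)                       # scaled dot-product
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)                    # softmax over kv tokens
    return w @ V                                          # (n_queries, d_k)
```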
Background: This study evaluated the performance of a commercial offline adaptive radiotherapy system for systematic monitoring of breast cancer treatment with nodal irradiation using helical tomotherapy. Methods: Thirty patients treated for invasive unilateral breast carcinoma were analysed. For each patient, three megavoltage CT scans acquired at the first, middle, and last treatment sessions were processed through the PreciseART (Accuray, US) offline ART workflow. Automatically deformed structures were compared with manually delineated reference structures. Geometric accuracy was assessed using the Dice similarity coefficient (DSC), Hausdorff distance (HD95), mean distance to agreement (MDA), and barycentre distance (BD). The dosimetric parameters included D2% and V95% for targets and Dmean/Dmax/V20Gy for organs at risk. Results: Median DSCs exceeded 0.9 for the CTVbreast, PTVbreast, heart, and ipsilateral lung and were above 0.8 for the remaining structures, except the CTVn and oesophagus. Dosimetric differences between deformed and reference structures were within 5% for D2% across all targets and for V95% of the CTVbreast and PTVbreast in 90% of the sessions. The ipsilateral lung V20Gy differed by less than 5% in more than 90% of the sessions. Larger deviations (up to 10%) were observed for the nodal PTVs and mean heart dose, while the greatest inconsistencies were found for the oesophagus and spinal canal. Conclusions: The evaluated offline ART system demonstrates sufficient accuracy for automated monitoring of breast and lung structures. However, cautious interpretation remains necessary for nodal targets, heart, and oesophagus dosimetry prior to clinical implementation.
Oral leukoplakia (OL) is a precancerous condition typically assessed through histopathological examination of mucosal lesion biopsies. Identifying histological features of oral lichenoid lesions (OLL) within OL samples is clinically important, as they influence the risk of malignant transformation and may indicate oral lichen planus (OLP). However, interpretation is challenging, with substantial intra- and inter-observer variability. Artificial intelligence (AI) offers the potential to provide reproducible, objective support for histopathological classification. We developed an AI system to (a) segment histological layers and extract characteristics of the keratinization zone, (b) classify keratinization types, and (c) distinguish OL from OLL. A retrospective cohort of 240 histological slides from 192 patients was included. Of these, 175 transversely sectioned slides underwent manual segmentation of subepithelium, epithelium, keratinization zone, and nuclei in the keratinization zone. Measurements of keratin thickness and nuclei density were performed to classify the keratinization zone into (hyper)orthokeratosis, parakeratosis, or hyperparakeratosis. All 240 slides were labeled as OL or OLL and crops were extracted for diagnosis classification. Segmentation was evaluated with Dice-Sørensen coefficient (DSC), and classification was evaluated by accuracy. Segmentation of histological layers was highly effective (DSC > 0.92), with lower performance for nuclei (DSC = 0.68). Keratinization classification reached 0.92 accuracy: (hyper)orthokeratosis 0.98, hyperparakeratosis 0.93, parakeratosis 0.94. Lesion-level OL/OLL classification achieved 0.929 accuracy, with slightly better effectiveness in transverse sections than tangential sections (0.944 vs. 0.925). The AI system demonstrated strong segmentation and classification capabilities, supporting its potential to enhance diagnostic accuracy, reproducibility, and efficiency for the assessment of OL samples.
The application of machine learning to materials discovery is often constrained by the availability of large-scale, experimentally verified materials databases. This study presents an automatic, end-to-end framework that bridges this gap by training machine-learning predictors for materials properties on experimental data mined directly from the literature. We apply this framework to predict the photoluminescence (PL) wavelengths of thermally activated delayed fluorescence molecules. By integrating "chemistry-aware" natural language processing with automated chemical structure resolution, a dataset of 643 experimentally measured PL wavelengths was assembled. These experimentally grounded data were used to train a heterogeneous graph neural network and a ridge-regression model; both achieved mean absolute errors below 0.13 eV in less than 3 min on a personal laptop, effectively capturing complex structure-property relationships without manual feature engineering. These results demonstrate that our framework provides a fast, scalable, and generalizable pathway to generate experimentally grounded models for property predictions in organic optoelectronics.
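Ridge regression, one of the two baseline models, has a closed-form solution that explains how it can train in minutes on a laptop; an illustrative sketch on synthetic data (not the authors' molecular feature pipeline):

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X^T X + lam*I)^-1 X^T y."""
    n_feat = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_feat), X.T @ y)

def mae(y_true, y_pred):
    """Mean absolute error, the accuracy metric reported in the study."""
    return float(np.mean(np.abs(y_true - y_pred)))

# Synthetic stand-in for molecular descriptors -> PL energy (eV):
rng = np.random.default_rng(1)
X = rng.standard_normal((50, 4))
y = X @ np.array([1.0, -2.0, 0.5, 3.0])
w = ridge_fit(X, y, lam=1e-10)   # near-zero penalty recovers the exact fit
```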
Objectives: Effective triage of patients with chest pain in emergency settings is critical, but it can often be challenging, particularly when patients wear face masks or are unable to clearly communicate their pain. To address this limitation, this study presents a real-time facial expression-based system for chest pain intensity assessment as an initial step toward intelligent emergency triage. The proposed system integrates deep learning with real-time video analysis to provide objective and rapid pain level recognition. Methods: A YOLOv12-based facial expression recognition model was trained using annotated facial images of patients experiencing chest pain, and the model categorizes pain into three intensity levels: no pain, slight pain, and moderate to severe pain. Multiple YOLOv12 variants were systematically evaluated to identify an optimal configuration for potential clinical use. The developed system supports two operational modes: real-time recognition, which analyzes continuous video streams and delivers immediate visual feedback through an interactive interface, and a manual upload mode for offline video analysis, review of results, and playback. Additional usability features, including error prompts and data reset functions, were implemented to enhance system stability and user experience. Results: Among the evaluated models, the YOLOv12-L model achieved the best performance with an accuracy of 98.81%, sensitivity of 98.76%, specificity of 98.79%, precision of 98.04%, and an F1-score of 98.41%, demonstrating stable and accurate recognition. The proposed system is designed to support the triage process of assessing patients with chest pain, particularly in cases where patients wear masks or cannot clearly express their pain. 
By providing real-time and objective pain intensity assessment, the system shows potential to assist healthcare professionals in identifying patients who may require priority attention and to serve as a supportive tool for emergency triage workflows. Conclusions: Future work will incorporate edge computing with a lightweight model to enable real-time pain assessment in ambulances, facilitating faster intervention and treatment.
Antipsychotics are commonly recommended for the treatment of delirium; however, alternative options are warranted due to the limitations of oral and injectable formulations. The blonanserin (BNS) transdermal patch may improve treatment adherence; however, evidence regarding its efficacy in managing delirium remains limited. This retrospective case series describes 51 cases of delirium managed with BNS patches at Kyoto University Hospital between January 2020 and June 2022. The effectiveness of the BNS patch for delirium, the rationale for its selection, and the associated adverse events were retrospectively evaluated using electronic medical records. Delirium was diagnosed according to the Diagnostic and Statistical Manual of Mental Disorders, 5th edition criteria by physicians from the consultation-liaison or palliative care teams, and the symptoms were regularly monitored. Effectiveness was assessed based on non-standardized clinical judgment documented in the medical records. Hyperactive and mixed delirium were observed in 41.2% and 56.9% of patients, respectively. Overall, the therapeutic response rate across all patients was 84.3%. The main reasons for the selection of BNS patches were difficulty or inability to take oral medications (60.8%). Adverse events occurred in 29.4% of patients; all resolved after discontinuation of the BNS patch, and no serious or irreversible reactions were observed. These findings indicate that the BNS patch has potential as an effective treatment option for delirium, particularly in patients with challenges in the administration of medication. However, given the retrospective and exploratory nature of this study, the findings should be interpreted with caution.
Recent advancements in 3D reconstruction technologies have significantly transformed plant phenotyping, enabling precise, scalable, and automated trait extraction. Traditional manual phenotyping methods are increasingly being replaced by image-based approaches, such as photogrammetry, LiDAR, RGB-D sensing, and deep learning (DL)-based techniques. These tools allow for non-destructive, high-throughput measurements of plant morphology, structure, and physiological traits. This review synthesizes the state of the art in 3D reconstruction methods, including conventional geometric algorithms and emerging DL methods, and evaluates their application across diverse plant species. In addition, we discuss the sensing modalities, evaluation metrics, and crop-specific deployments. Although promising, current technologies still face challenges in terms of computational efficiency, scalability to outdoor environments, and generalizability across crop types. This review concludes by identifying research gaps and future directions for developing real-time, field-deployable 3D phenotyping systems.