Brain amyloid accumulation is considered an early sign of Alzheimer's disease (AD) pathology, appearing decades before the manifestation of clinical symptoms. However, very little is known about neurodegeneration during this early AD stage. Herein we use quantitative gradient recalled echo (qGRE) magnetic resonance imaging (MRI) to reveal preatrophic microstructural neuronal and myelin damage concurrent with amyloid pathology. qGRE MRI and positron emission tomography (PET) amyloid [11C] Pittsburgh Compound-B (11C-PiB) images were acquired concurrently from 141 study participants. Two qGRE-derived biomarker indices - neuronal density index (NDI) (a proxy for neuronal density) and myelin density index (MDI) (a proxy for myelin density) - are used here to quantify neuronal and myelin damage. qGRE detected significantly lower gray matter neuronal density and white matter myelination in the group of participants with amyloid levels below the conventional amyloid-positive threshold (mean cortical standardized uptake value ratio of 1.42), despite the absence of atrophy. The qGRE-based NDI and MDI in vivo biomarkers identify preatrophic microstructural neurodegeneration and demyelination in early amyloid pathology.
Postpartum hemorrhage (PPH) is the leading cause of maternal mortality worldwide. While efforts to prevent and improve the management of PPH, including the use of emergency checklists during a hemorrhage event, have increased, there has been limited attention to the lived experiences of pregnant people. This study aimed to describe patients' perceptions of their experiences with an inpatient multidisciplinary team response to PPH management when an emergency checklist was utilized. Individuals who experienced PPH, defined as a cumulative blood loss of 1,000 mL or more, were approached to participate in the study. Purposeful sampling was employed to ensure diversity by both self-reported racial and ethnic identity and severity of PPH. Participants completed remotely conducted semi-structured interviews between February and November 2024. A qualitative phenomenological approach was employed to establish a shared understanding of participants' perceptions and lived experiences. Twenty participants completed interviews. PPH imposed a substantial physical and emotional burden on patients. While patients often felt exposed and reported a loss of agency during their PPH, these concerns were eased when the care team maintained a steady presence and clearly articulated their management and interventions. When participants recalled the emergency checklist, its use provided additional reassurance by reinforcing trust in systematic care and adherence to protocols. Participants emphasized the importance of ongoing education and guidance on PPH treatment and its implications after hospital discharge. Patients who experienced PPH described feelings of vulnerability; however, the care team's composed presence and clear communication were perceived by participants as sources of reassurance during these experiences. Awareness of emergency checklist use, when present, was described by some participants as fostering confidence in team management. Patient support should extend beyond medical management throughout the postpartum period.
Traditional approaches to the diagnosis of personality disorders, including clinical interviews and self-report questionnaires, are usually limited by subjectivity and time constraints. Recent developments in artificial intelligence have opened the possibility of more objective and data-driven psychological testing. This paper introduces an AI-powered system that predicts personality disorders using natural language processing (NLP), speech recognition, and face recognition. The proposed method is intended to support initial diagnosis and more tailored mental health interventions. Two benchmark datasets were used: myPersonality for text analysis and DAIC-WOZ for multimodal analysis of speech and facial expressions. The feature extraction methods were TF-IDF, VADER sentiment scores, Mel-Frequency Cepstral Coefficients, prosodic features, facial action units, and gaze tracking. BiLSTM, CNN, BERT, and GPT-3 models were evaluated using accuracy, precision, recall, F1 score, and AUC-ROC. GPT-3 was the most accurate at 89.1%, followed by BERT at 87.4% and CNN-based facial analysis at 85.6%. The findings indicate that multimodal fusion improves classification by leveraging holistic and complementary behavioral information. These results support the promise of AI-enabled multimodal systems for more precise prediction of personality disorders and underscore the need to consider interpretability, fairness, and data privacy in future applications. Notably, the current study addresses both the prediction of normal personality traits (through the Big Five framework) and the detection of features that reflect psychological distress and may relate to personality disorders. The myPersonality dataset measures normative personality dimensions, whereas the DAIC-WOZ dataset captures clinically relevant multimodal behavioral data. The paper explains the connections between extreme trait profiles and clinical personality disorders to reconcile trait-based and disorder-based assessment paradigms.
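As an illustration of the text branch only, the sketch below combines TF-IDF n-gram features with VADER compound sentiment scores and fits a baseline classifier; the toy posts and labels are placeholders rather than the myPersonality data, and the speech and facial branches are omitted.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

# Placeholder texts and binary trait labels (e.g., high/low on one Big Five dimension)
texts = [
    "I love meeting new people and trying new things",
    "I prefer quiet evenings alone with a book",
    "Deadlines stress me out far more than they should",
    "I stay calm even when plans fall apart",
]
labels = np.array([1, 0, 1, 0])

# Lexical features: unigram/bigram TF-IDF
tfidf = TfidfVectorizer(ngram_range=(1, 2), min_df=1)
X_tfidf = tfidf.fit_transform(texts).toarray()

# Affective features: VADER compound polarity score per text
analyzer = SentimentIntensityAnalyzer()
vader = np.array([[analyzer.polarity_scores(t)["compound"]] for t in texts])

# Concatenate both feature views and fit a simple baseline classifier
X = np.hstack([X_tfidf, vader])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict_proba(X)[:, 1])
```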
Introduction: Colorectal cancer remains a leading cause of cancer-related morbidity and mortality, with adenomatous polyps representing a common precursor. Post-polypectomy polyp recurrence represents a significant risk of colorectal cancer, driving periodic colonoscopy surveillance and polypectomy as needed. In this study, we explore a multimodal machine learning approach that integrates endoscopic imaging with clinical and pathology data to improve recurrence risk prediction and support individualized surveillance planning. Methods: We developed and evaluated a multimodal artificial intelligence (AI) model to predict post-polypectomy colorectal polyp recurrence using the ERCPMP-v5 dataset. The cohort included 217 patients with 796 high-resolution endoscopic RGB images and 21 endoscopic videos; video data were converted to still frames at 2 frames per second. Images and frames were resized to 224 × 224 pixels and normalized. Patient-level demographic, morphological (Paris, Kudo Pit, JNET), anatomical, and pathological variables were encoded using standard scaling for continuous features and one-hot encoding for categorical features. Visual representations were extracted using a pretrained Vision Transformer backbone (ViT-Base-Patch16-224) with frozen weights. Structured metadata (79 variables) was encoded using a multilayer perceptron. A late fusion framework used image and metadata representations to generate a recurrence probability via a sigmoid classifier; probabilities were thresholded at 0.5 for binary prediction. Model performance was evaluated on a held-out test set using accuracy, precision, recall, F1-score, and area under the receiver operating characteristic curve (AUC). We additionally compared fusion performance with image-only and metadata-only baselines. Predicted probabilities were translated to surveillance recommendations using risk tiers: low risk (0.00 ≤ p < 0.20), moderate risk (0.20 ≤ p < 0.50), and high risk (p ≥ 0.50). Results: On the test set, the multimodal fusion model achieved 90.4% accuracy, 86.7% precision, 83.1% recall, 84.9% F1-score, and an AUC of 0.920. The image-only model achieved 84.6% accuracy (AUC 0.880), and the metadata-only model achieved 81.9% accuracy (AUC 0.850), indicating improved performance with multimodal fusion. Risk stratification enabled surveillance recommendations of 1-3 years for low risk, 6-12 months for moderate risk, and 3-6 months for high risk. Conclusions: A late-fusion multimodal model integrating endoscopic imaging with structured clinical and pathology variables demonstrated excellent performance for predicting post-polypectomy recurrence and generated actionable risk-based surveillance intervals. This approach may support individualized follow-up planning and more efficient allocation of surveillance resources, while prioritizing timely evaluation for patients at higher predicted risk.
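A minimal sketch of the described late-fusion design, assuming the timm implementation of ViT-Base-Patch16-224: a frozen image encoder, an MLP over the 79 structured variables, a sigmoid head, and the abstract's risk-tier mapping. Layer sizes beyond those stated in the abstract are illustrative.

```python
import torch
import torch.nn as nn
import timm

class LateFusionRecurrenceModel(nn.Module):
    def __init__(self, n_meta: int = 79):
        super().__init__()
        self.backbone = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=0)
        for p in self.backbone.parameters():   # frozen visual encoder
            p.requires_grad = False
        self.meta_mlp = nn.Sequential(nn.Linear(n_meta, 64), nn.ReLU(),
                                      nn.Linear(64, 32), nn.ReLU())
        self.head = nn.Linear(self.backbone.num_features + 32, 1)  # fused sigmoid classifier

    def forward(self, image, metadata):
        z_img = self.backbone(image)          # (B, 768) pooled ViT embedding
        z_meta = self.meta_mlp(metadata)      # (B, 32) metadata embedding
        return torch.sigmoid(self.head(torch.cat([z_img, z_meta], dim=1))).squeeze(1)

def risk_tier(p: float) -> str:
    """Map predicted recurrence probability to the surveillance tiers in the abstract."""
    if p < 0.20:
        return "low risk: surveillance in 1-3 years"
    if p < 0.50:
        return "moderate risk: surveillance in 6-12 months"
    return "high risk: surveillance in 3-6 months"

model = LateFusionRecurrenceModel()
prob = float(model(torch.randn(1, 3, 224, 224), torch.randn(1, 79)))
print(prob, risk_tier(prob))
```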
Post-traumatic epilepsy (PTE) is a major long-term complication of traumatic brain injury (TBI), but early risk prediction remains imprecise. Radiomics enables quantitative analysis of subtle abnormalities on non-contrast head CT (NCCT) that are not readily visible on routine imaging and may improve early risk stratification. This pilot study assessed the performance of radiomic features from acute NCCT, alone or combined with clinical variables, to predict late post-traumatic seizures (PTS) within six months of injury, an early marker of PTE. Eighty-two patients with TBI were included, and two machine-learning approaches were employed: a radiomics-only model and a clinically augmented model incorporating demographics, admission Glasgow Coma Scale (GCS), and prophylactic antiseizure medication use. Radiomics-only models showed moderate discrimination in nested cross-validation (logistic regression AUC = 0.719). Frequently selected features reflected frontal and temporal lobe asymmetry and regional heterogeneity. Adding clinical variables significantly improved performance across all models. The best model, a clinically augmented logistic regression, achieved an AUC of 0.842 with improved accuracy, precision, recall, and F1 score. Admission GCS and antiseizure prophylaxis were the most influential clinical predictors. The findings of this pilot study support NCCT-based radiomics combined with clinical data as a promising framework to be further validated for early PTE risk stratification.
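A minimal nested cross-validation sketch for the clinically augmented logistic regression; the synthetic arrays stand in for the radiomic features and the clinical variables (age, admission GCS, antiseizure prophylaxis), and the study's actual feature-selection pipeline is not reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X_radiomics = rng.normal(size=(82, 100))        # placeholder NCCT radiomic features
X_clinical = rng.normal(size=(82, 3))           # e.g., age, admission GCS, ASM prophylaxis
X = np.hstack([X_radiomics, X_clinical])
y = rng.integers(0, 2, size=82)                 # placeholder late post-traumatic seizure labels

# Inner loop tunes regularization strength; outer loop estimates generalization AUC
pipe = make_pipeline(StandardScaler(), LogisticRegression(penalty="l2", max_iter=5000))
inner = GridSearchCV(pipe, {"logisticregression__C": [0.01, 0.1, 1, 10]},
                     scoring="roc_auc", cv=StratifiedKFold(5, shuffle=True, random_state=0))
outer_auc = cross_val_score(inner, X, y, scoring="roc_auc",
                            cv=StratifiedKFold(5, shuffle=True, random_state=1))
print("nested-CV AUC: %.3f +/- %.3f" % (outer_auc.mean(), outer_auc.std()))
```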
Background: The 2025-2030 Dietary Guidelines for Americans recommend that 6-12-month-old infants receive 11 mg iron/day. The contribution of iron-rich foods in meeting guidelines is unclear. Objectives: The aims were to: (1) determine the contribution of iron-fortified cereal, infant formula, and heme-iron sources to infants' total dietary iron intake; (2) examine differences in iron adequacy by milk-feeding type; and (3) identify feeding patterns associated with meeting daily iron requirements through dietary sources. Methods: Mothers of infants were recruited from a pediatric clinic, and 24 h feeding recalls were conducted to estimate infants' iron intake. Infants' milk-feeding types were: breastmilk only (BF), mixed (MF), or infant formula only (FF). Main outcomes were: meeting/not meeting the daily iron requirement (11 mg) overall and by milk-feeding type; and the contribution of iron-fortified infant cereal, formula, and meat to daily iron intake. Descriptive statistics, bivariate chi-square tests, and multivariate logistic regression analyses were conducted. Results: Most participants identified as African American or Hispanic (76%) and were enrolled in the Special Supplemental Nutrition Program for Women, Infants, and Children (84%). Thirty-nine percent of infants consumed < 11 mg iron/day from dietary sources. By milk-feeding type, inadequate iron intake was significantly higher among the BF (72%) and MF (74%) groups vs. the FF group (24%, p < 0.05). Iron-fortified cereals were consumed by 46% of infants and provided a median iron intake of 6.75 mg. Among the FF group, infant formula provided 63% of the daily iron requirement. Conclusions: Inadequate dietary iron intake is common. Iron-fortified cereal is an important dietary iron source. Future research is warranted to understand the relations among infants' daily iron intake, iron sources (heme vs. non-heme), and iron status.
Background/Objectives: Micronutrient malnutrition, particularly deficiencies in calcium, vitamin D, iron, zinc, and iodine, remains a significant public health issue among school-aged children in Morocco. Processed cheese, such as "The Laughing Cow" (TLC), has potential as a vehicle for fortification due to its widespread consumption and accessibility. This study aimed to evaluate the impact of fortified TLC on micronutrient intake and adequacy relative to the Recommended Dietary Allowances (RDA) among Moroccan children aged 6-12 years, and to explore differences in effects by socioeconomic status (SES). Methods: Data from the Moroccan Household Budget Survey (2013-2014) included 9266 children (39.4% TLC consumers). Dietary intake was assessed using 24 h recalls, and nutrient composition was analyzed using Ciqual 2020 tables and specialized software. Fortification scenarios were modelled to estimate potential impacts on micronutrient intake and compliance with RDAs. Results: Under the modelling scenarios, consumption of one portion/day of fortified TLC significantly improved RDA compliance for iron, iodine, and zinc (p < 0.05). There was also an increase in RDA compliance for calcium and vitamin D, but the differences were not significant. The impact of fortification on micronutrient intake and RDA compliance increased with socioeconomic status. Consumers of more than one portion/day showed the highest compliance with RDAs (p < 0.001). Fortification effects were consistent across age subgroups. Conclusions: Fortifying processed cheese represents a feasible strategy to address micronutrient deficiencies among Moroccan schoolchildren. This study highlights the potential of targeted fortification programmes to improve public health outcomes, particularly in vulnerable populations. Further research is needed to optimize fortification approaches and ensure sustainability.
Landslides are widespread geohazards in mountainous regions and pose serious threats to human safety, infrastructure, and ecosystems. Accurate detection from high-resolution optical remote sensing imagery remains challenging because landslide targets often exhibit irregular morphology, large scale variation, weak boundaries, and strong background interference. To address these issues, this study proposes L-SAINet, a shape-adaptive and inner-scale interaction network for landslide detection in complex remote sensing scenarios. Built on a lightweight one-stage detection framework, the proposed method introduces an L-SAI module that integrates adaptive deformable convolution, channel-spatial attention, and inner-scale feature interaction. The shape-adaptive branch improves geometric alignment for irregular and elongated landslide bodies, while the attention branch enhances semantic discrimination under heterogeneous background conditions. The two branches are further fused at the same feature scale to construct a more unified landslide representation. Experiments on the Bijie Landslide Remote Sensing Dataset show that L-SAINet consistently outperforms the baseline detector and single-branch variants in Precision, Recall, mAP@0.5, and mAP@0.5:0.95. Additional analyses based on precision-recall curves, confusion matrices, convergence behavior, model complexity, and representative complex-scene examples further confirm its effectiveness and robustness. The results demonstrate that jointly modeling geometric adaptability and semantic refinement is an effective strategy for landslide detection in complex mountain environments.
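For illustration of the attention branch only, the following is a generic CBAM-style channel-spatial attention block in PyTorch; it is not the authors' L-SAI module, and the shape-adaptive deformable branch and inner-scale feature interaction are omitted.

```python
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    """Generic channel-then-spatial attention over a feature map (CBAM-style sketch)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        b, c, _, _ = x.shape
        # Channel attention from global average- and max-pooled descriptors
        avg = self.channel_mlp(x.mean(dim=(2, 3)))
        mx = self.channel_mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention from per-pixel channel statistics
        stats = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial_conv(stats))

attn = ChannelSpatialAttention(64)
print(attn(torch.randn(1, 64, 32, 32)).shape)   # torch.Size([1, 64, 32, 32])
```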
This study addresses the need for intelligent condition monitoring in high-complexity medical imaging systems by proposing a smart sensing architecture for the Revolution EVO Computed Tomography (CT) scanner. Ensuring operational reliability and minimizing unexpected downtime remain critical challenges in advanced CT platforms, motivating the integration of distributed sensing and data-driven analytics. System logs spanning August 2024 to October 2025 were processed into 10-min intervals and converted into a structured dataset comprising 76 features derived from operational events, scanning parameters, and temporal dynamics. Two supervised learning models, the Support Vector Machine (SVM) and Artificial Neural Network (ANN), were trained to identify abnormal operating conditions. Both models delivered excellent classification performance, achieving an accuracy of 0.973. The SVM demonstrated balanced precision, recall, and F1-score metrics of 0.973, while the ANN outperformed in ranking and sensitivity to anomalies with an AUROC of 0.993 and an AUPRC of 0.976. This framework highlights the potential of sensor-driven machine learning in enabling early detection of system anomalies and optimizing maintenance planning within clinical CT environments.
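A schematic of the two supervised baselines, with random matrices standing in for the 76 log-derived features per 10-minute interval; the synthetic anomaly labels and hyperparameters are illustrative, not those of the study.

```python
import numpy as np
from sklearn.metrics import classification_report, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 76))                                  # one row per 10-minute interval
y = (X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=2000) > 2).astype(int)  # synthetic anomaly flag

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True)).fit(X_tr, y_tr)
ann = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)).fit(X_tr, y_tr)

for name, model in [("SVM", svm), ("ANN", ann)]:
    proba = model.predict_proba(X_te)[:, 1]
    print(name, "AUROC:", round(roc_auc_score(y_te, proba), 3))
    print(classification_report(y_te, model.predict(X_te), digits=3))
```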
Flash floods represent one of the deadliest weather-related hazards globally, yet their prediction remains fundamentally challenged by extreme class imbalance in observational data. This study addresses a critical methodological gap: traditional evaluation metrics, both overall accuracy and Area Under the ROC Curve (AUC), are systematically misleading for rare event prediction. We demonstrate empirically how models achieving 93% accuracy and AUC exceeding 0.98 can simultaneously fail to detect 65% of flood events. Moving beyond conventional approaches, we introduce distribution theory-informed feature generation by integrating Extreme Value Theory through Weibull distribution analysis. We derive 24 features from rigorous statistical characterization of precipitation extremes spanning 16 years (2010-2026) of ERA5-Land reanalysis over Nova Scotia, Canada. Evaluating seven model configurations using Environment and Climate Change Canada operational warning thresholds, we find that adding just six Weibull-derived features to a Random Forest baseline nearly doubles flood detection, with recall increasing from 0.35 to 0.65 and F1-score from 0.48 to 0.74, while maintaining 87% precision. This controlled comparison provides the clearest evidence for the value of distribution-informed features. Across architectures, Support Vector Machines with selected features achieve 93.4% balanced accuracy with perfect recall, while Artificial Neural Networks achieve a balanced operational profile (75% recall, 65% precision). SHAP analysis reveals that physically meaningful interaction features, particularly the intensity-duration product and rain-on-saturated-soil, dominate predictions, with raw precipitation ranking only sixth, confirming that models learn genuine multivariate susceptibility structure rather than recovering classification thresholds. These findings provide essential guidance for practitioners: comprehensive reporting of balanced accuracy, precision, and recall is mandatory for imbalanced datasets where traditional metrics mask operational failure.
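As a sketch of the distribution-informed feature idea, the example below fits a Weibull distribution to a synthetic precipitation series with SciPy and derives a few tail-based features; the study's full set of 24 features and the ERA5-Land processing are not reproduced.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
precip = rng.gamma(shape=0.8, scale=6.0, size=5000)         # placeholder precipitation series (mm)
wet = precip[precip > 0.1]                                   # fit the distribution on wet intervals only

# Weibull fit of the wet-interval intensities (location fixed at zero)
shape, loc, scale = stats.weibull_min.fit(wet, floc=0)

def weibull_features(x: float) -> dict:
    """Tail-based features for one new precipitation value x."""
    return {
        "weibull_shape": shape,
        "weibull_scale": scale,
        "exceedance_prob": stats.weibull_min.sf(x, shape, loc=loc, scale=scale),   # P(X > x)
        "return_level_99": stats.weibull_min.ppf(0.99, shape, loc=loc, scale=scale),
    }

print(weibull_features(45.0))
```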
The recent explosive growth of Unmanned Aerial Vehicles (UAVs) has contributed to their high vulnerability to cyber-attacks, including Denial of Service (DoS), identity impersonation, and unauthorized access to data. UAV networks also carry the inherent risks of centralized Intrusion Detection Systems (IDS), which pose critical privacy risks and single points of failure; decentralized, privacy-preserving learning paradigms are therefore required. This paper presents a federated learning architecture called FedDrone-Shield (Federated Learning Framework for Drone Security and Shield against Intrusions) for detecting UAV intrusions under Independent and Identically Distributed (IID) data, and assesses several aggregation algorithms: FedAvg, FedProx, FedAdam, FedMedian, and ClusterAvg. Extensive experiments on a UAV anomaly detection dataset show that FedAdam and ClusterAvg outperform the other aggregation strategies, achieving test accuracies of 99.98%, F1-scores of 0.9999, and low loss values of 0.0009-0.0014. FedMedian achieves closely competitive performance, whereas FedAvg and FedProx are slightly less accurate and slower to converge. Client-level assessments also show consistently high precision, recall, and F1-scores across all attack types, with weighted F1-scores between 0.9997 and 0.9999, confirming reliable detection performance across distributed UAV clients. These findings establish FedDrone-Shield as a strong and practical benchmark for federated intrusion detection in UAV networks, demonstrating that adaptive aggregation approaches substantially improve detection accuracy, training efficiency, and data privacy. The proposed framework thus offers a robust basis for secure, privacy-preserving intrusion detection in distributed UAV networks.
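A minimal sketch of the FedAvg aggregation step, one of the compared strategies: a sample-size-weighted average of client model parameters. The client updates below are simulated, and FedProx, FedAdam, FedMedian, and ClusterAvg are not shown.

```python
from collections import OrderedDict
import torch

def fedavg(client_states, client_sizes):
    """Weighted average of client state_dicts by local dataset size."""
    total = float(sum(client_sizes))
    avg = OrderedDict()
    for key in client_states[0]:
        avg[key] = sum(state[key] * (n / total)
                       for state, n in zip(client_states, client_sizes))
    return avg

# Two toy UAV clients sharing a tiny linear intrusion detector
model = torch.nn.Linear(10, 2)
client_a = {k: v + 0.1 for k, v in model.state_dict().items()}   # simulated local update
client_b = {k: v - 0.2 for k, v in model.state_dict().items()}   # simulated local update
global_state = fedavg([client_a, client_b], client_sizes=[600, 400])
model.load_state_dict(global_state)                              # server updates the global model
```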
Dietary patterns may influence depression, yet findings remain inconsistent, partly due to methodological variation in dietary pattern identification. As data-driven approaches may help reduce subjectivity and improve reproducibility in dietary pattern identification, this study aimed to identify dietary patterns using a machine learning approach and examine their associations with depression among Korean adults. Using data from 21,321 Korean adults aged 19-64 years from the Korea National Health and Nutrition Examination Survey (2016-2021), we applied K-means clustering to identify dietary patterns based on both food group and nutrient intake. Dietary intake was assessed using a 24 h dietary recall, and depression status was based on physician diagnosis. Three distinct patterns were identified in both food group-based and nutrient-based analyses. In the food group-based analysis, a balanced and diverse dietary pattern (Cluster 3) was associated with lower odds of depression compared with a pattern characterized by overall low food intake (Cluster 1) (OR 0.64; 95% CI, 0.47-0.88; p = 0.007) after full adjustment, whereas no significant association was observed for the high processed food pattern (Cluster 2 vs. Cluster 1) (OR 0.73; 95% CI, 0.53-1.01). No significant associations were observed for nutrient-based clusters after full adjustment. Our findings suggest that adherence to balanced and diverse dietary patterns based on whole foods is associated with lower odds of depression. Food group-based clustering approaches may offer more reproducible and interpretable insights than nutrient-based approaches, supporting their potential utility in epidemiological research and public health strategies.
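A compact sketch of the analysis flow, assuming standardized food-group intakes: K-means clustering followed by a logistic model for depression odds across clusters. The synthetic intake table stands in for the KNHANES 24 h recalls, and survey weighting and covariate adjustment are omitted.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Placeholder food-group intakes (g/day) and a placeholder depression-diagnosis indicator
intake = pd.DataFrame(rng.gamma(2.0, 50.0, size=(1000, 6)),
                      columns=["grains", "vegetables", "fruits", "meat", "dairy", "processed"])
depressed = rng.integers(0, 2, size=1000)

# Standardize intakes and derive three dietary-pattern clusters
X = StandardScaler().fit_transform(intake)
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Unadjusted logistic regression of depression on cluster membership (cluster 0 as reference)
design = pd.get_dummies(pd.Series(clusters, name="cluster"),
                        prefix="cluster", drop_first=True).astype(float)
design = sm.add_constant(design)
fit = sm.Logit(depressed, design).fit(disp=False)
print(np.exp(fit.params))   # odds ratios vs. the reference cluster
```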
Traditional conveyor belt object detection methods often lack robustness and adaptability under challenging conditions such as low-light and low-resolution environments. This study proposes an improved detection method specifically designed for conveyor belt environments, built upon the YOLOv11 object detection framework. A custom dataset was created to support foreign object detection on factory conveyor belts. To address the low resolution of the input images, the Enhanced Super-Resolution Generative Adversarial Network (ESRGAN) was employed to improve image clarity. Additionally, to enhance performance under low-illumination conditions, several architectural improvements were embedded in the YOLOv11 framework, leading to the proposed Conveyor Belt Foreign Object Detection (YOLOv11-CBFD) algorithm. These enhancements included an optimized upsampling module, integrated attention mechanisms, a modified convolution module, and an improved loss function. Experimental results demonstrated that the proposed YOLOv11-CBFD algorithm significantly enhanced the accuracy of foreign object recognition. Based on a dataset collected from a factory conveyor belt, YOLOv11-CBFD achieved an accuracy of 86.1%, a recall of 86.7%, an [Formula: see text] of 89.1%, and a model size of only 2.17 M parameters. Compared to the original YOLOv11n model, the proposed method reduced the parameter count by 16.2% while demonstrating no significant degradation in recognition capabilities. In terms of computational efficiency, the optimized architecture demonstrated a 12.4% increase in the number of frames per second when deployed on a Jetson Orin NX embedded AI computer. Field experiments conducted in industrial inspection scenarios validated the practical effectiveness of the system, demonstrating continuous operation over 48 h under real-time constraints (average latency <33 ms/frame), while consistently maintaining an accuracy of 86.1% across multiple deployment cycles. The experimental results highlight the ability of the model to effectively balance computational efficiency and detection performance on embedded AI platforms.
Dairy foods, particularly cheeses produced from raw or minimally processed milk, remain vulnerable to hazards such as Listeria monocytogenes, where delayed laboratory confirmation can expand recalls, increase food waste, and delay outbreak containment. This study proposes a veterinary-aware digital traceability framework that embeds herd health data, milk-quality testing, and inspection outcomes directly into batch-level EPCIS event records. By representing veterinary public health controls as structured, machine-actionable traceability elements, the framework enables automatic logging of mandatory control points, systematic compliance verification, and rule-based risk state transitions within standard EPCIS infrastructures. Using regulation-consistent dairy simulations modeling delayed Listeria detection during maturation, we evaluate the operational impact of event-level causal traceability within the proposed architecture. Compared with conventional time-window recall strategies, provenance-based trace-forward queries reduced recall scope under the evaluated synthetic scenarios. Integrating structured veterinary controls into EPCIS-based traceability systems supports automated regulatory evidence generation and more targeted recall decisions, contributing to improved auditability and reduced food waste in dairy supply chains.
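As a toy illustration of a provenance-based trace-forward query, the snippet below follows input-to-output links from a flagged milk batch to the downstream batches that actually contain it; the dictionary schema is a simplified stand-in for EPCIS event records, and the batch identifiers are invented.

```python
# Simplified stand-ins for EPCIS transformation/aggregation events (invented identifiers)
events = [
    {"type": "transformation", "inputs": ["MILK-0712"], "outputs": ["CHEESE-A1", "CHEESE-A2"]},
    {"type": "transformation", "inputs": ["MILK-0713"], "outputs": ["CHEESE-B1"]},
    {"type": "aggregation",    "inputs": ["CHEESE-A1"], "outputs": ["PALLET-9"]},
]

def trace_forward(batch_id, events):
    """Return every downstream batch derived from batch_id via input->output links."""
    affected, frontier = set(), {batch_id}
    while frontier:
        nxt = set()
        for ev in events:
            if frontier & set(ev["inputs"]):
                nxt.update(ev["outputs"])
        nxt -= affected
        affected.update(nxt)
        frontier = nxt
    return affected

# Only batches derived from the flagged milk are recalled, not everything in the time window
print(trace_forward("MILK-0712", events))   # {'CHEESE-A1', 'CHEESE-A2', 'PALLET-9'}
```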
Pulp calcifications (PCs) play a major role in the clinical outcome of endodontic treatment. Dentists use different radiographic modalities for detecting the presence of PCs. Transfer learning models have shown promising results with low computational resources in the detection and classification of diseases and conditions in dental radiographs. The aim of the present study was to evaluate the performance of transfer learning models in the detection of PCs in cropped panoramic radiographs (PRs). Two calibrated examiners collected 240 cropped PR images (120 with PCs and 120 without PCs) of maxillary and/or mandibular posterior teeth. The images were preprocessed using CLAHE (Contrast Limited Adaptive Histogram Equalization) and then augmented. Three pre-trained models, VGG16, ResNet101V2, and MobileNetV2, were used for classification of the images. A fine-tuning approach was used for training the models. The inter-rater reliability among the five examiners was 0.91. VGG16 was the best-performing model with training, validation, and test accuracies of 0.80, 0.85, and 0.85, respectively. VGG16 showed a precision of 0.84, a recall of 0.87, an F1-score of 0.86, and an AUC of 0.93. ResNet101V2 and MobileNetV2 showed test accuracies of 69% and 50%, respectively. The transfer learning model VGG16 outperformed the other models in the detection of PCs in cropped PRs. Because cropped PRs were used, the model cannot be generalized; however, future work will aim to attain similar performance metrics on uncropped PRs in larger datasets.
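A brief sketch of the preprocessing and transfer-learning setup described, assuming OpenCV's CLAHE and the Keras VGG16 backbone; the head layers, learning rate, and file handling are illustrative rather than the study's exact configuration.

```python
import cv2
import tensorflow as tf

def preprocess(path: str):
    """CLAHE contrast enhancement of a cropped panoramic radiograph, resized for VGG16."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(gray)
    rgb = cv2.cvtColor(enhanced, cv2.COLOR_GRAY2RGB)
    return cv2.resize(rgb, (224, 224)).astype("float32") / 255.0

# VGG16 backbone with ImageNet weights, fine-tuned end to end at a low learning rate
base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3), pooling="avg")
base.trainable = True
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # PC present vs. absent
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC()])
# model.fit(train_images, train_labels, validation_data=..., epochs=...)  # placeholders
```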
Lung adenocarcinoma presenting as ground-glass nodules (GGNs) comprises three invasive subtypes (adenocarcinoma in situ [AIS], minimally invasive adenocarcinoma [MIA], invasive adenocarcinoma [IAC]) with distinct prognoses and management strategies. Preoperative discrimination of these subtypes remains challenging for radiologists, and existing deep learning models rarely integrate multi-modal data for reliable prediction. This study aimed to develop and internally validate a multi-modal fusion framework based on the standard ResNet50 architecture, integrating CT images, clinical variables, and tumor markers, to improve the preoperative prediction of ground-glass nodule invasiveness. A retrospective study was conducted including 431 patients with pathologically confirmed ground-glass nodules. All patients underwent standard chest computed tomography before surgery. A multi-modal deep learning model was constructed based on the ResNet50 network, combined with clinical characteristics and laboratory indicators. Model performance was evaluated using accuracy, area under the receiver operating characteristic curve, precision, recall, and F1-score with five-fold cross-validation. The proposed multi-modal model achieved an overall accuracy of 72.2%, precision of 95.6%, negative predictive value of 96.0%, weighted F1-score of 40.0%, and multiclass Matthews correlation coefficient of 73.1% in the three-class classification of AIS, MIA, and IAC. Per-class analysis showed precision of 84.6%, 35.7%, and 84.4% and recall of 57.9%, 29.4%, and 81.8% for AIS, MIA, and IAC, respectively. The fusion model yielded a macro-average AUC of 0.87, which was higher than the CT-only model (0.79) and both the senior (0.67) and junior radiologists (0.57). The model demonstrated superior diagnostic performance compared to human readers, particularly for the challenging MIA subtype. This multi-modal deep learning model combining CT images, clinical variables, and serum tumor markers enables accurate and robust three-class classification of AIS, MIA, and IAC in ground-glass nodules. The proposed model outperforms both human radiologists and the imaging-only model, suggesting its potential as a reliable auxiliary tool to improve preoperative prediction of lung adenocarcinoma invasiveness and assist clinical decision-making.
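A minimal PyTorch sketch of the described fusion design: pooled ResNet50 image features concatenated with encoded clinical and tumor-marker variables before a three-class (AIS/MIA/IAC) head. The tabular feature count and layer sizes are illustrative, not the study's exact variable set.

```python
import torch
import torch.nn as nn
from torchvision import models

class GGNFusionNet(nn.Module):
    def __init__(self, n_tabular: int = 12, n_classes: int = 3):
        super().__init__()
        resnet = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
        self.encoder = nn.Sequential(*list(resnet.children())[:-1])   # 2048-d pooled CT features
        self.tabular = nn.Sequential(nn.Linear(n_tabular, 32), nn.ReLU())
        self.classifier = nn.Linear(2048 + 32, n_classes)

    def forward(self, ct_image, tabular):
        z_img = self.encoder(ct_image).flatten(1)                     # image branch
        z_tab = self.tabular(tabular)                                 # clinical/tumor-marker branch
        return self.classifier(torch.cat([z_img, z_tab], dim=1))     # logits for AIS/MIA/IAC

model = GGNFusionNet()
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 12))
print(logits.softmax(dim=1))
```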
COPD remains a prevalent and debilitating respiratory condition, necessitating early and accurate diagnosis for optimal clinical intervention. In this study, we propose a novel deep learning-based diagnostic framework that employs the ECAPA-TDNN (Emphasized Channel Attention, Propagation and Aggregation-Time Delay Neural Network) architecture to classify respiratory sound signals from the ICBHI dataset. Originally designed for speaker verification, ECAPA-TDNN introduces channel attention and multi-scale feature aggregation, which we adapt for the first time to the domain of medical acoustic analysis. This architecture allows the model to effectively capture subtle and discriminative patterns in pathological breathing sounds, overcoming the limitations of conventional CNN-based methods. Our methodology integrates rigorous signal preprocessing, log-Mel spectrogram extraction, and data augmentation to enhance robustness and generalization. An Attentive Statistics Pooling mechanism is employed for temporal feature summarization, while Grad-CAM-based explainability is incorporated to improve the interpretability of the diagnostic predictions. The model is rigorously validated using a five-fold cross-validation scheme, achieving a mean validation accuracy of 96.8% with consistently high F1-scores and recall rates across all folds. Comparative analysis with prior methods highlights the superiority of our ECAPA-TDNN-based approach in terms of diagnostic precision, robustness, and potential clinical applicability. To the best of our knowledge, this is the first work to adapt ECAPA-TDNN for COPD detection from respiratory sounds, establishing a new benchmark in interpretable and high-performance acoustic-based respiratory disease screening.
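A short sketch of the log-Mel front end mentioned in the pipeline, using librosa; the window and hop settings are illustrative assumptions, and the ECAPA-TDNN backbone itself (e.g., the SpeechBrain implementation) is not reproduced here.

```python
import librosa
import numpy as np

def log_mel(path: str, sr: int = 16000, n_mels: int = 80, win_ms: int = 25, hop_ms: int = 10):
    """Load a respiratory-sound recording and compute a log-Mel spectrogram."""
    y, sr = librosa.load(path, sr=sr)
    y = librosa.util.normalize(y)                        # simple amplitude normalization
    mel = librosa.feature.melspectrogram(
        y=y, sr=sr, n_mels=n_mels,
        n_fft=int(sr * win_ms / 1000), hop_length=int(sr * hop_ms / 1000))
    return librosa.power_to_db(mel, ref=np.max)          # (n_mels, frames), fed to the classifier

# features = log_mel("breathing_cycle.wav")              # placeholder path to an ICBHI recording
```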
The identification and management of grocery items in retail environments have traditionally relied on barcode-based systems, which require significant human intervention and underutilize existing surveillance infrastructure. Computer vision-based approaches offer a promising alternative for automated product recognition. However, many existing grocery datasets remain relatively homogeneous or limited in scale, geographic diversity, or real-world variability. To support more realistic evaluation settings, we present a large-scale grocery dataset collected from eight stores across multiple states in India. The dataset comprises over 13,000 images spanning 349 product categories and captures practical retail challenges such as dense shelf arrangements, occlusions, viewpoint variations, and visual ambiguity. Rather than claiming novelty in addressing these challenges individually, our contribution lies in systematically integrating them within a unified and diverse dataset framework. We also introduce a lightweight product identification pipeline based on omni-scale feature learning, designed to balance representational capacity and computational efficiency. The proposed model achieves a mAP@0.50 of 58.3, a precision of 72.9%, and a recall of 77.9% on the proposed dataset, demonstrating competitive performance while maintaining a compact architecture. Comprehensive comparisons with established benchmark models further contextualize our contributions within the broader literature. Overall, this work provides a diverse evaluation benchmark and an efficient detection framework for practical retail deployment.
Luteinizing hormone (LH) plays a pivotal role in regulating reproductive function, and alterations in its amino acid sequence can profoundly affect hormonal activity. Characterizing variations in the protein sequence is therefore crucial for understanding hormonal imbalances and associated reproductive disorders. In this study, we present a deep learning-based computational framework for predicting LH sequence alterations using protein sequence data. Protein sequences were pre-processed, numerically encoded, and balanced prior to model development. Individual deep learning models, including convolutional neural networks (CNN) and bidirectional long short-term memory (BiLSTM) networks, were implemented and compared with hybrid attention-enhanced architectures, culminating in the proposed CNN + Attention + BiLSTM model. Experimental evaluation using stratified training, validation, and testing splits revealed that the CNN model achieved a testing accuracy of 86.07%, whereas the BiLSTM network improved performance to 91.47%, underscoring the importance of modeling sequential dependencies. Notably, the proposed CNN + Attention + BiLSTM model outperformed all other architectures, achieving testing and validation accuracies of 99.42% and 99.60%, respectively, along with near-perfect precision (0.995), recall (0.99), and F1-score (0.995). This study validates the efficacy of attention-based hybrid deep learning architectures in predicting variations in the LH sequence, representing a promising approach for identifying novel protein biomarkers and enhancing diagnostic capabilities in reproductive health.
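A minimal Keras sketch of the kind of hybrid architecture named in the abstract: an embedding over integer-encoded amino acids, a 1-D CNN, a BiLSTM, and a self-attention step before a sigmoid head. Sequence length, vocabulary size, and layer widths are assumptions, not the study's exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

MAX_LEN, VOCAB = 200, 21   # assumed padded length; 20 amino acids plus a padding token

inputs = layers.Input(shape=(MAX_LEN,), dtype="int32")
x = layers.Embedding(VOCAB, 32)(inputs)                                       # residue embeddings
x = layers.Conv1D(64, kernel_size=7, padding="same", activation="relu")(x)   # local motif features
x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)          # sequential context
attn = layers.Attention()([x, x])                                            # self-attention over positions
x = layers.GlobalAveragePooling1D()(layers.Concatenate()([x, attn]))
outputs = layers.Dense(1, activation="sigmoid")(x)                           # altered vs. reference sequence

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])
model.summary()
```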
Background: Children and adults with Trisomy 21 are more likely to develop nutrition-related conditions and diseases. The nutrition-related health of Canadians with Trisomy 21 is unknown. We aimed to determine the nutrient intake and physical activity of school-aged children with Trisomy 21 in Manitoba, Canada. Methods: Mothers of 14 school-aged children (n = 7 female, average age 9 years old) with Trisomy 21 completed a 24 h dietary recall and a survey that included questions about their children's nutrition and physical activity. Nutrient intake analysis was conducted to compare food and beverage consumption with dietary guidelines and nutrient recommendations. Data were analyzed descriptively. Results: Most children with Trisomy 21 included in this study consumed an adequate average daily intake of protein, carbohydrate, and iron; an inadequate average daily intake of dietary fibre and calcium; and an excessive average daily intake of added sugars and saturated fat. Notably, all children consumed inadequate vitamin D and excessive sodium. Most children consumed a dietary supplement (10/14), engaged in moderate-intensity physical activity (10/14), and were active for more than 60 min per day (12/14). Conclusions: Most children with Trisomy 21 included in this study met daily physical activity recommendations. However, despite a variety of foods reportedly consumed across all food groups, nutrient intake among school-aged children with Trisomy 21 included in this study was mixed, as both deficiencies and excesses of some nutrients were observed. There is a need to improve the nutrient intake of children with Trisomy 21 to reduce their risk of developing nutrition-related conditions and diseases.