There is a paucity of cognitive training programmes that resemble real life and thereby bolster the transfer of treatment-related cognitive gains to daily-life functioning. We aimed to investigate the effects of a 4-week, intensive virtual reality-based cognitive remediation therapy (VR-CRT) involving daily-life challenges versus an active virtual reality control treatment on cognition and functioning in patients with mood or psychosis spectrum disorders. This was a single-centre, double-blind, parallel-group, randomised controlled trial at the Psychiatric Centre Copenhagen in Denmark. Clinically stable outpatients aged 18-55 years with an ICD-10 diagnosis of unipolar disorder, bipolar disorder, or a psychosis spectrum disorder, and with clinically relevant objective and subjective cognitive impairment were included. Participants were randomly assigned (1:1) to 4 weeks of VR-CRT or virtual reality control treatment and assessed at baseline, treatment completion (week 5), and follow-up (week 17). Randomisation used the REDCap module with block randomisation (block sizes four to eight), stratified by age (<35 vs ≥35 years) and diagnosis (mood disorder vs psychosis spectrum disorder), with allocation concealed from the enrolling author. The primary outcome was the global functional cognitive capacity score on the Cognition Assessment in Virtual Reality test at week 5. This trial was registered with ClinicalTrials.gov, NCT06038955, and is completed. Between Oct 10, 2022, and Aug 9, 2024, 103 candidates were assessed for eligibility, of whom 62 participants were enrolled and randomly assigned (VR-CRT group n=31; control group n=31). Of the 30 participants commencing VR-CRT, 28 (93%) completed treatment per protocol. At week 5, the mean Z score improvement in the global functional cognitive capacity score was 1·5 (SD 0·6) in the VR-CRT group and 0·4 (0·8) in the virtual reality control group (treatment effect 0·98 [95% CI 0·65-1·32]; p<0·0001; d=1·55). 
This effect was maintained at follow-up at week 17 (mean Z score improvement 1·6 [0·5] vs 0·6 [0·7]; treatment effect 0·91 [0·61-1·21]; p<0·0001; d=1·53). The intervention was well tolerated, with no treatment-related serious adverse events. Improvement in functional cognitive capacity was significantly greater in the VR-CRT group than in the control group. Our findings suggest that embedding cognitive strategy training in immersive virtual reality can enhance transfer to real-world functioning, offering a feasible, engaging solution for cognitive rehabilitation in psychiatry. Funding: TrygFonden, Axel Muusfeldts Foundation, Jascha Foundation, Ivan Nielsen Foundation, and Familien Hede Nielsen Foundation.
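As a sanity check on the reported effect size, Cohen's d can be approximated from the week-5 summary statistics above; a minimal sketch in Python (assuming n = 31 per arm as randomised; the trial's own d = 1.55 is presumably model-based, so this pooled-SD approximation only comes close):

```python
import math

def cohens_d(m1, sd1, n1, m2, sd2, n2):
    """Cohen's d using the pooled standard deviation of two groups."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Week-5 mean Z-score improvements (VR-CRT vs control) taken from the abstract
d = cohens_d(1.5, 0.6, 31, 0.4, 0.8, 31)  # ≈ 1.56, close to the reported 1.55
```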
To accurately and quickly diagnose overheating defects in dry-type flexible cable terminals and thereby reduce failure rates, hotspots must be detected in real time in aerial infrared images of the terminals. To address the challenges of small target size, complex background, and real-time performance in this task, I define four types of hotspots in dry-type flexible terminals and propose an improved model based on the You Only Look Once (YOLO) model. This model embeds Diverse Branch Blocks to enhance its representation capability, introduces Multi-Scale Convolutional Attention to better focus on key areas and detect small targets, and reduces model complexity by introducing the FasterNet Block. The proposed model performs significantly better than the baseline model and other mainstream YOLO-series models, satisfying the real-time demand for detecting hotspots in dry-type flexible cable terminals in aerial infrared images. The source code can be found at https://gitee.com/yao-shunyu1/real-time-detection-of-hotspots-in-dry-type-flexible-cable-terminals.git.
Tenofovir amibufenamide (TMF) and tenofovir alafenamide (TAF) are considered efficacious and safe nucleoside/nucleotide analogs (NAs) for patients with chronic hepatitis B (CHB). However, real-world data on TMF for CHB patients with low-level viremia (LLV) remain scarce. This study investigated the real-world effectiveness and safety of TMF combined with entecavir (ETV) in CHB patients with LLV, and compared this regimen with TAF combined with ETV. This retrospective real-world study included CHB patients with LLV who received treatment with either TMF combined with ETV or TAF combined with ETV. After the 48-week follow-up, we evaluated the differences between the two groups in terms of virological response (VR) rate, ALT normalization rate, HBsAg and HBeAg seroclearance rates, liver fibrosis assessment, and safety endpoints (renal function and blood lipids). A total of 258 patients were enrolled: 123 in the experimental group (TMF combined with ETV) and 135 in the control group (TAF combined with ETV). HBV DNA and AST levels in both groups were significantly lower than baseline after the 48-week treatment period (P < 0.05). The VR rate at week 48 was 79.67% in the experimental group and 72.59% in the control group, with no statistically significant difference between the two groups (P = 0.184). With respect to the ALT normalization rate, HBsAg seroclearance rate, HBeAg seroclearance rate, and liver transient elastography (TE) results, no statistically significant differences were detected between the two groups (P > 0.05). 
Similarly, for the safety endpoints, including serum creatinine (Cr), estimated glomerular filtration rate (eGFR), triglyceride (TG), total cholesterol (TC), high-density lipoprotein (HDL), and low-density lipoprotein (LDL) levels, none of these indicators differed significantly between the two groups. TMF combined with ETV was effective in CHB patients with LLV and appeared to have a favorable safety profile within the limitations of the available data; it exhibited robust antiviral activity and improved patients' liver function.
Older adults with complex needs (CN), commonly defined as the coexistence of multiple chronic conditions and functional limitations, exhibit high levels of medical and long-term care (LTC) utilization. However, evidence on real-world patterns of joint medical and LTC service use in this population in China remains limited. This cross-sectional study utilized data from 177,807 individuals aged ≥ 60 years who underwent LTC insurance assessment in Shanghai between January and May 2023. CN was defined as having three or more chronic conditions with at least one limitation in activities of daily living. Within an integrated care framework, latent class analysis (LCA) was applied to identify patterns of medical and LTC service utilization based on 10 indicators informed by the Andersen Behavioral Model of Health Services Use. Multinomial logistic regression was used to examine the association between CN status and class membership, adjusting for demographic, socioeconomic, and health-related factors. Older adults with CN (n = 42,277) differed significantly from those without CN (n = 135,530) in demographic, socioeconomic, health status, and service utilization characteristics. Six latent classes of medical and LTC service utilization were identified: Low Medical & Low Care, Moderate Medical & Low Care, High Medical & All Care, High Medical & Informal Care, High Medical & Formal Care, and High Inpatient & Formal Care. Compared with non-CN individuals, CN individuals had higher probabilities of belonging to high-utilization classes, particularly the High Medical & All Care, High Medical & Informal Care, and High Medical & Formal Care classes, with the Low Medical & Low Care class as the reference. These associations remained significant after adjusting for covariates. Older adults with CN in China showed heterogeneity in patterns of medical and LTC service utilization and were more frequently represented in intensive and multi-sector service use profiles. 
Early identification of CN individuals and the development of risk-stratified integrated care models may help inform more coordinated and people-centered service delivery approaches.
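In a multinomial logistic regression like the one described above, class-membership probabilities come from a softmax over per-class linear predictors, with the reference class ("Low Medical & Low Care") pinned at zero; a minimal sketch with hypothetical coefficients (the study's fitted estimates are not reported in the abstract):

```python
import math

CLASSES = ["Low Medical & Low Care", "Moderate Medical & Low Care",
           "High Medical & All Care", "High Medical & Informal Care",
           "High Medical & Formal Care", "High Inpatient & Formal Care"]

# Hypothetical log-odds coefficients for CN status; reference class is 0 by construction
CN_COEF = {"Moderate Medical & Low Care": 0.2, "High Medical & All Care": 1.1,
           "High Medical & Informal Care": 0.9, "High Medical & Formal Care": 0.8,
           "High Inpatient & Formal Care": 0.4}

def class_probabilities(has_cn):
    """Softmax over per-class linear predictors; intercepts omitted for brevity."""
    scores = [CN_COEF.get(c, 0.0) * (1 if has_cn else 0) for c in CLASSES]
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return {c: e / total for c, e in zip(CLASSES, exps)}

p_cn = class_probabilities(True)
p_no = class_probabilities(False)
# Positive CN coefficients shift probability mass toward the high-utilization classes
```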
Myelofibrosis (MF) is a subgroup of Philadelphia chromosome-negative myeloproliferative neoplasms associated with an increased risk of cardiovascular disease, including pulmonary hypertension. Here we report the real-world prevalence, risk factors, and clinical outcomes of elevated right ventricular systolic pressure (eRVSP) in MF patients. A retrospective, single-center cohort study was conducted on 208 patients with MF diagnosed between 2013 and 2023. eRVSP was defined as RVSP > 35 mmHg. Major adverse cardiac events (MACE) were defined as new-onset congestive heart failure, coronary artery disease requiring intervention, cerebrovascular events, or cardiovascular death after MF diagnosis. Univariable and multivariable Cox proportional hazards regression models with eRVSP status as a time-dependent covariate were used to estimate overall survival. Echocardiograms were performed in 208 MF patients, with RVSP estimates available in 156 patients (75%). eRVSP was present in 61 patients (39.1%). Patients with eRVSP were older (65 vs. 62 years, p = 0.053) and had higher rates of baseline hypertension (65.6% vs. 43.2%, p = 0.01) and atrial fibrillation (16.4% vs. 4.2%, p = 0.02). MACE after diagnosis of MF occurred in 43 (20.7%) patients and was more frequent in eRVSP patients (36.1% vs. 15.8%, p = 0.007), predominantly driven by increased rates of new-onset congestive heart failure (27.9% vs. 8.4%, p = 0.003). In multivariable analysis, the presence of time-dependent eRVSP was associated with reduced survival (aHR: 4.41; 95% CI: 2.84-6.85) after adjustment for clinically relevant covariates. eRVSP was prevalent among MF patients undergoing echocardiography and was associated with increased cardiovascular morbidity, particularly heart failure. Furthermore, eRVSP was associated with inferior survival, underscoring the need for targeted screening and management in MF patients with cardiovascular risk factors and diseases. 
Prospective studies with baseline cardiovascular assessment and echocardiograms in all MF patients would establish the true prevalence and may further elucidate the morbidity and mortality implications of eRVSP in MF patients.
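Treating eRVSP as a time-dependent covariate requires restructuring each patient's follow-up into counting-process (start, stop] intervals, with the covariate switching at the first echo showing eRVSP; a minimal sketch of that data layout, with hypothetical field names (the study's actual data model is not described in the abstract):

```python
def to_counting_process(patient_id, followup_years, ervsp_onset=None, died=False):
    """Split follow-up into (start, stop] rows; ervsp flips from 0 to 1 at onset.
    ervsp_onset is years from MF diagnosis to first echo with RVSP > 35 mmHg."""
    rows = []
    if ervsp_onset is None or ervsp_onset >= followup_years:
        rows.append({"id": patient_id, "start": 0.0, "stop": followup_years,
                     "ervsp": 0, "event": int(died)})
    else:
        rows.append({"id": patient_id, "start": 0.0, "stop": ervsp_onset,
                     "ervsp": 0, "event": 0})
        rows.append({"id": patient_id, "start": ervsp_onset, "stop": followup_years,
                     "ervsp": 1, "event": int(died)})
    return rows

rows = to_counting_process("MF-001", followup_years=5.0, ervsp_onset=2.0, died=True)
# Two rows: (0, 2] with ervsp=0 and no event; (2, 5] with ervsp=1 and the death event
```

This long format is what standard survival packages expect when fitting a Cox model with time-varying covariates, and it avoids the immortal-time bias of coding eRVSP as a fixed baseline variable.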
This study derives real-world exposure parameters for ten color cosmetics among adult females in Shanghai using a 14-day tracking survey. These data were integrated into a Tier-1 deterministic health risk assessment evaluating six potentially toxic elements (PTEs) in lipsticks. Results show that the 90th percentile (P90) daily lipstick usage is 0.012 g/day, significantly lower than the European Scientific Committee on Consumer Safety (SCCS) default of 0.057 g/day. Risk modeling indicates that under typical and high-end scenarios, both non-carcinogenic and carcinogenic risks from PTEs remain negligible. However, under a theoretical worst-case scenario, the lifetime cancer risks (LCRs) for hexavalent chromium (Cr(VI)) and arsenic (As) marginally exceed the 1.00E-06 safety threshold; these two elements are the primary hazard drivers under extreme conditions. The findings highlight the need for localized, population-specific exposure data to prevent systematic risk misestimation. Regulatory frameworks should transition from universal international defaults to context-specific parameters, prioritizing evidence-based impurity controls for Cr and As.
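The Tier-1 screening arithmetic underlying these results is simple: a systemic exposure dose (SED) is computed from product usage and impurity concentration, then screened as a lifetime cancer risk (LCR) against 1.00E-06. A minimal sketch; only the 0.012 g/day P90 usage comes from the study, while the impurity concentration, retention factor, body weight, and slope factor are illustrative assumptions:

```python
def systemic_exposure_dose(usage_g_day, conc_mg_kg, retention, bw_kg):
    """SED in mg/kg bw/day: daily product amount x impurity level x retained fraction."""
    usage_kg_day = usage_g_day / 1000.0   # g product/day -> kg product/day
    return usage_kg_day * conc_mg_kg * retention / bw_kg

def lifetime_cancer_risk(sed, slope_factor):
    """LCR = SED x oral slope factor (per mg/kg/day); screened against 1e-6."""
    return sed * slope_factor

# 0.012 g/day is the study's P90 usage; the other inputs are hypothetical
sed = systemic_exposure_dose(usage_g_day=0.012, conc_mg_kg=1.0, retention=1.0, bw_kg=60.0)
lcr = lifetime_cancer_risk(sed, slope_factor=0.5)
exceeds = lcr > 1.0e-6  # False for these illustrative inputs
```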
Several studies have demonstrated superior outcomes with endovascular aortic repair (EVAR) compared to open aortic repair (OAR) in patients with infrarenal ruptured abdominal aortic aneurysms (rAAA). However, in emergent settings, aortic neck suitability for EVAR and adherence to instructions-for-use (IFU) criteria are often not met in a significant proportion of patients. We aimed to compare EVAR and OAR in patients with rAAA using a recent national database, incorporating favorable neck (FN) versus hostile neck (HN) anatomy. We analyzed Vascular Quality Initiative (VQI) data for rAAA from 2018-2024. Two analyses were performed: first, a comparison between OAR and EVAR; second, a comparison among three cohorts: OAR, EVAR with FN (EVAR-FN), and EVAR with HN (EVAR-HN). HN anatomy was defined as neck length <15 mm, neck diameter >30 mm, or infrarenal angle >60°. The primary outcomes were 30-day and one-year mortality. Secondary outcomes included postoperative complications, ICU stay >3 days, RBC transfusion >4 units, and postoperative reintervention. Logistic and Cox regressions were used for the analyses. A total of 4,578 rAAA repairs were performed, of which 3,275 (71.5%) were EVAR. Among EVAR cases, 2,452 (74.9%) had HN anatomy. Thirty-day mortality was 35.5% for OAR and 21.5% for EVAR (P < 0.001). One-year mortality was 42.5% for OAR, 31.7% for all EVARs, 26.1% for EVAR-FN, and 33.5% for EVAR-HN. After adjusting for confounders, EVAR was associated with reduced 30-day and one-year mortality (aOR = 0.66, 95% CI 0.52-0.84, P = 0.001; and aHR = 0.79, 95% CI 0.67-0.93, P = 0.005). EVAR was also associated with reduced risk of postoperative complications. When stratified by neck anatomy, EVAR-FN was associated with a more pronounced reduction in 30-day (aOR = 0.46, 95% CI 0.33-0.65; P < 0.001) and one-year mortality (aHR = 0.66, 95% CI 0.53-0.82; P < 0.001) compared with OAR. 
EVAR-HN was associated with reduced 30-day mortality (aOR = 0.74, 95% CI 0.58-0.94; P = 0.013) but not one-year mortality (aHR = 0.84, 95% CI 0.71-1.00; P = 0.052) compared with OAR. EVAR-HN was also associated with increased 30-day and one-year mortality compared with EVAR-FN. The majority of rAAAs are treated today with EVAR, and 75% of these patients present with HN anatomy. EVAR was associated with reduced postoperative mortality and complications compared with OAR, regardless of neck anatomy. However, EVAR maintained a one-year survival advantage over OAR only in patients with FN anatomy. While EVAR-HN demonstrated similar one-year mortality to OAR, it remains the preferred option due to better perioperative outcomes and lower 30-day mortality. Longer-term follow-up is needed to evaluate reintervention, rupture, and aneurysm-related mortality, particularly in patients with HN anatomy.
Precision nutrition aims to tailor dietary guidance to individual biology, yet current methods struggle to integrate complex molecular and multi-omic data into clinical care. Emerging quantum-driven technologies, encompassing quantum computing, quantum chemistry and quantum-enhanced sensors, link detailed molecular modelling with real-time metabolic forecasting. Quantum chemical simulations and machine learning model nutrient-protein interactions at the atomic level, while quantum algorithms and echo state networks have been applied to create digital metabolic avatars that predict weight and metabolic trajectories from daily diet and activity data. Quantum computing enables rapid integration of genomic, metabolomic and microbiome datasets and supports optimization of personalised diet plans. Advances in computational molecular modelling now allow prediction of molecular structures and properties relevant to food components, and prototype quantum metabolic twins have demonstrated the capacity to forecast weight trends from incomplete real-world data. The clinical implications include proactive dietary interventions, noninvasive nutrient deficiency screening and improved prediction of disease risk from metabolic profiles, all of which can enhance patient outcomes and clinical decision making. This perspective synthesizes recent advances and delineates research directions at the intersection of quantum science, medical diagnostics, metabolism and clinical nutrition, with implications for clinicians, physicians, dietitians and clinical decision support in patient care.
In sharp force injury cases, assessments of applied force and injury severity primarily rely on empirical judgment, lacking quantitative and objective data support. This study aims to construct a high-fidelity finite element (FE) head model, validate it using 3D-printed biomimetic skull experiments, and investigate the relationship between momentum and injury severity in sharp instrument stabs to the head, thereby providing scientific evidence for case analysis. A high-fidelity FE head model was reconstructed based on the Total Human Model for Safety (THUMS) model. Biomimetic skulls were 3D-printed using PEEK material. Experimental data obtained from slashing with a Chinese kitchen knife were collected via sensors and motion capture systems, using the erosion failure model to simulate wound formation. Following model validation, the approach was applied to a real stabbing fatality case to systematically simulate stabbing processes under varying momenta (0.75-11.25 kg·m/s). FE model validation demonstrated close alignment between simulation and experimental results, with errors in wound dimensions and slashing force within 15.0%. Case reconstruction revealed that the minimum momentum required to reproduce penetrating injury in the homicide case was 9.75 kg·m/s. In the present simulation framework, momentum values of 0.75 kg·m/s, 2.25 kg·m/s, and 5.25 kg·m/s were associated with minor, moderate, and serious skull injuries, respectively. This study provides a biomechanical framework for quantitative simulation and illustrative case reconstruction of sharp force injuries. The real-case application serves primarily as an example of the proposed biomechanical reconstruction approach, which may enhance the objectivity of injury severity assessment when integrated with other forensic evidence and provide reproducible biomechanical support for case investigation.
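The momenta in the framework above are just p = m·v; a minimal sketch that computes a strike's momentum and maps it onto the reported bands (the threshold values come from this abstract; treating them as lower bounds of severity bands, and the knife mass and velocity, are illustrative assumptions):

```python
def strike_momentum(mass_kg, velocity_m_s):
    """Linear momentum p = m * v, in kg·m/s."""
    return mass_kg * velocity_m_s

def severity_band(p):
    """Map momentum onto the injury bands reported in the simulation framework.
    Using the reported values as band lower bounds is an interpretive assumption."""
    if p >= 9.75:
        return "penetrating (reproduces the case injury)"
    if p >= 5.25:
        return "serious"
    if p >= 2.25:
        return "moderate"
    if p >= 0.75:
        return "minor"
    return "below simulated range"

p = strike_momentum(mass_kg=0.75, velocity_m_s=13.0)  # hypothetical knife mass and speed
band = severity_band(p)  # 9.75 kg·m/s reaches the penetrating threshold
```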
Accurate identification of dangerous driving behaviors is critical for accident prevention and occupant protection. However, most existing in-vehicle driver monitoring systems rely primarily on facial or head motion analysis, which fails to capture full-body driving behaviors and raises privacy concerns due to dependence on RGB or near-infrared imaging. In addition, these systems often exhibit limited robustness under low-light conditions. To address these limitations, this study proposes a comprehensive depth-based framework for in-vehicle 3D human pose estimation and dangerous driving posture recognition. First, a large-scale dual-view 3D pose dataset encompassing ten typical driving behaviors is constructed using a Time-of-Flight (ToF) camera. Based on this dataset, we develop a lightweight end-to-end pipeline in which an anchor-based regression model estimates the 3D poses of 16 driver keypoints, followed by an enhanced ST-GCN++ architecture for skeleton-based action recognition. By integrating pose estimation with graph-based temporal modeling, the proposed method effectively distinguishes visually similar hazardous behaviors. To facilitate real-world deployment, the algorithm is further integrated into a software system that enables closed-loop pose monitoring and hierarchical intervention. Experimental results verify that the proposed method achieves 96.02% accuracy in 3D pose estimation and 98.0% accuracy in behavior recognition. With a computational cost of only 1.49 G FLOPs and an inference latency of 0.0375 s per sample, the system achieves real-time performance (27-28 FPS) on an automotive embedded platform, making it well suited for practical in-vehicle safety applications.
High-efficiency video coding (HEVC) is renowned for achieving efficient video compression without compromising quality; however, it introduces significant computational complexity, particularly in intra-frame prediction. This paper proposes a dynamic programming optimization technique for HEVC encoders, implemented on an FPGA platform to improve both performance and resource efficiency. The architecture, which includes a sample extractor, correlation analyser, and sample predictor, uses dynamic programming to compute optimal pixel correlations and generate precise directional vectors, enhancing prediction accuracy. Hardware validation was performed on the Virtex-6 ML605 FPGA platform, achieving an operating frequency of 838 MHz and enabling real-time encoding of Ultra High Definition (UHD) 4K video frames. Benchmark results show that the proposed design achieves a 43% improvement in throughput, processing 50 frames per second at UHD resolution, compared with existing HEVC implementations. The architecture demonstrates high resource efficiency, utilising only 12% of available logic, 7% of block RAM, and 4% of DSP resources, while maintaining low power consumption (2.4 W). PSNR comparisons against traditional discrete cosine transform (DCT)-based prediction methods show consistent improvements in video quality, ranging from [Formula: see text] dB to [Formula: see text] dB across various block sizes (4 × 4, 8 × 8, 16 × 16, and 32 × 32). The proposed architecture minimises encoding time and achieves low latency, making it well suited to real-time applications. The proposed method can facilitate efficient, high-quality video compression in next-generation video processing systems deployed on FPGA platforms.
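The PSNR figures cited above follow the standard definition PSNR = 10·log10(MAX²/MSE) for 8-bit samples; a minimal reference implementation (the pixel blocks are illustrative, not the paper's test sequences):

```python
import math

def psnr(reference, test, max_val=255):
    """Peak signal-to-noise ratio in dB between two equal-length pixel sequences."""
    assert len(reference) == len(test)
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    if mse == 0:
        return float("inf")  # identical signals: no distortion
    return 10 * math.log10(max_val ** 2 / mse)

ref = [52, 55, 61, 59]
enc = [50, 55, 60, 59]   # MSE = (4 + 0 + 1 + 0) / 4 = 1.25
value = psnr(ref, enc)   # 10 * log10(255**2 / 1.25) ≈ 47.2 dB
```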
Early neurological improvement (ENI) after endovascular thrombectomy (EVT) is a clinically relevant early outcome and has been associated with subsequent functional recovery. However, simple bedside approaches for estimating the likelihood of ENI using routinely available clinical variables remain limited. We therefore sought to develop and internally evaluate a pragmatic prediction model for ENI after EVT in a real-world stroke cohort. We performed a single-centre retrospective cohort study at Shanxi Provincial People's Hospital. A total of 314 EVT-treated patients were initially screened from hospital records. After preliminary data verification and assembly of the research database, 253 patients remained in the final study database available for variable-level assessment, of whom 185 with complete data on the primary outcome and prespecified key model variables were included in the primary complete-case analysis. ENI was defined as either a reduction in National Institutes of Health Stroke Scale (NIHSS) score of at least 8 points from baseline to 1 week or an absolute 1-week NIHSS score of 1 or less. Candidate predictors included age, sex, baseline NIHSS, diabetes, cardioembolic aetiology, prior cerebrovascular disease, and door-to-puncture time (DPT). A multivariable logistic regression model was developed and translated into a simplified bedside score based on baseline NIHSS category and cardioembolic aetiology. Model performance was assessed using discrimination, calibration, Brier score, decision curve analysis, and bootstrap internal validation. Sensitivity analyses included an alternative ENI-4 definition, 48-hour neurological improvement as an alternative early outcome, alternative DPT thresholds, and multiple imputation for incomplete baseline covariates only. Among the 185 patients in the primary analytical cohort, 53 (28.6%) achieved ENI. 
Baseline NIHSS was the dominant predictor of ENI in both univariable and multivariable analyses, whereas the additional contribution of other candidate predictors was modest. In the full model (Model 2), each 1-point increase in baseline NIHSS was associated with a 13% increase in the odds of ENI (adjusted OR 1.13, 95% CI 1.05-1.21; p < 0.001). The full model showed an apparent AUC of 0.706 and an optimism-corrected AUC of 0.657 after 1,000 bootstrap resamples; the corresponding Brier scores were 0.181 and 0.197. Bootstrap-corrected calibration suggested some overfitting (intercept -0.335, slope 0.591). The simplified bedside score yielded an apparent and optimism-corrected AUC of 0.677, while the NIHSS-only model showed an apparent AUC of 0.673 and an optimism-corrected AUC of 0.674. Missing 1-week NIHSS was associated with higher baseline NIHSS, shorter length of stay, lower availability of 48-hour NIHSS, and worse discharge outcomes, suggesting that missing outcome data were unlikely to be completely random. Sensitivity analyses using alternative outcome definitions, alternative DPT thresholds, and multiple imputation for incomplete baseline covariates were broadly supportive of the primary findings, although some smaller-effect covariates were unstable in restricted subsets. In this single-centre real-world EVT cohort, baseline NIHSS emerged as the main predictor of early neurological improvement. A parsimonious model based on routinely available clinical variables showed only moderate discrimination, and the derived simplified bedside score may be useful for exploratory early risk stratification rather than as a stand-alone clinical decision tool. Given the substantial missingness in 1-week NIHSS, the possibility of selection bias, evidence of overfitting, and the absence of external validation, the model should be considered exploratory and requires independent validation before routine clinical use.
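The reported adjusted OR of 1.13 per NIHSS point maps onto predicted ENI probabilities through the logistic link; a minimal sketch (the log-odds slope encodes the abstract's OR, but the intercept is a hypothetical placeholder, since the fitted value is not reported):

```python
import math

def eni_probability(nihss, intercept=-1.5, log_or_per_point=math.log(1.13)):
    """Logistic model: P(ENI) = sigmoid(intercept + log(OR_per_point) * NIHSS).
    The intercept here is hypothetical; the slope reflects the reported aOR 1.13."""
    linear = intercept + log_or_per_point * nihss
    return 1.0 / (1.0 + math.exp(-linear))

p5, p20 = eni_probability(5), eni_probability(20)
# Each additional baseline NIHSS point multiplies the odds of ENI by 1.13
```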
Medical education has increasingly emphasized social accountability and community-oriented learning to prepare graduates for complex health system challenges. Early Community Exposure (ECE) introduces medical students to real-world community and primary care contexts early in training, with the potential to foster empathy, professionalism, and contextual understanding of health. While ECE has been widely implemented, evidence remains limited regarding its impact on first-year medical students in newly established medical schools located in peripheral regions of Indonesia. This study explored how novice medical students experienced and interpreted their first exposure to community-based learning. This study employed a qualitative exploratory case study design. Participants were first-year medical students who completed the inaugural ECE program at Universitas Borneo Tarakan, a newly established medical school in North Kalimantan, Indonesia. Using purposive and snowball sampling, 15 students participated in in-depth semi-structured interviews conducted between January and March 2025. Interviews were audio-recorded, transcribed verbatim, and analyzed thematically using a hybrid inductive-deductive approach informed by community-based education, experiential learning, and social cognitive theory. Strategies to enhance trustworthiness included member checking, reflexive journaling, and peer debriefing. Seven interrelated themes were identified: (1) meaningful experiences, (2) early learning transformation, (3) bridging theory and practice, (4) professional identity formation, (5) internalization of professional values, (6) adaptation and motivation, and (7) program optimization. Students described ECE as a transformative and humanistic learning experience that enhanced empathy, responsibility, teamwork, and confidence. ECE enabled students to contextualize preclinical knowledge within real community settings while fostering early professional identity formation. 
Challenges included limited time allocation, scheduling constraints, and accessibility issues for some community participants. Early Community Exposure provided first-year medical students with a pivotal learning experience that integrated cognitive, affective, and professional development from the outset of training. Embedding structured reflection, mentorship, and institutional support may further strengthen the educational impact of ECE, particularly in peripheral and resource-limited settings. These findings highlight the value of early community-based learning in preparing socially accountable physicians.
Surgery is often viewed as a lifesaving intervention, yet recovery depends just as much on what follows. The concept of individualised anaesthesia and analgesia in surgery challenges the traditional one-size-fits-all model by tailoring care to each patient's physiological and psychological profile, with the potential to improve outcomes, reduce complications, and enhance the patient experience. Making individualised anaesthesia and analgesia in surgery a clinical reality will require robust, multifaceted trials, as its implementation is both a clinical and ethical imperative. Such research must examine predictive risk, drug-based treatments, psychobiological influences on perioperative experience, real-time nociception monitoring, and alignment between patient and clinician values, and it should span the complete perioperative period.
Human teams excel at dynamically restructuring both task assignments and team composition in response to emerging challenges, proactively recruiting or releasing members as needed. This capacity for autonomous adaptation is a cornerstone of effective teamwork, yet it remains difficult to achieve in heterogeneous multi-robot systems, which typically operate under fixed team configurations or adapt only reactively to external disruptions. In this work, we present a systematic investigation of the Proactive Collaboration paradigm for robot teams, in which the working team autonomously recruits or releases members as tasks evolve. We implement this paradigm by equipping robots with our Autonomous Interaction framework, which uses need-driven multi-round communication to facilitate discussion of task progress, negotiated task allocation, and dynamic team resizing. Through real-world and simulated experiments, we demonstrate that our framework effectively realizes the Proactive Collaboration paradigm. By resolving capability gaps via anticipatory planning and minimizing action redundancy, it yields consistent and measurable gains in team efficiency and robustness. Our findings suggest that enabling individual-level initiative may offer a promising pathway toward more adaptive and cohesive collective behavior in multi-robot systems.
Low-rate stealth attacks present a major challenge in Internet of Things (IoT) environments because their slow, irregular, and noise-like traffic patterns evade traditional rate-based intrusion detection systems. To address this problem, this paper proposes FSL-IDS, a hierarchical federated intrusion detection framework designed for resource-constrained IoT deployments. The framework integrates sparse representation learning, hierarchical temporal modeling, and federated optimization across distributed IoT, fog, and cloud layers. At the device level, a sparsity-constrained encoder captures subtle deviations in local traffic behavior, while fog nodes employ a Hierarchical Ensemble for Correlated Events (H-EFCE) to identify cross-device temporal attack patterns. At the cloud layer, a Federated Gradient Aggregation Module (FGAM) performs robust global model aggregation, and a Lightweight Quantized Model Optimization Layer (LQMOL) enables efficient deployment on constrained edge devices. The proposed system was evaluated on the IoTID20 dataset containing heterogeneous smart-home IoT traffic, including Mirai, DoS, scan, authentication attacks, and stealth anomalies. To address the scarcity of low-rate stealth samples, a Synthetic Stealth Attack Injection Module (SSAIM) was used to augment training data while preserving realistic traffic characteristics. Experimental results demonstrate that the proposed architecture significantly improves stealth attack detection compared with multiple baseline methods, including federated learning algorithms and classical anomaly detection models. The FGAM-based cloud model achieved an AUC-ROC of 0.992 and F1-score of 0.965, outperforming FedAvg, FedProx, GRU-Autoencoder, Temporal Convolutional models, and Isolation Forest baselines under identical dataset and split conditions. 
Ablation experiments confirm that each architectural component contributes to performance gains, with the removal of the sparse encoder, temporal ensemble, or SSAIM module reducing detection accuracy and recall for low-rate stealth anomalies. Sensitivity analysis further shows stable detection performance across varying proportions of synthetic stealth traffic, while evaluation on limited real stealth samples confirms consistent detection capability. Additionally, model compression experiments demonstrate that 6-bit quantization provides an effective balance between efficiency and security, reducing edge-device energy consumption by 43% while maintaining near-optimal detection performance; further compression below this threshold leads to noticeable degradation in stealth-attack recall.
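The cloud-side aggregation and edge-side compression discussed above can be sketched in a few lines: a sample-size-weighted federated average (a FedAvg-style stand-in, since the abstract does not specify FGAM's exact aggregation rule) and uniform k-bit weight quantization in the spirit of LQMOL:

```python
def federated_average(client_weights, client_sizes):
    """Sample-size-weighted average of client model vectors (FedAvg-style)."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
            for i in range(dim)]

def quantize(weights, bits=6):
    """Uniform k-bit quantization: snap each weight to one of 2**bits levels."""
    lo, hi = min(weights), max(weights)
    levels = (1 << bits) - 1
    step = (hi - lo) / levels if hi > lo else 1.0
    return [lo + round((w - lo) / step) * step for w in weights]

global_model = federated_average([[1.0, 2.0], [3.0, 4.0]], client_sizes=[1, 3])
# -> [2.5, 3.5]: the larger client (n=3) pulls the average toward its weights
compressed = quantize([0.0, 0.1, 0.5, 1.0], bits=6)
# per-weight error is at most half a quantization step, (1 - 0) / 63 / 2 ≈ 0.008
```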
The growing adoption of artificial intelligence in healthcare highlights the need for models that can leverage heterogeneous patient data while preserving strict privacy requirements. This paper proposes a novel multi-modal federated learning framework with differential privacy for decentralized healthcare AI. The model integrates electronic health records and ECG time-series using modality-specific encoders and a shared latent fusion network, enabling comprehensive representation learning without centralizing sensitive data. Differential privacy is incorporated into local updates to provide formal guarantees against information leakage in federated aggregation. Extensive experiments on real-world healthcare datasets show that the proposed method achieves [Formula: see text] accuracy, [Formula: see text] precision, [Formula: see text] recall, [Formula: see text] F1-score, and [Formula: see text] AUC, outperforming centralized, single-modality, and non-private baselines. The framework also converges [Formula: see text] faster than single-modality federated learning, reaching [Formula: see text] accuracy in 35 rounds. An ablation study confirms the contribution of multi-modal fusion and class balancing, while client variance analysis shows the lowest performance deviation ([Formula: see text]) under heterogeneous distributions. These results indicate that combining federated optimization, differential privacy, and multi-modal learning provides an effective framework for privacy-preserving clinical AI, with potential for deployment in distributed healthcare settings.
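The differential-privacy step applied to local updates can be sketched in the standard DP-SGD style: clip each client update to a fixed L2 norm, then add calibrated Gaussian noise before sending it for aggregation. This is a generic illustration under assumed parameters; the paper's privacy accounting and noise calibration are not specified in the abstract.

```python
import numpy as np

def dp_clip_and_noise(grad, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip a local update to L2 norm <= clip_norm (bounding sensitivity),
    then add Gaussian noise scaled to that bound. DP-SGD-style sketch;
    parameter values here are illustrative assumptions."""
    rng = rng or np.random.default_rng(0)
    norm = max(np.linalg.norm(grad), 1e-12)
    clipped = grad * min(1.0, clip_norm / norm)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)
    return clipped + noise

g = np.array([3.0, 4.0])            # L2 norm 5, so it gets scaled to 1
private_g = dp_clip_and_noise(g)
# The clipped (pre-noise) update never exceeds the sensitivity bound:
assert np.linalg.norm(g * min(1.0, 1.0 / np.linalg.norm(g))) <= 1.0 + 1e-9
```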
T2-comorbidities, including allergic rhinitis, chronic rhinosinusitis with and without nasal polyps, atopic dermatitis, chronic spontaneous urticaria, food allergy, aspirin sensitivity, and eosinophilic esophagitis, are the most common comorbidities in patients with severe asthma (SA); they not only have a negative impact on disease outcomes but also impose an important socio-economic burden. In the era of personalized medicine, treating SA and its comorbidities with a single medication is an exciting possibility for clinicians. Several biologics used for SA have shown benefits on T2-comorbidities, but current knowledge regarding the magnitude and consistency of their efficacy across these comorbidities, as well as the optimal strategies for selecting biologics in multimorbid patients, remains limited. In this narrative review, we discuss the available evidence on the efficacy and safety of the different biologics currently available as add-on treatment for SA on the most frequent T2-comorbidities, individually or in combination, based on randomized controlled trials and real-world studies.
This study presents a novel hybrid cryptographic model designed to enhance privacy preservation and data integrity in IoT-enabled Wireless Sensor Networks (WSNs). Traditional algorithms such as RSA, AES, and Blowfish are evaluated and combined into a Hybrid Model to address the resource-constrained nature of IoT devices. The proposed model was tested on a dataset of sensor data, with performance metrics including encryption/decryption time, security strength, memory usage, data throughput, and communication overhead. Numerical findings demonstrate the Hybrid Model's superior performance, with encryption time reduced by 18% compared to the Advanced Encryption Standard (AES). The hybrid model employs RSA-2048 (112-bit security strength) for key exchange and AES-256/Blowfish for data encryption (256-bit confidentiality protection). Memory usage was optimized, requiring only 25.16 KB, making the model suitable for low-power IoT devices. Additionally, the Hybrid Model achieved a data throughput of 24.89 KB/s and reduced communication overhead to 1.32 KB. These results highlight the efficiency and robustness of the Hybrid Model in securing IoT-enabled WSNs. This research contributes a scalable, resource-efficient solution for privacy and data integrity, offering a promising advancement for real-time IoT applications in sectors such as healthcare, industrial automation, and smart homes.
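The hybrid pattern described above (asymmetric key exchange plus symmetric bulk encryption) can be sketched with the Python `cryptography` package: RSA-2048 with OAEP wraps a fresh AES-256 session key, and AES-GCM encrypts the sensor payload. This is an illustrative sketch of the general pattern, not the paper's implementation; the paper's Blowfish path is omitted, and GCM is used here because its authentication tag also covers the data-integrity goal.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# -- Key exchange: RSA-2048 wraps a fresh AES-256 session key -----------
rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
session_key = AESGCM.generate_key(bit_length=256)
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = rsa_key.public_key().encrypt(session_key, oaep)

# -- Bulk data: AES-256-GCM (authenticated, so integrity is checked) ----
nonce = os.urandom(12)
reading = b'{"sensor": 7, "temp_c": 21.4}'   # hypothetical payload
ciphertext = AESGCM(session_key).encrypt(nonce, reading, None)

# -- Receiver: unwrap the session key, then decrypt and verify ----------
recovered_key = rsa_key.decrypt(wrapped_key, oaep)
plaintext = AESGCM(recovered_key).decrypt(nonce, ciphertext, None)
assert plaintext == reading
```

Tampering with `ciphertext` or `nonce` makes `AESGCM.decrypt` raise `InvalidTag`, which is how the authenticated mode enforces data integrity.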
Echinococcosis is a rare but potentially life-threatening parasitic disease caused by Echinococcus species. In Japan, epidemiological data are mainly derived from notification-based surveillance, and large-scale nationwide analyses focusing on hospitalized patients remain limited. This study aimed to clarify the nationwide epidemiology and clinical characteristics of hospitalized patients with echinococcosis in Japan using an administrative database. We conducted a retrospective nationwide study using the Japanese Diagnosis Procedure Combination database. Hospitalized patients diagnosed with echinococcosis between April 1, 2014, and March 31, 2021, were identified. Data on age, sex, geographic distribution, affected organs, comorbidities, treatments, Japan Coma Scale score, and length of hospital stay were extracted and analyzed. A total of 170 hospitalized patients coded for echinococcosis were included after exclusion of duplicate cases. The median age was 65.5 years, and 51% were male. Hepatic involvement was observed in 90% of patients, followed by pulmonary (4%), cutaneous (2%), cerebral (2%), and osseous (2%) involvement. Surgical treatment was frequently performed, including hepatectomy in 48% and cholecystectomy in 24% of patients, while albendazole therapy was administered in 21%. Most patients were from the Hokkaido region (85%), followed by the Kanto region (8%). The average annual number of hospitalized patients was approximately 24. Echinococcosis remains a clinically relevant parasitic disease in Japan, particularly in Hokkaido, and often requires hospitalization and surgical intervention. Nationwide administrative data provide valuable insights into the real-world clinical burden of echinococcosis.