A dedicated real-time detection system (NTM-RTDS) has been developed to determine the radial position of neoclassical tearing mode (NTM) magnetic islands. Built upon the LabVIEW real-time framework and PXIe architecture, it adopts a distributed structure of host and real-time target computers that enables high-speed acquisition and real-time processing of signals from magnetic probes and electron cyclotron emission diagnostics. By analyzing perturbation frequencies associated with magnetic island rotation and electron temperature profile variations, the system achieves island localization within a 10 ms cycle. Validation on the Experimental Advanced Superconducting Tokamak (EAST) confirms a detection accuracy of 80.25% in identifying magnetic islands, verifying the system's robustness. The NTM-RTDS thus represents a critical instrument for enabling active, real-time NTM control via electron cyclotron resonance heating, and provides a foundational platform for real-time disruption mitigation strategies in future large-scale fusion experiments, including the International Thermonuclear Experimental Reactor (ITER).
Introduction: Hearing loss significantly impairs speech comprehension in noisy environments, creating major communication challenges for individuals with hearing impairment. Modern hearing aids increasingly rely on intelligent systems capable of real-time speech and noise classification to enhance speech intelligibility while suppressing background noise. Researchers have explored signal processing, machine learning, and deep learning approaches to improve classification accuracy, adaptability, and performance in complex acoustic environments. Methods: A systematic review was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 guidelines to identify peer-reviewed studies on machine learning-based speech and noise classification for hearing aids published between 2004 and 2024. Searches were conducted across major academic databases using structured keywords. Studies were screened using predefined inclusion and exclusion criteria, and quality assessment was performed using a structured scoring framework. Due to methodological heterogeneity, findings were synthesised narratively. Results: The review found that machine learning and deep learning approaches consistently outperform traditional signal processing techniques in speech and noise classification tasks. Deep learning architectures, particularly convolutional neural networks, recurrent neural networks, long short-term memory networks, and hybrid models, demonstrated improved robustness, adaptability, and classification performance in noisy and dynamic acoustic environments. Emerging trends include lightweight architectures, model compression, ensemble learning, and context-aware systems designed to support real-time deployment in hearing aids. Conclusion: Machine learning and deep learning techniques have significantly advanced speech and noise classification in hearing aids.
However, challenges related to latency, computational efficiency, energy consumption, and real-world deployment remain. Future research should prioritise lightweight, adaptive, and user-centred hearing aid technologies. This systematic review shows that integrating machine learning into hearing aids can greatly improve how users understand speech in noisy and challenging environments compared to traditional sound amplification alone. Intelligent speech and noise classification can help reduce listening effort and make communication easier in everyday settings such as markets, classrooms, and public transport. The findings highlight the potential of modern hearing aids to become smarter and more user-centred communication tools that better support the daily lives of people with hearing impairment.
Hospital-at-Home (HaH) programs supported by telemedicine have emerged as a promising alternative to conventional hospitalization. However, evidence on patient experience in large-scale, real-world virtual-first models remains limited. The objective of this research was to evaluate patient experience and satisfaction in a telemedicine-based, virtual-first HaH program. We conducted a retrospective observational cohort study, including adult patients admitted to a virtual-first HaH program at a tertiary hospital in Madrid, Spain, between October 2020 and May 2025. Patient experience was assessed at discharge using a routinely implemented digital questionnaire. A set of 18 common items across questionnaire versions was analyzed, covering communication, perceived safety, professional competence, and overall satisfaction. Responses were standardized and dichotomized using a top-box approach. A total of 887 patients were included (median age 64 years; median length of stay 9 days). Patient experience was highly positive, with satisfaction rates exceeding 95% across most domains. Perceived safety was reported by 98.98% of patients, and 97.06% indicated that telemedicine was easily integrated into daily life. Overall satisfaction reached 96.50%, and the same proportion would choose the HaH model again. A telemedicine-based, virtual-first HaH program achieved very high levels of patient satisfaction and perceived safety in a large real-world cohort. These findings support the feasibility and acceptability of virtual-first models for acute hospital care, demonstrating consistent patient experience across diverse clinical profiles, age groups, and lengths of stay.
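The top-box dichotomization mentioned above can be sketched as follows. The scale labels, counts, and single-top-category scoring rule are illustrative assumptions, not details taken from the study.

```python
# Hypothetical sketch of a "top-box" dichotomization: ordinal survey
# responses are collapsed so that only the most favorable category
# counts as a positive outcome. Labels and data are illustrative.
from typing import List

TOP_BOX = "very satisfied"  # assumption: a single top category scores 1

def top_box_rate(responses: List[str]) -> float:
    """Proportion of responses falling in the top category."""
    return sum(1 for r in responses if r == TOP_BOX) / len(responses)

responses = ["very satisfied"] * 96 + ["satisfied"] * 3 + ["neutral"]
print(round(top_box_rate(responses) * 100, 1))  # → 96.0
```

Some top-box analyses instead count the top two categories as positive; the exact rule used in the study is not specified in the abstract.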
To evaluate the real-world efficacy and safety of cadonilimab (a bispecific antibody targeting PD-1 and CTLA-4) in combination with chemotherapy with or without bevacizumab for cervical cancer and to identify potential biomarkers. This preliminary report analyzes the first 51 consecutive patients from a protocol-driven observational cohort initiating cadonilimab (≥2 cycles) between June 2022 and August 2025. The treatment regimens were: cadonilimab + chemotherapy + bevacizumab (n=22), cadonilimab + chemotherapy (n=24), or cadonilimab alone (n=5). Standardized data collection included clinicopathological variables, peripheral blood biomarkers, and protocol-defined tumor assessments (RECIST v1.1 every 6 weeks). Interim efficacy endpoints (objective response rate [ORR], disease control rate [DCR], median progression-free survival [mPFS]) and safety (CTCAE v5.0) were evaluated, with multivariate analyses to identify predictive factors. The median follow-up was 11.0 months. At the first tumor evaluation timepoint after completing two treatment cycles, 15 patients achieved complete response (CR), 22 patients achieved partial response (PR), and 9 patients achieved stable disease (SD), with an ORR of 72.5% and a DCR of 90.2%. At the data cutoff date (December 2025), the median PFS was 7.0 months (IQR: 4.0-10.0) and the DCR was 37.3% (19/51). Multivariable analysis revealed that squamous cell carcinoma histology (OR = 4.471, 95% CI = 1.037-21.699; P = 0.045) and baseline IL-6 levels ≤5.4 pg/mL (OR = 4.494, 95% CI = 1.089-18.541; P = 0.038) were independently associated with higher odds of achieving an objective response. Regarding safety, hematologic toxicities were the most common (74.5%), which may be related to the chemotherapeutic agents used in the combination therapy. Immune-related adverse events (irAEs) included liver function abnormalities in 39.2% of patients and skin and subcutaneous tissue disorders in 23.5% of patients. 
Analysis of immune-related dermal toxicity identified that a baseline Systemic Immune-Inflammation Index (SII) ≥660 (OR = 8.742, 95% CI = 1.372-55.648, P = 0.022) and CD4+PD-1+ >42.10% (OR = 18.121, 95% CI = 1.368-239.948, P = 0.028) were independent risk factors, whereas IL-17 >21.4 pg/mL (OR = 0.042, 95% CI = 0.003-0.542, P = 0.015) was an independent protective factor. In this real-world cohort, cadonilimab showed promising early efficacy and a manageable safety profile in cervical cancer. Identified biomarkers for response and immune-related dermal toxicity require validation in larger, prospective studies with longer follow-up.
Post-transplant relapse remains a major clinical challenge in Philadelphia chromosome-positive acute lymphoblastic leukemia (Ph + ALL). Real-time quantitative PCR (RQ-PCR) for BCR::ABL1 is the current standard for measurable residual disease (MRD) monitoring, whereas digital PCR (dPCR) offers substantially higher analytical sensitivity. Whether this increased sensitivity translates into additional prognostic value after allogeneic hematopoietic stem cell transplantation (allo-HSCT) remains unclear. In this prospective study (NCT06211166), 270 patients with Ph + ALL were longitudinally monitored after allo-HSCT. MRD was assessed in parallel using dPCR, RQ-PCR, and multiparameter flow cytometry (MFC). Based on the first post-transplant MRD detection pattern, patients were categorized into four groups: double-negative (n = 80), dPCR-single-positive (n = 158), RQ-PCR-single-positive (n = 3), and double-positive (n = 29). The dPCR-single-positive pattern was the most prevalent MRD status, accounting for 58.5% of patients. dPCR positivity independently predicted subsequent MFC-MRD conversion (HR 9.56, P = 0.029), with a median lead time of 77 days. In addition, dPCR detected BCR::ABL1 positivity earlier than RQ-PCR, preceding subsequent hematologic relapse by a median of 64.5 and 91.5 days, respectively. However, the cumulative incidence of hematologic relapse (CIR), the primary endpoint of this study, did not differ significantly among the four MRD-defined groups (P = 0.60). Consistently, isolated dPCR positivity was not associated with inferior 2-year leukemia-free survival (LFS; P = 0.30) or overall survival (OS; P = 0.60). Although dPCR detects molecular disease earlier and anticipates MFC-MRD by 2 months after allo-HSCT in Ph + ALL, isolated ultra-low-level BCR::ABL1 positivity does not impact relapse risk, LFS, or OS.
Routine MRD monitoring with RQ-PCR plus MFC remains sufficient for prognostic stratification, while dPCR primarily provides an ultra-early signal to guide timely intervention rather than improving survival prediction.
Cyclin-dependent kinase 4/6 inhibitors (CDK4/6is) combined with endocrine therapy are the standard of care for metastatic hormone receptor-positive, HER2-negative breast cancer (HR+ MBC). However, systemic therapy options and outcomes after progression on CDK4/6is remain poorly defined. We analyzed real-world treatment patterns and outcomes in patients with advanced HR+ MBC who received systemic therapy following CDK4/6i progression. We identified all patients with HR+ MBC treated at MD Anderson Cancer Center between January 1, 1997, and December 31, 2024. Kaplan-Meier and log-rank tests compared progression-free survival (PFS) and overall survival (OS) across post-CDK4/6i therapies, and a multivariable Cox model assessed prognostic factors. Among 1826 patients (99% female; median follow-up, 32.4 months), the median age at metastatic diagnosis was 56 years, and most were White (72%), with 10% Black, 9% Hispanic, 6% Asian/Pacific Islander, and <1% Native American. Following progression on CDK4/6is, 43% received chemotherapy, 38% targeted therapy, 12% endocrine therapy alone, and 7% investigational agents. Nearly half (48%) had visceral progression. Median PFS on subsequent therapy was 4.73 months (95% CI, 4.40-4.96), with no significant difference by therapy type. CDK4/6i duration ≥12 months was independently associated with improved PFS (HR 0.86; p = 0.021). Following CDK4/6i progression, most patients received chemotherapy, but PFS did not differ significantly across post-CDK4/6i treatment types. As CDK4/6is remain frontline therapy, further studies are needed to optimize treatment sequencing and improve post-CDK4/6i outcomes. This work was supported by a Department of Defense grant (HT9425-24-1-0991) and the National Cancer Institute MDACC Support Grant (P30 CA016672).
A growing body of empirical research shows that, for both affirmative and negative sentences, language processing is not only shaped by the ease of integrating linguistic input into world knowledge, but also by expectations of informative communication. Yet, it remains unclear how comprehenders coordinate pressures from both aspects during real-time sentence processing. We propose that language comprehension reflects a dynamic balance between world-knowledge typicality and communication informativeness, which is subject to contextual modulation. Two self-paced reading studies test this proposal by examining affirmative and negative sentences describing part-whole relations that vary in typicality and informativity in unmarked and unexpectedness-signaling contexts. Our results provide evidence for joint effects of typicality and informativity, with distinct patterns across sentence polarities and contexts.
The prevalence of childhood overweight and obesity has steadily increased over the past two decades, both in Spain and globally. The COVID-19 pandemic disrupted lifestyles, impacting children's diet and physical activity. We retrospectively analyzed the evolution of weight and BMI (body mass index) in 563 children aged 4 to 14 years, comparing three periods: pre-pandemic (until 2019), pandemic (2021-2022), and post-pandemic (2023-2024). Before 2020, a significant weight gain trend was already present: weight Z-scores increased by +0.21 SD (standard deviations) between the ages of 4 and 6. The lockdown drastically accelerated weight gain; weight Z-scores increased by up to +0.41 SD between the ages of 4 and 6 in the immediate post-lockdown phase, nearly doubling the baseline trend. Two years later, in the late post-pandemic phase, a partial reduction in weight was observed (-0.20 SD at age 14), indicating that weight gain has not been fully reversed. Consolidation of excess weight was also found: the combined rates of overweight and obesity (BMI Z-score > +1 SD) had increased to 42.2% at age 12 and 40.5% at age 14. A significant weight gain was observed during the pandemic, especially in younger children, with partial persistence. The sustained increase in overweight and obesity rates highlights the urgent need for preventive strategies and monitoring of lifestyle habits in the pediatric population.
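The overweight/obesity cutoff used above (BMI Z-score greater than +1 SD) amounts to a simple threshold rule; the sketch below illustrates it with made-up Z-score values, not study data.

```python
# Illustrative classification rule: children with a BMI Z-score
# above +1 SD are counted as overweight or obese. The Z-scores
# below are invented example values, not data from the study.
from typing import List

def excess_weight_rate(bmi_z_scores: List[float], threshold: float = 1.0) -> float:
    """Fraction of children whose BMI Z-score exceeds the threshold."""
    flagged = [z for z in bmi_z_scores if z > threshold]
    return len(flagged) / len(bmi_z_scores)

sample = [0.4, 1.2, -0.3, 1.8, 0.9, 2.1, 0.2, 1.1, 0.5, -1.0]
print(round(excess_weight_rate(sample) * 100, 1))  # → 40.0
```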
To evaluate the association between nasal procedures and long-term continuous positive airway pressure (CPAP) treatment persistence in a large US administrative claims database. Retrospective cohort study using the Komodo Health Sentinel Database (2020-2026). Adults with sleep apnea (ICD-10 G47.3x) who initiated CPAP were classified into six cohorts: temperature-controlled radiofrequency (TCRF) nasal valve remodeling (n = 920), septoplasty (n = 24,575), functional rhinoplasty (FR; n = 758), combined FR and septoplasty (n = 3,910), other nasal surgery (n = 12,995), and no nasal surgery (n = 1,297,292). The primary outcome was CPAP discontinuation, defined as a ≥180-day gap in positive airway pressure claims. Covariate-adjusted Cox regression and coarsened exact matching (CEM) on six variables assessed the association between nasal procedures and discontinuation. Sensitivity analyses included inverse probability of censoring weighting (IPCW), alternative gap thresholds (90-365 days), E-values, and insurance stratification. Among 1,340,450 CPAP initiators, the overall discontinuation rate was 29.0%. TCRF was associated with the lowest discontinuation (HR 0.62, 95% CI 0.53-0.72; p < 0.001); IPCW addressing informative censoring strengthened this association (HR 0.48, 0.41-0.56). Other nasal procedures showed more modest associations: combined FR and septoplasty (HR 0.84, 0.79-0.90), other nasal surgery (HR 0.86, 0.83-0.89), septoplasty (HR 0.87, 0.84-0.89), and functional rhinoplasty (HR 0.90, 0.78-1.02). CEM confirmed these findings, with TCRF showing HR 0.57-0.61 across model specifications. One-year persistence was 84.2% for TCRF versus 76.1% for no surgery. The TCRF association was consistent across insurance types and all sensitivity analyses. The E-value was 2.61 (CI bound 2.12). Nasal procedures were associated with higher CPAP persistence compared with no surgery, with TCRF showing the strongest and most consistent association.
These findings suggest that treating nasal obstruction may help sustain long-term CPAP use, though prospective studies are needed to confirm causality.
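The gap-based discontinuation endpoint described above (a ≥180-day gap between positive airway pressure claims) can be sketched as follows; the claim dates and function name are hypothetical illustrations, not the study's analysis code.

```python
# Hedged sketch of the discontinuation definition: a patient is
# flagged as discontinued if any gap between consecutive positive
# airway pressure claim dates reaches the threshold (180 days here).
from datetime import date
from typing import List

def discontinued(claim_dates: List[date], gap_days: int = 180) -> bool:
    """True if consecutive claims are ever separated by >= gap_days."""
    ordered = sorted(claim_dates)
    return any((later - earlier).days >= gap_days
               for earlier, later in zip(ordered, ordered[1:]))

claims = [date(2021, 1, 5), date(2021, 2, 4), date(2021, 9, 1)]
print(discontinued(claims))  # → True (Feb 4 to Sep 1 is a 209-day gap)
```

A real claims analysis would also handle censoring at the end of observation, which the IPCW sensitivity analysis in the study addresses.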
To characterize the genetic landscape of congenital ectopia lentis (EL) and assess genotype-phenotype correlations with implications for surgical decision-making. This retrospective study enrolled patients with congenital EL who presented to Fudan University Eye and ENT Hospital between 2017 and 2025. We performed targeted next-generation sequencing for probands, with candidate variants confirmed by Sanger sequencing. Patients were categorized into FBN1 and non-FBN1 groups. The ocular features and surgical options were compared across genotypes. A total of 497 probands were enrolled. The molecular diagnostic yield was 93.36%, with FBN1 variants accounting for 82.93% and non-FBN1 variants for 10.44%. Compared with FBN1 cases, non-FBN1 patients exhibited higher EL severity (P < 0.001), lower corneal curvature radius (CCR) (P < 0.001), and higher incidence of ocular comorbidities (P < 0.01). Surgically, non-FBN1 patients more often required robust intraocular lens fixation methods than did FBN1 patients (P < 0.001). Within the FBN1 cohort, the DN(Cys+CaB)+HI subgroup exhibited longer axial length (AL) (P < 0.001), thinner central corneal thickness (CCT) (P = 0.015), and a higher proportion of clinically diagnosed Marfan syndrome (P < 0.001) compared with the DN(Others) subgroup. In contrast, the FBN1 DN(Others) subgroup showed comparable AL, CCT, and CCR to the non-FBN1 group (all P > 0.05). No significant differences in ocular biometrics or surgical options were observed within the non-FBN1 group, except for the highest CCR in patients harboring CPAMD8 variants (P = 0.004). Genetic characterization of congenital EL extends beyond diagnosis to inform ocular phenotype variability and surgical decision-making.
Treatment for exocrine pancreatic insufficiency (EPI) includes pancreatic enzyme replacement therapy (PERT). Although PERT may improve symptoms, there are currently no EPI symptom measures to monitor PERT treatment. This analysis evaluated the utility of the Pancreatic Exocrine Insufficiency Questionnaire (PEI-Q) in patients with chronic pancreatitis and EPI treated with pancrelipase. This prospective, observational study (NCT04949828) included adult patients with chronic pancreatitis and EPI who were recommended pancrelipase 72,000 lipase units per meal and 36,000 lipase units per snack by an independent physician. Outcomes included the change from baseline to 1 month and 3 months after treatment initiation in PEI-Q symptom score, PEI-Q symptom severity categories, and health-related quality of life (HRQoL) as measured by the PEI-Q impact domain score. The per-protocol population included 32 and 29 patients with data at 1 and 3 months after pancrelipase initiation, respectively. A significant reduction in mean PEI-Q symptom scores was observed from baseline to 1 month (-1.0; p < 0.001) and 3 months (-1.1; p < 0.001). Despite all patients having moderate/severe EPI symptoms at baseline, 62.5% and 62.1% of patients reported mild/no EPI symptoms after 1 and 3 months, respectively. There was also a significant reduction in mean PEI-Q impact scores from baseline to 1 month (-1.0; p < 0.001) and 3 months (-1.2; p < 0.001), indicating improved HRQoL. Pancrelipase improved patient-reported EPI symptoms and HRQoL based on the PEI-Q, which may be a useful tool for monitoring PERT treatment in patients with chronic pancreatitis and EPI.
Emotion dysregulation (ED) is a hallmark of attention-deficit/hyperactivity disorder (ADHD); however, routine retrospective evaluations neglect its dynamic nature. To address this limitation, Ecological Momentary Assessment (EMA) provides a useful methodology for collecting data in real time and in participants' natural surroundings. The current systematic review synthesizes findings from EMA studies on emotion dysregulation in ADHD. Systematic searches were conducted through April 2025 across three databases (PubMed, Scopus, and Web of Science). We included 33 studies comprising approximately 2678 participants. Findings demonstrate that EMA successfully captures variation in emotion dysregulation while relating it to negative affect and functional impairment, regardless of age. Compliance rates (60-99%) confirm feasibility; however, studies with larger sample sizes and longitudinal data, particularly conducted with adolescents, are required. By providing "real-time" insight into the everyday dynamics of ADHD, the evidence compiled in this review offers a strong basis for the clinical use of EMA in patient management.
Oncogenic viruses cause high-risk cancers in humans and are responsible for nearly 20% of all cancer cases worldwide. Currently, very limited data exist in the realm of wastewater-based viral epidemiology (WBE) for cancer-causing viruses, with existing studies using targeted approaches (i.e., PCR-based approaches) that lack genomic resolution. In this study, we used a hybrid-capture approach to detect, filter, and sequence all known oncogenic virus signals from wastewater samples collected over 3 years (May 2022-May 2025) in 16 Texas cities, covering nearly 25% of the state's population. Once sequenced, we used custom computational tools designed for wastewater metagenomics to assign reads into their respective virus of origin, estimate viral abundances over time, and measure genomic read coverage. Our data indicate that we successfully detected oncogenic viruses, including six known oncogenic viruses, and three suspected oncogenic viruses, across all sampling locations within Texas. We observed a gradual increase in the viral abundance of oncogenic viruses over 3 years, with distinct peaks and dips over the summer and winter months. The prevalence of high-risk viruses such as human papillomavirus (HPV) and Epstein-Barr virus (EBV) rose, with sharp increases in viral abundance observed post-2024. We also obtained nearly 100% genome coverage with viral reads captured using this hybrid-capture technique for nearly all oncogenic viruses, with resolution down to the species and type taxonomic levels in some cases, such as that of HPV. Our study showcases the utility of hybrid-capture techniques to detect and track multiple oncogenic viruses simultaneously.
IMPORTANCE Cancer-causing viruses are of major clinical significance, responsible for nearly 20% of all recorded cancer cases in humans worldwide. There is a need for improved detection, tracking, and control of oncogenic viruses across the globe.
To our knowledge, this work is the first comprehensive WBE approach used to detect all known oncogenic viruses concurrently, demonstrating the feasibility of monitoring the presence and levels of cancer-causing viruses and enabling the possibility of public health interventions in the future. Using this method, we obtain broad genomic coverage with high depth and specificity, coupled with consistent real-time tracking dynamics of multiple oncogenic viruses. Furthermore, we showcase the ability to identify genomic regions on viral reference genomes from which sequenced reads originate. This information can be an invaluable tool toward understanding viral prevalence dynamics in general populations, their relationship to cancer incidence in humans, and their mechanisms of viral evolution, including mutations.
Biofilms are structured microbial communities that thrive on diverse surfaces in natural, industrial, and host environments. The biofilm lifestyle underpins microbial survival, shapes ecosystem function, and drives persistent infections; yet, for many microbes, the molecular determinants of biofilm development remain poorly defined. Here, we introduce "label-free analysis of biofilms" (LFAB), an imaging method that integrates time-lapse, low-magnification brightfield microscopy with regional optical density measurements to quantify biofilm biomass. Unlike conventional assays, LFAB enables real-time, non-perturbative, and high-throughput monitoring of biofilm dynamics. We validated LFAB across diverse microbial species and observed a strong correlation with traditional biofilm quantification methods. Applying LFAB to Streptococcus pneumoniae, a major human pathogen whose biofilm lifecycle underpins colonization and infection, we uncovered reproducible patterns of microcolony biofilm expansion and growth. LFAB-enabled screening of a transposon mutant library revealed that biofilm formation in S. pneumoniae is shaped by genes spanning carbohydrate metabolism, cell wall synthesis, adhesion, and surface interactions. Further analysis identified choline-binding protein A (CbpA) and its associated two-component regulator, as well as the peptidoglycan hydrolase LytB, as key drivers of microcolony biofilm dynamics. Together, these findings establish LFAB as a broadly applicable platform for dissecting biofilm biology and reveal new regulators of biofilm development in a clinically important pathogen.
Biofilms are structured communities of microorganisms that attach to surfaces and persist within a self-produced matrix. The biofilm lifestyle underlies microbial survival in nature, contributes to industrial biofouling, and drives many chronic infections.
Despite the importance of biofilms, high-throughput measurements of biofilm growth dynamics are challenging using existing tools, which are often disruptive or are not scalable. To overcome this limitation, we developed "label-free analysis of biofilms" (LFAB), a brightfield-based imaging platform that enables real-time, non-perturbative, and scalable quantification of biofilm biomass. LFAB is broadly applicable across species and correlates strongly with traditional assays. Applying LFAB to Streptococcus pneumoniae, a major human pathogen, we performed a mutagenesis screen, uncovering new genetic regulators of biofilm formation in this organism. These findings advance understanding of S. pneumoniae pathogenesis and establish LFAB as a powerful approach for dissecting the molecular basis of microbial community growth.
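As a rough illustration of the regional optical density idea behind LFAB, the sketch below converts brightfield pixel intensities into a mean optical density via OD = -log10(I/I0). The pixel values, background level, and function are assumptions for illustration, not the authors' implementation.

```python
# Rough illustration (not the LFAB code) of quantifying biofilm
# biomass from brightfield images: denser biofilm transmits less
# light, so a region's mean optical density rises as it grows.
import math
from typing import Sequence

def regional_od(pixels: Sequence[float], background: float) -> float:
    """Mean optical density of a region: OD = -log10(I / I0)."""
    return sum(-math.log10(max(p, 1e-6) / background) for p in pixels) / len(pixels)

# Illustrative region where each pixel transmits 10% of background light.
print(round(regional_od([25.5, 25.5, 25.5, 25.5], 255.0), 2))  # → 1.0
```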
Atypical hemolytic uremic syndrome (aHUS) is a rare, life-threatening complement-mediated thrombotic microangiopathy (TMA) associated with high morbidity, mortality, and substantial healthcare resource utilization (HCRU). Although complement inhibitors such as eculizumab and ravulizumab have dramatically improved outcomes, aHUS continues to impose significant economic and operational burden on healthcare systems. In the Gulf region, payers and health systems face increasing pressures driven by rising costs, delayed diagnosis, fragmented access, and limited real-world evidence necessary for informed resource allocation decisions. This manuscript summarizes the findings of a payer-focused regional expert meeting, which included three presentations: (1) the role and value of HCRU studies for payers, (2) the clinical and economic burden of aHUS, and (3) existing global evidence on HCRU in aHUS and gaps relevant to the Gulf. The meeting concluded with an expert panel discussion and a survey assessing feasibility and priorities for a collaborative real-world study. Consensus highlighted urgent needs to establish regional incidence and prevalence, quantify utilization and cost burden, evaluate treatment patterns and outcomes, and design a standardized multicenter study. Priority research directions include retrospective chart review with prospective follow-up and unified data collection across Gulf institutions. These findings provide a payer-informed roadmap for building evidence that can support sustainable, equitable, and value-based decision making for aHUS care in the region.
Purpose To quantify postimplementation concordance between a U.S. Food and Drug Administration-cleared artificial intelligence (AI) tool and AI-informed radiologists for pulmonary embolism (PE) detection on CT pulmonary angiography (CTPA), with real-time adjudication of discordances. Materials and Methods A PE AI tool (AIDOC, Tel Aviv, Israel) was retrospectively implemented in the clinic across an integrated network (August 9, 2021-February 20, 2023). Adult CTPAs underwent real-time AI analysis and radiologist interpretation. Radiologist-AI disagreements triggered adjudication by thoracic radiologists via the AI Quality Oversight Process; adjudicator diagnosis served as the reference standard for discordant cases. Concordance was measured and diagnostic performance of radiologists and AI was compared using adjudication for discordant cases. Results 32,501 CTPAs obtained from 29,492 patients (mean age, 62.4 years ± 18.6 [SD], 17,424 female) were evaluated. PE positivity was 9.93% (3,226/32,501). Overall concordance was 97.79% (95% confidence interval [CI]: 97.62-97.94%), higher for AI-negative than for AI-positive examinations (98.18% vs 93.75%; P < .001). Expert adjudication favored the radiologist in 88.73% of discordances. The rate of unique diagnosis by the interpreting radiologist (483/3,226, 14.97%) was approximately 19 times that of the AI tool alone (26/3,226, 0.81%). Concordance varied by PE features: acute versus chronic (87.34% vs 60.12%; P < .001) and location (central 95.79%, lobar/segmental 83.81%, subsegmental 58.62%; all P < .001). Conclusion In large-scale deployment, AI showed high concordance with radiologists and made meaningful contributions in discordant reviews, while expert oversight confirmed complementary roles and highlighted scenarios of radiologist-AI divergence. ©RSNA, 2026.
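As a quick arithmetic check, the headline rates reported above can be re-derived from the counts given in the abstract:

```python
# Re-deriving the headline rates from counts reported in the
# abstract (simple arithmetic; no new data introduced).
total_ctpa = 32_501
pe_positive = 3_226
radiologist_unique = 483  # PEs diagnosed only by the interpreting radiologist
ai_unique = 26            # PEs diagnosed only by the AI tool

print(round(pe_positive / total_ctpa * 100, 2))          # → 9.93 (PE positivity, %)
print(round(radiologist_unique / pe_positive * 100, 2))  # → 14.97 (%)
print(round(ai_unique / pe_positive * 100, 2))           # → 0.81 (%)
print(round(radiologist_unique / ai_unique))             # → 19 (≈19-fold)
```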
We present BIOMERO 2.0, a major evolution of the BIOMERO framework that transforms OMERO into a FAIR-compliant (findable, accessible, interoperable, and reusable), provenance-aware bioimaging platform. BIOMERO 2.0 integrates data import, preprocessing, analysis, and workflow monitoring through an OMERO.web plugin and containerised components. The importer subsystem facilitates in-place import using containerised preprocessing and metadata enrichment via forms, while the analyser subsystem coordinates and tracks containerised analyses on high-performance computing systems via the BIOMERO Python library. All imports and analyses are recorded with parameters, versions, and results, ensuring real-time provenance accessible through integrated dashboards. This dual approach places OMERO at the heart of the bioimaging analysis process: the importer ensures provenance from image acquisition through preprocessing and import into OMERO, while the analyser records it for downstream processing. These integrated layers enhance OMERO's FAIRification, supporting traceable, reusable workflows for image analysis that bridge the gap between data import, analysis, and sharing.
Multiple sclerosis (MS) is a chronic, inflammatory disorder of the central nervous system (CNS) characterized by overlapping relapsing and progressive pathologies. Although they exist along a continuum, pathology in relapsing MS (RMS) is primarily driven by the peripheral adaptive immune system, while progressive MS (PMS) pathology is underpinned by innate immune activity and other pathologic processes compartmentalized within the CNS. While currently approved therapies are highly effective in targeting aspects underpinning relapsing pathology, they have limited efficacy in PMS due to either inability to cross the blood brain barrier (BBB) or inability to modulate pathologies driving PMS. Bruton's tyrosine kinase (BTK) inhibitors are an emerging class of oral agents that represent a novel approach to MS treatment. These agents target BTK, an enzyme which plays a pivotal role in both adaptive and innate immune signaling. As small molecules, some BTK inhibitors can cross the BBB at biologically relevant concentrations. Therefore, BTK inhibitors may potentially modulate both relapsing and progressive MS pathology. Data from Phase 2 and Phase 3 trials have been promising in this regard. The unique pharmacological profile of each BTK inhibitor may underpin observed differences in efficacy and safety outcomes to date. In particular, tolebrutinib has demonstrated efficacy against disability progression in non-relapsing secondary progressive MS, while fenebrutinib has recently shown promise in both relapsing and primary progressive MS. However, adverse events include elevated liver enzymes, which can reach life-threatening levels. This review highlights the role of BTK in immune signaling and the rationale for BTK inhibition in MS, discusses the evolution of BTK inhibitors for MS and other indications, summarizes findings from Phase 2 and Phase 3 trials in RMS and PMS, and explores practical questions around the potential use of BTK inhibitors in real-world practice.