Routine data from healthcare are gaining importance for evidence generation. In Germany, new structures have been established in recent years to support their use. As part of the Medical Informatics Initiative (MII), data integration centres (DIZ) have been set up at all university hospitals, where patient data are made available in a pseudonymized, standardized, and operationalized form. Since routine data, unlike primary data, are collected for healthcare purposes, aspects of the original data collection must be considered when formulating the research question, planning and analysing the study, and interpreting the results. The EVAluation research based on data from routine clinical care 4 the MII (EVA4MII) project supports researchers in analysing nationwide clinical routine data, for example through training programs and a central advisory service. This support is provided by an interdisciplinary team with methodological-statistical, data-technical, and clinical-epidemiological expertise in close coordination with data-providing institutions. The service covers the entire research process, from study planning and formal requirements to implementation, analysis, evaluation, and publication, and is available to projects within the MII and beyond. The aim of this article is to highlight the importance of methodological support in the analysis of clinical routine data and to identify key stages in the research process where such support is particularly relevant. Finally, an outlook on future advisory needs is provided, including the potential role of artificial intelligence as a supportive tool.
The purpose of this study is to identify hub genes associated with both osteoporosis (OP) and chronic kidney disease (CKD) through bioinformatics analysis, and to explore the potential pathogenetic mechanisms in OP and CKD through these hub genes. We downloaded the GSE15072 and GSE56815 datasets from the GEO database as training sets, and GSE7158 and GSE70528 for validation. Differentially expressed genes were selected using the "limma" package, while gene co-expression networks were constructed with "WGCNA". Functional enrichment analyses were performed using "clusterProfiler". Hub genes were identified through machine learning techniques, and their diagnostic efficacy was evaluated by ROC curves plotted with the "pROC" package. Immune infiltration was analyzed using CIBERSORT, and pan-cancer relationships were explored to identify associations between hub genes and various tumors. Potential therapeutic agents were investigated using the Drug Signatures Database (DSigDB). Experimental validation was conducted via RT-qPCR using cisplatin-induced CKD and ovariectomy (OVX)-induced osteoporosis models in C57BL/6J mice. After anesthesia and sacrifice, peripheral blood mononuclear cells (PBMCs) were collected to analyze the expression changes of hub genes. This study identified four hub genes (FAM184A, NFKBIA, RP2, HIRA). All hub genes exhibited excellent diagnostic performance, with FAM184A showing the best performance. Immune infiltration analysis revealed the relationships between hub gene expression levels and various immune cells. Pan-cancer analysis revealed the expression levels of FAM184A in different tumors and showed that high expression of FAM184A in SARC, SKCM, and PAAD is associated with improved prognosis and reduced mortality rates. Finally, RT-qPCR analysis revealed the mRNA expression levels of the hub genes in both OP and CKD models. The mRNA expression of all hub genes was downregulated in osteoporosis model mice compared with normal mice, while in CKD mice, the mRNA expression of all hub genes except FAM184A was upregulated. This study identified four hub genes with significant diagnostic efficacy, suggesting they may act as crucial links between osteoporosis and chronic kidney disease. These genes offer promising targets for the treatment of both diseases. The findings of this study provide valuable insights for future research, which could further elucidate the complex pathogenetic mechanisms connecting chronic kidney disease and osteoporosis.
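To make the diagnostic-efficacy step concrete, the sketch below evaluates a single hub gene as a binary classifier via its ROC curve, analogous to the "pROC" workflow named in the abstract; the expression values, group sizes, and effect size are hypothetical stand-ins, not the study's data.

```python
# Minimal single-gene ROC evaluation on hypothetical case/control data,
# analogous to the R "pROC" step described in the abstract.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Hypothetical expression of one hub gene: 50 controls, 50 cases with a shifted mean.
expr = np.concatenate([rng.normal(0.0, 1.0, 50), rng.normal(1.0, 1.0, 50)])
labels = np.concatenate([np.zeros(50), np.ones(50)])

auc = roc_auc_score(labels, expr)
print(f"AUC = {auc:.3f}")  # area under the ROC curve for the single-gene classifier
```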
High-throughput processing of patient biosamples by next-generation sequencing (NGS) and the comparison of molecular data with patient-level and sample-level clinical data require precise tracking and matching of sample identifiers throughout the biospecimen chain of custody and are critical to enabling robust interpretation of biomarker trial results. In addition to tracing individual steps in the sample and data processing workflows, bioinformatics solutions can be used to confirm that samples originate from the same patient. Here, the use of a bioinformatics workflow to identify matched samples originating from the same individual is showcased. The analysis workflow is suitable for comparing any two or more NGS datasets to verify patient sample origin. A scoring algorithm based on genome-wide comparisons of samples enables the user to determine whether two samples stem from the same individual. Specifically, single-nucleotide polymorphisms (SNPs) within selected linkage disequilibrium blocks are used to identify and compare samples. Threshold combinations for permissive and stringent selection of matched and mismatched samples were identified. The utility of this protocol was demonstrated through its application to the quality control and validation of clinical tumor tissue and blood samples, encompassing multiple omics modalities from over 2,000 patients.
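As a rough illustration of how a genome-wide concordance score separates matched from mismatched samples, the sketch below compares genotype calls at a fixed SNP panel; the 0/1/2 genotype encoding, panel size, and the stringent/permissive thresholds are illustrative assumptions, not the published values.

```python
# Illustrative genotype-concordance matching on hypothetical SNP calls.
import numpy as np

def match_score(a: np.ndarray, b: np.ndarray) -> float:
    """Fraction of panel SNPs with identical genotype calls (-1 marks no-calls)."""
    ok = (a >= 0) & (b >= 0)
    return float(np.mean(a[ok] == b[ok]))

STRINGENT_MATCH, PERMISSIVE_MISMATCH = 0.95, 0.70  # hypothetical cutoffs

def classify(a: np.ndarray, b: np.ndarray) -> str:
    s = match_score(a, b)
    if s >= STRINGENT_MATCH:
        return "same individual"
    if s <= PERMISSIVE_MISMATCH:
        return "different individuals"
    return "ambiguous - manual review"

rng = np.random.default_rng(1)
tumor = rng.integers(0, 3, 5000)        # genotype calls for a tumor sample
blood = tumor.copy()
blood[rng.integers(0, 5000, 50)] = 0    # matched blood sample with minor call noise
unrelated = rng.integers(0, 3, 5000)    # sample from another individual
print(classify(tumor, blood), "|", classify(tumor, unrelated))
```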
The Greater China region represents the fastest-growing market for online medical records (OMR) implementation. However, the implications of these technologies for patient-centered care remain unclear. This study examined the relationships among different types of OMR usage (online patient-provider interactions, health information management, self-health management, and decision-making), patient-centered communication (PCC), and patient empowerment using a national sample (N = 6,271) of patients from Mainland China, Taiwan, Hong Kong, and Macao. Results revealed that OMR usage for online patient-provider interactions, self-health management, and decision-making was associated with patient empowerment among younger (18-34) and middle-aged (35-59) cohorts, with PCC mediating these relationships. eHealth literacy (eHL) moderated the relationship between self-health management and PCC in younger and middle-aged cohorts, while also moderating the association between decision-making and PCC across all age cohorts. Framed within the ecological model of health communication and life span perspectives, these findings suggest that future OMR studies would benefit from more nuanced and dialectical approaches. Practical implications are also discussed.
The IMforte trial supported lurbinectedin as a recommended first-line maintenance therapy for patients with extensive-stage small-cell lung cancer (ES-SCLC), but its cost-effectiveness remains uncertain for patients and clinical decision-makers. This study aims to evaluate the cost-effectiveness of adding lurbinectedin to atezolizumab as maintenance therapy for ES-SCLC and to assess the impact of drug wastage and mitigation strategies from the perspective of the US healthcare system. A partitioned survival model was developed to compare the cost-effectiveness of lurbinectedin plus atezolizumab versus atezolizumab alone. Key outcomes included total costs, effectiveness measured in quality-adjusted life years (QALYs), and the incremental cost-effectiveness ratio (ICER). Drug wastage was incorporated in the base-case analysis, and scenario analyses evaluated mitigation strategies through vial sharing and dose rounding. Model uncertainty was assessed using one-way sensitivity analysis and probabilistic sensitivity analysis. Over a 10-year time horizon, the lurbinectedin plus atezolizumab group accrued 0.3 more QALYs than the atezolizumab group. Under conditions of drug wastage, the addition of lurbinectedin resulted in an incremental cost of $230,000. When drug wastage was either excluded or mitigated through strategies such as vial sharing and dose rounding, the incremental cost was reduced to $180,000. The corresponding ICERs were $770,000/QALY and $610,000/QALY, respectively. To achieve a greater than 50% probability of cost-effectiveness, an 85% price reduction in both high-cost drugs would be required. Sensitivity analyses indicated that the results were robust to variations in model parameters. At current prices, lurbinectedin plus atezolizumab is not cost-effective as ES-SCLC maintenance therapy. Mitigating drug wastage and implementing substantial price reductions are essential to improve its economic value.
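The headline ICERs follow directly from the reported increments, since ICER = ΔCost/ΔQALY; the quick check below uses the abstract's rounded figures, so the small gaps to the published $770,000 and $610,000 per QALY reflect rounding of the 0.3-QALY increment.

```python
# Back-of-envelope ICER check from the abstract's rounded increments.
delta_qaly = 0.3  # incremental QALYs, lurbinectedin + atezolizumab vs atezolizumab alone

for label, delta_cost in [("with drug wastage", 230_000), ("wastage mitigated", 180_000)]:
    icer = delta_cost / delta_qaly
    print(f"{label}: ICER ~ ${icer:,.0f}/QALY")
# -> ~$766,667/QALY and ~$600,000/QALY, consistent with the reported
#    ~$770,000 and ~$610,000 once unrounded model outputs are used.
```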
We evaluated the association between bevacizumab use and interstitial lung disease (ILD), 180-day mortality, and venous thromboembolism (VTE) risks in patients with non-squamous non-small-cell lung cancer (NSCLC) receiving first-line platinum-based chemotherapy. Using the Japanese Diagnosis Procedure Combination database (2011-2023), we identified patients with stage III-IV non-squamous NSCLC who initiated platinum-based chemotherapy with pemetrexed, with or without bevacizumab. A target trial emulation framework was applied. The primary outcome was ILD requiring corticosteroid treatment within 180 days of chemotherapy initiation. The secondary outcomes included 180-day mortality, mortality within 30 days after ILD onset, and VTE. Propensity score overlap weighting was used to balance the baseline covariates. Fine-Gray models and Cox proportional hazards models were used to estimate subdistribution hazard ratios (SHRs) for ILD and VTE and hazard ratios (HRs) for mortality. Overall, 47,433 patients were analyzed (bevacizumab group: n = 12,101; non-bevacizumab group: n = 35,332). Bevacizumab use was associated with lower risks of ILD [SHR, 0.75; 95% confidence interval (CI), 0.67-0.84], 180-day mortality (HR, 0.61; 95% CI, 0.57-0.66), and mortality within 30 days after ILD (HR, 0.71; 95% CI, 0.57-0.88). The overall VTE risk was similar between the two treatment groups. The main results were consistent across subgroups stratified by the baseline platinum agent, immune checkpoint inhibitor (ICI) use, age group, and ILD history. In patients with advanced non-squamous NSCLC receiving first-line platinum-based chemotherapy, bevacizumab use was associated with lower risks of ILD and short-term mortality, without increased VTE risk.
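For readers unfamiliar with overlap weighting, the sketch below shows the core idea on synthetic data: treated patients receive weight 1 − e(x) and untreated patients weight e(x), where e(x) is the estimated propensity score; the covariates, sample size, and propensity model here are hypothetical, not the study's specification.

```python
# Conceptual propensity score overlap weighting on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 3))                          # baseline covariates
treated = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))   # treatment depends on X[:, 0]

e = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]  # estimated propensity
weights = np.where(treated == 1, 1 - e, e)              # overlap weights

# After weighting, covariate means should be closely aligned between groups.
for g in (1, 0):
    m = np.average(X[treated == g], axis=0, weights=weights[treated == g])
    print(f"group {g}: weighted covariate means = {np.round(m, 3)}")
```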
As global competition intensifies, work-related stress and burnout have become increasingly prevalent, yet their neural correlates remain poorly understood. Drawing on the Conservation of Resources (COR) theory as a conceptual framework, this study examined whether white-matter integrity in executive-function-related brain tracts statistically mediates the associations between workload, resource use, and burnout. Using cross-sectional diffusion MRI data from 188 healthy adults in Japan, we focused on fractional anisotropy (FA) in the superior longitudinal fasciculus (SLF), internal capsule (IC), and external capsule (EC), which have been consistently linked to executive functioning. Correlation and mediation analyses revealed that higher workload was negatively associated with FA in the SLF, IC, and EC, whereas FA in these tracts was negatively associated with cynicism, a core dimension of burnout. Resource use was positively associated with FA in the SLF. Mediation analyses further indicated that FA in the SLF, IC, and EC partially mediated the association between workload and cynicism, and that FA in the SLF fully mediated the association between resource use and cynicism. No comparable mediation effects were observed for exhaustion or professional efficacy. These findings should be interpreted as associative rather than causal. FA is not conceptualized as a resource itself, but as a neural correlate of executive-function capacity that covaries with psychological resource dynamics. By integrating a well-established stress-behavior framework with structural neuroimaging, this study provides an initial interdisciplinary perspective on how work-related demands, resource utilization, and burnout-related attitudes align with individual differences in white-matter integrity.
Perfluoroalkyl substances (PFAS) are pollutants with relevant accumulation in humans, and the enterohepatic circulation of PFAS secreted in bile sustains their persistence. A significant increase in fecal excretion has been experimentally assessed with the use of oral adsorbents with negligible gut absorption. Here, we evaluated in vitro the use of activated charcoal (AC) for human consumption as sorption material for a panel of PFAS, namely perfluorobutanoic acid (PFBA), perfluorobutanesulfonic acid (PFBS), perfluorohexanoic acid (PFHxA), perfluorohexanesulfonic acid (PFHxS), perfluorooctanoic acid (PFOA), and perfluorooctanesulfonic acid (PFOS), in an experimental simulated bile juice (SBJ). The aim was to obtain preliminary data for possible clinical applications to reduce PFAS blood levels in humans. PFAS concentrations in experimental samples were quantified by liquid chromatography-mass spectrometry. In kinetic tests, equimolar solutions of single PFAS in SBJ were incubated with AC at 37 °C for up to 120 min, and the time-dependent reduction of PFAS concentration was monitored. In thermodynamic tests, PFAS solutions in SBJ were incubated at increasing concentrations with AC for 24 h at 37 °C, and the concentrations at equilibrium were evaluated. Results were then fitted with established models to characterize the PFAS interaction with AC. All PFAS showed more than 80% sorption on activated charcoal from simulated bile juice within 120 min, suggesting rapid and nearly complete removal. Modeling analysis indicated that the pseudo-first-order kinetic model best described short-chain PFAS, while PFOS and PFOA were better described by the Elovich model. Thermodynamic analysis showed a general fit with the Freundlich model, suggestive of heterogeneous binding. PFOS binding was concentration-dependent and was better described by the Sips model. These data point to a potential noninvasive intervention strategy to increase fecal PFAS excretion through the dietary use of AC, in order to mitigate health issues associated with PFAS exposure.
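As an illustration of the model-fitting step, the sketch below fits the pseudo-first-order kinetic model q_t = q_e(1 − exp(−k₁t)) to synthetic uptake data; the time points, noise level, and starting values are hypothetical, and the Elovich, Freundlich, and Sips models would be fitted to the kinetic and equilibrium data analogously.

```python
# Illustrative kinetic model fit with scipy on synthetic uptake data.
import numpy as np
from scipy.optimize import curve_fit

def pseudo_first_order(t, qe, k1):
    """q_t = q_e * (1 - exp(-k1 * t))"""
    return qe * (1 - np.exp(-k1 * t))

def freundlich(ce, kf, n_inv):
    """Isotherm q_e = K_F * C_e^(1/n); fitted to equilibrium data analogously."""
    return kf * ce ** n_inv

t = np.array([5, 10, 20, 40, 60, 90, 120], dtype=float)  # incubation times (min)
q_obs = 0.85 * (1 - np.exp(-0.05 * t))                    # synthetic "observed" uptake
q_obs += np.random.default_rng(3).normal(0, 0.01, t.size)

(qe, k1), _ = curve_fit(pseudo_first_order, t, q_obs, p0=[1.0, 0.01])
print(f"fitted q_e = {qe:.3f}, k1 = {k1:.4f} min^-1")
```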
The translation of big data analytics and artificial intelligence (AI) into clinical decision support systems (CDSSs) has advanced from proof of concept to real-world clinical practice. AI-informed CDSSs show measurable improvements in diagnostic accuracy, risk stratification, resource use, and patient outcomes compared to traditional models, offering the potential to assist clinicians in managing symptom complexity and uncertainty in health care delivery. Despite this potential, access to large amounts of high-quality and granular data remains one of the most significant bottlenecks to AI-enabled CDSSs. We argue that as health care systems increasingly adopt data-driven decision support, addressing the challenges of data accessibility and protection is essential to realizing the full potential of AI in clinical medicine. We use selected case examples of AI-informed CDSSs in oncology, organ transplantation, diabetic retinopathy, epilepsy, spinal cord injury, rare disease diagnosis, and emergency medicine to illustrate opportunities and challenges related to AI's potential to improve patient outcomes. We discuss public and semipublic, medical institutional and commercial, and government and national data sources that are currently available for the development of CDSSs and highlight the practical and ethical constraints associated with these data. We consider alternative data resources and ways in which health care systems can strengthen data ecosystems to increase AI-driven CDSS efficacy and implementation to improve patient outcomes.
Diabetic retinopathy (DR) diagnosis from digital fundus images is a long-standing topic of research in medical image processing. The determination of optic disk boundaries in two-dimensional retinal images is difficult due to blurred edges, which leaves this field in need of improvement, and no single technique solves all of these problems. An efficient algorithm for identifying DR-related retinal changes and structures is still needed. If DR is recognized and treated in a timely manner, visual deterioration can be managed or avoided. Current DR screening relies on telemedicine analysis of color fundus images or clinical evaluation by physicians; however, owing to intrinsic human subjectivity, both approaches are time-consuming, labor-intensive, and prone to inaccuracy. Because of their high specificity and sensitivity, automated methods capable of analyzing color fundus images have become important for the broad deployment of DR screening. To detect DR-related characteristics and to classify the various stages of diabetes severity, a hybrid quantum convolutional neural network (HQCNN) is presented. The Kaggle fundus image database is used to train and test the network. Finally, the efficiency of the presented work is evaluated using metrics including precision, specificity, accuracy, sensitivity, and F1 score. The proposed work obtains an accuracy of 98.89%, sensitivity of 99.37%, specificity of 99.57%, precision of 98.89%, and F1 score of 97.58%.
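For reference, all of the reported metrics follow from a binary confusion matrix; the short sketch below computes them from hypothetical counts (the paper's confusion matrix is not given in the abstract).

```python
# Standard classification metrics from a binary confusion matrix.
def metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)                  # recall / true positive rate
    specificity = tn / (tn + fp)                  # true negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"precision": precision, "sensitivity": sensitivity,
            "specificity": specificity, "accuracy": accuracy, "f1": f1}

print(metrics(tp=940, fp=10, tn=990, fn=60))      # hypothetical counts
```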
Background: Detecting breast cancer, especially identifying microcalcifications in mammograms, is challenging due to the need for high sensitivity and efficient processing. This study presents a novel algorithm, Sigmoidal Slope Analysis and Aspect Ratio Evaluation (SAAR), designed for real-time application on edge devices. By employing a multi-step adaptive process with sigmoidal functions, SAAR enhances intensity contrast and prioritizes regions of interest, enabling fast, accurate detection of microcalcifications.
Objective: This study aims to develop and validate an efficient, edge-device-compatible method for detecting microcalcifications in mammographic images. The goal is to provide a tool that enhances diagnostic efficiency through real-time processing, thereby supporting early breast cancer detection in both clinical and remote settings.
Methods: The SAAR algorithm utilizes an adaptive slope detection technique based on the sigmoid function, dynamically adjusting to local intensity features. This approach allows for greater adaptability to image variations. The algorithm prioritizes regions of interest through a multi-step adaptive process, enhancing intensity differences to focus on potential microcalcifications.
Results: Testing on established mammography databases, such as MIAS, demonstrates the algorithm's effectiveness, with improved sensitivity compared to conventional methods. Designed for edge devices, the algorithm leverages their real-time processing capabilities, offering lower latency and enhanced privacy.
Conclusions: The integration of SAAR with edge devices represents a promising advancement in breast cancer detection. The adaptive nature of SAAR, coupled with the real-time processing capabilities of edge devices, provides a robust solution for enhancing microcalcification detection efficiency and sensitivity in mammography.
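The published SAAR algorithm is not reproduced here, but the generic building block the Methods name, a sigmoidal intensity mapping that adapts to local image statistics, can be sketched as follows; the window size and gain are illustrative assumptions rather than SAAR's actual parameters.

```python
# Generic local sigmoid contrast mapping (illustrative, not the SAAR algorithm).
import numpy as np
from scipy.ndimage import uniform_filter

def sigmoid_enhance(img: np.ndarray, window: int = 31, gain: float = 10.0) -> np.ndarray:
    """Map intensities through a sigmoid centered on the local mean."""
    img = img.astype(float) / img.max()             # normalize to [0, 1]
    local_mean = uniform_filter(img, size=window)   # adaptive per-pixel midpoint
    return 1.0 / (1.0 + np.exp(-gain * (img - local_mean)))

# Bright compact spots (candidate microcalcifications) are pushed toward 1.
demo = np.zeros((64, 64))
demo[30:33, 30:33] = 1.0
print(round(float(sigmoid_enhance(demo).max()), 4))
```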
Lung cancer (LC) is the leading cause of cancer-related mortality worldwide, primarily due to diagnosis at advanced stages. Although low-dose computed tomography (LDCT) screening reduces lung cancer mortality in high-risk populations, current screening programmes are largely restricted to individuals defined by age and smoking history. This approach excludes never-smokers and individuals with non-smoking-related risk factors, limiting the equity, efficiency and scalability of lung cancer screening. The LUng Cancer risk factors and their Impact Assessment (LUCIA) project aims to overcome these limitations by developing personalised lung cancer risk prediction models and evaluating novel non-invasive technologies for early detection within a risk-adapted screening strategy. LUCIA is a multicentre, observational, longitudinal cohort study that will recruit approximately 4000 participants across four European regions: Andalusia and the Basque Country (Spain), Liège (Belgium) and Riga (Latvia). The study population includes smokers, never-smokers and reduced smokers with low-to-moderate lung cancer risk. All participants will initially enter phase 1 (wide population screening) and may transition to phase 2 (precision screening) or phase 3 (diagnosis) based on LDCT findings, results from non-invasive screening devices and artificial intelligence-based risk prediction models. Participants will be followed up for 24 months, with assessments at baseline and at 6, 12 and 24 months. Data collection includes sociodemographic characteristics, medical history, environmental and occupational exposures, lifestyle factors, spirometry, multi-omics profiles and outputs from novel non-invasive devices, including a breath analyser, spectrometry-on-card and a skin-applied volatile organic compound sensing patch. The study will develop and validate integrated lung cancer risk prediction models and evaluate the diagnostic performance of these technologies to support population stratification and personalised screening. The study will be conducted in accordance with the Declaration of Helsinki, Good Clinical Practice guidelines and applicable national and European regulations. Ethical approval has been obtained from the relevant ethics committees in all participating countries. Written informed consent will be obtained from all participants. Study findings will be disseminated through peer-reviewed open-access publications, scientific conferences and communication with public health stakeholders. ClinicalTrials.gov, NCT06473870.
Digital health interventions (DHIs) often struggle with participant engagement. A stepped care approach, starting with low-resource intensity strategies and escalating as needed, can optimize resource use. Yet its application and cost implications remain underexplored. This study uses data from the iSIPsmarter experimental arm of a 2-group randomized controlled trial targeting sugar-sweetened beverage consumption in rural Appalachia. This study examines the demand and implementation costs associated with iSIPsmarter's stepped care engagement approach and simulates how variations in monitoring efficiency, demand, and stepped care intensity influence resource use and implementation costs to inform future implementation. iSIPsmarter's stepped care process combined automated and human-supported components to enhance engagement across 6 web-based modules ("Cores") over 9 weeks. Participants who did not complete a Core received an automated email, followed by stepped care if still incomplete: a text (step 1, low-resource intensity) after 7 days and up to 3 telephone attempts (step 2, high-resource intensity) after another 7 days. Staff time was tracked to estimate implementation costs: monitoring averaged 3 minutes (US $1.68), texts 2.83 minutes (US $1.58), and calls 5.1 minutes (US $2.85). Simulations explored 18 scenarios varying monitoring efficiency (20%, 50%, and 80% of trial-observed monitoring time and costs), stepped care demand (20%, 50%, and 80% of participants needing stepped care), and intervention intensity (low vs high). Among 126 participants, the mean number of stepped care contacts was 1.2 (SD 1.3): 52 (41%) required none, 42 (33%) required 1 Core contact, 26 (21%) required 2, and 7 (6%) required 3. On average, participants completed 5.2 (SD 1.6) of 6 Cores. The mean stepped care implementation time per participant was 26.46 (SD 11.02) minutes, with a corresponding mean cost of US $14.80 (SD 6.16). Monitoring accounted for 78% of total cost (mean cost US $11.61, SD 2.37), with initial monitoring contributing 58% of total cost (mean cost US $8.51, SD 2.35). Simulations showed variation in time and cost based on monitoring efficiency. In low-demand, low-intensity scenarios, efficient monitoring required a mean of 7.47 (95% CI 7.36-7.57) minutes and a mean cost of US $4.18 (95% CI 4.12-4.24), while inefficient monitoring required a mean of 19.58 (95% CI 19.21-19.95) minutes and a mean cost of US $10.95 (95% CI 10.74-11.16). In high-demand, high-intensity scenarios, efficient monitoring required a mean of 101.80 (95% CI 101.65-101.96) minutes and a mean cost of US $56.92 (95% CI 56.84-57.01), while inefficient monitoring increased time to a mean of 146.32 (95% CI 145.92-146.71) minutes and a mean cost of US $81.82 (95% CI 81.60-82.04). A stepped care approach can efficiently sustain engagement in DHIs by targeting support to higher-need participants. These findings offer actionable guidance for designing scalable, cost-effective interventions for real-world settings, as resource-efficient engagement strategies remain a persistent challenge for DHIs.
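The per-participant cost arithmetic can be reconstructed approximately from the reported unit times and costs; in the sketch below, the mix of monitoring events, texts, and calls is a hypothetical input, and the structure is simplified relative to the trial's actual monitoring workflow.

```python
# Rough per-participant stepped care cost from the reported unit times/costs.
UNIT_COST = {"monitor": 1.68, "text": 1.58, "call": 2.85}  # USD per event
UNIT_TIME = {"monitor": 3.00, "text": 2.83, "call": 5.10}  # minutes per event

def participant_burden(n_monitor: int, n_text: int, n_call: int) -> tuple[float, float]:
    events = {"monitor": n_monitor, "text": n_text, "call": n_call}
    cost = sum(UNIT_COST[k] * n for k, n in events.items())
    minutes = sum(UNIT_TIME[k] * n for k, n in events.items())
    return cost, minutes

# e.g., a participant monitored across all 6 Cores who needed 1 text and 1 call:
cost, minutes = participant_burden(n_monitor=6, n_text=1, n_call=1)
print(f"US ${cost:.2f} over {minutes:.1f} minutes")  # $14.51 over 25.9 minutes,
# in the neighborhood of the reported means (US $14.80, 26.46 minutes).
```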
The Ixworth chicken is a British dual-purpose breed maintained mostly by small-scale farmers. Due to legislation banning male chick culling in European countries such as Germany, renewed interest has arisen in rearing dual-purpose chickens that provide both meat and eggs from the same genetic line. This dataset was generated within the scope of projects aiming to evaluate the viability of dual-purpose breeds for sustainable and welfare-oriented poultry production. One of the objectives was to characterize the genetic potential of the Ixworth chicken as a model for breeding programs that combine conservation and practical use in ecological farming systems. Liver samples from 49 male Ixworth chickens were collected after scheduled slaughter at the Campus Frankenforst of the Faculty of Agricultural, Nutritional and Engineering Sciences of the University of Bonn, Germany. Genomic DNA was extracted and subjected to whole-genome resequencing using the Illumina NovaSeq 6000 platform. The dataset provides high-resolution genomic information on a rare breed with a pure dual-purpose background. This resource represents the first public sequencing dataset of the Ixworth chicken and thus offers a valuable foundation for future studies on genetic diversity, conservation genomics, and breeding strategies for sustainable poultry production.
Emergency departments present a distinctive challenge for implementing infection prevention and control (IPC), due to their complex and dynamic environment, diverse patient population, and unknown carrier status. The objective was to assess compliance with a range of IPC practices among healthcare workers (HCWs) in the emergency department. An observational cross-sectional study was conducted at a large emergency department at a tertiary care hospital between 2018 and 2023. Data were gathered during observation sessions using a standardized IPC observation form. Observers were either experienced IPC professionals or trained medical students. Of 123,947 HCW-specific practices observed, 85,542 (69.0%) were compliant, and of 41,650 unit-specific practices observed, 38,355 (92.1%) were compliant. Compliance was highest for acute respiratory infection procedures (97.3%), followed by isolation precautions (97.0%), housekeeping (96.8%), disposal of sharps (96.8%), waste management (94.5%), donning and doffing of personal protective equipment (PPE, 72.9%), use of PPE (72.3%), hand hygiene (67.2%), patient sitters (64.1%), and disinfection of medical equipment (61.2%). Nurses across all units had much better compliance than other professions. Compliance differed by more than 10% across units and was higher in predominantly pediatric units than in adult units. Compliance was highest during the COVID-19 pandemic years. There is considerable variability in the implementation of IPC at the emergency department, by practice, profession, unit, and pandemic period. The findings underscore the importance of strategies to improve disinfection of medical equipment, hand hygiene, and adherence of patient sitters.
Win ratio and related net benefit summaries analyze prioritized composite endpoints via hierarchical pairwise comparisons. It is tempting to expect monotonicity: If treatment improves each component endpoint, then the hierarchical summary should favor treatment. We show this intuition can fail even in the simplest setting with two binary endpoints. Using an exact 2-by-2 frequency table, treatment has higher marginal success probabilities on both endpoints, yet the win ratio is below one, and the net treatment benefit is negative. The reversal occurs because the secondary endpoint is consulted only within primary-tie strata, inducing a reweighting that can emphasize strata where treatment performs worse. We decompose the net treatment benefit to isolate the tie-stratum contributions driving the reversal and propose minimal reporting diagnostics to improve interpretability.
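A numerical instance of this reversal is easy to verify. The sketch below uses illustrative joint distributions over (primary, secondary) outcomes, not the paper's exact table: treatment beats control on both marginal success probabilities (0.90 vs 0.89 on the primary, 0.10 vs 0.09 on the secondary), yet the win ratio is about 0.65 and the net treatment benefit is −0.06, because the large both-success tie stratum is decided by a secondary endpoint on which treatment never succeeds.

```python
# Exact pairwise win/loss probabilities for a hierarchical (win ratio) comparison
# of two binary endpoints, using illustrative joint distributions.
from itertools import product

p_trt = {(1, 0): 0.90, (0, 1): 0.10}                 # marginals: primary 0.90, secondary 0.10
p_ctl = {(1, 1): 0.09, (1, 0): 0.80, (0, 0): 0.11}   # marginals: primary 0.89, secondary 0.09

def outcome(t, c):
    """Hierarchical rule: compare primary first; secondary breaks primary ties."""
    for k in range(2):
        if t[k] != c[k]:
            return 1 if t[k] > c[k] else -1
    return 0  # tie on both endpoints

pairs = list(product(p_trt.items(), p_ctl.items()))
wins = sum(pt * pc for (t, pt), (c, pc) in pairs if outcome(t, c) == 1)
losses = sum(pt * pc for (t, pt), (c, pc) in pairs if outcome(t, c) == -1)

print(f"win ratio = {wins / losses:.3f}")               # ~0.647, below 1
print(f"net treatment benefit = {wins - losses:+.3f}")  # -0.060, negative
```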
Lumbar spine MRI is predominantly performed using 2D FSE sequences. 3D FSE sequences offer potential advantages over 2D, especially with recent advances in deep-learning reconstruction (DLR) and the use of conformal high-density coil arrays to improve the SNR. This study aimed to compare image quality and diagnostically relevant differences between 2D and 3D lumbar spine protocols, both using a prototype conformal coil and DLR. Subjects with lumbar curvature referred for routine lumbar spine MRI were prospectively recruited (n=31). A prototype, conformal spine coil array with 70 total posterior receiver elements was used at 3T. DLR was applied to both the 2D and 3D protocols to improve sharpness and reduce noise. For 3D, a prototype DLR ("Sonic DL") technique was applied to provide higher acceleration factors. Three attending musculoskeletal radiologists assessed 2D and 3D T1-weighted (T1), T2-weighted (T2), and STIR sequences for image quality and diagnostic performance. The Gwet agreement coefficient (AC1, AC2) or intraclass correlation (ICC) was used to assess interrater agreement, while the Wilcoxon signed rank test or a sign test with Bonferroni correction assessed differences between 2D and 3D sequences. There was an overall 14% scan time reduction of 3D compared with 2D sequences, and a 38% time reduction for T2 sequences. All 3D sequences exhibited similar image quality to 2D. 3D T2 sequences had less CSF flow artifact than 2D T2 sequences. Interrater agreement was mostly comparable between all 2D and 3D assessments, including excellent agreement for the spinal canal area (ICC = 0.952 [95% CI: 0.907-0.976] for 2D, 0.941 [0.866-0.973] for 3D), good agreement for foraminal stenosis assessments on T2 sequences (AC2 = 0.713-0.795 for 2D, 0.706-0.832 for 3D), and good agreement for fracture detection on T1 sequences (AC1 = 0.869 for 2D, 0.815 for 3D). DLR-enabled 3D lumbar spine MRI offers comparable image quality to 2D with reduced overall scan time.
There is an unmet need for neuroregenerative therapies in multiple sclerosis (MS). Mesenchymal stem cells (MSCs) have shown immunomodulatory and regenerative effects in preclinical models and early clinical studies. Intrathecal administration may enhance therapeutic potential by direct delivery to the CNS. However, randomized, placebo-controlled trials are needed to establish safety and efficacy. The objective of this trial was to assess whether a single intrathecal administration of autologous MSCs could provide evidence of a neuroregenerative effect in progressive MS. In this randomized, double-blind, placebo-controlled phase I/II trial (NCT04749667), patients with progressive MS enrolled at 4 Norwegian tertiary hospitals received a single intrathecal injection of autologous bone marrow-derived MSCs (1 × 10⁶ cells/kg) in a crossover design. The primary end point was the change in latency of combined evoked potentials at 6 months. Secondary end points included safety, brain MRI measures, functional and ophthalmologic assessments, and serum biomarkers at 6 and 12 months. Exploratory analyses comprised proteomic profiling of CSF. Outcomes were analyzed using baseline-adjusted regression models. A total of 18 patients were included (mean age 46.7 years; 55.6% female). No significant between-group difference was observed for the primary end point (β = -0.31, 95% CI -1.84 to 1.22, p = 0.668). At 6 months, patients in the MSC group showed reduced cerebral atrophy on MRI (β = 9.37, 95% CI 0.29 to 18.45, p = 0.044) and lower serum glial fibrillary acidic protein levels (β = -16.3 pg/mL, 95% CI -33.0 to 0.3, p = 0.054), but neither effect was sustained at 12 months. Exploratory CSF proteomics revealed reductions in multiple inflammation-related proteins at 6 months. One serious adverse event was deemed probably related to MSC treatment. Common adverse events included fever (n = 9) and low back pain (n = 10) after MSC administration, and spinal MRI abnormalities with fluid loculations and nerve root clumping (n = 7) at 6 months. One patient developed chronic coccygeal pain attributed to arachnoiditis. A neuroregenerative effect was not detected, although interpretation may be limited by the small sample size. Adverse events suggest an acute localized inflammatory reaction after MSC administration. Our findings suggest that intrathecal administration of MSCs in progressive MS should be approached with caution in future studies. This study provides Class III evidence that, in patients with progressive multiple sclerosis, treatment with a single intrathecal administration of autologous mesenchymal stem cells does not provide a neuroregenerative effect, as assessed by a composite evoked potential score. ClinicalTrials.gov (NCT04749667); registered February 8, 2021; first patient enrolled August 9, 2021.
Opioid overdose remains a leading cause of preventable death in the United States. Existing approaches to identify individuals at elevated risk rely on imprecise rule-based criteria that misclassify patients' risk of this serious health outcome. Machine learning (ML) algorithms can help improve prediction performance and can be combined with electronic health record (EHR) interventions to reduce overdose risk. The Machine Learning Prediction and Reducing Overdoses With EHR Nudges (mPROVEN) clinical trial integrates a validated ML overdose risk model with behavioral economics-informed EHR nudges to test whether the combination improves evidence-based prescribing behaviors associated with lower overdose risk and, ultimately, reduces overdose among elevated-risk patients. mPROVEN is a pragmatic cluster randomized controlled trial conducted in primary care practices within a large multistate integrated health system. Eligible patients are adults (≥18 years) identified by the ML algorithm as having elevated overdose risk and seen at a primary care visit during the study period. Primary care practices serve as the unit of randomization and will be randomized into three arms: (1) usual care; (2) elevated risk flag only, where clinicians see a noninterruptive EHR flag indicating elevated overdose risk; and (3) elevated risk flag + nudges, in which active choice and accountable justification alerts are embedded within the EHR in addition to the elevated risk flag. The trial will enroll a target cohort of 800 patients for the primary analysis. The intervention period is 4 months (or until the study ends, whichever occurs later). The primary outcome is a 3-point composite measure of safer opioid prescribing at 4 months, awarding 1 point each for an active naloxone prescription, an average opioid dosage of 50 morphine milligram equivalents per day or less, and the absence of opioid-benzodiazepine overlap. Secondary outcomes include the composite outcome at 6 months, individual score components, and all-cause and overdose-specific emergency department or inpatient visits. Outcomes will be compared across study arms using an intention-to-treat approach with linear mixed-effects models accounting for clinic-level clustering. Funded by the National Institutes of Health in June 2022, the trial began enrollment on March 10, 2025. Enrollment for the primary analysis cohort (n=798) was completed in May 2025, with additional participants enrolled for secondary analyses through December 2025 (n=1662). Primary cohort analyses began in January 2026, and results are expected by mid-2027. The mPROVEN study is among the first pragmatic randomized controlled trials to integrate ML-based opioid overdose risk prediction with behavioral nudges within a large health system EHR. By combining advances in data science and behavioral economics, the study aims to reduce opioid overdose risk in primary care using a scalable and low-touch intervention to address a high-priority public health issue. ClinicalTrials.gov NCT06806163; https://clinicaltrials.gov/study/NCT06806163. DERR1-10.2196/94007.
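The 3-point composite lends itself to a simple scoring function; the sketch below uses hypothetical field names and is not the trial's actual EHR logic.

```python
# Hypothetical encoding of the 3-point safer-prescribing composite:
# 1 point each for an active naloxone prescription, average dosage of
# <= 50 MME/day, and no opioid-benzodiazepine overlap.
def composite_score(has_naloxone_rx: bool, avg_mme_per_day: float,
                    opioid_benzo_overlap: bool) -> int:
    return (int(has_naloxone_rx)
            + int(avg_mme_per_day <= 50)
            + int(not opioid_benzo_overlap))

print(composite_score(True, 40.0, False))   # 3 -- all criteria met
print(composite_score(False, 90.0, True))   # 0 -- none met
```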
To determine if ocular manifestations in Ehlers-Danlos Syndrome (EDS) are associated with increased risk for cardiovascular (CV) comorbidities. Retrospective cohort study using a US database platform that aggregates de-identified health records. The study group included EDS patients with at least 1 of 8 ocular manifestations, and the control group included those without ocular manifestations. Groups were propensity score matched (PSM) for age at diagnosis, sex, race, and ethnicity. The primary outcome was the relative risk (RR) of a subsequent encounter diagnosis for 1 of 10 CV comorbidities. The secondary outcome evaluated the relationship between specific ocular diagnoses and specific incident CV comorbidities. A tertiary analysis compared healthcare utilisation between groups. Successful PSM was defined as a standardised mean difference <0.1, and statistical significance required 95% confidence intervals (CI) with RR < 0.9 or > 1.1. Analyses were performed on the platform and in RStudio. After PSM, 13,197 predominantly White female EDS patients were included in each group. EDS patients with ocular manifestations were at significantly increased risk for 70% of included CV outcomes and at significantly increased risk for combined CV outcomes (RR 1.24, 95% CI 1.11-1.31). Myopia, idiopathic intracranial hypertension, and dry eye disease conferred the highest RR. Highest-risk outcomes included cerebral, coronary, and intracranial arterial events. Lastly, the study group demonstrated significantly higher rates of hospitalisation (RR 1.44, 95% CI 1.33-1.55), emergency department visits (RR 1.22, 95% CI 1.13-1.32), and surgeries (RR 1.14, 95% CI 1.09-1.20). Ocular manifestations in EDS were associated with a higher incidence of subsequent cardiovascular diagnoses, identifying a potential high-risk subgroup for further investigation.