Background: Conventional electrocardiography (ECG) analysis faces a persistent dichotomy: expert-defined features provide interpretability but are limited in capturing latent high-dimensional patterns, whereas deep learning approaches achieve strong predictive performance but often lack interpretability and require large annotated datasets. A systematic framework that integrates these paradigms remains needed. Methods: We propose ECGomics, a structured and deployable analytical paradigm that deconstructs cardiac electrical signals into four interconnected dimensions: Structural, Intensity, Functional, and Comparative. This taxonomy integrates expert-defined morphological metrics with artificial intelligence-derived latent embeddings to generate multidimensional digital biomarkers. Results: We operationalized this framework into a scalable ecosystem consisting of a web-based platform, a mobile solution (https://github.com/PKUDigitalHealth/ECGomics), and application programming interface (API) invocation. Across multiple representative clinical scenarios, including atrial fibrillation detection, recurrence prediction after cryoablation, screening for severe coronary stenosis in apparently normal ECGs, and maternal cardiac monitoring, ECGomics demonstrated robust predictive performance while maintaining interpretability and relatively low data requirements. These results validate the flexibility and effectiveness of the proposed multidimensional framework. Conclusion: ECGomics establishes an omics-level representation system for ECG analysis, bridging conventional feature engineering and deep learning within a unified taxonomy. By providing a deployable digital biomarker ecosystem, this framework advances scalable precision cardiovascular assessment and data-driven health management.
Current screening for diabetic kidney disease (DKD) relies on the estimated glomerular filtration rate (eGFR) and albuminuria, which often fail to detect early tubular dysfunction and non-albuminuric phenotypes. The integration of macroscopic urine physical characteristics with metabolic signatures may offer a novel approach to precision stratification. We conducted a multicenter, prospective-retrospective cohort study involving 364 participants with type 2 diabetes. We developed the "FluxPro-DKD" fusion model, which integrates "Digital Physicalomics" (computer-vision quantification of urine foam stability and chromaticity) and "Dual-Fluid Metabolomics" (serum-to-urine flux ratios). The model was trained in a discovery cohort (n=282) and tested in an independent external validation cohort (n=82). The primary outcome was the detection of early-stage DKD. We also assessed the model's prognostic utility for major adverse renal events over a simulated 3-year period. Metabolic profiling identified a distinct "serum-to-urine flux mismatch" of protein-bound uremic toxins (e.g., indoxyl sulfate), suggesting tubular secretory failure prior to glomerular damage. Digital physicalomics revealed that urine foam half-life was correlated with albuminuria (r=0.78). In the discovery cohort, the FluxPro-DKD fusion model achieved an area under the receiver operating characteristic curve (AUC) of 0.90 (95% confidence interval [CI], 0.87 to 0.93), significantly outperforming the standard clinical model (AUC, 0.78; P<0.001). The model maintained robust discrimination in the external validation cohort (AUC, 0.85; 95% CI, 0.79 to 0.91). Among patients with normoalbuminuria, those classified as high-risk by the model had a significantly higher projected 3-year event rate than those classified as low-risk (35.3% vs. 2.3%).
The integration of digital urine physical phenotypes and metabolic flux ratios effectively reveals early tubular secretory dysfunction and improves risk stratification for diabetic kidney disease compared with standard clinical metrics.
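The two model inputs described above can be illustrated with a toy calculation: a serum-to-urine flux ratio for a protein-bound toxin, and a Pearson correlation between foam half-life and albuminuria. This is a minimal sketch; the concentrations and measurements below are invented for illustration and are not values from the study.

```python
import math

def flux_ratio(serum_conc, urine_conc):
    """Serum-to-urine flux ratio: a high ratio means the toxin is
    retained in serum rather than secreted into urine, consistent
    with tubular secretory failure."""
    return serum_conc / urine_conc

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical indoxyl sulfate concentrations (umol/L)
ratio = flux_ratio(serum_conc=12.0, urine_conc=3.0)   # -> 4.0

# Hypothetical foam half-life (s) vs. albuminuria (mg/g)
half_life = [40, 55, 70, 85]
albuminuria = [15, 30, 45, 60]
r = pearson_r(half_life, albuminuria)                 # ~1.0 on this toy data
```

In practice these ratios would be computed per metabolite from paired serum and urine panels before being fed to the fusion model alongside the computer-vision foam features.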
The rapid expansion of data-driven technologies, particularly machine learning (ML) and artificial intelligence (AI), has substantially influenced biomedical research and drug discovery. In neuropharmacology, the availability of large-scale genomic, proteomic, chemical, and clinical datasets has stimulated the adoption of AI-based approaches to address persistent challenges in neurological drug development, including high attrition rates and the scarcity of disease-modifying therapies. Unlike prior reviews that broadly discuss AI applications in drug discovery or neurology, this narrative review focuses specifically on neuropharmacology, with an emphasis on translational relevance, disease-oriented examples, and real-world constraints. We critically examine the application of AI and ML across key stages of the neuropharmacological drug discovery pipeline, including target identification, drug-target interaction prediction, lead optimization, toxicity assessment, and early-stage clinical translation. Particular attention is given to concrete case studies in neurodegenerative and neurological disorders, illustrating where AI has meaningfully enhanced discovery efficiency and where its anticipated "revolutionary" impact has not yet been realized. In parallel, we analyze the biological, technical, and regulatory barriers that limit the clinical success of AI-driven strategies, including data bias, limited model interpretability, incomplete understanding of brain biology, and translational bottlenecks. By integrating case-based evidence with a critical analytical perspective, this review delineates both the opportunities and limitations of AI in neuropharmacology. We argue that AI is most effective when deployed as a complementary tool alongside mechanistic neuroscience and clinical expertise, rather than as a standalone solution. 
As AI methodologies continue to mature, their careful, transparent, and ethically governed integration into neuropharmacological research may advance precision medicine and help bridge persistent gaps in the treatment of neurological disorders.
Hematoxylin & Eosin (H&E) stained slides are the gold standard for cancer diagnosis but are subject to labor-intensive review and inter-observer variability. Whole-slide imaging (WSI) and digital pathology are reshaping this landscape, enabling remote diagnosis, quantitative analysis, and integration with clinical and molecular data for precision medicine. The complexity of cancer diagnosis highlights the need for sophisticated analytical tools capable of extracting multidimensional information from tissue sections. Technological and computational advances are driving the integration of artificial intelligence (AI) and digital pathology, including: the transition from classical machine learning to deep learning models that learn hierarchical representations from raw WSIs; convolutional neural networks, transformers, and foundation models for computational pathology; tasks such as biomarker prediction and prognostic modeling; emerging research on multimodal AI systems that integrate histology images with text data to improve clinical relevance; and challenges related to data sharing and privacy, generalizability, and the implementation of these approaches in real-world clinical settings. Digital pathology and AI are transforming cancer diagnosis and evaluation. We expect that AI will be increasingly embedded in routine pathology practice to enhance diagnostic accuracy, improve efficiency, advance biological discovery, and perform tasks out of reach of conventional microscopy, thus advancing precision oncology.
Digital addiction in adolescents is often conceptualized as a latent construct, masking the complex interplay between specific symptoms and psychological resources. This study aims to map the specific symptom-level interactions between digital addiction (Social Media Addiction, Nomophobia) and cognitive resources (Self-Regulation, Self-Efficacy) using a large-scale network analysis approach. The study employed a split-half cross-validation strategy with a total sample of 1,497 adolescents (M age = 16.07). The dataset was randomly partitioned into Discovery (n = 748) and Validation (n = 749) groups. Network structure was estimated using the Gaussian Graphical Model (GGM) with polychoric correlations. Robustness was assessed via the Network Comparison Test (NCT), sensitivity analysis controlling for demographics, and case-dropping bootstrapping. The network analysis revealed a distinct topological separation between addiction symptoms and cognitive resources. Centrality analysis identified "intolerance of environmental restriction" (Nomophobia item d6) as the most influential node in the network (highest EI = 1.81 relative to other nodes). Bridge centrality analysis highlighted "withdrawal upon prohibition" (Social Media Addiction item b5) as the strongest bridge node (highest BS = 0.61) linking addiction symptoms with cognitive resources. Conversely, "self-monitoring" (a8) emerged as the most prominent inversely associated bridge node (BS = 0.47). The network structure was highly consistent across subsamples (NCT: p > 0.05), robust to demographic covariates (r = 0.98), and stable [CS(cor=0.70) = 0.60]. The findings suggest that adolescent digital addiction is characterized centrally by restriction intolerance and withdrawal anxiety rather than mere usage gratification. Interventions targeting self-monitoring skills and building tolerance to disconnection may be strategically positioned to destabilize the psychopathological network.
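The bridge-strength statistic reported above has a compact definition: for a given node, sum the absolute weights of its edges that cross community boundaries. A minimal sketch on a hypothetical four-node mini-network follows; the item labels echo the abstract's naming (b5, d6, a8), but the edge weights are invented for illustration.

```python
def bridge_strength(node, edges, community):
    """Bridge strength: sum of absolute edge weights connecting
    `node` to nodes outside its own community."""
    return sum(
        abs(w)
        for (a, b), w in edges.items()
        if (a == node and community[b] != community[node])
        or (b == node and community[a] != community[node])
    )

# Hypothetical network: two addiction items, two cognitive-resource items.
community = {"b5": "addiction", "d6": "addiction",
             "a8": "resource", "a1": "resource"}
edges = {("b5", "a8"): -0.30,   # cross-community edge (counted)
         ("b5", "a1"): 0.15,    # cross-community edge (counted)
         ("b5", "d6"): 0.40,    # within-community edge (ignored)
         ("a8", "a1"): 0.25}    # within-community edge (ignored)

bs = bridge_strength("b5", edges, community)  # -> 0.45
```

A node with high bridge strength, like b5 in the study, is the main conduit through which activation in one cluster (addiction symptoms) can spread to the other (cognitive resources).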
This study examines the construction of collective mourning and social memory in digital environments following the discovery of university student Ayşe Tokyaz's body in a suitcase, using a netnographic approach. Femicide is framed as a form of structural violence that produces societal trauma. A total of 414 online comments from Haberler.com were reviewed; 226 met the inclusion criteria, and 205 were analyzed after data cleaning. An integrated qualitative method combining thematic analysis, manual sentiment coding, and discourse analysis was employed. Findings highlight strong demands for justice and capital punishment, alongside distrust in the legal system and calls for retribution. Patriarchal norms appear in victim-blaming and sexist discourse, while demands to reveal the perpetrator's identity reflect "digital justice." Collective mourning is expressed through empathy and shared grief, whereas fear signals ongoing trauma. Some users invoke religious references. As a single-case study, findings are context-specific.
Emerging infectious diseases are among the most significant threats to global health, driven by factors such as zoonotic spillovers, climate change, globalization, and antibiotic resistance. While a great deal of attention is focused on viral and bacterial pathogens (e.g., SARS-CoV-2, influenza, multidrug-resistant TB), parasitic diseases impose a burden of global morbidity and mortality that remains largely unrecognized. The recent development of artificial intelligence has introduced powerful computational tools that can integrate large and complex datasets to assist with infectious disease surveillance, diagnosis, outbreak prediction, and drug discovery. Artificial intelligence encompasses machine learning, deep learning, and natural language processing techniques, which allow for automated pattern recognition and predictive modeling from very complex biomedical datasets. This narrative review explores recent advancements in AI applications in four key areas related to infectious disease: disease surveillance and early-warning systems; diagnostics and clinical decision support; outbreak prediction and modeling; and drug/vaccine discovery. Emphasis is placed on applications of AI to parasitic diseases such as malaria, leishmaniasis, and soil-transmitted helminths. In addition, we discuss several challenges to AI implementation in endemic regions, including limited data availability, algorithmic bias, limited infrastructure, and ethical issues regarding data governance. Integrating AI into the One Health framework, linking human, animal, and environmental health, could enhance global preparedness to respond to emerging infectious and parasitic diseases.
Data sharing is essential for modern science, advancing transparency, reproducibility, and discovery. Emerging evidence shows that certain digital health data, particularly high-frequency accelerometry from body-worn sensors, carry re-identification risks even when de-identified. In 2023, the National Institutes of Health published its Data Management and Sharing (DMS) Policy formalizing a commitment to openness by requiring all funded researchers to share scientific data. However, the policy did not anticipate that raw motion signals collected by wearable devices can function as biometric identifiers. The WristPrint study demonstrated that a single day of raw accelerometry data could be used to re-identify individuals with 96% accuracy. Related research in gait detection shows that as few as ten steps may be enough to uniquely identify someone. These findings highlight gaps in current data sharing policies and the need for tailored guidance. We argue that policy updates, enforceable data use agreements, and educational initiatives are essential to align openness with protection. The path forward is not to retreat from data sharing but to share more wisely, safeguarding participant trust while sustaining scientific progress.
The human leukocyte antigen (HLA) system underpins allorecognition and shapes responses to infection, autoimmunity, and treatment. Technological advances from serology to next-generation sequencing now enable full-gene characterization and four-field HLA nomenclature, while artificial intelligence (AI) and machine learning are transforming data generation, interpretation, and clinical use. This review summarizes technical developments in the HLA field from three perspectives. First, we survey AI for antigen processing and T-cell recognition, including HLA–peptide binding, presentation, and T cell receptor (TCR)–epitope models, and outline their effects on applications such as neoantigen discovery, vaccine design, and tolerance induction. Second, given persistent gaps in immunogenicity prediction and coverage of rare alleles, we evaluate HLA imputation from single nucleotide polymorphism (SNP) arrays and low-coverage whole-genome sequencing, highlighting deep learning models that improve accuracy for common and low-frequency alleles and the critical role of diverse reference panels. Third, we assess AI-enabled transplant decision support: survival and graft-versus-host disease forecasting from registry data, donor ranking beyond simple allele matching, and crossmatch compatibility prediction. We integrate emerging biology, including non-classical HLA molecules, allele-specific expression, and HLA loss of heterozygosity, as key modulators of immune activation and evasion with implications for donor selection, infectious diseases, vaccinology, inflammatory disease, and cancer therapy. Accelerating safe clinical translation will require standards for data governance, fairness auditing, validation and calibration, explainability, robustness, monitoring, and human oversight.
By bridging core HLA principles with recent biological insights and AI innovations, we outline a path toward reproducible and equitable clinical translation of immunogenomics in transplantation, infectious, inflammatory, and oncologic diseases, and precision vaccinology.
Placental histopathology provides important insights into maternal and fetal health, yet the organ's spatial heterogeneity poses significant challenges for objective and reproducible histological analysis. Systematic assessment of cellular and structural composition across placental slides remains limited by the scale and subjectivity of manual evaluation. Quantitative approaches are therefore needed to characterise placental responses to injury beyond visually apparent lesions. We applied the Histology Analysis Pipeline.PY (HAPPY), a biologically inspired hierarchical deep learning framework for quantitative single-cell-resolution analysis of Haematoxylin and Eosin (H&E) slides, to 130 placental parenchyma slides from 62 singleton full-term live births. The dataset included healthy normal controls and four common placental lesion types: infarction, perivillous fibrin, avascular villi, and intervillous thrombosis. Cell-type and tissue-structure compositions were quantified, and slide-level deviation from a healthy reference was assessed using compositional data analysis. Placental slides with lesions exhibited significant cellular composition differences compared with healthy controls, including increased extravillous trophoblast and leukocyte densities and decreased Hofbauer cell densities. These cellular changes were accompanied by tissue-level alterations, particularly increased fibrin deposition and changes in villous structure. Compositional deviation increased with infarction size but not with other lesion types. Notably, compositional differences were also detected in slides without an apparent lesion from placentas with lesion(s) elsewhere, indicating organ-wide responses extending beyond focal pathology. Quantitative deep phenotyping reveals widespread cellular and structural changes associated with placental lesions, including effects not evident on routine histological assessment. 
These findings demonstrate the potential of AI-based digital histology to complement conventional placental pathology in research and clinical settings.
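Compositional analyses of cell-type densities, as used above, typically work in log-ratio space, because the densities are relative parts of a whole. A common building block is the centered log-ratio (clr) transform; the sketch below uses made-up proportions and does not claim to reproduce the HAPPY pipeline's exact deviation metric.

```python
import math

def clr(composition):
    """Centered log-ratio transform: log of each part minus the mean
    log, i.e. log(part / geometric mean). Maps a composition into
    unconstrained space where Euclidean geometry is valid."""
    logs = [math.log(x) for x in composition]
    mean_log = sum(logs) / len(logs)
    return [lx - mean_log for lx in logs]

# Hypothetical cell-type proportions
# (trophoblast, leukocyte, Hofbauer, other):
healthy = [0.55, 0.10, 0.15, 0.20]
lesion  = [0.50, 0.20, 0.05, 0.25]

# clr coordinates sum to zero by construction, so the Euclidean
# distance between clr vectors (the Aitchison distance) is a
# sensible slide-level deviation score from a healthy reference.
deviation = math.dist(clr(healthy), clr(lesion))
```

Under this kind of scheme, a slide's deviation score grows as its cell-type balance drifts from the healthy reference composition, which is how a lesion-size-dependent effect like the infarction finding could be quantified.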
With ever-increasing computational capabilities, robust and automated research workflows have become essential for orchestrating large numbers of interdependent simulations. However, significant technical expertise is still required to configure execution environments, define calculation inputs, interpret outputs, and manage the complexity of parallel code execution on remote machines. To address these challenges, we developed AiiDAlab, a Jupyter-based web platform powered by the AiiDA computational infrastructure that provides a framework for managing and automating computational workflows while ensuring reproducibility through full provenance tracking. Through a collection of open-source user-friendly applications, AiiDAlab enables scientists to set up, execute, and analyze complex computational workflows without interacting directly with the underlying technical details, allowing them to focus on their research questions. In this paper, we discuss how AiiDAlab has matured over the past few years, expanding beyond computational materials science and its AiiDA origins. We present recent developments toward integrating with electronic laboratory notebooks (ELNs) for FAIR-compliant data management, adoption in large-scale facilities for secure access to experimental data and analytical tools, and applications in educational settings. Together with community-driven efforts to simplify onboarding, improve access to computational resources, and support large-scale data workflows, these advancements position AiiDAlab as a powerful platform for accelerating scientific discovery and fostering collaboration across disciplines.
Traditional vaccine development faced significant hurdles, including lengthy timelines and high costs, which hindered rapid responses to pathogens. Although the emergence of AI offered transformative potential, the necessity for a fully integrated workflow was often overlooked in studies focusing on individual tools. This review addressed a critical gap by synthesizing AI technologies across the vaccine design process, focusing on the integrated workflow from antigen discovery to clinical translation. A systematic framework was required to connect disparate tools and ensure seamless transitions. Consequently, this study provided a comprehensive roadmap for pandemic preparedness and vaccine discovery. A systematic analysis based on the PRISMA framework (2015-2024) was conducted, and 19 landmark articles were reviewed. It was demonstrated that the paradigm shift from predictive to generative AI offered unprecedented opportunities for developing novel antigens and adjuvants with superior immunogenicity. Synthesis of the literature revealed rapid progress toward sophisticated deep learning. Transformer models and Protein Language Models emerged as dominant for epitope prediction, while AlphaFold2 became the standard for structural modeling. The advent of generative AI for de novo antigen design represented the leading edge of the discipline. Additionally, AI-enhanced molecular dynamics and digital twin simulations accelerated clinical validation and manufacturing scalability. The "Integrated AI Workflow for Vaccine Design and Development" was emphasized as a comprehensive system and a prerequisite for sustainable innovation. Overall, this analysis served as a strategic roadmap for utilizing AI as a transformative framework for next-generation vaccine discovery and pandemic preparedness.
Quantitative susceptibility mapping (QSM) measures the intrinsic magnetic susceptibility of tissues. Because susceptibility values are inherently relative rather than absolute, referencing to a region of interest (ROI) is commonly employed to mitigate intersubject and acquisition-related variability. To validate the feasibility and robustness of a differential ROI reference method in QSM using both digital phantom and in vivo data, particularly for deep gray matter susceptibility analysis. In the digital phantom study, susceptibility values in the caudate nucleus were assessed with and without differential referencing across various simulation conditions (slab widths, brain mask erosion levels, air susceptibility). In the in vivo study, susceptibility differences between 16 patients with Parkinson's disease (PD) and 16 controls were compared across multiple deep gray matter ROIs under variable acquisition parameters. One-way ANOVA was used in the in vivo study with multiple-comparison correction via the Benjamini-Hochberg false discovery rate (FDR-BH), with significance set at p < 0.05. In this study, "accuracy" refers specifically to the phantom study where ground-truth susceptibility values are known. In vivo, where such ground truth is unavailable, "robustness" and "group-discrimination sensitivity" are used as surrogate indicators of accuracy. Robustness was evaluated by the consistency of susceptibility differences across varying acquisition parameters, whereas group-discrimination sensitivity was evaluated by the ability to detect disease-related group differences between PD patients and controls. In the digital phantom study, the proposed method demonstrated stable susceptibility estimates across all conditions, with minimal deviation from true values when referencing anatomically adjacent ROIs.
In the in vivo study, referenced susceptibility values (e.g., substantia nigra minus putamen) yielded consistent and significant group differences across acquisition scenarios, whereas unreferenced values were more variable. The differential ROI reference method reduces susceptibility variability associated with acquisition and processing factors. This approach may offer a practical and pathology-resilient referencing alternative for deep brain analysis in neurodegenerative diseases.
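The differential referencing step itself is simple arithmetic: subtract the mean susceptibility of an anatomically adjacent reference ROI from the target ROI mean, so that any offset shared by both ROIs cancels. A minimal sketch with hypothetical voxel values (ppm); the numbers are illustrative, not measurements from the study.

```python
def referenced_susceptibility(target_roi, reference_roi):
    """Differential ROI referencing: target mean minus reference mean.
    A global offset common to both ROIs cancels out."""
    mean = lambda vals: sum(vals) / len(vals)
    return mean(target_roi) - mean(reference_roi)

# Hypothetical voxel susceptibilities (ppm):
# substantia nigra (target) vs. putamen (adjacent reference)
sn = [0.10, 0.12, 0.11]
put = [0.04, 0.05, 0.06]
baseline = referenced_susceptibility(sn, put)          # 0.06 ppm

# Simulate an acquisition-dependent shift of +0.02 ppm hitting
# both ROIs equally (e.g., from a different slab width):
shifted = referenced_susceptibility([v + 0.02 for v in sn],
                                    [v + 0.02 for v in put])
# baseline == shifted (up to float rounding): the offset cancels.
```

This cancellation is what makes the referenced substantia-nigra-minus-putamen values stable across acquisition scenarios while unreferenced values drift.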
The shift from "old data" to new knowledge discovery is fundamentally reshaping research in chemistry and materials science. Unlike traditional trial-and-error approaches, knowledge mining driven by large-scale databases offers unprecedented potential for exploring complex compositional spaces and accelerating rational materials design. In this review, we highlight three significant advances in discovering new materials knowledge from "old literature data": (1) in catalysis, data-driven approaches reveal new phenomena and limitations of existing theoretical models, greatly accelerating materials design and screening; (2) in solid-state electrolytes, data empowerment accelerates the understanding of underlying physical mechanisms; (3) in hydrogen storage, we demonstrate a pathway from "old data" to structured knowledge and finally to autonomous design. Finally, we highlight the critical role of database construction in data intelligence and the development of AI agents for materials design. Looking ahead, such data-driven models will continue to deepen knowledge generation and accelerate the discovery of target materials in relevant fields. By integrating knowledge generation from "old data", theoretical simulations, and experimental validation, this approach promises to establish a digital materials ecosystem for cross-disciplinary innovation, in which materials discovery will be continuously accelerated.
Digital video platforms are not neutral conduits; their ranking signals and interfaces shape what becomes visible and valuable. Badminton, which balances spectacle and instruction, offers leverage to compare Douyin's engagement-optimized short-video ecology with Bilibili's community- and knowledge-oriented ecology. We examine whether engagement patterns differ across platform-native discovery environments and interpret these differences through an algorithmic cultural filtering lens. We analyzed 400 videos sampled in June 2025 (200 per platform) using each platform's native discovery orders; coders recorded content type, creator identity, duration, and public counters (likes, comments, shares, and bookmarks), with excellent inter-coder reliability (κ = 0.94). Because engagement metrics were highly right-skewed, cross-platform differences were first examined using two-tailed Mann-Whitney U-tests with effect sizes. We then estimated multivariable linear regression models for log-transformed engagement outcomes, adjusting for platform, content type, video duration, creator identity, and keyword match. Douyin yields markedly higher instantaneous reactions (likes, comments, shares), whereas favorites/bookmarks converge across platforms; Bilibili hosts longer videos and more instructional content, and creator ecologies diverge (Douyin KOL-led, Bilibili amateur-led). These regularities are consistent with an algorithmic cultural filtering lens, with platform architectures, creator adaptation, and audience preferences jointly shaping visible engagement patterns. Bridge formats (e.g., linking highlights to modular instruction) may connect attention with learning.
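The first-stage test above has a compact definition: the Mann-Whitney U statistic counts pairwise "wins" of one sample over the other, and the rank-biserial correlation, a common effect size for this test (the abstract does not say which was used), rescales U to [-1, 1]. A minimal pure-Python sketch on invented like counts; real analyses would use scipy.stats.mannwhitneyu for p-values.

```python
def mann_whitney_u(x, y):
    """U statistic for sample x: number of (x, y) pairs where x beats y,
    with ties counting one half."""
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u

def rank_biserial(x, y):
    """Rank-biserial effect size: 2U/(n1*n2) - 1, in [-1, 1].
    +1 means every x exceeds every y; 0 means no tendency."""
    return 2.0 * mann_whitney_u(x, y) / (len(x) * len(y)) - 1.0

# Invented like counts for five videos per platform:
douyin_likes   = [5200, 8100, 950, 12000, 3300]
bilibili_likes = [400, 760, 120, 2100, 310]

effect = rank_biserial(douyin_likes, bilibili_likes)  # close to 1
```

Because the statistic depends only on ranks, it is unaffected by the heavy right skew of engagement counts, which is exactly why it was chosen over a t-test here.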
Infectious complications, such as sepsis or catheter-related infections, are common and serious sequelae after trauma. Despite their clinical significance, existing risk-prediction models are limited by reliance on in-hospital data that fail to capture complex physiological interactions. Thus, this study aimed to develop and validate an interpretable ensemble machine learning (ML) model integrating both prehospital and in-hospital clinical data to predict infectious complications after trauma. We used data from the Korean Trauma Data Bank, comprising patients admitted to all 19 trauma centers from 2017 to 2022 in South Korea (discovery; n = 227,567) and from four additional centers added in 2023 for external validation (n = 8867). Trauma cases were defined utilizing S or T diagnostic codes based on the 7th Korean Standard Classification of Diseases, and infectious complications were defined as a composite outcome of pneumonia, urinary tract infection, catheter-related bloodstream infection, surgical site infection (deep, organ, and superficial), osteomyelitis, or severe sepsis. A total of 33 prehospital and in-hospital features were used in ML model training, and the top-performing models were ensembled to construct the final model. Model performance was evaluated through five-fold cross-validation, internal testing, and external validation. Shapley Additive Explanations (SHAP) were applied to assess predictor importance, and predicted risks were categorized into tertiles (T1-T3) to examine associations with in-hospital mortality and presented adjusted odds ratios (aORs) with 95% confidence intervals (CIs). Among 88,899 eligible patients with trauma in the discovery cohort, the soft-voting ensemble model integrating logistic regression, categorical boosting, and extreme gradient boosting achieved the best discrimination, with an area under the receiver operating characteristic curve of 0.796 in the discovery cohort and 0.717 in the external validation cohort. 
SHAP analysis identified age, accident type, Glasgow Coma Scale verbal response, and sex as the most influential variables. Higher tertiles of predicted infection risk were strongly associated with mortality, with aORs of 2.52 (95% CI, 2.12-2.99) for T1, 4.65 (3.96-5.47) for T2, and 6.19 (5.02-7.62) for T3. This interpretable model, which integrates prehospital and in-hospital data available within the first 24 h of admission, demonstrated robust predictive performance for post-traumatic infectious complications. The proportional association between predicted infection risk and mortality highlights its clinical relevance, as even modest increases in predicted risk may carry meaningful implications for patient outcomes and early intervention strategies.
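Two mechanics of the pipeline above can be sketched generically: soft voting, which averages predicted probabilities across base models, and tertile-based risk grouping. The weights, probability values, and cut rule below are illustrative assumptions, not the study's fitted model.

```python
def soft_vote(model_probs, weights=None):
    """Soft-voting ensemble: weighted average of per-model predicted
    probabilities for each patient."""
    m = len(model_probs)
    weights = weights or [1.0 / m] * m
    n = len(model_probs[0])
    return [sum(w * probs[i] for w, probs in zip(weights, model_probs))
            for i in range(n)]

def tertile_labels(risks):
    """Assign T1 (lowest) to T3 (highest) by rank thirds of risk."""
    order = sorted(range(len(risks)), key=lambda i: risks[i])
    k = len(risks) / 3.0
    labels = [""] * len(risks)
    for rank, i in enumerate(order):
        labels[i] = "T%d" % min(3, int(rank // k) + 1)
    return labels

# Three base models (e.g., logistic regression, CatBoost, XGBoost)
# scoring six hypothetical patients:
lr  = [0.10, 0.80, 0.20, 0.70, 0.30, 0.60]
cat = [0.15, 0.85, 0.25, 0.65, 0.35, 0.55]
xgb = [0.05, 0.90, 0.15, 0.75, 0.25, 0.65]

risk = soft_vote([lr, cat, xgb])
groups = tertile_labels(risk)  # lowest-risk patients land in T1
```

Grouping the ensemble's continuous risk into T1-T3 is what allows the tertile-specific mortality odds ratios reported above to be estimated.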
Placental pathology provides critical diagnostic information following pregnancy complications and has the potential to inform mother and infant clinical care. Yet the field faces substantial challenges, including workforce shortages, high inter-observer variability, inconsistent reporting, and ongoing debate regarding clinical utility. Artificial intelligence (AI) and computer vision methods offer potential solutions through automated analysis of both gross photography and digital histology. Recent studies demonstrate technical feasibility across tasks including gestational age prediction, cellular and structural quantification, and lesion detection. Near-term clinical applications should prioritise standardising measurements, reducing inter-observer variability, and accelerating report generation through assistive workflows to deliver timely results for clinical decision-making. Over the longer term, AI-derived placental phenotypes could enable prognostic research linking placental features to maternal and childhood outcomes, supporting a shift from diagnostic to prognostic clinical pathways. However, significant barriers impede clinical translation. These include the infrastructure and deployment costs associated with digital pathology and AI, regulatory requirements, and challenges specific to placenta pathology such as limited open-source datasets, indication bias in clinical samples, difficulty integrating information across placental compartments, inconsistent data linkage, and uncertainty regarding which features should be measured. Together, these factors contribute to high development costs and limit the availability of clinically deployable models. This review examines both the opportunities for AI and computer vision in placental pathology and the barriers to their translation, and proposes priorities for the pathology community. 
Addressing these priorities would position AI to transform placental pathology from a resource-limited diagnostic service into a scalable, data-driven discipline that improves immediate clinical care and enables discovery-driven advances in maternal and child health.
Over the past decade, neuropsychopharmacology has shifted from stagnation to momentum, with first-in-class mechanisms and biomarker-enabled trials spanning psychiatry and neurology. We narratively synthesized advances from 2013 to 2026 across central nervous system (CNS) discovery and development, including pivotal trials, regulatory actions, digital/real-world evidence, genetics, artificial intelligence (AI), and implementation/global-access themes that are endorsed by international societies. Therapeutic gains include rapid-acting drugs for treatment-resistant depression (intranasal esketamine); psychedelic-assisted therapy for posttraumatic stress disorder and depression; neuroactive steroid γ-aminobutyric acid-A receptor positive allosteric modulators (brexanolone, zuranolone) for postpartum depression; non-dopaminergic muscarinic agonists (xanomeline-trospium) for schizophrenia; orexin receptor antagonists for insomnia; and anti-amyloid monoclonal antibodies (lecanemab, donanemab) for early Alzheimer's disease. Persistent barriers include high mid-/late-stage attrition driven by placebo effects, subjective endpoints, and preclinical-to-clinical gaps; regulatory and economic headwinds; and limited generalizability from tightly run trials. Emerging enablers include adaptive/platform designs, digital health technologies, patient-reported outcomes and clinical outcome assessments, real-world evidence (RWE), AI/machine learning (ML), genetics for target de-risking and biomarker-guided stratification, and large, publicly accessible CNS-relevant biological datasets.
To convert momentum into durable progress, we recommend: (i) deeper academia-industry/stakeholder collaboration and sustained funding for high-risk/high-reward science from industry, governments, and not-for-profit foundations; (ii) modernized regulation (flexible evidentiary paths, novel endpoints, and clear guidance on adaptive/platform trials); (iii) data-driven development integrating RWE, AI/ML, and precision medicine; (iv) the adoption of Neuroscience-based Nomenclature (NbN); and (v) a global-access mandate with essential-medicine inclusion, equitable pricing/licensing, capacity building, tele-enabled mental health, and geographically diverse research. Aligning scientific innovation with implementation and equity can accelerate translation and ensure new treatments benefit patients worldwide.
Hepatocellular carcinoma (HCC) remains a significant global health challenge, with therapeutic efficacy in advanced stages often limited by underlying liver dysfunction and adaptive resistance. In this review, the evolving landscape of molecular targets and combinatorial strategies is critically examined, with a particular focus on the transition from preclinical discovery to clinical application. While traditional molecular heterogeneity is acknowledged, the aim is to elucidate how emerging computational paradigms are redefining target discovery and therapeutic stratification in HCC. The primary purpose is to evaluate the role of Artificial Intelligence (AI) and Machine Learning (ML) as integrative tools for translating high-dimensional multi-omics data into clinically actionable insights for HCC management. Special attention is given to the capacity of AI-driven frameworks to analyze complex datasets derived from genomics, transcriptomics, proteomics, metabolomics, and epigenomics, thereby enabling the identification of novel predictive biomarkers, patient subgroups, and rational drug combinations. By synthesizing recent preclinical and clinical evidence, this review highlights how AI-guided approaches can accelerate biomarker validation and optimize therapeutic decision-making. Furthermore, the convergence of AI with spatial transcriptomics, digital pathology, and single-cell technologies is discussed as a transformative infrastructure for decoding tumor-microenvironment interactions and spatial heterogeneity. These integrative strategies provide unprecedented resolution into tumor evolution, immune landscapes, and resistance mechanisms. Collectively, the evidence reviewed supports the conclusion that AI-enabled, multi-omics-driven approaches are instrumental in advancing HCC treatment toward a new era of adaptive, spatially informed, and precision-based personalized medicine.
Artificial Intelligence and Machine Learning (AI/ML) have already revolutionized drug discovery. However, Chemistry, Manufacturing, and Controls (CMC) remains in "digital infancy". This report from the 8th APV Winter Conference, attended by 55 participants, details a critical industry shift toward Digital Design and Quantitative Formulation. Five cooperative shifts are necessary for this transformation: redefining scientists as orchestrators, democratizing computational fluency, prioritizing structured data, optimizing strategic experimentation, and aligning with evolving regulatory frameworks. Critical challenges such as data reliability, model explainability, and "human-in-the-loop" accountability are addressed. Through case studies spanning physicochemical profiling, formulation and process development, equipment design, manufacturing and analytical control, and biopharmaceutics, the conference demonstrated how AI/ML creates high-fidelity and interoperable architectures advancing drug product development and manufacturing.