Found 20 results
No abstract available (see the original source for the full text)
Biology is perhaps the most complex of the sciences, given the incredible variety of chemical species interconnected in spatial and temporal pathways that are daunting to understand. These interconnections lead to emergent properties such as memory, consciousness, and recognition of self and non-self. To understand how these interconnected reactions lead to cellular life characterized by activation, inhibition, regulation, homeostasis, and adaptation, computational analyses and simulations are essential, a fact recognized by the biological community. At the same time, students struggle to understand and apply binding and kinetic analyses even for the simplest reactions, such as the irreversible first-order conversion of a single reactant to a product. This likely results from cognitive difficulties in combining structural, chemical, mathematical, and textual descriptions of binding and catalytic reactions. To help students better understand dynamic reactions and their analyses, we have introduced two kinds of interactive graphs and simulations into the online educational resource Fundamentals of Biochemistry, a multivolume biochemistry textbook that is part of the LibreText c
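The irreversible first-order conversion mentioned above (A -> P with rate constant k) is the simplest dynamic reaction students meet; a minimal sketch of its analytical solution alongside a numerical check (the concentrations and rate constant below are illustrative, not values from the textbook):

```python
import math

def first_order_conc(a0, k, t):
    """Analytical [A](t) = A0 * exp(-k*t) for irreversible A -> P."""
    return a0 * math.exp(-k * t)

def euler_first_order(a0, k, t_end, dt=1e-4):
    """Forward-Euler integration of d[A]/dt = -k[A], for comparison."""
    a, t = a0, 0.0
    while t < t_end:
        a += -k * a * dt
        t += dt
    return a

a0, k = 1.0, 0.5           # illustrative: A0 = 1.0 mM, k = 0.5 s^-1
t_half = math.log(2) / k   # half-life of a first-order process
print(round(first_order_conc(a0, k, t_half), 3))  # ~0.5, half of A0
print(round(euler_first_order(a0, k, t_half), 3))  # numerically close
```

At one half-life the analytical and numerical answers agree to within the step size, which is exactly the kind of comparison an interactive graph can make visible.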
Dietary flavonoids associate with disease prevention in epidemiological studies, yet their polypharmacological mechanisms remain unclear. We establish network pharmacology as a systematic framework to characterize flavonoid therapeutic properties through integrated computational, experimental, and epidemiological validation. We constructed a master network of 17,869 human proteins, 14 dietary flavonoids, and 1,496 FDA-approved drugs with 278,768 interactions. Flavonoids averaged 45.3 target proteins per compound compared to 16.8 for FDA-approved drugs (2.7-fold higher; p=7.5x10^-4), reflecting multi-target architecture. Statistical analysis revealed that 71.4% of flavonoids targeted proteins associated with cardiovascular drugs and 78.6% aligned with antineoplastic drug targets. MTT-based Jurkat cell assays confirmed network predictions: high-association flavonoids (luteolin LC50=31.4 microM, myricetin=29.5 microM) produced strong cytotoxicity, while low-association flavonoids showed minimal activity (LC50>200 microM). Network-predicted association strengths correlated with experimental bioactivity (Pearson r=0.918; R^2=0.843). We translated network associations into food-level
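The reported agreement between network-predicted association strength and measured bioactivity (Pearson r = 0.918, R^2 = 0.843) is an ordinary Pearson correlation; a minimal sketch with made-up numbers (the scores and activities below are illustrative, not the paper's data):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical network association scores vs. bioactivity (e.g., -log LC50):
scores = [0.9, 0.8, 0.6, 0.4, 0.2]
activity = [1.5, 1.4, 1.0, 0.7, 0.3]
r = pearson_r(scores, activity)
print(round(r, 3), round(r ** 2, 3))  # r and R^2 = r^2
```

R^2 here is simply the square of r, which is how a correlation pair like (0.918, 0.843) arises.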
Reducing the average memory access time is crucial for improving the performance of applications running on multi-core architectures. With workload consolidation this becomes increasingly challenging due to shared-resource contention. Techniques for partitioning shared resources (cache and bandwidth) and for prefetch throttling have been proposed to mitigate contention and reduce the average memory access time. However, existing proposals employ only a single one or a subset of these techniques and are therefore unable to exploit the full potential of coordinated management of cache, bandwidth, and prefetching. Our characterization results show that application performance is, in several cases, sensitive to prefetching, cache, and bandwidth allocation. Furthermore, the results show that managing these together provides higher performance potential during workload consolidation, as it enables more resource trade-offs. In this paper, we propose CBP, a coordination mechanism for dynamically managing prefetch throttling, cache partitioning, and bandwidth partitioning in order to reduce the average memory access time and improve performance. CBP works by employing individual resource managers to determ
Large language models (LLMs) achieve strong performance across many natural language processing tasks, yet their decision processes remain difficult to interpret. This lack of transparency creates challenges for trust, debugging, and deployment in real-world systems. This paper presents an applied comparative study of three explainability techniques (Integrated Gradients, Attention Rollout, and SHAP) on a fine-tuned DistilBERT model for SST-2 sentiment classification. Rather than proposing new methods, the focus is on evaluating the practical behavior of existing approaches under a consistent and reproducible setup. The results show that gradient-based attribution provides more stable and intuitive explanations, while attention-based methods are computationally efficient but less aligned with prediction-relevant features. Model-agnostic approaches offer flexibility but introduce higher computational cost and variability. This work highlights key trade-offs between explainability methods and emphasizes their role as diagnostic tools rather than definitive explanations. The findings provide practical insights for researchers and engineers working with transformer-based NLP systems. T
In anticipation of the completion of the High-Luminosity Large Hadron Collider (HL-LHC) programme by the end of 2041, CERN is preparing to launch a new major facility in the mid-2040s. According to the 2020 update of the European Strategy for Particle Physics (ESPP), the highest-priority next collider is an electron-positron Higgs factory, followed in the longer term by a hadron-hadron collider at the highest achievable energy. The CERN directorate established a Future Colliders Comparative Evaluation working group in June 2023. This group brings together project leaders and domain experts to conduct a consistent evaluation of the Future Circular Collider (FCC) and alternative scenarios based on shared assumptions and standardized criteria. This report presents a comparative evaluation of proposed future collider projects submitted as input for the Update of the European Strategy for Particle Physics. These proposals are compared considering main performance parameters, environmental impact and sustainability, technical maturity, cost of construction and operation, required human resources, and realistic implementation timelines. An overview of the international collider projects w
Quantitative Systems Pharmacology (QSP) modeling is essential for drug development, but it requires a significant time investment that limits the throughput of domain experts. We present GRASP, a multi-agent, graph-reasoning framework with a human-in-the-loop conversational interface, which encodes QSP models as typed biological knowledge graphs and compiles them to executable MATLAB/SimBiology code while preserving units, mass balance, and physiological constraints. A two-phase workflow, Understanding (graph reconstruction of legacy code) and Action (constraint-checked, language-driven modification), is orchestrated by a state machine with iterative validation. GRASP performs breadth-first parameter alignment around new entities to surface dependent quantities and propose biologically plausible defaults, and it runs automatic execution/diagnostics until convergence. In head-to-head evaluations using LLM-as-judge, GRASP outperforms SME-guided CoT and ToT baselines across biological plausibility, mathematical correctness, structural fidelity, and code quality (approximately 9-10/10 vs. 5-7/10). BFS alignment achieves F1 = 0.95 for dependency discovery,
Acute poly-substance intoxication requires rapid, life-saving decisions under substantial uncertainty, as clinicians must rely on incomplete ingestion details and nonspecific symptoms. Effective diagnostic reasoning in this chaotic environment requires fusing unstructured, non-medical narratives (e.g., paramedic scene descriptions and unreliable patient self-reports or known histories) with structured medical data like vital signs. While Large Language Models (LLMs) show potential for processing such heterogeneous inputs, they struggle in this setting, often underperforming simple baselines that rely solely on patient histories. To address this, we present DeToxR (Decision-support for Toxicology with Reasoning), the first adaptation of Reinforcement Learning (RL) to emergency toxicology. We design a robust data-fusion engine for multi-label prediction across 14 substance classes based on an LLM fine-tuned with Group Relative Policy Optimization (GRPO). We optimize the model's reasoning directly using a clinical performance reward. By formulating a multi-label agreement metric as the reward signal, the model is explicitly penalized for missing co-ingested substances and hallucinating
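A multi-label agreement reward of the kind described, one that penalizes both missed co-ingestants (false negatives) and hallucinated substances (false positives), can be sketched as a per-case F1 over substance classes (the class names and the choice of plain F1 are assumptions for illustration, not the paper's exact reward):

```python
def multilabel_f1_reward(predicted, actual):
    """F1 between predicted and true substance-class sets.

    Missing a co-ingested class lowers recall; hallucinating an
    extra class lowers precision, so both errors reduce the reward.
    """
    predicted, actual = set(predicted), set(actual)
    if not predicted and not actual:
        return 1.0  # correctly predicted "no substances detected"
    tp = len(predicted & actual)
    if tp == 0:
        return 0.0
    precision = tp / len(predicted)
    recall = tp / len(actual)
    return 2 * precision * recall / (precision + recall)

# Hypothetical case: true co-ingestion of opioids + benzodiazepines
truth = {"opioid", "benzodiazepine"}
print(multilabel_f1_reward({"opioid"}, truth))                    # misses one class
print(multilabel_f1_reward({"opioid", "benzodiazepine"}, truth))  # perfect agreement
```

Used as an RL reward, this signal rises only when the predicted set matches the full co-ingestion profile, not just the most salient substance.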
As the volume and complexity of nonclinical toxicology studies continue to increase, toxicologic pathology reporting faces persistent challenges, including fragmented sources of data (e.g., histopathology images, clinical pathology and other study data, adverse effects database, mechanistic literature), variable reporting timelines and heightened regulatory expectations. This white paper examines the emerging role of agentic artificial intelligence (AI) in addressing these issues through coordinated workflow orchestration, data integration, and pathologist-in-the-loop report generation. Based on a closed-door roundtable held during the 2025 Society of Toxicologic Pathology (STP) Annual Meeting and follow-on discussions, this paper synthesizes the perspectives of leading toxicologic pathologists, toxicologists, and AI developers. It outlines the key pain points in current reporting workflows, identifies realistic near-term use cases for agentic AI, and describes major adoption barriers including requirements for transparency, validation, and organizational readiness. A phased adoption roadmap and pilot design considerations are proposed to help support responsible evaluation and dep
Object detection in remotely sensed satellite imagery is fundamental in many fields, such as biophysical and environmental monitoring. While deep learning algorithms are constantly evolving, they have mostly been implemented and tested on popular ground-based photographs. This paper critically evaluates and compares a suite of advanced object detection algorithms customized for the task of identifying aircraft within satellite imagery. Using the large HRPlanesV2 dataset, together with rigorous validation on the GDIT dataset, this research encompasses an array of methodologies including YOLO versions 5 and 8, Faster RCNN, CenterNet, RetinaNet, RTMDet, and DETR, all trained from scratch. This exhaustive training and validation study reveals YOLOv5 as the preeminent model for the specific case of identifying airplanes in remote sensing data, showcasing high precision and adaptability across diverse imaging conditions. This research highlights the nuanced performance landscapes of these algorithms, with YOLOv5 emerging as a robust solution for aerial object detection, underlining its importance through superior mean average precision, recall, and Intersection over Union scores. T
Many multi-genic systemic diseases, such as neurological disorders, inflammatory diseases, and the majority of cancers, do not yet have effective treatments. Reinforcement learning-powered systems pharmacology is a potentially effective approach to designing personalized therapies for untreatable complex diseases. In this survey, state-of-the-art reinforcement learning methods and their latest applications to drug design are reviewed. The challenges of harnessing reinforcement learning for systems pharmacology and personalized medicine are discussed, and potential solutions to overcome them are proposed. Despite the successful application of advanced reinforcement learning techniques to target-based drug discovery, new reinforcement learning strategies are needed to address systems pharmacology-oriented personalized de novo drug design.
A fundamental mistake in receptor theory has led to an enduring misunderstanding of how to estimate the affinity and efficacy of an agonist. These properties are inextricably linked and cannot be easily separated in any case where the binding of a ligand induces a conformation change in its receptor. Consequently, binding curves and concentration-response relationships for receptor agonists have no straightforward interpretation. This problem, the affinity-efficacy problem, remains overlooked and misunderstood despite it being recognised in 1987. To avoid the further propagation of this misunderstanding, we propose that the affinity-efficacy problem should be included in the core curricula for pharmacology undergraduates proposed by the British Pharmacological Society and IUPHAR.
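The entanglement the authors describe is easiest to see in the simplest two-state (del Castillo-Katz) scheme, in which binding is followed by an activating conformational change; a standard-notation sketch (not taken from the paper itself):

```latex
% A + R <-> AR <-> AR*,  affinity K_A = [A][R]/[AR],  efficacy E = [AR*]/[AR]
p_{\mathrm{occ}}([A])
  = \frac{[AR]+[AR^{*}]}{[R]+[AR]+[AR^{*}]}
  = \frac{[A]}{[A] + K_A/(1+E)}
\qquad\Longrightarrow\qquad
K_{\mathrm{obs}} = \frac{K_A}{1+E}
```

The midpoint of a binding curve, K_obs, thus depends on both affinity (K_A) and efficacy (E), so neither property can be read off a binding or concentration-response curve alone.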
Traditionally, studies in experimental physiology have been conducted in small groups of human participants, animal models or cell lines. Identifying optimal study designs that achieve sufficient power for drawing proper statistical inferences to detect group-level effects with small sample sizes has been challenging. Moreover, average effects derived from traditional group-level inference do not necessarily apply to individual participants. Here, we introduce N-of-1 trials as an innovative study design that can be used to draw valid statistical inference about the effects of interventions on individual participants and can be aggregated across multiple study participants to provide population-level inferences more efficiently than standard group randomized trials. N-of-1 trials have been used since the late 1980s, but without large-scale adoption and with few applications in experimental physiology research settings. In this manuscript, we introduce the key components and design features of N-of-1 trials, describe the statistical analysis and interpretation of the results, and outline available digital tools that facilitate their use, drawing examples from experimental physiology.
Natural language processing (NLP) is an area of artificial intelligence that applies information technologies to process human language, understand it to a certain degree, and use it in various applications. This area has rapidly developed in the last few years and now employs modern variants of deep neural networks to extract relevant patterns from large text corpora. The main objective of this work is to survey the recent use of NLP in the field of pharmacology. As our work shows, NLP is a highly relevant information extraction and processing approach for pharmacology. It has been used extensively, from intelligent searches through thousands of medical documents to finding traces of adverse drug interactions in social media. We split our coverage into five categories to survey modern NLP methodology, commonly addressed tasks, relevant textual data, knowledge bases, and useful programming libraries. We split each of the five categories into appropriate subcategories, describe their main properties and ideas, and summarize them in a tabular form. The resulting survey presents a comprehensive overview of the area, useful to practitioners and interested observers.
We present a general methodology for performing statistical inference on the components of a real-valued matrix parameter for which rows and columns are subject to order restrictions. The proposed estimation procedure is based on an iterative algorithm developed by Dykstra and Robertson (1982) for simple order restriction on rows and columns of a matrix. For any order restrictions on rows and columns of a matrix, sufficient conditions are derived for the algorithm to converge in a single application of row and column operations. The new algorithm is applicable to a broad collection of order restrictions. In practice, it is easy to design a study such that the sufficient conditions derived in this paper are satisfied. For instance, the sufficient conditions are satisfied in a balanced design. Using the estimation procedure developed in this article, a bootstrap test for order restrictions on rows and columns of a matrix is proposed. Computer simulations for ordinal data were performed to compare the proposed test with some existing test procedures in terms of size and power. The new methodology is illustrated by applying it to a set of ordinal data obtained from a toxicological stud
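The row and column operations referred to are order-restricted (isotonic) regressions; a minimal sketch of the pool-adjacent-violators step applied alternately to rows and columns of a small matrix (unweighted, simple nondecreasing orderings, an illustration rather than the authors' full algorithm):

```python
def pava(y):
    """Pool-adjacent-violators: least-squares nondecreasing fit to y."""
    blocks = [[v, 1] for v in y]  # each block stores [mean, size]
    i = 0
    while i < len(blocks) - 1:
        if blocks[i][0] > blocks[i + 1][0] + 1e-12:  # order violation
            m0, n0 = blocks[i]
            m1, n1 = blocks[i + 1]
            blocks[i] = [(m0 * n0 + m1 * n1) / (n0 + n1), n0 + n1]
            del blocks[i + 1]
            i = max(i - 1, 0)  # pooled block may now violate to its left
        else:
            i += 1
    out = []
    for mean, size in blocks:
        out.extend([mean] * size)
    return out

def one_row_col_pass(mat):
    """One sweep: PAVA on every row, then PAVA on every column."""
    mat = [pava(row) for row in mat]
    cols = [pava(list(col)) for col in zip(*mat)]
    return [list(r) for r in zip(*cols)]

m = [[3.0, 1.0], [2.0, 4.0]]
print(one_row_col_pass(m))  # rows and columns both nondecreasing
```

For this small example a single row-then-column pass already satisfies both orderings, which is the behavior the paper's sufficient conditions guarantee in designs such as balanced ones.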
This paper presents a quasi-sequential optimal design framework for toxicology experiments, specifically applied to sea urchin embryos. We propose a novel approach combining robust optimal design with adaptive, stage-based testing to improve efficiency in toxicological studies, particularly where traditional uniform designs fall short. The methodology uses statistical models to refine dose levels across experimental phases, aiming for increased precision while reducing cost and complexity. Key components include selecting an initial design, iteratively optimizing doses based on preliminary results, and assessing various model fits to ensure robust, data-driven adjustments. Through case studies, we demonstrate improved statistical efficiency and adaptability in toxicology, with potential applications in other experimental domains.
Large language models (LLMs) have shown strong empirical performance across pharmacology and drug discovery tasks, yet the internal mechanisms by which they encode pharmacological knowledge remain poorly understood. In this work, we investigate how drug-group semantics are represented and retrieved within Llama-based biomedical language models using causal and probing-based interpretability methods. We apply activation patching to localize where drug-group information is stored across model layers and token positions, and complement this analysis with linear probes trained on token-level and sum-pooled activations. Our results demonstrate that early layers play a key role in encoding drug-group knowledge, with the strongest causal effects arising from intermediate tokens within the drug-group span rather than the final drug-group token. Linear probing further reveals that pharmacological semantics are distributed across tokens and are already present in the embedding space, with token-level probes performing near chance while sum-pooled representations achieve maximal accuracy. Together, these findings suggest that drug-group semantics in LLMs are not localized to single tokens but
In recent years, a considerable portion of the computer science community has focused its attention on understanding living-cell biochemistry, and efforts to understand such a complicated reaction environment have spread over a wide front, ranging from systems biology approaches, through network analysis (motif identification), to developing languages and simulators for low-level biochemical processes. Apart from simulation work, much of the effort is directed at using mean-field equations (equivalent to the equations of classical chemical kinetics) to address various problems (stability, robustness, sensitivity analysis, etc.). Rarely is the use of mean-field equations questioned. This review provides a brief overview of the situations in which mean-field equations fail and should not be used. These equations can be derived from the theory of diffusion-controlled reactions and emerge when the assumption of perfect mixing is made.
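The contrast the review draws is easiest to see by putting the deterministic rate equation next to an exact stochastic (Gillespie-type) simulation at low copy numbers; a minimal sketch for irreversible decay A -> 0 (rates and molecule counts are illustrative). For this linear reaction the mean-field mean is in fact exact, but individual trajectories fluctuate strongly at low copy number, and for nonlinear (e.g., bimolecular) kinetics even the mean deviates, which is the regime where mean-field equations fail:

```python
import math, random

def mean_field(n0, k, t):
    """Deterministic rate-equation prediction for A -> 0: n0 * exp(-k*t)."""
    return n0 * math.exp(-k * t)

def gillespie_decay(n0, k, t_end, rng):
    """Exact stochastic simulation of A -> 0; total decay rate is k*n."""
    n, t = n0, 0.0
    while n > 0:
        t += rng.expovariate(k * n)  # waiting time to the next decay event
        if t > t_end:
            break
        n -= 1
    return n

rng = random.Random(0)
n0, k, t = 20, 0.5, 1.0
runs = [gillespie_decay(n0, k, t, rng) for _ in range(2000)]
print(round(mean_field(n0, k, t), 2))   # deterministic prediction (~12.13)
print(round(sum(runs) / len(runs), 2))  # ensemble mean, close to mean field
print(min(runs), max(runs))             # but single runs scatter widely
```

The spread across runs, invisible to the mean-field description, is exactly the information lost when perfect mixing and large copy numbers are assumed.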
Evolution is often understood through genetic mutations driving changes in an organism's fitness, but there is potential to extend this understanding beyond the genetic code. We propose that natural products, complex molecules central to Earth's biochemistry, can be used to uncover evolutionary mechanisms beyond genes. By applying Assembly Theory (AT), which views selection as a process not limited to biological systems, we can map and measure evolutionary forces in these molecules. AT enables the exploration of the assembly space of natural products, demonstrating how the principles of the selfish gene apply to these complex chemical structures, selecting vastly improbable and complex molecules from a vast space of possibilities. By comparing natural products with a broader molecular database, we can assess the degree of evolutionary contingency, providing insight into how molecular novelty emerges and persists. This approach not only quantifies evolutionary selection at the molecular level but also offers a new avenue for drug discovery by exploring the molecular assembly spaces of natural products. Our method provides a fresh perspective on measuring the evolutionary processes b