Found 20 results
Two-photon (2P) imaging has proven to be a powerful tool for investigating neural structure and function both in brain slices and in intact systems. In vivo 2P imaging presents significant challenges in sample preparation, which are exacerbated in non-murine species. Here, we describe procedures for the effective virally mediated labeling of neurons and for the implantation of cranial windows for imaging. The procedures described here are applicable to a range of species, including mice, and are routinely used in ferrets and tree shrews to provide large-scale labeling of cortical volumes and high-quality imaging data.
Proteins continuously interact with each other to determine cell fate. Consequently, an examination of just when such protein-protein interactions occur and how they are controlled is essential for understanding the molecular mechanism of biological processes, elucidating the molecular basis of diseases, and identifying potential targets for therapeutic interventions. In Protein-Protein Interactions: Methods and Applications, leading experts describe in detail their highly successful biochemical, biophysical, genetic, and computational techniques for studying these interactions. Their readily reproducible methods demonstrate how to identify protein interaction partners, qualitatively or quantitatively measure protein-protein interactions, monitor protein-protein interactions as they occur in living cells, and determine interaction interfaces. The techniques described utilize a variety of cutting-edge technologies, including surface plasmon resonance (SPR), fluorescence resonance energy transfer (FRET), fluorescence polarization (FP), isothermal titration calorimetry (ITC), circular dichroism (CD), protein fragment complementation assays (PCA), various two-hybrid systems, and proteomics- and bioinformatics-based approaches, such as the Scansite program for computational analysis. Each time-tested protocol includes a background introduction outlining the principle behind the technique, lists of equipment and reagents, and tips on troubleshooting and avoiding known pitfalls. Authoritative and highly practical, Protein-Protein Interactions: Methods and Applications offers both beginning and experienced investigators a full range of the powerful tools needed for deciphering how proteins interact to form biological networks, as well as for unraveling protein-protein interactions in disease in the search for novel therapeutic targets.
Achieving complete reproducibility in science, particularly in research fields such as biodiversity, is challenging due to analytical choices, bias and interpretation. Here, we examine examples of reproducibility in biological systematics, ecology, and molecular biology. Artificial Intelligence (AI) provides potential tools for mitigating the impact of interpretation and analytical choices. In the present work, while emphasizing the need for methodological rigor and transparency, we acknowledge the role of interpretation in activities such as coding biological characters and detecting morphological patterns in nature. We explore the opportunities and limitations associated with the synergy between big data and AI in molecular biology, emphasizing the need for a more comprehensive and integrated approach based on dataset quality and usefulness. We conclude by advocating for AI-based tools to assist biologists, reinforcing consilience as a criterion for scientific validity without hindering scientific progress.
From Optical Activity in Quartz to Chiral Drugs: Molecular Handedness in Biology and Medicine. Ronald Bentley, Professor Emeritus, Department of Biological Sciences, University of Pittsburgh, Pittsburgh, Pennsylvania 15260.
References:
1. BROWNE, M. W. "Mirror image" chemistry yielding new products. The New York Times, 13 Aug. 1991, sect. B: 5, 8.
2. FDA's policy statement for the development of new stereoisomeric drugs. Chirality 4:338-340, 1992.
3. FEDER, B. J. Separating "mirror" molecules for better drugs. The New York Times, 12 Feb. 1992, sect. C: 7.
4. LEWIN, R. Chemistry in the image of biology. Science 238:611-612, 1987.
5. CHAIKEN, I.; CHIANCONE, E.; FONTANA, A.; and NERI, P., eds. Macromolecular Biorecognition: Principles and Methods. Clifton, N.J.: Humana Press, 1987.
6. ROBERTS, S. M., ed. Molecular Recognition: Chemical and Biochemical Problems. Vols. I, II. Cambridge: Royal Society of Chemistry, 1989, 1992.
7. HOLMSTEDT, B.; FRANK, H.; and TESTA, B., eds. Chirality and Biological Activity. New York: A. R. Liss, 1990.
8. BROWN, C., ed. Chirality in Drug Design and Synthesis. San Diego: Academic Press, 1991.
9. WILSON, K., and WALKER, J. Chirality and its importance in drug development. Biochem. Soc. Trans. 19:443-474, 1991.
10. SHELDON, R. A. Chirotechnology: Industrial Synthesis of Optically Active Compounds. New York: M. Dekker, 1993.
11. AMATO, I. Looking glass chemistry. Science 256:964-966, 1992.
12. CAHN, R. S.; INGOLD, C.; and PRELOG, V. Specification of molecular chirality. Angew. Chem. Internat. Edit. 5:385-415, 1966.
13. MISLOW, K. Introduction to Stereochemistry. New York: W. A. Benjamin, 1965.
14. BIELLMANN, J.-F. Chiralité du diptérocarpol en C-20. Tetrahedron Letters, 4803-4805, 1966.
15. MACDERMOTT, A. J. Distinguishing true chirality from its accidental imitators. Nature 323:16-17, 1986.
16. BARRON, L. D. True and false chirality and absolute asymmetric synthesis. J. Amer. Chem. Soc. 108:5539-5542, 1986.
17. KYBA, E. P.; SIEGEL, M. G.; SOUSA, L. R.; et al. Chiral, hinged, and functionalized multiheteromacrocycles. J. Amer. Chem. Soc. 95:2691-2692, 1973.
18. KYBA, E. P.; KOGA, K.; SOUSA, L. R.; et al. Chiral recognition in molecular complexing. J. Amer. Chem. Soc. 95:2692-2693, 1973.
19. JOB, R., and BRUICE, T. C. Chiral recognition of a prochiral (meso-carbon) centre by Λ(—)436-α-l,l-2,9-diamino-4,7-diazadecanecobaltate. J. Chem. Soc. Chem. Comm. 332-333, 1973.
20. HERSCHEL, J. F. W. On the rotation impressed by plates of rock crystal on the planes of polarization of the rays of light, as connected with certain peculiarities in its crystallization. Trans. Camb. Phil. Soc. 1:43-52, 1822.
21. PASTEUR, L. Researches on the Molecular Asymmetry of Natural Organic Products. Reissue edition published for the Alembic Club. Edinburgh: E. & S. Livingstone, 1948.
22. JAPP, F. R. Stereochemistry and vitalism. Nature 58:452-460, 1898.
23. HANEIN, D.; GEIGER, B.; and ADDADI, L. Differential adhesion of cells to enantiomorphous crystal surfaces. Science 263:1413-1416, 1994.
24. FRANKLAND, P. Pasteur Memorial Lecture. J. Chem. Soc. Trans. 71:683-743, 1897.
25. FISCHER, E. Ueber die Spaltung einiger racemischer Amidosäuren in die optisch-activen Componenten. Ber. dtsch. chem. Ges. 32:2451-2471, 1899.
26. FISCHER, E. Synthesen in der Zuckergruppe. Ber. dtsch. chem. Ges. 23:2114-2141, 1890.
27. FISCHER, E. Einfluss der Configuration auf die Wirkung der Enzyme. Ber. dtsch. chem. Ges. 27:2985-2993, 1894.
28. FISCHER, E., and BERGELL, P. Ueber die Derivate einiger Dipeptide und ihr Verhalten gegen Pankreasfermente. Ber. dtsch. chem. Ges. 33:2592-2608, 1903.
29. DAKIN, H. D. The hydrolysis of optically inactive esters...
Understanding the biological mechanisms of disease is crucial for medicine, and in particular, for drug discovery. AI-powered analysis of genome-scale biological data holds great potential in this regard. The increasing availability of single-cell RNA sequencing data has enabled the development of large foundation models for disease biology. However, existing foundation models only modestly improve over task-specific models in downstream applications. Here, we explored two avenues for improving single-cell foundation models. First, we scaled the pre-training data to a diverse collection of 116 million cells, which is larger than those used by previous models. Second, we leveraged the availability of large-scale biological annotations as a form of supervision during pre-training. We trained the \model family of models comprising six transformer-based state-of-the-art single-cell foundation models with 70 million, 160 million, and 400 million parameters. We vetted our models on several downstream evaluation tasks, including identifying the underlying disease state of held-out donors not seen during training, distinguishing between diseased and healthy cells for disease conditions and
In this paper, we propose and study several inverse problems of determining unknown parameters in nonlocal nonlinear coupled PDE systems, including the potentials, nonlinear interaction functions and time-fractional orders. In these coupled systems, we enforce non-negativity of the solutions, aligning with realistic scenarios in biology and ecology. There are several salient features of our inverse problem study: the drastic reduction in measurement/observation data due to averaging effects, the nonlinear coupling between multiple equations, and the nonlocality arising from fractional-type derivatives. These factors present significant challenges to our inverse problem, and such inverse problems have never been explored in previous literature. To address these challenges, we develop new and effective schemes. Our approach involves properly controlling the injection of different source terms to obtain multiple sets of mean flux data. This allows us to achieve unique identifiability results and accurately determine the unknown parameters. Finally, we establish a connection between our study and practical applications in biology, further highlighting the relevance of our work in real-
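To make the class of systems concrete, one illustrative (assumed) instance of a nonlocal nonlinear coupled system of the kind described, with the potentials, the nonlinear couplings, and the fractional orders as the unknowns, is:

```latex
\[
\partial_t^{\alpha_1} u - \Delta u + V_1(x)\,u = f_1(u, v), \qquad
\partial_t^{\alpha_2} v - \Delta v + V_2(x)\,v = f_2(u, v), \qquad
u, v \ge 0,
\]
```

where \(\partial_t^{\alpha}\) denotes a time-fractional (e.g., Caputo-type) derivative of order \(\alpha \in (0,1)\), and the inverse problem is to recover \(V_1, V_2\), \(f_1, f_2\) and \(\alpha_1, \alpha_2\) from averaged mean-flux measurements taken under controlled injections of source terms. The exact form of the equations in the paper may differ; this sketch only fixes notation for the three classes of unknowns named in the abstract.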
The understanding of molecular cell biology requires insight into the structure and dynamics of networks that are made up of thousands of interacting molecules of DNA, RNA, proteins, metabolites, and other components. One of the central goals of systems biology is the unraveling of the as yet poorly characterized complex web of interactions among these components. This work is made harder by the fact that new species and interactions are continuously discovered in experimental work, necessitating the development of adaptive and fast algorithms for network construction and updating. Thus, the "reverse-engineering" of networks from data has emerged as one of the central concern of systems biology research. A variety of reverse-engineering methods have been developed, based on tools from statistics, machine learning, and other mathematical domains. In order to effectively use these methods, it is essential to develop an understanding of the fundamental characteristics of these algorithms. With that in mind, this chapter is dedicated to the reverse-engineering of biological systems. Specifically, we focus our attention on a particular class of methods for reverse-engineering, namely th
Systems biology relies on mathematical models that often involve complex and intractable likelihood functions, posing challenges for efficient inference and model selection. Generative models, such as normalizing flows, have shown remarkable ability in approximating complex distributions in various domains. However, their application in systems biology for approximating intractable likelihood functions remains unexplored. Here, we elucidate a framework for leveraging normalizing flows to approximate complex likelihood functions inherent to systems biology models. By using normalizing flows in the simulation-based inference setting, we demonstrate a method that not only approximates a likelihood function but also allows for model inference in the model selection setting. We showcase the effectiveness of this approach on real-world systems biology problems, providing practical guidance for implementation and highlighting its advantages over traditional computational methods.
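The simulation-based inference idea can be sketched as follows. This is a minimal stand-in, not the paper's method: a single affine bijection plays the role of a full normalizing flow, and the simulator, observation, and parameter grid are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulator(theta, n=1000):
    # hypothetical stand-in for a systems-biology simulator:
    # steady-state readout centered on the parameter, with noise
    return theta + 0.5 * rng.normal(size=n)

def fit_affine_flow(samples):
    # one affine bijection z = (x - mu) / sigma pushed to N(0, 1);
    # its maximum-likelihood fit has this closed form
    return samples.mean(), samples.std()

def log_likelihood(x, mu, sigma):
    # change of variables: log N(z; 0, 1) + log |dz/dx|
    z = (x - mu) / sigma
    return -0.5 * (z**2 + np.log(2 * np.pi)) - np.log(sigma)

# approximate the likelihood surface over a grid of candidate parameters
x_obs = 2.0
thetas = np.linspace(0.0, 4.0, 41)
ll = [log_likelihood(x_obs, *fit_affine_flow(simulator(t))) for t in thetas]
theta_hat = thetas[int(np.argmax(ll))]
print(theta_hat)
```

With a real model the affine map would be replaced by a trained flow, but the workflow (simulate, fit a density, evaluate it at the observation) is the same.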
Dynamical systems modeling, particularly via systems of ordinary differential equations, has been used to effectively capture the temporal behavior of different biochemical components in signal transduction networks. Despite the recent advances in experimental measurements, including sensor development and '-omics' studies that have helped populate protein-protein interaction networks in great detail, modeling in systems biology lacks systematic methods to estimate kinetic parameters and quantify associated uncertainties. This is because of multiple reasons, including sparse and noisy experimental measurements, lack of detailed molecular mechanisms underlying the reactions, and missing biochemical interactions. Additionally, the inherent nonlinearities with respect to the states and parameters associated with the system of differential equations further compound the challenges of parameter estimation. In this study, we propose a comprehensive framework for Bayesian parameter estimation and complete quantification of the effects of uncertainties in the data and models. We apply these methods to a series of signaling models of increasing mathematical complexity. Systematic analysis o
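A minimal illustration of Bayesian parameter estimation for a dynamical model: a one-parameter exponential-decay ODE with random-walk Metropolis sampling. The model, noise level, and sampler settings are assumptions for the sketch, not the study's signaling models.

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic data from dx/dt = -k x (analytic solution exp(-k t)), true k = 0.5
t = np.linspace(0, 10, 20)
data = np.exp(-0.5 * t) + 0.05 * rng.normal(size=t.size)

def log_post(k, sigma=0.05):
    if k <= 0:
        return -np.inf  # positivity prior on the rate constant
    resid = data - np.exp(-k * t)
    return -0.5 * np.sum((resid / sigma) ** 2)

# random-walk Metropolis over the single rate constant k
k, lp, samples = 1.0, log_post(1.0), []
for _ in range(5000):
    k_new = k + 0.1 * rng.normal()
    lp_new = log_post(k_new)
    if np.log(rng.uniform()) < lp_new - lp:
        k, lp = k_new, lp_new
    samples.append(k)

posterior = np.array(samples[1000:])  # drop burn-in
print(posterior.mean(), posterior.std())
```

The posterior spread directly quantifies parameter uncertainty given the data noise; for real signaling models the analytic solution would be replaced by a numerical ODE solve inside `log_post`.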
The molecular machinery of life is largely created via self-organisation of individual molecules into functional assemblies. Minimal coarse-grained models, where a whole macromolecule is represented by a small number of particles, can be of great value in identifying the main driving forces behind self-organisation in cell biology. Such models can incorporate data from both molecular and continuum scales, and their results can be directly compared to experiments. Here we review the state of the art of models for studying the formation and biological function of macromolecular assemblies in cells. We outline the key ingredients of each model and their main findings. We illustrate the contribution of this class of simulations to identifying the physical mechanisms behind life and diseases, and discuss their future developments.
The central dogma of molecular biology, formulated more than five decades ago, compartmentalized information exchange in the cell into the DNA, RNA and protein domains. This formalization has served as an implicit thematic distinguisher for cell biological research ever since. However, a clear account of the distribution of research across this formalization over time does not exist. Abstracts of >3.5 million publications focusing on the cell from 1975 to 2011 were analyzed for the frequency of 100 single-word DNA-, RNA- and protein-centric search terms and amalgamated to produce domain- and subdomain-specific trends. A preponderance of protein- over DNA- and in turn over RNA-centric terms as a percentage of the total word count is evident until the early 1990s, at which point the trends for protein and DNA begin to coalesce while RNA percentages remain relatively unchanged. This term-based census provides a yearly snapshot of the distribution of research interests across the three domains of the central dogma of molecular biology. A frequency chart of the most dominantly studied elements of the periodic table is provided as an addendum.
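The term-census procedure reduces to counting domain-specific words per abstract. A toy sketch, with invented three-term lists standing in for the study's 100 search terms and two made-up abstracts:

```python
import re
from collections import Counter

domains = {  # tiny illustrative term lists, not the study's actual 100 terms
    "DNA": {"dna", "chromatin", "genome"},
    "RNA": {"rna", "transcript", "mrna"},
    "protein": {"protein", "kinase", "enzyme"},
}

abstracts = [
    "The kinase phosphorylates a protein bound to chromatin.",
    "mRNA levels reflect transcript stability and RNA processing.",
]

counts = Counter()
total_words = 0
for text in abstracts:
    words = re.findall(r"[a-z]+", text.lower())
    total_words += len(words)
    for domain, terms in domains.items():
        counts[domain] += sum(w in terms for w in words)

# domain share as a percentage of the total word count, as in the study
shares = {d: counts[d] / total_words for d in domains}
print(counts, shares)
```

Run per publication year, such shares yield exactly the kind of yearly domain trends the abstract describes.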
In a recent paper, Wilmes et al. demonstrated a qualitative integration of omics data streams to gain a mechanistic understanding of cyclosporine A toxicity. One of their major conclusions was that cyclosporine A strongly activates the nuclear factor (erythroid-derived 2)-like 2 pathway (Nrf2) in renal proximal tubular epithelial cells exposed in vitro. We pursue here the analysis of those data with a quantitative integration of omics data with a differential equation model of the Nrf2 pathway. That was done in two steps: (i) Modeling the in vitro pharmacokinetics of cyclosporine A (exchange between cells, culture medium and vial walls) with a minimal distribution model. (ii) Modeling the time course of omics markers in response to cyclosporine A exposure at the cell level with a coupled PK-systems biology model. Posterior statistical distributions of the parameter values were obtained by Markov chain Monte Carlo sampling. Data were well simulated, and the known in vitro toxic effect EC50 was well matched by model predictions. The integration of in vitro pharmacokinetics and systems biology modeling gives us a quantitative insight into mechanisms of cyclosporine A oxidative-stress
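Step (i), the minimal in vitro distribution model, amounts to first-order exchange between a few pools. A hedged sketch with hypothetical rate constants (the real model and parameter posteriors come from the MCMC fit described above):

```python
import numpy as np

# assumed three-pool exchange: medium <-> cells and medium <-> vial walls
k_mc, k_cm = 0.5, 0.2   # medium->cell, cell->medium (1/h), hypothetical
k_mw, k_wm = 0.1, 0.05  # medium->wall, wall->medium (1/h), hypothetical

def step(y, dt):
    medium, cell, wall = y
    d_med = -(k_mc + k_mw) * medium + k_cm * cell + k_wm * wall
    d_cell = k_mc * medium - k_cm * cell
    d_wall = k_mw * medium - k_wm * wall
    return y + dt * np.array([d_med, d_cell, d_wall])

y = np.array([1.0, 0.0, 0.0])   # all drug initially in the culture medium
for _ in range(int(48 / 0.01)):  # 48 h of forward-Euler steps, dt = 0.01 h
    y = step(y, 0.01)
print(y, y.sum())
```

Because the exchange fluxes cancel pairwise, total mass is conserved, which is a useful sanity check when coupling such a PK model to a downstream systems-biology (Nrf2) module.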
BACKGROUND: A major problem in pain medicine is the lack of knowledge about which treatment suits a specific patient. We tested the ability of quantitative sensory testing to predict the analgesic effect of pregabalin and placebo in patients with chronic pancreatitis. METHODS: Sixty-four patients with painful chronic pancreatitis received pregabalin (150-300 mg BID) or matching placebo for three consecutive weeks. Analgesic effect was documented in a pain diary based on a visual analogue scale. Responders were defined as patients with a reduction in clinical pain score of 30% or more after three weeks of study treatment compared to baseline recordings. Prior to study medication, pain thresholds to electric skin and pressure stimulation were measured in dermatomes T10 (pancreatic area) and C5 (control area). To eliminate inter-subject differences in absolute pain thresholds, an index of sensitivity between stimulation areas was determined (ratio of pain detection thresholds in pancreatic versus control area, ePDT ratio). Pain modulation was recorded by a conditioned pain modulation paradigm. A support vector machine was used to screen sensory parameters for their predictive power of pregabalin efficacy. RESULTS: The pregabalin responders group was hypersensitive to electric tetanic stimulation of the pancreatic area (ePDT ratio 1.2 (0.9-1.3)) compared to the non-responders group (ePDT ratio: 1.6 (1.5-2.0)) (P = 0.001). The electrical pain detection ratio was predictive for pregabalin effect with a classification accuracy of 83.9% (P = 0.007). The corresponding sensitivity was 87.5% and specificity was 80.0%. No other parameters were predictive of pregabalin or placebo efficacy. CONCLUSIONS: The present study provides first evidence that quantitative sensory testing predicts the analgesic effect of pregabalin in patients with painful chronic pancreatitis. 
The method can be used to tailor pain medication based on a patient's individual sensory profile and thus represents a significant step towards personalized pain medicine.
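The logic of an ePDT-ratio predictor can be sketched with a simple cutoff classifier. The study itself used a support vector machine; this threshold stand-in and the ratio values below are hypothetical, loosely mimicking the reported group medians (responders ~1.2, non-responders ~1.6).

```python
import numpy as np

# hypothetical ePDT ratios (pancreatic/control pain detection thresholds)
responders     = np.array([0.9, 1.0, 1.1, 1.2, 1.2, 1.3, 1.3, 1.4])
non_responders = np.array([1.3, 1.5, 1.5, 1.6, 1.7, 1.8, 2.0, 2.0])

def evaluate(threshold):
    # predict "responder" when the ratio falls below the cutoff
    tp = int(np.sum(responders < threshold))
    fn = responders.size - tp
    tn = int(np.sum(non_responders >= threshold))
    fp = non_responders.size - tn
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (responders.size + non_responders.size)
    return sensitivity, specificity, accuracy

sens, spec, acc = evaluate(threshold=1.45)
print(sens, spec, acc)
```

In practice the cutoff (or SVM decision boundary) would be chosen by cross-validation rather than fixed by hand.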
A number of models in mathematical epidemiology have been developed to account for control measures such as vaccination or quarantine. However, COVID-19 has brought unprecedented social distancing measures, posing the challenge of including these in a manner that explains the data while avoiding overfitting in parameter inference. We here develop a simple time-dependent model, where social distancing effects are introduced analogous to coarse-grained models of gene expression control in systems biology. We apply our approach to understand drastic differences in COVID-19 infection and fatality counts, observed between Hubei (Wuhan) and other Mainland China provinces. We find that these unintuitive data may be explained through an interplay of differences in transmissibility, effective protection, and detection efficiencies between Hubei and other provinces. More generally, our results demonstrate that regional differences may drastically shape infection outbursts. The obtained results demonstrate the applicability of our developed method to extract key infection parameters directly from publicly available data so that it can be globally applied to outbreaks of COVID-19 in a number
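The core device, a transmission rate that switches smoothly when distancing begins (analogous to a sigmoidal repression function in coarse-grained gene-expression models), can be sketched in an SIR framework. All parameter values here are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

def beta(t, b0=0.5, b1=0.1, t_lock=30.0, tau=3.0):
    # sigmoidal switch: transmission drops from b0 to b1 around t_lock,
    # mirroring repression functions in gene-expression models
    return b1 + (b0 - b1) / (1.0 + np.exp((t - t_lock) / tau))

def simulate(days=120, dt=0.1, gamma=0.2, N=1e7, I0=10.0):
    S, I, R = N - I0, I0, 0.0
    traj = []
    for k in range(int(days / dt)):
        t = k * dt
        new_inf = beta(t) * S * I / N * dt
        new_rec = gamma * I * dt
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
        traj.append(I)
    return np.array(traj)

infected = simulate()
peak_day = np.argmax(infected) * 0.1
print(peak_day, infected.max())
```

With these assumed values the effective reproduction number falls below 1 shortly after the switch, so the infection curve peaks a few days past `t_lock` and then decays, which is the qualitative behavior the model uses to explain regional differences.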
OBJECTIVES: Cerebral ischemia/reperfusion (IR) drives oxidative stress and injurious metabolic processes that lead to redox imbalance, inflammation, and tissue damage. However, the key mediators of reperfusion injury remain unclear, and therefore, there is considerable interest in therapeutically targeting metabolism and the cellular response to oxidative stress. METHODS: The objective of this study was to investigate the molecular, metabolic, and physiological impact of itaconate treatment to mitigate reperfusion injuries in in vitro and in vivo model systems. We conducted metabolic flux and bioenergetic studies in response to exogenous itaconate treatment in cultures of primary rat cortical neurons and astrocytes. In addition, we administered itaconate to mouse models of cerebral reperfusion injury with ischemia or traumatic brain injury followed by hemorrhagic shock resuscitation. We quantitatively characterized the metabolite levels, neurological behavior, markers of redox stress, leukocyte adhesion, arterial blood flow, and arteriolar diameter in the brains of the treated/untreated mice. RESULTS: We demonstrate that the "immunometabolite" itaconate slowed tricarboxylic acid (TCA) cycle metabolism and buffered redox imbalance via succinate dehydrogenase (SDH) inhibition and induction of anti-oxidative stress response in primary cultures of astrocytes and neurons. The addition of itaconate to reperfusion fluids after mouse cerebral IR injury increased glutathione levels and reduced reactive oxygen/nitrogen species (ROS/RNS) to improve neurological function. Plasma organic acids increased post-reperfusion injury, while administration of itaconate normalized these metabolites. In mouse cranial window models, itaconate significantly improved hemodynamics while reducing leukocyte adhesion. Further, itaconate supplementation increased survival in mice experiencing traumatic brain injury (TBI) and hemorrhagic shock. 
CONCLUSIONS: We hypothesize that itaconate transiently inhibits SDH to gradually "awaken" mitochondrial function upon reperfusion that minimizes ROS and tissue damage. Collectively, our data indicate that itaconate acts as a mitochondrial regulator that controls redox metabolism to improve physiological outcomes associated with IR injury.
A key aim of systems biology is the reconstruction of molecular networks; however, we do not yet have networks that integrate information from all datasets available for a particular clinical condition. This is in part due to the limited scalability, in terms of required computational time and power, of existing algorithms. Network reconstruction methods should also be scalable in the sense of allowing scientists from different backgrounds to efficiently integrate additional data. We present a network model of acute myeloid leukemia (AML). In the current version (AML 2.1) we have used gene expression data (both microarray and RNA-seq) from five different studies comprising a total of 771 AML samples and a protein-protein interactions dataset. Our scalable network reconstruction method is in part based on the well-known property of gene expression correlation among interacting molecules. The difficulty of distinguishing between direct and indirect interactions is addressed by optimizing the coefficient of variation of gene expression, using a validated gold standard dataset of direct interactions. Computational time is much reduced compared to other network reconstruction methods. A key
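The correlation-among-interactors property that the method builds on can be demonstrated in a few lines: compute pairwise Pearson correlations over samples and keep edges that clear a cutoff. The expression matrix, gene names, and threshold are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# toy expression matrix: 100 samples x 4 genes; gene B tracks gene A,
# gene D tracks gene C, and the two pairs are independent
n = 100
a = rng.normal(size=n)
c = rng.normal(size=n)
expr = np.column_stack([
    a,
    a + 0.3 * rng.normal(size=n),
    c,
    c + 0.3 * rng.normal(size=n),
])
genes = ["A", "B", "C", "D"]

corr = np.corrcoef(expr, rowvar=False)  # genes are columns

# keep an edge when |Pearson r| clears a fixed cutoff
edges = [
    (genes[i], genes[j])
    for i in range(len(genes))
    for j in range(i + 1, len(genes))
    if abs(corr[i, j]) > 0.8
]
print(edges)
```

Such a co-expression graph still mixes direct and indirect links; that is the gap the abstract's gold-standard-guided optimization step is meant to close.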
Support vector machines and kernel methods are increasingly popular in genomics and computational biology, due to their good performance in real-world applications and strong modularity that makes them suitable to a wide range of problems, from the classification of tumors to the automatic annotation of proteins. Their ability to work in high dimension, to process non-vectorial data, and the natural framework they provide to integrate heterogeneous data are particularly relevant to various problems arising in computational biology. In this chapter we survey some of the most prominent applications published so far, highlighting the particular developments in kernel methods triggered by problems in biology, and mention a few promising research directions likely to expand in the future.
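The ability of kernel methods to handle non-vectorial data comes down to defining a similarity function directly on the objects. A classic example for sequences is the k-mer spectrum kernel, sketched here on short made-up amino-acid strings:

```python
import numpy as np
from collections import Counter

def spectrum_kernel(s, t, k=3):
    # k-mer spectrum kernel: inner product of the two k-mer count vectors,
    # a standard kernel for sequence (non-vectorial) data
    cs = Counter(s[i:i + k] for i in range(len(s) - k + 1))
    ct = Counter(t[i:i + k] for i in range(len(t) - k + 1))
    return sum(cs[m] * ct[m] for m in cs)

seqs = ["MKVLAAGLLL", "MKVLAVGLLL", "QQQPPPQQQP"]
K = np.array([[spectrum_kernel(a, b) for b in seqs] for a in seqs])
print(K)
```

The resulting Gram matrix `K` can be handed to any kernelized learner (SVM, kernel PCA), which is the modularity the chapter emphasizes: swap the kernel, keep the algorithm.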
Quantum computers can in principle solve certain problems exponentially more quickly than their classical counterparts. We have not yet reached the advent of useful quantum computation, but when we do, it will affect nearly all scientific disciplines. In this review, we examine how current quantum algorithms could revolutionize computational biology and bioinformatics. There are potential benefits across the entire field, from the ability to process vast amounts of information and run machine learning algorithms far more efficiently, to algorithms for quantum simulation that are poised to improve computational calculations in drug discovery, to quantum algorithms for optimization that may advance fields from protein structure prediction to network analysis. However, these exciting prospects are susceptible to "hype", and it is also important to recognize the caveats and challenges in this new technology. Our aim is to introduce the promise and limitations of emerging quantum computing technologies in the areas of computational molecular biology and bioinformatics.
This article frames the relation between biology and physics by characterizing the former as a subdiscipline rather than a special case of the latter. To do this, we posit biological physics as the science of living matter in contrast to classic biophysics, the study of organismal properties by physical techniques. At the scale of the individual cell, living matter is nonunitary, i.e., not composed of aggregated subunits, and has features (e.g., intracellular organizational arrangements and biomolecular condensates) that are unlike any materials of the nonliving world. In transiently or constitutively multicellular forms (social microorganisms, animals, plants), living matter sustains physical processes that are generic (shared with nonliving matter, e.g., subunit communication by molecular diffusion in cellular slime molds), biogeneric (analogous to nonliving matter but realized through cellular activities, e.g., subunit demixing in animal embryos) or nongeneric (pertaining to sui generis materials, e.g., budding of active solids in plants). This "forms of matter" perspective is philosophically situated in the dialectical materialism of Engels and Hessen and the multilevel physica
Two blind source separation methods (Independent Component Analysis and Non-negative Matrix Factorization), developed initially for signal processing in engineering, have recently found a number of applications in the analysis of large-scale data in molecular biology. In this short review, we present the common idea behind these methods, describe ways of implementing and applying them, and point out their advantages compared to more traditional statistical approaches. We focus more specifically on the analysis of gene expression in cancer. The review concludes by listing available software implementations for the methods described.
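One of the two methods, NMF, can be illustrated with the standard Lee-Seung multiplicative updates on a small synthetic non-negative matrix (the data, rank, and iteration count are arbitrary choices for the sketch):

```python
import numpy as np

rng = np.random.default_rng(3)

# toy "expression" matrix with two additive non-negative parts (exact rank 2)
W_true = rng.uniform(0, 1, size=(20, 2))
H_true = rng.uniform(0, 1, size=(2, 30))
V = W_true @ H_true

# Lee-Seung multiplicative updates for V ≈ W H under the Frobenius loss
k, eps = 2, 1e-9
W = rng.uniform(0.1, 1, size=(20, k))
H = rng.uniform(0.1, 1, size=(k, 30))
for _ in range(500):
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(err)
```

On gene expression data the columns of `W` would be interpreted as additive "metagenes" and the rows of `H` as their sample-wise activities; non-negativity is what makes that parts-based reading possible.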