Establishing best practice recommendations helps to increase consistency, equity and innovation in clinical genomics services. Bioinformatics approaches are a core component of clinical genomics services that use high-throughput genomic sequencing applied in the diagnosis of rare disorders and cancer. While a broad range of international recommendations exist for genomic diagnostic testing and genetic variant classification, the current UK-specific best practice recommendations for bioinformatics approaches applied in this context are outdated. We assembled a team of bioinformaticians and scientists with diverse expertise in rare disease and cancer genomics applied in clinical diagnostics within the UK National Health Service. Through structured discussion, polls and surveys, we developed an updated set of best practice recommendations for bioinformatics approaches applied to high-throughput genomic sequencing in clinical genomic testing. We provide best practice recommendations across the spectrum of activities within a clinical genomics bioinformatics pipeline, including quality control, primary, secondary and tertiary analysis approaches and shared knowledge bases. We also comment on issues related to software development and maintenance. The recommendations can be applied to multiple sequencing technologies and encompass both targeted and whole genome sequencing approaches applied to germline and tumour DNA samples. The best practice recommendations outlined in this study provide a national framework for adoption and innovation of bioinformatics approaches across diverse clinical genomic testing strategies in the UK National Health Service.
Breakthrough advancements in protein tertiary and quaternary structure prediction have accelerated structural bioinformatics research activity and drug development processes. However, many biological mechanisms involve more complicated interactions, such as those between amino and nucleic acids. Predicting the structure of protein-RNA complexes is highly relevant and challenging due to data scarcity and experimental difficulties. Understanding and interpreting these interactions can yield crucial insights into various human diseases and biological phenomena. Thus, quality assessment methods that specifically evaluate protein-RNA complex models can provide significant utility in this emerging area of protein-RNA structural bioinformatics research. We propose a novel graph transformer-based approach named CARP (complex quality assessment of RNA and protein) to infer multiple quality perspectives of protein-RNA complex models. For a single protein-RNA complex model, in one shot, CARP simultaneously predicts overall fold, overall interface, and per-protein-RNA interface quality estimates. When evaluated against a non-redundant protein-RNA docking benchmark, our method demonstrated clearly improved performance over almost all existing scoring tools, particularly when ranking and selecting the highest-quality decoys. Furthermore, CARP consistently selected higher quality models relative to other predictors when tested on CASP16 targets. Specifically, CARP-predicted global interface and global protein-RNA interface qualities were ranked first and second, respectively, based on the selected top-3 models over all ten CASP16 protein-RNA complex targets. CARP also showed a strong ability to select high-quality AlphaFold3 models, compared to both existing tools and AlphaFold3 self-estimates. CARP is freely available at github.com/zwang-bioinformatics/CARP/. Supplementary information and data are available at Bioinformatics online.
Post-translational modifications (PTMs) alter functional states and interaction specificity largely through the conformational changes they impose on protein structure. However, most existing resources remain sequence-centric and cannot reveal how chemical modifications reshape three-dimensional structures. To address this gap, we propose a structural database that systematically extracts and contextualizes modification sites within experimentally determined protein structures, providing a foundation for future studies of protein structure, function, and regulatory mechanisms. We present StrucPTM, a database that extracts modified residues directly from Protein Data Bank (PDB) structures using atom-level composition rules, substantially expanding coverage beyond annotation-dependent methods. Each validated PTM is mapped onto a UniProt entry. The database further characterizes residues using key structural descriptors, including secondary structure, relative solvent accessibility (RSA), and whether the PTM site lies at an inter-chain interface. All chains associated with the same UniProt ID are compared and grouped into homolog sets based on sequence identity. This emphasizes structural conservation among homologs, allowing PTM-induced conformational deviations to be distinguished from unrelated sequence divergence. StrucPTM offers searchable access, interactive 3D visualization, and homolog-based structural comparison through its web interface: https://prix.hanyang.ac.kr/strucptm. The source code and datasets are permanently archived on Zenodo (DOI: 10.5281/zenodo.18939125) and are accessible via GitHub (https://github.com/HanyangBISLab/StrucPTM.git). Supplementary data are available at Bioinformatics online.
Understanding chemical reactions requires bridging fine-grained molecular edits with broader semantic context. Reaction mechanisms are determined not only by local atom-bond transformations but also by the global reaction class. However, most existing approaches treat these tasks separately or rely on external atom-mapping tools, introducing noise and limiting end-to-end learnability. We introduce MARCC (Mapping-Assisted Reaction Center and Classification), a multi-task graph neural network that jointly predicts atom mappings, reaction centers, and reaction classes within a unified architecture. MARCC integrates three key innovations: (i) a mapping-guided cross-attention mechanism that aligns reactants and products for local edit detection, (ii) a dual-graph design that explicitly reasons about bond-level transformations, and (iii) pooled product embeddings for global reaction classification. On the USPTO-50K benchmark, MARCC achieves state-of-the-art results when trained with both reactants and products, including 98.2% atom mapping accuracy, 99.1% Top-1 edit localization accuracy, and 97.2% reaction classification accuracy. Even under the products-only setting, MARCC delivers competitive performance comparable to specialized baselines. Ablation studies confirm the value of mapping-guided attention and multi-task supervision, which enhance both predictive accuracy and interpretability. By unifying atom-level alignment, local reactivity, and global classification, MARCC provides a structured and interpretable framework for reaction understanding. Beyond benchmarks, MARCC has the potential to support applications in reaction annotation, template discovery, and mechanism inference; with additional domain-specific modeling and data, it could be extended to biochemical domains such as enzyme-catalyzed transformations and metabolic pathway modeling. 
The source code and implementation details are available at https://github.com/maryamastero/MARCC and archived at https://doi.org/10.5281/zenodo.18500230. Supplementary data are available at Bioinformatics online.
Predicting the thermodynamic stability of proteins upon single-point mutations is a pivotal step in both protein engineering and medicine. Existing computational methods for predicting protein thermodynamic stability extract features at either the local or the global level, and each choice has its own advantages and limitations. To leverage the advantages of both, we developed MuFaDDG, a novel sequence-based method that integrates multiscale feature fusion for improved prediction of protein stability changes (ΔΔG). MuFaDDG achieves comparable performance on the S669 benchmark, demonstrating strong performance on stabilizing mutations. Notably, it shows a significant advantage in the ACC metric, with values of 0.75, 0.88, and 0.81 on the direct, reverse, and overall datasets of the CAGI5 Frataxin challenge, respectively. Furthermore, our method outperforms leading sequence-based approaches including THPLM, DDGemb, DDGun, and INPS-Seq on myoglobin stability prediction. Additionally, MuFaDDG demonstrates exceptional predictive performance with higher PCC and ACC on the protein ThreeFoil, which is not curated in the FireProtDB and ProThermDB databases. The source code and data are available at https://github.com/PengjiaMa23/MuFaDDG. Supplementary data are available at Bioinformatics online.
How individuals with conditions, disabilities or abnormalities were treated gives us valuable insights into past societies. Chromosomal aneuploidies, the presence of an abnormal number of chromosome copies, represent the most common large-scale chromosomal abnormalities in human populations. Chromosomal aneuploidies can affect autosomal chromosomes (e.g. Down syndrome) as well as the sex chromosomes (e.g. Klinefelter syndrome), with physical manifestations ranging from mild to severe. While simple to identify genetically, chromosomal aneuploidies are difficult to diagnose from skeletal remains alone, as they present skeletal pathologies consistent with many other conditions. Here we present ChASM (Chromosomal Aneuploidy Screening Methodology), a statistically rigorous Bayesian method for detecting full autosomal and sex chromosomal aneuploidies. The method leverages chromosome-wise read counts and takes into account differences in sequencing methodology, genetic coverage and condition rarity to produce posterior probability estimates for the screening of small and large databases of sequence data. To facilitate ease of use, ChASM has been implemented in R as the package RChASM. RChASM is available under MIT license on the Comprehensive R Archive Network. Supplementary data are available at Bioinformatics online.
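The chromosome-wise read-count idea described in the ChASM abstract can be illustrated with a minimal Bayesian calculation. This is a toy sketch, not the RChASM implementation: the binomial read-count model, the function names, the 3/2 copy-number shift assumed for a full trisomy, and all example numbers are illustrative assumptions.

```python
import math

def log_binom(k, n, p):
    """Binomial log-likelihood, dropping the choose(n, k) constant,
    which cancels in the posterior ratio."""
    return k * math.log(p) + (n - k) * math.log(1.0 - p)

def trisomy_posterior(k, n, p_dip, prior=1e-3):
    """Posterior probability that a chromosome is present in three copies,
    given k of n mapped reads and an expected diploid read fraction p_dip.
    The prior encodes condition rarity, as the abstract describes."""
    # Under a full trisomy this chromosome's reads scale by 3/2 while the
    # rest of the genome is unchanged, shifting its expected read fraction.
    p_tri = 1.5 * p_dip / (1.0 + 0.5 * p_dip)
    log_tri = math.log(prior) + log_binom(k, n, p_tri)
    log_dip = math.log(1.0 - prior) + log_binom(k, n, p_dip)
    m = max(log_tri, log_dip)  # log-sum-exp for numerical stability
    log_total = m + math.log(math.exp(log_tri - m) + math.exp(log_dip - m))
    return math.exp(log_tri - log_total)
```

With a chromosome expected to attract 5% of reads, an observed count near the 3/2-scaled expectation yields a posterior near one despite the rare-condition prior, while a count at the diploid expectation yields a posterior near zero.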
Spatial transcriptomics techniques capture gene expression data and spatial coordinates, while simultaneously correlating them with tissue section images. This advantage makes spatial transcriptomics data highly valuable for research, such as investigating disease mechanisms and cancer prognosis. However, the extended time and high cost of spatial transcriptomic sequencing currently limit further advancements in this field. Numerous deep learning methods have been developed to predict spatial transcriptomics from histology images; however, these approaches often fail to effectively integrate histology images with spatial transcriptomic data. Here, we propose GR2ST, a deep learning model that learns the underlying connections between image features and gene expression to predict spatial transcriptomics. GR2ST leverages a large pre-trained pathology model to extract high-level histological features. We designed a dual-branch graph architecture, consisting of a dynamic threshold-based functional graph and a radius-constrained spatial graph, to capture complex spot interactions within heterogeneous tissues. The model aligns histology images with gene expression representations through a multimodal contrastive learning framework. It achieves adaptive gene expression generation via a Cell-Type Guided Multi-Branch Regression Head supervised by a context-aware weighting network, which is further integrated with cross-sample retrieval to construct an ensemble prediction. The performance of the model is evaluated on three cancer-related spatial transcriptomics datasets, including cutaneous squamous cell carcinoma and two human breast cancer cohorts, to demonstrate its effectiveness and robustness. The source code is available at https://github.com/zjl1109294570/GR2ST. Supplementary data are available at Bioinformatics online.
Determining the functional consequence of missense mutations acquired in the development of cancer is critical to the understanding of the evolution and the therapeutic vulnerabilities of an individual tumour. Several million missense mutations associated with cancer have been reported across different databases with little functional annotation accompanying each mutation. We have designed the MOKCa-3D database (https://bioinformaticslab.sussex.ac.uk/MOKCa-3D/) to enable the contextualization and interpretation of cancer somatic missense mutations, including the structural impact of the mutation on the 3D structure, and whether the mutation results in a gain or loss of the protein's function. For each protein, a sequence feature viewer enables interactive visualization of the amino acid sequence, missense mutations, post-translational modification sites, protein domains, active sites, binding sites, protein-protein interaction sites, and mutational frequency. The mutation-level page concisely presents functional insights for each individual mutation, and an interactive MOL* viewer highlights the mutated residue on an AlphaFold protein structural model. The SAAP structural impact analysis pipeline was used to identify the structural impact of the mutation. MOKCa-3D concisely presents functional insights and structural impacts of cancer somatic missense mutations, enabling users to interpret their functional consequences. It is freely accessible and easy to navigate, making it usable by the widest range of researchers.
The rapid progress in tumour genome sequencing has created a need for bioinformatics tools to interpret the clinical significance of detected variants. VarStack2 integrates information from several publicly available resources, including the Catalogue of Somatic Mutations in Cancer (COSMIC), ClinVar, cBioPortal, the UCSC Genome Browser, ClinicalTrials.gov, and CIViC, and presents it through a user-friendly interface. VarStack2 simplifies the process of retrieving data, saving users significant time compared to manually navigating each database individually. Users can input a variant by specifying a gene symbol, amino acid change, and coding sequence change, with the option to search tumour-specific studies in cBioPortal alongside their primary query. Results are organized into separate sections and can be exported in CSV format for further analysis. Additionally, VarStack2 offers a smart search feature that suggests variants for the gene of interest based on its database search results. These features make VarStack2 a useful tool for scientists and clinicians by enhancing the variant interpretation process and integrating somatic variant information into workflows. VarStack2 is freely available at http://varstack.brown.edu/.
T cell receptor (TCR) and peptide interactions (TPI) are central to T cell immunity. Experimental identification of TPI is time-consuming and labor-intensive; therefore, it is necessary to develop computational prediction methods that exploit existing data to predict TPI. We use large collections of TCR and peptide sequences to pre-train two language models (∼152M parameters) and integrate them into a sequence-only prediction framework (i.e., RoBERTcr) with supervised fine-tuning (SFT). Visualization of amino acid embeddings from the pre-trained language models (PLMs) shows clusters corresponding to different biochemical properties, and our PLMs outperform existing protein language models (i.e., ESM and ProtTrans) under the same conditions. RoBERTcr achieved higher performance than other state-of-the-art methods based on structures or sequences, without dataset bias. Visualization of attention within our framework suggests that TCR residues in contact with the peptide are key to the interaction, capturing valuable spatial information. RoBERTcr is freely available at https://fca_icdb.mpu.edu.mo/robertcr/ and https://zenodo.org/records/19042627. Supplementary data are available at Bioinformatics online.
Spatial transcriptome data have both gene expression information and cell spatial location information, offering exceptional prospects for analyzing cell-cell interaction (CCI) networks. Most existing statistical and optimal transport-based methods rely only on known ligand-receptor pairs to infer CCI networks. Furthermore, most current deep learning frameworks rely on symmetric decoders or undirected graph architectures. Taking advantage of spatial transcriptomic data and graph autoencoders, we present DualCellChat, a directed heterogeneous graph autoencoder-based approach to reconstruct a complete and accurate CCI network from incomplete single-cell spatial transcriptomics. Benchmarked on five single-cell spatial datasets from four different technologies, we demonstrate that DualCellChat outperforms existing deep learning-based methods and can inherently model the direction of cellular interactions. Furthermore, we introduce downstream analyses to infer signature genes involved in cellular interactions from the reconstructed CCI network and to infer significant ligand-receptor pairs for specific cell types. The dataset and code are available on GitHub (https://github.com/JinxianHu/DualCellChat) and Zenodo (DOI: 10.5281/zenodo.18512678). Supplementary materials are available at Bioinformatics online.
Advances in DNA sequencing have outpaced advances in computation, making sequence alignment a major bottleneck in genome data analyses. Classical dynamic programming (DP) algorithms are particularly memory-intensive, especially when computing gap-affine and dual gap-affine alignments. Existing strategies to reduce memory consumption often sacrifice speed or alignment accuracy. We present Singletrack, an efficient algorithm for backtracing gap-affine and dual gap-affine alignments that requires storing a single DP matrix while preserving optimal alignment results. Compared to classical DP algorithms, Singletrack removes the need to store additional matrices (i.e., 2 for gap-affine and 4 for dual gap-affine), significantly reducing memory consumption and, in turn, reducing pressure on the memory hierarchy and improving overall performance. Most importantly, Singletrack is a general backtrace method compatible with state-of-the-art DP-based algorithms and heuristics, such as the Suzuki-Kasahara (SK) and the Wavefront Alignment (WFA) algorithms. We demonstrate that Singletrack reduces memory consumption for both SK and WFA algorithms, lowering SK usage by 2× and 4× and WFA usage by 3× and 5× for gap-affine and dual gap-affine alignments, respectively. Moreover, replacing KSW2's memory-reduction technique with Singletrack accelerates its SK implementation by up to 1.4× at the cost of doubling memory consumption, while Singletrack increases the performance of the WFA implementation in WFA2-lib by 1.2-2.1×. Compared to the efficient linear-memory BiWFA algorithm, the Singletrack-accelerated version of WFA trades a practical increase in memory usage for up to 5.2× higher performance. The Singletrack implementations presented in this work are available on Zenodo (DOI: 10.5281/zenodo.18770585) and GitHub (https://github.com/LorienLV/singletrack). Supplementary data are available at Bioinformatics online.
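For context on the memory overhead the Singletrack abstract refers to, the classical gap-affine recurrence (Gotoh's algorithm) maintains three DP matrices. The sketch below shows that standard three-matrix scheme, scoring only, with illustrative penalty values; it does not implement Singletrack's single-matrix backtrace.

```python
NEG = float("-inf")

def gotoh_score(a, b, match=0, mismatch=4, gap_open=6, gap_extend=2):
    """Classical three-matrix gap-affine DP (Gotoh): M tracks alignments
    ending in a match/mismatch, I a gap in b, and D a gap in a.
    Penalties are illustrative; scores are maximized (penalties negative)."""
    n, m = len(a), len(b)
    M = [[NEG] * (m + 1) for _ in range(n + 1)]
    I = [[NEG] * (m + 1) for _ in range(n + 1)]
    D = [[NEG] * (m + 1) for _ in range(n + 1)]
    M[0][0] = 0
    for i in range(1, n + 1):  # leading gap in b
        I[i][0] = -(gap_open + gap_extend * i)
    for j in range(1, m + 1):  # leading gap in a
        D[0][j] = -(gap_open + gap_extend * j)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            I[i][j] = max(M[i - 1][j] - gap_open - gap_extend,
                          I[i - 1][j] - gap_extend)
            D[i][j] = max(M[i][j - 1] - gap_open - gap_extend,
                          D[i][j - 1] - gap_extend)
            s = match if a[i - 1] == b[j - 1] else -mismatch
            M[i][j] = s + max(M[i - 1][j - 1], I[i - 1][j - 1], D[i - 1][j - 1])
    return max(M[n][m], I[n][m], D[n][m])
```

Dual gap-affine scoring adds a second pair of gap matrices (five in total), which is the memory pressure that single-matrix backtracing aims to remove.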
Dimensionality reduction for single-cell RNA-sequencing (scRNA-seq) data involving multiple biological samples presents a significant analytical challenge. We introduce MUlti-Sample Trajectory-Assisted Reduction of Dimensions (MUSTARD), an innovative trajectory-guided dimensionality reduction method specifically designed for multi-sample, multi-condition scRNA-seq data. By integrating pseudotemporal information, MUSTARD provides a comprehensive unsupervised approach that simultaneously captures major gene expression variation patterns along pseudotime trajectories and across multiple samples, facilitating the discovery of biologically meaningful sample heterogeneity, endotypes, and associated gene markers and modules. In data-driven simulations, MUSTARD outperformed existing methods in distinguishing sample groups, achieving superior out-of-sample prediction accuracy. In two COVID-19 datasets and a tuberculosis dataset, MUSTARD identified components linked to symptom severity, batch effect, and other known biological variations, with notable overlap in immune response genes across the two independent COVID-19 datasets. These results underscore MUSTARD's flexibility and power in identifying biologically relevant sample heterogeneity across diverse datasets. The R package MUSTARD with a detailed user manual is publicly available at https://github.com/haotian-zhuang/MUSTARD and Zenodo (DOI: 10.5281/zenodo.18293392). The source code to reproduce the results in this paper is available at https://github.com/haotian-zhuang/MUSTARD_Paper and Zenodo (DOI: 10.5281/zenodo.18293392). Supplementary data are available at Bioinformatics online.
Integrated analysis across biological databases is becoming increasingly important in life science research, leading many public databases to adopt Semantic Web technologies, also known as knowledge graphs. However, biological data possesses inherently complex and diverse structures, which makes the resulting Resource Description Framework (RDF) schemas intricate and difficult for non-expert users to master, preventing them from translating natural language questions into correct SPARQL queries. Although recent large language model (LLM)-based approaches show potential for automatic SPARQL query generation, they often suffer from structural hallucinations and require large-scale training data to capture schema-specific structures. In this study, we propose a novel framework that avoids hallucinations and requires no training data by combining LLM-based word extraction with a schema-based SPARQL query builder. The LLM extracts variables and parameters from the user's question based on a predefined schema, and the query builder generates a syntactically correct SPARQL query accordingly. By providing a predefined schema in prompts, our method eliminates the need for training data. Experimental results on UniProt, Rhea, and Bgee demonstrate that our method outperforms baseline LLM-based methods using fine-tuning and prompt-tuning in terms of the similarity between search results obtained from generated and expert-written queries. Furthermore, we developed a proof-of-concept chatbot system that enables users to query RDF databases via natural language input, demonstrating the practical utility of our approach in improving access to biological data resources. Experimental environment: https://github.com/scott2121/sparql_query_generator (DOI: https://doi.org/10.5281/zenodo.18539213). Chatbot: https://github.com/scott2121/sparql_query_chatbot (DOI: https://doi.org/10.5281/zenodo.18539225). Supplementary data are available at Bioinformatics online.
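The division of labour described in this abstract, in which the LLM only extracts slot values and a schema-based builder guarantees syntactic correctness, can be sketched as follows. The schema, template, and function names here are hypothetical illustrations, not the authors' code.

```python
# Hypothetical sketch of schema-constrained SPARQL generation: the language
# model only supplies slot values; the builder fills a fixed template, so the
# output is syntactically valid by construction. The template and schema are
# illustrative, not the actual queries used in the study.
TEMPLATE = (
    "PREFIX up: <http://purl.uniprot.org/core/>\n"
    "SELECT ?protein WHERE {{\n"
    "  ?protein a up:Protein ;\n"
    '           up:mnemonic "{mnemonic}" .\n'
    "}} LIMIT {limit}"
)

SCHEMA = {"mnemonic": str, "limit": int}  # slots the LLM may fill

def build_query(slots):
    """Validate LLM-extracted slot values against the schema, then fill
    the template; structural hallucinations are impossible by design."""
    for name, value in slots.items():
        if name not in SCHEMA:
            raise ValueError(f"unknown slot: {name}")
        if not isinstance(value, SCHEMA[name]):
            raise ValueError(f"slot {name} expects {SCHEMA[name].__name__}")
    return TEMPLATE.format(**slots)
```

Because the builder rejects unknown or ill-typed slots, a hallucinated extraction fails loudly instead of producing a malformed query.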
Duplex-Indel is a novel Snakemake workflow for detecting somatic small insertions and deletions (indels) from Tn5 transposase-based duplex sequencing data, extending somatic mutation calling in such data to include indels. It enhances the accuracy of mutation calling at the single-molecule level by requiring consensus support from both DNA strands for each somatic indel, minimizing confounding from technical artifacts. We have demonstrated the accuracy and robustness of Duplex-Indel using cancer cell lines. Source code and documentation are available under the MIT license on GitHub at https://github.com/ealee-lab/duplex-indel and archived on Zenodo at https://doi.org/10.5281/zenodo.19228799. Supplementary data are available at Bioinformatics online.
The biological functions of RNAs are tightly connected to their specific RNA structures. As experimental techniques to determine high-accuracy structures are costly and time-consuming, computational prediction approaches have become indispensable for biological RNA research; most notably, the prediction of minimum free energy secondary structures. Pseudoknots are prevalent, highly significant structural motifs, yet they are commonly ignored to achieve acceptable efficiency. Existing reliable pseudoknot prediction methods typically have prohibitive complexity. HFold, which follows the hierarchical folding hypothesis, suggested a route to fast, scalable pseudoknot prediction. The recent successful sparsification of the CCJ pseudoknot prediction algorithm in Knotty promises a further boost by introducing this technique to hierarchical folding. We introduce Spark, a sparsified algorithm for predicting pseudoknotted RNA structures. Spark predicts exactly the same minimum-energy structures as its predecessor HFold in the accurate HotKnots 2.0 energy model for pseudoknots. While sparsification maintains exact energy minimization and theoretical complexity, it strongly improves the time and space consumption over HFold. We benchmarked the performance of Spark against HFold and, as a pseudoknot-free baseline, RNAfold. Compared with HFold, Spark substantially reduces both run time and memory usage, while achieving run times close to RNAfold. Across all tested sequence lengths, Spark used the least memory and consistently ran faster than HFold. Combining sparsification and hierarchical folding in Spark results in a remarkably fast and memory-efficient tool for the accurate prediction of pseudoknotted RNA structures. Consequently, Spark practically enables pseudoknot prediction at large scale and even for very long RNA sequences.
Spark software is available on Github (https://github.com/TheCOBRALab/Spark), with a permanent archive of the software and results deposited on Zenodo (https://doi.org/10.5281/zenodo.19073315). Supplementary data are available at Bioinformatics online.
Genome-scale metabolic network (GSMN) models enable flux-based metabolite fate discovery, metabolic engineering, drug target identification, and multi-omics integration. However, programming requirements, architectural complexity, and limited visualization support impede their adoption by the broader scientific community. Existing tools specialize exclusively in either GSMN analysis or visualization, lacking important features such as pathway-specific views, database-integrated refinement, and comprehensive enrichment and perturbation analyses. Here, we present NAViFluX (metabolic Network Analysis and Visualization of Flux), a visualization-centric, web browser-based tool that unifies native pathway/subsystem map generation, interactive model refinement via KEGG/BiGG, pathway merging, and modules for flux computation, topology, and functional enrichment, all within network views. Using three independent case studies on Escherichia coli, we demonstrate the utility of NAViFluX for characterizing nutrient-specific metabolic adaptations, enhancing gene essentiality predictions and their interpretability, and rationally designing an optimized carbon-fixing metabolic state. All source code and supplementary files associated with the case studies are publicly available via Zenodo at https://zenodo.org/records/19107831. NAViFluX can be easily installed as a standalone software through https://github.com/bnsb-lab-iith/NAViFluX. Supplementary data are available at Bioinformatics online.
Phylogenetic trees are ubiquitous and central to biology, but most published trees are available only as visual diagrams and not in the machine-readable Newick format. Thousands of published trees in the scientific literature are thus unavailable for follow-up analyses, comparisons, and supertree construction. Experts can easily read such diagrams, but the manual construction of a Newick string from a diagram is laborious, error-prone, and time-consuming. Previous attempts to semi-automate the reading of tree images relied on image processing techniques. These often encounter difficulties as typical published tree diagrams contain various graphical elements and annotations that overlap the branches, such as error bars on internal nodes. Here we introduce Treemble, a user-friendly desktop application for generating Newick strings from tree images. The user simply clicks to mark node locations, assisted by a deep learning-based node detection tool, and Treemble algorithmically assembles the tree from the node coordinates alone. Treemble also facilitates the automatic reading of tip name labels and can be used for both rectangular and circular trees. Treemble is a native desktop application for macOS and Windows and is freely available, with documentation, at treemble.org. Source code is available at github.com/John-Allard/Treemble. The trained node detection model is available at huggingface.co/John-Allard/treemble-1. Supplementary data are available at Bioinformatics online.
Molecular dynamics (MD) simulations model the physical movements of atoms in biomolecular systems over time, providing atomic-resolution insight into conformational changes, binding events, and dynamic behaviors that cannot be captured by static structures alone. As such, MD simulations are playing an increasingly important role in understanding the functional roles and molecular interactions of proteins. However, trajectories from these simulations can be extremely large, often reaching tens of gigabytes for a single simulation of modest duration. This creates substantial challenges for storage and data transfer, motivating efficient compression strategies. Furthermore, many downstream analyses require extraction of only a subset of frames or specific atoms from the full trajectory, so an ideal compression format should support rapid random-access decompression of such samplings without requiring full file decompression. Here, we introduce MDCompress, a new trajectory compression format and accompanying software implementation that meets these goals. MDCompress produces compressed trajectory files that are 15-37% smaller than those generated by the widely used XTC format, while achieving faster compression and decompression speeds through efficient multithreading. The MDCompress software and library are released under an open license (BSD-3) and can be downloaded from https://github.com/refresh-bio/mdcompress; they are also archived on Zenodo (DOI: 10.5281/zenodo.19218347).
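Random-access decompression of the kind this abstract calls for is commonly achieved by compressing frames in independently decodable blocks and keeping an offset index. The sketch below illustrates that general idea with zlib; it is a toy illustration, not the MDCompress format.

```python
import zlib

def compress_blocks(frames, block_size=100):
    """Compress fixed-size frames in independent blocks and record an
    (offset, size) index so any block can be decompressed on its own."""
    data, index, offset = b"", [], 0
    for i in range(0, len(frames), block_size):
        blob = zlib.compress(b"".join(frames[i:i + block_size]))
        index.append((offset, len(blob)))
        data += blob
        offset += len(blob)
    return data, index

def read_frame(data, index, frame_len, k, block_size=100):
    """Random access: decompress only the block containing frame k."""
    offset, size = index[k // block_size]
    raw = zlib.decompress(data[offset:offset + size])
    j = k % block_size
    return raw[j * frame_len:(j + 1) * frame_len]
```

Fetching one frame touches a single block rather than the whole file, which is the property that makes per-frame and per-atom extraction cheap.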
Understanding the mutational landscape at the pan-cancer level offers critical insights into the molecular mechanisms underlying tumorigenesis. While patient-level machine learning techniques have been widely employed to identify tumor subtypes, cohort-level clustering, where entire cancer types are grouped based on shared molecular features, has largely relied on classical statistical methods. In this study, we introduce a novel unsupervised contrastive learning framework to cluster 43 cancer types based on coding mutation data derived from the COSMIC database. For each cancer type, we construct two complementary mutation signatures: a gene-level profile capturing nucleotide substitution patterns across the most frequently mutated genes, and a chromosome-level profile representing normalized substitution frequencies across chromosomes. These dual views are encoded using TabNet encoders and optimized via a multi-scale contrastive learning objective (NT-Xent loss) to learn unified cancer-type embeddings. We demonstrate that the resulting latent representations yield biologically meaningful clusters of cancer types, aligning with known mutational processes and tissue origins. Our work represents the first application of contrastive learning to cohort-level cancer clustering, offering a scalable and interpretable framework for mutation-driven cancer subtyping. Data and code are available at https://github.com/25Nov/MS-ConTab. Supplementary material includes Supplementary Tables 1-3 and Supplementary Figure 1, which provide additional data supporting the main results.
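The NT-Xent objective named in this abstract is the standard normalized temperature-scaled cross-entropy loss from contrastive learning. Below is a minimal pure-Python version for a batch of paired views; it illustrates the loss itself, not the authors' multi-scale implementation.

```python
import math

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent loss for a batch of paired views: z1[i] and z2[i] are the
    two embeddings of the same item (e.g. gene-level and chromosome-level
    signatures); every other embedding in the batch acts as a negative."""
    def normalize(v):
        s = math.sqrt(sum(x * x for x in v))
        return [x / s for x in v]
    z = [normalize(v) for v in z1] + [normalize(v) for v in z2]
    n = len(z1)
    def sim(i, j):  # cosine similarity scaled by temperature
        return sum(p * q for p, q in zip(z[i], z[j])) / tau
    total = 0.0
    for i in range(2 * n):
        pos = (i + n) % (2 * n)  # index of i's positive partner
        denom = sum(math.exp(sim(i, j)) for j in range(2 * n) if j != i)
        total += -math.log(math.exp(sim(i, pos)) / denom)
    return total / (2 * n)
```

Minimizing this loss pulls the two views of each cancer type together while pushing apart views of different cancer types, which is what yields the unified embeddings described above.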