We introduce ChemPro, a progressive benchmark of 4100 natural language question-answer pairs in chemistry, organized into 4 coherent sections of increasing difficulty and designed to assess the proficiency of Large Language Models (LLMs) across a broad spectrum of general chemistry topics. We include multiple-choice and numerical questions spanning fine-grained information recall, long-horizon reasoning, multi-concept questions, problem-solving with nuanced articulation, and straightforward questions in a balanced ratio, effectively covering biochemistry, inorganic chemistry, organic chemistry, and physical chemistry. ChemPro is carefully designed to mirror a student's academic evaluation from basic to high-school chemistry. The gradual increase in question difficulty rigorously tests the ability of LLMs to progress from solving basic problems to solving more sophisticated challenges. We evaluate 45+7 state-of-the-art LLMs, spanning both open-source and proprietary variants, and our analysis reveals that while LLMs perform well on basic chemistry questions, their accuracy declines with different types and levels of complexity. These findings highlight the critical limitations of LLMs in
Biodiversity loss is accelerating at an unprecedented pace, threatening ecosystem stability, economic resilience, and human well-being, with billions required to reverse current trends. Against this backdrop, biodiversity finance has emerged as a rapidly expanding but highly fragmented field spanning ecology, economics, finance, accounting, and policy. The field remains young and complex, with the majority of relevant knowledge produced in non-finance journals. This study employs quantitative bibliometric analysis to examine a corpus of 189,456 references underlying 3,998 articles related to biodiversity and finance. The analysis identifies eight primary research streams within the field, concerning (1) strategic and financial approaches in global biodiversity conservation, (2) the impact and implementation of payments for environmental services (PES) in developing countries, (3) neoliberal influences and implications in environmental conservation, (4) biodiversity offsets and conservation, (5) ecosystem services and biodiversity, (6) integrating conservation and community interests in biodiversity management, (7) balancing agricultural intensification with biodiversity
Biodiversity loss is a critical planetary boundary, yet its connection to computing remains largely unexamined. Prior sustainability efforts in computing have focused on carbon and water, overlooking biodiversity due to the lack of appropriate metrics and modeling frameworks. This paper presents the first end-to-end analysis of biodiversity impact from computing systems. We introduce two new metrics--Embodied Biodiversity Index (EBI) and Operational Biodiversity Index (OBI)--to quantify biodiversity impact across the lifecycle, and present FABRIC, a modeling framework that links computing workloads to biodiversity impacts. Our evaluation highlights the need to consider biodiversity alongside carbon and water in sustainable computing design and optimization. The code is available at https://github.com/TianyaoShi/FABRIC.
Given the ongoing, human-induced loss of wild species, we propose the Target and Cost Analysis (TCA) approach as a means of incorporating biodiversity within government appraisals of public spending. Influenced by how carbon is priced in countries around the world, the resulting biodiversity shadow price reflects the marginal cost of meeting government targets while avoiding disagreements over the use of willingness-to-pay measures to value biodiversity. Examples of how to operationalize TCA are developed at different scales and for alternative biodiversity metrics, including extinction risk for Europe and species richness in the UK. Pricing biodiversity according to agreed targets allows trade-offs with other wellbeing-enhancing uses of public funds to be sensibly undertaken without jeopardizing those targets, and is compatible with international guidelines on Cost Benefit Analysis.
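The shadow-price idea above can be sketched numerically: under TCA, the price attached to biodiversity is the marginal cost of meeting the government's target, which a toy model can approximate by differentiating a cost curve at the target level. The function name and the quadratic cost curve below are illustrative assumptions, not taken from the paper.

```python
def shadow_price(total_cost, target, eps=1e-4):
    """Finite-difference approximation of the marginal cost of tightening
    a biodiversity target: a TCA-style shadow price at the current target."""
    return (total_cost(target + eps) - total_cost(target - eps)) / (2 * eps)

# Hypothetical convex cost curve: securing q units of conservation
# outcome costs 3*q^2 (illustrative only; real curves would be empirical).
cost = lambda q: 3.0 * q ** 2
price_at_target = shadow_price(cost, target=10.0)  # analytic derivative is 60
```

Because the cost curve is convex, the shadow price rises as the target tightens, which is what allows trade-offs against other uses of public funds without abandoning the target itself.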
Over the last decade, several attempts have been made to extend biodiversity studies so that researchers can explore how biodiversity-ecosystem functioning relationships change across spatial and temporal scales. Unfortunately, the studies based on these attempts often overlooked the serious issues that can arise when quantifying biodiversity effects at larger scales: biodiversity effects measured across space and time can contain trivial effects that are unrelated to the role of biodiversity per se -- or even effects that are non-biological in nature, being simple artefacts of how properties and entities are counted and quantified. Here we outline and describe three such pseudo-biodiversity effects: Population-level effects, Independence effects, and Arithmetic effects. Population-level effects relate to temporal changes driven by individual species' population growth or development, and are thus independent of biodiversity. Independence and Arithmetic effects (which we explore here primarily in a spatial context) arise either as a simple consequence of the fact that not all species are present everywhere -- i.e.
To enhance large language models (LLMs) for chemistry problem solving, several LLM-based agents augmented with tools have been proposed, such as ChemCrow and Coscientist. However, their evaluations are narrow in scope, leaving a large gap in understanding the benefits of tools across diverse chemistry tasks. To bridge this gap, we develop ChemToolAgent, an enhanced chemistry agent over ChemCrow, and conduct a comprehensive evaluation of its performance on both specialized chemistry tasks and general chemistry questions. Surprisingly, ChemToolAgent does not consistently outperform its base LLMs without tools. Our error analysis with a chemistry expert suggests that for specialized chemistry tasks, such as synthesis prediction, we should augment agents with specialized tools; however, for general chemistry questions like those in exams, an agent's ability to reason correctly with chemistry knowledge matters more, and tool augmentation does not always help.
The impact of international tourism on biodiversity risks has received considerable attention, yet quantitative research in this field remains relatively limited. This study constructs a biodiversity risk index for 155 countries and regions spanning the years 2001 to 2019, analysing how international tourism influences biodiversity risks in destination countries. The results indicate that the growth of international tourism significantly elevates biodiversity risks, with these effects displaying both lagging and cumulative characteristics. Furthermore, spatial analysis shows that international tourism also intensifies biodiversity risks in neighbouring countries. The extent of its impact varies according to the tourism model and destination. In addition, government regulations and international financial assistance play a crucial role in mitigating the biodiversity risks associated with international tourism.
Statistical inference on biodiversity has a rich history going back to R. A. Fisher. An influential ecological theory suggests the existence of a fundamental biodiversity number, denoted $\alpha$, which coincides with the precision parameter of a Dirichlet process (DP). In this paper, motivated by this theory, we develop Bayesian nonparametric methods for statistical inference on biodiversity, building on the literature on Gibbs-type priors. We argue that $\sigma$-diversity is the most natural extension of the fundamental biodiversity number and discuss strategies for its estimation. Furthermore, we develop novel theory and methods starting with an Aldous-Pitman (AP) process, which serves as the building block for any Gibbs-type prior with a square-root growth rate. We propose a modeling framework that accommodates the hierarchical structure of Linnean taxonomy, offering a more refined approach to quantifying biodiversity. The analysis of a large and comprehensive dataset on Amazon tree flora provides a motivating application.
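The link between the fundamental biodiversity number and the DP precision parameter can be made concrete with a minimal simulation: sequential sampling from a DP follows the Chinese restaurant process, and the expected number of distinct species among $n$ individuals grows roughly as $\alpha \log n$. This is a toy sketch of the classical DP case only, not the paper's Gibbs-type or Aldous-Pitman machinery; the function name is illustrative.

```python
import random

def crp_species_count(n, alpha, seed=0):
    """Chinese restaurant process draw: number of distinct species
    ("tables") among n sampled individuals under a Dirichlet process
    with precision alpha."""
    rng = random.Random(seed)
    sizes = []  # current species abundances
    for i in range(n):
        # the (i+1)-th individual is a new species with prob alpha/(alpha+i)
        if rng.random() * (alpha + i) < alpha:
            sizes.append(1)
        else:
            # otherwise it joins an existing species, proportional to abundance
            r = rng.random() * i
            acc = 0.0
            for k, c in enumerate(sizes):
                acc += c
                if r < acc:
                    sizes[k] += 1
                    break
    return len(sizes)
```

With `alpha = 10` and `n = 1000`, repeated draws concentrate near the theoretical mean of roughly $\alpha \log(1 + n/\alpha) \approx 46$ species, illustrating why $\alpha$ governs how quickly new species appear as sampling effort grows.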
The spatial distribution of the chemical reservoirs in protoplanetary disks is key to elucidating the composition of planets, especially habitable ones. However, the partitioning of the main elements among the refractory and volatile phases is still elusive. Key parameters such as the carbon-to-oxygen (C/O) elemental ratio and the ionization fraction remain poorly constrained, with the latter potentially orders of magnitude lower than in the interstellar medium. Moreover, the thermal structure of the gas is also poorly known, despite its deep influence on gas-phase chemistry. In this context, ortho-to-para ratios could provide selective and sensitive probes. Recent ALMA observations have measured the spatially resolved column density of ortho- and para-H2CO in the transition disk orbiting TW Hya and derived the radial profile of the ortho-to-para ratio. Yet, current disk models do not include the nuclear-spin-resolved chemistry required to interpret these observations. The present work aims to fill this gap by combining a parametric disk physical model of TW Hya with the UGAN network, updated to include a comprehensive description of the nuclear-spin-resolved chemistry of formaldehyde.
This paper describes a cascading multimodal pipeline for high-resolution biodiversity mapping across Europe, integrating species distribution modeling, biodiversity indicators, and habitat classification. The proposed pipeline first predicts species compositions using a deep-SDM, a multimodal model trained on remote sensing, climate time series, and species occurrence data at 50x50m resolution. These predictions are then used to generate biodiversity indicator maps and classify habitats with Pl@ntBERT, a transformer-based LLM designed for species-to-habitat mapping. With this approach, continental-scale species distribution maps, biodiversity indicator maps, and habitat maps are produced, providing fine-grained ecological insights. Unlike traditional methods, this framework enables joint modeling of interspecies dependencies, bias-aware training with heterogeneous presence-absence data, and large-scale inference from multi-source remote sensing inputs.
Accelerated materials discovery is critical for addressing global challenges. However, developing new laboratory workflows relies heavily on real-world experimental trials, and this can hinder scalability because of the need for numerous physical make-and-test iterations. Here we present MATTERIX, a multiscale, graphics processing unit-accelerated robotic simulation framework designed to create high-fidelity digital twins of chemistry laboratories, thus accelerating workflow development. This multiscale digital twin simulates robotic physical manipulation, powder and liquid dynamics, device functionalities, heat transfer and basic chemical reaction kinetics. This is enabled by integrating realistic physics simulation and photorealistic rendering with a modular graphics processing unit-accelerated semantics engine, which models logical states and continuous behaviors to simulate chemistry workflows across different levels of abstraction. MATTERIX streamlines the creation of digital twin environments through open-source asset libraries and interfaces, while enabling flexible workflow design via hierarchical plan definition and a modular skill library that incorporates learning-based
Multimodal scientific reasoning remains a significant challenge for large language models (LLMs), particularly in chemistry, where problem-solving relies on symbolic diagrams, molecular structures, and structured visual data. Here, we systematically evaluate 40 proprietary and open-source multimodal LLMs, including GPT-5, o3, Gemini-2.5-Pro, and Qwen2.5-VL, on a curated benchmark of Olympiad-style chemistry questions drawn from over two decades of U.S. National Chemistry Olympiad (USNCO) exams. These questions require integrated visual and textual reasoning across diverse modalities. We find that many models struggle with modality fusion; in some cases, removing the image even improves accuracy, indicating misalignment in vision-language integration. Chain-of-Thought prompting consistently enhances both accuracy and visual grounding, as demonstrated through ablation studies and occlusion-based interpretability. Our results reveal critical limitations in the scientific reasoning abilities of current MLLMs, providing actionable strategies for developing more robust and interpretable multimodal systems in chemistry. This work provides a timely benchmark for measuring progress in
Biodiversity loss driven by agricultural intensification is a pressing global issue, with significant implications for ecosystem stability and human well-being. Existing policy instruments have so far proven insufficient in halting this decline, which raises the need to explore the feedback loops that are pivotal to ecosystem degradation. We design a minimal integrated bio-economic agent-based model to qualitatively explore macro-level biodiversity trends, as influenced by individual farmer behavior within simple decision-making processes. Our model predicts further biodiversity decline under a business-as-usual scenario, primarily due to intensified land consolidation. We evaluate two policy options: reducing pesticide use and subsidizing small farmers. While pesticide reduction initially benefits biodiversity, it eventually leads to increased land consolidation and further biodiversity loss. In contrast, subsidizing small farmers by reallocating a small fraction of existing subsidies stabilizes farm sizes and enhances biodiversity in the long run. The most effective strategy results from combining both policies, leveraging pesticide reduction alongside ta
The rapid expansion of urban areas challenges biodiversity conservation, requiring innovative ecosystem management. This study explores the role of Artificial Intelligence (AI) in urban biodiversity conservation, its applications, and a framework for implementation. Key findings show that: (a) AI enhances species detection and monitoring, achieving over 90% accuracy in urban wildlife tracking and invasive species management; (b) integrating data from remote sensing, acoustic monitoring, and citizen science enables large-scale ecosystem analysis; and (c) AI decision tools improve conservation planning and resource allocation, increasing prediction accuracy by up to 18.5% compared to traditional methods. The research presents an AI-Driven Framework for Urban Biodiversity Management, highlighting AI's impact on monitoring, conservation strategies, and ecological outcomes. Implementation strategies include: (a) standardizing data collection and model validation, (b) ensuring equitable AI access across urban contexts, and (c) developing ethical guidelines for biodiversity monitoring. The study concludes that integrating AI in urban biodiversity conservation requires balancing innovation
Multimodal Foundation Models (FMs) offer a path to learn general-purpose representations from heterogeneous ecological data that are easily transferable to downstream tasks. However, practical biodiversity modelling remains fragmented; separate pipelines and models are built for each dataset and objective, which limits reuse across regions and taxa. In response, we present BioAnalyst, to our knowledge the first multimodal Foundation Model tailored to biodiversity analysis and conservation planning in Europe, at $0.25^{\circ}$ spatial resolution, targeting regional- to national-scale applications. BioAnalyst employs a transformer-based architecture, pre-trained on extensive multimodal datasets that align species occurrence records with remote sensing indicators and climate and environmental variables. After pre-training, the model is adapted via lightweight roll-out fine-tuning to a range of downstream tasks, including joint species distribution modelling, biodiversity dynamics, and population trend forecasting. The model is evaluated on two representative downstream use cases: (i) joint species distribution modelling with 500 vascular plant species and (ii) monthly climate linear probing with temp
Transformative changes in our production and consumption habits are needed to halt biodiversity loss. Organizations structure much of everyday human life, and they cause a large share of our negative environmental impacts, often expressed as carbon and biodiversity footprints. Here we explore how the accounts of any organization can be exploited to develop an integrated carbon and biodiversity footprint account. As a metric we utilize spatially explicit potential global loss of species across all ecosystem types and argue that it can be understood as the biodiversity equivalent. The biodiversity equivalent could serve biodiversity accounting much as the carbon dioxide equivalent serves climate accounting. We provide a global, country-specific dataset that organizations, experts, and researchers can use to assess consumption-based biodiversity footprints. We also argue that the current integration of financial and environmental accounting is superficial, and provide a framework for a more robust financial value-transforming accounting model. To test the methodologies, we utilized a Finnish university as a living lab. Assigning an offsetting cost to the footprints significan
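The biodiversity-equivalent accounting described above reduces, at its simplest, to multiplying an organization's spending in each consumption category by a country- and category-specific species-loss intensity and summing the results. A minimal sketch follows, with entirely hypothetical intensity factors; real values would come from the paper's country-specific dataset.

```python
def biodiversity_footprint(spend_eur, intensity_per_eur):
    """Consumption-based biodiversity footprint: spending per category
    times a species-loss intensity factor, summed into a single
    'biodiversity equivalent' figure (toy factors, for illustration)."""
    return sum(spend_eur[c] * intensity_per_eur[c] for c in spend_eur)

# Hypothetical intensities (potential global species loss per EUR spent).
spend = {"energy": 1_000_000, "food services": 250_000, "travel": 400_000}
intensity = {"energy": 2.1e-9, "food services": 9.5e-9, "travel": 4.0e-9}
footprint = biodiversity_footprint(spend, intensity)
```

Attaching an offsetting cost to this aggregate figure is then a single multiplication, which is what makes the biodiversity equivalent usable inside ordinary financial accounts in the same way a carbon price attaches to CO2e.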
Biodiversity research requires complete and detailed information to study ecosystem dynamics at different scales. Data-driven methods such as machine learning are gaining traction in ecology and, more specifically, in biodiversity research, offering alternative modelling pathways. For these methods to deliver accurate results, large, curated, multimodal datasets with granular spatial and temporal resolution are needed. In this work, we introduce BioCube, a multimodal, fine-grained global dataset for ecology and biodiversity research. BioCube incorporates species observations through images, audio recordings and descriptions, environmental DNA, vegetation indices, agricultural, forest and land indicators, and high-resolution climate variables. All observations are geospatially aligned under the WGS84 geodetic system, spanning from 2000 to 2020. The dataset is available at https://huggingface.co/datasets/BioDT/BioCube, and the acquisition and processing code base at https://github.com/BioDT/bfm-data.
Artificial Intelligence (AI) is revolutionizing biodiversity research by enabling advanced data analysis, species identification, and habitat monitoring, thereby enhancing conservation efforts. Ensuring reproducibility in AI-driven biodiversity research is crucial for fostering transparency, verifying results, and promoting the credibility of ecological findings. This study investigates the reproducibility of deep learning (DL) methods within the biodiversity domain. We design a methodology for evaluating the reproducibility of biodiversity-related publications that employ DL techniques across three stages. We define ten variables essential for method reproducibility, divided into four categories: resource requirements, methodological information, uncontrolled randomness, and statistical considerations. These categories subsequently serve as the basis for defining different levels of reproducibility. We manually extract the availability of these variables from a curated dataset comprising 61 publications identified using the keywords provided by biodiversity experts. Our study shows that the dataset is shared in 47% of the publications; however, a significant number of the publicat
In the contemporary era, biodiversity conservation emerges as a paramount challenge, necessitating innovative approaches to monitoring, preserving, and enhancing the natural world. This paper explores the integration of blockchain technology in biodiversity conservation, offering a novel perspective on how digital resilience can be built within ecological contexts. Blockchain, with its decentralized and immutable ledger and tokenization affordances, presents a groundbreaking solution for the accurate monitoring and tracking of environmental assets, thereby addressing the critical need for transparency and trust in conservation efforts. Unlike previous, more theoretical approaches, this study addresses the research question of how blockchain supports digital resilience in biodiversity conservation and presents a grounded framework that justifies which blockchain features are essential to decipher specific data contribution and data leveraging processes in an effort to protect our planet's biodiversity, while boosting potential economic benefits for all actors involved, from local farmers, to hardware vendors and artificial intelligence experts, to investors and regular users, volunt
Efficient chemical kinetic model inference and application in combustion are challenging due to large ODE systems and widely separated time scales. Machine learning techniques have been proposed to streamline these models, though strong nonlinearity and numerical stiffness combined with noisy data sources make their application challenging. Here, we introduce ChemKANs, a novel neural network framework with applications in both model inference and simulation acceleration for combustion chemistry. ChemKANs' novel structure augments generic Kolmogorov-Arnold Network Ordinary Differential Equations (KAN-ODEs) with knowledge of the information flow through the relevant kinetic and thermodynamic laws. This chemistry-specific structure, combined with the expressivity and rapid neural scaling of the underlying KAN-ODE algorithm, instills in ChemKANs a strong inductive bias, streamlined training, and higher-accuracy predictions compared to standard benchmarks, while facilitating parameter sparsity through shared information across all inputs and outputs. In a model inference investigation, we benchmark the robustness of ChemKANs to sparse data containing up to 15% added noise, and superfl