Researchers point to four potential issues related to the popularisation of quantum science and technology: a failure to explain the quantum concepts underlying quantum 2.0 technology, framing quantum science and technology as spooky and enigmatic, framing quantum technology narrowly in terms of public good, and a strong focus on quantum computing. To date, no research has assessed whether these potential issues are actually present in popular communication about quantum science. In this content analysis, we examined the presence of these potential issues in 501 TEDx talks with quantum science and technology content. Results show that while most experts (70%) explained at least one underlying quantum concept (superposition, entanglement or contextuality) of quantum 2.0 technology, only 28% of the non-experts did so. Secondly, the spooky/enigmatic frame was present in about a quarter of the talks. Thirdly, the public good frame was applied narrowly, predominantly by highlighting the benefits of quantum science and technology (found in over six times more talks than risks). Finally, the main focus was on quantum computing at the expense of other quantum technologies.
In the field of environmental science, it is crucial to have robust evaluation metrics for large language models to ensure their efficacy and accuracy. We propose EnviroExam, a comprehensive evaluation method designed to assess the knowledge of large language models in the field of environmental science. EnviroExam is based on the curricula of top international universities, covering undergraduate, master's, and doctoral courses, and includes 936 questions across 42 core courses. By conducting 0-shot and 5-shot tests on 31 open-source large language models, EnviroExam reveals the performance differences among these models in the domain of environmental science and provides detailed evaluation standards. The results show that 61.3% of the models passed the 5-shot tests, while 48.39% passed the 0-shot tests. By introducing the coefficient of variation as an indicator, we evaluate the performance of mainstream open-source large language models in environmental science from multiple perspectives, providing effective criteria for selecting and fine-tuning language models in this field. Future research will involve constructing more domain-specific test sets using specialized environmental science materials.
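To make the coefficient-of-variation indicator concrete, the following is a minimal sketch, not the EnviroExam code: the course names and per-course accuracies are hypothetical, and `coefficient_of_variation` is an illustrative helper (CV = standard deviation divided by mean, so lower values suggest more consistent performance across courses).

```python
import statistics

def coefficient_of_variation(scores):
    """CV = standard deviation / mean; lower means more consistent scores."""
    mean = statistics.mean(scores)
    return statistics.stdev(scores) / mean if mean else float("inf")

# Hypothetical per-course accuracies for one model under the 5-shot setting.
per_course_accuracy = {
    "Environmental Chemistry": 0.72,
    "Hydrology": 0.65,
    "Atmospheric Science": 0.58,
    "Ecotoxicology": 0.61,
}

scores = list(per_course_accuracy.values())
print(f"mean accuracy = {statistics.mean(scores):.3f}")
print(f"coefficient of variation = {coefficient_of_variation(scores):.3f}")
```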
Modeling environmental ecosystems is essential for effective resource management, sustainable development, and understanding complex ecological processes. However, traditional methods frequently struggle with the inherent complexity, interconnectedness, and limited data of such systems. Foundation models, with their large-scale pre-training and universal representations, offer transformative opportunities by integrating diverse data sources, capturing spatiotemporal dependencies, and adapting to a broad range of tasks. This survey presents a comprehensive overview of foundation model applications in environmental science, highlighting advancements in forward prediction, data generation, data assimilation, downscaling, model ensembling, and decision-making across domains. We also detail the development process of these models, covering data collection, architecture design, training, tuning, and evaluation. By showcasing these emerging methods, we aim to foster interdisciplinary collaboration and advance the integration of cutting-edge machine learning for sustainable solutions in environmental science.
This paper introduces the Environmental Justice in Technology (EJIT) Principles, a framework to help reorient technological development toward social and ecological justice and collective flourishing. In response to prevailing models of technological innovation that prioritize speed, scale, and profit while neglecting systemic injustice, the EJIT principles offer an alternative: a set of guiding values that foreground interdependence, repair, and community self-determination. Drawing inspiration from the 1991 principles of environmental justice, this framework extends their commitments into the technological domain, treating environmental justice not as a peripheral concern but as a necessary foundation for building equitable and regenerative futures. We situate the EJIT principles within the broader landscape of environmental justice, design justice, and post-growth computing, proposing them as a values infrastructure for resisting extractive defaults and envisioning technological systems that operate in reciprocity with people and the planet. In doing so, this article aims to support collective efforts to transform not only what technologies we build, but how, why, and for whom.
As belief in the potential of computational social science grows, fuelled by recent advances in machine learning, data scientists are ostensibly becoming the new experts in education. Scholars engaged in critical studies of education and technology have sought to interrogate the growing datafication of education, yet they tend not to use computational methods as part of this response. In this paper, we discuss the feasibility and desirability of using computational approaches as part of a critical research agenda. Presenting and reflecting upon two examples of projects that use computational methods in education to explore questions of equity and justice, we suggest that such approaches might help expand the capacity of critical researchers to highlight existing inequalities, make visible possible approaches for beginning to address such inequalities, and engage marginalised communities in designing and ultimately deploying these possibilities. Drawing upon work within the fields of Critical Data Studies and Science and Technology Studies, we further reflect on the two cases to discuss the possibilities and challenges of reimagining computational methods for critical research in education.
Answers concerning the current status and future development of Quantum Science and Technology are presented.
Ontologies play a critical role in Semantic Web technologies by providing a structured and standardized way to represent knowledge and enabling machines to understand the meaning of data. Several taxonomies and ontologies have been generated, but each targets a single domain, and building them has proven expensive in time and manual effort. They also lack coverage of unconventional topics that would represent a more holistic and comprehensive view of the knowledge landscape and of interdisciplinary collaborations. Thus, there is a need for an ontology that covers Science and Technology and facilitates multidisciplinary research by connecting topics from different fields and domains that may be related or have commonalities. To address these issues, we present an automatically constructed Science and Technology Ontology (S&TO) that covers unconventional topics in different science and technology domains. The proposed S&TO can promote the discovery of new research areas and collaborations across disciplines. The ontology is constructed by applying BERTopic to a dataset of 393,991 scientific articles collected from Semantic Scholar between October 2021 and August 2022, covering four fields of science.
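The following is a minimal sketch of the kind of BERTopic step described above, not the S&TO construction pipeline itself; `load_abstracts` is a hypothetical placeholder standing in for the Semantic Scholar collection, and the later ontology-building stages are omitted.

```python
from bertopic import BERTopic

def load_abstracts() -> list[str]:
    """Hypothetical loader: returns one abstract per document (a real corpus
    of thousands of texts is needed for meaningful topics)."""
    raise NotImplementedError("replace with your own collection of abstracts")

abstracts = load_abstracts()

# Fit the topic model; each resulting topic is a keyword-labelled cluster of
# documents that could then be linked into ontology concepts and relations.
topic_model = BERTopic(language="english", calculate_probabilities=False)
topics, _ = topic_model.fit_transform(abstracts)
print(topic_model.get_topic_info().head(20))
```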
Current definitions of Information Science are inadequate for comprehensively describing the nature of its field of study and for addressing the problems arising from intelligent technologies. The ubiquitous rise of artificial intelligence applications and their impact on society demand that the field of Information Science acknowledge the sociotechnical nature of these technologies. Definitions of Information Science proposed over the last six decades have inadequately addressed the environmental, human, and social aspects of these technologies. This perspective piece advocates for an expanded definition of Information Science that fully includes the sociotechnical impacts information has on the conduct of research in this field. Proposing such an expanded definition should both stimulate conversation and widen the interdisciplinary lens necessary to address how intelligent technologies may be incorporated into society and our lives more fairly.
This study investigates the interconnectivity of firms and Environmental Justice Organizations (EJOs) involved in socio-environmental conflicts worldwide, using data from the Environmental Justice Atlas (EJAtlas). By constructing a multilayer network that links firms, conflicts, and EJOs, the research applies social network analysis to evaluate the simultaneous involvement of these actors across multiple disputes. The projected networks of firms and of EJOs are analysed by aggregating nodes by category and by country to reveal structural differences. Findings reveal a stark contrast between the interconnectedness of firms and that of EJOs. Multinational corporations form a cohesive global network, enabling them to coordinate strategies and exert influence across regions. Conversely, EJOs are fragmented, often operating in isolated clusters with limited interconnection, while nonetheless forming a robust, decentralized, and self-organized global network. The firm network depends strongly on the conflict category involved, whereas the EJO network does not. This structural difference suggests a risk of systemic and structural coordination by firms towards exploitative expansion.
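As a minimal sketch of the projection step described above, using hypothetical records rather than EJAtlas data: firms and conflicts form a bipartite graph, and projecting onto the firm layer links two firms whenever they are involved in the same conflict (the EJO projection is analogous).

```python
import networkx as nx
from networkx.algorithms import bipartite

# Hypothetical firm-conflict involvements (not EJAtlas records).
involvements = [
    ("Firm A", "Mine expansion X"),
    ("Firm B", "Mine expansion X"),
    ("Firm B", "Pipeline Y"),
    ("Firm C", "Pipeline Y"),
]

B = nx.Graph()
firms = {f for f, _ in involvements}
conflicts = {c for _, c in involvements}
B.add_nodes_from(firms, bipartite="firm")
B.add_nodes_from(conflicts, bipartite="conflict")
B.add_edges_from(involvements)

# Firm-firm projection: edge weight = number of conflicts shared by two firms.
firm_net = bipartite.weighted_projected_graph(B, firms)
for u, v, data in firm_net.edges(data=True):
    print(u, "--", v, "shared conflicts:", data["weight"])
```

The same projection applied to the EJO layer allows the two resulting networks to be compared for cohesion and fragmentation, as the abstract describes.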
Computational aspects increasingly shape the environmental sciences. Indeed, transdisciplinary modelling of complex and uncertain environmental systems challenges both computational science (CS) and the science-policy interface. Large spatial-scale problems in this category, i.e. wide-scale transdisciplinary modelling for the environment (WSTMe), often deal with factors for which deep uncertainty may prevent the usual statistical analysis of modelled quantities and which call for different ways of providing policy-making with science-based support. Here, practical recommendations are proposed for tempering a peculiar, and not infrequently underestimated, source of uncertainty: software errors in complex WSTMe may subtly affect the outcomes, with possible consequences even for collective environmental decision-making. Semantic transparency in CS and free software are discussed as possible mitigations.
Artificial Intelligence (AI) is changing the world, but its impacts on the environment and human well-being remain uncertain. We conducted a systematic literature review of 1,291 studies selected from 6,655 records, identifying the main impacts of AI and how they are assessed. The evidence reveals an uneven landscape: 72% of environmental studies focus narrowly on energy use and CO2 emissions, while only 11% consider systemic effects. Well-being research is largely conceptual and overlooks subjective dimensions. Strikingly, 83% of environmental studies portray AI's impacts as positive, while well-being analyses show a near-even split overall (44% positive; 46% negative). However, this split masks differences across well-being dimensions: while the impacts of AI on income and health are expected to be positive, its impacts on inequality, social cohesion, and employment are expected to be negative. Based on our findings, we suggest several areas for future research. Environmental assessments should incorporate water, material, and biodiversity impacts and apply a full life-cycle perspective, while well-being research should prioritise empirical analyses. Evaluating AI's overall impact will require bringing these environmental and well-being perspectives together.
Large instantaneous sensitivity, wide frequency coverage, and flexible observation modes with a large number of beams on the sky are the main features of the SKA Observatory's two telescopes, the SKA-Low and the SKA-Mid, which are located on two different continents. Owing to these capabilities, the SKAO telescopes are going to be a game-changer for radio astronomy in general and pulsar astronomy in particular. The eleven articles in this special issue on pulsar science with the SKA Observatory describe its impact on different areas of pulsar science. In this lead article, a brief description of the two telescopes highlighting the features relevant for pulsar science is presented, followed by an overview of each accompanying article, exploring the inter-relationships between the different pulsar science use cases.
The ability of a nation to participate in the global knowledge economy depends to some extent on its capacities in science and technology. In an effort to assess the capacity of different countries in science and technology, this article updates a classification scheme developed by RAND to measure science and technology capacity for 150 countries of the world.
Mauve is a low-cost small satellite developed and operated by Blue Skies Space Ltd. The payload features a 13 cm telescope connected to a fibre that feeds a UV-Vis spectrometer. The detector covers the 200-700 nm range in a single shot, obtaining low-resolution spectra at R~20-65. Mauve launched on 28 November 2025, reaching a 510 km Sun-synchronous low-Earth orbit. The satellite will enable UV and visible observations of a variety of stellar objects in our Galaxy, filling gaps in space-based ultraviolet data. The researchers who have already joined the mission have defined the science themes, observational strategy, and targets that Mauve will observe in the first year of operations. To date, 10 science themes have been developed by the Mauve science collaboration for year 1, with observational strategies that include both long-duration monitoring and short-cadence snapshots. Here, we describe these themes and the science that Mauve will undertake in its first year of operations.
Given the growing use of Artificial Intelligence (AI) and machine learning (ML) methods across all aspects of the environmental sciences, it is imperative that we initiate a discussion about the ethical and responsible use of AI. In fact, much can be learned from other domains where AI was introduced, often with the best of intentions, yet led to unintended societal consequences, such as hard-coding racial bias into the criminal justice system or increasing economic inequality through the financial system. A common misconception is that the environmental sciences are immune to such unintended consequences when AI is being used, since most data come from observations and AI algorithms are based on mathematical formulas, which are often seen as objective. In this article, we argue that the opposite can be the case. Using specific examples, we demonstrate many ways in which the use of AI can introduce similar consequences in the environmental sciences. We hope this article will stimulate discussion and research efforts in this direction. As a community, we should avoid repeating any foreseeable mistakes made in other domains through the introduction of AI. In fact, with proper precautions, AI can become a valuable tool for the environmental sciences.
The ILC Technology Network (ITN) was established in 2022 by the ILC International Development Team, a subcommittee of the International Committee for Future Accelerators, to advance engineering studies toward the realisation of the International Linear Collider (ILC). While the ITN work packages focus on engineering activities for the ILC, their topics are also relevant to a broad range of accelerator applications in particle physics and beyond. These work packages are being carried out now by laboratories in Asia and Europe in close collaboration. This report summarises the current status of the ITN activities.
Community science observational datasets are useful in epidemiology and ecology for modeling species distributions, but the heterogeneous nature of the data presents significant challenges for standardization, data quality assurance and control, and workflow management. In this paper, we present a data workflow for cleaning and harmonizing multiple community science datasets, which we implement in a case study using eBird, iNaturalist, GBIF, and other datasets to model the impact of highly pathogenic avian influenza (HPAI) in populations of birds in the subantarctic. We predict population sizes for several species whose demographics are not known, and we present novel estimates of potential HPAI mortality rates for those species, based on an aggregated dataset of mortality rates in the subantarctic.
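As a minimal sketch of the harmonization idea, not the authors' workflow, the snippet below maps records from two sources onto one common schema before basic quality filtering; the column names and example records are assumptions for illustration.

```python
import pandas as pd

def harmonize(df: pd.DataFrame, mapping: dict, source: str) -> pd.DataFrame:
    """Map a source-specific table onto the common (species, date, lat, lon) schema."""
    out = df.rename(columns=mapping)[["species", "date", "lat", "lon"]].copy()
    out["date"] = pd.to_datetime(out["date"], errors="coerce")
    out["source"] = source
    # Basic QA/QC: drop records with missing coordinates or unparseable dates.
    return out.dropna(subset=["species", "date", "lat", "lon"])

# Hypothetical example records with assumed source-specific column names.
ebird = pd.DataFrame({"COMMON NAME": ["Gentoo Penguin"], "OBSERVATION DATE": ["2023-11-02"],
                      "LATITUDE": [-54.3], "LONGITUDE": [-36.5]})
inat = pd.DataFrame({"scientific_name": ["Pygoscelis papua"], "observed_on": ["2023-11-05"],
                     "latitude": [-54.2], "longitude": [-36.4]})

records = pd.concat([
    harmonize(ebird, {"COMMON NAME": "species", "OBSERVATION DATE": "date",
                      "LATITUDE": "lat", "LONGITUDE": "lon"}, "eBird"),
    harmonize(inat, {"scientific_name": "species", "observed_on": "date",
                     "latitude": "lat", "longitude": "lon"}, "iNaturalist"),
], ignore_index=True)
print(records)
```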
Data science and technology offer transformative tools and methods to science. This review article highlights the latest developments and progress in the interdisciplinary field of data-driven plasma science (DDPS). Large amounts of data and machine learning algorithms go hand in hand. Most plasma data, whether experimental, observational, or computational, are today generated or collected by machines, and it is becoming impractical for humans to analyze all of them manually. It is therefore imperative to train machines to analyze and (eventually) interpret such data as intelligently as humans but far more efficiently in quantity. Despite the recent impressive progress in applications of data science to plasma science and technology, the emerging field of DDPS is still in its infancy. Fueled by some of the most challenging problems, such as fusion energy, plasma processing of materials, and fundamental understanding of the universe through observable plasma phenomena, DDPS is expected to continue benefiting significantly from the interdisciplinary marriage between plasma science and data science for the foreseeable future.
Normalization of citation scores using reference sets based on Web-of-Science Subject Categories (WCs) has become an established ("best") practice in evaluative bibliometrics. For example, the Times Higher Education World University Rankings are, among other things, based on this operationalization. However, WCs were developed decades ago for the purpose of information retrieval and evolved incrementally with the database; the classification is machine-based and partially manually corrected. Using the WC "information science & library science" and the WCs attributed to journals in the field of "science and technology studies," we show that WCs do not provide sufficient analytical clarity to carry bibliometric normalization in evaluation practices because of "indexer effects." Can the compliance with "best practices" be replaced with an ambition to develop "best possible practices"? New research questions can then be envisaged.
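For readers unfamiliar with the normalization being critiqued, here is a minimal sketch with hypothetical citation counts: a paper's count is divided by the mean count of its reference set, defined here by shared Web-of-Science Subject Category (WC) and publication year, which is precisely the reference-set definition the abstract argues lacks analytical clarity.

```python
from statistics import mean

# Hypothetical records: id, WC, publication year, citation count.
papers = [
    {"id": "p1", "wc": "Information Science & Library Science", "year": 2020, "cites": 12},
    {"id": "p2", "wc": "Information Science & Library Science", "year": 2020, "cites": 3},
    {"id": "p3", "wc": "Information Science & Library Science", "year": 2020, "cites": 0},
    {"id": "p4", "wc": "History & Philosophy of Science",        "year": 2020, "cites": 5},
]

def normalized_citation_score(paper, corpus):
    """Observed citations divided by the mean of the WC-and-year reference set."""
    reference_set = [p["cites"] for p in corpus
                     if p["wc"] == paper["wc"] and p["year"] == paper["year"]]
    expected = mean(reference_set)
    return paper["cites"] / expected if expected else 0.0

for p in papers:
    print(p["id"], round(normalized_citation_score(p, papers), 2))
```

A score above 1 means the paper is cited more than expected for its reference set; shifting a journal to a different WC changes the denominator, which is the "indexer effect" at issue.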
Large language models (LLMs) have exhibited exceptional capabilities in natural language understanding and generation, image recognition, and multimodal tasks, charting a course towards AGI and emerging as a central issue in the global technological race. This manuscript conducts a comprehensive review of the core technologies that support LLMs from a user standpoint, including prompt engineering, knowledge-enhanced retrieval-augmented generation, fine-tuning, pretraining, and tool learning. Additionally, it traces the historical development of the Science of Science (SciSci) and presents a forward-looking perspective on the potential applications of LLMs within the scientometric domain. Furthermore, it discusses the prospect of an AI-agent-based model for scientific evaluation and presents new methods for research-front detection and knowledge-graph building with LLMs.
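As one of the user-facing techniques listed above, retrieval-augmented generation can be illustrated with a minimal, self-contained sketch; the passages, query, similarity measure (TF-IDF cosine), and prompt template are illustrative assumptions, not the review's method.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical passage store and query; in practice the store would be a large
# scientometric corpus and the assembled prompt would be sent to an LLM.
passages = [
    "The h-index combines the productivity and citation impact of a researcher.",
    "Co-citation analysis maps the intellectual structure of a research field.",
    "Peer review remains the main mechanism of scientific quality control.",
]
query = "Which indicators describe a researcher's citation impact?"

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(passages + [query])
doc_vectors, query_vector = matrix[: len(passages)], matrix[len(passages)]
similarities = cosine_similarity(query_vector, doc_vectors).ravel()
retrieved = passages[similarities.argmax()]

prompt = (
    "Answer the question using only the context below.\n\n"
    f"Context: {retrieved}\n\nQuestion: {query}\nAnswer:"
)
print(prompt)  # this augmented prompt would then be passed to the chosen LLM
```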