Found 20 results
Mathematical models are increasingly a part of microbiological research. Here, we share our perspective on how modeling advances the discipline by: (i) enforcing logical consistency, (ii) enabling quantitative prediction, (iii) extracting hidden parameters from data, and (iv) generating intuitive understanding. We map a spectrum of modeling frameworks, from whole-cell simulations to minimal logistic growth equations, and provide interactive examples for some common frameworks. Building on this overview, we outline pragmatic criteria for choosing an appropriate level of description to capture phenomena of interest. Finally, we present a case study in modeling of microbial ecosystems from our own work to illustrate how mechanistic modeling can yield generalizable intuition. This perspective aims to be an introductory roadmap for integrating mathematical modeling into experimental microbiology.
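The logistic growth equation cited above as the minimal end of the modeling spectrum can be written down in a few lines. A minimal sketch, with parameter values that are purely illustrative (not taken from the paper):

```python
import numpy as np

def logistic_growth(n0, r, k, t):
    """Closed-form solution of the logistic ODE dN/dt = r*N*(1 - N/K):
    N(t) = K / (1 + (K/N0 - 1) * exp(-r*t))."""
    return k / (1 + (k / n0 - 1) * np.exp(-r * t))

# Illustrative parameters: inoculum of 1e3 cells, growth rate 0.5/h,
# carrying capacity 1e9 cells, sampled over 24 hours.
t = np.linspace(0, 24, 5)
n = logistic_growth(n0=1e3, r=0.5, k=1e9, t=t)
```

The closed form avoids numerical integration entirely, which is part of why the logistic model is attractive as a first, transparent level of description.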
Advancements in artificial intelligence (AI) have transformed many scientific fields, with microbiology and microbiome research now experiencing significant breakthroughs through machine learning applications. This review provides a comprehensive overview of AI-driven approaches tailored for microbiology and microbiome studies, emphasizing both technical advancements and biological insights. We begin with an introduction to foundational AI techniques, including primary machine learning paradigms and various deep learning architectures, and offer guidance on choosing between traditional machine learning and sophisticated deep learning methods based on specific research goals. The primary section on application scenarios spans diverse research areas, from taxonomic profiling, functional annotation & prediction, microbe-X interactions, microbial ecology, metabolic modeling, precision nutrition, clinical microbiology, to prevention & therapeutics. Finally, we discuss challenges in this field and highlight some recent breakthroughs. Together, this review underscores AI's transformative role in microbiology and microbiome research, paving the way for innovative methodologies an
The adoption of open science has quickly changed how artificial intelligence (AI) policy research is distributed globally. This study examines regional trends in the citation of preprints, specifically focusing on the impact of two major disruptive events, the COVID-19 pandemic and the release of ChatGPT, on research dissemination patterns in the United States, Europe, and South Korea from 2015 to 2024. Using bibliometric data from the Web of Science, this study tracks how global disruptive events influenced the adoption of preprints in AI policy research and how such shifts vary by region. By marking the timing of these disruptive events, the analysis reveals that while all regions experienced growth in preprint citations, the magnitude and trajectory of change varied significantly. The United States exhibited sharp, event-driven increases; Europe demonstrated institutional growth; and South Korea maintained consistent, linear growth in preprint adoption. These findings suggest that global disruptions may have accelerated preprint adoption, but the extent and trajectory are shaped by local research cultures, policy environments, and levels of open science maturity. This paper
The detrending moving average (DMA) algorithm is one of the best performing methods to quantify long-term correlations in nonstationary time series. Many long-term correlated time series in real systems contain various trends. We investigate the effects of polynomial trends on the scaling behaviors and performance of three widely used DMA methods: the backward (BDMA), centered (CDMA), and forward (FDMA) algorithms. We derive a general framework for polynomial trends and obtain analytical results for constant shifts and linear trends. We find that the behavior of the CDMA method is not influenced by constant shifts. In contrast, linear trends cause a crossover in the CDMA fluctuation functions. We also find that constant shifts and linear trends cause crossovers in the fluctuation functions obtained from the BDMA and FDMA methods. When a crossover exists, the scaling behavior at small scales comes from the intrinsic time series, while that at large scales is dominated by the constant shifts or linear trends. We also derive analytically the expressions of the crossover scales and show that the crossover scale depends on the strength of the polynomial trend,
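The three DMA variants differ only in where the moving-average window sits relative to the current point. As a simplified sketch of the fluctuation function F(n) (the paper's analytical treatment of polynomial trends is not reproduced here; using an odd scale n keeps the centered window symmetric):

```python
import numpy as np

def dma_fluctuation(x, n, mode="centered"):
    """Fluctuation function F(n) of series x at window scale n.
    mode places the moving-average window: 'backward' (past points only),
    'forward' (future points only), or 'centered' (symmetric, odd n)."""
    y = np.cumsum(x - np.mean(x))  # profile (cumulative sum) of the series
    if mode == "backward":
        idx = range(n - 1, len(y))
        ma = [np.mean(y[i - n + 1:i + 1]) for i in idx]
    elif mode == "forward":
        idx = range(0, len(y) - n + 1)
        ma = [np.mean(y[i:i + n]) for i in idx]
    else:  # centered
        h = n // 2
        idx = range(h, len(y) - h)
        ma = [np.mean(y[i - h:i + h + 1]) for i in idx]
    resid = y[list(idx)] - np.array(ma)  # detrended profile
    return np.sqrt(np.mean(resid ** 2))

# For uncorrelated noise, F(n) should grow roughly as n**0.5.
rng = np.random.default_rng(0)
x = rng.standard_normal(4096)
f_small = dma_fluctuation(x, 9)
f_large = dma_fluctuation(x, 65)
```

Fitting log F(n) against log n over many scales yields the scaling exponent; the crossovers discussed above appear as a change of slope in that log-log plot.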
With the rapid emergence of advanced technologies and an enormous growth in published research, analyzing big data to inform research strategy has become far more tractable using the tools and techniques available today. This paper tackles prominent challenges in big-data analysis by applying a text mining approach to research published in the field of production management as an initial case. The study considers research data from the International Journal of Production Research (IJPR) indexed in Scopus between 1961 and 2017, dividing the analysis into three periods (1961-1990, 1991-2010, and 2011-2017) to highlight the journal's evolving focus. This approach offers multi-faceted benefits, such as increasing the effectiveness of the collected data through a well-established comparison between R and Python programming, along with providing detailed trends in the published research. The results of the study highlighted some of the most prominent topics in the existing IJPR literature, such as system optimization, supplier selection, process design
There has been a tremendous rise in the growth of online social networks all over the world in recent years. It has facilitated users to generate a large amount of real-time content at an incessant rate, all competing with each other to attract enough attention and become popular trends. While Western online social networks such as Twitter have been well studied, the popular Chinese microblogging network Sina Weibo has had relatively lower exposure. In this paper, we analyze in detail the temporal aspect of trends and trend-setters in Sina Weibo, contrasting it with earlier observations in Twitter. We find that there is a vast difference in the content shared in China when compared to a global social network such as Twitter. In China, the trends are created almost entirely due to the retweets of media content such as jokes, images and videos, unlike Twitter where it has been shown that the trends tend to have more to do with current global events and news stories. We take a detailed look at the formation, persistence and decay of trends and examine the key topics that trend in Sina Weibo. One of our key findings is that retweets are much more common in Sina Weibo and contribute a l
Microorganisms are ubiquitous in nature, and microbial activities are closely intertwined with the entire life cycle system and human life. Developing novel technologies for the detection, characterization and manipulation of microorganisms promotes their applications in clinical, environmental and industrial areas. Over the last two decades, terahertz (THz) technology has emerged as a new optical tool for microbiology. Its great potential originates from the unique advantages of THz waves, including high sensitivity to water and inter-/intra-molecular motions, a non-invasive and label-free detection scheme, and low photon energy. THz waves have been utilized as a stimulus to alter microbial functions, or as a sensing approach for quantitative measurement and qualitative differentiation. This review specifically focuses on recent research progress of THz technology applied in the field of microbiology, covering two major parts: THz biological effects and microbial detection applications. At the end of this paper, we summarize the research progress and discuss the challenges currently faced by THz technology in microbiology, along with potential solutions. We also
New mobile technologies have created a social dimension in which individuals can raise their social awareness by keeping in touch with old friends, making new friends, sharing new data or products, and gathering information about many aspects of everyday life, becoming more knowledgeable in ways that are especially beneficial for students. Social networks in particular enable users to share and discuss common interests and provide infrastructure for integrating varied user experiences: synchronous and asynchronous communication, game playing, and sharing links and files. The trend of using social networks and social media to deliver and exchange knowledge could usher in a new era of social learning in which learners exercise all four language skills: reading, writing, listening, and speaking. Unlike a traditional e-learning paradigm with a pre-defined curriculum and standard textbooks, social knowledge can be aggregated on demand, just in time, and in the context of engaging challenges from social networks, making learning a more exciting, social, and game-like experience. A social learning environment engages learners in discussion, collaboration, explorat
Measuring and forecasting opinion trends from real-time social media is a long-standing goal of big-data analytics. Despite its importance, there has been no conclusive scientific evidence so far that social media activity can capture the opinion of the general population. Here we develop a method to infer the opinion of Twitter users regarding the candidates of the 2016 US Presidential Election by using a combination of statistical physics of complex networks and machine learning based on hashtags co-occurrence to develop an in-domain training set approaching 1 million tweets. We investigate the social networks formed by the interactions among millions of Twitter users and infer the support of each user to the presidential candidates. The resulting Twitter trends follow the New York Times National Polling Average, which represents an aggregate of hundreds of independent traditional polls, with remarkable accuracy. Moreover, the Twitter opinion trend precedes the aggregated NYT polls by 10 days, showing that Twitter can be an early signal of global opinion trends. Our analytics unleash the power of Twitter to uncover social trends from elections, brands to political movements, and
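The hashtag co-occurrence signal underlying the training-set construction can be illustrated with a toy counter. A minimal sketch only: the example tweets and hashtags below are illustrative, and the paper's actual pipeline (network analysis plus machine learning over roughly a million tweets) is far richer.

```python
from collections import Counter
from itertools import combinations

def hashtag_cooccurrence(tweets):
    """Count how often each pair of hashtags appears in the same tweet.
    Pairs are stored in sorted order so (a, b) and (b, a) are merged."""
    pairs = Counter()
    for text in tweets:
        tags = sorted({w.lower() for w in text.split() if w.startswith("#")})
        pairs.update(combinations(tags, 2))
    return pairs

# Illustrative toy corpus, not data from the study.
counts = hashtag_cooccurrence([
    "#maga vote! #trump2016",
    "#imwithher #hillary",
    "#maga rally today #trump2016",
])
```

Treating hashtags as nodes and co-occurrence counts as edge weights yields a graph on which community structure can separate the two candidates' supporter vocabularies.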
News media often report that the trend of some public health outcome has changed. These statements are frequently based on longitudinal data, and the change in trend is typically found to have occurred at the most recent data collection time point - if no change had occurred the story is less likely to be reported. Such claims may potentially influence public health decisions on a national level. We propose two measures for quantifying the trendiness of trends. Assuming that reality evolves in continuous time we define what constitutes a trend and a change in trend, and introduce a probabilistic Trend Direction Index. This index has the interpretation of the probability that a latent characteristic has changed monotonicity at any given time conditional on observed data. We also define an index of Expected Trend Instability quantifying the expected number of changes in trend on an interval. Using a latent Gaussian Process model we show how the Trend Direction Index and the Expected Trend Instability can be estimated in a Bayesian framework and use the methods to analyze the proportion of smokers in Denmark during the last 20 years, and the development of new COVID-19 cases in Italy
Artificial Intelligence (AI) has rapidly evolved over the past decade and has advanced in areas such as language comprehension, image and video recognition, programming, and scientific reasoning. Recent AI technologies based on large language models and foundation models are approaching or surpassing artificial general intelligence. These systems demonstrate superior performance in complex problem solving, natural language processing, and multi-domain tasks, and can potentially transform fields such as science, industry, healthcare, and education. However, these advancements have raised concerns regarding the safety and trustworthiness of advanced AI, including risks related to uncontrollability, ethical conflicts, long-term socioeconomic impacts, and safety assurance. Efforts are being expended to develop internationally agreed-upon standards to ensure the safety and reliability of AI. This study analyzes international trends in safety and trustworthiness standardization for advanced AI, identifies key areas for standardization, proposes future directions and strategies, and draws policy implications. The goal is to support the safe and trustworthy development of advanced AI and e
Stance detection on social media is an emerging opinion mining paradigm for various social and political applications in which sentiment analysis may be sub-optimal. There has been growing research interest in developing effective stance detection methods across multiple communities, including natural language processing, web science, and social computing. This paper surveys the work on stance detection within those communities and situates its usage within current opinion mining techniques in social media. It presents an exhaustive review of stance detection techniques on social media, including the task definition, the different types of targets in stance detection, the feature sets used, and the various machine learning approaches applied. The survey reports state-of-the-art results on the existing benchmark datasets for stance detection and discusses the most effective approaches. In addition, this study explores emerging trends and different applications of stance detection on social media. The study concludes by discussing the gaps in current research and highlighting possible future directions for stance detection on social media.
The Earth possesses many environmental extremes that mimic conditions on extraterrestrial worlds. The stratosphere at 30-40 km altitude closely resembles the surface of Mars in terms of pressure, temperature, and radiation levels (UV, proton, and Galactic cosmic rays). While microbial life in the troposphere is well documented, the true upper limit of Earth's biosphere remains unclear. The stratosphere offers a promising environment to explore microbial survival in such extreme conditions. Despite its significance to astrobiology, this region remains largely unexplored due to difficulties in access and avoiding contamination. To address this, we have developed SAMPLE (Stratospheric Altitude Microbiology Probe for Life Existence), a balloon-borne payload designed to collect dust samples from the stratosphere and return them in conditions suitable for lab analysis. The entire system is novel and designed in-house, with weight- and stress-optimized components. The main payload includes three pre-sterilized sampling trays and a controller that determines altitude and governs tray operation. One tray will remain closed during flight (airborne control) and another on the ground (cleanroo
Two key identifying assumptions used to justify difference-in-differences are parallel trends and no anticipation, yet both may fail in practice. I propose a class of assumptions on anticipation and derive closed-form, sharp bounds on the average treatment effect on the treated while simultaneously relaxing parallel trends. Deviations from both assumptions are jointly disciplined using observed pre-trends. When some anticipation is imposed, the identified set under joint deviations can be shorter than under parallel trends violations alone. These bounds inform a sensitivity analysis assessing the robustness of qualitative conclusions to anticipation and parallel trends violations. I illustrate with an empirical application.
Motion prediction, recently popularized as world models, refers to the anticipation of future agent states or scene evolution, which is rooted in human cognition, bridging perception and decision-making. It enables intelligent systems, such as robots and self-driving cars, to act safely in dynamic, human-involved environments, and informs broader time-series reasoning challenges. With advances in methods, representations, and datasets, the field has seen rapid progress, reflected in quickly evolving benchmark results. Yet, when state-of-the-art methods are deployed in the real world, they often struggle to generalize to open-world conditions and fall short of deployment standards. This reveals a gap between research benchmarks, which are often idealized or ill-posed, and real-world complexity. To address this gap, this survey revisits the generalization and deployability of motion prediction models, with an emphasis on applications of robotics, autonomous driving, and human motion. We first offer a comprehensive taxonomy of motion prediction methods, covering representations, modeling strategies, application domains, and evaluation protocols. We then study two key challenges: (1) h
The rapid adoption of online social media platforms has transformed the way we communicate and interact. On these platforms, discussions in the form of trending topics provide a glimpse of events happening around the world in real time. These trends are also used for political campaigns, public awareness, and brand promotion. Consequently, trends are sensitive to manipulation by malicious users who aim to mislead the mass audience. In this article, we identify and study the characteristics of users involved in the manipulation of Twitter trends in Pakistan. We propose 'Manipify', a framework for the automatic detection and analysis of malicious users behind Twitter trends. Our framework consists of three distinct modules: i) a user classifier, ii) a hashtag classifier, and iii) a trend analyzer. The user classifier introduces a novel approach to automatically detect manipulators using tweet content and user behaviour features. The module also distinguishes human from bot users. Next, the hashtag classifier categorizes trending hashtags into six categories, assisting in examining manipulators' behaviour across different categories. Finally, the trend analyzer module examines users, hashta
This study addresses, from an Optimal Experimental Design perspective, the use of the isothermal experimentation procedure to precisely estimate the parameters of models used in predictive microbiology. Starting from a case study in the literature, and taking the Baranyi model as the primary model and the Ratkowsky square-root model as the secondary one, D- and c-optimal designs are provided for isothermal experiments, treating the temperature both as a value fixed by the experimenter and as a variable to be designed. The calculated designs show that those commonly used in practice are not efficient enough to estimate the parameters of the secondary model, leading to greater uncertainty in the predictions made via these models. Finally, an analysis is carried out to determine the effect on efficiency of a possible reduction in the final experimental time.
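The Ratkowsky square-root secondary model named above relates the maximum specific growth rate to temperature via sqrt(mu_max) = b * (T - Tmin). A minimal sketch with illustrative parameter values (not the values estimated in the study):

```python
import numpy as np

def ratkowsky_sqrt(T, b, Tmin):
    """Ratkowsky square-root secondary model:
    sqrt(mu_max) = b * (T - Tmin), valid for temperatures above the
    notional minimum growth temperature Tmin. Returns mu_max."""
    return (b * (np.asarray(T, dtype=float) - Tmin)) ** 2

# Illustrative parameters: slope b = 0.03 (sqrt(1/h)/degC), Tmin = 2 degC.
mu = ratkowsky_sqrt(T=[10, 20, 30], b=0.03, Tmin=2.0)
```

In the optimal-design setting, it is the parameters b and Tmin of this secondary model that the isothermal temperature levels must be chosen to pin down; poorly placed temperatures leave them weakly identified.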
The progress of some AI paradigms, such as deep learning, is said to be linked to an exponential growth in the number of parameters. Many studies corroborate these trends, but does this translate into an exponential increase in energy consumption? To answer this question we focus on inference costs rather than training costs, as the former account for most of the computing effort, solely because of the multiplicative factors. Also, apart from algorithmic innovations, we account for more specific and powerful hardware (leading to higher FLOPS) that is usually accompanied by important energy efficiency optimisations. We also move the focus from the first implementation of a breakthrough paper towards the consolidated version of the techniques one or two years later. Under this distinctive and comprehensive perspective, we study relevant models in the areas of computer vision and natural language processing: for a sustained increase in performance we see a much softer growth in energy consumption than previously anticipated. The only caveat is, yet again, the multiplicative factor, as future AI increases penetration and becomes more pervasive.
The Tor network has been a significant part of the Internet for years. Tor was originally started in the Naval Research Laboratory for anonymous Internet browsing and Internet-based communication. From being used for anonymous communication, it has since branched into various other use cases such as censorship circumvention and the performance of illegal activities. In this paper, we perform empirical measurements on the Tor network to analyze trends in Tor over the years. We gather our measurement data through our own measurement scripts, past research in this domain, and aggregated data provided by the Tor metrics directory. We use this data to analyze trends and understand the incidents that caused fluctuations in different data parameters. We collect measurement data for Tor parameters such as Tor users, onion services, Tor relays, and bridges. We also study censorship-related events and trends by analyzing censorship-related metrics. Finally, we touch upon location diversity in Tor and study how Tor circuit selection and construction are impacted by the bandwidth distribution of Tor relays across geographies.
Ethane is the most abundant non-methane hydrocarbon in the Earth's atmosphere and an important precursor of tropospheric ozone through various chemical pathways. Ethane is also an indirect greenhouse gas (global warming potential), influencing the atmospheric lifetime of methane through the consumption of the hydroxyl radical (OH). Understanding the development of trends and identifying trend reversals in atmospheric ethane is therefore crucial. Our dataset consists of four series of daily ethane columns obtained from ground-based FTIR measurements. As many other decadal time series, our data are characterized by autocorrelation, heteroskedasticity, and seasonal effects. Additionally, missing observations due to instrument failure or unfavorable measurement conditions are common in such series. The goal of this paper is therefore to analyze trends in atmospheric ethane with statistical tools that correctly address these data features. We present selected methods designed for the analysis of time trends and trend reversals. We consider bootstrap inference on broken linear trends and smoothly varying nonlinear trends. In particular, for the broken trend model, we propose a bootstrap
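A broken linear trend of the kind the bootstrap inference targets can be point-estimated by ordinary least squares with a grid search over candidate break dates. This sketch covers only that point estimate on complete, equally spaced data; the paper's bootstrap, and its handling of autocorrelation, heteroskedasticity, seasonality, and missing observations, are not reproduced here.

```python
import numpy as np

def fit_broken_trend(t, y):
    """Least-squares fit of a continuous broken linear trend
    y ~ a + b*t + c*max(t - tau, 0), grid-searching the break tau
    over interior observation times. Returns (tau, [a, b, c])."""
    best = None
    for tau in t[2:-2]:  # keep a few points on each side of the break
        X = np.column_stack([np.ones_like(t), t, np.maximum(t - tau, 0.0)])
        beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
        rss = np.sum((y - X @ beta) ** 2)
        if best is None or rss < best[0]:
            best = (rss, tau, beta)
    return best[1], best[2]

# Noiseless synthetic series with a slope change at t = 5.
t = np.arange(0.0, 10.0, 0.5)
y = 1.0 + 0.2 * t + 0.5 * np.maximum(t - 5.0, 0.0)
tau, beta = fit_broken_trend(t, y)
```

On noisy, autocorrelated series like the ethane columns, the grid-search estimate of tau is exactly the quantity whose sampling uncertainty the proposed bootstrap is designed to quantify.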