In the context of novel view synthesis, 3D Gaussian Splatting (3DGS) has recently emerged as an efficient and competitive counterpart to Neural Radiance Field (NeRF), enabling high-fidelity photorealistic rendering in real time. Beyond novel view synthesis, the explicit and compact nature of 3DGS enables a wide range of downstream applications that require geometric and semantic understanding. This survey provides a comprehensive overview of recent progress in 3DGS applications. It first reviews the reconstruction preliminaries of 3DGS, followed by the problem formulation, 2D foundation models, and related NeRF-based research areas that inform downstream 3DGS applications. We then categorize 3DGS applications into three foundational tasks: segmentation, editing, and generation, alongside additional functional applications built upon or tightly coupled with these foundational capabilities. For each, we summarize representative methods, supervision strategies, and learning paradigms, highlighting shared design principles and emerging trends. Commonly used datasets and evaluation protocols are also summarized, along with comparative analyses of recent methods across public benchmarks.
Fast-track procedures play an important role in the conditional registration of medical devices, such as listing processes for digital health applications. They offer the potential for earlier patient access to innovative products and involve two registration steps. Applicants can first apply for conditional registration; a successful conditional registration provides a limited funding or approval period and time to prepare the application for permanent registration (the second registration step). For conditional registration, products have to fulfill only a subset of the requirements necessary for permanent registration. This paper addresses the need for valid and efficient study designs for such fast-track procedures. A motivating example is the German fast-track registration process of digital health applications (DiGA) for reimbursement by statutory health insurances. The main focus of the paper is the systematic statistical investigation of the utility of adaptive designs in the context of fast-track registration processes like the DiGA fast-track. We demonstrate that, in most cases, such designs are much more efficient than the current standard.
This paper studies interpretable and fair artificial intelligence architectures for understanding English reading. We introduce transformer-based models that integrate advanced attention mechanisms and gradient-based feature attribution. Lack of model interpretability, algorithmic bias, and unreliable performance in learning environments are the current issues faced in natural language teaching. A unified technical pipeline has been constructed, including adversarial bias-correction methods, token-level attribution analysis, and multi-head attention heatmap visualization. Experimental validation was conducted on a large-scale labeled English reading comprehension dataset, with a fixed data-partitioning scheme and parameter optimization procedure. The method significantly outperforms state-of-the-art models for this task in terms of accuracy and macro-averaged F1 score; in some aspects, it even surpasses or closely matches the results of human evaluations. In multi-week user experiments, the explainable transformer improved teachers' trust in, and the operability of, feedback-based assessments within the scoring system. The proposed method aims to ensure
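Gradient-based feature attribution, as mentioned in the abstract above, can be illustrated in miniature. The sketch below applies gradient-times-input to a linear bag-of-words scorer, where the attribution of token i reduces exactly to w_i * x_i; the vocabulary, weights, and counts here are hypothetical, and a real transformer would obtain the gradients via automatic differentiation rather than in closed form.

```python
import numpy as np

def grad_x_input(weights, x):
    """Token-level attribution for a linear scorer s(x) = w . x:
    gradient-times-input reduces to the elementwise product w_i * x_i."""
    return weights * x

vocab = ["the", "plot", "boring", "brilliant"]
w = np.array([0.0, 0.1, -2.0, 1.5])   # hypothetical learned weights
x = np.array([2.0, 1.0, 1.0, 0.0])    # token counts for one document
attr = grad_x_input(w, x)
for tok, a in zip(vocab, attr):
    print(f"{tok:10s} {a:+.2f}")      # negative scores flag penalized tokens
```

Here "boring" receives the dominant (negative) attribution, which is the kind of signal the token-level analysis in the paper would surface to teachers.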
This paper presents properties and approximations of a random variable based on the zero-order modified Bessel function that results from the compounding of a zero-mean Gaussian with a $\chi^2_1$-distributed variance. This family of distributions is a special case of the McKay family of Bessel distributions and of a family of generalized Laplace distributions. It is found that the Bessel distribution can be approximated with a null-location Laplace distribution, which corresponds to the compounding of a zero-mean Gaussian with a $\chi^2_2$-distributed variance. Other useful properties and representations of the Bessel distribution are discussed, including a closed form for the cumulative distribution function that makes use of the modified Struve functions. Another approximation of the Bessel distribution that is based on an empirical power-series approximation is also presented. The approximations are tested with the application to the typical problem of statistical hypothesis testing. It is found that a Laplace distribution of suitable scale parameter can approximate quantiles of the Bessel distribution with better than 10% accuracy, with the computational advantage associated with the
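The compounding construction described above can be checked numerically. The sketch below draws the Bessel-type variable as a zero-mean unit Gaussian with a chi^2_1-distributed variance and, for comparison, the chi^2_2-compounded variable, which under these conventions is a standard Laplace(0, 1) law. The sample size and seed are arbitrary; this illustrates only the compound representation, not the paper's specific approximations.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
z = rng.standard_normal(n)

# Bessel-type variable: zero-mean Gaussian compounded with a chi^2_1 variance.
x_bessel = z * np.sqrt(rng.chisquare(1, n))

# The same compounding with a chi^2_2 variance yields a Laplace(0, 1) law.
x_laplace = z * np.sqrt(rng.chisquare(2, n))

# Theory: E[X^2] = E[chi^2_1] = 1 for the Bessel-type variable.
print(round(x_bessel.var(), 3))
# Theory: the 0.9-quantile of Laplace(0, 1) is ln(5) ~ 1.609.
print(round(np.quantile(x_laplace, 0.9), 3))
```

The sample second moment of the Bessel-type draws sits near 1, and the empirical Laplace quantiles match the closed-form ones, consistent with the mixture representations quoted in the abstract.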
The development of Internet of Things (IoT) technologies has led to the widespread adoption of monitoring networks for a wide variety of applications, such as smart cities, environmental monitoring, and precision agriculture. A major research focus in recent years has been the development of graph-based techniques to improve the quality of data from sensor networks, a key aspect for the use of sensed data in decision-making processes, digital twins, and other applications. Emphasis has been placed on the development of machine learning and signal processing techniques over graphs, taking advantage of the benefits provided by structuring data through a graph topology. Technologies such as graph signal processing (GSP) and the successful graph neural networks (GNNs) have been used for data quality enhancement tasks. In this survey, we focus on graph-based models for data quality control in monitoring sensor networks. Furthermore, we delve into the technical details commonly leveraged to provide powerful graph-based solutions for data quality tasks in sensor networks, including missing value imputation, outlier detection, and virtual sensing. To conclude,
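As a minimal illustration of one data quality task named above, the sketch below performs missing value imputation with a graph-smoothness prior: the unknown node values are chosen to minimize the Laplacian quadratic form x^T L x given the observed entries. The four-node path graph and sensor readings are made up for the example, and this is one simple GSP-style approach, not the specific methods covered by the survey.

```python
import numpy as np

def laplacian_impute(L, values, known_mask):
    """Impute missing node values by minimizing x^T L x subject to the
    observed entries (graph-smoothness prior): solve L_uu x_u = -L_us x_s."""
    u = ~known_mask
    x_u = np.linalg.solve(L[np.ix_(u, u)],
                          -L[np.ix_(u, known_mask)] @ values[known_mask])
    x = values.copy()
    x[u] = x_u
    return x

# Path graph 0-1-2-3: adjacency matrix and combinatorial Laplacian L = D - A.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

values = np.array([1.0, 0.0, 0.0, 4.0])          # sensors 1 and 2 are missing
known = np.array([True, False, False, True])
print(laplacian_impute(L, values, known))        # smooth fill-in along the path
```

On a path graph the smoothness prior reduces to linear interpolation between the observed endpoints, which is the behavior one would hope for from a sanity check.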
Deep learning, a branch of artificial intelligence, is a data-driven method that uses multiple layers of interconnected units or neurons to learn intricate patterns and representations directly from raw input data. Empowered by this learning capability, it has become a powerful tool for solving complex problems and is the core driver of many groundbreaking technologies and innovations. Building a deep learning model is challenging due to the algorithm's complexity and the dynamic nature of real-world problems. Several studies have reviewed deep learning concepts and applications. However, these studies mostly focused on the types of deep learning models and convolutional neural network architectures, offering limited coverage of state-of-the-art deep learning models and their applications in solving complex problems across different domains. Motivated by these limitations, this study comprehensively reviews the state-of-the-art deep learning models in computer vision, natural language processing, time series analysis, pervasive computing, and robotics. We highlight the key features of the models and their effectiveness in solving the problems within each domain.
AI is the workhorse of modern data analytics and omnipresent across many sectors. Large Language Models and multi-modal foundation models are today capable of generating code, charts, visualizations, and more. How will these massive developments of AI in data analytics shape future data visualizations and visual analytics workflows? What is the potential of AI to reshape the methodology and design of future visual analytics applications? What will be our role as visualization researchers in the future? What are the opportunities, open challenges, and threats in the context of an increasingly powerful AI? This Visualization Viewpoint discusses these questions in the special context of biomedical data analytics, an example of a domain in which critical decisions are taken based on complex and sensitive data, with high requirements on transparency, efficiency, and reliability. We map recent trends and developments in AI onto the elements of interactive visualization and visual analytics workflows and highlight the potential of AI to transform biomedical visualization as a research field. Given that agency and responsibility have to remain with human experts, we argue that it is helpful to keep the
The concept of conditional expectation is important in applications of probability and statistics in many areas, such as reliability engineering, economics, finance, and actuarial science, due to its property of being the best predictor of a random variable as a function of another random variable. This concept is also essential in martingale theory and the theory of Markov processes. Although many interesting properties of conditional expectations with respect to a sigma-algebra generated by a random variable have been studied and published, the subject remains attractive and has interesting applications in many fields. In this paper, we present some new properties of the conditional expectation of a random variable given another random variable and describe useful applications to problems of per-share prices in stock markets. The copula and dependence properties of conditional expectations as random variables are also studied. We also present some new equalities with interesting applications and results in martingale theory and Markov processes. Keywords: conditional expectation, sigma-algebra, per-share price, order statistics, prediction
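The best-predictor property invoked above is the standard L^2 characterization of conditional expectation; as a reminder, it reads:

\[
  \mathbb{E}\!\left[\big(Y - \mathbb{E}[Y \mid X]\big)^2\right]
  \;\le\;
  \mathbb{E}\!\left[\big(Y - g(X)\big)^2\right]
  \quad \text{for every measurable } g \text{ with } \mathbb{E}\!\left[g(X)^2\right] < \infty,
\]

i.e., among all square-integrable functions of $X$, the conditional expectation $\mathbb{E}[Y \mid X]$ minimizes the mean squared prediction error for $Y$.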
A distance mean function measures the average distance of points from the elements of a given set of points (the focal set) in the space. The level sets of a distance mean function are called generalized conics. When the focal set is infinite, the average distance is typically given by integration over the focal set. The paper surveys applications of taxicab distance mean functions and the theory of generalized conics in geometric tomography: bisection of the focal set and reconstruction problems by coordinate X-rays. The theoretical results are illustrated by implementations, methods, and examples in Maple.
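For a finite focal set the taxicab distance mean function is elementary to compute, so a generalized conic can be traced on a grid. The sketch below (plain Python, with two hypothetical focal points) marks the grid points lying on one level set, i.e., on a taxicab "ellipse"; the Maple implementations referenced in the paper are of course more systematic.

```python
from itertools import product

def taxicab_distance_mean(p, focal_set):
    """Average taxicab (L1) distance from point p to a finite focal set."""
    return sum(abs(p[0] - f[0]) + abs(p[1] - f[1]) for f in focal_set) / len(focal_set)

# Generalized conic: a level set of the distance mean function. With two
# focal points this is a taxicab analogue of an ellipse.
foci = [(-1.0, 0.0), (1.0, 0.0)]
level = 2.0

grid = [(x / 2, y / 2) for x, y in product(range(-6, 7), repeat=2)]
conic = [p for p in grid if abs(taxicab_distance_mean(p, foci) - level) < 1e-9]
print(conic)
```

For these foci the mean distance at (x, y) is (|x+1| + |x-1|)/2 + |y|, so the level-2 conic contains a flat segment at |y| = 1 over |x| <= 1, joined by diamond-like sides where |x| + |y| = 2.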
Architectures for quantum computing based on neutral atoms have risen to prominence as candidates for both near and long-term applications. These devices are particularly well suited to solve independent set problems, as the combinatorial constraints can be naturally encoded in the low-energy Hilbert space due to the Rydberg blockade mechanism. Here, we approach this connection with a focus on a particular device architecture and explore the ubiquity and utility of independent set problems by providing examples of real-world applications. After a pedagogical introduction of basic graph theory concepts of relevance, we briefly discuss how to encode independent set problems in Rydberg Hamiltonians. We then outline the major classes of independent set problems and include associated example applications with industry and social relevance. We determine a wide range of sectors that could benefit from efficient solutions of independent set problems -- from telecommunications and logistics to finance and strategic planning -- and display some general strategies for efficient problem encoding and implementation on neutral-atom platforms.
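To make the combinatorial object above concrete: an independent set is a set of vertices no two of which share an edge, which mirrors the Rydberg blockade constraint that two nearby atoms cannot both be excited. The brute-force sketch below (exponential time, small graphs only, with a made-up 5-cycle example) is just a classical reference implementation of the problem, not a neutral-atom encoding.

```python
from itertools import combinations

def is_independent(subset, edges):
    """A vertex set is independent if no edge joins two of its members."""
    s = set(subset)
    return not any(u in s and v in s for u, v in edges)

def max_independent_set(n, edges):
    """Brute-force maximum independent set on vertices 0..n-1
    (checks subsets from largest to smallest; small graphs only)."""
    for k in range(n, 0, -1):
        for subset in combinations(range(n), k):
            if is_independent(subset, edges):
                return set(subset)
    return set()

# A 5-cycle: adjacent vertices (atoms within the blockade radius) are excluded.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(max_independent_set(5, edges))  # one maximum independent set of the cycle
```

On a 5-cycle the maximum independent set has size 2; on a Rydberg device the same constraint structure would instead be enforced energetically by the blockade term of the Hamiltonian.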
We explore the existence of a continuous marginal law with respect to the Lebesgue measure for each component $(X,Y,Z)$ of the solution to coupled quadratic forward-backward stochastic differential equations (QFBSDEs) for which the drift coefficient of the forward component is either bounded and measurable or Hölder continuous. Our approach relies on a combination of the existence of a weak {\it decoupling field} (see \cite{Delarue2}), integration with respect to space-time local time (see \cite{Ein2006}), and the analysis of the backward Kolmogorov equation associated with the forward component, along with an Itô-Tanaka trick (see \cite{FlanGubiPrio10}). The framework of this paper goes beyond all existing papers on density analysis for Markovian BSDEs and constitutes a major refinement of the existing results. We also derive a comonotonicity theorem for the control variable in this framework, thus extending the works \cite{ChenKulWei05} and \cite{DosDos13}. As applications of our results, we first analyse the regularity of densities of solutions to coupled FBSDEs. In a second example, we consider regime-switching term structure interest rate models (see, e.g., \cite{ChenMaYin17}).
This chapter is devoted to a discussion of applications of nuclear fission. It covers some aspects of the topics of nuclear reactors, nuclear safeguards and non-proliferation, reactor anti-neutrinos and nuclear medicine. It is, however, limited in scope and the reader is encouraged to explore the many other exciting sub-areas of the applications of nuclear fission.
With the advent of blockchain-enabled IoT applications, there is an increased need for related software patterns, middleware concepts, and testing practices to ensure adequate quality and productivity. IoT and blockchain each provide different design goals, concepts, and practices that must be integrated, including the distributed actor model and fault tolerance from IoT and transactive information integrity over untrustworthy sources from blockchain. Both IoT and blockchain are emerging technologies, and both lack codified patterns and practices for developing applications that combine them. This paper describes PlaTIBART, a platform for transactive IoT blockchain applications with repeatable testing that combines the Actor pattern (a commonly used model of computation in IoT) with a custom Domain Specific Language (DSL) and test network management tools. We show how PlaTIBART has been applied to develop, test, and analyze fault-tolerant IoT blockchain applications.
We briefly review the ideas that have shaped modern optics and have led to various applications of light ranging from spectroscopy to astrophysics, and street lights to quantum communication. The review is primarily focused on the modern applications of classical light and nonclassical light. Specific attention has been given to the applications of squeezed, antibunched, and entangled states of radiation field. Applications of Fock states (especially single photon states) in the field of quantum communication are also discussed.
The aim of this paper is to discuss some applications of general topology in computer algorithms including modeling and simulation, and also in computer graphics and image processing. While the progress in these areas heavily depends on advances in computing hardware, the major intellectual achievements are the algorithms. The applications of general topology in other branches of mathematics are not discussed, since they are not applications of mathematics outside of mathematics.
Context-awareness in personalized mobile applications is a growing area of study. Social context is one of the most important sources of information in human-activity-based applications. In this paper, we focus mainly on social relational context, which represents the interpersonal relationship between individuals, and on the role or influence of such context on users' diverse phone call activities in real-world life. Individuals' phone call activities, such as making a phone call to a particular person or responding to an incoming call, may differ from person to person based on interpersonal relationships such as family, friend, or colleague. However, it is very difficult for a device to understand such semantic relationships between individuals and to exploit them in context-aware applications. To address this issue, we explore the data-centric social relational context that can play a significant role in building context-aware personalized mobile applications for various purposes in real-world life.
In this work we propose a new non-monotonic activation function: the modulus. The majority of reported research on nonlinearities focuses on monotonic functions. We empirically demonstrate that, on computer vision tasks, models using the modulus activation function generalize better than with other nonlinearities -- up to a 15% accuracy increase on CIFAR100 and 4% on CIFAR10, relative to the best of the benchmark activations tested. With the proposed activation function, the vanishing-gradient and dying-neuron problems disappear, because the derivative of the activation function is always 1 or -1. The simplicity of the proposed function and its derivative make this solution especially suitable for TinyML and hardware applications.
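A minimal sketch of the proposed activation and its derivative, in NumPy (the original experiments presumably use a deep learning framework with automatic differentiation; the subgradient value chosen at zero is our convention, not specified in the abstract):

```python
import numpy as np

def modulus(x):
    """Modulus activation: f(x) = |x| (non-monotonic, piecewise linear)."""
    return np.abs(x)

def modulus_grad(x):
    """Derivative of |x|: +1 for x > 0, -1 for x < 0.
    The value at exactly 0 is a convention; we pick +1 here."""
    return np.where(x >= 0, 1.0, -1.0)

x = np.array([-2.0, -0.5, 0.0, 1.5])
print(modulus(x))       # [2.  0.5 0.  1.5]
print(modulus_grad(x))  # [-1. -1.  1.  1.]
```

Since the derivative has magnitude 1 everywhere, backpropagated gradients are neither shrunk nor zeroed by this nonlinearity, which is the mechanism behind the vanishing-gradient and dying-neuron claims above.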
In this paper a novel system for detecting meaningful deviations in a mobile application's network behavior is proposed. The main goal of the proposed system is to protect mobile device users and cellular infrastructure companies from malicious applications. The new system is capable of: (1) identifying malicious attacks or masquerading applications installed on a mobile device, and (2) identifying republishing of popular applications injected with malicious code. The detection is performed based on the application's network traffic patterns only. For each application, two types of models are learned. The first model, local, represents the personal traffic pattern for each user of an application and is learned on the device. The second model, collaborative, represents the traffic patterns of numerous users of an application and is learned on the system server. Machine-learning methods are used for learning and detection purposes. This paper focuses on the methods utilized for local (i.e., on-device) learning and detection of deviations from an application's normal behavior. These methods were implemented and evaluated on Android devices. The evaluation experiments demonstrate
In many applications, the common assumption that a driving noise process affecting a system is independent or Markovian may not be realistic, but the noise process may be assumed to be stationary. To study such problems, this paper investigates stochastic stability properties of a class of non-Markovian processes, where the existence of a stationary measure, asymptotic mean stationarity and ergodicity conditions are studied. Applications in feedback quantization and stochastic control are presented.
Deep learning applications have been thriving over the last decade in many different domains, including computer vision and natural language understanding. The drivers for the vibrant development of deep learning have been the availability of abundant data, breakthroughs of algorithms and the advancements in hardware. Despite the fact that complex industrial assets have been extensively monitored and large amounts of condition monitoring signals have been collected, the application of deep learning approaches for detecting, diagnosing and predicting faults of complex industrial assets has been limited. The current paper provides a thorough evaluation of the current developments, drivers, challenges, potential solutions and future research needs in the field of deep learning applied to Prognostics and Health Management (PHM) applications.