Found 20 results in total
We present a Maehara-style construction of Craig interpolants for the three-valued propositional logic of here and there (HT), also known as Gödel's $G_3$. The method adapts a recent interpolation technique that operates on classically encoded logic programs to a variation of a sequent calculus for HT by Mints. The approach is characterized by two stages: First, a preliminary interpolant is constructed, a formula that is an interpolant in some sense, but not yet the desired HT formula. In the second stage, an actual HT interpolant is obtained from this preliminary interpolant. With the classical encoding, the preliminary interpolant is a classical Craig interpolant for classical encodings of the two input HT formulas. In the presented adaptation, the sequent system operates directly on HT formulas, and the preliminary interpolant is in a nonclassical logic that generalizes HT with an additional logical operator.
We present automated theorem provers for the first-order logic of here and there (HT). They are based on a native sequent calculus for the logic of HT and an axiomatic embedding of the logic of HT into intuitionistic logic. The analytic proof search in the sequent calculus is optimized by using free variables and skolemization. The embedding is used in combination with sequent, tableau and connection calculi for intuitionistic first-order logic. All provers are evaluated on a large benchmark set of first-order formulas, providing a foundation for the development of more efficient HT provers.
Large language models are increasingly used for many applications. To prevent illicit use, it is desirable to be able to detect AI-generated text. Training and evaluation of such detectors critically depend on suitable benchmark datasets. Several groups took on the tedious work of collecting, curating, and publishing large and diverse datasets for this task. However, it remains an open challenge to ensure high quality in all relevant aspects of such a dataset. For example, the DetectRL benchmark exhibits relatively simple patterns of AI-generation in 98.5% of the Claude-LLM data. These patterns may include introductory words such as "Sure! Here is the academic article abstract:", or instances where the LLM rejects the prompted task. In this work, we demonstrate that detectors trained on such data use such patterns as shortcuts, which facilitates spoofing attacks on the trained detectors. We consequently reprocessed the DetectRL dataset with several cleansing operations. Experiments show that such data cleansing makes direct attacks more difficult. The reprocessed dataset is publicly available.
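As a rough illustration of the kind of cleansing operations described above (the exact rules used for the reprocessed DetectRL dataset are not given here, so the prefixes and refusal markers below are hypothetical placeholders), one might strip boilerplate openings and drop refusal responses before training a detector:

```python
import re

# Hypothetical patterns; the actual DetectRL cleansing rules may differ.
BOILERPLATE_PREFIXES = [
    r"^sure[,!]?\s*here is the academic article abstract:\s*",
    r"^certainly[,!]?\s*here is\b[^:]*:\s*",
]
REFUSAL_MARKERS = [
    "i cannot assist",
    "i'm sorry, but i can't",
    "as an ai language model",
]

def clean_sample(text: str) -> str | None:
    """Return a cleansed text, or None if the sample looks like a refusal."""
    lowered = text.strip().lower()
    if any(marker in lowered for marker in REFUSAL_MARKERS):
        return None  # drop refusals instead of labelling them as AI-generated prose
    cleaned = text.strip()
    for pattern in BOILERPLATE_PREFIXES:
        cleaned = re.sub(pattern, "", cleaned, flags=re.IGNORECASE)
    return cleaned.strip() or None

samples = [
    "Sure! Here is the academic article abstract: We study ...",
    "I'm sorry, but I can't help with that request.",
]
print([clean_sample(s) for s in samples])
```

Removing such shortcut patterns forces a detector to rely on properties of the generated prose itself, which is what makes direct spoofing attacks harder.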
Norm, the formal theoretical linguist, and Claudette, the computational language scientist, have a lovely time discussing whether modern language models can inform important questions in the language sciences. Just as they are about to part ways until they meet again, 25 of their closest friends show up -- from linguistics, neuroscience, cognitive science, psychology, philosophy, and computer science. We use this discussion to highlight what we see as some common underlying issues: the String Statistics Strawman (the mistaken idea that LMs can't be linguistically competent or interesting because they, like their Markov model predecessors, are statistical models that learn from strings) and the As Good As it Gets Assumption (the idea that LM research as it stands in 2026 is the limit of what it can tell us about linguistics). We clarify the role of LM-based work for scientific insights into human language and advocate for a more expansive research program for the language sciences in the AI age, one that takes on the commentators' concerns in order to produce a better and more robust science of both human language and of LMs.
We present HERE, an active 3D scene reconstruction framework based on neural radiance fields, enabling high-fidelity implicit mapping. Our approach centers around an active learning strategy for camera trajectory generation, driven by accurate identification of unseen regions, which supports efficient data acquisition and precise scene reconstruction. The key to our approach is epistemic uncertainty quantification based on evidential deep learning, which directly captures data insufficiency and exhibits a strong correlation with reconstruction errors. This allows our framework to more reliably identify unexplored or poorly reconstructed regions compared to existing methods, leading to more informed and targeted exploration. Additionally, we design a hierarchical exploration strategy that leverages learned epistemic uncertainty, where local planning extracts target viewpoints from high-uncertainty voxels based on visibility for trajectory generation, and global planning uses uncertainty to guide large-scale coverage for efficient and comprehensive reconstruction. The effectiveness of the proposed method in active 3D reconstruction is demonstrated by achieving higher reconstruction c
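As a loose illustration of epistemic-uncertainty quantification via evidential deep learning mentioned in this abstract, the sketch below uses the Normal-Inverse-Gamma parameterization of deep evidential regression (Amini et al., 2020); whether HERE uses exactly this head, and how the per-voxel bookkeeping is done, is not specified here and may differ:

```python
import numpy as np

def evidential_uncertainties(gamma, nu, alpha, beta):
    """Aleatoric and epistemic uncertainty from Normal-Inverse-Gamma evidence.

    gamma: predicted mean; nu: virtual observations supporting the mean;
    alpha, beta: Inverse-Gamma parameters of the variance (alpha > 1).
    Formulas follow deep evidential regression; this is an illustrative
    assumption, not the paper's published formulation.
    """
    aleatoric = beta / (alpha - 1.0)          # E[sigma^2]: data noise
    epistemic = beta / (nu * (alpha - 1.0))   # Var[mu]: shrinks as evidence grows
    return aleatoric, epistemic

# Toy per-voxel outputs: little evidence (small nu) -> high epistemic uncertainty.
gamma = np.array([0.2, 0.2])
nu    = np.array([0.5, 50.0])
alpha = np.array([1.5, 1.5])
beta  = np.array([0.1, 0.1])
print(evidential_uncertainties(gamma, nu, alpha, beta))
```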
The effectiveness of human-robot interaction often hinges on the ability to cultivate engagement - a dynamic process of cognitive involvement that supports meaningful exchanges. Many existing definitions and models of engagement are either too vague or lack the ability to generalize across different contexts. We introduce IM HERE, a novel framework that models engagement effectively in human-human, human-robot, and robot-robot interactions. By employing an effort-based description of bilateral relationships between entities, we provide an accurate breakdown of relationship patterns, simplifying them to focus placement and four key states. This framework captures mutual relationships, group behaviors, and actions conforming to social norms, translating them into specific directives for autonomous systems. By integrating both subjective perceptions and objective states, the model precisely identifies and describes miscommunication. The primary objective of this paper is to automate the analysis, modeling, and description of social behavior, and to determine how autonomous systems can behave in accordance with social norms for full social integration while simultaneously pursuing thei
Virtual reality games strive to provide the player with the most verisimilar experience possible. With the advancement of VR hardware, how people feel about and attach to a virtual world may become a mainstream concern. This paper discusses a possible solution for finding a better balance between the two classical genres of VR games: sensory stimulation and storytelling. To this end, we designed a game named "Bury Me Here," in which players can form an emotional bond with the game's protagonist. The game comprises four sections: the departure from the hometown, the travel on the train, the work in the office, and the life in the penthouse. At the game's end, the protagonist returns to his country home and spends the rest of his life there. All the sections are designed to tell a stranger's life story to the player, letting them experience someone else's life path and forging an emotional connection between the player and the protagonist through storytelling. Results show that the game provides an immersive visual experience and leaves emotive sparks echoing in players' minds.
Graph Foundation Models (GFMs) are emerging as a significant research topic in the graph domain, aiming to develop graph models trained on extensive and diverse data to enhance their applicability across various tasks and domains. Developing GFMs presents unique challenges over traditional Graph Neural Networks (GNNs), which are typically trained from scratch for specific tasks on particular datasets. The primary challenge in constructing GFMs lies in effectively leveraging vast and diverse graph data to achieve positive transfer. Drawing inspiration from existing foundation models in the CV and NLP domains, we propose a novel perspective for GFM development by advocating for a ``graph vocabulary'', in which the basic transferable units underlying graphs encode the invariances on graphs. We ground the graph vocabulary construction in essential aspects including network analysis, expressiveness, and stability. Such a vocabulary perspective can potentially advance future GFM design in line with neural scaling laws. All relevant resources related to GFM design can be found here.
Small primordial black holes could be captured by rocky planets or asteroids, consume their liquid cores from inside and leave hollow structures. We calculate the surface density and surface tension of a hollow structure around a black hole and compare them with the density and compressive strength of various materials that appear in nature to find the allowed parameter space. For example, granite or iron can support a hollow asteroid/planetoid/moon of sizes up to $0.1 R_\oplus$. Along the same lines, future civilizations might build spherical structures around black holes to harvest their energy. Using the strongest material that we currently know how to make (multiwall carbon nanotubes), such a shell around a one-solar-mass black hole would have to be constructed at distances larger than $10^4 R_\odot$ to withstand its gravity. Alternatively, a fast black hole can leave a narrow tunnel in a solid object while passing through it. For example, a $10^{22}$g black hole should leave a tunnel with a radius of $0.1$ micron, which is large enough to be seen with an optical microscope. We could look for such micro-tunnels here on Earth in very old rocks, or even glass or other solid structures in very old
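For orientation only, the quoted tunnel radius can be compared with the Schwarzschild radius of the same black hole. This back-of-the-envelope check is not the paper's derivation of the tunnel size, which depends on how the passing black hole damages the material:

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M = 1.0e22 * 1e-3  # 10^22 g expressed in kg

r_s = 2 * G * M / c**2   # Schwarzschild radius of the black hole
tunnel_radius = 0.1e-6   # 0.1 micron, as quoted in the abstract

print(f"Schwarzschild radius: {r_s:.2e} m")                       # ~1.5e-8 m
print(f"Tunnel/Schwarzschild ratio: {tunnel_radius / r_s:.1f}")   # ~7
```

So the claimed tunnel is roughly an order of magnitude wider than the black hole's own horizon, which is consistent with it being visible in an optical microscope while the black hole itself is not.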
Variational quantum algorithms use non-convex optimization methods to find the optimal parameters for a parametrized quantum circuit in order to solve a computational problem. The choice of the circuit ansatz, which consists of parameterized gates, is crucial to the success of these algorithms. Here, we propose a gate which fully parameterizes the special unitary group $\mathrm{SU}(N)$. This gate is generated by a sum of non-commuting operators, and we provide a method for calculating its gradient on quantum hardware. In addition, we provide a theorem for the computational complexity of calculating these gradients by using results from Lie algebra theory. In doing so, we further generalize previous parameter-shift methods. We show that the proposed gate and its optimization satisfy the quantum speed limit, resulting in geodesics on the unitary group. Finally, we give numerical evidence to support the feasibility of our approach and show the advantage of our gate over a standard gate decomposition scheme. In particular, we show that not only the expressibility of an ansatz matters, but also how it is explicitly parameterized.
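As a minimal numerical sketch of a gate generated by a sum of non-commuting operators (here the $\mathrm{SU}(2)$ case with Pauli generators), the code below uses dense matrix exponentiation and a finite-difference gradient purely for illustration; it does not reproduce the paper's hardware-friendly, generalized parameter-shift rule:

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices: non-commuting generators of su(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
GENERATORS = [X, Y, Z]

def su_gate(theta):
    """U(theta) = exp(-i * sum_j theta_j G_j) with non-commuting G_j."""
    H = sum(t * G for t, G in zip(theta, GENERATORS))
    return expm(-1j * H)

def expectation(theta, obs, state):
    """<psi| obs |psi> after applying the gate to the input state."""
    psi = su_gate(theta) @ state
    return np.real(np.conj(psi) @ obs @ psi)

def finite_diff_grad(theta, obs, state, eps=1e-6):
    """Central-difference gradient (classical stand-in for parameter-shift)."""
    grad = np.zeros_like(theta)
    for j in range(len(theta)):
        tp, tm = theta.copy(), theta.copy()
        tp[j] += eps
        tm[j] -= eps
        grad[j] = (expectation(tp, obs, state) - expectation(tm, obs, state)) / (2 * eps)
    return grad

theta = np.array([0.3, -0.2, 0.5])
state = np.array([1.0, 0.0], dtype=complex)
print(finite_diff_grad(theta, obs=Z, state=state))
```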
Long-baseline (LBL) accelerator neutrino oscillation experiments, such as NOvA and T2K in the current generation, and DUNE-LBL and HK-LBL in the coming years, will measure the remaining unknown oscillation parameters with excellent precision. These analyses assume external input on the so-called ``solar parameters,'' $\theta_{12}$ and $\Delta m^2_{21}$, from solar experiments such as SNO, SK, and Borexino, as well as reactor experiments like KamLAND. Here we investigate their role in long-baseline experiments. We show that, without external input on $\Delta m^2_{21}$ and $\theta_{12}$, the sensitivity to detecting and quantifying CP violation is significantly, but not entirely, reduced. Thus long-baseline accelerator experiments can actually determine $\Delta m^2_{21}$ and $\theta_{12}$, and thus all six oscillation parameters, without input from \emph{any} other oscillation experiment. In particular, $\Delta m^2_{21}$ can be determined; thus DUNE-LBL and HK-LBL can measure both the solar and atmospheric mass splittings in their long-baseline analyses alone. While their sensitivities are not competitive with existing constraints, they are very orthogonal probes of solar parameters and provide a key consistency check of
This study examines the motion of particles using the mathematical methods of chronometric invariants (physically observable quantities in General Relativity). It is shown that, aside from mass-bearing particles and light-like particles, "zero-particles" can exist in fully degenerate space-time (zero-space). For a regular observer, zero-particles move instantaneously, thus transferring long-range action. Further, we show the existence of two separate regions in inhomogeneous space-time, where observable time flows into the future and into the past, while this duality is not found in homogeneous space-time. These regions are referred to as our world, where time flows into the future, and the mirror Universe, where time flows into the past. The regions are separated by a space-time membrane, referred to as zero-space, where observable time stops.
The synthesis of energy systems is a two-stage optimization problem where design decisions have to be implemented here-and-now (first stage), while for the operation of the installed components we can wait-and-see (second stage). To identify a sustainable design, we need to account for both economic and environmental criteria, leading to multi-objective optimization problems. However, multi-objective optimization generally leads not to one optimal design but to multiple Pareto-efficient design options, so the decision maker usually has to decide manually which design should finally be implemented. In this paper, we propose the flexible here-and-now decision (flex-hand) approach for automatically identifying a single design in multi-objective optimization. The approach minimizes the distance between the Pareto front obtained with one fixed design and the Pareto front obtained when multiple designs are allowed. Uncertainty regarding the parameters of future operation can easily be included through a robust extension of the flex-hand approach. Results of a real-world case study show that the obtained design is highly flexible in adapting operation to the considered objective functions. Thus, the design provides
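A toy sketch of the kind of quantity such an approach minimizes, i.e. a distance between the Pareto front computed for one fixed design and the reference front obtained when the design may vary; the specific metric and the underlying optimization model of the flex-hand approach are not reproduced here:

```python
import numpy as np

def front_distance(fixed_front, reference_front):
    """Mean distance from points of the fixed-design front to the reference front.

    Both fronts are arrays of shape (n_points, n_objectives), e.g. (cost, emissions).
    This nearest-neighbour metric is only one possible choice of distance.
    """
    dists = [np.min(np.linalg.norm(reference_front - p, axis=1)) for p in fixed_front]
    return float(np.mean(dists))

reference = np.array([[1.0, 5.0], [2.0, 3.0], [4.0, 1.0]])   # multiple designs allowed
fixed     = np.array([[1.2, 5.4], [2.5, 3.3], [4.6, 1.4]])   # one fixed design
print(front_distance(fixed, reference))
```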
Different notions of equivalence, such as the prominent notions of strong and uniform equivalence, have been studied in Answer-Set Programming, mainly for the purpose of identifying programs that can serve as substitutes without altering the semantics, for instance in program optimization. Such semantic comparisons are usually characterized by various selections of models in the logic of Here-and-There (HT). For uniform equivalence, however, correct characterizations in terms of HT-models can only be obtained for finite theories (respectively, programs). In this article, we show that a selection of countermodels in HT captures uniform equivalence also for infinite theories. This result is turned into coherent characterizations of the different notions of equivalence by countermodels, as well as by a mixture of HT-models and countermodels (so-called equivalence interpretations). Moreover, we generalize the notion of relativized hyperequivalence for programs to propositional theories, and apply the same methodology in order to obtain a semantic characterization which is amenable to infinite settings. This allows for a lifting of the results to first-order theories under a very
Context: Nano-diamonds are an enticing and enigmatic dust component, yet their origin is still unclear. They have been unequivocally detected in only a few astronomical objects, yet they are the most abundant of the pre-solar grains, both in terms of mass and number. Aims: Our goal is to derive a viable set of nano-diamond optical constants and optical properties to enable their modelling in any type of astrophysical object where, primarily, the local (inter)stellar radiation field is well-determined. Methods: The complex indices of refraction, $m(n,k)$, of nano-diamonds, constrained by available laboratory measurements, were calculated as a function of size, surface hydrogenation, and internal (dis)order, using the THEMIS a-C(:H) methodology optEC$_{\rm (s)}$(a). Results: To demonstrate the utility of the optical properties (the efficiency factors $Q_{\rm ext}$, $Q_{\rm sca}$, and $Q_{\rm abs}$), calculated using the derived $m(n,k)$ data, we show that nano-diamonds could be abundant in the interstellar medium (ISM) and yet remain undetectable there. Conclusions: The derived optical constants provide a means to explore the existence and viability of nano-diamonds in a
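For grains much smaller than the wavelength, efficiency factors of the kind mentioned above can be estimated in the Rayleigh limit directly from the complex refractive index. The sketch below uses placeholder $m(n,k)$ values, not the nano-diamond optical constants derived in the paper:

```python
import numpy as np

def rayleigh_efficiencies(n, k, radius_um, wavelength_um):
    """Q_abs and Q_sca for a sphere much smaller than the wavelength (Rayleigh limit)."""
    m = complex(n, k)                           # placeholder refractive index
    x = 2 * np.pi * radius_um / wavelength_um   # size parameter
    lorentz = (m**2 - 1) / (m**2 + 2)
    q_abs = 4 * x * lorentz.imag
    q_sca = (8.0 / 3.0) * x**4 * abs(lorentz) ** 2
    return q_abs, q_sca, q_abs + q_sca          # Q_ext = Q_abs + Q_sca in this limit

# 1 nm grain at 0.2 micron (UV); the n, k values are illustrative only.
print(rayleigh_efficiencies(n=2.4, k=0.05, radius_um=0.001, wavelength_um=0.2))
```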
We motivate and address a human-in-the-loop variant of the monocular viewpoint estimation task in which the location and class of one semantic object keypoint is available at test time. In order to leverage the keypoint information, we devise a Convolutional Neural Network called Click-Here CNN (CH-CNN) that integrates the keypoint information with activations from the layers that process the image. It transforms the keypoint information into a 2D map that can be used to weigh features from certain parts of the image more heavily. The weighted sum of these spatial features is combined with global image features to provide relevant information to the prediction layers. To train our network, we collect a novel dataset of 3D keypoint annotations on thousands of CAD models, and synthetically render millions of images with 2D keypoint information. On test instances from PASCAL 3D+, our model achieves a mean class accuracy of 90.7%, whereas the state-of-the-art baseline only obtains 85.7% mean class accuracy, justifying our argument for human-in-the-loop inference.
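A minimal sketch of the core idea, converting a clicked keypoint into a 2D map and using it to reweight spatial features before combining them with global image features; the layer sizes and the map construction here are illustrative, not the published CH-CNN architecture:

```python
import numpy as np

def keypoint_map(h, w, kp_row, kp_col, sigma=2.0):
    """Soft 2D map peaked at the clicked keypoint location, normalized to sum to 1."""
    rows, cols = np.mgrid[0:h, 0:w]
    g = np.exp(-((rows - kp_row) ** 2 + (cols - kp_col) ** 2) / (2 * sigma**2))
    return g / g.sum()

def keypoint_weighted_features(spatial_feats, kp_row, kp_col):
    """spatial_feats: (C, H, W) activations; returns a C-dim keypoint-weighted sum."""
    c, h, w = spatial_feats.shape
    weights = keypoint_map(h, w, kp_row, kp_col)        # (H, W)
    return (spatial_feats * weights[None]).sum(axis=(1, 2))

spatial = np.random.rand(64, 14, 14)       # conv feature map (toy sizes)
global_feat = np.random.rand(256)          # e.g. fully-connected image features
local_feat = keypoint_weighted_features(spatial, kp_row=5, kp_col=9)
fused = np.concatenate([global_feat, local_feat])   # fed to the viewpoint prediction layers
print(fused.shape)
```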
We present the results of an analysis of ``snapshot'' spectra of 253 metal-poor halo stars with -3.8 < [Fe/H] < -1.5 obtained in the HERES survey. The spectra are analysed using an automated line profile analysis method based on the Spectroscopy Made Easy codes of Valenti & Piskunov. Elemental abundances of moderate precision have been obtained for 22 elements: C, Mg, Al, Ca, Sc, Ti, V, Cr, Mn, Fe, Co, Ni, Zn, Sr, Y, Zr, Ba, La, Ce, Nd, Sm, and Eu, where detectable. Among the sample of 253 stars, we find 8 r-II stars and 35 r-I stars. We also find three stars with strong enhancements of Eu which are s-process rich. A significant number of new very metal-poor stars are confirmed: 49 stars with [Fe/H] < -3 and 181 stars with -3 < [Fe/H] < -2. We find one star with [Fe/H] < -3.5. We find the scatter in the abundance ratios of Mg, Ca, Sc, Ti, Cr, Fe, Co, and Ni, with respect to Fe and Mg, to be similar to the estimated relative errors and thus the cosmic scatter to be small, perhaps even non-existent. The elements C, Sr, Y, Ba and Eu, and perhaps Zr, show scatter at [Fe/H] < -2.5 significantly larger than can be explained from the errors in the analysis, implying scatt
In the theory of answer set programming, two groups of rules are called strongly equivalent if, informally speaking, they have the same meaning in any context. The relationship between strong equivalence and the propositional logic of here-and-there allows us to establish strong equivalence by deriving rules of each group from rules of the other. In the process, rules are rewritten as propositional formulas. We extend this method of proving strong equivalence to an answer set programming language that includes operations on integers. The formula representing a rule in this language is a first-order formula that may contain comparison symbols among its predicate constants, and symbols for arithmetic operations among its function constants. The paper is under consideration for acceptance in TPLP.
The bias-variance decomposition is a central result in statistics and machine learning, but is typically presented only for the squared error. We present a generalization of the bias-variance decomposition where the prediction error is a Bregman divergence, which is relevant to maximum likelihood estimation with exponential families. While the result is already known, there was not previously a clear, standalone derivation, so we provide one for pedagogical purposes. A version of this note previously appeared on the author's personal website without context. Here we provide additional discussion and references to the relevant prior literature.
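For reference, one standard statement of this decomposition (following the derivation the note describes; the notation here is ours): for a Bregman divergence $D_F$, a random target $Y$, and a prediction $\hat{Y}$ that varies with the training sample,

$$
\mathbb{E}_{Y,\hat{Y}}\!\left[D_F(Y,\hat{Y})\right]
= \underbrace{\mathbb{E}_{Y}\!\left[D_F(Y,\bar{Y})\right]}_{\text{noise}}
+ \underbrace{D_F(\bar{Y},\tilde{Y})}_{\text{bias}}
+ \underbrace{\mathbb{E}_{\hat{Y}}\!\left[D_F(\tilde{Y},\hat{Y})\right]}_{\text{variance}},
$$

where $\bar{Y} = \mathbb{E}[Y]$ and $\tilde{Y} = (\nabla F)^{-1}\!\left(\mathbb{E}[\nabla F(\hat{Y})]\right)$ is the dual-averaged prediction. For $F(x) = \|x\|^2$, $D_F$ is the squared error and $\tilde{Y}$ reduces to the ordinary mean prediction, recovering the classical noise-bias-variance decomposition.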
We introduce a new approach to deriving approximate analytical solutions of a harmonic oscillator damped by purely nonlinear, or combinations of linear and nonlinear damping forces. Our approach is based on choosing a suitable trial solution, i.e. an ansatz, which is the product of the time-dependent amplitude and the oscillatory (trigonometric) function that has the same frequency but different initial phase, compared to the undamped case. We derive the equation for the amplitude decay using the connection of the energy dissipation rate with the power of the total damping force and the approximation that the amplitude changes slowly over time compared to the oscillating part of the ansatz. By matching our ansatz to the initial conditions, we obtain the equations for the corresponding initial amplitude and initial phase. Here we demonstrate the validity of our approach in the case of damping quadratic in velocity, Coulomb damping, and a combination of the two, i.e. in this paper we consider purely nonlinear damping, while the dynamics with combinations of damping linear in velocity and nonlinear damping will be analyzed in a follow-up paper. In the case of damping quadratic in velo
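As a concrete instance of the averaging step described in this abstract, for a damping force quadratic in velocity, $m\ddot{x} + c\,\dot{x}|\dot{x}| + m\omega^2 x = 0$, the slowly varying amplitude $A(t)$ in an ansatz of the form $x(t) = A(t)\cos(\omega t + \varphi)$ obeys the standard energy-balance result (shown here only to illustrate the type of amplitude equation such a method yields, not as the paper's exact derivation):

$$
m\omega^2 A\,\frac{dA}{dt} \simeq \frac{dE}{dt}
= -\left\langle c\,|\dot{x}|^3 \right\rangle
= -\frac{4}{3\pi}\,c\,\omega^3 A^3
\quad\Longrightarrow\quad
A(t) = \frac{A_0}{1 + \dfrac{4 c \omega A_0}{3\pi m}\, t},
$$

i.e. quadratic damping gives an algebraic (hyperbolic) rather than exponential amplitude decay, while the same argument applied to Coulomb (dry-friction) damping gives an amplitude that decreases linearly in time.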