Using standard financial ratios as variables in statistical analyses has been linked to several serious problems, such as extreme outliers, asymmetry, non-normality, and non-linearity. The compositional-data methodology has been successfully applied to solve these problems and has consistently yielded substantially different results from those obtained with standard financial ratios. An under-researched area is the use of financial log-ratios computed with the compositional-data methodology to predict bankruptcy or the related notions of business default, insolvency, or failure. Another under-researched area is the combination of machine learning methods with compositional log-ratios. The present article adapts the classical Altman bankruptcy prediction model and some of its extensions to the compositional methodology with pairwise log-ratios and three common statistical and machine learning tools (logistic regression models, k-nearest neighbours, and random forests), and compares the results with those of standard financial ratios. Data from the sector of the Spanish economy with the largest number of bankrupt firms according to the first two digits of the NACE code (46XX "wholesale trade, except of
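A minimal sketch of the pairwise log-ratio transformation mentioned above, assuming strictly positive financial magnitudes (the variable names `EBIT`, `WC`, `TA` are illustrative placeholders, not the article's actual variable set). Replacing each ratio x_i/x_j with ln(x_i/x_j) symmetrises the distribution and tames the extreme outliers that plague raw ratios:

```python
import numpy as np

def pairwise_log_ratios(X, names):
    """Compute all pairwise log-ratios ln(x_i / x_j), i < j.

    X     : (n_firms, D) array of strictly positive financial magnitudes.
    names : list of D component names, used to label the output columns.
    Returns (Z, labels): Z is (n_firms, D*(D-1)/2).
    """
    n, D = X.shape
    cols, labels = [], []
    for i in range(D):
        for j in range(i + 1, D):
            cols.append(np.log(X[:, i] / X[:, j]))
            labels.append(f"ln({names[i]}/{names[j]})")
    return np.column_stack(cols), labels

# Hypothetical data: two firms, three positive balance-sheet components.
X = np.array([[120.0, 40.0, 200.0],
              [80.0, 35.0, 150.0]])
Z, labels = pairwise_log_ratios(X, ["EBIT", "WC", "TA"])
```

The resulting columns of `Z` can then be fed to a logistic regression, k-nearest-neighbours, or random-forest classifier in place of the standard ratios.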
Objective: The Bland and Altman plot is a widely cited and applied graphical approach for assessing the equivalence of quantitative measurement techniques, usually with the aim of replacing a traditional technique with a new, less invasive, or less expensive one. Although easy to communicate, the Bland and Altman plot is often misinterpreted because it lacks suitable inferential statistical support. The usual alternatives, such as Pearson's correlation or ordinary least-squares linear regression, also fail to locate the weaknesses of each measurement technique. Method: Here, inferential statistical support for equivalence between measurement techniques is proposed through three nested tests based on structural regressions, assessing the equivalence of structural means (accuracy), the equivalence of structural variances (precision), and concordance with the structural bisector line (agreement in measurements obtained from the same subject), using both analytical methods and a robust bootstrap approach. Graphical outputs are also implemented following Bland and Altman's principles for easy communication. Results: The performance of this method is shown and confronted with five data sets from previously published art
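For reference, the classical Bland–Altman quantities that the proposed structural-regression tests build on can be sketched as follows (this is the standard bias and 95% limits-of-agreement computation, not the authors' nested-test implementation):

```python
import numpy as np

def bland_altman(a, b):
    """Classical Bland-Altman summary for two paired measurement series.

    Returns (bias, lower, upper): the mean difference and the
    95% limits of agreement, bias +/- 1.96 * SD of the differences.
    """
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Toy paired measurements from two hypothetical techniques.
bias, lo, hi = bland_altman([10, 12, 11, 13], [9, 12, 10, 14])
```

On a plot, `diff` is shown against the pairwise means, with horizontal lines at `bias`, `lo`, and `hi`; the inferential tests described in the abstract supply the missing statistical support for these visual judgements.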
J. Conway defined useful operations on the Class of combinatorial games and also introduced a notion of equivalence between games. Conway showed that, under his equivalence, games form a Group. However, the Conway product is not well defined on equivalence classes of arbitrary games (though it is well defined for surreals). We consider an equivalence relation finer than Conway's and show that, under such a relation, combinatorial games actually form a Ring. We also hint at other possible relations on the Class of combinatorial games.
How can we determine whether an AI system preserves itself as a deeply held objective or merely as an instrumental strategy? Autonomous agents with memory, persistent context, and multi-step planning create a measurement problem: terminal and instrumental self-preservation can produce similar behavior, so behavior alone cannot reliably distinguish them. We introduce the Unified Continuation-Interest Protocol (UCIP), a detection framework that shifts analysis from behavior to latent trajectory structure. UCIP encodes trajectories with a Quantum Boltzmann Machine, a classical model using density-matrix formalism, and measures von Neumann entropy over a bipartition of hidden units. The core hypothesis is that agents with terminal continuation objectives (Type A) produce higher entanglement entropy than agents with merely instrumental continuation (Type B). UCIP combines this signal with diagnostics of dependence, persistence, perturbation stability, counterfactual restructuring, and confound-rejection filters for cyclic adversaries and related false-positive patterns. On gridworld agents with known ground truth, UCIP achieves 100% detection accuracy. Type A and Type B agents show an e
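The entanglement-entropy signal at the core of UCIP can be illustrated with a standard computation (an illustrative sketch, not the UCIP implementation: `dim_A`/`dim_B` here are generic bipartition sizes standing in for the hidden-unit bipartition of the abstract):

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log rho), via the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]  # discard numerically zero eigenvalues
    return float(-np.sum(evals * np.log(evals)))

def entanglement_entropy(psi, dim_A, dim_B):
    """Entropy of subsystem A for a pure state psi on A x B.

    Reshaping psi to (dim_A, dim_B) makes the partial trace over B
    a single matrix product: rho_A = M M^dagger.
    """
    M = psi.reshape(dim_A, dim_B)
    rho_A = M @ M.conj().T
    return von_neumann_entropy(rho_A)

# Maximally entangled two-qubit (Bell) state: entropy ln(2).
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
# Product state: entropy 0.
product = np.array([1.0, 0.0, 0.0, 0.0])
```

In UCIP's terms, a higher value of this entropy over the chosen hidden-unit bipartition is the hypothesised signature of terminal (Type A) rather than instrumental (Type B) continuation objectives.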
We implement and benchmark on IBM Quantum hardware the circuit family proposed by Violaris for estimating operational inter-branch communication witnesses, defined as correlations in classical measurement records produced by compiled Wigner's-friend-style circuits. We realize a five-qubit instance of the protocol as an inter-register message-transfer pattern within a single circuit, rather than physical signaling, and evaluate its behavior under realistic device noise and compilation constraints. The circuit encodes branch-conditioned evolution of an observer subsystem whose dynamics depend on a control qubit, followed by a controlled transfer operation that probes correlations between conditional measurement contexts. Executing on the ibm_fez backend with 20000 shots, we observe population-based visibility of 0.877, coherence witnesses of 0.840 and -0.811 along orthogonal axes, and a phase-sensitive magnitude of approximately 1.17. While the visibility metric is insensitive to some classes of dephasing, the coherence witnesses provide complementary sensitivity to off-diagonal noise. This work does not test or discriminate among interpretations of quantum mechanics. Instead, it pro
Precise frequency values have been determined for H$_2^{~16}$O radio lines appearing in protected line lists of the International Astronomical Union and the Panel on Frequency Allocations of the US National Academy of Sciences. The improved precision is attributable to a spectroscopic network built from a large set of near-infrared Lamb-dip lines augmented with a handful of ultrahigh-accuracy rotational transitions. The ultraprecise H$_2^{~16}$O network contains 376 Lamb-dip lines recorded previously via our frequency-comb locked cavity-enhanced spectrometer. During the present study, altogether 55 target lines have been (re)measured at high accuracy. Due to our network-assisted measurements, the accuracy has been significantly improved with respect to previous direct radio-frequency measurements for all the protected lines of H$_2^{~16}$O above 750 GHz. Furthermore, 43 of these protected transitions now benefit from the accuracy of the new near-infrared Lamb dips reported in this paper.
Designing a deterministic polynomial time algorithm for factoring univariate polynomials over finite fields remains a notorious open problem. In this paper, we present an unconditional deterministic algorithm that takes as input an irreducible polynomial $f \in \mathbb{Z}[x]$, and computes the factorisation of its reductions modulo $p$ for all primes $p$ up to a prescribed bound $N$. The \emph{average running time per prime} is polynomial in the size of the input and the degree of the splitting field of $f$ over $\mathbb{Q}$. In particular, if $f$ is Galois, we succeed in factoring in (amortised) deterministic polynomial time.
We prove that suitably generic pairs of linear equations on an even number of variables are uncommon. This verifies a conjecture of Kamčev, Morrison and the second author. Moreover, we prove that any large system containing such a $(2\times k)$-system as a minimal subsystem is uncommon.
We present an approach for GW calculations of quasiparticle energies with quasi-quadratic scaling by approximating high-energy contributions to the Green's function in its Lehmann representation with effective stochastic vectors. The method is easy to implement without altering the GW code, converges rapidly with stochastic parameters, and treats systems of various dimensionality and screening response. Our calculations on a 5.75$^\circ$ twisted MoS$_2$ bilayer show how large-scale GW methods include geometry relaxations and electronic correlations on an equal basis in structurally nontrivial materials.
Given a well-quasi-order $X$ and an ordinal $α$, the set $s^F_α(X)$ of transfinite sequences on $X$ with length less than $α$ and with finite image is also a well-quasi-order, as proven by Nash-Williams. Before Nash-Williams proved it for general $α$, however, it was proven for $α<ω^ω$ by Erdős and Rado. In this paper, we revisit Erdős and Rado's proof and improve upon it, using it to obtain upper bounds on the maximum linearization of $s^F_{ω^k}(X)$ in terms of $k$ and $o(X)$, where $o(X)$ denotes the maximum linearization of $X$. We show that, for fixed $k$, $o(s^F_{ω^k}(X))$ is bounded above by a function which can roughly be described as $(k+1)$-times exponential in $o(X)$. We also show that, for $k\le 2$, this bound is not far from tight.
A system of linear equations in $\mathbb{F}_p^n$ is \textit{common} if every two-colouring of $\mathbb{F}_p^n$ yields at least as many monochromatic solutions as a random two-colouring, asymptotically as $n \to \infty$. By analogy to the graph-theoretic setting, Alon has asked whether any (non-Sidorenko) system of linear equations can be made uncommon by adding sufficiently many free variables. Fox, Pham and Zhao answered this question in the affirmative among systems which consist of a single equation. We answer Alon's question in the negative. We also observe that the property of remaining common despite the addition of arbitrarily many free variables is closely related to a notion of commonness in which one replaces the arithmetic mean of the number of monochromatic solutions with the geometric mean, and furthermore resolve questions of Kamčev--Liebenau--Morrison.
In every organization today, financial analysis provides the basis for understanding and evaluating the results of business operations and for conveying how well a business is doing. This enables organizations to control operational activities, primarily those related to corporate finance. One way of doing this is through bankruptcy prediction analysis. This paper develops an ontological model from the financial information of an organization by analyzing the semantics of its financial statements. One of the best-known bankruptcy prediction models is the Altman Z-score model, which uses financial ratios to predict bankruptcy. From the financial ontological model, relations between financial data are discovered using a data mining algorithm. By combining the financial domain ontology with an association rule mining algorithm and the Z-score model, a new business intelligence model is developed to predict bankruptcy.
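The Z-score component mentioned above follows Altman's original (1968) formula for publicly traded manufacturing firms; a minimal sketch (the input values below are hypothetical, and the abstract does not specify which Z-score variant the authors use):

```python
def altman_z(wc_ta, re_ta, ebit_ta, mve_tl, sales_ta):
    """Altman's original (1968) Z-score.

    wc_ta    : working capital / total assets
    re_ta    : retained earnings / total assets
    ebit_ta  : EBIT / total assets
    mve_tl   : market value of equity / total liabilities
    sales_ta : sales / total assets
    """
    return (1.2 * wc_ta + 1.4 * re_ta + 3.3 * ebit_ta
            + 0.6 * mve_tl + 0.999 * sales_ta)

# Hypothetical firm; commonly quoted zones: Z > 2.99 "safe",
# Z < 1.81 "distress", in between the "grey" zone.
z = altman_z(0.2, 0.2, 0.1, 1.0, 1.5)
```

In the proposed business intelligence model, ratios like these would be populated from the financial ontology and combined with association rules mined from the same data.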
A system of linear equations in $\mathbb{F}_p^n$ is \textit{Sidorenko} if any subset of $\mathbb{F}_p^n$ contains at least as many solutions to the system as a random set of the same density, asymptotically as $n\to \infty$. A system of linear equations is \textit{common} if any 2-colouring of $\mathbb{F}_p^n$ yields at least as many monochromatic solutions to the system of equations as a random 2-colouring, asymptotically as $n\to \infty$. Both classification problems remain wide open despite recent attention. We show that a certain generic family of systems of two linear equations is not Sidorenko. In fact, we show that systems in this family are not locally Sidorenko, and that systems in this family which do not contain additive tuples are not weakly locally Sidorenko. This endeavour answers a conjecture and a question of Kamčev--Liebenau--Morrison. As for methods, we observe that the true complexity of a linear system is not maintained under Fourier inversion; our main novelty is the use of higher-order methods in the frequency space of systems which have complexity one. We also give a shorter proof of the recent result of Kamčev--Liebenau--Morrison and independently Versteeg