We present a lattice QCD spectroscopy study of the conjectured H dibaryon in five different channels at nine different temperatures. The H-dibaryon operators are constructed in five channels: the flavor singlet, the flavor 27-plet, $ΛΛ$, $N Ξ$ and $ΣΣ$. The nine temperatures range from $T/T_c = 0.24$ to $T/T_c = 1.90$. The simulations are performed on anisotropic lattices with $N_f=2+1$ flavours of clover fermions at a quark mass corresponding to $m_π = 384(4)\,{\rm MeV}$. The thermal ensembles were provided by the FASTSUM collaboration and the zero-temperature ensembles by the HadSpec collaboration. The simulations show that the H-dibaryon mass in the 27-plet channel is the largest across the different temperatures, while the mass in the $ΣΣ$ channel is the lightest. We also calculate the spectral functions of the H-dibaryon correlation functions for the five channels; the spectral density distributions exhibit similar behavior in all five. The mass differences $Δm = m_H - 2\,m_Λ$ between the H dibaryon and a $Λ$ pair at $T/T_c = 0.24$ are also estimated for the five channels. The results show that $Δm = m_H - 2\,m_Λ$ is positive for the 27-plet and $ΛΛ$ channels, while $Δm = m_H -
We propose the Manifold Function Encoder (MFE) for identifying different functions defined on different manifolds. Both a manifold in Euclidean space and a function defined on this manifold can be viewed as bounded linear functionals on a suitable space of continuous functions. From this perspective, we treat manifold functions as elements of the dual space. By expanding them in the dual space with respect to an appropriate approximating sequence of bases, we obtain a corresponding method for encoding manifold functions, namely MFE. In particular, we prove that MFE achieves super-algebraic convergence with the smooth bases commonly used in spectral methods, such as Legendre polynomials and the Fourier basis. We further extend MFE to handle more complex cases, including joint manifold functions of different dimensions and manifold functions with different measures. In addition, we develop the approximation theory for MFE-based operator learning, in particular for learning the solution mappings of PDEs defined on varying domains, together with several numerical experiments including the 2-d Poisson equation and the 3-d elasticity problem on a real-world bearing.
Deep neural networks are vulnerable to adversarial examples (AEs), which exhibit adversarial transferability: AEs generated for a source model can mislead another (target) model's predictions. However, this transferability has not been understood in terms of which class the target model's predictions are misled to (i.e., class-aware transferability). In this paper, we differentiate the cases in which a target model predicts the same wrong class as the source model ("same mistake") or a different wrong class ("different mistake") in order to analyze and explain the mechanism. We find that (1) AEs tend to cause same mistakes, which correlates with "non-targeted transferability"; however, (2) different mistakes occur even between similar models, regardless of the perturbation size. Furthermore, we present evidence that the difference between same mistakes and different mistakes can be explained by non-robust features, i.e., predictive but human-uninterpretable patterns: different mistakes occur when the non-robust features in AEs are used differently by the models. Non-robust features can thus provide consistent explanations for the class-aware transferability of AEs.
Social network analysis is a popular discipline among the social and behavioural sciences, in which the relationships between different social entities are modelled as a network. One of the most popular problems in social network analysis is finding communities in the network structure. Usually, a community in a social network is a functional sub-partition of the graph. However, as the definition of community is somewhat imprecise, many algorithms have been proposed to solve this task, each of them focusing on different social characteristics of the actors and the communities. In this work, we propose novel combinations of affinity functions designed to capture different social mechanics in network interactions. We use them to extend existing community detection algorithms, so that they can exploit social interactions different from those captured by the original algorithms.
Different measurements of the Hubble constant ($H_{0}$) are not consistent, and a tension between CMB-based methods and cosmic-distance-ladder-based methods has been observed. Measurements from various distance-based methods are also inconsistent. To aggravate the problem, the same cosmological probe (Type Ia SNe, for instance) calibrated through different methods also provides different values of $H_{0}$. We compare various distance-ladder-based methods using the unique data already available from the Hubble Space Telescope (HST). Our analysis is based on a parametric test (the T-test) as well as non-parametric statistical methods such as the Mann-Whitney U test and the Kolmogorov-Smirnov test. Our results show that different methods provide different values of $H_0$ and that the differences are statistically significant. Biases in the calibration cannot account for these differences, as the data were taken with a single telescope under a common calibration scheme. Unknown physical effects, or issues with the empirical distance-measurement relations of the different probes, could give rise to these differences.
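As a minimal sketch of the two non-parametric comparisons named in this abstract, the two-sample statistics can be computed directly in pure Python. The $H_0$ samples below are hypothetical placeholders, not data from the study, and only the test statistics (not p-values) are shown.

```python
# Sketch (pure Python): the two non-parametric statistics used to compare
# H0 estimates from two distance-ladder calibrations. Sample values are
# illustrative placeholders, not HST data.

def mann_whitney_u(x, y):
    """U statistic: number of (x_i, y_j) pairs with x_i > y_j (+0.5 for ties)."""
    return sum(1.0 if xi > yj else 0.5 if xi == yj else 0.0
               for xi in x for yj in y)

def ks_statistic(x, y):
    """Two-sample Kolmogorov-Smirnov statistic: max |F_x(t) - F_y(t)|."""
    def ecdf(sample, t):
        return sum(v <= t for v in sample) / len(sample)
    return max(abs(ecdf(x, t) - ecdf(y, t)) for t in list(x) + list(y))

# Hypothetical H0 samples (km/s/Mpc) from two calibration methods:
method_a = [73.2, 74.0, 72.8, 73.5]
method_b = [69.8, 70.4, 69.6, 71.0]

print(mann_whitney_u(method_a, method_b))  # 16.0: every method_a value exceeds every method_b value
print(ks_statistic(method_a, method_b))    # 1.0: the two empirical CDFs fully separate
```

With fully separated samples like these, both statistics sit at their extremes; converting them to p-values (e.g. via the normal approximation for U) would be the next step in a real analysis.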
We investigate photon emission in pion-pion and pion-proton scattering in the soft-photon limit, where the photon energy $ω\to 0$. The expansions of the $π^{-} π^{0} \to π^{-} π^{0} γ$ and $π^{\pm} p \to π^{\pm} p γ$ amplitudes, satisfying the energy-momentum relations, are derived to the orders $ω^{-1}$ and $ω^{0}$. We show that these terms can be expressed completely in terms of the on-shell amplitudes for $π^{-} π^{0} \to π^{-} π^{0}$ and $π^{\pm} p \to π^{\pm} p$, respectively, and their partial derivatives with respect to $s$ and $t$. The structure term, which is non-singular for $ω\to 0$, is determined to the order $ω^{0}$ from the gauge-invariance constraint using the generalized Ward identities for the pions and the proton. For the reaction $π^{-} π^{0} \to π^{-} π^{0} γ$ we discuss in detail the soft-photon theorems in the versions of both F.E. Low and S. Weinberg. We show that these two versions are different and must not be confused. Weinberg's version gives the pole term of a Laurent expansion in $ω$ of the amplitude for $π^{-} π^{0} \to π^{-} π^{0} γ$ around the phase-space point of zero radiation. Low's version gives an approximate expression for the above amplitude
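Schematically, the expansion discussed in this abstract has the standard soft-photon (Laurent) form in the photon energy; the notation here is generic and not necessarily that of the paper:

$$\mathcal{M}_{\gamma}(\omega) \;=\; \frac{A_{-1}}{\omega} \;+\; A_{0} \;+\; O(\omega),$$

where, as the abstract states, the coefficients $A_{-1}$ and $A_{0}$ are fixed by the on-shell non-radiative amplitude and its partial derivatives with respect to $s$ and $t$.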
Most general-population web surveys are based on online panels maintained by commercial survey agencies. However, survey agencies differ in their panel selection and management strategies, and little is known about whether these different strategies cause differences in survey estimates. This paper presents the results of a systematic study designed to analyze the differences in web survey results between agencies. Six different survey agencies were commissioned to run the same web survey using an identical standardized questionnaire covering factual health items; five of the surveys were fielded at the same time. A calibration approach was used to control for the effect of demographics on the outcomes. Overall, the results show differences between probability and non-probability surveys in health estimates, which were reduced but not eliminated by weighting. Furthermore, the differences between non-probability surveys, both before and after weighting, are larger than would be expected between random samples from the same population.
We demonstrate the different effects of different baryon impurities on the static properties of nuclei within the framework of the relativistic mean-field model. Systematic calculations show that $Λ_c^+$ and $Λ_b$ have the same attractive role as the $Λ$ hyperon does in lighter hypernuclei. The $Ξ^-$ and $Ξ_c^0$ hyperons have an attractive role only for the proton distribution, and a repulsive role for the neutron distribution. On the contrary, the $Ξ^0$ and $Ξ^+_c$ hyperons attract the surrounding neutrons and exert a repulsive force on the protons. We find that the different effects of different baryon impurities on the nuclear core are due to the different third components of their isospin.
In this paper, we show that different body parts do not play equally important roles in recognizing a human action in video data. We investigate to what extent a body part plays a role in the recognition of different actions and hence propose a generic method of assigning weights to different body points. The approach is inspired by strong evidence in the applied perception community that humans perform recognition in a foveated manner, that is, they recognize events or objects by focusing only on visually significant aspects. An important contribution of our method is that the computation of the weights assigned to body parts is invariant to the viewing directions and camera parameters of the input data. We have performed extensive experiments to validate the proposed approach and demonstrate its significance. In particular, the results show that a considerable improvement in performance is gained by taking into account the relative importance of different body parts as defined by our approach.
Hydrothermal liquefaction (HTL) followed by catalytic hydrotreating of the produced biocrude is increasingly gaining ground as an effective technology for the conversion of biomass into liquid biofuels. A strong advantage of HTL resides in its great flexibility towards the feedstock, since it can treat a large number of different organic substrates, ranging from dry to wet residual biomass. Nevertheless, the characteristics of biocrudes from different types of organic material pose different challenges during the hydrotreating step, leading to differences in heteroatom removal and in the type and composition of the targeted products. In this work, biocrudes were catalytically hydrotreated with a commercial NiMo/Al2O3 catalyst at different temperatures and pressures. Sewage sludge biocrude was found to be very promising for the production of straight-chain hydrocarbons in the diesel range, with considerable heteroatom removal even at mild hydrotreating conditions. Similar results were shown by algal biocrude, although complete denitrogenation is challenging. Upgraded biocrudes from lignocellulosic feedstock (miscanthus) showed high yields in the gas
Different ensembles of quantum states can have the same average nonpure state. Distinguishing between such constructions, via different mixing procedures of the same nonpure quantum state, is known to entail signaling. In parallel, different superpositions of pure quantum states can lead to the same pure state. We show that the possibility of distinguishing between such preparations, via different interferometric setups leading to the same pure quantum state, also implies signaling. The implication holds irrespective of whether the distinguishing procedure is deterministic or probabilistic.
Current design constraints have encouraged studies of the aeroacoustic fields around compressible jet flows. The present work addresses the numerical study of subgrid-scale modeling for unsteady turbulent jet flows as a preliminary step toward future aeroacoustic analyses of main-engine rocket plumes. An in-house large eddy simulation (LES) tool is developed in order to reproduce high-fidelity results for compressible jet flows. Perfectly expanded jets are considered in order to emphasize the effects of the jet mixing phenomena. The large eddy simulation formulation is written using the finite-difference approach, with explicit time integration and a second-order spatial discretization. The energy equation is carefully discretized so as to be consistent with the filtered Navier-Stokes formulation. The classical Smagorinsky model, the dynamic Smagorinsky model and the Vreman model are the subgrid-scale closures chosen for the present work. Numerical simulations of perfectly expanded jets are performed and compared with the literature in order to validate the solver and compare the performance of each subgrid closure.
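For reference, the classical Smagorinsky closure named in this abstract models the subgrid-scale stresses through an eddy viscosity; this is the standard textbook form, with generic symbols that may not match the solver's exact constants or filter-width definition:

$$\nu_t = (C_s \bar{\Delta})^2\, |\bar{S}|, \qquad |\bar{S}| = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}}, \qquad \bar{S}_{ij} = \frac{1}{2}\left(\frac{\partial \bar{u}_i}{\partial x_j} + \frac{\partial \bar{u}_j}{\partial x_i}\right).$$

The dynamic variant determines $C_s$ locally via a test filter instead of fixing it a priori, while Vreman's model replaces $|\bar{S}|$ with a differently constructed velocity-gradient invariant designed to vanish in simple laminar shear.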
A quantum bit encoding converter between qubits of different forms is experimentally demonstrated, paving the way to efficient networks for optical quantum computing and communication.
Defect prediction aims at identifying software components that are likely to cause faults before a software system is made available to the end-user. To date, this task has been modeled as a two-class classification problem; however, its nature also allows it to be formulated as a one-class classification task. Previous studies show that the One-Class Support Vector Machine (OCSVM) can outperform two-class classifiers for within-project defect prediction, but that it is not effective when employed at a finer granularity (i.e., commit-level defect prediction). In this paper, we further investigate whether learning from one class only is sufficient to produce effective defect prediction models in two other scenarios (i.e., granularities), namely cross-version and cross-project defect prediction, and we also replicate the previous work at within-project granularity for completeness. Our empirical results confirm that OCSVM performance remains low at different granularity levels; that is, it is outperformed by the two-class Random Forest (RF) classifier for both cross-version and cross-project defect prediction. While we cannot conclude that OCSVM is the best classifier, our results st
It now appears phenomenologically that the third family of fundamental fermions may be essentially different from the first two. In particular, the high value (174 GeV?) of the top-quark mass suggests a special role. In the standard model all three families are treated similarly [becoming exactly the same at asymptotically high energies], so we need to extend the model to accommodate the goal of a really different third family. In this article I describe not one but two such viable extensions, quite different from one another. The first is the 331 model, which predicts dileptonic gauge bosons. The second is the $Q_6$ model, which predicts additional leptons between 50 and 200 GeV. One expects there are many other models of this general type, characterized by the prediction of new particles at accessible masses. Supersymmetrization will not be discussed here.
We utilize Kepler data to study the precision differential photometric variability of solar-type and cooler stars at different timescales, ranging from half an hour to 3 months. We define a diagnostic that characterizes the median differential intensity change between data bins of a given timescale, and we apply the same diagnostic to SOHO data that has been rendered comparable to Kepler. The Sun exhibits photometric variability on all timescales similar to that of comparable solar-type stars in the Kepler field (it is not unusually quiet). The previously defined photometric "range" serves as our activity proxy (driven by starspot coverage). We revisit the fraction of comparable stars in the Kepler field that are more active than the Sun. The exact active fraction depends on what is meant by "more active than the Sun" and on the magnitude limit of the sample of stars considered; it lies between a quarter and a third (depending on the timescale). We argue that a reliable result requires timescales of half a day or longer and stars brighter than Kepler magnitude 14, since otherwise non-stellar noise distorts it. We also analyze main sequence stars grouped by temperature from 6500-
Automatic identification of the differences between two versions of a file is a common and basic task in several applications of mining code repositories. Git, a version control system, has a diff utility whose algorithm users can select, from the default Myers algorithm to the advanced Histogram algorithm. From our systematic mapping, we identified three popular applications of diff in recent studies. Regarding the impact on code churn metrics in 14 Java projects, we obtained different values in 1.7% to 8.2% of commits depending on the diff algorithm. Regarding bug-introducing change identification, we found that 6.0% and 13.3% of the identified bug-fix commits in 10 Java projects yielded different bug-introducing changes. For patch application, our manual analysis found that Histogram is more suitable than Myers for describing code changes. Thus, we strongly recommend using the Histogram algorithm when mining Git repositories to consider differences in source code.
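The algorithm selection discussed in this abstract is exposed through Git's `--diff-algorithm` option. A minimal sketch (assuming `git` is on the PATH; repository contents are synthetic, built in a temporary directory):

```python
# Sketch: selecting Git's diff algorithm when mining repositories.
# Builds a throwaway repo, commits a file, edits it, then runs the
# default Myers algorithm and the Histogram algorithm on the change.
import os
import subprocess
import tempfile

def git(*args, cwd):
    """Run a git command and return its stdout."""
    return subprocess.run(["git", *args], cwd=cwd, check=True,
                          capture_output=True, text=True).stdout

repo = tempfile.mkdtemp()
git("init", "-q", cwd=repo)
path = os.path.join(repo, "f.txt")
with open(path, "w") as fh:
    fh.write("a\nb\nc\n")
git("add", "f.txt", cwd=repo)
git("-c", "user.email=x@example.com", "-c", "user.name=x",
    "commit", "-qm", "init", cwd=repo)
with open(path, "w") as fh:
    fh.write("a\nx\nb\nc\n")  # insert one line

myers = git("diff", "--diff-algorithm=myers", cwd=repo)
histogram = git("diff", "--diff-algorithm=histogram", cwd=repo)
print(histogram)  # both algorithms report the inserted line "+x" here
```

On a trivial one-line insertion the two algorithms agree; their outputs diverge on more entangled edits (moved blocks, repeated lines), which is where the churn and bug-introducing-change differences reported above arise. The same `--diff-algorithm` option is accepted by `git log -p` and `git show`.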
Do different fields of knowledge require different research strategies? A numerical model exploring different virtual knowledge landscapes revealed two diverging optimal search strategies. Trend following is maximized when the popularity of new discoveries determines the number of individuals researching them. This strategy works best when many researchers explore a few large areas of knowledge. In contrast, individuals or small groups of researchers are better at discovering small bits of information in dispersed knowledge landscapes. Bibliometric data on scientific publications showed a continuous bipolar distribution of these strategies, ranging from the natural sciences, with highly cited publications in journals containing a large number of articles, to the social sciences, with rarely cited publications in many journals containing a small number of articles. The natural sciences seem to have adapted their research strategies to landscapes with large concentrated knowledge clusters, whereas the social sciences seem to have adapted to searching landscapes with many small isolated knowledge clusters. Similar bipolar distributions were obtained when comparing levels of insularity estimated by indic
We consider and compare two different approaches to fractional subdiffusion and transport in washboard potentials. One is based on the concept of random fractal time and is associated with the fractional Fokker-Planck equation. The other is based on fractional generalized Langevin dynamics and is associated with anti-persistent fractional Brownian motion and its generalizations. Profound differences between these two approaches, which share the common adjective "fractional", are explained, despite some similarities they exhibit in the absence of a nonlinear force. In particular, we show that the asymptotic dynamics in tilted washboard potentials obeys two different universality classes, independently of the form of the potential.
Over the real numbers, the Kronecker sum is the unique operation on matrices which exponentiates to the Kronecker product. Kronecker quotients provide an algebraic view of decompositions of matrices in terms of Kronecker products. This article explores families of operations, Kronecker differences, which are a kind of "inverse" for Kronecker sums. The correspondence between Kronecker differences and Kronecker quotients is explored. Furthermore, we show that a certain class of Kronecker differences may be characterized by families of matrices, with these families again being expressed as Kronecker products. This approach provides a different, "nonlinear" view of tensor decomposition.
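The exponentiation property underlying this abstract is the identity $e^{A \oplus B} = e^{A} \otimes e^{B}$, where $A \oplus B = A \otimes I + I \otimes B$. A minimal pure-Python sketch verifies it on diagonal matrices, whose matrix exponential is simply the elementwise exponential of the diagonal (a deliberately restricted case chosen so no general matrix-exponential routine is needed):

```python
# Sketch: checking exp(A (+) B) = exp(A) (x) exp(B) on diagonal matrices,
# where (+) is the Kronecker sum and (x) the Kronecker product.
import math

def kron(A, B):
    """Kronecker product of matrices given as lists of lists."""
    return [[a * b for a in ra for b in rb] for ra in A for rb in B]

def madd(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def eye(n):
    return [[float(i == j) for j in range(n)] for i in range(n)]

def kron_sum(A, B):
    """Kronecker sum: A (x) I_m + I_n (x) B."""
    return madd(kron(A, eye(len(B))), kron(eye(len(A)), B))

def diag_exp(A):
    """Matrix exponential of a diagonal matrix (elementwise on the diagonal)."""
    return [[math.exp(x) if i == j else 0.0 for j, x in enumerate(row)]
            for i, row in enumerate(A)]

A = [[1.0, 0.0], [0.0, 2.0]]
B = [[0.5, 0.0], [0.0, -1.0]]
lhs = diag_exp(kron_sum(A, B))          # exp(A (+) B)
rhs = kron(diag_exp(A), diag_exp(B))    # exp(A) (x) exp(B)
assert all(abs(x - y) < 1e-12
           for rl, rr in zip(lhs, rhs) for x, y in zip(rl, rr))
```

A Kronecker difference, in the sense explored by the article, would be an operation recovering a summand from a Kronecker sum, just as a Kronecker quotient recovers a factor from a Kronecker product.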