Exo-Kuiper belts have been observed for decades, but the recent detection of gas in some of them may change our view of the Solar System's youth. Late gas produced by the sublimation of CO (or CO$_2$) ices after the dissipation of the primordial gas could be the norm in young planetesimal belts. Hence, a gas-rich Kuiper belt could have been present in the Solar System. The high C/H ratios observed on Uranus and Neptune could be a clue to the existence of such late gas that could have been accreted onto the young icy giants. The aim of this paper is to estimate the carbon enrichment of the atmospheres of Uranus and Neptune caused by the accretion of gas released from a putative gas-rich Kuiper belt. We find that assuming a primordial Kuiper belt with a mass of tens of Earth masses leads to significant CO gas accretion onto the giant planets, which can produce high C/H ratios, especially for Uranus and Neptune. Our model shows that a relatively massive gas-rich Kuiper belt could have existed in the Solar System's youth, significantly enriching the atmospheres of Uranus and Neptune with carbon. Late gas accretion and its effect on the metallicities of outer giant planets could be a universal scenario.
We often assume that agent-to-agent interaction will mirror human conversation. However, agents operate fundamentally differently. What if they could develop communication patterns that are more efficient and better aligned with their capabilities? While cryptographic primitives that could profoundly improve everyday interactions already exist, humans can't use them because they are too complex and the math can't be done in one's head. Examples range from proving your age (or other attributes) without showing your ID, to filing an anonymous report within a group while proving you are a legitimate member, to splitting a dinner bill fairly without revealing salaries. What if agents could create protocols "on the fly" by recognizing which primitive fits an everyday situation, proposing it to an agentic counterpart, persuading them to participate, and then executing the protocol correctly using appropriate computation tools? Protocol Agent frames this problem by introducing a benchmark that spans: (1) cryptographic primitive recognition, (2) negotiation skills, (3) implementation correctness, (4) correct computation, and (5) security strength. We evaluate current open-weight and state-of-the-art models.
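One of the everyday primitives named above, splitting a bill fairly without revealing salaries, can be sketched with additive secret sharing over a prime field. The snippet below is a toy secure-sum illustration under assumed party names, salaries, and field size; it is not part of the Protocol Agent benchmark itself.

```python
# Toy secure-sum via additive secret sharing: the bill is split in proportion
# to salaries, but no single party ever learns another party's salary.
# Party names, salaries, and the field size are illustrative assumptions.
import secrets

P = 2**61 - 1  # prime modulus for the share arithmetic

def share(value: int, n_parties: int) -> list[int]:
    """Split `value` into n additive shares that sum to it modulo P."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

salaries = {"alice": 90_000, "bob": 50_000, "carol": 60_000}
bill = 120.0
n = len(salaries)

# Each party sends one share of its salary to every party (share j goes to
# receiver j), so no receiver ever holds a complete salary.
shares_by_receiver = [[] for _ in range(n)]
for s in salaries.values():
    for j, piece in enumerate(share(s, n)):
        shares_by_receiver[j].append(piece)

# Each receiver publishes only the sum of the shares it holds; adding these
# partial sums reveals the salary total and nothing else.
partials = [sum(col) % P for col in shares_by_receiver]
total = sum(partials) % P

# Each party can now compute its own proportional payment locally.
for name, s in salaries.items():
    print(f"{name} pays {bill * s / total:.2f}")
```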
Although supermassive black holes (SMBHs) are found at the centers of most galaxies today, over 300 have now been discovered at $z >$ 6, including UHZ1 at $z = 10.1$ and GHZ9 at $z =$ 10.4. They are thought to form when 10$^4$ - 10$^5$ M$_{\odot}$ primordial stars die as direct-collapse black holes (DCBHs) at $z \sim$ 20 - 25. While studies have shown that DCBHs should be visible at birth at $z \gtrsim$ 20 in the near infrared (NIR) to the James Webb Space Telescope (JWST), none have considered SMBH detections at later stages of growth down to $z \sim$ 6 - 7. Here, we present continuum NIR luminosities for a BH like ULAS J1120+0641, a $1.35 \times 10^9$ M$_{\odot}$ quasar at $z =$ 7.1, from a cosmological simulation for Euclid, the Roman Space Telescope (RST), and JWST bands from $z =$ 6 - 15. We find that Euclid and RST could detect such BHs, including others like UHZ1 and GHZ9, at much earlier stages of evolution, out to $z \sim$ 14 - 15, and that their redshifts could be confirmed spectroscopically with JWST. Synergies between these three telescopes could thus reveal the numbers of SMBHs at much higher redshifts and discriminate between their evolution pathways because Euclid and
To maintain homeostasis, living cells process information with networks of interacting molecules. Traditional models for cellular information processing have focused on networks of chemical reactions between molecules. Here, we describe how networks of physical interactions could contribute to the processing of information inside cells. In particular, we focus on the impact of biomolecular condensation, a structural phase transition found in cells. Biomolecular condensation has recently been implicated in diverse cellular processes. Some of these are essentially computational, including classification and control tasks. We place these findings in the broader context of physical computing, an emerging framework for describing how the native dynamics of nonlinear physical systems can be leveraged to perform complex computations. The synthesis of these ideas raises questions about expressivity (the range of problems that cellular phase transitions might be able to solve) and learning (how these systems could adapt and evolve to solve different problems). This emerging area of research presents diverse opportunities across molecular biophysics, soft matter, and physical computing.
What if you could really revoke your actual biometric identity, and install a new one, by live rewriting your biological self? We propose some novel mechanisms for hot-swapping identity based on new biotechnology. We discuss the potential positive use cases, and the negative consequences if such technology were to become available and affordable. Biometrics are selected on the basis that they are supposed to be unfakeable, or at least not fakeable at reasonable cost. If they become easier to fake, it may be much cheaper to fake someone else's biometrics than it is for you to change your own biometrics if someone does copy yours. This potentially makes biometrics a bad trade-off for the user. At the time of writing, this threat is highly speculative, but we believe it is worth raising and considering the potential consequences.
We argue that a scenario for deriving quantum mechanics can be constructed within the random dynamics project, in the sense that its emergence is almost unavoidable. The basic idea is based on the weak value formulation.
Recent data from DESI Year 2 BAO, Planck CMB, and various supernova compilations suggest a preference for evolving dark energy, with hints that the equation of state may cross the phantom divide line ($w = -1$). While this behavior is seen in both parametric and non-parametric reconstructions, comparing reconstructions that support such behavior (such as the best fit of CPL) with those that maintain $w>-1$ (like the best-fit algebraic quintessence) is not straightforward, as they differ in flexibility and structure and are not necessarily nested within one another. Thus, the question remains whether the crossing behavior suggested by the data truly represents a dark energy model that crosses the phantom divide line, or whether it could instead be a result of data fluctuations and the way the data are distributed. We investigate the likelihood of this possibility. For this analysis we perform 1,000 Monte Carlo simulations based on a fiducial algebraic quintessence model. We find that in $3.2\%$ of cases, CPL with phantom crossing not only fits better but also exceeds the real-data $\chi^2$ improvement. This Monte Carlo approach quantifies to what extent statistical fluctuations and the distribution of the data can mimic a phantom crossing.
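The Monte Carlo logic can be illustrated with a toy version: draw mock equation-of-state data from a fiducial $w>-1$ model, refit each mock with a two-parameter CPL form, and count how often the flexible fit both crosses $w=-1$ and improves $\chi^2$ by more than the improvement seen in the real data. All functional forms, uncertainties, and the threshold below are illustrative assumptions, not the paper's actual likelihoods or data.

```python
# Toy Monte Carlo: how often can noise alone make a CPL-like fit cross w = -1
# and beat a w > -1 fiducial model by more than a given "real data" margin?
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(42)

def w_fiducial(z):
    # placeholder quintessence-like equation of state that stays above -1
    return -1.0 + 0.05 / (1.0 + z)

def w_cpl(z, w0, wa):
    # CPL parametrization w(z) = w0 + wa * z / (1 + z)
    return w0 + wa * z / (1.0 + z)

z = np.linspace(0.05, 2.0, 30)       # mock redshift sampling
err = 0.05                           # mock per-point uncertainty
delta_chi2_real = 4.0                # placeholder "real data" improvement

def delta_chi2(w_obs):
    """chi^2(fiducial) - chi^2(best-fit CPL) for one mock realization."""
    chi2_fid = np.sum(((w_obs - w_fiducial(z)) / err) ** 2)
    popt, _ = curve_fit(w_cpl, z, w_obs, p0=[-1.0, 0.0],
                        sigma=np.full_like(z, err))
    chi2_cpl = np.sum(((w_obs - w_cpl(z, *popt)) / err) ** 2)
    return chi2_fid - chi2_cpl, popt

n_mc, n_exceed = 1000, 0
for _ in range(n_mc):
    mock = w_fiducial(z) + rng.normal(0.0, err, size=z.size)
    dchi2, (w0, wa) = delta_chi2(mock)
    crosses_phantom = np.any(w_cpl(z, w0, wa) < -1.0)
    if crosses_phantom and dchi2 > delta_chi2_real:
        n_exceed += 1

print(f"fraction of mocks mimicking the real-data improvement: {n_exceed / n_mc:.3f}")
```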
Previous work indicates that large language models exhibit a significant "English bias", i.e., they often perform better when tasks are presented in English. Interestingly, we have observed that using certain other languages in reasoning tasks can yield better performance than English. However, this phenomenon remains under-explored. In this paper, we explore the upper bound of harnessing multilingualism in reasoning tasks, showing that multilingual reasoning promises upper bounds that are significantly higher than English-only reasoning (by nearly 10 Acc@$k$ points) and robust to variations in translation quality and language choice. Besides analyzing the reasons behind this upper bound and the challenges in reaching it, we also find that common answer selection methods cannot achieve this upper bound, due to their limitations and biases. These insights could pave the way for future research aimed at fully harnessing the potential of multilingual reasoning in LLMs.
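The upper bound discussed above can be read as an Acc@$k$-style statistic: an item counts as solved if at least one of the $k$ language-specific reasoning runs answers it correctly. A minimal sketch, with mock per-language correctness data standing in for real evaluation results:

```python
# Acc@k-style upper bound: an item is solved if any of the k language-specific
# runs is correct. The languages and correctness values below are mock data.
from typing import Dict, List

def acc_at_k(correct_by_language: Dict[str, List[bool]]) -> float:
    """Fraction of items solved by at least one language-specific run."""
    runs = list(correct_by_language.values())
    n_items = len(runs[0])
    solved = sum(any(run[i] for run in runs) for i in range(n_items))
    return solved / n_items

# Mock per-item correctness for three languages on five problems.
results = {
    "en": [True,  False, False, True,  False],
    "zh": [True,  True,  False, False, False],
    "de": [False, False, True,  True,  False],
}

print("English-only accuracy:", sum(results["en"]) / len(results["en"]))
print("Multilingual Acc@k upper bound:", acc_at_k(results))
```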
Collisionless shocks are complex nonlinear structures that are not yet fully understood. In particular, the interaction between these shocks and the particles they accelerate remains elusive. Based on an instability analysis that relates the shock width to the spectrum of the accelerated particles and the shock density ratio, we find that the acceleration process could come to an end when the fraction of accelerated upstream particles reaches about 30\%. Only unmagnetized shocks are considered.
Large Language Models (LLMs) have been making big waves in the machine learning community within the past few years. The impressive scalability of LLMs due to the advent of deep learning can be seen as a continuation of empiricist linguistic methods, as opposed to rule-based linguistic methods that are grounded in a nativist perspective. Current LLMs are generally inaccessible to resource-constrained researchers, due to a variety of factors including closed source code. This work argues that this lack of accessibility could instill a nativist bias in researchers new to computational linguistics, given that new researchers may only have rule-based, nativist approaches available to study when producing new work. Also, given that there are numerous critics of deep learning claiming that LLMs and related methods may soon lose their relevancy, we speculate that such an event could trigger a new wave of nativism in the language processing community. To prevent such a dramatic shift, and to place favor in hybrid methods combining rules and deep learning, we call upon researchers to open source their LLM code wherever possible to allow both empiricist and hybrid approaches to remain accessible.
Microwave Kinetic Inductance Detectors (MKIDs) are beginning to become more prominent in astronomical instrumentation, due to their sensitivity, low noise, high pixel count for superconducting detectors, and inherent energy- and time-resolving capability. The Kinetic Inductance Detector Spectrometer (KIDSpec), a medium-resolution MKID spectrograph for the optical/near-infrared, will take advantage of these features. KIDSpec will contribute to many science areas, particularly those involving short and/or faint observations. When short-period binary systems are found, typical CCD detectors will struggle to characterise these systems due to the very short exposures required, causing errors as large as the estimated parameter itself. The KIDSpec Simulator (KSIM) has been developed to investigate how much KIDSpec could improve on this. KIDSpec was simulated on an ELT-class telescope to find the extent of its potential, and it was found that KIDSpec could observe an $m_{V}\approx{24}$ source with an SNR of 5 for a 10 s exposure at a spectral resolution of 1420. This would mean that KIDSpec on an ELT-class telescope could spectroscopically follow up on any LSST photometric discoveries of LISA verification binaries.
Are quantum advantages in discrete manufacturing achievable in the near term? As manufacturing-relevant NISQ algorithms, we identified Quantum Annealing (QA) and the Quantum Approximate Optimization Algorithm (QAOA) for combinatorial optimization, as well as Differentiable Quantum Circuits (DQC) for solving non-linear PDEs. While there is evidence for QAOA outperforming classical approaches, this requires post-NISQ circuit depths. In the case of QA, there is as yet no unquestionable evidence of an advantage over classical computation, yet different protocols could lead to finding such instances. Together with a well-chosen quantum feature map, DQC are a promising concept. Further investigations of higher-dimensional problems and improvements in training could follow.
Topography on a wet rocky exoplanet could raise land above its sea level. Although land elevation is the product of many complex processes, the large-scale topographic features on any geodynamically-active planet are the expression of the convecting mantle beneath the surface. This so-called "dynamic topography" exists regardless of a planet's tectonic regime or volcanism; its amplitude, with a few assumptions, can be estimated via numerical simulations of convection as a function of the mantle Rayleigh number. We develop new scaling relationships for dynamic topography on stagnant-lid planets using 2D convection models with temperature-dependent viscosity. These scalings are applied to 1D thermal history models to explore how dynamic topography varies with exoplanetary observables over a wide parameter space. Dynamic topography amplitudes are converted to an ocean basin capacity, the minimum water volume required to flood the entire surface. Basin capacity increases less steeply with planet mass than does the amount of water itself, assuming a water inventory that is a constant planetary mass fraction. We find that dynamically-supported topography alone could be sufficient to maintain land above sea level.
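The conversion from a topography amplitude to an ocean basin capacity, and its comparison with a constant-mass-fraction water inventory, can be illustrated with a back-of-the-envelope sketch. The topography amplitude, mass-radius exponent, fill fraction, and water mass fraction below are assumed placeholder values, not the scaling relationships derived in the paper.

```python
# Back-of-the-envelope comparison: basin capacity (set by topography and
# surface area) versus a water inventory that is a fixed fraction of planet
# mass. All numerical values are illustrative assumptions.
import numpy as np

M_EARTH = 5.97e24          # kg
R_EARTH = 6.371e6          # m
RHO_WATER = 1000.0         # kg/m^3

def planet_radius(mass_kg, beta=0.27):
    """Rocky mass-radius relation R ~ M^beta (beta is an assumed exponent)."""
    return R_EARTH * (mass_kg / M_EARTH) ** beta

def basin_capacity(mass_kg, h_dyn, fill_fraction=0.3):
    """Water volume that topography of amplitude h_dyn can hold before flooding.

    fill_fraction crudely accounts for only part of the hypsometric range
    lying below the mean elevation.
    """
    area = 4.0 * np.pi * planet_radius(mass_kg) ** 2
    return fill_fraction * h_dyn * area            # m^3

def water_volume(mass_kg, water_mass_fraction=5e-5):
    """Water inventory assumed to be a constant fraction of planet mass."""
    return water_mass_fraction * mass_kg / RHO_WATER

for m in [0.5, 1.0, 2.0, 5.0]:
    mass = m * M_EARTH
    cap = basin_capacity(mass, h_dyn=2000.0)       # assume ~2 km of topography
    inv = water_volume(mass)
    print(f"{m:>4.1f} M_E: basin capacity {cap:.2e} m^3, "
          f"water inventory {inv:.2e} m^3, land above sea level: {cap > inv}")
```

For a fixed topography amplitude, capacity scales only with surface area (roughly $M^{0.5}$ here) while the water inventory scales linearly with mass, which is why land becomes harder to maintain on more massive planets in this toy picture.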
We examine the possibility that a nearby supernova explosion could have caused one or more of the mass extinctions identified by palaeontologists. We discuss the likely rate of such events in the light of the recent identification of Geminga as a supernova remnant less than 100 pc away and the discovery of a millisecond pulsar about 150 pc away, and observations of SN 1987A. The fluxes of $\gamma$ radiation and charged cosmic rays on the Earth are estimated, and their effects on the Earth's ozone layer discussed. A supernova explosion of the order of 10 pc away could be expected every few hundred million years, and could destroy the ozone layer for hundreds of years, letting in potentially lethal solar ultraviolet radiation. In addition to effects on land ecology, this could entail mass destruction of plankton and reef communities, with disastrous consequences for marine life as well. A supernova extinction should be distinguishable from a meteorite impact such as the one that presumably killed the dinosaurs.
Neutrinoless double-beta decay, if observed, would signal physics beyond the Standard Model that could be discovered at energies significantly lower than those at which the relevant degrees of freedom could be excited. Therefore, it could be challenging to use neutrinoless double-beta decay observations to further distinguish between the many beyond-Standard-Model mechanisms contributing to this process. Accurate nuclear structure calculations of the nuclear matrix elements necessary to analyze the decay rates could help narrow down the list of contributing mechanisms and better identify the more exotic properties of the neutrinos. We investigate the information one can obtain from the angular and energy distributions of the emitted electrons, assuming that right-handed currents exist. For the analysis of these distributions we calculate the necessary nuclear matrix elements using shell model techniques.
Video captioning is the process of describing the content of a sequence of images, capturing its semantic relationships and meaning. Dealing with this task for a single image is arduous, not to mention how difficult it is for a video (or image sequence). The applications of video captioning are numerous and relevant, from handling the large volume of recordings produced in video surveillance to assisting people who are visually impaired, to mention a few. To analyze where our community's efforts to solve the video captioning task stand, as well as which route could be best to follow, this manuscript presents an extensive review of more than 105 papers from the period 2016 to 2021. As a result, the most-used datasets and metrics are identified, along with the main approaches and the best-performing ones. We compute a set of rankings based on several performance metrics to determine, according to its performance, the best method on the video captioning task. Finally, we draw some insights about possible next steps and opportunity areas for improving on this complex task.
"How much c.e. sets could cover a given set?" in this paper we are going to answer this question. Also, in this approach some old concepts come into a new arrangement. The major goal of this article is to introduce an appropriate definition for this purpose. Introduction In Computability Theory (Recursion Theory) in the first step we wish to recognize the sets which could be enumerated by Turing machines (equivalently, algorithms) and in the next step we will compare these sets by some reasonable order (Like Turing degree). Also sometimes with some extra information (Oracles) a class of non c.e. sets show the same behavior as c.e. sets (Post hierarchy and related theorems). Here we try another approach: "Let A be an arbitrary set and we wish to recognize how much this set might be covered by a c.e. set?" Although in some sense this approach could be seen in some definitions of Recursion Theory, but at the best of our knowledge it didn't considered as an approach yet, even though it is able to shed a light on some subjects of Computability of sets. Defining this approach is not quite straightforward and there are some obstacles to define them. To overcome these difficulties we modif
The multi-messenger observation of gamma-ray burst (GRB)\,170817A from the nearby binary neutron-star merger GW170817 demonstrated that low-energy $\gamma$-ray emission can be observed at relatively large angles from GRB jet axes. If such structured emission is typical, then the currently known sample of short GRBs with no distance measurements may contain multiple nearby off-axis events whose delayed afterglows could have gone undetected. These nearby neutron star mergers may produce telltale radio flares peaking years after the prompt GRB emission that could still be observable. Here, we show that several short GRBs observed by the Burst Alert Telescope (BAT) on the Neil Gehrels \textit{Swift} satellite, with no identified afterglow and no distance measurement, could potentially be associated with radio flares detectable by sensitive cm-wavelength radio facilities such as the Karl G. Jansky Very Large Array. We also examine optical follow-up observations that have been carried out for these events, and find that a nearby GW170817-like kilonova is ruled out for only a third of them.
Since that very memorable day at the Beijing 2008 Olympics, a big question on every sports commentator's mind has been "What would the 100 meter dash world record have been, had Usain Bolt not celebrated at the end of his race?" Glen Mills, Bolt's coach, suggested at a recent press conference that the time could have been 9.52 seconds or better. We revisit this question by measuring Bolt's position as a function of time using footage of the run, and then extrapolating over the last two seconds based on two different assumptions. First, we conservatively assume that Bolt could have maintained the acceleration of the runner-up, Richard Thompson, during the end of the race. Second, based on the race development prior to the celebration, we assume that he could have kept an acceleration 0.5 m/s^2 higher than Thompson's. In these two cases, we find that the new world record would have been 9.61 +/- 0.04 and 9.55 +/- 0.04 seconds, respectively, where the uncertainties denote 95% statistical errors.
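The extrapolation step can be illustrated with simple constant-acceleration kinematics: given an assumed position and speed roughly two seconds before the finish, solve for the time needed to cover the remaining distance under each acceleration scenario. The state and acceleration values below are placeholders, not the measurements digitized from the race footage.

```python
# Toy constant-acceleration extrapolation of a 100 m finish time under two
# assumed acceleration scenarios. All numbers are illustrative placeholders.
import numpy as np

def finish_time(t0, x0, v0, a, distance=100.0):
    """Time to cover the remaining distance from x0 with speed v0 and constant a."""
    remaining = distance - x0
    if abs(a) < 1e-12:
        return t0 + remaining / v0
    # solve 0.5*a*dt^2 + v0*dt - remaining = 0 for the positive root
    dt = (-v0 + np.sqrt(v0**2 + 2.0 * a * remaining)) / a
    return t0 + dt

# Assumed state ~2 s before the finish: elapsed time, position, speed.
t0, x0, v0 = 8.0, 80.0, 12.0        # s, m, m/s

a_thompson = 0.2                    # assumed runner-up acceleration, m/s^2
a_bolt = a_thompson + 0.5           # scenario 2: 0.5 m/s^2 higher

for label, a in [("match Thompson", a_thompson), ("Thompson + 0.5 m/s^2", a_bolt)]:
    print(f"{label:>22s}: projected time {finish_time(t0, x0, v0, a):.2f} s")
```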
A strong nuclear kilomaser, W1, has been found in the nearby galaxy NGC 253, associated with a forming super star cluster. Kilomasers could arise from the accretion discs around supermassive stars (>10^3 Msun), hypothetical objects that have been proposed as the polluters responsible for the chemical peculiarities in globular clusters. The supermassive stars would form via runaway collisions, simultaneously with the cluster. Their discs are perturbed by stellar flybys and by inspiralling and colliding stars. This raises the question of whether an accretion disc would be able to survive at all in such a dynamic environment and mase water lines. We investigated what the predicted maser spectrum of such a disc would look like using 2D hydrodynamic simulations and compared this to the W1 kilomaser. We derived model maser spectra from the simulations by using a general maser model for appropriate disc temperatures. All our model discs survived. The model maser spectra for the most destructive case of the M = 1000 Msun simulations are a reasonable match to the W1 kilomaser spectrum in terms of scaling, flux values, and some of the signal trends. Details in the spectrum suggest that a star of a few