Despite LiDAR (Light Detection and Ranging) being an effective privacy-preserving alternative to RGB cameras for perceiving human activities, it remains largely underexplored in the context of multi-modal contrastive pre-training for human activity understanding (e.g., human activity recognition (HAR), retrieval, or person re-identification (RE-ID)). To close this gap, our work explores learning the correspondence between LiDAR point clouds, human skeleton poses, IMU data, and text in a joint embedding space. More specifically, we present DeSPITE, a Deep Skeleton-Pointcloud-IMU-Text Embedding model, which effectively learns a joint embedding space across these four modalities. To enable this empirical exploration, we combined the existing LIPD and Babel datasets, which allowed us to synchronize data across all four modalities and learn a new joint embedding space. Our experiments demonstrate novel human activity understanding tasks for point cloud sequences enabled through DeSPITE, including Skeleton<->Pointcloud<->IMU matching, retrieval, and temporal moment retrieval. Furthermore, we show that DeSPITE is an effective pre-training strategy for point cloud-based human activity understanding.
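For intuition, here is a minimal sketch of the kind of pairwise contrastive objective that can align four modality encoders in one embedding space (a CLIP-style InfoNCE loss averaged over all modality pairs); the function names and the pair-averaging scheme are illustrative assumptions, not DeSPITE's actual training recipe.

    import torch
    import torch.nn.functional as F

    def info_nce(a, b, temperature=0.07):
        # a, b: (batch, dim) embeddings of the same clips in two modalities
        a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
        logits = a @ b.t() / temperature       # similarity of every cross-modal pair
        targets = torch.arange(a.size(0))      # matching clips sit on the diagonal
        return (F.cross_entropy(logits, targets)
                + F.cross_entropy(logits.t(), targets)) / 2

    def joint_loss(embs):
        # embs: {"skeleton": ..., "pointcloud": ..., "imu": ..., "text": ...}
        names = list(embs)
        pairs = [(m, n) for i, m in enumerate(names) for n in names[i + 1:]]
        return sum(info_nce(embs[m], embs[n]) for m, n in pairs) / len(pairs)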
Language models encode task-relevant knowledge in their internal representations that far exceeds what their outputs reflect, but whether mechanistic interpretability methods can bridge this knowledge-action gap has not been systematically tested. We compared four mechanistic interpretability methods -- concept bottleneck steering (Steerling-8B), sparse autoencoder feature steering, logit lens with activation patching, and linear probing with truthfulness separator vector steering (Qwen 2.5 7B Instruct) -- for correcting false-negative triage errors using 400 physician-adjudicated clinical vignettes (144 hazards, 256 benign). Linear probes discriminated hazardous from benign cases with 98.2% AUROC, yet the model's output sensitivity was only 45.1%, a 53-percentage-point knowledge-action gap. Concept bottleneck steering corrected 20% of missed hazards but disrupted 53% of correct detections, indistinguishable from random perturbation (p=0.84). SAE feature steering produced zero effect despite 3,695 significant features. TSV steering at high strength corrected 24% of missed hazards while disrupting 6% of correct detections, but left 76% of errors uncorrected. Current mechanistic interpretability methods thus remain insufficient to reliably close the knowledge-action gap.
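As a sketch of the probe-then-steer pattern underlying these results, the snippet below fits a class-mean separator vector on hidden activations, scores cases with a linear probe, and adds the scaled vector back during generation; the vector construction, layer choice, and scale here are generic assumptions, not the paper's exact TSV procedure.

    import torch

    def fit_separator(acts_hazard, acts_benign):
        # acts_*: (n, d) hidden activations from a chosen layer
        v = acts_hazard.mean(0) - acts_benign.mean(0)   # class-mean difference
        return v / v.norm()

    def probe_score(acts, v):
        return acts @ v                 # linear probe: projection onto the vector

    def steer(hidden, v, alpha=8.0):
        # nudge a layer's hidden states toward the "hazard" direction
        return hidden + alpha * v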
We study the problem of constructing concurrent objects in a setting where $P$ processes run in parallel and interact through a shared memory that is subject to write contention. Our goal is to transform hardware primitives that are subject to write contention into ones that handle contention gracefully. We give contention-resolution algorithms for several basic primitives, and analyze them under a relaxed, roughly-synchronous stochastic scheduler, where processes run at roughly the same rate up to a constant factor with high probability. Specifically, we construct read/write registers and CAS registers that have latency $O(\log P)$ w.h.p. under our scheduler model, using $O(1)$ hardware read/write registers and, in the case of our CAS construction, one hardware CAS register. Our algorithms guarantee performance even when their operations are invoked by an adaptive adversary that is able to see the entire history of operations so far, including their timing and return values. This allows them to be used as building blocks inside larger programs; using this compositionality property, we obtain several other constructions (LL/SC, fetch-and-increment, bounded max registers, and counters).
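As a rough illustration of contention handling (not the paper's construction, whose $O(\log P)$ bound relies on the roughly-synchronous stochastic scheduler), a classic randomized exponential-backoff loop looks like this:

    import random, time

    def backoff_write(try_write, max_rounds=32):
        # try_write(): one attempt on the contended hardware primitive;
        # returns True on success. The random delay window doubles each
        # round, so contending processes spread out over time.
        for r in range(max_rounds):
            if try_write():
                return True
            time.sleep(random.uniform(0, 2 ** r) * 1e-6)
        return False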
The universal genetic code presents a fundamental paradox in molecular biology. Recent advances in synthetic biology have demonstrated that the code is remarkably flexible--organisms can survive with 61 codons instead of 64, natural variants have reassigned codons 38+ times, and fitness costs of recoding stem primarily from secondary mutations rather than code changes themselves. Yet despite billions of years of evolution and this proven flexibility, approximately 99% of life maintains an identical 64-codon genetic code. This extreme conservation cannot be fully explained by current evolutionary theory, which predicts far more variation given the demonstrated viability of alternatives. I propose that this paradox--evolutionary flexibility coupled with mysterious conservation--reveals unrecognized constraints on biological information systems. This paper presents testable predictions to distinguish between competing explanations: extreme network effects, hidden optimization parameters, or potentially, computational architecture constraints that transcend standard evolutionary pressures.
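As a quick check of the codon arithmetic above (64 = 4^3 possible triplets, of which 61 are sense codons in the standard code):

    from itertools import product

    bases = "UCAG"
    codons = ["".join(c) for c in product(bases, repeat=3)]
    stops = {"UAA", "UAG", "UGA"}                 # standard-code stop codons
    print(len(codons), len(codons) - len(stops))  # 64 total, 61 sense codons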
Accurate perception, state estimation and mapping are essential for safe robotic navigation as planners and controllers rely on these components for safety-critical decisions. However, existing mapping approaches often assume perfect pose estimates, an unrealistic assumption that can lead to incorrect obstacle maps and therefore collisions. This paper introduces a framework for certifiably-correct mapping that ensures that the obstacle map correctly classifies obstacle-free regions despite the odometry drift in vision-based localization systems (VIO/SLAM). By deflating the safe region based on the incremental odometry error at each timestep, we ensure that the map remains accurate and reliable locally around the robot, even as the overall odometry error with respect to the inertial frame grows unbounded. Our contributions include two approaches to modifying popular obstacle-mapping paradigms: (I) Safe Flight Corridors and (II) Signed Distance Fields. We formally prove the correctness of both methods and describe how they integrate with existing planning and control modules. Simulations using the Replica dataset highlight the efficacy of our methods compared to state-of-the-art techniques.
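A toy version of the deflation idea: shrink a signed distance field's certified-free region by the accumulated odometry-error bound, so any cell that stays positive is conservatively obstacle-free. The grid values and error bound below are made-up stand-ins for the paper's formal construction.

    import numpy as np

    def deflate_sdf(sdf, odom_error_bound):
        # sdf: grid of signed distances to the nearest obstacle (positive = free)
        return sdf - odom_error_bound

    sdf = np.array([[0.9, 0.4],
                    [0.2, -0.1]])
    certified_free = deflate_sdf(sdf, 0.3) > 0.0  # only cells with enough margin survive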
We present a new 8.5 ks Chandra observation of Abell 1885, obtained as part of the Cluster Evolution Reference Ensemble At Low-z (CEREAL) survey of ~200 low-z galaxy groups and clusters. These data reveal that Abell 1885 hosts a strong cool core, with a central cooling time of 0.43 Gyr, and that the central galaxy harbors an X-ray luminous point source at its center (L=2.3x10^42 erg/s), indicative of a rapidly accreting supermassive black hole. In the context of the larger CEREAL sample, we constrain the fraction of clusters at z~0.15 with X-ray bright central AGN to be no more than 4.1%. Including radio data from LOFAR, GMRT, ASKAP, and the VLA and optical integral field unit data from SDSS MaNGA, we probe the details of cooling, feeding, and feedback in this system. These data reveal that cooling of the intracluster medium is highly suppressed on large (>10 kpc) scales despite a central supermassive black hole that is in the early stages of the self-regulation cycle (characterized by rapid accretion, physically small jets, and no large-scale low-frequency radio emission). To reconcile the large-scale quenching with a lack of visible large-scale feedback, we propose that the timescales of feeding and feedback are mismatched at this early stage of the cycle.
The magnetic properties of Heusler alloys can be predicted by the famous Slater-Pauling (S-P) rule, which states that the total magnetic moment ($m_t$) of such materials can be expressed as $m_t\,=\,(N_V-24)\,\mu_B$/f.u., where $N_V$ is the total valence electron count (VEC). Consequently, Heusler alloys with VEC = 24 are neither theoretically expected nor experimentally reported to show any magnetic ordering. Recently, a special class of Heusler alloys with 50\% concentration of $p$-block elements (anti-Heusler) has been identified, although none of the reported compounds belongs to the VEC 24 category. Here, we report a new anti-Heusler alloy, Al$_2$MnCu, that undergoes long-range ferromagnetic (FM) ordering with $T_{\rm C}\sim$315 K and a large magnetic moment of $\sim$1.8 $\mu_B$/f.u. despite having VEC 24. A phenomenological model based on molecular orbital hybridization is also proposed to understand the magnetism and the unusual deviation from the standard S-P rule.
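The Slater-Pauling arithmetic for Al$_2$MnCu can be checked directly with textbook valence-electron counts (Al: 3, Mn: 7, Cu: 11), which is what makes the observed $\sim$1.8 $\mu_B$/f.u. so surprising:

    valence = {"Al": 3, "Mn": 7, "Cu": 11}      # valence electrons per atom
    composition = {"Al": 2, "Mn": 1, "Cu": 1}   # Al2MnCu, one formula unit

    n_v = sum(valence[el] * n for el, n in composition.items())
    m_sp = n_v - 24                             # S-P moment in mu_B / f.u.
    print(n_v, m_sp)                            # 24, 0 -- yet ~1.8 mu_B is observed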
In financial market microstructure, there are two enigmatic empirical laws: (i) the market-order flow has predictable persistence due to metaorder splitting by institutional investors, well formulated by the Lillo-Mike-Farmer model. However, this phenomenon seems paradoxical given the diffusive and unpredictable price dynamics; (ii) the price impact $I(Q)$ of a large metaorder $Q$ follows the square-root law, $I(Q)\propto \sqrt{Q}$. Here we theoretically reveal why price dynamics follows Brownian motion despite predictable order flow by unifying these enigmas. We generalize the Lillo-Mike-Farmer model to nonlinear price-impact dynamics, which is mapped to an exactly solvable Lévy-walk model. Our exact solution shows that the price dynamics remains diffusive under the square-root law, even under persistent order flow. This work illustrates the crucial role of the square-root law in mitigating large price movements by large metaorders, thereby leading to Brownian price dynamics, consistent with the efficient market hypothesis over long timescales.
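For concreteness, the square-root law is often written in the dimensional form $I(Q) = Y \sigma \sqrt{Q/V}$, with daily volatility $\sigma$ and daily volume $V$; the prefactor and market parameters below are purely illustrative:

    import numpy as np

    Y, sigma_d, vol_d = 0.5, 0.02, 1e6   # illustrative prefactor, daily vol, daily volume

    def impact(Q):
        return Y * sigma_d * np.sqrt(Q / vol_d)   # I(Q) proportional to sqrt(Q)

    for Q in (1e4, 4e4, 1.6e5):
        print(int(Q), round(impact(Q), 5))   # quadrupling Q only doubles the impact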
In this technical report, we establish the asymptotic stability of MPC under plant-model mismatch for problems where the origin remains a steady state despite mismatch. This class of problems includes, but is not limited to, inventory management, path-planning, and control of systems in deviation variables. Our results differ from prior results on the inherent robustness of MPC, which guarantee only convergence to a neighborhood of the origin, the size of which scales with the magnitude of the mismatch. For MPC with quadratic costs, continuous differentiability of the system dynamics is sufficient to demonstrate exponential stability of the closed-loop system despite mismatch. For MPC with general costs, a joint comparison function bound and scaling condition guarantee asymptotic stability despite mismatch. The results are illustrated in numerical simulations, including the classic upright pendulum problem. The tools developed to establish these results can address the stability of offset-free MPC, an open and interesting question in the MPC research literature.
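A toy, unconstrained receding-horizon illustration of the headline result (stability despite mismatch when the origin stays a steady state): the controller solves a finite-horizon quadratic problem on a slightly wrong model, yet the true plant still converges to the origin. The matrices here are hypothetical, and the paper's setting is far more general.

    import numpy as np

    A  = np.array([[1.0, 0.10], [0.0, 1.00]]); B  = np.array([[0.0], [0.1]])  # plant
    Ah = np.array([[1.0, 0.11], [0.0, 0.98]]); Bh = np.array([[0.0], [0.1]])  # model
    Q, R, N = np.eye(2), np.eye(1), 20

    def mpc_gain(A, B):
        P = Q.copy()
        for _ in range(N):   # finite-horizon Riccati recursion on the *model*
            K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
            P = Q + A.T @ P @ (A - B @ K)
        return K

    K = mpc_gain(Ah, Bh)
    x = np.array([[1.0], [0.0]])
    for _ in range(200):
        x = A @ x - B @ (K @ x)   # model-based feedback applied to the true plant
    print(np.linalg.norm(x))      # ~0: asymptotically stable despite mismatch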
Short-in-time, broad-in-energy attosecond or few-femtosecond pulses can excite coherent superpositions of several electronic states in molecules. This results in ultrafast charge oscillations known as charge migration. A key open question in the emerging field of attochemistry is whether these electron dynamics, which due to decoherence often last only for a few femtoseconds, can influence longer-time scale nuclear rearrangements. Herein, we address this question through full-dimensional quantum dynamics simulations of the coupled electron-nuclear dynamics initiated by ionization and coherent excitation of ethylene. The simulations of this prototype organic chromophore predict electronic coherences with half-lives of less than 1 fs. Despite their brevity, these electronic coherences induce vibrational coherences along the derivative coupling vectors that persist for at least 50 fs. These results suggest that short-lived electronic coherences can impart long-lasting legacies on nuclear motion, a finding of potential importance to the interpretation of attosecond experiments and the development of strategies for attochemical control.
A distribution over instances of a sampling problem is said to exhibit transport disorder chaos if perturbing the instance by a small amount of random noise dramatically changes the stationary distribution (in Wasserstein distance). Seeking to provide evidence that some sampling tasks are hard on average, a recent line of work has demonstrated that disorder chaos is sufficient to rule out "stable" sampling algorithms, such as gradient methods and some diffusion processes. We demonstrate that disorder chaos does not preclude polynomial-time sampling by canonical algorithms in canonical models. We show that with high probability over a random graph $\boldsymbol{G} \sim G(n,1/2)$: (1) the hardcore model (at fugacity $\lambda = 1$) on $\boldsymbol{G}$ exhibits disorder chaos, and (2) Glauber dynamics run for $O(n)$ time can approximately sample from the hardcore model on $\boldsymbol{G}$ (in Wasserstein distance).
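For reference, the Glauber dynamics in item (2) is the standard single-site update for the hardcore model; a textbook sampler (not the paper's analysis) looks like this:

    import random

    def glauber_hardcore(adj, steps, lam=1.0, seed=0):
        # adj: dict mapping each vertex to its set of neighbors
        rng = random.Random(seed)
        verts = list(adj)
        config = {v: 0 for v in adj}       # start from the empty independent set
        p_occ = lam / (1 + lam)            # P(occupied | no occupied neighbor)
        for _ in range(steps):
            v = rng.choice(verts)
            if any(config[u] for u in adj[v]):
                config[v] = 0              # a neighbor is occupied: v must be empty
            else:
                config[v] = 1 if rng.random() < p_occ else 0
        return config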
As demand for LLM inference grows, it is becoming increasingly important that providers and their customers can verify that inference processes are performed correctly, without errors or tampering. However, re-running the same inference process twice often leads to different results due to benign numerical noise, making it difficult to distinguish legitimate variation from actual problems. To address this problem, we introduce Token-DiFR (Token-Divergence-From-Reference), a method for verifying inference outputs by comparing generated tokens against predictions made by a trusted reference implementation conditioned on the same random seed. Sampling seed synchronization tightly constrains valid outputs, leaving providers minimal room to deviate from correct inference, which allows output tokens themselves to serve as auditable evidence of correctness at zero additional cost to the provider. Token-DiFR reliably identifies sampling errors, simulated bugs, and model quantization, detecting 4-bit quantization with AUC $>$ 0.999 within 300 output tokens. For applications requiring sample-efficient forward-pass verification, we additionally introduce Activation-DiFR, a scheme that compares the model's internal activations against those reproduced by the trusted reference.
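The core replay check can be sketched as follows: re-derive each token from the reference model's logits with the synchronized seed and count divergences. The HuggingFace-style `.logits` access and the per-token loop are assumptions for illustration, not the paper's implementation.

    import torch

    def token_difr(ref_model, prompt_ids, claimed_tokens, seed):
        gen = torch.Generator().manual_seed(seed)   # synchronized sampling seed
        ids = prompt_ids.clone()
        divergences = 0
        for tok in claimed_tokens:
            logits = ref_model(ids.unsqueeze(0)).logits[0, -1]
            probs = torch.softmax(logits, dim=-1)
            expected = torch.multinomial(probs, 1, generator=gen).item()
            divergences += int(expected != tok)
            ids = torch.cat([ids, torch.tensor([tok])])  # follow provider's output
        return divergences / len(claimed_tokens)         # divergence rate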
\textit{Auditing} data accesses helps preserve privacy and ensures accountability by allowing one to determine who accessed (potentially sensitive) information. A prior formal definition of register auditability was based on the values returned by read operations, \emph{without accounting for cases where a reader might learn a value without explicitly reading it or gain knowledge of data access without being an auditor}. This paper introduces a refined definition of auditability that focuses on when a read operation is \emph{effective}, rather than relying on its completion and return of a value. Furthermore, we formally specify the constraints that \textit{prevent readers from learning values they did not explicitly read or from auditing other readers' accesses.} Our primary algorithmic contribution is a wait-free implementation of a \emph{multi-writer, multi-reader register} that tracks effective reads while preventing unauthorized audits. The key challenge is ensuring that a read is auditable as soon as it becomes effective, which we achieve by combining value access and access logging into a single atomic operation. Another challenge is recording accesses without exposing them to readers who are not authorized auditors.
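A toy lock-based illustration of coupling value access with access logging in one atomic step; the paper's register is wait-free, which this sketch is not, and the class below is hypothetical.

    import threading

    class AuditedRegister:
        def __init__(self, value=None):
            self._lock = threading.Lock()
            self._value = value
            self._log = []                 # access log, visible only to auditors

        def write(self, value):
            with self._lock:
                self._value = value

        def read(self, reader_id):
            with self._lock:               # value access + logging are inseparable
                self._log.append(reader_id)
                return self._value

        def audit(self, is_auditor):
            with self._lock:
                return list(self._log) if is_auditor else None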
Large language model (LLM) agents show promise in an increasing number of domains. In many proposed applications, the agent is expected to reason over accumulated experience presented in an input prompt. We propose the OEDD (Operationalize Experience Despite Distraction) corpus, a human-annotator-validated body of scenarios with pre-scripted agent histories where the agent must make a decision based on disparate experiential information in the presence of a distractor. We evaluate three state-of-the-art LLMs (GPT-3.5 Turbo, GPT-4o, and Gemini 1.5 Pro) using a minimal chain-of-thought prompting strategy and observe that when (1) the input context contains over 1,615 tokens of historical interactions, (2) the crucial decision-informing premise is the rightful conclusion over two disparate environment premises, and (3) a trivial but distracting red-herring fact follows, all LLMs perform worse than random choice at selecting the better of two actions. Our code and test corpus are publicly available at: https://github.com/sonnygeorge/OEDD .
We solve the global asymptotic stability problem of an unstable reaction-diffusion Partial Differential Equation (PDE) subject to input delay and state quantization by developing a switched predictor-feedback law. To deal with the input delay, we reformulate the problem as an actuated transport PDE coupled with the original reaction-diffusion PDE. Then, we design a quantized predictor-based feedback mechanism that employs a dynamic switching strategy to adjust the quantization range and error over time. The stability of the closed-loop system is proven by suitably combining backstepping with a small-gain approach and input-to-state stability techniques to derive estimates on solutions, despite the quantization effect and the system's instability. We also extend this result to the case of input quantization.
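The switching idea can be caricatured as a "zooming" quantizer whose range shrinks as the signal decays, keeping the quantization error proportional to the state; the decay rates below are arbitrary stand-ins for the paper's switching law.

    import numpy as np

    def quantize(u, rng, levels=2 ** 8):
        step = 2 * rng / levels               # uniform quantizer over [-rng, rng]
        return float(np.clip(np.round(u / step) * step, -rng, rng))

    rng = 10.0
    for t in range(50):
        u = np.sin(0.1 * t) * np.exp(-0.05 * t)   # stand-in control signal
        uq = quantize(u, rng)
        rng = max(0.1, 0.95 * rng)                # shrink range as the state decays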
Multi-agent learning is intrinsically harder, more unstable, and more unpredictable than single-agent optimization. For this reason, numerous specialized heuristics and techniques have been designed towards the goal of achieving convergence to equilibria in self-play. One such celebrated approach is the use of dynamically adaptive learning rates. Although such techniques are known to allow for improved convergence guarantees in small games, it has been much harder to analyze them in more relevant settings with large populations of agents. These settings are particularly hard, as recent work has established that learning with fixed rates will become chaotic given large enough populations. In this work, we show that chaos persists in large population congestion games despite using adaptive learning rates, even for the ubiquitous Multiplicative Weight Updates algorithm and even in the presence of only two strategies. At a technical level, due to the non-autonomous nature of the system, our approach goes beyond conventional period-three (Li-Yorke) techniques by studying fundamental properties of the dynamics, including invariant sets, volume expansion, and turbulent sets. We complement our theoretical results with numerical experiments.
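A minimal two-strategy MWU map in a linear congestion game already shows the rate-dependence at issue (small steps converge, large steps oscillate); this toy population update is an illustration, not the paper's adaptive-rate analysis.

    import numpy as np

    def mwu_congestion(eta, steps, x0=0.3):
        x = x0                                   # fraction of agents on route A
        for _ in range(steps):
            cost_a, cost_b = x, 1.0 - x          # linear congestion costs
            wa = x * np.exp(-eta * cost_a)
            wb = (1.0 - x) * np.exp(-eta * cost_b)
            x = wa / (wa + wb)                   # multiplicative weights update
        return x

    print(mwu_congestion(eta=0.5, steps=200))   # near 0.5: converges to equilibrium
    print(mwu_congestion(eta=8.0, steps=200))   # far from 0.5: oscillatory/chaotic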
Due to its empirical success in few-shot classification and reinforcement learning, meta-learning has recently received significant interest. Meta-learning methods leverage data from previous tasks to learn a new task in a sample-efficient manner. In particular, model-agnostic methods look for initialization points from which gradient descent quickly adapts to any new task. Although it has been empirically suggested that such methods perform well by learning shared representations during pretraining, there is limited theoretical evidence of such behavior. More importantly, it has not been shown that these methods still learn a shared structure despite architectural misspecifications. In this direction, this work shows, in the limit of an infinite number of tasks, that first-order ANIL with a linear two-layer network architecture successfully learns linear shared representations. This result even holds under overparametrization; having a width larger than the dimension of the shared representations results in an asymptotically low-rank solution. The learned solution then yields good adaptation performance on any new task after a single gradient step. Overall, this illustrates how model-agnostic meta-learning methods can recover shared structure despite architectural misspecification.
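To make the setup concrete, here is one first-order ANIL meta-step for a linear two-layer model f(x) = w^T W x: the inner loop adapts only the head w, and the representation W is then updated with the gradient evaluated at the adapted head (first-order: no second derivatives). Dimensions and learning rates are illustrative.

    import numpy as np

    def fo_anil_step(W, w, tasks, lr_in=0.1, lr_out=0.01):
        # W: (k, d) shared representation; w: (k,) head; tasks: [(X, y), ...]
        grad_W = np.zeros_like(W)
        for X, y in tasks:
            h = X @ W.T                                   # task features
            w_t = w - lr_in * h.T @ (h @ w - y) / len(y)  # head-only inner step
            r = h @ w_t - y                               # residual at adapted head
            grad_W += np.outer(w_t, r @ X) / len(y)       # d loss / d W, first-order
        return W - lr_out * grad_W / len(tasks)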
We present the first results from a 100-day Swift, NICER, and ground-based X-ray/UV/optical reverberation mapping campaign of the Narrow-Line Seyfert 1 Mrk 335, when it was in an unprecedented low X-ray flux state. Despite dramatic suppression of the X-ray variability, we still observe UV/optical lags as expected from disk reverberation. Moreover, the UV/optical lags are consistent with archival observations when the X-ray luminosity was >10 times higher. Interestingly, both low- and high-flux states reveal UV/optical lags that are 6-11 times longer than expected from a thin disk. Such long lags are often interpreted as due to contamination from the broad line region; however, the u-band excess lag (containing the Balmer jump from the diffuse continuum) is less prevalent than in other AGN. The Swift campaign showed a low X-ray-to-optical correlation (similar to previous campaigns), but NICER and ground-based monitoring continued for another two weeks, during which the optical rose to the highest level of the campaign, followed ~10 days later by a sharp rise in X-rays. The low X-ray count rate and relatively large systematic uncertainties in the NICER background make this measurement tentative.
Antibiotic resistance greatly complicates the treatment of diseases: the pathogen is no longer susceptible to specific antibiotics, and the use of such antibiotics is no longer effective for treatment. A recent study using digital organisms suggests that complete elimination of specific antibiotic resistance is unlikely after the disuse of antibiotics, assuming that there are no fitness costs for maintaining resistance once resistance is established. A fitness cost refers to an organism's reaction to a change in environment, whereby the organism improves its abilities in one area at the expense of another. Our goal in this study is to use digital organisms to examine the rate of gain and loss of resistance when fitness costs are incurred in maintaining resistance. Our results showed that a GC-content-based fitness cost during de-selection (removal of antibiotic-induced selective pressure) produced trends in resistance similar to those with no fitness cost, at all stages of initial selection, repeated de-selection, and re-introduction of selective pressure. A paired t-test suggested that the prolonged stabilization of resistance after initial loss is not statistically significant.
We investigate the solvability of the Byzantine Reliable Broadcast and Byzantine Broadcast Channel problems in distributed systems affected by Mobile Byzantine Faults. We show that both problems are not solvable even in one of the most constrained system models for mobile Byzantine faults defined so far. By endowing processes with an additional local failure oracle, we provide a solution to the Byzantine Broadcast Channel problem.