We respond to Aronow et al. (2025)'s paper arguing that randomized controlled trials (RCTs) are "enough," while nonparametric identification in observational studies is not. We agree with their position with respect to experimental versus observational research, but question what it would mean to extend this logic to the scientific enterprise more broadly. We first investigate what is meant by "enough," arguing that this is fundamentally a sociological claim about the relationship between statistical work and larger social and institutional processes, rather than something that can be decided from within the logic of statistics. For a more complete conception of "enough," we outline all that would need to be known -- not just knowledge of propensity scores, but knowledge of many other spatial and temporal characteristics of the social world. Even granting the logic of the critique in Aronow et al. (2025), its practical importance is a question of the contexts under study. We argue that we should not be satisfied by appeals to intuition about the complexity of "naturally occurring" propensity score functions. Instead, we call for more empirical metascience to begin to characterize t
Psycholinguistic research suggests that humans may build a representation of linguistic input that is 'good-enough' for the task at hand. This study examines what architectural features make language models learn human-like good-enough language processing. We focus on the number of layers and self-attention heads in Transformers. We create a good-enough language processing (GELP) evaluation dataset (7,680 examples), which is designed to test the effects of two plausibility types, eight construction types, and three degrees of memory cost on language processing. To annotate GELP, we first conduct a crowdsourcing experiment whose design follows prior psycholinguistic studies. Our model evaluation against the annotated GELP then reveals that the full model, as well as models with fewer layers and/or self-attention heads, exhibits good-enough performance. This result suggests that models with shallower depth and fewer heads can learn good-enough language processing.
This article argues that security is not enough to fully capture what is at stake in government exceptional access to encrypted data. A conception of privacy as security has little to say about "lawful-surveillance protocols" -- an active research agenda in cryptography that aims to enable government exceptional access without compromising systemic security. But the limitations are not contingent on the success of this agenda. The normative landscape today cannot be explained if security is all there is to privacy. And fundamental objections to Apple's abandoned client-side scanning system gesture beyond security. This article's contribution is modest: to show that there must be more to privacy than the security mold it has taken. A richer understanding is needed both to assess policy and to guide research on lawful-surveillance protocols.
This is the first paper in a project on extending the dream principalization and resolution methods of [ATW24], [McQ20] and [Que22] to quasi-excellent, logarithmic and relative settings. We show that the main results of [ATW24] extend to regular schemes with enough derivations and are functorial with respect to all regular morphisms. This is already strong enough to formally imply that the same results hold in other categories, such as complex and p-adic analytic spaces. Our method has many points in common with that of [ATW24], but the accent is now shifted towards the study of weighted centers and their coordinate presentations. Not only do we hope that this is a bit simpler and more conceptual, but this method will also be easily applied in the logarithmic and relative settings in the sequel.
We extend Makkai duality between coherent toposes and ultracategories to a duality between toposes with enough points and ultraconvergence spaces. Our proof generalizes and simplifies Makkai's original proof. Our main result can also be seen as an extension to ionads of Barr's equivalence between topological spaces and relational modules for the ultrafilter monad. In view of the correspondence between toposes and geometric theories, we obtain a strong conceptual completeness theorem, in the sense of Makkai, for geometric theories with enough Set-models. The same result has recently been obtained independently by Saadia (arXiv:2506.23935) and by Hamad (arXiv:2507.07922). Both of their proofs rely on groupoid representations of toposes, which our proof here does not assume.
We extend Deligne's original argument showing that locally coherent topoi have enough points, clarified using collage diagrams. We show that our refinement of Deligne's technique can be adapted to recover every existing result of this kind, including the most recent results about $\kappa$-coherent $\kappa$-topoi. Our presentation allows us to relax the cardinality assumptions typically imposed on the sites involved. We show that a larger class of locally finitely presentable toposes have enough points and that a closed subtopos of a topos with enough points has enough points.
This article explores an approach to addressing the Close Enough Traveling Salesman Problem (CETSP). The objective is to streamline the mathematical formulation by introducing reformulations that approximate the Euclidean distances and simplify the objective function. Additionally, the use of convex sets in the constraint design offers computational benefits. The proposed methodology is empirically validated on real-world CETSP instances, with the aid of computational strategies such as a fragmented CPLEX-based approach. Results demonstrate its effectiveness in managing computational resources without compromising solution quality. Furthermore, the article analyzes the behavior of the proposed mathematical formulations, providing comprehensive insights into their performance.
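The "close enough" condition underlying the CETSP formulations above can be made concrete with a minimal sketch: a tour visits a target if its chosen waypoint lies inside the target's (convex) disk. The targets, radii, and waypoints below are illustrative placeholders, not instances from the article.

```python
import math

def covers(waypoint, center, radius):
    """Euclidean 'close enough' test: ||waypoint - center|| <= radius."""
    return math.dist(waypoint, center) <= radius

def tour_is_feasible(waypoints, targets):
    """A tour is feasible if each waypoint lies in its target's disk."""
    return all(covers(w, c, r) for w, (c, r) in zip(waypoints, targets))

# Two toy targets: disks of radius 1 and 2 (assumed data).
targets = [((0.0, 0.0), 1.0), ((5.0, 0.0), 2.0)]
print(tour_is_feasible([(0.5, 0.5), (4.0, 0.0)], targets))  # both waypoints inside
```

Because each disk is a convex set, the membership constraint `covers` is what a convex (e.g. second-order-cone) formulation of the problem would encode.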
Traditionally, results given by direct numerical simulation (DNS) of the Navier-Stokes equations are widely regarded as reliable benchmark solutions of turbulence, as long as the grid spacing is fine enough (i.e. smaller than the minimum Kolmogorov scale) and the time-step is small enough, say, satisfying the Courant-Friedrichs-Lewy condition. Is this really true? In this paper a two-dimensional sustained turbulent Kolmogorov flow is investigated numerically by two numerical methods, with detailed comparisons: one is the traditional 'direct numerical simulation' (DNS), the other is the 'clean numerical simulation' (CNS). The results given by DNS are a mixture of false numerical noise and the true physical solution, which however are mostly of the same order of magnitude due to the butterfly effect of chaos. On the contrary, the false numerical noise of the results given by CNS is much smaller than the true physical solution of turbulence over a long enough interval of time, so that a CNS result is very close to the true physical solution and thus can be used as a benchmark solution. It is found that numerical noise, as a kind of artificial tiny disturbance, can lead to huge deviati
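The two resolution criteria named in the abstract can be sketched as a simple check. This is an illustrative sketch under assumed parameter values, not a computation from the paper: the grid spacing should resolve the minimum Kolmogorov scale, and the time-step should satisfy the Courant-Friedrichs-Lewy (CFL) condition.

```python
def kolmogorov_scale(nu, epsilon):
    """Kolmogorov length scale eta = (nu^3 / epsilon)^(1/4),
    for kinematic viscosity nu and dissipation rate epsilon."""
    return (nu**3 / epsilon) ** 0.25

def dns_resolution_ok(dx, dt, u_max, nu, epsilon, cfl_max=1.0):
    """True iff grid spacing resolves the Kolmogorov scale and the
    CFL number u_max * dt / dx stays within the stability limit."""
    eta = kolmogorov_scale(nu, epsilon)
    cfl = u_max * dt / dx
    return dx < eta and cfl <= cfl_max

# Placeholder values, chosen only to exercise the check.
print(dns_resolution_ok(dx=1e-4, dt=1e-5, u_max=1.0, nu=1e-3, epsilon=1e-2))
```

The paper's point is precisely that passing such a check does not by itself guarantee that DNS results are free of noise-driven deviation.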
We prove a completely explicit and effective upper bound for the Néron--Tate height of rational points of curves of genus at least $2$ over number fields, provided that they have enough automorphisms with respect to the Mordell--Weil rank of their jacobian. Our arguments build on Arakelov theory for arithmetic surfaces. Our bounds are practical, and we illustrate this by explicitly computing the rational points of a certain genus $2$ curve whose jacobian has Mordell--Weil rank $2$.
Is the Standard Model Charge-Parity (CP) violation ever enough to generate the observed baryon asymmetry? Yes! We introduce a mechanism of baryogenesis (and dark matter production) that can generate the entire observed baryon asymmetry of the Universe using $\textit{only}$ the CP violation within Standard Model systems -- a feat which no other currently proposed mechanism can achieve. Baryogenesis proceeds through a Mesogenesis scenario but with well motivated additional dark sector dynamics: a $\textit{morphon}$ field generates present day mass contributions for the particle mediating the decay responsible for baryogenesis. The effect is an enhancement of baryon production whilst evading present day collider constraints. The CP violation comes entirely from Standard Model contributions to neutral meson systems. Meanwhile, the dark dynamics generate gravitational waves that may be searched for with current and upcoming Pulsar Timing Arrays, as we demonstrate with an example. This mechanism, $\textit{Mesogenesis with a Morphing Mediator}$, motivates probing a new parameter space as well as improving the sensitivity of existing Mesogenesis searches at hadron and electron colliders.
The success of iterative pruning methods in achieving state-of-the-art sparse networks has largely been attributed to improved mask identification and an implicit regularization induced by pruning. We challenge this hypothesis and instead posit that their repeated cyclic training schedules enable improved optimization. To verify this, we show that pruning at initialization is significantly boosted by repeated cyclic training, even outperforming standard iterative pruning methods. We conjecture that the dominant mechanism is a better exploration of the loss landscape, leading to a lower training loss. However, at high sparsity, repeated cyclic training alone is not enough for competitive performance. A strong coupling between learnt parameter initialization and mask seems to be required. Standard methods obtain this coupling via expensive pruning-training iterations, starting from a dense network. To achieve this with sparse training instead, we propose SCULPT-ing, i.e., repeated cyclic training of any sparse mask followed by a single pruning step to couple the parameters and the mask, which is able to match the performance of state-of-the-art i
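The SCULPT-ing recipe described above can be sketched in a few lines. This is a toy sketch under stated assumptions, not the authors' implementation: train repeatedly under a fixed sparse mask (each call to `train_one_cycle` standing in for one learning-rate cycle), then apply a single magnitude-pruning step to couple the learnt parameters to the final mask.

```python
import numpy as np

def magnitude_mask(w, sparsity):
    """Boolean mask keeping the largest-magnitude (1 - sparsity) fraction of w."""
    k = int(round(w.size * (1 - sparsity)))
    threshold = np.sort(np.abs(w).ravel())[::-1][k - 1]
    return np.abs(w) >= threshold

def sculpt(w, mask, train_one_cycle, cycles=3, final_sparsity=0.9):
    """Repeated cyclic training under a fixed mask, then one pruning step."""
    for _ in range(cycles):                       # repeated cyclic training
        w = train_one_cycle(w * mask) * mask
    final = magnitude_mask(w, final_sparsity)     # single pruning step
    return w * final, final

# Toy usage with a placeholder 'training' update (just shifts the weights).
w0 = np.arange(1.0, 11.0)
w_out, m_out = sculpt(w0, np.ones(10, dtype=bool), lambda w: w + 1,
                      cycles=2, final_sparsity=0.5)
```

The key design point from the abstract is that the mask is re-derived from the *trained* weights only once, at the end, rather than through repeated prune-train rounds starting from a dense network.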
Evolution Strategies (ES) are effective gradient-free optimization methods that can be competitive with gradient-based approaches for policy search. ES rely only on the total episodic scores of solutions in their population, from which they estimate fitness gradients for their update, with no access to true gradient information. However, this makes them sensitive to deceptive fitness landscapes, and they tend to explore only one way to solve a problem. Quality-Diversity methods such as MAP-Elites introduced additional information with behavior descriptors (BD) to return a population of diverse solutions, which helps exploration but leads to a large part of the evaluation budget not being focused on finding the best-performing solution. Here we show that behavior information can also be leveraged to find the best policy by identifying promising search areas which can then be efficiently explored with ES. We introduce the framework of Quality with Just Enough Diversity (JEDi), which learns the relationship between behavior and fitness to focus evaluations on solutions that matter. When trying to reach higher fitness values, JEDi outperforms both QD and ES methods on hard exploration tas
The Resource Public Key Infrastructure (RPKI) protocol was standardized to add cryptographic security to Internet routing. With over 50% of Internet resources protected with RPKI today, the protocol already impacts significant parts of Internet traffic. In addition to its growing adoption, there is also increasing political interest in RPKI. The White House indicated in its Roadmap to Enhance Internet Routing Security, on 4 September 2024, that RPKI is a mature and readily available technology for securing inter-domain routing. The Roadmap attributes the main obstacles towards wide adoption of RPKI to a lack of understanding, lack of prioritization, and administrative barriers. This work presents the first comprehensive study of the maturity of RPKI as a viable production-grade technology. We find that current RPKI implementations still lack production-grade resilience and are plagued by software vulnerabilities, inconsistent specifications, and operational challenges, raising significant security concerns. The deployments lack experience with full-fledged strict RPKI-validation in production environments and operate in fail-open test mode. We provide recommendations to improve RPK
In this work, we address various segmentation tasks, each traditionally tackled by distinct or partially unified models. We propose OMG-Seg, One Model that is Good enough to efficiently and effectively handle all the segmentation tasks, including image semantic, instance, and panoptic segmentation, as well as their video counterparts, open vocabulary settings, prompt-driven, interactive segmentation like SAM, and video object segmentation. To our knowledge, this is the first model to handle all these tasks in one model and achieve satisfactory performance. We show that OMG-Seg, a transformer-based encoder-decoder architecture with task-specific queries and outputs, can support over ten distinct segmentation tasks and yet significantly reduce computational and parameter overhead across various tasks and datasets. We rigorously evaluate the inter-task influences and correlations during co-training. Code and models are available at https://github.com/lxtGH/OMG-Seg.
Visual localization is the task of estimating the camera pose from which a given image was taken and is central to several 3D computer vision applications. With the rapid growth in the popularity of AR/VR/MR devices and cloud-based applications, privacy issues are becoming a very important aspect of the localization process. Existing work on privacy-preserving localization aims to defend against an attacker who has access to a cloud-based service. In this paper, we show that an attacker can learn about details of a scene without any access by simply querying a localization service. The attack is based on the observation that modern visual localization algorithms are robust to variations in appearance and geometry. While this is in general a desired property, it also leads to algorithms localizing objects that are similar enough to those present in a scene. An attacker can thus query a server with a large enough set of images of objects, e.g., obtained from the Internet, and some of them will be localized. The attacker can thus learn about object placements from the camera poses returned by the service (which is the minimal information returned by such a service). In this paper, we d
We establish a bi-equivalence between the bi-category of topoi with enough points and a localisation of a bi-subcategory of topological groupoids.
The axiom of choice ensures precisely that, in ZFC, every set is projective: that is, a projective object in the category of sets. In constructive ZF (CZF) the existence of enough projective sets has been discussed as an additional axiom taken from the interpretation of CZF in Martin-Löf's intuitionistic type theory. On the other hand, every non-empty set is injective in classical ZF, an argument which fails to work in CZF. The aim of this paper is to shed some light on the problem of whether there are (enough) injective sets in CZF. We show that no two-element set is injective unless the law of excluded middle is admitted for negated formulas, and that the axiom of power set is required for proving that there are strongly enough injective sets. The latter notion is abstracted from the singleton embedding into the power set, which ensures enough injectives both in every topos and in IZF. We further show that it is consistent with CZF to assume that the only injective sets are the singletons. In particular, assuming the consistency of CZF one cannot prove in CZF that there are enough injective sets. As a complement we revisit the duality between injective and projective sets from the poi
In this brief note, we show that there exist smooth 4-manifolds (with nonempty boundary) containing pairs of exotically knotted 2-spheres that remain exotic after one (either external or internal) stabilization. It follows that the "one is enough" theorem of Auckly-Kim-Melvin-Ruberman-Schwartz does not hold for closed surfaces whose homology classes are characteristic.
We discuss price competition when positive network effects are the only other factor in consumption choices. We show that partitioning consumers into two groups creates a rich enough interaction structure to induce negative marginal demand and produce pure price equilibria where both firms profit. The crucial condition is one group has centripetal influence while the other has centrifugal influence. The result is contrary to when positive network effects depend on a single aggregate variable and challenges the prevalent assumption that demand must be micro-founded on a distribution of consumer characteristics with specific properties, highlighting the importance of interaction structures in shaping market outcomes.
The Close Enough Traveling Salesman Problem (CETSP) is a well-known variant of TSP whereby the agent may complete its mission at any point within a target neighborhood. Heuristics based on overlapped neighborhoods, known as Steiner Zones (SZ), have gained attention in addressing CETSP. While SZs offer effective approximations to the original graph, their inherent overlap imposes constraints on the search space, potentially conflicting with global optimization objectives. Here we show how such limitations can be converted into advantages in a Close Enough Orienteering Problem (CEOP) by aggregating prizes across overlapped neighborhoods. We further extend classic CEOP with Non-uniform Neighborhoods (CEOP-N) by introducing non-uniform costs for prize collection. To tackle CEOP and CEOP-N, we develop a new approach featuring a Randomized Steiner Zone Discretization (RSZD) scheme coupled with a hybrid algorithm based on Particle Swarm Optimization (PSO) and Ant Colony System (ACS), CRaSZe-AntS. The RSZD scheme identifies sub-regions for PSO exploration, and ACS determines the discrete visiting sequence. We evaluate the RSZD's discretization performance on CEOP instances derived from established