Imaging systematics refers to the inhomogeneous distribution of a galaxy sample caused by varying observing conditions and astrophysical foregrounds. Current mitigation methods correct the density fluctuations caused by imaging systematics under the assumption that all galaxies in a sample respond to those systematics identically. Under this assumption, the corrected sample cannot perfectly recover the true correlation function; we name this effect sub-sample systematics. Even if a sample's overall statistics (redshift distribution n(z), galaxy bias b(z)) are accurately measured, n(z) and b(z) can still vary across the observed footprint. This variation raises the amplitude of the galaxy clustering correlation function, while the galaxy-galaxy lensing and cosmic shear correlation functions show no noticeable change. Such a combination could be degenerate with physical signals on small angular scales, such as the amplitude of galaxy clustering, the impact of neutrino mass on the matter power spectrum, etc. Sub-sample systematics cannot be corrected using imaging systematics mitigation approaches that rely on the cross-correlation signal between imaging systematics
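A leading-order sketch of why a spatially varying bias boosts clustering but not lensing (an illustrative calculation, not the paper's derivation; it ignores the analogous n(z) variation):

```latex
% Clustering averages b^2 over the footprint; lensing averages b.
w_{gg}(\theta) \propto \langle b^2 \rangle \, w_m(\theta)
            = \left( \langle b \rangle^2 + \sigma_b^2 \right) w_m(\theta),
\qquad
w_{g\kappa}(\theta) \propto \langle b \rangle \, w_m(\theta).
```

A bias scatter $\sigma_b$ across the footprint therefore raises the clustering amplitude by a fractional amount $\sigma_b^2/\langle b \rangle^2$ while leaving galaxy-galaxy lensing (linear in $b$) and cosmic shear (independent of $b$) essentially unchanged, matching the pattern described above.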
GW231123 is an exceptional gravitational-wave event consistent with the merger of two massive, highly spinning black holes. Reliable inference of the source properties is crucial for accurate interpretation of its astrophysical implications. However, characterization of GW231123 is challenging: only a few signal cycles are observed, and different signal models yield systematically different parameters. We investigate whether the interpretation of GW231123 is robust against model systematics and Gaussian detector noise. We show that the model systematics observed in GW231123 can be reproduced for a simulated signal based on the numerical-relativity surrogate model NRSur7dq4. Simulating data using the maximum-likelihood NRSur7dq4 waveform for GW231123 and no noise realization, we closely recover the systematics observed for the real signal. We then explore how the headline properties of GW231123 are affected by Gaussian detector noise. Using the NRSur7dq4 maximum-likelihood waveform and different noise realizations, we consistently find support for large masses, high spin magnitudes (median $χ_1\geq 0.7$), and high spin precession (median $χ_\mathrm{p}\geq 0.68$). The spin in the di
Upcoming cosmological surveys will achieve increasingly precise constraints in cosmological parameter estimation. To guarantee the robustness of cosmological analyses, it is essential to account for and model systematic effects that can bias cosmological constraints, shifting the best-fit parameters away from their fiducial values. The biases that un-modelled systematic effects might introduce in cosmological parameter estimation can be approximately inferred by means of the Fisher-matrix formalism. In this paper, we introduce a new application of this formalism: by inverting the process, we investigate whether a specific missing or mis-modelled systematic effect can explain away a given tension between two different probes or experiments. We showcase the proposed methodology by examining two representative systematics: galaxy intrinsic alignments and baryonic feedback. As the method is agnostic to the systematic effect and can be applied to a wider range of scenarios, we discuss further possible future applications. While the proposed approach is accurate in the limit of small offsets in the cosmological parameters, where the likelihood can be considered linear in both
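A minimal sketch of the standard Fisher-bias calculation that this formalism builds on, using a toy linear model of my own choosing (not the paper's code); inverting the same relation lets one ask what systematic amplitude would produce a given parameter shift:

```python
import numpy as np

# Toy data vector: mu(A, B) = A*x + B with diagonal Gaussian errors.
x = np.linspace(0.1, 1.0, 50)
sigma = 0.05

# Model derivatives w.r.t. the parameters (A, B); shape (2, n_data).
dmu = np.stack([x, np.ones_like(x)])

# Fisher matrix F_ij = sum_k dmu_i(k) dmu_j(k) / sigma^2
F = dmu @ dmu.T / sigma**2

# An un-modelled systematic residual left in the data vector.
delta_sys = 0.02 * x**2

# Induced parameter shifts: dtheta = F^{-1} B_vec, with
# B_i = sum_k dmu_i(k) delta_sys(k) / sigma^2
Bvec = dmu @ delta_sys / sigma**2
dtheta = np.linalg.solve(F, Bvec)
# dtheta is the bias on (A, B); a quadratic residual mimicked by a
# positive slope shift and a compensating negative offset.
```

The shifts are simply the least-squares projection of the systematic residual onto the model's parameter directions, which is why the approach is only accurate for small offsets where the likelihood is approximately Gaussian.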
Large near-future galaxy surveys offer sufficient statistical power to make our cosmology analyses data-driven, limited primarily by systematic errors. Understanding the impact of systematics is therefore critical. We perform an end-to-end inference analysis on simulated density maps to investigate the impact of several systematics that affect large-scale structure studies; these include systematics caused by photometric redshifts (photo-$z$s), Galactic dust, structure induced by the telescope observing strategy and observing conditions, and incomplete covariance matrices. Specifically, we consider the impacts of incorrect photo-$z$ distributions (photometric biases, scatter, outliers; spectroscopic calibration biases), dust-map resolution, an incorrect dust law, selecting none or only some contaminant templates for deprojection, and using a diagonal covariance matrix instead of a full one. We quantify the biases induced by these systematics on cosmological parameter estimation using tomographic galaxy angular power spectra, with a focus on identifying whether the maximum plausible level of each systematic has an adverse impact on t
Ground-based transmission spectroscopy is often dominated by systematics, which obstruct our ability to leverage the larger aperture sizes available compared to space-based observations. These systematics can be time-correlated and uniform across all spectroscopic light curves, or wavelength-correlated, and can significantly affect the characterization of exoplanet atmospheres. Gaussian Processes (GPs) were introduced in transmission spectroscopy by Gibson et al. (2012) to model correlated systematics in a non-parametric way. The technique uses auxiliary information about the observation and independently fits each spectroscopic light curve to provide robust atmospheric retrievals. However, this method assumes that the uncertainties in the transmission spectrum are uncorrelated in wavelength, which can cause discrepancies and degrade the precision of atmospheric retrievals. To address this limitation, we explore a 2D GP framework formulated by Fortune et al. (2024) to simultaneously model time- and wavelength-correlated systematics. We present its application to ground-based observations of TOI-4153b obtained using the 2-m Himalayan Chandra Telescope (HCT). As we move towards det
We introduce a novel, fast, and efficient generative model built upon scattering covariances, the most recent iteration of the scattering transform statistics. This model is designed to augment by several orders of magnitude the number of maps in datasets of computationally expensive CMB instrumental-systematics simulations, including their non-Gaussian and inhomogeneous features. Unlike conventional neural network-based algorithms, this generative model requires only a minimal number of training samples, making it highly compatible with the computational constraints of typical CMB simulation campaigns. We validate the method using realistic simulations of CMB systematics, which are particularly challenging to emulate, and perform extensive statistical tests to confirm its ability to produce new statistically independent approximate realizations. Remarkably, even when trained on as few as 10 simulations, the emulator closely reproduces key summary statistics -- including the angular power spectrum, scattering coefficients, and Minkowski functionals -- and provides pixel-to-pixel covariance estimates with substantially reduced sample noise compared to those obtained with
In this paper we examine to what extent recent experimental results for $π^{0}$ suppression in O-O collisions at $\sqrt{s_{NN}}=5.36$ TeV fit into the systematics established for much heavier systems. The systematics used for the comparison was published a few years ago \cite{Pet_1} in terms of charged-particle suppression $R_{AA}$ as a function of $\langle N_{part} \rangle$ and $\langle dN_{ch}/dη\rangle$. The values of $R_{AA}$ for O-O collisions, evaluated at the experimentally measured $\langle dN_{ch}/dη\rangle$ and at $\langle N_{part} \rangle$ estimated by Glauber MC, are in good agreement with the systematics obtained for A-A collisions.
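For reference, the nuclear modification factor used in such systematics is conventionally defined as (standard textbook form, not quoted from the cited compilation):

```latex
R_{AA}(p_T) = \frac{\mathrm{d}N_{AA}/\mathrm{d}p_T}
                   {\langle N_{\mathrm{coll}} \rangle \, \mathrm{d}N_{pp}/\mathrm{d}p_T},
```

where $\langle N_{\mathrm{coll}} \rangle$ is the mean number of binary nucleon-nucleon collisions from a Glauber MC; $R_{AA} = 1$ indicates no suppression relative to scaled p-p collisions, and $R_{AA} < 1$ indicates suppression.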
We simulate a variety of optical systematics for Taurus, a balloon-borne cosmic microwave background (CMB) polarisation experiment, to assess their impact on large-scale E-mode polarisation measurements and constraints on the optical depth to reionisation τ. We model a one-month flight of Taurus from Wanaka, New Zealand aboard a super-pressure balloon (SPB). We simulate night-time scans of both the CMB and dust foregrounds in the 150 GHz band, one of Taurus's four observing bands. We consider a variety of possible systematics that may affect Taurus's observations, including non-Gaussian beams, pointing reconstruction error, and half-wave plate (HWP) non-idealities. For each of these, we evaluate the residual power in the difference between maps simulated with and without the systematic, and compare this to the expected signal level corresponding to Taurus's science goals. Our results indicate that most of the HWP-related systematics can be mitigated to be smaller than sample variance by calibrating with Planck's TT spectrum and using an achromatic HWP model, with a preference for five layers of sapphire to ensure good systematic control. However, additional beam characterization wil
The detailed modelling of stellar oscillations is a powerful approach to characterising stars. However, poor treatment of systematics in theoretical models leads to misinterpretation of stellar properties. Here we propose a more principled statistical treatment of the systematics when fitting individual mode frequencies with a typical stellar model grid. We introduce a correlated noise model based on a Gaussian Process (GP) kernel to describe the systematics, given that mode-frequency systematics are expected to be highly correlated. We show that tuning the GP kernel can reproduce the general features of frequency variations under changes of model input physics and fundamental parameters. Fits with the correlated noise model recover stellar parameters better than traditional methods, which either ignore the systematics or treat them as uncorrelated noise.
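As a rough illustration of such a correlated noise model (my own toy implementation with an assumed squared-exponential kernel over radial order; the authors' kernel and parametrisation may differ):

```python
import numpy as np

def sq_exp_kernel(n, amp, length):
    """Squared-exponential kernel over radial orders n: nearby modes share
    correlated model systematics. Illustrative kernel choice."""
    d = n[:, None] - n[None, :]
    return amp**2 * np.exp(-0.5 * (d / length) ** 2)

def log_likelihood(resid, n, obs_err, amp, length):
    """Gaussian log-likelihood of (model - observed) mode frequencies with
    covariance = GP systematics term + diagonal observational errors."""
    C = sq_exp_kernel(n, amp, length) + np.diag(obs_err**2)
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (resid @ np.linalg.solve(C, resid)
                   + logdet + len(resid) * np.log(2 * np.pi))
```

Setting amp=0 recovers the traditional uncorrelated-noise fit, so the correlated treatment strictly generalises it; the kernel hyperparameters can then be tuned (or sampled) alongside the stellar parameters.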
Observations of the 21 cm signal face significant challenges due to bright astrophysical foregrounds, several orders of magnitude brighter than the hydrogen line, along with various systematics. Successful 21 cm experiments require accurate calibration and foreground mitigation. Errors introduced during the calibration process, such as systematics, can disrupt the intrinsic frequency smoothness of the foregrounds, leading to power leakage into the Epoch of Reionisation (EoR) window. Therefore, it is essential to develop strategies to effectively address these challenges. In this work, we adopt a stringent approach to identify and address suspected systematics, including malfunctioning antennas, frequency channels corrupted by radio frequency interference (RFI), and other dominant effects. We implement a statistical framework that utilises various data products from the data processing pipeline to derive specific criteria and filters. These criteria and filters are applied at intermediate stages to mitigate systematic propagation from the early stages of data processing. Our analysis focuses on observations from the Murchison Widefield Array (MWA) Phase I conf
Through a very careful analysis, Kolopanis and collaborators identified a negative power spectrum (PS) systematic. The 21 cm cosmology community has assumed that any observational systematic would add power, as negative PS are non-physical. In addition to the mystery of their origin, negative PS systematics raise the spectre of artificially lowering upper limits on the 21 cm PS. It appears that the source of the negative PS systematics is a subtle interaction between choices in how the PS estimate is calculated and baseline-dependent systematic power. In this paper we present a statistical model of baseline-dependent systematics to explore how negative PS systematics can appear and their statistical characteristics. This leads us to recommendations on when and how to consider negative PS systematics when reporting observational 21 cm cosmology upper limits.
Time-series photometry and spectroscopy of transiting exoplanets allow us to study their atmospheres. Unfortunately, the required precision to extract atmospheric information surpasses the design specifications of most general purpose instrumentation, resulting in instrumental systematics in the light curves that are typically larger than the target precision. Systematics must therefore be modelled, leaving the inference of light curve parameters conditioned on the subjective choice of models and model selection criteria. This paper aims to test the reliability of the most commonly used systematics models and model selection criteria. As we are primarily interested in recovering light curve parameters rather than the favoured systematics model, marginalisation over systematics models is introduced as a more robust alternative to simple model selection. This can incorporate uncertainties in the choice of systematics model into the error budget as well as the model parameters. Its use is demonstrated using a series of simulated transit light curves. Stochastic models, specifically Gaussian processes, are also discussed in the context of marginalisation over systematics models, and
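A schematic of marginalisation over systematics models via Bayesian model averaging (using BIC weights as an evidence proxy and mixture moments to combine estimates; a common simplification, not necessarily the paper's exact procedure):

```python
import numpy as np

def bic_weights(log_likes, n_params, n_data):
    """Approximate model weights from the BIC, a stand-in for full
    Bayesian evidences. Illustrative only."""
    bic = -2 * np.asarray(log_likes) + np.asarray(n_params) * np.log(n_data)
    w = np.exp(-0.5 * (bic - bic.min()))
    return w / w.sum()

def marginalised_estimate(means, variances, weights):
    """Combine per-model estimates of a light curve parameter into a
    model-averaged mean and variance (moments of the weighted mixture).
    The spread between models inflates the error budget."""
    mu = np.sum(weights * means)
    var = np.sum(weights * (variances + (means - mu) ** 2))
    return mu, var
```

The second term in the variance is the between-model scatter, which is exactly the uncertainty that simple model selection discards.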
Instrumental systematics need to be controlled to high precision for upcoming Cosmic Microwave Background (CMB) experiments. The level of contamination caused by these systematics is often linked to the scan strategy, and scan strategies for satellite experiments can significantly mitigate these systematics. However, no detailed study has been performed for ground-based experiments. Here we show that under the assumption of constant elevation scans (CESs), the ability of the scan strategy to mitigate these systematics is strongly limited, irrespective of the detailed structure of the scan strategy. We calculate typical values and maps of the quantities coupling the scan to the systematics, and show how these quantities vary with the choice of observing elevations. These values and maps can be used to calculate and forecast the magnitude of different instrumental systematics without requiring detailed scan strategy simulations. As a reference point, we show that inclusion of even a single boresight rotation angle significantly improves over sky rotation alone for mitigating these systematics. A standard metric for evaluating cross-linking is related to one of the parameters studied
Galaxy clustering measurements are a key probe of the matter density field in the Universe. With the era of precision cosmology upon us, surveys rely on precise measurements of the clustering signal for meaningful cosmological analysis. However, the presence of systematic contaminants can bias the observed galaxy number density, and thereby bias the galaxy two-point statistics. As the statistical uncertainties get smaller, correcting for these systematic contaminants becomes increasingly important for unbiased cosmological analysis. We present and validate a new method for understanding and mitigating both additive and multiplicative systematics in galaxy clustering measurements (two-point function) by joint inference of contaminants in the galaxy overdensity field (one-point function) using a maximum-likelihood estimator (MLE). We test this methodology with KiDS-like mock galaxy catalogs and synthetic systematic template maps. We estimate the cosmological impact of such mitigation by quantifying uncertainties and possible biases in the inferred relationship between the observed and the true galaxy clustering signal. Our method robustly corrects the clustering signal to the sub-per
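For the additive part of such contamination, the one-point correction reduces to a linear regression of the observed overdensity against systematic template maps; a toy version (ordinary least squares with invented amplitudes, a simplification of the paper's MLE) looks like:

```python
import numpy as np

rng = np.random.default_rng(0)
npix = 10_000
templates = rng.normal(size=(3, npix))         # synthetic systematic maps
true_delta = rng.normal(scale=0.5, size=npix)  # stand-in galaxy overdensity
alpha_true = np.array([0.3, -0.1, 0.05])       # assumed contamination amplitudes

# Contaminated overdensity under a purely additive linear model.
obs = true_delta + alpha_true @ templates

# MLE of the contamination amplitudes: for Gaussian pixel noise this is
# ordinary least squares of obs against the templates.
alpha_hat, *_ = np.linalg.lstsq(templates.T, obs, rcond=None)
cleaned = obs - alpha_hat @ templates
```

Multiplicative contamination and chance correlations between templates and the true field complicate this picture, which is what motivates the joint one-point/two-point inference and the bias quantification described in the abstract.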
The LSST survey will provide unprecedented statistical power for measurements of dark energy. Consequently, controlling systematic uncertainties is becoming more important than ever. The LSST observing strategy will affect the statistical uncertainty and systematics control for many science cases; here, we focus on weak lensing systematics. The fact that the LSST observing strategy involves hundreds of visits to the same sky area provides new opportunities for systematics mitigation. We explore these opportunities by testing how different dithering strategies (pointing offsets and rotational angle of the camera in different exposures) affect additive weak lensing shear systematics on a baseline operational simulation, using the $ρ$-statistics formalism. Some dithering strategies improve systematics control at the end of the survey by up to a factor of $\sim 3$-$4$ compared to others. We find that a random translational dithering strategy, applied with random rotational dithering at every filter change, is the most effective of those strategies tested in this work at averaging down systematics. Adopting this dithering algorithm, we explore the effect of varying the area of the survey
Neutron physics is one of the oldest branches of experimental nuclear physics, but the investigation of spontaneous neutron emission from the ground state along the neutron dripline is still at its beginning, in spite of its crucial importance for nuclear astrophysics. The proton dripline is much better investigated, and a systematics of spontaneous proton half-lives corrected for the centrifugal barrier (monopole transitions) is given by the Geiger-Nuttall law $\log_{10}T\simχ$, where $χ\sim ZQ^{-1/2}$ is the Coulomb parameter characterizing the outgoing Coulomb-Hankel wave in terms of the daughter charge $Z$ and the Q-value. Our purpose is to propose a similarly simple systematics of spontaneous neutron half-lives, but in terms of the reduced nuclear radius $ρ=κR\sim A^{1/3}Q^{1/2}$, characterizing the "neutral" outgoing spherical Hankel wave. It turns out that the half-life for emission of neutral particles is governed by the scaling law $T\simρ^{-2}\sim A^{-2/3}Q^{-1}$ for monopole transitions. We highlight the important role of the angular momentum carried by the emitted neutron. The influence of the neutron wave function generated by a Woods-Saxon nuclear mean field is also analy
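The quoted scaling follows directly from the definition of the reduced radius (a dimensional sketch consistent with the abstract, with $\mu$ the reduced mass; not the authors' full derivation):

```latex
\rho = \kappa R, \qquad \kappa = \frac{\sqrt{2\mu Q}}{\hbar}, \qquad R \sim A^{1/3}
\;\Rightarrow\; \rho \sim A^{1/3} Q^{1/2}
\;\Rightarrow\; T \sim \rho^{-2} \sim A^{-2/3} Q^{-1},
```

so for monopole transitions, with no Coulomb or centrifugal barrier, the half-life depends on mass number and Q-value only through this simple power law.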
T2HK and T2HKK are proposed extensions of the T2K experiment in Japan, and DUNE is the future long-baseline program of Fermilab. All three experiments will use extremely high beam power and large detector volumes to observe neutrino oscillations. Because of the large statistics, these experiments will be highly sensitive to systematics; thus, a small change in the systematics can cause a significant change in their sensitivities. To understand this, we perform a comparative study of T2HK, T2HKK and DUNE with respect to their systematic errors. Specifically, we study the effect of the systematics on the determination of the neutrino mass hierarchy, the octant of the mixing angle $θ_{23}$, and $δ_{CP}$ in the standard three-flavor scenario, and also analyze the role of systematic uncertainties in constraining the parameters of nonstandard interactions in neutrino propagation. Taking the overall systematics for signal and background normalization, we quantify how the sensitivities of these experiments change as the systematics are varied from $1\%$ to $7\%$.
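Normalization systematics of this kind are usually implemented as a pull term in the chi-square: a nuisance parameter rescales the predicted event rates and is penalised by the assumed systematic width (here 1%-7%). A minimal sketch (my own toy implementation, not the analysis code; the quadratic form in the nuisance parameter is minimised analytically):

```python
import numpy as np

def pulled_chi2(n_test, n_true, sys_frac):
    """Gaussian chi-square between predicted (n_test) and observed (n_true)
    event rates, with one normalisation nuisance xi of prior width sys_frac:
        chi2(xi) = sum_i (r_i + sys_frac * n_test_i * xi)^2 / n_true_i + xi^2
    which is quadratic in xi and minimised in closed form."""
    r = n_test - n_true
    a = np.sum(n_test * r / n_true)
    b = np.sum(n_test**2 / n_true)
    xi = -sys_frac * a / (1 + sys_frac**2 * b)
    shifted = r + sys_frac * n_test * xi
    return np.sum(shifted**2 / n_true) + xi**2
```

A larger sys_frac gives the nuisance parameter more freedom to absorb a normalisation mismatch, lowering the chi-square and hence the sensitivity, which is the effect the abstract quantifies across the 1%-7% range.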
The CMB's B-mode polarization provides a handle on several cosmological parameters, most notably the tensor-to-scalar ratio, $r$, and is sensitive to parameters which govern the growth of large-scale structure (LSS) and the evolution of the gravitational potential. The primordial gravitational-wave- and secondary lensing-induced B-mode signals are very weak and therefore prone to various foregrounds and systematics. In this work we use Fisher-matrix-based estimations and apply, for the first time, Monte-Carlo Markov Chain (MCMC) simulations to determine the effect of beam systematics on the inferred cosmological parameters from five upcoming experiments: PLANCK, POLARBEAR, SPIDER, QUIET+CLOVER and CMBPOL. We consider beam systematics which couple the beam substructure to the gradient of temperature anisotropy and polarization (differential beamwidth, pointing and ellipticity), and beam systematics due to differential beam normalization (differential gain) and orientation (beam rotation) of the polarization-sensitive axes (the latter two effects are insensitive to the beam substructure). We determine allowable levels of beam systematics for given tolerances on the induced parameter errors
Weak gravitational lensing of the cosmic microwave background (CMB) is an important cosmological tool that allows us to learn about the structure, composition and evolution of the Universe. Upcoming CMB experiments, such as the Simons Observatory (SO), will provide high-resolution and low-noise CMB measurements. We consider the impact of instrumental systematics on the corresponding high-precision lensing reconstruction power spectrum measurements. We simulate CMB temperature and polarization maps for an SO-like instrument and potential scanning strategy, and explore systematics relating to beam asymmetries and offsets, boresight pointing, polarization angle, gain drifts, gain calibration and electric crosstalk. Our analysis shows that the majority of the biases induced by the systematics we modeled are below a detection level of $\sim 0.6σ$. We discuss potential mitigation techniques to further reduce the impact of the more significant systematics, and pave the way for future lensing-related systematics analyses.
Future CMB experiments will require exquisite control of systematics in order to constrain the $B$-mode polarisation power spectrum. One class of systematics that requires careful study is instrumental systematics. The potential impact of such systematics is most readily understood by considering analysis pipelines based on pair differencing. In this case, any differential gain, pointing or beam ellipticity between the two detectors in a pair can result in intensity leakage into the $B$-mode spectrum, which needs to be controlled to high precision because the total intensity signal is much larger than the $B$-mode signal. One well-known way to suppress such systematics is through careful design of the scan strategy, in particular making use of any capability to rotate the instrument about its pointing (boresight) direction. Here, we show that the combination of specific choices of such partial boresight rotation angles with redundancies present in the scan strategy is a powerful approach for suppressing systematic effects. This mitigation can be performed in analysis in advance of map-making and, in contrast to other approaches (e.g. deprojection or filter