We present a new approach to identifying satellite trails (or other linear artifacts) in ACS/WFC imaging data using a modified Radon Transform. We demonstrate that this approach is sensitive to features with mean brightness significantly below the background noise level and that it is resistant to the influence of bright astronomical sources (e.g., stars, galaxies) in most cases. Comparing against a set of satellite trails identified by eye, we find a trail recovery rate of 85\% and a false detection rate (after removing diffraction spikes, which are easily filtered) of 2.5\%. By analyzing a much larger ACS/WFC data set in which false trails are identified by their persistence across multiple images of the same field, we identify the regions of Radon Transform parameter space and the image properties for which our algorithm is unreliable, and estimate a false detection rate of $\sim10\%$ elsewhere. We apply our method to ACS/WFC data taken between 2002 and 2022 to determine both the frequency of satellite trail contamination in science data and the typical trail brightness as a function of time. We find that the rate of satellite trail contamination has increased by approximately a factor of two over this period.
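As an illustration of why a Radon-style transform can pull trails out from below the noise, here is a minimal sketch using the standard Radon transform from scikit-image on synthetic data (the paper's modified transform and its parameters are not reproduced here). The trail's line integral accumulates signal over hundreds of pixels while the noise grows only as the square root of the path length.

```python
# Minimal illustration: a trail at half the per-pixel noise level still
# produces a clear peak in Radon space, because the transform integrates
# along lines. Uses skimage's standard radon(); the paper's *modified*
# transform is not reproduced here.
import numpy as np
from skimage.transform import radon

rng = np.random.default_rng(42)
img = rng.normal(0.0, 1.0, (256, 256))            # background noise, sigma = 1
cols = np.arange(256)
rows = (0.3 * cols + 60).astype(int)              # a faint linear trail
img[rows, cols] += 0.5                            # 0.5 sigma per pixel

theta = np.linspace(0.0, 180.0, 360, endpoint=False)
sinogram = radon(img, theta=theta, circle=False)  # line integrals vs (offset, angle)

# Express each (offset, angle) cell in S/N units with a robust per-angle scatter.
med = np.median(sinogram, axis=0)
mad = 1.4826 * np.median(np.abs(sinogram - med), axis=0)
snr = (sinogram - med) / mad
r, a = np.unravel_index(np.argmax(snr), snr.shape)
print(f"peak S/N = {snr[r, a]:.1f} at angle {theta[a]:.1f} deg")
```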
Using repeat imaging of a galaxy cluster taken over a seventeen-year baseline, we examine the impact that degraded Charge Transfer Efficiency (CTE) has on photometric measurements of extended sources using the ACS/WFC on HST. We examine how measured brightnesses depend on time since ACS installation, source location on the WFC detectors, source brightness, and local background level in individual exposures. We find that global brightness measurements using large apertures are generally reliable to within $\sim$0.05 mag across the WFC detectors if exposure backgrounds are above $20\,e^-$/pixel and sources are brighter than $\sim 300\,e^-$ in a single exposure. However, brightness measurements on smaller scales can suffer deficiencies in excess of 0.1 mag (sometimes significantly more) in recent data unless sources are very close to the CCD serial registers ($\lesssim 512$ pixels) or brighter than $\sim 3000\,e^-$ in a single exposure. We also show how degraded CTE can result in artificial asymmetries in galaxy light distributions, which are largely mitigated if backgrounds are $>20\,e^-$/pixel and targets are within $\sim$1536 pixels of the serial registers. As expected, these effects grow more severe with time since ACS installation.
Recently, the ACS team applied an Ubercal framework to assess the photometric repeatability of stars observed across the WFC detector using 15 years of post-SM4 calibration data in the globular cluster 47 Tuc (Ryan et al., 2024). A surprising finding was an apparent 0.05 mag global difference in sensitivity between the WFC1 and WFC2 chips, which had not been seen in prior tests of sensitivity variations around the field of view. Given the many degenerate variables within the Ubercal framework, such as CTE losses, time-dependent sensitivity, and flat-field corrections, we obtained new calibration data to perform a straightforward test of the reported $\sim$5$\%$ flux offset between detectors. We observed three white dwarf standards in three filters at four positions on the detector (each on a different amplifier), but with the same number of x and y pixel transfers to mitigate differential CTE-related effects. For the F606W and F814W filters, the agreement is good to 0.4$\%$ on average, and always to 1$\%$ or better in individual cases. The consistency of these two filters over all three stars and the four dither positions provides very strong evidence against the large global sensitivity offset between the two chips.
Scientific CMOS (sCMOS) image sensors are a modern alternative to typical CCD detectors and are rapidly gaining popularity in observational astronomy due to their large sizes, low read-out noise, high frame rates, and low manufacturing cost. However, numerous challenges remain in using them due to fundamental differences between CCD and CMOS architectures, especially concerning the pixel-dependent and non-Gaussian nature of their read-out noise. One of the main components of the latter is random telegraph noise (RTN), caused by charge traps introduced by defects close to the oxide-silicon interface in sCMOS image sensors, which manifests itself as discrete jumps in a pixel's output signal, degrading overall image fidelity. In this work, we present a statistical method to detect and characterize RTN-affected pixels using a series of dark frames. Identifying RTN-contaminated pixels enables post-processing strategies that mitigate their impact and the development of manufacturing quality metrics.
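The abstract does not spell out the statistical test, so the sketch below is an assumption on my part: it flags pixels whose dark-frame time series is better described by a two-level Gaussian mixture than by a single Gaussian, since RTN produces discrete output levels.

```python
# Sketch (assumed approach, not the paper's published method): flag pixels
# whose dark-frame time series prefers a two-level Gaussian mixture over a
# single Gaussian, since RTN shows up as discrete jumps between levels.
import numpy as np
from sklearn.mixture import GaussianMixture

def rtn_candidates(darks, delta_bic=10.0, min_sep=3.0):
    """darks: (n_frames, ny, nx) stack of dark frames. Returns a boolean map."""
    n_frames, ny, nx = darks.shape
    flags = np.zeros((ny, nx), dtype=bool)
    for y in range(ny):
        for x in range(nx):
            s = darks[:, y, x].reshape(-1, 1)
            g1 = GaussianMixture(n_components=1, random_state=0).fit(s)
            g2 = GaussianMixture(n_components=2, random_state=0).fit(s)
            sep = abs(g2.means_[0, 0] - g2.means_[1, 0])
            sig = np.sqrt(g2.covariances_[:, 0, 0]).max()   # widest component
            # Two well-separated levels that clearly beat one Gaussian (BIC).
            if g1.bic(s) - g2.bic(s) > delta_bic and sep > min_sep * sig:
                flags[y, x] = True
    return flags

# Synthetic demo: one pixel toggles between two levels (telegraph signal).
rng = np.random.default_rng(0)
darks = rng.normal(100.0, 2.0, (200, 8, 8))
darks[:, 3, 3] += 12.0 * (rng.random(200) < 0.4)
print(np.argwhere(rtn_candidates(darks)))         # expected: [[3 3]]
```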
Many parts of the human body generate internal sounds during biological processes, and these are rich sources of information for understanding health and wellbeing. Despite the long history of development and use of stethoscopes, proper tools are still lacking for recording internal body sounds, together with complementary sensors, for long-term monitoring. In this paper, we present our development of a wearable electronic stethoscope, named Patchkeeper (PK), that can be used to record internal body sounds over long periods of time. Patchkeeper also integrates several state-of-the-art biological sensors, including electrocardiogram (ECG), photoplethysmography (PPG), and inertial measurement unit (IMU) sensors. As a wearable device, Patchkeeper can be placed on various parts of the body to collect sound from particular organs, including the heart, lungs, stomach, and joints. We show in this paper that several vital signals can be recorded simultaneously with high quality. As Patchkeeper can be operated directly by the user, i.e., without involving healthcare professionals, we believe it could be a useful tool for telemedicine and remote diagnostics.
The paper concerns the extension of the Heritage Digital Twin Ontology, introduced in previous work, to describe the reactivity of digital twins used for cultural heritage documentation by including the semantic description of sensors and activators and of the whole process of interacting with the real world. After analysing previous work on the use of digital twins in cultural heritage, a summary description of the Heritage Digital Twin Ontology is provided, and existing applications of digital twins to cultural heritage are overviewed, with references to reviews summarizing the large body of scientific contributions on the topic. A novel ontology, named the Reactive Digital Twin Ontology, is then described, in which sensors, activators, and the decision processes are also semantically described, turning the previously synchronic approach to cultural heritage documentation into a diachronic one. Some case studies exemplify this theory.
Forklifts are essential for transporting goods in industrial environments. These machines face wear and tear during field operation, along with rough terrain, tight spaces, and complex handling scenarios. This increases the likelihood of unintended impacts, such as collisions with goods, infrastructure, or other machinery. In addition, deliberate misuse has been reported, compromising safety and equipment integrity. This paper presents a low-cost, low-power impact detection system based on multiple wireless sensor nodes measuring 3D accelerations. These were deployed in a measurement campaign covering real-world operational scenarios. Based on the collected data, an algorithm was developed to differentiate high-impact events from normal usage and to localize detected collisions on the forklift. The solution successfully detects and localizes impacts while maintaining low power consumption, enabling reliable forklift monitoring with multi-year sensor autonomy.
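The abstract does not give the algorithm's details, so the sketch below is only a plausible reading of the detect-then-localize split: threshold the high-pass-filtered acceleration magnitude to find impact events, then attribute each event to the sensor node that recorded the largest peak. The sample rate, filter corner, and threshold are illustrative assumptions.

```python
# Plausible sketch only: threshold the high-pass-filtered acceleration
# magnitude for event detection, then attribute the event to the node with
# the largest peak. FS, the filter corner, and thresh_g are assumptions.
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 400.0                                   # assumed sample rate [Hz]
SOS = butter(4, 20.0, btype="highpass", fs=FS, output="sos")

def impact_indices(acc_xyz, thresh_g=4.0):
    """acc_xyz: (n_samples, 3) accelerations in g. Returns event start indices."""
    mag = np.linalg.norm(sosfiltfilt(SOS, acc_xyz, axis=0), axis=1)
    above = mag > thresh_g
    starts = np.flatnonzero(above & ~np.roll(above, 1))   # rising edges
    return starts, mag

def localize(node_streams, thresh_g=4.0):
    """node_streams: {node_id: (n_samples, 3)}. Largest filtered peak wins."""
    peaks = {nid: impact_indices(a, thresh_g)[1].max()
             for nid, a in node_streams.items()}
    return max(peaks, key=peaks.get)
```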
We present a bio-hybrid environmental sensor system that integrates natural plants and embedded deep learning for real-time, on-device detection of temperature and ozone level changes. Our system, based on the low-power PhytoNode platform, records electric differential potential signals from Hedera helix and processes them onboard using an embedded deep learning model. We demonstrate that our sensing device detects changes in temperature and ozone with sensitivities of up to 0.98. Daily and inter-plant variability, as well as limited precision, could be mitigated by incorporating additional training data, which is readily integrable into our data-driven framework. Our approach also has the potential to scale to new environmental factors and plant species. By integrating embedded deep learning onboard our biological sensing device, we offer a new low-power solution for continuous environmental monitoring and potentially other fields of application.
Accurate calibration of sensor extrinsic parameters (i.e., relative poses) for ground robotic systems is crucial for ensuring spatial alignment and achieving high-performance perception. However, existing calibration methods typically require complex and often human-operated processes to collect data. Moreover, most frameworks neglect acoustic sensors, thereby limiting the associated systems' auditory perception capabilities. To alleviate these issues, we propose an observability-aware active calibration method for ground robots with multimodal sensors, including a microphone array and a LiDAR (exteroceptive sensors) and wheel encoders (proprioceptive sensors). Unlike traditional approaches, our method enables active trajectory optimization for online data collection and calibration, contributing to the development of more intelligent robotic systems. Specifically, we leverage the Fisher information matrix (FIM) to quantify parameter observability and adopt its minimum eigenvalue as an optimization metric for trajectory generation via B-spline curves. Through online planning and replanning of the robot trajectory, the method enhances the observability of the multi-sensor extrinsic parameters.
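The FIM-based metric is concrete enough to sketch: accumulate per-pose information terms along a candidate trajectory and score it by the smallest eigenvalue, which is the quantity the B-spline trajectory optimizer would maximize. The toy Jacobians below are stand-ins for the paper's microphone-array, LiDAR, and encoder measurement models.

```python
# The observability metric itself: accumulate per-pose Fisher information
# J^T J / sigma^2 and score the trajectory by the smallest eigenvalue. The
# toy Jacobians stand in for the real microphone/LiDAR/encoder models.
import numpy as np

def fim_min_eig(jacobians, meas_var=1e-2):
    """jacobians: list of (m, p) measurement Jacobians w.r.t. p extrinsics."""
    p = jacobians[0].shape[1]
    fim = np.zeros((p, p))
    for J in jacobians:
        fim += J.T @ J / meas_var            # Gaussian-noise FIM accumulation
    return np.linalg.eigvalsh(fim)[0]        # eigvalsh sorts ascending

# A trajectory that never excites the third parameter leaves it unobservable
# (min eigenvalue 0); a varying trajectory makes all parameters observable.
static = [np.hstack([np.eye(2), np.zeros((2, 1))]) for _ in range(50)]
moving = [np.hstack([np.eye(2), np.array([[np.sin(t)], [np.cos(t)]])])
          for t in np.linspace(0.0, 2.0 * np.pi, 50)]
print(fim_min_eig(static), fim_min_eig(moving))   # 0.0 vs clearly > 0
```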
Low-cost sensors (LCS) are affordable, compact, and often portable devices designed to measure various environmental parameters, including air quality. These sensors are intended to provide accessible and cost-effective solutions for monitoring pollution levels in different settings, such as indoors, outdoors, and in moving vehicles. However, the data produced by LCS are prone to various sources of error that can affect accuracy. Calibration is a well-known procedure for improving the reliability of the data produced by LCS, and several efforts have been made to calibrate them. This work proposes a novel Estimated Error Augmented Two-phase Calibration (\textit{EEATC}) approach to calibrate LCS in stationary and mobile deployments. In contrast to existing approaches, the \textit{EEATC} calibrates the LCS in two phases, where the error estimated in the first-phase calibration is augmented with the input to the second phase, which helps the second phase learn the distributional features better and produce more accurate results. We show that the \textit{EEATC} outperforms well-known single-phase calibration models such as linear regression models (single-variable and multiple-variable linear regression).
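A minimal sketch of the two-phase idea as described: a first-phase model is fit, its estimated error is appended to the input features, and a second-phase model is trained on the augmented input. The specific model choices (linear first phase, random forests for the error estimator and second phase) are my assumptions, not the paper's.

```python
# Two-phase sketch: phase 1 fits a simple calibration; its error is learned
# by an auxiliary model so it can be appended to the features at test time;
# phase 2 trains on the augmented input. Model choices are assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor

def eeatc_fit(X, y_ref):
    phase1 = LinearRegression().fit(X, y_ref)
    err = y_ref - phase1.predict(X)              # error observed after phase 1
    err_model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, err)
    X_aug = np.column_stack([X, err_model.predict(X)])
    phase2 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_aug, y_ref)
    return err_model, phase2

def eeatc_predict(err_model, phase2, X):
    X_aug = np.column_stack([X, err_model.predict(X)])
    return phase2.predict(X_aug)
```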
The use of low-cost sensors in conjunction with high-precision instrumentation for air pollution monitoring has shown promising results in recent years. One of the main challenges for these sensors has been the quality of their data, which is why most efforts have focused on calibrating the sensors using machine learning techniques to improve data quality. However, one aspect has been overlooked: these sensors are mounted on nodes that may have energy consumption restrictions if they are battery-powered. In this paper, we describe the usual sensor data-gathering process and study the trade-offs between the sampling of such sensors, the quality of the sensor calibration, and the power consumption involved. To this end, we conduct experiments on prototype nodes measuring tropospheric ozone, nitrogen dioxide, and nitrogen monoxide at high frequency. The results show that the sensor sampling strategy directly affects the quality of the air pollution estimation and that each type of sensor may require a different sampling strategy. In addition, duty cycles of 0.1 can be achieved when the sensors have response times on the order of two minutes.
In this work, we experimentally investigate the frequency limit of Hall effect sensor designs based on a two-dimensional electron gas (2DEG) gallium arsenide/aluminum gallium arsenide (GaAs/AlGaAs) heterostructure. The frequency limit is measured and compared for four GaAs/AlGaAs Hall effect sensor designs in which the Ohmic contact length (contact geometry) is varied across the four devices. By varying the geometry, the trade-off between sensitivity and frequency limit is explored, and the underlying causes of the frequency limit are investigated from the resistance and capacitance perspectives. Current spinning, the traditional method of removing offset noise, imposes a practical frequency limit on Hall effect sensors; without current spinning, the frequency limit of the Hall effect sensor is significantly higher. One such application is wide-frequency Hall effect sensors measuring currents in power electronics that operate at higher frequencies.
This paper presents a comprehensive review of significant subjective and objective human stress detection techniques available in the literature. Methods for measuring human stress responses include subjective questionnaires (developed by psychologists) and objective markers observed using data from wearable and non-wearable sensors. In particular, wearable sensor-based methods commonly use data from electroencephalography, electrocardiography, galvanic skin response, electromyography, electrodermal activity, heart rate, heart rate variability, and photoplethysmography, both individually and in multimodal fusion strategies. Methods based on non-wearable sensors, in contrast, include strategies such as analyzing pupil dilation, speech, smartphone data, eye movement, body posture, and thermal imaging. Whenever an individual encounters a stressful situation, physiological, physical, or behavioral changes are induced that help in coping with the challenge at hand. A wide range of studies has attempted to establish a relationship between these stressful situations and human responses using different kinds of psychological, physiological, physical, and behavioral measures.
Resonant sensors determine a sensed parameter by measuring the resonance frequency of a resonator. For fast continuous sensing, it is desirable to operate resonant sensors in a closed-loop configuration, where a feedback loop ensures that the resonator is always actuated near its resonance frequency, so that precision is maximized even in the presence of drifts or fluctuations of the resonance frequency. However, in a closed-loop configuration, the precision is determined not only by the resonator itself but also by the feedback loop, even if the feedback circuit is noiseless. Therefore, to characterize the intrinsic precision of resonant sensors, the open-loop configuration is often employed. To link these measurements to the actual closed-loop performance of the resonator, it is desirable to have a relation that determines the closed-loop precision of the resonator from open-loop characterization data. In this work, we present a methodology to estimate the closed-loop resonant sensor precision relying only on an open-loop characterization of the resonator. The procedure is beneficial for fast performance estimation and benchmarking of resonant sensors, because it does not require implementing the full closed-loop feedback system.
The Advanced Camera for Surveys (ACS) Virgo Cluster Survey is a large program to image 100 early-type Virgo galaxies using the F475W and F850LP bandpasses of the Wide Field Channel of the ACS instrument on the Hubble Space Telescope (HST). The scientific goals of this survey include an exploration of the three-dimensional structure of the Virgo Cluster and a critical examination of the usefulness of the globular cluster luminosity function as a distance indicator. Both of these issues require accurate distances for the full sample of 100 program galaxies. In this paper, we describe our data reduction procedures and examine the feasibility of accurate distance measurements using the method of surface brightness fluctuations (SBF) applied to the ACS Virgo Cluster Survey F850LP imaging. The ACS exhibits significant geometrical distortions due to its off-axis location in the HST focal plane; correcting for these distortions by resampling the pixel values onto an undistorted frame results in pixel correlations that depend on the nature of the interpolation kernel used for the resampling. This poses a major challenge for the SBF technique, which normally assumes a flat power spectrum for the noise.
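For context, the core SBF fit that the flat-noise assumption enters is compact enough to sketch: the azimuthally averaged power spectrum of the galaxy-subtracted image is modeled as P(k) = P0 E(k) + P1, where E(k) is the expectation power spectrum set by the PSF and P0 carries the fluctuation signal. The schematic below (my illustration, not the survey pipeline) shows where correlated, non-flat noise would bias the constant P1 term.

```python
# Schematic SBF power-spectrum fit: P(k) = P0 * E(k) + P1, with E(k) the
# PSF power spectrum and P1 the (assumed flat) noise floor. Resampling
# correlates the noise, making P1 k-dependent and biasing the fit.
import numpy as np

def azimuthal_average(power2d, nbins=40):
    ny, nx = power2d.shape
    ky, kx = np.meshgrid(np.fft.fftfreq(ny), np.fft.fftfreq(nx), indexing="ij")
    k = np.hypot(ky, kx).ravel()
    idx = np.digitize(k, np.linspace(0.0, k.max(), nbins + 1))
    flat = power2d.ravel()
    return np.array([flat[idx == i].mean() if (idx == i).any() else np.nan
                     for i in range(1, nbins + 1)])

def fit_sbf(image, psf):
    """Least-squares fit of P(k) = P0*E(k) + P1 over azimuthal averages."""
    pk = azimuthal_average(np.abs(np.fft.fft2(image)) ** 2)
    ek = azimuthal_average(np.abs(np.fft.fft2(psf, s=image.shape)) ** 2)
    ok = np.isfinite(pk) & np.isfinite(ek)
    A = np.column_stack([ek[ok], np.ones(ok.sum())])
    (p0, p1), *_ = np.linalg.lstsq(A, pk[ok], rcond=None)
    return p0, p1   # P0 carries the SBF signal; P1 is the noise floor
```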
We propose a kernel-PCA-based method to detect anomalies in chemical sensors. We use temporal signals produced by chemical sensors to form vectors on which we perform Principal Component Analysis (PCA). We estimate the kernel-covariance matrix of the sensor data and compute the eigenvector corresponding to the largest eigenvalue of the covariance matrix. Anomalies can be detected by comparing the difference between the actual sensor data and the data reconstructed from the dominant eigenvector. In this paper, we introduce a new multiplication-free kernel, related to the l1-norm, for the anomaly detection task. The l1-kernel PCA is not only computationally efficient but also energy-efficient because it does not require any actual multiplications during the kernel covariance matrix computation. Our experimental results show that our kernel-PCA method achieves a higher area under the curve (AUC) score (0.7483) than the baseline regular PCA method (0.7366).
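The pipeline in this abstract is explicit enough to sketch end to end. For the multiplication-free kernel I assume the operator used in related work by the same group, a ⊕ b = sign(a)sign(b)(|a| + |b|), as a stand-in for the paper's exact definition; the sign product reduces to a sign-bit comparison in hardware.

```python
# End-to-end sketch. Assumed multiplication-free operator (from related work,
# not necessarily the paper's exact kernel): a (+) b = sign(a)sign(b)(|a|+|b|).
# The sign product is a sign-bit comparison, so no true multiplications occur.
import numpy as np

def mf_covariance(X):
    """X: (n_windows, d) mean-removed signal windows -> (d, d) kernel covariance."""
    sa = np.sign(X)[:, :, None] * np.sign(X)[:, None, :]    # sign agreement
    mag = np.abs(X)[:, :, None] + np.abs(X)[:, None, :]     # l1-style magnitude
    return (sa * mag).mean(axis=0)

def anomaly_score(X_train, x):
    mu = X_train.mean(axis=0)
    C = mf_covariance(X_train - mu)
    _, V = np.linalg.eigh(C)
    u = V[:, -1]                          # eigenvector of largest eigenvalue
    xc = x - mu
    recon = (xc @ u) * u                  # reconstruction from dominant component
    return np.linalg.norm(xc - recon)     # large residual => anomaly
```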
Low-cost particulate matter sensors are transforming air quality monitoring because of their lower cost and greater mobility compared to reference monitors. Calibration of these low-cost sensors requires training data from co-deployed reference monitors. Machine-learning-based calibration gives better performance than conventional techniques, but requires a large amount of training data from the sensor to be calibrated while co-deployed with a reference monitor. In this work, we propose novel transfer learning methods for quick calibration of sensors with minimal co-deployment with reference monitors. Transfer learning utilizes a large amount of data from other sensors along with a limited amount of data from the target sensor. Our extensive experimentation finds the proposed Model-Agnostic Meta-Learning (MAML) based transfer learning method to be the most effective over other competitive baselines.
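A compact sketch of the MAML-style transfer recipe on linear calibration models, using a first-order approximation of the meta-gradient for brevity (the paper's model class and hyperparameters are not given in the abstract): meta-train an initialization across already-calibrated source sensors, then adapt it with the few co-deployed target samples.

```python
# First-order MAML-style sketch on linear calibration models (an assumed
# simplification of the paper's setup): learn an initialization theta that
# adapts to a new sensor from very little co-deployment data.
import numpy as np

def meta_train(source_tasks, inner_lr=0.01, meta_lr=0.1, steps=200):
    """source_tasks: list of (X, y) pairs from already-calibrated sensors."""
    d = source_tasks[0][0].shape[1]
    theta = np.zeros(d)
    for _ in range(steps):
        grads = []
        for X, y in source_tasks:
            # Inner step: adapt to the task from the shared initialization.
            w = theta - inner_lr * 2 * X.T @ (X @ theta - y) / len(y)
            # First-order outer gradient: gradient at the adapted weights.
            grads.append(2 * X.T @ (X @ w - y) / len(y))
        theta -= meta_lr * np.mean(grads, axis=0)
    return theta                       # initialization that adapts quickly

def adapt(theta, X_few, y_few, inner_lr=0.01, steps=20):
    w = theta.copy()
    for _ in range(steps):
        w -= inner_lr * 2 * X_few.T @ (X_few @ w - y_few) / len(y_few)
    return w                           # calibration from minimal co-deployment
```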
Scaling tactile sensing for robust whole-body manipulation is a significant challenge, often limited by wiring complexity, data throughput, and system reliability. This paper presents a complete architecture designed to overcome these barriers. Our approach pairs open-source, fabric-based sensors with custom readout electronics that reduce signal crosstalk to less than 3.3% through hardware-based mitigation. Critically, we introduce a novel, daisy-chained SPI bus topology that avoids the practical limitations of common wireless protocols and the prohibitive wiring complexity of USB hub-based systems. This architecture streams synchronized data from over 8,000 taxels across 1 square meter of sensing area at update rates exceeding 50 FPS, confirming its suitability for real-time control. We validate the system's efficacy in a whole-body grasping task where, without feedback, the robot's open-loop trajectory results in an uncontrolled application of force that slowly crushes a deformable cardboard box. With real-time tactile feedback, the robot transforms this motion into a gentle, stable grasp, successfully manipulating the object without causing structural damage. This work provides a practical architecture for scaling tactile sensing to robust whole-body manipulation.
Drift is a significant issue that undermines the reliability of gas sensors. This paper introduces a probabilistic model to distinguish between environmental variation and instrumental drift, using low-cost non-dispersive infrared (NDIR) CO2 sensors as a case study. Data from a long-term field experiment is analyzed to evaluate both sensor performance and environmental changes over time. Our approach employs importance sampling to isolate instrumental drift from environmental variation, providing a more accurate assessment of sensor performance. The results show that failing to account for environmental variation can significantly affect the evaluation of sensor drift, leading to improper calibration processes.
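A minimal sketch of the importance-sampling idea: reweight the later-period samples so their environmental covariate matches the reference period's distribution, then read off the sensor bias under matched conditions. The Gaussian density-ratio weights over a single covariate are an illustrative assumption, not the paper's model.

```python
# Sketch: self-normalized importance sampling to estimate sensor drift at
# matched environmental conditions. Gaussian density-ratio weights over one
# covariate (e.g., humidity) are an illustrative assumption.
import numpy as np
from scipy.stats import norm

def drift_at_matched_conditions(errors, env, ref_env):
    """errors, env: sensor error and environmental covariate in the later
    period; ref_env: covariate samples from the reference period."""
    p_ref = norm(ref_env.mean(), ref_env.std())     # reference conditions
    p_now = norm(env.mean(), env.std())             # current conditions
    w = p_ref.pdf(env) / p_now.pdf(env)             # importance weights
    w /= w.sum()
    # Weighted mean error: instrumental drift with the environmental
    # shift reweighted away; compare to errors.mean() to see the difference.
    return np.sum(w * errors)
```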
Near-sensor diagnosis has become increasingly prevalent in industry. This study proposes a lightweight model named LD-RPMNet that integrates Transformers and Convolutional Neural Networks, leveraging both local and global feature extraction to optimize computational efficiency for a practical railway application. LD-RPMNet introduces a Multi-scale Depthwise Separable Convolution (MDSC) module, which decomposes cross-channel convolutions into pointwise and depthwise convolutions while employing multi-scale kernels to enhance feature extraction. Meanwhile, a Broadcast Self-Attention (BSA) mechanism is incorporated to simplify complex matrix multiplications and improve computational efficiency. Experimental results on sound signals collected during the operation of railway point machines demonstrate that the optimized model reduces parameter count and computational complexity by 50% while improving diagnostic accuracy by nearly 3%, ultimately achieving an accuracy of 98.86%. This shows the feasibility of near-sensor fault diagnosis in railway point machines.
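The MDSC decomposition is concrete enough to sketch: parallel depthwise convolutions at several kernel sizes handle the per-channel filtering, and a pointwise 1x1 convolution handles the cross-channel mixing. The channel counts and the 1-D layout for sound signals below are my assumptions, not the paper's exact configuration.

```python
# Sketch of a multi-scale depthwise separable convolution block in the spirit
# of the MDSC module: parallel depthwise convs (groups=channels) at several
# kernel sizes, then a pointwise 1x1 conv for cross-channel mixing.
import torch
import torch.nn as nn

class MDSC(nn.Module):
    def __init__(self, channels, out_channels, kernel_sizes=(3, 5, 7)):
        super().__init__()
        # One depthwise conv per scale: groups=channels => per-channel filters.
        self.depthwise = nn.ModuleList(
            nn.Conv1d(channels, channels, k, padding=k // 2, groups=channels)
            for k in kernel_sizes)
        # Pointwise conv mixes channels across all scales (cross-channel part).
        self.pointwise = nn.Conv1d(channels * len(kernel_sizes), out_channels, 1)
        self.act = nn.ReLU()

    def forward(self, x):                          # x: (batch, channels, time)
        multi = torch.cat([dw(x) for dw in self.depthwise], dim=1)
        return self.act(self.pointwise(multi))

x = torch.randn(2, 16, 400)                        # e.g. 16-channel sound features
print(MDSC(16, 32)(x).shape)                       # torch.Size([2, 32, 400])
```

Compared with a standard convolution of the same receptive fields, this factorization is what yields the parameter and FLOP savings the abstract reports.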