Overweight and obesity represent a rapidly growing threat to the health of populations in an increasing number of countries. Indeed they are now so common that they are replacing more traditional problems such as undernutrition and infectious diseases as the most significant causes of ill-health. Obesity comorbidities include coronary heart disease, hypertension and stroke, certain types of cancer, non-insulin-dependent diabetes mellitus, gallbladder disease, dyslipidaemia, osteoarthritis and gout, and pulmonary diseases, including sleep apnoea. In addition, the obese suffer from social bias, prejudice and discrimination, on the part not only of the general public but also of health professionals, and this may make them reluctant to seek medical assistance. WHO therefore convened a Consultation on obesity to review current epidemiological information, contributing factors and associated consequences, and this report presents its conclusions and recommendations. In particular, the Consultation considered the system for classifying overweight and obesity based on the body mass index, and concluded that a coherent system is now available and should be adopted internationally. The Consultation also concluded that the fundamental causes of the obesity epidemic are sedentary lifestyles and high-fat energy-dense diets, both resulting from the profound changes taking place in society and the behavioural patterns of communities as a consequence of increased urbanization and industrialization and the disappearance of traditional lifestyles. A reduction in fat intake to around 20-25% of energy is necessary to minimize energy imbalance and weight gain in sedentary individuals. While there is strong evidence that certain genes have an influence on body mass and body fat, most do not qualify as necessary genes, i.e. genes that cause obesity whenever two copies of the defective allele are present; it is likely to be many years before the results of genetic research can be applied to the problem. Methods for the treatment of obesity are described, including dietary management, physical activity and exercise, and antiobesity drugs, with gastrointestinal surgery being reserved for extreme cases.
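For reference, the BMI classification at the core of that system is simple to compute; the sketch below uses the standard WHO cut-off points (overweight at BMI ≥ 25, obesity at BMI ≥ 30), omitting the finer grading (pre-obese, obese classes I-III) that the full report defines.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by height in metres squared."""
    return weight_kg / height_m ** 2

def who_category(bmi_value: float) -> str:
    """Coarse WHO classification by BMI cut-off points."""
    if bmi_value < 18.5:
        return "underweight"
    if bmi_value < 25.0:
        return "normal range"
    if bmi_value < 30.0:
        return "overweight (pre-obese)"
    return "obese"

print(who_category(bmi(85.0, 1.75)))  # BMI ~ 27.8 -> "overweight (pre-obese)"
```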
Bicuspid valves with crescent-shaped leaflets are found in lymphatic vessels and veins, where their primary function is to prevent reflux and ensure unidirectional flow toward the heart. These valves are passive, and their functionality emerges spontaneously from a complex interplay between the properties of the valve leaflets and the flow patterns developing within the vessel sinus region surrounding the valve. The main function of the valves is to limit retrograde flow, or reflux, but the optimal valve structure has not been well characterized. Here we investigate numerically how the length of the leaflets affects the valve's efficiency in preventing reflux. The valves are subjected to backward flow, akin to that imposed by gravity. We report the flux through the valve orifice as a function of key parameters: valve length, leaflet length, and leaflet rigidity. We monitor the transition in the flow regime - from reflux to complete flow blockage - by varying only the leaflet length. The transition threshold is found to depend strongly on the valve shape and stiffness. We capture these control parameters numerically to evaluate the ability of the valve to close and prevent reflux.
Cyber attacks continue to be a cause of concern despite advances in cyber defense techniques. Although cyber attacks cannot be fully prevented, standard decision-making frameworks typically focus on how to prevent them from succeeding, without considering the cost of cleaning up the damage incurred by successful attacks. This motivates us to investigate a new resource allocation problem formulated in this paper: the defender must decide how to split its investment between preventive defenses, which aim to harden nodes against attacks, and reactive defenses, which aim to quickly clean up compromised nodes. This raises a challenge imposed by the uncertainty in the observation, or sensor signal, of whether a node is truly compromised; this uncertainty is real because attack detectors are not perfect. We investigate how the quality of sensor signals impacts the defender's strategic investment in the two types of defense, and ultimately the level of security that can be achieved. In particular, we show that the optimal investment in preventive resources increases, and thus reactive resource investment decreases, with higher sensor quality.
Recent work has shown that the out-of-order and speculative execution mechanisms used to increase performance in the majority of processors expose those processors to critical attacks. These attacks, called Meltdown and Spectre, exploit the side effects of performance-enhancing features in modern microprocessors to expose secret data through side channels in the microarchitecture. The well-known implementations of these attacks exploit cache-based side channels, since these are the least noisy channels through which to exfiltrate data. While some software patches attempt to mitigate these attacks, they are ad hoc, only address the side effects of the vulnerabilities, and may impose a performance overhead of up to 30%. In this paper, we present a microarchitecture-based solution for Meltdown and Spectre that addresses the vulnerabilities exploited by the attacks. Our solution prevents flushed instructions from exposing data to the cache. Our approach can also be extended to other memory structures in the microarchitecture, thereby preventing variants of the attacks that exploit these memory structures. We further identify two new variant attacks based on exploiting the side effects of speculative execution.
Dairy farming has great economic value in Brazil; however, during production, diseases such as mastitis can occur in the animals, which can reduce productivity and, consequently, economic profitability. When mastitis is present in an animal, it can cause physical and chemical changes in the milk, affecting its quality and market value and also compromising the health of the animal. MastiteApp is a tool to help producers prevent mastitis in their herds by checking the temperature taken from the four teats of each animal. To perform the analysis, the temperature of all of the animal's teats must be measured and, if there is a change in temperature, the system displays a message informing the producer of the possible presence of subclinical mastitis in the animal. The application has proven to be efficient in alerting producers to the possible presence of subclinical mastitis in the first few days of manifestation, allowing treatment to begin and preventing the disease from worsening.
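A minimal sketch of the kind of per-animal check described above follows; the deviation threshold and function names are illustrative assumptions, since the exact decision rule is not stated.

```python
def mastitis_alert(teat_temps_c: list[float], max_deviation_c: float = 1.0) -> bool:
    """Flag possible subclinical mastitis when any of the four teat
    temperatures deviates from the animal's own mean by more than a
    threshold (the 1.0 degC default is an assumed, illustrative value)."""
    if len(teat_temps_c) != 4:
        raise ValueError("expected temperatures for all four teats")
    mean_temp = sum(teat_temps_c) / 4
    return any(abs(t - mean_temp) > max_deviation_c for t in teat_temps_c)

print(mastitis_alert([38.1, 38.0, 39.6, 38.2]))  # True: one teat runs hot
```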
Diffusion models are powerful generative models but often generate sensitive data that are unwanted by users, mainly because the unlabeled training data frequently contain such sensitive data. Since labeling all sensitive data in the large-scale unlabeled training data is impractical, we address this problem by using a small amount of labeled sensitive data. In this paper, we propose positive-unlabeled diffusion models, which prevent the generation of sensitive data using unlabeled and labeled sensitive data. Our approach can approximate the evidence lower bound (ELBO) for normal (negative) data using only unlabeled and sensitive (positive) data. Therefore, even without labeled normal data, we can maximize the ELBO for normal data and minimize it for labeled sensitive data, ensuring the generation of only normal data. Through experiments across various datasets and settings, we demonstrate that our approach can prevent the generation of sensitive images without compromising image quality.
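The identity behind such estimators is that unlabeled data are a mixture of normal and sensitive data, $p_U = (1-\pi)\,p_N + \pi\,p_P$ for class prior $\pi$, so any expected loss over normal data can be rewritten in terms of unlabeled and positive expectations. A minimal sketch is below; the non-negative clamp is a standard PU-learning stabilizer and an assumption here, not necessarily the paper's exact estimator.

```python
import torch

def pu_normal_loss(loss_unlabeled: torch.Tensor,
                   loss_positive: torch.Tensor,
                   prior: float) -> torch.Tensor:
    """Estimate the mean loss (e.g., negative ELBO terms) on *normal* data
    from an unlabeled batch and a labeled-sensitive (positive) batch via
    E_N[l] = (E_U[l] - prior * E_P[l]) / (1 - prior)."""
    est = (loss_unlabeled.mean() - prior * loss_positive.mean()) / (1.0 - prior)
    return torch.clamp(est, min=0.0)  # non-negative correction for stability
```

Minimizing this surrogate stands in for maximizing the normal-data ELBO, while the labeled sensitive batch can be penalized directly.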
A common phenomenon limiting representation quality in Self-Supervised Learning (SSL) is dimensional collapse (also known as rank degeneration), where the learned representations are mapped to a low-dimensional subspace of the representation space. State-of-the-art SSL methods have been shown to suffer from dimensional collapse and to fall short of maintaining full rank. Recent approaches to prevent this problem have proposed using contrastive losses, regularization techniques, or architectural tricks. We propose WERank, a new regularizer on the weight parameters of the network that prevents rank degeneration at different layers of the network. We provide empirical evidence and mathematical justification to demonstrate the effectiveness of the proposed regularization method in preventing dimensional collapse. We verify the impact of WERank on graph SSL, where dimensional collapse is more pronounced due to the lack of proper data augmentation. We empirically demonstrate that WERank is effective in helping BYOL achieve higher rank during SSL pre-training and, consequently, higher downstream accuracy during evaluation probing. Ablation studies and experimental analysis shed light on the underlying mechanism.
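The abstract does not spell out WERank's functional form, so the snippet below shows one plausible weight-level anti-collapse regularizer for intuition: pushing each linear layer's Gram matrix toward the identity keeps the weights, and hence the features they produce, close to full rank.

```python
import torch

def weight_rank_penalty(model: torch.nn.Module, scale: float = 1e-3) -> torch.Tensor:
    """Illustrative (hypothetical) regularizer: penalize each linear
    layer's distance from orthogonality, discouraging rank-deficient weights."""
    terms = []
    for module in model.modules():
        if isinstance(module, torch.nn.Linear):
            w = module.weight                                   # (out, in)
            gram = w.T @ w                                      # (in, in)
            eye = torch.eye(gram.shape[0], device=w.device, dtype=w.dtype)
            terms.append((gram - eye).pow(2).sum())
    return scale * sum(terms)

# total_loss = ssl_loss + weight_rank_penalty(online_encoder)
```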
Current methods to prevent crypto asset fraud are based on the analysis of transaction graphs within blockchain networks. While effective for identifying transaction patterns indicative of fraud, this analysis does not capture the semantics of transactions and is constrained to blockchain data. Consequently, preventive methods based on transaction graphs are inherently limited. In response to these limitations, we propose the Kosmosis approach, which aims to incrementally construct a knowledge graph as new blockchain and social media data become available. During construction, it aims to extract the semantics of transactions and connect blockchain addresses to their real-world entities by fusing blockchain and social media data in a knowledge graph. This enables novel preventive methods against rug pulls as a form of crypto asset fraud. To demonstrate the effectiveness and practical applicability of the Kosmosis approach, we examine a series of real-world rug pulls from 2021. Through these cases, we illustrate how Kosmosis can aid in identifying and preventing such fraudulent activities by leveraging the insights from the constructed knowledge graph.
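To make the fusion idea concrete, here is a toy knowledge-graph fragment (node and relation names are invented for illustration; Kosmosis's actual schema is not given in the abstract):

```python
import networkx as nx

kg = nx.MultiDiGraph()
kg.add_node("0xabc...", type="address")          # blockchain side
kg.add_node("@promoter", type="social_account")  # social media side
kg.add_node("TokenX", type="token")
kg.add_edge("0xabc...", "TokenX", relation="deployed")
kg.add_edge("@promoter", "TokenX", relation="promotes")
kg.add_edge("@promoter", "0xabc...", relation="controls")  # account posted this address

# A promoter who controls the deployer address is a classic rug-pull signal.
deployers = {u for u, v, d in kg.edges(data=True) if d["relation"] == "deployed"}
controlled = {v for u, v, d in kg.edges(data=True) if d["relation"] == "controls"}
print(deployers & controlled)  # {'0xabc...'}
```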
Wildfires pose a serious threat to environments around the world. The global wildfire season length has increased by 19%, and severe wildfires have besieged nations around the world. Every year, forests are burned by wildfires, causing vast amounts of carbon dioxide to be released into the atmosphere and contributing to climate change. There is a need for a system that prevents, detects, and suppresses wildfires. The AI-based Wildfire Prevention, Detection and Suppression System (WPDSS) is a novel, fully automated, end-to-end, AI-based solution to effectively predict hotspots and detect wildfires, deploying drones to spray fire retardant, thereby preventing and suppressing wildfires. WPDSS consists of four steps. 1. Preprocessing: WPDSS loads real-time satellite data from NASA and meteorological data from NOAA on vegetation, temperature, precipitation, wind, soil moisture, and land cover for prevention. For detection, it loads real-time data on land cover, humidity, temperature, vegetation, burned area index, ozone, and CO2. It uses masking to eliminate areas that cannot be hotspots or wildfires, such as water bodies and areas of rainfall. 2. Learning: The AI model consists of a random forest classifier.
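The masking step lends itself to a short sketch; the layer names and thresholds below are illustrative assumptions, not WPDSS's actual values.

```python
import numpy as np

land_cover = np.array([[1, 0], [1, 1]])           # 0 = water, 1 = land
rainfall_mm = np.array([[0.0, 0.0], [12.0, 0.0]])
temperature_c = np.array([[41.0, 39.0], [40.0, 25.0]])

candidate = (land_cover == 1) & (rainfall_mm < 5.0)  # mask out water and recent rain
hotspot = candidate & (temperature_c > 35.0)         # toy hotspot criterion
print(hotspot)  # only the top-left cell survives all masks
```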
The "ratchet principle", which states that non-equilibrium systems violating parity symmetry generically exhibit steady-state currents, is one of the few generic results outside thermal equilibrium. We study exceptions to this principle observed in active and passive systems with spatially varying fluctuations sources. For dilute systems, we show that a hidden time-reversal symmetry prevents the emergence of ratchet currents. At higher densities, pairwise forces break this symmetry but an emergent conservation law for the momentum field may nevertheless prevent steady currents. We show how the presence of this conservation law can be tested analytically and characterize the onset of ratchet currents in its absence. Our results show that the ratchet principle should be amended to preclude parity symmetry, time-reversal symmetry, and bulk momentum conservation.
Monitoring unexpected health events and taking actionable measures to avert them beforehand is central to maintaining health and preventing disease. Therefore, a tool capable of predicting adverse health events and offering users actionable feedback about how to change their diet, exercise, and medication to prevent abnormal health events could have significant societal impact. Counterfactual explanations can provide insights into why a model made a particular prediction by generating hypothetical instances that are similar to the original input but lead to a different prediction outcome. Counterfactuals can thus be viewed as a means to design AI-driven health interventions that not only predict but also prevent adverse health outcomes such as blood glucose spikes, diabetes, and heart disease. In this paper, we design \textit{\textbf{ExAct}}, a novel model-agnostic framework for generating counterfactual explanations for chronic disease prevention and management. Leveraging insights from adversarial learning, ExAct characterizes the decision boundary for high-dimensional data and performs a grid search to generate actionable interventions.
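A stripped-down version of such a boundary-probing search is sketched below; the grid over a few actionable features and the plain distance ranking are simplifications standing in for ExAct's actual procedure.

```python
import itertools
import numpy as np

def counterfactuals(x: np.ndarray, predict, feature_grids: dict, target: int = 0):
    """Grid-search candidates near x that flip the model's prediction.
    `feature_grids` maps feature index -> actionable candidate values
    (e.g., carb intake, exercise minutes); a simplified stand-in for ExAct."""
    idxs = list(feature_grids)
    found = []
    for combo in itertools.product(*feature_grids.values()):
        cand = x.copy()
        cand[idxs] = combo
        if predict(cand) == target:
            found.append((np.linalg.norm(cand - x), cand))
    return [c for _, c in sorted(found, key=lambda t: t[0])]  # nearest first
```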
Empirical studies show that the preference for prevention versus treatment remains a subject of debate. We build a paradigmatic model combining a utility game for the individual-level dilemma of prevention versus treatment with a compartmental model for the epidemic dynamics. We assume that individuals seek to maximize the utility of voluntary prevention as the epidemic reaches an endemic level alleviated by prevention and treatment. We thus obtain an expression for the asymptotic prevention coverage. Notably, we find that, if the relative cost of prevention versus treatment is sufficiently low, epidemics may be averted through the use of prevention alone.
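Schematically (the paper's exact formulation may differ), with prevention cost $c_P$, treatment cost $c_T$, and $\pi(p)$ the infection risk of an unprotected individual at prevention coverage $p$, voluntary prevention settles at the indifference point
$$ c_P = \pi(p^*)\, c_T, $$
and the epidemic is averted by prevention alone once $p^*$ exceeds the classical herd-immunity coverage $1 - 1/R_0$, which happens when the cost ratio $c_P/c_T$ is sufficiently low.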
Data races are a notorious problem in parallel programming. There has been great research interest in type systems that statically prevent data races. Despite the progress in the safety and usability of these systems, many existing approaches enforce strict anti-aliasing principles to prevent data races. Their adoption is often intrusive, in the sense that it invalidates common programming patterns and requires paradigm shifts. We propose Capture Separation Calculus (System CSC), a calculus based on Capture Calculus (System CC<:box), that achieves static data race freedom while being non-intrusive: it allows aliasing in general to permit common programming patterns, but tracks aliases and controls them when necessary to prevent data races. We study the formal properties of System CSC by establishing its type safety and data race freedom. Notably, we establish the data race freedom property by proving the confluence of its reduction semantics. To validate the usability of the calculus, we implement it as an extension to the Scala 3 compiler and use it to type-check the examples in the paper.
This paper presents PREVENT, an approach for predicting and localizing failures in distributed enterprise applications by combining unsupervised techniques. Software failures can have dramatic consequences in production, and thus predicting and localizing failures is the essential step to activate healing measures that limit their disruptive consequences. At the state of the art, many failures can be predicted from anomalous combinations of system metrics with respect to either rules provided by domain experts or supervised learning models. However, both these approaches limit the effectiveness of current techniques to well-understood types of failures that can be either captured with predefined rules or observed while training supervised models. PREVENT integrates the core ingredients of unsupervised approaches into a novel approach to predict failures and localize failing resources, without requiring either predefined rules or training with observed failures. The results of experimenting with PREVENT on a commercially-compliant distributed cloud system indicate that PREVENT provides more stable and reliable predictions, earlier than or comparably to supervised learning approaches.
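For flavor, the sort of unsupervised ingredient such a pipeline can build on is shown below; the choice of IsolationForest over per-resource metrics is purely illustrative, not PREVENT's actual detector.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, size=(5000, 8))      # metrics during normal operation
detector = IsolationForest(random_state=0).fit(baseline)

window = rng.normal(0.0, 1.0, size=(10, 8))          # one row of metrics per resource
window[3] += 6.0                                     # resource 3 drifts anomalously
scores = detector.decision_function(window)          # lower = more anomalous
print("suspect resource:", int(np.argmin(scores)))   # localizes the failing resource
```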
This paper investigates the global existence of solutions to Keller-Segel systems with sub-logistic sources using the test function method. Prior work demonstrated that sub-logistic sources $f(u)=ru -\mu\frac{u^2}{\ln^p(u+e)}$ with $p\in(0,1)$ can prevent blow-up of solutions for the 2D minimal Keller-Segel chemotaxis model. Our study extends this result by showing that when $p=1$, sub-logistic sources can still prevent the occurrence of finite-time blow-up solutions. Additionally, we provide a concise proof of a known result that the equi-integrability of $\left\{ \int_\Omega u^{\frac{n}{2}}(\cdot,t)\,dx \right\}_{t\in (0,T_{\rm max})}$ can rule out blow-up.
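For reference, the minimal chemotaxis system in question reads (in its parabolic-elliptic form; the parabolic-parabolic variant replaces the second equation by $v_t = \Delta v - v + u$):
$$ u_t = \Delta u - \nabla \cdot (u \nabla v) + f(u), \qquad 0 = \Delta v - v + u, \qquad x \in \Omega,\ t > 0, $$
with the sub-logistic source $f(u) = ru - \mu\,u^2/\ln^p(u+e)$ defined above.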
Distributed deep neural network training necessitates efficient GPU collective communications, which are inherently susceptible to deadlocks. GPU collective deadlocks arise easily in distributed deep learning applications when multiple collectives circularly wait for each other; they pose a significant challenge to the correct functioning and efficiency of distributed deep learning, and no general, effective solutions are currently available. Only in specific scenarios can ad-hoc methods, which make an application invoke collectives in a consistent order across GPUs, be used to prevent circular collective dependencies and deadlocks. This paper presents DFCCL, a novel GPU collective communication library that provides a comprehensive approach to GPU collective deadlock prevention while maintaining high performance. DFCCL achieves preemption for GPU collectives at the bottom library level, effectively preventing deadlocks even if applications cause circular collective dependencies. DFCCL ensures high performance with its execution and scheduling methods for collectives. Experiments show that DFCCL effectively prevents GPU collective deadlocks in various situations.
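The circular dependency is easy to state concretely; the sketch below uses torch.distributed for illustration (DFCCL's own API is not shown in the abstract) and hangs because the two ranks issue the same two collectives in opposite orders. Process-group initialization is elided.

```python
import torch
import torch.distributed as dist

def step(rank: int, a: torch.Tensor, b: torch.Tensor) -> None:
    """Deadlock-prone pattern: each rank blocks in one collective while
    its peer is blocked in the other, so neither can ever complete."""
    if rank == 0:
        dist.all_reduce(a)  # collective A first on rank 0
        dist.all_reduce(b)
    else:
        dist.all_reduce(b)  # opposite order on rank 1
        dist.all_reduce(a)

# The ad-hoc fix is a globally consistent invocation order on every rank;
# DFCCL's preemption removes the hazard even when orders diverge.
```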
In this paper, we present an incremental domain adaptation technique to prevent catastrophic forgetting for an end-to-end automatic speech recognition (ASR) model. Conventional approaches require extra parameters of the same size as the model for optimization, and it is difficult to apply these approaches to end-to-end ASR models because they have a huge number of parameters. To solve this problem, we first investigate which parts of end-to-end ASR models contribute to high accuracy in the target domain while preventing catastrophic forgetting. We conduct experiments on incremental domain adaptation from the LibriSpeech dataset to the AMI meeting corpus with two popular end-to-end ASR models and find that adapting only the linear layers of their encoders can prevent catastrophic forgetting. Then, on the basis of this finding, we develop an element-wise parameter selection focused on specific layers to further reduce the number of fine-tuned parameters. Experimental results show that our approach consistently prevents catastrophic forgetting compared to parameter selection from the whole model.
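In PyTorch-style pseudocode, the finding amounts to freezing everything except the encoder's linear layers before fine-tuning on the new domain (the `model.encoder` attribute is a hypothetical stand-in for however the ASR model exposes its encoder):

```python
import torch

def prepare_for_adaptation(model: torch.nn.Module) -> None:
    """Freeze all parameters, then unfreeze only the linear layers
    inside the encoder; the rest of the model keeps its source-domain
    (e.g., LibriSpeech) behavior while the target domain is learned."""
    for p in model.parameters():
        p.requires_grad = False
    for module in model.encoder.modules():   # assumes an `encoder` submodule
        if isinstance(module, torch.nn.Linear):
            for p in module.parameters():
                p.requires_grad = True
```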
The paper proposes a Blockchain (BC) system to prevent counterfeiting in the health insurance sector. The results show the system's strength in terms of achieving data integrity and privacy. Moreover, the results show that the consensus algorithm can effectively reduce the total validation time for the proposed system.
Road vehicle safety systems can be broadly classified into the two categories of passive and active systems. The aim of passive safety systems is to reduce the risk of injury to the occupants of the vehicle during and after an accident such as a crash or rollover; passive safety systems include safety restraints, design for crashworthiness, seat belts, and air bags. In contrast to passive systems, the aim of active safety is to prevent an accident from occurring in the first place; as such, active systems may also be called preventive systems. The focus here is on preventive and active safety systems. The current state of the art in some key preventive and active safety systems is presented in this paper, and the various techniques used are explained briefly. In some cases, the presentation is complemented with results obtained in the author's research group. A road map of expected future developments in the area of preventive and active safety applications is also presented.
We examine here the effects of recurrent vaccination and waning immunity on the establishment of an endemic equilibrium in a population. An individual-based model that incorporates memory effects for the transmission rate during infection and subsequent immunity is introduced, considering stochasticity at the individual level. By letting the population size go to infinity, we derive a set of equations describing the large-scale behavior of the epidemic. The analysis of the model's equilibria reveals a criterion for the existence of an endemic equilibrium, which depends on the rate of immunity loss and the distribution of time between booster doses. The outcome of a vaccination policy in this context is influenced by the efficiency of the vaccine in blocking transmission and the distribution pattern of booster doses within the population. Strategies with evenly spaced booster shots at the individual level prove to be more effective in preventing disease spread than irregularly spaced boosters, as longer intervals without vaccination increase susceptibility and facilitate more efficient disease transmission. We provide an expression for the critical fraction of the population that must be vaccinated to prevent the establishment of an endemic equilibrium.
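For context, in the simplest homogeneous-mixing models this critical fraction takes the familiar form
$$ p_c = \frac{1}{e}\left(1 - \frac{1}{R_0}\right), $$
where $R_0$ is the basic reproduction number and $e$ the vaccine's efficiency in blocking transmission; the expression derived in the paper plays the analogous role in the present model with waning immunity and booster-dose schedules.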