Found 20 results
This note studies optimal experimental design under partial compliance when experimenters can screen participants prior to randomization. Theoretical results show that retaining all compliers and screening out all non-compliers achieves three complementary aims: (i) the estimand remains the Local Average Treatment Effect identified by the standard 2SLS estimator without screening; (ii) median bias is minimized; and (iii) statistical power is maximized. In practice, complier status is unobserved. We therefore discuss feasible screening strategies and propose a simple test for screening efficacy. Future work will conduct an experiment to demonstrate the feasibility and advantages of the optimal screening design.
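For intuition on claim (i): with a single binary instrument, the standard 2SLS estimator reduces to the Wald ratio, the intent-to-treat effect on the outcome divided by the intent-to-treat effect on take-up. A minimal sketch (toy data and variable names are illustrative, not from the paper):

```python
def wald_late(Y, D, Z):
    """2SLS with one binary instrument reduces to the Wald ratio:
    ITT effect on the outcome Y divided by ITT effect on take-up D."""
    n1 = sum(Z)
    n0 = len(Z) - n1
    y1 = sum(y for y, z in zip(Y, Z) if z) / n1
    y0 = sum(y for y, z in zip(Y, Z) if not z) / n0
    d1 = sum(d for d, z in zip(D, Z) if z) / n1
    d0 = sum(d for d, z in zip(D, Z) if not z) / n0
    return (y1 - y0) / (d1 - d0)

Z = [1, 1, 1, 1, 0, 0, 0, 0]   # random assignment
D = [1, 1, 0, 0, 0, 0, 0, 0]   # only compliers take treatment when assigned
Y = [3, 3, 1, 1, 1, 1, 1, 1]   # treated compliers gain 2
late = wald_late(Y, D, Z)      # -> 2.0
```

Screening out non-compliers raises the take-up contrast in the denominator, which is the mechanism behind the variance and power gains the note describes.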
I study multidimensional sequential screening. A monopolist contracts with a buyer who privately observes information about the distribution of their eventual valuations for multiple goods. After initial private information is reported and the contract is signed, the buyer learns and reports realized valuations. In these settings, the monopolist frontloads surplus extraction: Any information rents given to the buyer to elicit their true valuations can be extracted in expectation before those valuations are drawn, transforming the multidimensional screening problem by distorting buyer information rents compared to static screening. If the buyer's distributions over valuations are commonly FOSD ordered, regular for each good, and satisfy invariant dependencies (valuations can be dependent across goods, but how valuations are coupled cannot vary), the optimal mechanism coincides with independently offering the optimal sequential screening mechanism for each good. This rationalizes membership payments followed by separate sales schemes commonly used in practice.
Screening mechanisms are a natural method for suppressing long-range forces in scalar-tensor theories, as they tie the strength of the force to the local background density. Focusing on Brans-Dicke theories, those including a non-minimal coupling between a scalar degree of freedom and the Ricci scalar, we study the origin of these screening mechanisms from a field theory perspective, considering the influence of the Standard Model on the mechanisms. We further consider the role of scale symmetries in screening, demonstrating that only certain sectors, those obtaining their mass via the Higgs mechanism, contribute to screening the fifth forces. This has significant implications for baryons, which obtain most of their mass from gluon binding energy. Given that the Planck mass is related to the vacuum expectation value of the non-minimally coupled field, we find an extensive region of parameter space where screening mechanisms create a spatially dependent gravitational constant. We say that the field over-screens when this effect is more significant than the fifth forces suppressed by the screening mechanisms, as we illustrate for the chameleon and symmetron models.
Velopharyngeal dysfunction (VPD) is characterized by inadequate velopharyngeal closure during speech and often causes hypernasality and reduced intelligibility. Although speech-based machine learning models can perform well under standardized clinical recording conditions, their performance often drops in real-world settings because of domain shift caused by differences in devices, channels, noise, and room acoustics. To improve robustness, we propose a two-stage framework for VPD screening. First, a nasality-focused speech representation is learned by supervised contrastive pre-training on an auxiliary corpus with phoneme alignments, using oral-context versus nasal-context supervision. Second, the encoder is frozen and used with lightweight classifiers on 0.5-second speech chunks, whose probabilities are aggregated to produce recording-level decisions with a fixed threshold. On an in-domain clinical cohort of 82 subjects, the proposed method achieved perfect recording-level screening performance (macro-F1 = 1.000, accuracy = 1.000). On a separate out-of-domain set of 131 heterogeneous public Internet recordings, large pretrained speech representations degraded substantially, while
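The second stage described above, per-chunk classifier probabilities aggregated into a recording-level decision at a fixed threshold, can be sketched as follows (function name and the simple mean-pooling rule are our own illustrative assumptions, not the paper's specification):

```python
def recording_decision(chunk_probs, threshold=0.5):
    """Aggregate per-chunk VPD probabilities into one recording-level call.

    chunk_probs: one classifier output in [0, 1] per 0.5-second speech chunk.
    Returns (decision, mean_probability); decision is 1 (screen positive)
    when the mean probability reaches the fixed threshold.
    """
    score = sum(chunk_probs) / len(chunk_probs)
    return int(score >= threshold), score

decision, score = recording_decision([0.9, 0.8, 0.4, 0.7])  # decision == 1
```

Pooling over many short chunks smooths out isolated misclassifications, which is one reason chunk-then-aggregate pipelines tend to be robust at the recording level.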
Artificial intelligence (AI)-enabled digital interventions, including Generative AI (GenAI) and Human-Centered AI (HCAI), are increasingly used to expand access to digital psychiatry and mental health care. This PRISMA-ScR scoping review maps the landscape of AI-driven mental health (mHealth) technologies across five critical phases: pre-treatment (screening/triage), treatment (therapeutic support), post-treatment (remote patient monitoring), clinical education, and population-level prevention. We synthesized 36 empirical studies implemented through early 2024, focusing on Large Language Models (LLMs), machine learning (ML) models, and autonomous conversational agents. Key use cases involve referral triage, empathic communication enhancement, and AI-assisted psychotherapy delivered via chatbots and voice agents. While benefits include reduced wait times and increased patient engagement, we address recurring challenges like algorithmic bias, data privacy, and human-AI collaboration barriers. By introducing a novel four-pillar framework, this review provides a comprehensive roadmap for AI-augmented mental health care, offering actionable insights for researchers, clinicians, and policymakers.
This paper treats the problem of screening for variables with high correlations in high-dimensional data in which there can be many fewer samples than variables. We focus on threshold-based correlation screening methods for three related applications: screening for variables with large correlations within a single treatment (auto-correlation screening); screening for variables with large cross-correlations over two treatments (cross-correlation screening); and screening for variables that have persistently large auto-correlations over two treatments (persistent-correlation screening). The novelty of correlation screening is that it identifies a small number of variables that are highly correlated with others, rather than estimating a large number of correlation parameters. Correlation screening suffers from a phase transition phenomenon: as the correlation threshold decreases, the number of discoveries increases abruptly. We obtain asymptotic expressions for the mean number of discoveries and the phase transition thresholds as a function of the number of samples, the number of variables, and the joint sample distribution. We also show that under a weak dependency condition the number of
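The basic threshold-based auto-correlation screen is simple to state: keep a variable if its absolute sample correlation with at least one other variable exceeds a threshold ρ. A minimal NumPy sketch under that reading (the function name and toy data are our own; the paper's asymptotic analysis is not reproduced here):

```python
import numpy as np

def correlation_screen(X, rho):
    """Return indices of variables whose absolute sample correlation with
    at least one other variable exceeds the threshold rho.

    X: (n, p) data matrix with n samples of p variables (n may be << p).
    """
    R = np.corrcoef(X, rowvar=False)   # p x p sample correlation matrix
    np.fill_diagonal(R, 0.0)           # ignore trivial self-correlation
    return np.where(np.abs(R).max(axis=0) > rho)[0]

rng = np.random.default_rng(0)
n, p = 20, 50                          # many fewer samples than variables
X = rng.standard_normal((n, p))
X[:, 1] = X[:, 0] + 0.01 * rng.standard_normal(n)  # plant one strong pair
hits = correlation_screen(X, rho=0.95)             # contains 0 and 1
```

Lowering `rho` toward the phase-transition threshold makes spurious pairs among the roughly p²/2 candidate pairs start to pass, which is the abrupt growth in discoveries the abstract describes.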
We introduce EmoLoom-2B, a lightweight and reproducible pipeline that turns small language models under 2B parameters into fast screening candidates for joint emotion classification and Valence-Arousal-Dominance prediction. To ensure protocol-faithful and fair evaluation, we unify data loading, training, and inference under a single JSON input-output contract and remove avoidable variance by adopting KV-off decoding as the default setting. We incorporate two orthogonal semantic regularizers: a VAD-preserving constraint that aligns generated text with target VAD triples, and a lightweight external appraisal classifier that provides training-time guidance on goal attainment, controllability, certainty, and fairness without injecting long rationales. To improve polarity sensitivity, we introduce Valence Flip augmentation based on mirrored emotional pairs. During supervised fine-tuning, we apply A/B mixture sampling with entropy-aware temperature scheduling to balance coverage and convergence. Using Qwen-1.8B-Chat as the base model, EmoLoom-2B achieves strong performance on GoEmotions and EmpatheticDialogues, and demonstrates robust cross-corpus generalization on DailyDialog. The propo
This paper presents a framework for incentivising colorectal cancer (CRC) screening programs from the perspective of policymakers and under the assumption that the citizens participating in the program have misaligned objectives. To do so, it leverages tools from adversarial risk analysis to propose an optimal incentive scheme under uncertainty. The work relies on previous work on modeling CRC risk and optimal screening strategies and provides use cases regarding individual and group-based optimal incentives based on a simple financial scheme.
Edge localized modes (ELMs) are instabilities at the tokamak edge that produce short bursts of highly energetic particles and heat, which can severely damage the walls of a plasma reactor. Resonant magnetic perturbations (RMPs) are used to mitigate or eliminate ELMs from the plasma. One effect that can reduce the intensity of the RMP is screening, which is caused by eddy currents in conducting structures or a plasma that are induced by a time-varying magnetic field. The eddy current code CARIDDI was recently coupled with the magnetohydrodynamics (MHD) code JOREK, and is able to capture the behavior of volumetric conducting structures that surround a plasma. The objective of this study is to characterize screening behavior in the JOREK-CARIDDI coupling. The analysis is divided into three parts. First, CARIDDI results are benchmarked against results from STARWALL, another JOREK extension that captures interactions of (two-dimensional) conducting structures. It is found that CARIDDI and STARWALL show good agreement, with slight variations. The second part covers the screening of time-varying RMP fields by conducting structures, oscillating at frequencies from 3 Hz to 10 kHz. The ove
Wave impact loads on maritime structures can cause casualties, damage, pollution and operational delays. Consequently, their extreme values should be accounted for in the design of these structures. However, this is challenging, as wave impact events are both rare and highly complex, requiring both high-fidelity simulations and long analysis durations to reliably quantify the associated design loads. Moreover, existing extreme value prediction methods are neither specifically developed nor adequately validated for wave impact phenomena. We therefore introduce the new Probabilistic Adaptive Screening (PAS) method for predicting extreme non-linear loads on maritime structures. The method integrates copula-based statistical dependence modelling with multi-fidelity screening and adaptive sampling. This framework enables efficient extreme value prediction by statistically mapping low-fidelity indicator variables to high-fidelity impact loads. The method allows for efficient linear potential flow indicators to be used in the low-fidelity stage, even for strongly non-linear cases. Its statistical framework is validated against four non-linear test cases, including non-linear waves, ship v
Artificial intelligence (AI) hiring tools have revolutionized resume screening, and large language models (LLMs) have the potential to do the same. However, given the biases which are embedded within LLMs, it is unclear whether they can be used in this scenario without disadvantaging groups based on their protected attributes. In this work, we investigate the possibilities of using LLMs in a resume screening setting via a document retrieval framework that simulates job candidate selection. Using that framework, we then perform a resume audit study to determine whether a selection of Massive Text Embedding (MTE) models are biased in resume screening scenarios. We simulate this for nine occupations, using a collection of over 500 publicly available resumes and 500 job descriptions. We find that the MTEs are biased, significantly favoring White-associated names in 85.1\% of cases and female-associated names in only 11.1\% of cases, with a minority of cases showing no statistically significant differences. Further analyses show that Black males are disadvantaged in up to 100\% of cases, replicating real-world patterns of bias in employment settings, and validate three hypotheses of int
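The document retrieval framework described above amounts to ranking resume embeddings by similarity to a job-description embedding. A minimal sketch with cosine similarity (the toy vectors and function name are placeholders for real MTE model outputs, not the study's setup):

```python
import numpy as np

def rank_resumes(job_vec, resume_vecs):
    """Rank resume embeddings by cosine similarity to a job-description
    embedding, mimicking a retrieval-based candidate-selection step."""
    J = job_vec / np.linalg.norm(job_vec)
    R = resume_vecs / np.linalg.norm(resume_vecs, axis=1, keepdims=True)
    sims = R @ J                       # cosine similarity per resume
    return np.argsort(-sims), sims     # best match first

job = np.array([1.0, 0.0, 1.0])
resumes = np.array([[1.0, 0.1, 0.9],   # close match
                    [0.0, 1.0, 0.0],   # unrelated
                    [0.5, 0.5, 0.5]])  # partial match
order, sims = rank_resumes(job, resumes)  # order: [0, 2, 1]
```

An audit study of the kind described then swaps name-bearing tokens in otherwise identical resumes and tests whether the similarity ranking shifts with the protected attribute.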
Since the formation of the first stars, most of the gas in the Universe has been ionized. Spatial variations in the density of this ionized gas generate cosmic microwave background anisotropies via Thomson scattering, a process known as the ``anisotropic screening'' effect. We propose and implement for the first time a new estimator to cross-correlate unWISE galaxies and anisotropic screening, as measured by the Atacama Cosmology Telescope and Planck satellite. We do not significantly detect the effect; the null hypothesis is consistent with the data at 1.7 $\sigma$ (resp. 0.016 $\sigma$) for the blue (resp. green) unWISE sample. We obtain an upper limit on the integrated optical depth within a 6 arcmin disk of $\bar{\tau} < 0.033$ arcmin$^2$ at 95\% confidence for the blue sample and $\bar{\tau} < 0.057$ arcmin$^2$ for the green sample. Future measurements with Simons Observatory and CMB-S4 should detect this effect significantly. Complementary to the kinematic Sunyaev-Zel'dovich effect, this probe of the gas distribution around halos will inform models of feedback in galaxy formation and baryonic effects in galaxy lensing.
As a novel deep learning model, gcForest has been widely used in various applications. However, the current multi-grained scanning of gcForest produces many redundant feature vectors, which increases the time cost of the model. To screen out redundant feature vectors, we introduce a hashing screening mechanism for multi-grained scanning and propose a model called HW-Forest, which adopts two strategies: hashing screening and window screening. In the hashing screening strategy, HW-Forest employs a perceptual hashing algorithm to calculate the similarity between feature vectors, removing the redundant feature vectors produced by multi-grained scanning and significantly decreasing time cost and memory consumption. Furthermore, we adopt a self-adaptive instance screening strategy called window screening to improve the performance of our approach; it achieves higher accuracy without hyperparameter tuning on different datasets. Our experimental results show that HW-Forest achieves higher accuracy than other models while also reducing time cost.
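The hashing screening idea, hash each feature vector and drop vectors whose hashes are too similar to one already kept, can be sketched as follows. The sign-against-the-mean hash and the Hamming-distance cutoff below are simplifying assumptions for illustration; HW-Forest's actual perceptual hashing algorithm is more elaborate:

```python
import numpy as np

def sign_hash(v):
    """Toy perceptual-style hash: one bit per component, set when the
    component exceeds the vector's own mean (illustrative assumption)."""
    return (v > v.mean()).astype(np.uint8)

def screen_redundant(vectors, max_hamming=0):
    """Keep a feature vector only if its hash differs from every
    previously kept hash by more than max_hamming bits."""
    kept, hashes = [], []
    for i, v in enumerate(vectors):
        h = sign_hash(np.asarray(v, dtype=float))
        if all(np.count_nonzero(h != k) > max_hamming for k in hashes):
            kept.append(i)
            hashes.append(h)
    return kept

vecs = [np.array([1.0, 0.2, 0.9, 0.1]),
        np.array([0.9, 0.1, 1.0, 0.2]),   # same hash pattern -> redundant
        np.array([0.1, 1.0, 0.2, 0.9])]   # different pattern -> kept
kept = screen_redundant(vecs)             # -> [0, 2]
```

Because hashing each vector is cheap and comparisons are bitwise, this filter removes near-duplicates in roughly linear time per vector, which is where the time and memory savings come from.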
We initiate the study of strategic behavior in screening processes with multiple classifiers. We focus on two contrasting settings: a conjunctive setting, in which an individual must satisfy all classifiers simultaneously, and a sequential setting, in which an individual must satisfy the classifiers one at a time in order to succeed. In other words, we introduce the combination of strategic classification with screening processes. We show that sequential screening pipelines exhibit new and surprising behavior: individuals can exploit the sequential ordering of the tests to zig-zag between classifiers without ever having to satisfy all of them simultaneously. We demonstrate that an individual can obtain a positive outcome using a limited manipulation budget even when far from the intersection of the positive regions of all classifiers. Finally, we consider a learner whose goal is to design a sequential screening process that is robust to such manipulations, and provide a construction for the learner that optimizes a natural objective.
Structure-based virtual screening (SBVS) is a key computational strategy for identifying potential drug candidates by estimating the binding free energies ($\Delta G_{\mathrm{bind}}$) of protein-ligand complexes. The immense size of chemical libraries, combined with the need to account for protein and ligand conformations as well as ligand translations and rotations, makes these tasks computationally intensive on classical hardware. This study proposes a quantum convolutional neural network (QCNN) framework to estimate $\Delta G_{\mathrm{bind}}$ efficiently. Using the PDBbind v2020 dataset, we trained QCNN models with 9 and 12 qubits, with the core set designated as the test set. The best-performing model achieved a Pearson correlation coefficient of 0.694 on the test set. To assess robustness, we introduced quantum noise under two configurations. While noise increased the root mean square deviation, the Pearson correlation coefficient remained largely stable. These results demonstrate the feasibility and noise tolerance of QCNNs for high-throughput virtual screening and highlight the potential of quantum computing to accelerate drug discovery.
We study and classify systems of certain screening operators arising in a generalized vertex operator algebra, or more generally an abelian intertwining algebra with an associated vertex operator (super)algebra. Screening pairs arising from weight one primary vectors acting commutatively on a lattice vertex operator algebra (the vacuum module) are classified into four general types, one of which has been shown to play an important role in the construction and study of certain important families of $\mathcal{W}$-vertex algebras. We go on to study these types of screening pairs in detail through the notion of a system of screeners: lattice elements, or `screening momenta', that give rise to screening pairs. We classify screening systems for all positive definite integral lattices of rank two, and for all positive definite even lattices of arbitrary rank when these lattices are generated by a screening system.
A designer distributes goods while considering the perceived equity of the resulting allocation. Such concerns are modeled through an equity constraint requiring that equally deserving agents receive equal allocations. I ask what forms of screening are compatible with equity and show that while the designer cannot equitably screen with a single instrument (e.g., payments or ordeals), combining multiple instruments, which on their own favor different groups, allows her to screen while still producing an equitable allocation.
Attempts to modify gravity in the infrared typically require a screening mechanism to ensure consistency with local tests of gravity. These screening mechanisms fit into three broad classes; we investigate theories which are capable of exhibiting more than one type of screening. Specifically, we focus on a simple model which exhibits both Vainshtein and kinetic screening. We point out that due to the two characteristic length scales in the problem, the type of screening that dominates depends on the mass of the sourcing object, allowing for different phenomenology at different scales. We consider embedding this double screening phenomenology in a broader cosmological scenario and show that the simplest examples that exhibit double screening are radiatively stable.
We review lattice studies of color screening in the quark-gluon plasma. We put the phenomena related to color screening into the context of similar aspects of other physical systems (electromagnetic plasma or cold nuclear matter). We discuss the onset of color screening and its signature and significance in the QCD transition region, and elucidate at which temperatures and to what extent the weak-coupling picture based on hard thermal loop expansion, potential nonrelativistic QCD, or dimensionally reduced QCD quantitatively captures the key properties of color screening. We discuss in depth the different regimes pertaining to color screening and the thermal dissociation of static quarks for the various spatial correlation functions studied on the lattice, and clarify the status of their asymptotic screening masses. We finally discuss the screening correlation functions of dynamical mesons with a wide range of flavor and spin content, and how they conform with expectations for low- and high-temperature behavior.
Due to the steady growth in population and longevity, emergency department visits are increasing across North America. As more patients visit the emergency department, traditional clinical workflows become overloaded and inefficient, leading to prolonged wait times and reduced healthcare quality. One such workflow is the triage medical directive, which is impeded by limited human capacity, inaccurate diagnoses, and invasive over-testing. To address this issue, we propose TriNet: a machine learning model for medical directives that automates first-line screening at triage for conditions requiring downstream testing for diagnosis confirmation. To verify screening potential, TriNet was trained on hospital triage data and achieved high positive predictive values in detecting pneumonia (0.86) and urinary tract infection (0.93). These models outperform current clinical benchmarks, indicating that machine-learning medical directives can offer cost-free, non-invasive screening with high specificity for common conditions, reducing the risk of over-testing while increasing emergency department efficiency.