An 81-year-old man with a history of multiple myeloma presented with decompensated heart failure (HF). Although his baseline left ventricular ejection fraction (LVEF) was preserved, it significantly declined to 35% at 13 months after the initiation of ixazomib. Considering the chronological relationship between the timing of ixazomib treatment and symptom onset, prior normal echocardiography, and multimodal imaging findings, ixazomib-induced cancer therapy-related cardiac dysfunction (CTRCD) was strongly suspected. Despite the discontinuation of ixazomib and continued administration of guideline-directed medical therapy, his LVEF has not recovered to date. We herein report a rare case of ixazomib-induced irreversible CTRCD.
This study aims to optimize the k-nearest neighbors search (kNN search) by reducing the computational burden of the well-known Brute-force method while providing the same solution. While there exist rule-based approaches for reducing the computational burden of the kNN search, methods that use the stochastic patterns inherent to the data are lacking. Our method leverages data structures and probabilistic assumptions to enhance the scalability of the search. By focusing on the Training set where our neighbors reside, we define a sample space that limits the k-nearest neighbors search to a smaller space. For each observation in the Query set (e.g., the set of observations for which a classification is desired), a fixed radius search is employed, with the radius stochastically linked to the desired number of neighbors. This approach allows us to find the k-nearest neighbors using only a fraction of the entire Training set, in contrast to the Brute-force method, which requires distances to be calculated between each observation in the Training set and each observation in the Query set. Through simulations and a theoretical computational complexity analysis, we demonstrate that our method outperforms the Brute-force approach, particularly when the Training and Query set sample sizes are large. In addition, a benchmarked comparison of our approach and the Brute-force method on an Alzheimer's disease data set further demonstrated this, showing a 27.57-fold improvement in total elapsed time. Overall, our stochastic approach significantly reduces the computational load of kNN search while maintaining accuracy, making it a viable alternative to traditional methods for large datasets.
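The core idea, ranking only the points that fall inside a fixed-radius ball instead of the whole Training set, can be sketched as follows. This is an illustrative stand-in, not the paper's method: the radius here starts from an arbitrary guess and doubles until at least k candidates are found, whereas the paper links the radius stochastically to k.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
train = rng.normal(size=(5000, 3))   # Training set (where the neighbors reside)
query = rng.normal(size=(10, 3))     # Query set
k = 5

# Brute force: distances from every query point to every training point.
dists = np.linalg.norm(query[:, None, :] - train[None, :, :], axis=-1)
brute_idx = [set(row) for row in np.argsort(dists, axis=1)[:, :k]]

# Fixed-radius alternative: only candidates within radius r are ranked.
# The paper ties r stochastically to k; the doubling fallback below is a
# simple illustrative stand-in for that choice.
tree = cKDTree(train)
fixed_idx = []
for q in query:
    r = 0.6
    cand = tree.query_ball_point(q, r)
    while len(cand) < k:              # enlarge until enough candidates exist
        r *= 2
        cand = tree.query_ball_point(q, r)
    cand = np.asarray(cand)
    order = np.argsort(np.linalg.norm(train[cand] - q, axis=1))[:k]
    fixed_idx.append(set(cand[order]))
```

Because any ball containing at least k training points necessarily contains the k nearest neighbors, ranking the candidates returns exactly the Brute-force answer.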
We propose the integral-equation formalism of population dynamics (IEPDYN) to describe the population dynamics of distinct configurational states. According to classical reaction dynamics theory, the probability density associated with a given state obeys the Liouville equation, including influx from and efflux to neighboring states. By introducing a Markov approximation for the crossing of boundaries separating the states, tractable integral equations governing the state populations are derived. Once the time-dependent quantities appearing in these equations are evaluated, the population dynamics on long timescales can be obtained. Because these quantities depend only on a few states in the local neighborhood of a given state, they can be computed using a set of short-timescale molecular dynamics (MD) simulations. The IEPDYN method is formulated in continuous time and therefore does not rely on a coarse-grained timescale (lag time). Consequently, kinetic quantities obtained from IEPDYN are free from lag-time dependence, which has been discussed as a limitation in other approaches. We apply the IEPDYN method to the binding and unbinding kinetics of CH4/CH4, Na+/Cl-, and 18-crown-6-ether (crown ether)/K+ in water. In all three systems, the time constants estimated from the IEPDYN method are comparable to those obtained from brute-force MD simulations. The required timescale of each MD trajectory in the IEPDYN method is approximately two orders of magnitude shorter than that in the brute-force MD approach in the crown ether/K+ system. This reduction in the trajectory timescale enables applications to complex binding and unbinding systems whose characteristic timescales are far beyond those directly accessible by brute-force MD simulations.
Basecalling is a crucial step in DNA sequencing that converts raw nanopore signals into nucleotide sequences. This paper presents a serial-parallel reprogrammable DNA sequencing accelerator based on a 64-state Hidden Markov Model (HMM) implemented in a 130-nm CMOS process. The proposed method optimizes computational efficiency, hardware utilization, and power consumption using a coarse-grained serial-parallel processing approach. It achieves 94.3% accuracy, outperforming Nanocall (85.6%) and Meta-Align (91.2%), while being slightly superior to the Scalable Hardware Accelerator (93.1%). Furthermore, it consumes 200 mW, which is 6 times lower than brute-force HMM implementations and 3–5 times more power-efficient than deep learning-based basecallers like DeepNano and Bonito. The proposed accelerator maintains competitive throughput at 8 M Bases/sec, balancing processing speed and energy efficiency. Additionally, the architecture supports scalability up to 4096 states, making it highly adaptable for various sequencing applications. Its hardware-optimized and low-power design makes it an ideal alternative to brute-force and software-based methods for real-time, mobile, and embedded DNA sequencing devices.
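The decoding step an HMM basecaller accelerates is, at its core, a Viterbi recurrence over hidden states. A tiny pure-Python sketch over a hypothetical 2-state model (not the 64-state CMOS design) illustrates the recurrence that the serial-parallel hardware schedules:

```python
import math

def viterbi(obs, states, log_start, log_trans, log_emit):
    """Most likely state path for an observation sequence (log domain)."""
    V = [{s: log_start[s] + log_emit[s][obs[0]] for s in states}]
    back = []
    for o in obs[1:]:
        col, ptr = {}, {}
        for s in states:
            # Best predecessor for state s at this step.
            best = max(states, key=lambda p: V[-1][p] + log_trans[p][s])
            col[s] = V[-1][best] + log_trans[best][s] + log_emit[s][o]
            ptr[s] = best
        V.append(col)
        back.append(ptr)
    # Trace back the highest-scoring path.
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]

# Toy 2-state model with hypothetical numbers, purely for illustration.
lg = math.log
states = ["A", "B"]
start = {"A": lg(0.6), "B": lg(0.4)}
trans = {"A": {"A": lg(0.7), "B": lg(0.3)},
         "B": {"A": lg(0.4), "B": lg(0.6)}}
emit = {"A": {"hi": lg(0.8), "lo": lg(0.2)},
        "B": {"hi": lg(0.1), "lo": lg(0.9)}}
path = viterbi(["hi", "hi", "lo"], states, start, trans, emit)
```

The per-step maximization over predecessors is the inner loop that a serial-parallel design distributes across state-processing units.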
The synchronization of complex networks, governed by the generalized Fiedler value (γ) of the Laplacian matrix, is critical for functional stability and energy efficiency. However, this property also renders networks vulnerable to targeted disruptions. Traditional percolation-based attack strategies, which focus on structural integrity, often fail to effectively suppress synchronization. This study introduces a Laplacian spectral perturbation approach to systematically identify and remove edges critical to synchronization. By deriving the sensitivity of γ to topological changes and leveraging the gradient of the Fiedler vector, we quantify each edge's contribution to synchronization, revealing its connection to community structure. We propose the Fiedler Gradient Iterative Attack (FGIA) algorithm for static networks, which constructs locally optimal edge-removal sequences to maximize γ degradation while preserving global connectivity. FGIA achieves computational efficiency, outperforming brute-force methods and conventional centrality-based attacks. Extensive simulations on synthetic and real-world networks demonstrate FGIA's superior performance in synchronization suppression, offering practical applications in neuroscience and critical infrastructure protection.
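The edge sensitivity driving this kind of attack has a compact first-order form: for a unit-norm Fiedler vector v, removing edge (i, j) lowers the algebraic connectivity by approximately (v_i - v_j)^2. A sketch on a toy two-community graph (plain Laplacian rather than the paper's generalized Fiedler value, and a single scoring pass rather than the iterative FGIA) shows the bridge edge scoring highest:

```python
import numpy as np

def laplacian(n, edges):
    """Unweighted graph Laplacian L = D - A."""
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1; L[j, j] += 1
        L[i, j] -= 1; L[j, i] -= 1
    return L

def fiedler(L):
    """Algebraic connectivity (second-smallest eigenvalue) and its vector."""
    w, V = np.linalg.eigh(L)
    return w[1], V[:, 1]

# Two triangles joined by a single bridge edge (2, 3).
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
lam2, v = fiedler(laplacian(6, edges))

# First-order sensitivity of lambda_2 to removing edge (i, j):
# delta(lambda_2) ~ -(v_i - v_j)^2 for a unit-norm Fiedler vector.
score = {e: (v[e[0]] - v[e[1]]) ** 2 for e in edges}
critical = max(score, key=score.get)
```

The bridge (2, 3) between the two triangles receives the largest score, matching the intuition that synchronization-critical edges straddle community boundaries; an iterative scheme such as FGIA would remove it, recompute the spectrum, and repeat.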
Expansion of diffusion MRI (dMRI) both into the realm of strong gradients and into accessible imaging with portable low-field devices brings about the challenge of gradient nonlinearities. Spatial variations of the diffusion gradients make diffusion weightings and directions non-uniform across the field of view, and deform perfect shells in q-space designed for isotropic directional coverage. Such imperfections hinder parameter estimation: Anisotropic shells hamper the deconvolution of the fiber orientation distribution function (fODF), while brute-force retraining of a nonlinear regressor for each unique set of directions and diffusion weightings is computationally inefficient. Here, we propose a protocol-independent parameter estimation (PIPE) method that enables fast parameter estimation for the most general case where each voxel is measured with a different protocol in q-space. PIPE applies to any spherical convolution-based dMRI model, irrespective of its complexity, which makes it suitable both for white and gray matter in the brain or spinal cord, and for other tissues where fiber bundles have the same properties (fiber response) within a voxel, but are distributed with an arbitrary fODF. We also derive a parsimonious representation that isolates isotropic and anisotropic effects of gradient nonlinearities on multidimensional diffusion encodings. Applied to in vivo human MRI with linear tensor encoding on a high-performance gradient system, PIPE evaluates fiber response and fODF parameters for the whole brain in the presence of significant gradient nonlinearities in under 3 min. PIPE enables fast parameter estimation in the presence of arbitrary gradient nonlinearities, eliminating the need to arrange dMRI in shells or to retrain the estimator for different protocols in each voxel.
PIPE applies to any model based on a convolution of a voxel-wise fiber response and fODF, and data from varying b-values, diffusion/echo times, and other scan parameters.
We describe AI agents as stochastic dynamical systems and frame the problem of learning to reason as one of transductive inference: Rather than approximating the distribution of past data as in classical induction, the objective is to capture its algorithmic structure so as to reduce the time needed to solve new tasks. In this view, information from past experience serves not only to reduce a model's uncertainty, as in Shannon's classical theory, but to reduce the computational effort required to find solutions to unforeseen tasks. Working in the verifiable setting, where a checker or reward function is available, we establish three main results. First, we show that the optimal speed-up for a new task is tightly related to the algorithmic information it shares with the training data, yielding a theoretical justification for the power-law scaling empirically observed in reasoning models. Second, while the compression view of learning, rooted in Occam's Razor, favors simplicity, we show that transductive inference yields its greatest benefits precisely when the data-generating mechanism is most complex. Third, we identify a possible failure mode of naïve scaling: in the limit of unbounded model size and compute, models with access to a reward signal can behave as savants, brute-forcing solutions without acquiring transferable reasoning strategies. Accordingly, we argue that a critical quantity to optimize when scaling reasoning models is time, the role of which in learning has remained largely unexplored.
Internet-wide scanning is indispensable for security research and network measurement, yet its efficacy remains limited by significant visibility heterogeneity across global networks. Traditional centralized scanners suffer from single-point failures and offer a constrained perspective, while naive distributed approaches fail to intelligently leverage visibility variations, leading to redundant effort and incomplete coverage. This paper presents VistaScan, a novel distributed scanning system based on node visibility awareness, demonstrating that the visibility pattern among IP addresses is highly consistent within CIDR blocks, enabling a lightweight method for efficient and scalable quantification of per-node visibility. It first constructs a Visibility Matrix through efficient anchor probing, then employs a load-aware task allocation mechanism that assigns each block to the most capable node while filtering out entirely invisible blocks. Evaluation across global, regional, and challenging Special-Block tasks demonstrates that VistaScan consistently outperforms five baseline methods. It achieves near-optimal coverage (97.95%, 99.05%, and 97.58%, respectively), reduces probe volume by 64-93%, and shortens completion time by 13-19× compared to conventional centralized and distributed scanners. Furthermore, the visibility matrix derived from one port (TCP/80) effectively generalizes to other TCP ports (TCP/22, TCP/53), achieving coverages of 91.09% and 87.95%-preliminarily validating the practical generalizability of our approach. VistaScan provides both a highly efficient solution for Internet-scale distributed measurement and a new theoretical foundation based on visibility consistency, advancing the field from brute-force probing toward intelligent, low-overhead, and sustainable scanning practices.
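The allocation step can be sketched as a greedy assignment over a binary visibility matrix. The least-loaded tie-breaking rule and all values below are illustrative assumptions, not VistaScan's actual mechanism:

```python
def allocate(visibility, capacity):
    """Assign each CIDR block to a node that can see it, balancing load.

    visibility[n][b] is 1 if node n can reach block b. Blocks that no node
    sees are filtered out entirely; each remaining block goes to the capable
    node with the most spare relative capacity. A simple greedy stand-in
    for the paper's load-aware task allocation mechanism.
    """
    n_nodes, n_blocks = len(visibility), len(visibility[0])
    load = [0] * n_nodes
    plan, skipped = {}, []
    for b in range(n_blocks):
        capable = [n for n in range(n_nodes) if visibility[n][b]]
        if not capable:
            skipped.append(b)          # entirely invisible block: no probes
            continue
        best = min(capable, key=lambda n: load[n] / capacity[n])
        plan[b] = best
        load[best] += 1
    return plan, skipped

# 3 nodes x 4 blocks; block 3 is invisible to every node.
vis = [[1, 1, 0, 0],
       [0, 1, 1, 0],
       [1, 0, 1, 0]]
plan, skipped = allocate(vis, capacity=[2, 2, 2])
```

Filtering invisible blocks before probing is what removes the redundant effort a naive distributed scanner would spend on them.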
This paper proposes a seven-core fiber key synchronization transmission scheme based on constellation flat coding (CFC). This scheme maps binary keys to the positions of silent subcarriers to achieve synchronous transmission of keys and data. A four-dimensional (4D) chaotic model is used to perform XOR, CFC rules, and symbol and subcarrier masking encryption on the data. After XOR and three-dimensional (3D) constellation mapping, the high-dimensional constellation encoding is flattened, reducing the 3D constellation to two dimensions (2D) and masking keys and data in the constellation dimension, thereby achieving chaotic encryption and enhancing system security. We conducted experiments on 56 Gb/s 3D index modulation (IM) CFC signals over a 2-km seven-core fiber. The results show that the difference between the cores of the seven-core fiber does not exceed 0.1 dB, and the bit error rate (BER) under 2D-CFC demodulation with a mismatched key is around 0.5. This scheme deeply couples index selection, constellation mapping, and chaotic driving to embed encryption into modulation, achieving inherent physical-layer security with an effective parameter space of approximately 10^109 for the chaotic seed, rendering brute-force attacks infeasible.
Although scholarship has long called for attention to the intersection of race and gender in workplace harassment, the experiences of Black Americans remain insufficiently theorized. Existing frameworks often assume harassment to be gender-based in ways that center White women's victimization, leaving limited conceptual space to understand how Black women and Black men are targeted. In this essay, we synthesize research on racialized sex-based harassment (RSBH) to illustrate how harassment directed at Black Americans is shaped by cultural narratives that simultaneously sexualize, criminalize, and devalue them. Specifically, we introduce sociohistorical archetypes (e.g., Jezebel, Mammy, Sapphire, Mandingo, Brute, Uncle Tom) as cultural mechanisms through which RSBH is enacted, rationalized, and normalized within organizational contexts. We argue that RSBH functions as a mechanism for enforcing racialized gender hierarchy: it draws on sociohistorical meanings attached to Black femininity and masculinity to mark certain identities as inherently available, threatening, or subordinate. We further review evidence linking RSBH to psychological distress, social identity threat, physiological strain, and career stagnation, as well as factors that shape vulnerability and adaptation. By conceptualizing RSBH as a patterned and predictable form of identity-based harm, grounded in the lasting impact of sociohistorical archetypes, rather than a variation of generalized sexual harassment, this work advances theories of harassment and race in organizations. We conclude by outlining implications for measurement, organizational policy, and intervention efforts aimed at disrupting the reproduction of racialized gender inequality at work.
This study proposes a fast image encryption method for color images, integrating an autoencoder to compress the image and a 6D hyperchaotic system to ensure enhanced security. Initially, a hash value is obtained from the original color image. The hash value, which serves as the secret key of the proposed encryption method, is used to initialize the state variables of the hyperchaotic system, which produces six distinct pseudo-random sequences. The input image is then compressed into a latent image (lossy) using a Vision Transformer Autoencoder model. This latent image is scrambled using chaotic sequences and a Random Shuffle technique. Diffusion is achieved through the Trifid Cipher transformation, which utilizes the remaining chaotic sequences to manipulate pixel values, thereby yielding a cipher version of the latent image. The suggested technique is faster and significantly enhances security compared to the state-of-the-art methods. This method achieves an average entropy of 7.9986, a correlation coefficient close to zero (≈ 0.00004), and key sensitivity analysis gives NPCR = 99.6110% and UACI = 33.4637%. Moreover, the key space of [Formula: see text] confirms that the proposed scheme offers strong resistance against brute-force attacks.
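The scramble-and-diffuse pattern common to such schemes can be sketched with a 1D logistic map standing in for the 6D hyperchaotic system (and omitting the autoencoder compression and Trifid cipher): one chaotic sequence drives both a pixel permutation and an XOR keystream.

```python
import numpy as np

def logistic_sequence(x0, n, r=3.99):
    """Chaotic keystream from the logistic map (stand-in for the 6D system)."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1 - x)
        xs[i] = x
    return xs

def encrypt(img, x0=0.3141):
    flat = img.ravel()
    seq = logistic_sequence(x0, flat.size)
    perm = np.argsort(seq)                      # chaotic permutation (scramble)
    keystream = (seq * 256).astype(np.uint8)    # quantized chaos (diffuse)
    return flat[perm] ^ keystream, perm

def decrypt(cipher, perm, x0=0.3141):
    # In a real scheme, perm would be re-derived from the secret key.
    seq = logistic_sequence(x0, cipher.size)
    keystream = (seq * 256).astype(np.uint8)
    scrambled = cipher ^ keystream              # undo diffusion
    out = np.empty_like(scrambled)
    out[perm] = scrambled                       # undo the permutation
    return out

img = np.arange(16, dtype=np.uint8).reshape(4, 4)
cipher, perm = encrypt(img)
restored = decrypt(cipher, perm).reshape(4, 4)
```

Key sensitivity in such systems comes from the chaotic map: a tiny change in x0 yields a completely different permutation and keystream.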
Real-time intrusion detection in heterogeneous Internet of Things (IoT) networks involves continuously monitoring diverse connected devices and communication protocols to promptly identify malicious activities or anomalies. Due to varied device capabilities, dynamic topologies, and resource constraints, these systems leverage lightweight AI-driven analytics, edge processing, and adaptive security models to ensure minimal latency. Effective detection enhances resilience, safeguards sensitive data, and maintains seamless IoT operations in mission-critical environments. We propose a stage-specific Recursive Sparse & Relevance-based Feature Selection (RS2FS) and a confidence-gated Support Vector Machine (SVM) → SVM → ANFIS cascade for real-time intrusion detection in heterogeneous IoT networks. RS2FS combines elastic-net screening, MI ∩ mRMR relevance, stability selection, and margin-aware recursive pruning to yield compact, non-redundant feature sets per cascade stage. The cascade accepts easy cases with calibrated SVMs and routes ambiguous, family-localized traffic to per-family ANFIS rules, providing interpretable subtype decisions. Evaluated on CICIoT2023 with scenario-held-out splits (5 × grouped CV), our model attains Macro-F1 = 0.962, Macro-AUC = 0.991, Balanced Accuracy = 0.963, MCC = 0.952, Brier = 0.038, and ECE = 0.012 at 6.3 ms CPU latency per window with a 7.8 MB footprint. Class-wise F1 shows consistent gains: Benign 0.991, DDoS 0.984, DoS 0.958, Recon 0.961, Web 0.937, Brute Force 0.951, Data Exfiltration 0.921, Botnet 0.942. Cascade behavior explains the speed-accuracy trade-off: 68% of windows are resolved at Stage-1 (F1 0.985, 3.38 ms), 22% at Stage-2 (F1 0.962, 7.73 ms), and only 10% escalate to ANFIS (F1 0.936, 23 ms). Against strong baselines, we improve Macro-F1 by + 1.9 pp over SVM-only (0.943), + 1.7 pp over XGBoost (0.945), and + 1.1 pp over a small 1D-CNN (0.951); bootstrap tests show significance (p < 0.01). 
Unlike existing IoT IDS approaches that rely on single-stage classifiers or one-time, global feature selection, our framework introduces two fundamental advances. First, the proposed RS2FS mechanism performs stage-specific, stability-aware, and margin-guided feature reduction, addressing the gaps of redundancy, volatility, and non-adaptiveness found in prior MI-, mRMR-, or L1-based selection methods. Second, the confidence-gated SVM → SVM → ANFIS cascade introduces a new routing paradigm where high-margin "easy" traffic is settled early, while only low-confidence, ambiguous windows are escalated to fuzzy reasoning, overcoming the limitations of conventional hybrid SVM-ANFIS systems that apply the same classifier depth to all samples. Together with integrated open-set rejection and drift micro-adaptation, these contributions position the framework as a fundamentally new IDS architecture for heterogeneous IoT environments. The open-set guard achieves AUROC 0.981 and TPR@1%FPR 0.912 with 4.6% reject rate. Robustness holds under +5% timestamp jitter (0.957), ±10% packet-size noise (0.955), and 10% missing features (0.949). Interpretable ANFIS rules highlight payload-entropy, MQTT topic-depth, and DWT-energy interactions. Overall, the framework delivers accurate, calibrated, interpretable, and fast IDS suitable for deployment in modern IoT environments.
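The confidence-gated routing itself is simple to sketch. The stages below are hypothetical toy classifiers, not the paper's calibrated SVMs and ANFIS; the point is the accept-or-escalate control flow that resolves easy windows early:

```python
def cascade(samples, stages, thresholds):
    """Confidence-gated cascade: settle easy cases early, escalate the rest.

    Each stage is a function sample -> (label, confidence). A sample is
    accepted at the first stage whose confidence clears that stage's
    threshold; the final stage always decides. Returns (label, stage) pairs.
    """
    decisions = []
    for x in samples:
        for stage, (clf, thr) in enumerate(zip(stages, thresholds)):
            label, conf = clf(x)
            if conf >= thr or stage == len(stages) - 1:
                decisions.append((label, stage))
                break
    return decisions

# Toy stages: confidence is scaled distance from a decision boundary at 0.5.
fast = lambda x: ("attack" if x > 0.5 else "benign", abs(x - 0.5) * 2)
slow = lambda x: ("attack" if x > 0.5 else "benign", 1.0)  # always decides

out = cascade([0.05, 0.48, 0.95], [fast, slow], [0.5, 0.0])
```

Only the borderline sample (0.48) pays the cost of the second stage, which is the source of the speed-accuracy trade-off the paper quantifies per stage.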
Enhanced sampling methods enable the mechanistic study of complex biophysical processes at atomistic resolution, addressing the key timescale limitations of brute-force molecular dynamics (MD) simulation. However, selecting appropriate collective variables (CVs) for enhanced sampling simulation to explore the relevant phase space of the system is challenging. In recent years, machine learning (ML) algorithms have shown promise in the design of efficient CVs for enhanced sampling and improvements over traditional intuitive order parameters (OPs) in free energy surface (FES) exploration. However, the lack of interpretability and high cost of evaluation make it difficult to apply these ML-based CVs across diverse systems. Moreover, transferability of ML-guided CVs is a critical issue: they cannot be directly applied to different systems with similar mechanistic details without retraining. In this study, we introduce a surrogate-model-assisted enhanced sampling method using an elastic net (EN) regression model that expresses the relevance of different OPs as a linear combination locally at the transition state (TS) region. We demonstrate the successful application of the surrogate-model-based, TS-derived CV in exploring the landscapes of polymer collapse transitions at varying chain lengths. This method achieves faster free energy convergence within very short simulation times than the other OPs tested in this study. Moreover, we demonstrate that this approach is transferable across polymer systems of different lengths without requiring large training data for each system. Overall, this study provides a general and interpretable approach to running enhanced sampling simulations with a surrogate-model-assisted, TS-derived CV that can be extrapolated beyond the training system.
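The surrogate step, expressing a CV as a sparse linear combination of OPs via elastic net, can be sketched on synthetic data. The OPs, target, and regularization strengths below are illustrative assumptions, not fitted to any real transition-state ensemble:

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(1)

# Hypothetical order parameters sampled near a transition state; columns
# might represent radius of gyration, end-to-end distance, contacts, noise.
n = 400
ops = rng.normal(size=(n, 4))
# Assume the committor-like target depends on only two of the four OPs.
target = 0.8 * ops[:, 0] - 0.5 * ops[:, 2] + 0.05 * rng.normal(size=n)

# Elastic net expresses the CV as a sparse linear combination of OPs,
# keeping only the locally relevant ones: an interpretable surrogate that
# can be evaluated cheaply during biased sampling.
model = ElasticNet(alpha=0.05, l1_ratio=0.7).fit(ops, target)
weights = model.coef_
```

The fitted weights recover the two relevant OPs and shrink the irrelevant ones toward zero, which is what makes the resulting CV both interpretable and cheap to evaluate compared with a neural-network CV.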
This study presents an efficient image encryption algorithm designed for secure data transmission in big data environments. The proposed method employs a cosine extended logistic chaotic map with a wider chaotic range and enhanced randomness. The map's dynamics are verified through bifurcation diagrams, Lyapunov exponents, Shannon entropy and Kolmogorov entropy. The encryption scheme incorporates two confusion and two diffusion phases using chaotic pixel permutation, controlled flipping, modulo arithmetic, MSB/LSB separation, and cross-quadrant bitwise operations to achieve lightweight yet robust protection suitable for resource-constrained systems. Experimental results on standard USC-SIPI and medical image datasets show near-ideal entropy (~7.997), NPCR and UACI values close to 99.6094% and 33.4635%, and chi-square and correlation values within secure limits, indicating strong resistance against statistical and differential attacks. With an estimated key space of 2^318, the scheme surpasses brute-force resilience standards and demonstrates an effective balance between security and computational efficiency for real-time image transmission.
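The NPCR and UACI figures quoted above follow standard definitions (percentage of differing pixels, and mean absolute pixel difference normalized by 255); a short sketch computes both for two hypothetical 8-bit cipher images:

```python
import numpy as np

def npcr_uaci(c1, c2):
    """NPCR: % of pixels that differ; UACI: mean |difference| / 255 as a %."""
    c1 = c1.astype(np.int64)
    c2 = c2.astype(np.int64)
    npcr = 100.0 * np.mean(c1 != c2)
    uaci = 100.0 * np.mean(np.abs(c1 - c2)) / 255.0
    return npcr, uaci

# Two hypothetical cipher images, here modeled as independent uniform noise
# (a good cipher's outputs for plaintexts differing in a single pixel).
rng = np.random.default_rng(7)
c1 = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
c2 = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
npcr, uaci = npcr_uaci(c1, c2)
```

For two statistically independent uniform cipher images, NPCR approaches 255/256 ≈ 99.61% and UACI approaches ≈ 33.46%, which is why reported values near those numbers indicate good diffusion.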
Quantum computing improves substantially on known classical algorithms for various important problems, but the nature of the relationship between quantum and classical computing is not yet fully understood. This relationship can be clarified by free models, which add to classical computing just enough physical principles to represent quantum computing and no more. Here, we develop an axiomatization of quantum computing that replaces the standard continuous postulates with a small number of discrete equations, as well as a free model that replaces the standard linear-algebraic model with a category-theoretical one. The axioms and model are based on reversible classical computing, isolate quantum advantage in the ability to take certain well-behaved square roots, and link to various quantum computing hardware platforms. This approach allows combinatorial optimization, including brute-force computer search, to optimize quantum computations. The free model may be interpreted as a programming language for quantum computers that has the same expressivity and computational universality as the standard model, but additionally allows automated verification and reasoning.
Metal-organic frameworks (MOFs) are prime candidate materials for gas adsorption and separation owing to their exceptional porosity and structural tunability. However, the nearly infinite chemical space and exponentially growing number of candidate structures pose insurmountable challenges to traditional experimental methods and brute-force computational screening. Data-driven machine learning (ML) offers a transformative solution for efficiently navigating this vast materials library. This review analyzes the current state of ML-based MOF screening, evaluates the limitations of mainstream MOF databases, and highlights how data authenticity and update frequency affect model reliability. The evolution of feature engineering, from manual geometric descriptors to automated representation learning using graph neural networks (GNNs) and molecular fingerprints, is also outlined. Furthermore, we discuss the specific applicability of advanced algorithmic frameworks, including deep learning, active learning, and transformers, to MOF screening tasks. Future development should focus on integrating high-fidelity experimental data with model interpretability to enable closed-loop autonomous discovery systems.
Understanding the molecular structure, dynamics, and reactivity requires bridging processes that occur across widely separated timescales. Conventional molecular dynamics simulations provide an atomistic resolution, but their femtosecond time steps limit access to the slow conformational changes and relaxation processes that govern chemical function. Here, we introduce a deep generative modeling framework that accelerates sampling of molecular dynamics by four orders of magnitude while retaining physical realism. Applied to small organic molecules and peptides, the approach enables quantitative characterization of equilibrium ensembles and dynamical relaxation processes that were previously only accessible by costly brute-force simulation. The method generalizes across chemical composition and system size, extrapolating to peptides larger than those used for training, and captures chemically meaningful transitions on extended timescales. By expanding the accessible range of molecular motions without sacrificing the atomistic detail, this approach opens opportunities for probing conformational landscapes, thermodynamics, and kinetics in systems central to chemistry and biophysics.
The Internet of Things (IoT) is increasingly realized through large-scale deployments of heterogeneous devices and gateways operating under strict energy budgets and interference-limited links, which motivates reliability-aware topology control and end-to-end communication performance objectives. As deployments grow to massive scales and incorporate highly heterogeneous devices, designing and controlling network topology in a reliable and energy-efficient manner becomes a fundamental challenge: poor link quality, interference, and localization uncertainty severely limit the effectiveness of traditional topology-control approaches. In this paper, we address this challenge by introducing IoTNTop, a novel and unified graph-based framework for joint localization, graph embedding, and topology control in large-scale, resource-constrained IoT networks. Unlike conventional methods that decouple localization from topology design, IoTNTop embeds both end-nodes and gateways into a globally consistent spatial structure using partial and noisy distance measurements, and directly couples this geometry with communication-aware topology optimization. IoTNTop adopts an error-centric topology-control objective that explicitly minimizes end-to-end (E2E) error probability while enforcing practical code-rate and transmit-power constraints. The framework jointly optimizes link activation, transmit power, and data transmission code rate, and employs a scalable sub-graph stitching pipeline based on eigenvector synchronization (EVS), landmark alignment (LA), and semidefinite programming (SDP) refinement. A greedy signal-to-noise-ratio (SNR)-guided edge selection strategy with convergence checking further ensures computational efficiency. Comprehensive numerical analysis and network-level simulations show IoTNTop retains approximately 60-80% of the initial per-node energy budget while maintaining symbol error probability below 15% for the majority of nodes.
At the same time, it converges in fewer iterations than Genetic Algorithm (GA) and brute-force baselines and sustains higher achievable code rates at lower transmit power levels. These performance gains remain consistent across the tested signal-to-noise ratio regimes and network sizes.
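The greedy SNR-guided selection can be sketched as a Kruskal-style sweep: activate the best-SNR link that adds new connectivity and stop once the network is connected. The link values and pure connectivity objective below are illustrative; the paper additionally enforces code-rate and transmit-power constraints:

```python
def greedy_snr_topology(n_nodes, links):
    """Activate edges in decreasing SNR order until the network is connected.

    links: list of (snr_db, u, v). A union-find tracks connectivity; edges
    that add no new connectivity are skipped. A simplified stand-in for
    SNR-guided edge selection with convergence checking.
    """
    parent = list(range(n_nodes))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    active = []
    for snr, u, v in sorted(links, reverse=True):   # best SNR first
        ru, rv = find(u), find(v)
        if ru != rv:                                # adds new connectivity
            parent[ru] = rv
            active.append((u, v, snr))
        if len(active) == n_nodes - 1:              # converged: spanning tree
            break
    return active

# Hypothetical 4-node network with per-link SNRs in dB.
links = [(18.0, 0, 1), (6.0, 0, 2), (15.0, 1, 2), (9.0, 2, 3), (4.0, 1, 3)]
topology = greedy_snr_topology(4, links)
```

Preferring high-SNR links is what lets the activated topology sustain higher code rates at lower transmit power than an unguided baseline.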
Conspectus: Enhanced-sampling techniques employed in free-energy calculations overcome the limitations of brute-force molecular dynamics (MD) and are widely used to interrogate complex biological and chemical systems at atomic resolution. Depending on the nature of the problem at hand, different strategies are utilized to estimate the underlying free-energy change. In geometrical transformations, sampling is accelerated along a defined set of collective variables (CVs) to reconstruct the associated free-energy landscape. Conversely, in alchemical transformations, the free-energy difference between the two end states is determined by tracing a nonphysical pathway. Generalized-ensemble techniques accelerate sampling through rapid exchanges between low and high temperatures, and the resulting trajectories are then reweighted to recover the free energy. This methodological diversity, paired with distinct schools of thought promoting incompatible or competing procedures, can often breed confusion and jeopardize the reproducibility of results. To alleviate this problem, we have recently expanded the theoretical foundation of the adaptive biasing force (ABF) framework (originally classified as an importance-sampling method) and have extended its application to geometrical, alchemical, and generalized-ensemble free-energy calculations. In this Account, we review these developments and introduce a unified strategy: Well-tempered metadynamics-xABF (WTM-xABF). WTM-xABF accommodates geometrical, alchemical, generalized-ensemble, and hybrid schemes with minimal parameter tuning, making it a robust and accessible platform for a wide range of applications. Its geometrical and alchemical variants are demonstrably more efficient than, or at least competitive with, leading state-of-the-art algorithms.
To illustrate its versatility, we demonstrate the use of WTM-xABF in (1) disentangling coupled motions in complex biochemical systems by combining human-designed and machine-learning CVs, (2) performing extensive protein-ligand binding free-energy calculations for substrates of greater size and flexibility than traditional drug-like molecules, and (3) conducting fully blind folding simulations of fast-folding proteins. With its sound theoretical foundation, computational efficiency, and broad applicability, WTM-xABF is poised to become a powerful method for MD across physical chemistry, biophysics, and drug discovery.