Targeted drug delivery and hyperthermia in cardiovascular disease demand the accurate delivery of nanoparticles within complex arterial geometries. This paper introduces a hybrid computational model that concomitantly examines the combined impact of nanoparticle radius and interparticle spacing on the thermal and mass transport characteristics of ternary bio-nanofluid flow under magnetohydrodynamic (MHD) effects. The ternary fluid is composed of blood with suspended gold (Au), silver (Ag), and silica (SiO2) nanoparticles. The mathematical model accounts for the geometric properties of the nanoparticles, namely radius and interparticle spacing, given their practical relevance to several medical interventions. The numerical analysis follows a hybrid computational strategy in which solutions are first obtained with the bvp4c numerical solver and then integrated with a novel supervised multi-hidden-layer artificial neural network (ANN). The proposed model shows exceptionally high predictive accuracy, with the lowest mean squared error and an ideal regression coefficient: MSE = 9.6327×10⁻¹¹, gradient = 9.5681×10⁻⁸, mu = 1×10⁻⁹, and R² = 1.0. The main findings indicate that smaller interparticle spacing (h = 0.1) produces continuous thermal percolation networks that enhance thermal conductivity by up to 35%, improving hyperthermia efficiency, whereas larger nanoparticles (radius ≥ 1.5) offer higher drug-loading capacity at the cost of a 15-20% reduction in heat transfer rate. Increasing the magnetic parameter (M = 0.1-0.7) decreases flow velocity by 28% and extends nanoparticle residence time at the stenosis by 35%, allowing sustained drug delivery; these results apply directly to clinical-strength (1.5-3 T) MRI-guided interventions. The radiation parameter (Rd = 0.5-2.5) raises arterial temperature by 15-20%, giving controllable thermal modulation for hyperthermia applications. The model predicts that optimal nanoparticle preparations (50 nm radius, 20 nm spacing) have the potential to lower restenosis rates by 30-40% relative to traditional drug-eluting stents. This integrated computational-machine-learning system provides quantitative guidance for stent coating design, nanoparticle formulation, and treatment protocol optimization, and is directly applicable to biomedical interventions. The results offer practical guidance to stent manufacturers, interventional radiologists, and pharmaceutical developers in creating evidence-based, next-generation cardiovascular therapies.
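As a hedged illustration of the solver stage, the sketch below solves a generic MHD boundary-layer system with scipy.integrate.solve_bvp, a Python analogue of MATLAB's bvp4c. The similarity equations, boundary conditions, and parameter values M and Pr are textbook-style assumptions, not the paper's ternary-nanofluid model.

```python
# Hypothetical sketch: a generic MHD boundary-layer system solved with
# scipy.integrate.solve_bvp (analogue of MATLAB's bvp4c). Equations and
# parameters are illustrative assumptions, not the paper's model.
import numpy as np
from scipy.integrate import solve_bvp

M, Pr = 0.5, 6.2  # assumed magnetic parameter and Prandtl number

def odes(eta, y):
    # y = [f, f', f'', theta, theta']
    f, fp, fpp, th, thp = y
    return np.vstack([fp,
                      fpp,
                      fp**2 + M*fp - f*fpp,   # f''' = f'^2 + M f' - f f''
                      thp,
                      -Pr*f*thp])             # theta'' = -Pr f theta'

def bc(ya, yb):
    # f(0)=0, f'(0)=1, theta(0)=1; f'(inf)=0, theta(inf)=0
    return np.array([ya[0], ya[1] - 1, ya[3] - 1, yb[1], yb[3]])

eta = np.linspace(0, 10, 200)
y0 = np.zeros((5, eta.size))
y0[1] = np.exp(-eta)   # initial guesses decaying toward the far field
y0[3] = np.exp(-eta)
sol = solve_bvp(odes, bc, eta, y0)
print("f''(0) =", sol.sol(0)[2], "  -theta'(0) =", -sol.sol(0)[4])
```

The wall derivatives printed at the end are the quantities (skin friction, Nusselt-type heat transfer rate) that an ANN stage would typically be trained to predict across parameter sweeps.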
Omics data, comprising a diverse array of high-throughput molecular datasets, present substantial statistical challenges due to their intrinsic heterogeneity and variability. Effectively distinguishing biologically meaningful variations from random noise requires the application and development of robust statistical approaches. Interdisciplinary collaboration plays a pivotal role in refining these methodologies and enhancing the understanding of intricate biological systems. This chapter reviews the importance of statistical methods in omics data analysis, highlighting the need for ongoing advancements to address key challenges, including experimental design, preprocessing, dimensionality reduction, statistical modeling of complex datasets, and the interpretation of results. The pursuit of improved reliability in biological insights creates opportunities for the development and refinement of advanced statistical methodologies.
This article proposes modified test statistics for six blind covariance-based detectors used in data fusion cooperative spectrum sensing, where the full Hermitian sample covariance matrix (SCM) of the received signal is replaced by a symmetric real-valued partial sample covariance matrix (PSCM). This substitution results in a substantial reduction in overall computational complexity compared to the original SCM-based formulations, while preserving or improving detection accuracy under realistic conditions that include non-uniform noise powers, time-varying distance-dependent path loss, spatially correlated shadowing, and multipath fading with a random Rice factor. The computation of the PSCM requires 50% fewer floating-point operations than the full SCM and offers a hardware-friendly structure due to its reliance on real-valued arithmetic. On the test statistic side, the adoption of the PSCM leads to computational costs ranging from 3.37% to 61.9% of those incurred by the corresponding SCM-based test statistics.
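To make the covariance substitution concrete, the sketch below contrasts a full Hermitian SCM with one plausible real-valued symmetric stand-in; the paper's exact PSCM construction may differ, and the eigenvalue-ratio statistic is just one example of a blind covariance-based detector.

```python
# Illustrative sketch: full complex SCM vs. a real-valued symmetric stand-in
# (here Re{X X^H}/N). The paper's exact PSCM may differ; this only shows the
# real-arithmetic structure that enables the FLOP savings.
import numpy as np

rng = np.random.default_rng(0)
K, N = 8, 500                        # sensors, samples
X = rng.standard_normal((K, N)) + 1j*rng.standard_normal((K, N))

scm = X @ X.conj().T / N             # full Hermitian SCM (complex arithmetic)
pscm = (X.real @ X.real.T + X.imag @ X.imag.T) / N   # symmetric, real-valued

def mme(R):
    # max-min eigenvalue ratio, a common blind test statistic
    w = np.linalg.eigvalsh(R)        # works for Hermitian or real symmetric R
    return w[-1] / w[0]

print(mme(scm), mme(pscm))
```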
Early-stage infrared forest fire detection is severely hindered by strong background thermal interference and extremely weak fire radiation signals. Existing methods mainly rely on spatial-domain modeling and overlook the frequency-domain characteristics of flame thermal radiation, limiting robustness in complex environments. To address this challenge, we propose CTM-DETR, an end-to-end detection framework tailored for infrared forest fire monitoring. A frequency-aware backbone, CGlobalFilter, is introduced to explicitly model thermal radiation priors by performing real-spectrum filtering in the frequency domain, effectively suppressing non-fire thermal disturbances. Furthermore, a statistics-guided linear attention mechanism (TSSA) is embedded into the AIFI module of the detection head. This mechanism approximates the dominant global interactions in conventional pairwise attention using token-level second-order statistics, thereby reducing the interaction complexity from O(N²) to O(N) while preserving global contextual modeling ability. To mitigate sample imbalance, a Matching-Aware Loss (MAL) is incorporated to adaptively reweight samples based on matching quality. Experiments on a constructed infrared forest fire dataset show that CTM-DETR surpasses RT-DETR, achieving a 3.1% mAP50 improvement with 15.6% fewer parameters and 17.8% lower computational cost. Beyond performance gains, this work provides new insights into the frequency-domain and statistical properties of infrared flame radiation and offers a transferable paradigm for thermal imaging-based perception tasks.
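The O(N²) to O(N) reduction is the hallmark of kernelized linear attention: a d×d second-order token statistic K^T V replaces the N×N pairwise map. TSSA's specific statistics-guided weighting is not detailed in the abstract, so the feature map in the sketch below is an assumption.

```python
# Generic kernelized linear attention in O(N d^2) rather than O(N^2 d).
# The positive feature map is an assumption; TSSA's exact statistic-guided
# form is not given in the abstract.
import numpy as np

def linear_attention(Q, K, V, eps=1e-6):
    phi = lambda x: np.maximum(x, 0) + 1.0    # simple positive feature map
    Qf, Kf = phi(Q), phi(K)
    kv = Kf.T @ V                             # d x d second-order statistic
    z = Qf @ Kf.sum(axis=0)                   # length-N normalizer
    return (Qf @ kv) / (z[:, None] + eps)

rng = np.random.default_rng(1)
N, d = 1024, 64
Q, K, V = (rng.standard_normal((N, d)) for _ in range(3))
out = linear_attention(Q, K, V)
print(out.shape)                              # (1024, 64); no N x N map formed
```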
Groundwater pollution source identification (GPSI) plays a vital role in groundwater protection and remediation. Among various inversion approaches, the Metropolis-Hastings (MH) and particle filter (PF) algorithms have been widely employed for probabilistic parameter estimation. However, the MH algorithm often suffers from slow convergence and a tendency to become trapped in local optima, while the PF algorithm experiences particle degeneracy and high computational cost in high-dimensional nonlinear systems. To address these challenges, this study proposes a hybrid Metropolis-Hastings and particle filter (PF-MH) algorithm that integrates the global exploration of PF with the local optimization of MH, effectively improving sampling efficiency and stability in posterior estimation. Furthermore, a residual neural network (ResNet) surrogate model is incorporated to approximate the numerical simulator, greatly reducing computational burden without compromising accuracy. We designed two hypothetical aquifer cases of varying complexity to validate the proposed method's effectiveness. The ResNet surrogate model significantly outperformed the multilayer perceptron (MLP) in both cases. Moreover, the PF-MH algorithm outperformed the MH algorithm, reducing average relative errors in pollutant release and hydrogeological parameter estimates while demonstrating greater stability across multiple runs. Overall, integrating the ResNet surrogate model with the PF-MH algorithm provides an efficient and reliable approach to tackle the high computational costs and significant uncertainties in GPSI.
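A minimal resample-move sketch of a PF-MH hybrid follows: particles are weighted and resampled (the PF step), then each is rejuvenated with a Metropolis-Hastings move (the MH step). The forward model, which in the paper would be the ResNet surrogate, is a stand-in here, as are the prior and noise level.

```python
# Minimal resample-move sketch of a PF-MH hybrid. The forward model stands in
# for the (surrogate) groundwater simulator; prior and noise are assumptions.
import numpy as np

rng = np.random.default_rng(2)

def forward(theta):
    return theta**2            # stand-in for the numerical simulator

obs, sigma = 4.0, 0.5
def loglik(theta):
    return -0.5 * ((forward(theta) - obs) / sigma)**2

P = 500
theta = rng.uniform(0, 5, P)                        # draws from the prior

# PF step: weight by likelihood, then multinomial resampling
logw = loglik(theta)
w = np.exp(logw - logw.max()); w /= w.sum()
theta = rng.choice(theta, size=P, p=w)

# MH rejuvenation of each particle (symmetric random-walk proposal)
prop = theta + 0.2 * rng.standard_normal(P)
accept = np.log(rng.uniform(size=P)) < loglik(prop) - loglik(theta)
theta = np.where(accept, prop, theta)
print(theta.mean(), theta.std())                    # posterior concentrates near 2
```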
Advances in molecular epidemiology and computational modeling have improved our ability to track pathogen evolution, but accurate reconstruction of spatiotemporal transmission remains essential for epidemic preparedness and response. Structured coalescent models offer a phylogeographic framework by restricting coalescence to lineages within the same deme. Although the Bayesian structured coalescent approximation (BASTA) provides a tractable approach, contemporary phylogeographic analyses involving dozens of localities and hundreds to thousands of genomes exceed the computational capacity of existing implementations. The BASTA likelihood scales cubically with deme count and quadratically with sequence count due to matrix exponentiation and partial likelihood vector updates. Here, we introduce an algorithmic restructuring of the structured coalescent likelihood that eliminates redundancies, optimizes memory access, and exposes parallelization opportunities. Our approach reorganizes computations along three dimensions: i) independent calculation of deme-transition probability matrices across time intervals; ii) simultaneous evaluation of partial likelihood vectors within temporal slices; and iii) concurrent aggregation of coalescent probabilities. Algorithmic restructuring cuts average coalescent likelihood computation time by a factor of 7 to 8, and parallelization further boosts performance to 10- to 26-fold, enabling joint phylogeographic analyses of dengue virus across 10 South American countries and of H5N1 avian influenza across 20 Eurasian regions to finish in a fraction of the previously required time. This computational efficiency also enables comparison between backward-in-time structured coalescent approximations and forward-in-time phylogeographic methods, revealing that the former provides appropriately conservative posterior estimates, particularly at intermediate phylogenetic depths. We integrate our implementation into the BEAST X and BEAGLE software packages, providing researchers with an accessible and scalable tool for real-time phylogeographic surveillance of rapidly evolving pathogens.
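Dimension (i) of the restructuring is straightforward to illustrate: the deme-transition matrices expm(Q·dt) for different time intervals share no data dependencies, so they can be evaluated concurrently. The rate matrix Q and interval lengths below are synthetic placeholders.

```python
# Sketch of dimension (i): transition matrices P(dt) = expm(Q * dt) are
# independent across intervals, so they can be computed in parallel.
# The migration-rate matrix Q and interval lengths are synthetic.
import numpy as np
from scipy.linalg import expm
from concurrent.futures import ProcessPoolExecutor

D = 10                                    # number of demes
rng = np.random.default_rng(3)
Q = rng.uniform(0, 0.1, (D, D))
np.fill_diagonal(Q, 0)
np.fill_diagonal(Q, -Q.sum(axis=1))       # valid rate matrix (rows sum to 0)

intervals = np.diff(np.sort(rng.uniform(0, 50, 200)))   # interval lengths

def transition(dt):
    return expm(Q * dt)

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        mats = list(pool.map(transition, intervals))
    print(len(mats), mats[0].shape)       # one D x D matrix per interval
```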
Heterogeneity in spiking activity is ubiquitous among neurons even within a given cell type. To date, the relative contributions of extrinsic mechanisms (e.g., synaptic bombardment), intrinsic mechanisms (e.g., conductances), and cell morphology toward determining spiking activity remain poorly understood. Here, we addressed this important question using a combination of extracellular in vivo recordings of electrosensory pyramidal cells within weakly electric fish with computational modeling. Specifically, by varying parameters of a conductance-based computational model, we successfully reproduced the highly heterogeneous spiking activities seen experimentally. Model parameters that varied the most were then used to gauge the relative contributions of extrinsic vs. intrinsic mechanisms. Overall, extrinsic synaptic input was predicted to be the main factor accounting for spiking heterogeneity. We tested this prediction experimentally by performing two different manipulations: (1) pharmacologically inactivating feedback from higher brain areas and (2) applying the neuromodulator serotonin. Our model showed that feedback inactivation should reduce spiking heterogeneity, whereas serotonin application should increase it, two predictions that were corroborated experimentally. Importantly, for serotonin application, increased heterogeneity occurred despite a strong reduction in intrinsic membrane conductance, further demonstrating that extrinsic synaptic input is the primary determinant of spiking heterogeneity in vivo. Taken together, our results demonstrate that devising a computational model to capture spiking heterogeneities in vivo and assessing which parameters are responsible can successfully determine the relative contributions of extrinsic inputs, intrinsic properties, and neural morphology.
Adaptive behavior requires organisms to make decisions under uncertainty, balancing the exploitation of known options with exploration as environmental structure changes. Across ecology and neuroscience, this problem has been studied using distinct experimental and theoretical frameworks, including probabilistic choice, reversal learning, foraging tasks, reinforcement learning, and Bayesian inference. Here, we synthesize some of these ideas within a predictive processing perspective, arguing that they address a shared computational challenge: inferring latent environmental structure and adjusting behavior in response to different sources of variability. We distinguish key forms of uncertainty and review evidence that animals can regulate learning rates, persistence, and exploration according to the inferred origin of outcome variability. Laboratory paradigms such as probabilistic reversal learning provide controlled settings to dissociate sensitivity to noise from sensitivity to change, while foraging tasks reveal how local fluctuations are integrated with global estimates of environmental quality. Across species, apparent decision variability often reflects adaptive sampling rather than suboptimal noise. We further review evidence suggesting that cortical and subcortical circuits can encode predictions and environmental statistics, and that neuromodulator systems, including noradrenaline, acetylcholine, dopamine, and serotonin, modulate the influence of new evidence relative to prior beliefs. Together, these findings support a view of adaptive decision-making as hierarchical uncertainty resolution that operates across behavioral timescales and experimental contexts, and provide a framework for linking ecological decision rules, laboratory models, and neural mechanisms.
Nanoconfined fluids are central to many engineering applications, such as shale energy production, carbon sequestration, and molecular separations. While classical molecular dynamics (MD) simulation provides essential atomistic detail, its prohibitive computational cost severely limits accessible time and length scales. Hybrid MD-Monte Carlo (MDMC) methods accelerate sampling but lack generality beyond the conditions on which they were trained. In this work, we introduce an AI-assisted MDMC framework that overcomes this limitation by learning local, conditional transition statistics directly from MD trajectories. Our method encodes molecular motion into a compact set of neural network-predicted displacement actions, preserving MD-level accuracy within a drastically reduced dimensionality. This approach enables efficient sampling with robust generality. We systematically demonstrate the framework's accuracy and transferability across diverse thermodynamic conditions (temperature, pressure), spatial scales, and complex nanoscale geometries, establishing a versatile path for simulating confined fluid phenomena relevant to engineering applications.
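A toy sketch of the move structure such a framework implies: a learned model (here a hand-written stand-in) supplies conditional probabilities over a discrete set of displacement actions, and a Metropolis test that includes the proposal ratio keeps the sampling exact. The action set, energy function, and "model" are all illustrative assumptions.

```python
# Toy MDMC-style sketch: state-conditional probabilities over discrete
# displacement actions (stand-in for the NN) with a Metropolis test that
# includes the proposal ratio. Everything here is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(7)
actions = np.array([-0.2, -0.1, 0.0, 0.1, 0.2])     # displacement actions

def action_probs(x):                                 # stand-in for the NN
    logits = -np.abs(x + actions - 1.0)              # bias moves toward x = 1
    p = np.exp(logits)
    return p / p.sum()

def energy(x):                                       # toy confining potential
    return 5.0 * (x - 1.0)**2

x, beta = 0.5, 2.0
for _ in range(10_000):
    p_fwd = action_probs(x)
    a = rng.choice(len(actions), p=p_fwd)
    xn = x + actions[a]
    a_rev = len(actions) - 1 - a                     # reverse move on symmetric grid
    log_acc = (-beta * (energy(xn) - energy(x))
               + np.log(action_probs(xn)[a_rev]) - np.log(p_fwd[a]))
    if np.log(rng.uniform()) < log_acc:
        x = xn
print("sampled position near minimum:", round(x, 2))
```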
Improved accessibility of high-throughput RNA sequencing has increased the amount of data generated each year. This increase in data creates a need for reproducible pipelines that can process RNA-seq data consistently across experiments. AutoRNAseq addresses this need by providing a Snakemake-based workflow for bulk RNA-seq analysis, automating data retrieval, quality control, and gene quantification. Unlike existing RNA-seq workflows that require users to coordinate multiple pipelines and pre-configure reference data, AutoRNAseq provides a single, end-to-end workflow that automates data acquisition, reference preparation, quality control, alignment, and quantification with minimal user intervention. AutoRNAseq is applicable to any domain requiring consistently processed RNA-seq datasets, including bioinformatics, computational biology, and drug-response studies. AutoRNAseq is implemented in Snakemake and available at https://gitlab.com/unebraska/lagbh-public/autornaseq. Documentation and example configuration files are provided in the GitLab README file and this paper's Supplementary Information. The code to reproduce the statistics presented here is in the GitLab repository under the "publication" folder.
Stochastic models can be highly computationally expensive. This limits the range of parameters and scenarios that can be realistically explored. Previously, a queuing network model was developed for the insulin-stimulated intracellular translocation of the glucose transporter GLUT4. Whilst one hypothesis of insulin action was tested, alternative hypotheses were too computationally expensive for parameter inference. In this study, a deterministic surrogate model is developed for the queuing network. The surrogate model uses feedback terms in a system of differential equations to approximate the blocking mechanisms seen in the queuing network. A sensitivity analysis of the surrogate model was performed and its correspondence to the queuing network assessed. This surrogate model may be useful in a parameter inference recalibration process, allowing posteriors for the queuing network to be acquired with lower computational cost.
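A minimal sketch of the surrogate idea under stated assumptions: a two-compartment translocation ODE in which a saturating feedback factor (1 - y2/C) throttles transfer as the downstream compartment approaches capacity, mimicking queue blocking. The rate constants and capacity C are placeholders, not the calibrated GLUT4 model.

```python
# Deterministic surrogate sketch: a feedback term (1 - y2/C) approximates
# queue blocking as compartment 2 nears capacity. Parameters are placeholders.
import numpy as np
from scipy.integrate import solve_ivp

k12, k21, C = 0.3, 0.1, 50.0     # illustrative rate constants and capacity

def rhs(t, y):
    y1, y2 = y
    flux = k12 * y1 * max(1.0 - y2 / C, 0.0)   # transfer blocked as y2 -> C
    return [-flux + k21 * y2, flux - k21 * y2]

sol = solve_ivp(rhs, (0, 100), [100.0, 0.0], dense_output=True)
print(sol.y[:, -1])              # steady state respects the capacity bound
```

Because the surrogate is a small ODE system, each evaluation is orders of magnitude cheaper than a stochastic queuing simulation, which is what makes it attractive inside a parameter inference loop.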
Quantum-Assisted Photon Imaging (QAPI) leverages the correlated photon pairs produced by positron annihilation to overcome the intrinsic noise limitations of classical radiation imaging. In this study, we develop a statistical framework describing the photon-counting behavior of QAPI and compare its predicted signal-to-noise ratio (SNR) performance against classical imaging under both idealized and realistic detector conditions. Analytical derivations demonstrate that QAPI exhibits reduced variance through two mechanisms: elimination of stochastic uncertainty in photon generation via idler detector measurements, and application of binomial rather than Poisson transmission statistics enabled by the high transmission probability of 511 keV photons. To validate these predictions, we performed GATE Monte Carlo simulations using a phantom with variable-depth inserts across a range of exposure times. Under idealized conditions, measured SNRs closely matched theoretical expectations, with QAPI consistently outperforming classical imaging across all transmission probabilities. Minor deviations at extreme transmission values were attributed to finite sampling effects and breakdown of the Poisson approximation. Realistic simulations incorporating CZT detector response revealed additional challenges, particularly coincidence pairing failures due to detector transmission, which we addressed through geometric correction of missing idler events and sensitivity-based normalization. Despite these complications, QAPI retained a substantial SNR advantage approaching [Formula: see text] improvement over classical imaging. These results establish that the statistical advantage of QAPI arises fundamentally from access to idler information and the favorable transmission characteristics of high-energy photons, providing a validated theoretical and computational foundation for quantum-assisted transmission imaging and motivating further experimental development.
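The variance argument can be checked numerically: when the idler fixes the source count N0, detected counts are Binomial(N0, p) with variance N0·p·(1-p) rather than Poisson with variance N0·p, so the SNR improves by a factor of 1/sqrt(1-p). The sketch below verifies this with simulated counts; N0 and the transmission probabilities are illustrative.

```python
# Worked check of the Poisson-vs-binomial SNR argument. With the source count
# fixed at N0 (idler detection), counts are Binomial(N0, p); the SNR gain over
# the Poisson case is 1/sqrt(1 - p). N0 and p values are illustrative.
import numpy as np

rng = np.random.default_rng(4)
N0, trials = 10_000, 20_000

for p in (0.5, 0.8, 0.95):                 # 511 keV photons: p is high
    classical = rng.poisson(N0 * p, trials)       # source count unknown
    quantum = rng.binomial(N0, p, trials)         # source count known
    snr_c = classical.mean() / classical.std()
    snr_q = quantum.mean() / quantum.std()
    print(f"p={p}: SNR ratio {snr_q/snr_c:.3f}  theory {1/np.sqrt(1-p):.3f}")
```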
Accurate prediction of drug synergy is critical for the rational design of effective combination therapies against cancer. However, existing computational approaches usually characterize the effect of an individual drug on a cell line separately and then merge the effect representations of two drugs for synergy prediction, which seriously limits their abilities to capture how two drugs act together within a specific cellular environment. We introduce DeepDrugs, a mechanism-aware deep learning framework that employs a tri-linear attention network to directly characterize how two drugs jointly act within a specific cellular context to produce synergy. Extensive experiments demonstrate that DeepDrugs outperforms state-of-the-art approaches in predictive accuracy, robustness, and generalization. Systematic model interpretation analyses identify key pharmacophores that are consistent with experimental validations. Furthermore, DeepDrugs predicts multiple unseen drug combinations (e.g. the Docetaxel-Bortezomib pair in the MCF7 cell line) that align with empirical findings.
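A hedged sketch of what a tri-linear interaction looks like: the two drug embeddings and the cell-line embedding are contracted jointly through a third-order weight tensor, so the score depends on all three at once rather than on pairwise merges. The dimensions, the random tensor W, and the omission of attention heads are simplifications, not DeepDrugs' architecture.

```python
# Hedged sketch of a tri-linear interaction: a joint contraction of drug-A,
# drug-B, and cell-line embeddings through a third-order tensor. Dimensions
# and the random core W are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(5)
d = 32
drug_a, drug_b, cell = (rng.standard_normal(d) for _ in range(3))
W = rng.standard_normal((d, d, d)) / d     # learnable tri-linear core

joint = np.einsum('i,j,k,ijk->', drug_a, drug_b, cell, W)
print("synergy logit:", joint)             # depends on all three inputs jointly
```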
Mappings from biological sequences (DNA, RNA, protein) to quantitative measures of sequence functionality play an important role in contemporary biology. We are interested in the related tasks of (i) inferring predictive sequence-to-function maps and (ii) decomposing sequence-function maps to elucidate the contributions of individual subsequences. Because each sequence-function map can be written as a weighted sum over subsequences in multiple ways, meaningfully interpreting these weights requires "gauge-fixing," i.e., defining a unique representation for each map. Recent work has established that most existing gauge-fixed representations arise as the unique solutions to L2-regularized regression in an overparameterized "weight space" where the choice of regularizer defines the gauge. Here, we establish the relationship between regularized regression in overparameterized weight space and Gaussian process approaches that operate in "function space," i.e., the space of all real-valued functions on a finite set of sequences. We disentangle how weight space regularizers both impose an implicit prior on the learned function and restrict the optimal weights to a particular gauge. We show how to construct regularizers that correspond to arbitrary explicit Gaussian process priors combined with a wide variety of gauges, and characterize the implicit function space priors associated with the most common weight space regularizers. Finally, we derive the posterior distribution of a broad class of sequence-to-function statistics, including gauge-fixed weights and multiple systems for expressing higher-order epistatic coefficients. We show that such distributions can be efficiently computed for product-kernel priors using a kernel trick.
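The closing kernel-trick claim can be illustrated in function space: with a product kernel over sequence sites, the GP posterior mean is computed from an n×n Gram matrix without ever materializing the overparameterized weight space. The kernel form, alphabet, and toy data below are assumptions, not the paper's priors.

```python
# Function-space sketch: GP posterior mean under a product kernel over
# sequence sites, k(x, x') = prod_l (1 + lam * 1[x_l == x'_l]), computed via
# the kernel trick. Kernel, alphabet, and data are toy assumptions.
import numpy as np
from itertools import product

L, alphabet, lam, noise = 4, "ACGT", 0.5, 0.1
seqs = ["".join(s) for s in product(alphabet, repeat=L)]
rng = np.random.default_rng(6)
train_idx = rng.choice(len(seqs), 40, replace=False)
y = rng.standard_normal(40)                        # stand-in measurements

def kern(a, b):
    return np.prod([1 + lam * (x == z) for x, z in zip(a, b)])

Ktr = np.array([[kern(seqs[i], seqs[j]) for j in train_idx] for i in train_idx])
alpha = np.linalg.solve(Ktr + noise * np.eye(40), y)

def posterior_mean(s):
    # predict any sequence from the n x n train kernel, never forming weights
    return np.array([kern(s, seqs[j]) for j in train_idx]) @ alpha

print(posterior_mean("ACGT"))
```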
Mass spectrometry (MS)-based metabolomics is a powerful tool for understanding the complexity of biochemical processes and for identifying biomarkers across diverse biological systems. The vast amount of data generated by extreme-resolution mass spectrometers poses significant data processing challenges, requiring robust computational approaches and workflows for meaningful data interpretation. This chapter provides a comprehensive overview of current methodologies in MS-based metabolomics data analysis, with a focus on data preprocessing and pretreatment, m/z extraction and annotation, univariate and multivariate statistical approaches, as well as data visualization. We discuss key considerations for ensuring data quality and the growing role of bioinformatics in pathway analysis and metabolite identification. We highlight the transformative role of the extreme resolution and mass accuracy enabled by FT-ICR mass spectrometers, and finally, we explore emerging trends, including artificial intelligence-driven insights and real-time data processing, to guide future developments in this rapidly evolving field.
Accurate air quality forecasting is essential in developing countries such as India, where climate variability, industrialization, and increasing urbanization play a major role in degrading air quality and posing health risks. This research introduces an integrated machine learning (ML) architecture for predicting the Air Quality Index (AQI) in two urban cities of Andhra Pradesh, Visakhapatnam and Vijayawada, and one city in Telangana, Hyderabad, based on five years of pollutant and meteorological data. The approach combines a deep feedforward neural network (FNN) with residual blocks and several traditional regression techniques, viz., Random Forest, Lasso, and Gradient Boosting, to both predict AQI directly and impute it through pollutant-wise modeling in accordance with CPCB standards. Extensive feature engineering, including temporal lags, rolling statistics, and pollutant interactions, was used to capture spatiotemporal dynamics. The unified advanced FNN model attained [Formula: see text] values of 0.965, 0.97, and 0.96, and Root Mean Square Errors (RMSE) of 10.0, 11.25, and 13.17 for Vijayawada, Hyderabad, and Visakhapatnam, respectively. Furthermore, pollutant-specific predictions for 2025 showed close conformity with real AQI values ([Formula: see text], RMSE = 2.68, 5.84, and 5.30 for Random Forest for Vijayawada, Hyderabad, and Visakhapatnam, respectively) when predicted from estimated pollutant concentrations. This research illustrates a scalable method for AQI forecasting that is capable of informing real-time policy and public health interventions in data-scarce settings.
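A brief pandas illustration of the feature engineering described (temporal lags, rolling statistics, pollutant interactions) follows; the synthetic columns and window sizes are assumptions about the dataset layout.

```python
# Illustrative feature engineering: temporal lags, rolling statistics, and a
# pollutant interaction term. Columns and windows are assumed, and the data
# are synthetic stand-ins for the CPCB measurements.
import numpy as np
import pandas as pd

rng = np.random.default_rng(8)
dates = pd.date_range("2020-01-01", periods=120, freq="D")
df = pd.DataFrame({"date": dates,
                   "PM2.5": rng.uniform(20, 180, 120),
                   "PM10": rng.uniform(40, 300, 120),
                   "NO2": rng.uniform(10, 80, 120)})

for col in ["PM2.5", "PM10", "NO2"]:
    for lag in (1, 3, 7):
        df[f"{col}_lag{lag}"] = df[col].shift(lag)          # temporal lags
    df[f"{col}_roll7_mean"] = df[col].rolling(7).mean()     # rolling statistics
    df[f"{col}_roll7_std"] = df[col].rolling(7).std()

df["pm25_no2_interaction"] = df["PM2.5"] * df["NO2"]        # pollutant interaction
df = df.dropna()
print(df.filter(like="PM2.5").head())
```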
Gene regulatory networks (GRNs) at single-cell resolution provide a fundamental framework for understanding cellular functions and regulatory mechanisms. However, existing methods often focus on regulatory relationships among genes while overlooking intercellular heterogeneity and global expression organization across cell populations. Here, we present CSGRN, a supervised computational framework that integrates graph embedding and conditional cell-specific networks (CCSNs) to infer GRNs for individual cells from single-cell RNA sequencing (scRNA-seq) data. By incorporating causal regulatory structures and integrating local and global representations, CSGRN improves the accuracy and robustness of regulatory network inference. Benchmark analyses across three datasets demonstrated that CSGRN outperforms nine existing approaches. In addition, we developed two downstream analytical strategies, signal flow analysis and gene perturbation simulation, to quantify regulatory relationships and explore regulatory dynamics. These analyses reveal cell type-specific regulatory programs and key regulators involved in cellular differentiation and disease-related processes, providing a framework for investigating gene regulation in complex biological systems.
The transforming growth factor-β (TGF-β) signaling pathway is crucial in promoting tumor growth, enabling tumors to evade immune responses, and contributing to resistance against therapies. As a result, it is a significant target for cancer treatment. However, its full potential remains untapped because selectively inhibiting it without affecting normal cells is challenging. This study reports the design, synthesis, and comprehensive evaluation of novel benzimidazolium-chalcone hybrid salts (3a-3e) that strategically combine two privileged scaffolds with complementary anticancer mechanisms. Following complete structural characterization by elemental analysis, FT-IR, and NMR spectroscopy, an integrated experimental and computational workflow supports a three-strategy hypothesis rather than a single lead compound. Compound 3e emerged as the most selective derivative, showing moderate anti-proliferative activity in U87 cells while maintaining reduced toxicity toward non-cancerous cells. Although direct pathway-level validation was beyond the scope of the present study, the slight reduction observed in extracellular TGF-β1, together with anti-migratory and anti-clonogenic effects, along with in silico interaction patterns, supports 3e as a promising lead scaffold for further mechanistic investigation. In vitro studies using the glioblastoma cell line, U87, displayed promising anticancer activity of benzimidazolium-chalcone hybrid salts. Compounds 3a, 3b, and 3c demonstrated limited selectivity, with IC₅₀ ratios between cancer and normal cells ranging from 1.5- to 1.7-fold. Compound 3d showed a 1.6-fold and 2.0-fold selectivity advantage over BEAS-2B and HUVEC cells, respectively. In contrast, compound 3e demonstrated the most favorable selectivity profile, with an IC₅₀ of 41.09 μM in U87 cells and IC₅₀ values of ≥96.60 μM in normal cell lines. Molecular docking predicted binding affinities ranging from -9.91 to -11.67 kcal/mol. However, no significant correlation was observed between docking scores and biological activity (R² = 0.068, p = 0.671). Molecular dynamics simulations (3 × 100 ns) confirmed stable ligand binding for all compounds (protein-ligand minimum distance: ∼0.20 nm), with per-residue energy decomposition revealing that compound 3a binds mainly through extensive hydrophobic contacts (92% van der Waals), while compound 3e forms unique polar interactions with His283 and Tyr282. Principal component analysis revealed distinct conformational profiles (variance: 3a = 0.715, 3d = 1.364, 3e = 0.531), suggesting a possible connection between conformational restriction and cellular safety. ADMET profiling confirmed drug-like properties for compound 3e with no PAINS alerts or CYP3A4 inhibition. These findings support a preliminary hypothesis that links physicochemical properties and interaction quality, rather than static binding affinity, to therapeutic selectivity in TGF-β1 modulation design.
The prediction of thermodynamic properties using mathematical and graph-theoretical approaches has attracted significant attention in materials science. The present paper is a statistical investigation of the relationship between various topological indices and the heat of formation (HOF) of the magnesium aluminate [Formula: see text] network. The study fills the current gap in systematically relating graph-theoretical descriptors to the thermodynamic stability of structured [Formula: see text] networks. The HOF data are computed through computational simulation of the related network structures under homogeneous reference conditions, ensuring consistency in the energy assessment. Moreover, the present study provides statistical insight into how different topological indices express the heat of formation of the magnesium aluminate [Formula: see text] network. Considering various topological indices, we use a power curve-fitting technique to estimate and describe the heat of formation, a major thermodynamic quantity that directly affects the stability and reactivity of [Formula: see text]. We compute and investigate the Randic index, the Atom-Bond Connectivity (ABC) index, the Geometric-Arithmetic (GA) index, and the Zagreb index from the chemical graph representation against the HOF data. Results indicate significant predictive efficiency, with R1 giving the best fit ([Formula: see text], [Formula: see text]), followed by the [Formula: see text] and GA indices. All other indices also showed strong correlations ([Formula: see text]). Power fitting thus estimates the value of HOF effectively. We also identified important relationships between the heat of formation and the topological indices using the power curve-fitting method. Our results show that the curve-fitted model not only reproduces the data points accurately but also offers insight into the nature of the chemical bonding within the [Formula: see text] network.
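The power curve-fitting step is standard nonlinear least squares on HOF ≈ a·I^b; a minimal sketch with scipy.optimize.curve_fit follows, using placeholder index and HOF values rather than the paper's computed data.

```python
# Power-law fit HOF ~ a * I^b between a topological index I and the heat of
# formation, with R^2 from residuals. Values below are placeholders, not the
# paper's computed indices or HOF data.
import numpy as np
from scipy.optimize import curve_fit

index = np.array([12.0, 25.0, 44.0, 69.0, 100.0])   # assumed index per unit cell
hof = np.array([3.1, 6.0, 9.8, 14.2, 19.5])         # placeholder HOF values

power = lambda x, a, b: a * np.power(x, b)
(a, b), _ = curve_fit(power, index, hof, p0=(1.0, 1.0))

resid = hof - power(index, a, b)
r2 = 1 - (resid @ resid) / ((hof - hof.mean()) @ (hof - hof.mean()))
print(f"HOF ~ {a:.3f} * I^{b:.3f},  R^2 = {r2:.4f}")
```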
Accurate identification of disease-associated microRNAs (miRNAs) is crucial for elucidating pathogenic mechanisms and advancing therapeutic discovery. Although computational methods, particularly those based on biological networks, have become essential tools for predicting miRNA-disease associations, existing approaches often struggle to comprehensively learn from heterogeneous data and optimize feature representations. To overcome these limitations, we propose the Multi-view Hybrid Attention Graph Convolutional Network (MV-HAGCN). This framework constructs a comprehensive heterogeneous network by integrating multi-source biological information, simultaneously capturing miRNA similarity and disease similarity. We design a hierarchical attention mechanism to enable refined feature learning: first, the Efficient Channel Attention (ECA) module prioritizes information-rich input features, ensuring the model focuses on high-value biological characteristics. Subsequently, the Multi-Head Self-Attention Graph Convolutional Network operates on these refined features. Through iterative message passing and multi-head self-attention, it captures not only direct first-order relationships between nodes but also explicitly models and infers complex, indirect higher-order relationships within the network. This hierarchical design progressively refines feature representations, from channel-level recalibration to global structural dependency modeling, enabling the model to capture both local and high-order relational patterns. Furthermore, a dynamic weight learning strategy adaptively integrates multi-perspective similarity matrices, achieving superior feature complementarity and synergy. Finally, the high-order node representations learned through multi-layer graph convolutions are fed into a multi-layer perceptron for integration and nonlinear transformation, enabling precise prediction of potential miRNA-disease associations. Comprehensive evaluation through five-fold cross-validation on HMDD v2.0 and v3.2 benchmark datasets demonstrates that MV-HAGCN consistently outperforms existing state-of-the-art methods in predictive performance. Case studies targeting key diseases such as breast cancer, lung tumors, and pancreatic disorders revealed that the top 50 miRNAs associated with each of these three conditions were all validated in databases, confirming the practical value of this model in screening candidate miRNAs with high biological relevance.
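A sketch of an Efficient Channel Attention block of the kind named above: per-channel global pooling, a cheap 1-D convolution across the channel descriptor, and sigmoid gating. The feature dimensions are illustrative, and how MV-HAGCN wires ECA into its graph pipeline is not specified in the abstract.

```python
# Sketch of an ECA-style block: channel squeeze, 1-D convolution across the
# channel descriptor, sigmoid gating. Dimensions are illustrative; MV-HAGCN's
# exact integration of ECA is an assumption.
import torch
import torch.nn as nn

class ECA(nn.Module):
    def __init__(self, k=3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x):                  # x: (batch, channels, features)
        w = x.mean(dim=-1, keepdim=True)   # squeeze: per-channel descriptor
        w = self.conv(w.transpose(1, 2)).transpose(1, 2)  # local cross-channel mix
        return x * torch.sigmoid(w)        # recalibrate channels

x = torch.randn(4, 64, 128)                # e.g. 64 similarity-view channels
print(ECA()(x).shape)                      # torch.Size([4, 64, 128])
```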