This paper introduces a sophisticated, scalable testing system that integrates observability-driven automation with AI-augmented proactive quality engineering to tackle contemporary software delivery challenges. The proposed system extends PreventativeTestPro, an open-source hybrid testing platform combining black-box and white-box methodologies, with a novel observability-based test orchestration layer. The platform utilizes logs, metrics, events, and traces alongside browser- and server-side monitoring to promptly identify anomalies, improve test case selection, and automate the creation of functional, performance, and security test suites. A distinctive characteristic is the incorporation of large language models (LLMs) to provide root-cause insights and autonomously construct new test cases from production behaviors and identified anomalies, thus providing adaptive regression coverage and intelligent remediation. The system supports concurrent test execution with real-time AI-driven log analysis, fostering a continuous feedback loop between operations and testing. It has been validated in several enterprise scenarios, including microservices-based SaaS platforms and SAP BTP ecosystems. Empirical findings from four production deployments and a beta group of 49 engineers indicate a decrease of up to 30% in mean time to resolution, over 95% compliance with SLAs, and substantial improvements in both test coverage and defect traceability. Seamless integration with industry-standard tools demonstrates its plug-and-play capability. This research presents a comprehensive, tool-independent, and forward-looking quality engineering methodology consistent with agile and DevOps principles. Future work encompasses dynamic anomaly classification through machine learning, extension to mobile and user experience-oriented systems, and augmented LLM capabilities for domain-specific test development and failure forecasting.
Software effort estimation has traditionally been grounded in the assumption that development cost is primarily driven by human labor, approximated through proxies such as code size, functional complexity, or perceived task difficulty. The increasing adoption of large language models (LLMs) as software development assistants challenges this assumption by automating substantial portions of reasoning, coding, and refactoring. In LLM-assisted workflows, effort increasingly shifts toward interaction management, validation, correction, and integration, leading to growing misalignment between established estimation techniques, such as COCOMO, Function Points, and Story Points, and actual development cost. This paper argues that the limitations of existing estimation models in LLM-mediated development are structural rather than parametric when core development activities are delegated to automated reasoning systems. Through conceptual analysis supported by exploratory observations, we illustrate systematic mismatches between traditional effort estimates and LLM-assisted task execution, particularly in agile environments that rely on Story Points. To address this gap, we introduce a unified conceptual foundation for LLM-aware software effort estimation. We reconceptualize effort as Hybrid Intelligence Effort, emerging from the interaction between LLM cognitive complexity and human oversight effort. We further identify five core dimensions governing effort in LLM-assisted development: LLM reasoning complexity, context and information completeness, code transformation impact, iterative reasoning cycles, and human oversight effort. These dimensions capture cost drivers that are largely absent from conventional estimation theory. Rather than proposing a parametric estimation model, this work establishes a theoretical foundation for future empirical calibration and data-driven approaches. By redefining what constitutes effort in the presence of LLMs, the paper contributes a conceptual basis for estimation models aligned with contemporary AI-augmented software engineering practices.
This paper presents an AI-based generative model to address cybersecurity threats in software development for Small and Medium Enterprises (SMEs). The model aims to address the unique challenges SMEs face in implementing effective cybersecurity practices by leveraging generative AI to enhance threat detection, prevention, and response. Initially, we conducted a multivocal literature review (MLR) and an empirical survey to identify and validate cybersecurity threats and the generative AI practices used in secure software development for SMEs. An expert panel review then guided the development of an artificial neural network (ANN) and an interpretive structural model (ISM). The ANN model predicts potential cybersecurity threats by learning from historical data and software development patterns. ISM is used to (1) structure and visualize the relations between identified threats and mitigation approaches and (2) offer a combined, multi-layered risk management methodology. A case study was conducted to evaluate the effectiveness of the proposed model. The evaluation showed that the model significantly enhances SME online security and enables rapid adoption of sophisticated AI-based practices for detecting and responding to both common and advanced cyber threats. Phishing and ransomware received high assessments (Advanced), whereas some advanced techniques, e.g., AI-guided evasion and zero-day attacks, were at early stages of development (Understanding and Development). The overall results indicated that generative AI can help SMEs enhance their cybersecurity, and efforts are underway to further develop use cases for advanced threats. The AI-based generative model is a viable and scalable approach to securing SME software development. Such AI-based practices will enable SMEs to protect themselves systematically against various cyber threats. Future studies should focus on developing strategies for contemporary threats and on the impediments to global implementation, particularly in less resource-rich settings.
Current Transformer (CT) saturation is one of several system-parameter variations that can occur during power system faults, distorting the secondary current and challenging reliable protection. Detection of CT saturation using Empirical Mode Decomposition (EMD) and a Relevance Vector Machine (RVM) has been found to be a reliable, data-based method of protecting the power system. In this approach, a power system is modelled in PSCAD, with faults applied at various locations, fault resistances, and fault inception angles to generate a large volume of secondary CT current signals under saturated and unsaturated conditions. The simulated data contain realistic nonlinearities, such as partial and severe saturation effects that normally arise after fault inception. The acquired CT current signals are processed by EMD, which decomposes the nonlinear waveform into a collection of Intrinsic Mode Functions (IMFs). These IMFs reveal concealed oscillatory components associated with saturation-induced signal distortion. Empirical features are extracted from the IMFs, such as energy distribution, instantaneous frequency changes, kurtosis, skewness, and entropy. These features are highly sensitive to waveform distortion and are therefore appropriate indicators of CT saturation. The EMD-RVM technique is validated in MATLAB, ensuring successful classification of the CT saturation phenomenon. The extracted features form the input to the RVM classifier, which is trained on labelled data to distinguish between saturated and unsaturated cases. The proposed EMD-RVM scheme detects CT saturation within 23.5 ms, making it suitable for fast relay operation. The extracted EMD features provide clear separability between normal and saturated conditions, enabling accurate classification through the RVM. A hardware-in-the-loop setup has also been prepared for future real-time validation and deployment. The online version contains supplementary material available at 10.1038/s41598-026-35444-2.
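As a rough illustration of the EMD feature-extraction stage described above, the following Python sketch decomposes a synthetic CT current into IMFs with the PyEMD package and computes the statistical features named in the abstract; the waveform, sampling rate, and histogram-based entropy are assumptions, since the paper's PSCAD data and RVM implementation are not reproduced here.

```python
# Sketch of the EMD feature-extraction stage (assumes PyEMD and SciPy are installed).
# The synthetic "CT current" below is a placeholder for the paper's PSCAD waveforms.
import numpy as np
from PyEMD import EMD
from scipy.stats import kurtosis, skew, entropy

fs = 4000                                  # sampling rate, Hz (assumed)
t = np.arange(0, 0.1, 1 / fs)              # 100 ms window
current = np.sin(2 * np.pi * 50 * t)       # 50 Hz fundamental
current[t > 0.04] *= 0.6                   # crude stand-in for saturation distortion

imfs = EMD()(current)                      # decompose into Intrinsic Mode Functions

features = []
for imf in imfs:
    hist, _ = np.histogram(imf, bins=32, density=True)
    features.extend([
        np.sum(imf ** 2),                  # energy of the IMF
        kurtosis(imf),                     # peakedness of the amplitude distribution
        skew(imf),                         # asymmetry
        entropy(hist + 1e-12),             # Shannon entropy of the amplitude histogram
    ])
feature_vector = np.array(features)        # input to an RVM (or any sparse Bayesian) classifier
```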
Many software reliability growth models (SRGMs) have been proposed by researchers within the context of probability theory to estimate software reliability, the remaining number of faults, and the optimal release time. The Fault Detection Rate (FDR) may vary because of changes in testing strategies. Due to a lack of knowledge of the software code, the testing team might be unable to rectify the detected faults, thereby introducing new faults during the fault correction process. The debugging process is imperfect due to factors such as human error, insufficient testing, and complex code, resulting in epistemic uncertainty. In this paper, we propose a new software belief reliability growth model (SBRGM) using uncertain differential equations to deal with epistemic uncertainty effectively. We incorporate imperfect debugging and a change point based on the approach of belief reliability theory, making this model more accurate than some previously developed models. The model parameter estimation methodology is derived using the least squares method and implemented in Python 3.10. The change point is calculated through empirical data analysis based on the first principle of derivatives. Three real data sets were used to validate the proposed model. Compared with conventional approaches, this research contributes a more flexible and realistic treatment of epistemic uncertainty.
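The SBRGM itself is defined through uncertain differential equations not reproduced in the abstract; as a hedged sketch of the least-squares calibration step it describes, the following fits the classic Goel-Okumoto mean value function m(t) = a(1 - exp(-b t)) to invented cumulative fault counts with SciPy.

```python
# Minimal sketch of least-squares SRGM calibration (not the paper's SBRGM itself):
# fit the classic Goel-Okumoto mean value function m(t) = a * (1 - exp(-b t))
# to cumulative fault counts. The data below are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

def mean_value(t, a, b):
    """Expected cumulative number of faults detected by time t."""
    return a * (1.0 - np.exp(-b * t))

weeks = np.arange(1, 11)
cum_faults = np.array([12, 21, 28, 34, 38, 42, 44, 46, 47, 48])

(a_hat, b_hat), _ = curve_fit(mean_value, weeks, cum_faults, p0=(50.0, 0.3))
print(f"total faults a = {a_hat:.1f}, detection rate b = {b_hat:.3f}")
residuals = cum_faults - mean_value(weeks, a_hat, b_hat)   # inspect for change points
```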
In this paper, we introduce a novel multiscale approach to Granger causality testing, achieved by integrating Variational Mode Decomposition (VMD) with traditional statistical causality methods. Our approach decomposes complex time series data into intrinsic mode functions (IMFs), each representing a distinct frequency scale, thus enabling a more precise and granular analysis of causal relationships across multiple scales. By applying Granger causality tests to the stationary IMFs, we uncover causal patterns that are often concealed in aggregated data, providing a more comprehensive understanding of the underlying system dynamics. This methodology is implemented in a Python-based software package, featuring an intuitive, user-friendly interface that enhances accessibility for both researchers and practitioners. The integration of VMD with Granger causality significantly enhances the flexibility and robustness of causal analysis, making it particularly effective in fields such as finance, engineering, and medicine, where data complexity is a significant challenge. Extensive empirical studies, including analyses of cryptocurrency data, biomedical signals, and simulation experiments, validate the effectiveness of our approach. Our method demonstrates a superior ability to reveal hidden causal interactions, offering greater accuracy and precision than leading existing techniques.
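The package described above is not named in the abstract, so the following is only a hedged sketch of the pipeline it outlines: decompose two series with the vmdpy implementation of VMD and apply statsmodels' Granger test scale by scale. The toy series, mode count, and VMD parameters are all illustrative assumptions.

```python
# Sketch of the VMD + Granger-causality pipeline (assumes vmdpy and statsmodels).
# Two coupled toy series stand in for real data; all parameters are illustrative.
import numpy as np
from vmdpy import VMD
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 1024
x = np.cumsum(rng.standard_normal(n))          # driver series
y = np.roll(x, 5) + rng.standard_normal(n)     # lagged response plus noise

K = 4                                           # number of modes (scales)
alpha, tau, DC, init, tol = 2000, 0.0, 0, 1, 1e-7
x_imfs, _, _ = VMD(x, alpha, tau, K, DC, init, tol)
y_imfs, _, _ = VMD(y, alpha, tau, K, DC, init, tol)

for k in range(K):                              # test causality scale by scale
    pair = np.column_stack([y_imfs[k], x_imfs[k]])   # does x's mode k cause y's?
    res = grangercausalitytests(pair, maxlag=8, verbose=False)
    p = res[8][0]["ssr_ftest"][1]
    print(f"scale {k}: p-value = {p:.4f}")
```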
Multiplex PCR is a key modality of nucleic acid amplification testing with growing applications in clinical diagnostics, especially in infectious diseases. Recent work has demonstrated that thermodynamic and kinetic information embedded in amplification curves (ACs) can be leveraged for target identification in the multiplex setting. This technology, named Amplification Curve Analysis (ACA), requires a mechanistic simulation tool linking biochemical design choices to AC features. We present DYNAMIC, an open-source Python implementation of a kinetic model acting as a digital twin of singleplex TaqMan PCR. Based on established kinetic and stoichiometric principles, DYNAMIC predicts fluorescence values over a wide range of experimental conditions. Key features include separate modeling of primer and probe annealing, a flexible 2-parameter thermal degradation model of Taq activity, and support for atypical regimes relevant to ACA, such as asymmetric primer concentrations. A global optimization algorithm identifies thermodynamic hyperparameters linking assay characteristics to AC features. In comparison with experimental data, DYNAMIC reproduces AC variations driven by changes in primer and probe concentrations, captures late-cycle efficiency loss from enzyme degradation, and yields realistic cycle threshold trends across orders of magnitude in input DNA. Tested against a dilution series of four previously published assays, the model robustly identifies key kinetic hyperparameters. Overall, DYNAMIC provides a mechanistic framework for predicting TaqMan PCR kinetics that can streamline assay development, reduce empirical optimization, and support the rational design of multiplex panels where target identification relies on AI-enabled classification of AC features.
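DYNAMIC's kinetic scheme is far richer than can be shown here; purely to illustrate why amplification curves are sigmoidal, the toy per-cycle model below doubles the template with an efficiency that falls as primers deplete. The saturation rule and all parameter values are assumptions, not DYNAMIC's equations.

```python
# Toy per-cycle amplification model (not DYNAMIC's kinetic scheme): template
# grows with an efficiency that declines as primers are consumed, yielding
# the characteristic sigmoidal amplification curve.
import numpy as np

cycles = 40
dna = np.empty(cycles)
n, primers = 1e3, 1e12                   # initial template copies, primer molecules
for c in range(cycles):
    eff = primers / (primers + n)        # assumed saturation rule for efficiency
    new = n * eff                        # new strands made this cycle
    primers -= new                       # each new strand consumes a primer
    n += new
    dna[c] = n
fluorescence = dna / dna.max()           # normalized amplification curve (sigmoid)
```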
Identifying the constraints hindering industry-education integration (IEI) in vocational education is a critical prerequisite for promoting its high-quality development. However, existing research has yet to establish a systematic solution to this core issue. Therefore, this study takes Shenzhen, China, as a case study and adopts an empirical research approach to construct an analytical framework of "factor identification-mechanism analysis-strategy formulation," aiming to thoroughly investigate the key constraints on IEI development in vocational education. First, based on a literature review and questionnaire survey, the study identifies 12 critical constraining factors of IEI. Subsequently, through expert interviews and Interpretative Structural Modelling (ISM), it reveals the underlying mechanisms of these constraints, classifying the 12 factors into five hierarchical levels. These factors form 16 constraint pathways, with inadequate policies, weak foundations for school-enterprise collaboration, insufficient institutional commitment, and ineffective communication mechanisms identified as critical bottom-level factors. These indirectly influence intermediate factors such as hardware/software conditions, alignment of talent cultivation philosophies, education-market demand matching, collaboration channel efficiency, and stakeholder interest fulfillment. Consequently, they diminish corporate engagement enthusiasm, blur responsibility-rights boundaries between stakeholders, and dampen faculty participation motivation, ultimately impeding IEI progress. Finally, the study proposes targeted countermeasures, emphasizing tripartite collaboration among the government, vocational institutions, and enterprises. Key strategies include strengthening legal safeguards, consolidating industry-education collaboration foundations, and proactively fulfilling educational responsibilities. The findings resolve the dilemmas constraining IEI development, providing policymakers with evidence-based references and offering practical guidance for deeper collaboration between vocational institutions and enterprises. This study holds significant theoretical and practical value for advancing high-quality vocational education.
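For readers unfamiliar with ISM, the sketch below shows its two core computations on an invented 3-factor example: transitive closure of a binary influence matrix into a reachability matrix, followed by iterative level partitioning. The study's actual 12-factor matrix is not public.

```python
# Sketch of the core ISM computations: transitive closure of a binary
# influence matrix to get the reachability matrix, then iterative level
# partitioning. The 3-factor chain below is illustrative only.
import numpy as np

A = np.array([[0, 1, 0],     # factor 0 influences factor 1
              [0, 0, 1],     # factor 1 influences factor 2
              [0, 0, 0]])

R = ((A + np.eye(3, dtype=int)) > 0).astype(int)
while True:                  # repeated boolean squaring until stable
    R2 = ((R @ R) > 0).astype(int)
    if np.array_equal(R2, R):
        break
    R = R2

levels, remaining = [], set(range(3))
while remaining:
    # a factor sits at the current (top) level when its reachability set,
    # restricted to remaining factors, is contained in its antecedent set
    level = {i for i in remaining
             if {j for j in remaining if R[i, j]} <= {j for j in remaining if R[j, i]}}
    levels.append(sorted(level))
    remaining -= level
print(levels)                # [[2], [1], [0]]: factor 0 is the bottom-level driver
```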
The rapid rise of artificial intelligence-based contactless sensors (AI-CS) is expected to significantly transform how patients are measured, monitored, and understood through a versatile, noninvasive approach to data collection and health assessment. However, there is a lack of empirical research specifically focusing on AI-CS in health. Moreover, existing studies tend to focus on medical or patient perspectives, while neglecting other stakeholders such as researchers, political actors, or the general public. The study aims to provide an in-depth empirical ethical analysis and, through a multistakeholder approach, a uniquely comprehensive overview by addressing the research question: what are the attitudes of different stakeholders (patients, health care professionals, researchers, political stakeholders, and the general public) toward AI-CS and their applications in health? We conducted a cross-sectional study with 104 participants using a semistructured interview guide. Interviews were analyzed using qualitative content analysis with ATLAS.ti software (ATLAS.ti Scientific Software Development GmbH), following a 3-component model of feelings, thoughts, and behavioral aspects. The results of the study provide an in-depth analysis of attitudes toward AI-CS in health among different stakeholders. Overall, the results show a high level of openness to AI-CS in health across all stakeholder groups. In terms of feelings and their correlation with behavioral aspects, 2 key trends emerged: first, greater experience and knowledge correlated with a reduced tendency to react emotionally; second, participants with positive experiences with technologies were generally more open and positive toward contactless sensors. The combined findings on thoughts and behavioral aspects highlighted 3 key tensions: around contact(lessness) and the importance and ambivalence of touch; between protection and surveillance (particularly regarding path and context dependency); and between the benefits and challenges of unobtrusiveness (especially in relation to control and governance implications). In addition, the analysis revealed the need for information and consent about AI-CS and clarified possible technical implementations and fields of application. This study provides a comprehensive and empirically grounded ethical analysis of stakeholder attitudes toward AI-CS in health. The findings offer valuable guidance for the responsible development, implementation, and governance of AI-CS in health care contexts.
A rigorous understanding of structure-property relationships is pivotal for the rational design of high-performance catalysts, which requires descriptors that are both predictive and physically interpretable. To overcome the limitations of empirical metrics and the opacity of machine-learning "black boxes", we propose a 4-stage Artificial Intelligence-guided Logical Descriptors (AILD) framework: (1) Knowledge-driven feature generation, (2) Computational data and preprocessing, (3) Feature engineering and predictive modeling, and (4) Mechanistic interpretation and experimental guidance. By uniting domain knowledge, first-principles simulation, and interpretable machine learning, this framework links mechanism to design and advances catalyst development beyond empirical trial-and-error toward a knowledge-driven, science-based paradigm. To validate these logical descriptors, we have implemented them in the open-source AILD software, enabling reproducible research and accelerated rational catalyst design.
To explore a mathematical method for calculating key articulator parameters based on mandibular movement trajectory data, and to compare the results of this method with reference values provided by an existing foreign mandibular movement recording system, thereby establishing an algorithmic basis for developing a domestic mandibular movement recording system. Twenty healthy volunteers (7 males, 13 females) meeting the inclusion criteria were recruited, with a mean age of (31±8) years. Mandibular movement data during protrusive and left/right lateral movements were recorded using the JMA Optic foreign mandibular movement recording system. A reference plane coordinate system was established using reverse engineering software, the multi-source maxillofacial data were integrated, and the coordinate systems were then unified. The condylar apex, medial condylar pole, lateral condylar pole, condylar center, empirical hinge axis point, and mandibular incisor point were selected as reference points for the mandibular movement trajectories. Three-dimensional movement trajectories were generated for each reference point to calculate the sagittal condylar inclination (SCI), transverse condylar inclination (TCI), immediate side shift (ISS), incisal guidance inclination, and canine guidance inclination. The calculation results from different reference points served as distinct experimental groups. Reference values provided by the JMA Optic system were used as the control group for comparative analysis. The SCI values of all the experimental groups were significantly higher than those of the control group (P < 0.001), with a systematic positive bias of approximately 3.1°, though the limits of agreement were relatively narrow. The TCI results varied depending on the reference point: only the condylar apex group (5.7°±6.1°) was significantly lower than the control group (9.2°±6.6°) (t=5.023, P < 0.001); differences between the remaining groups and the control group were not statistically significant. The empirical hinge axis point group showed the smallest mean bias and the narrowest limits of agreement, indicating optimal consistency with the control group's TCI. The ISS values were 0.0 (0.0) mm in all the groups. The incisal guidance inclination of the mandibular incisor point group (43.1°±8.6°) was significantly lower than that of the control group (50.6°±13.7°) (t=3.749, P=0.001), with poor consistency. However, the canine guidance inclination of the mandibular incisor point group showed no statistically significant difference compared with the control group (t=-1.873, P=0.069), with acceptable consistency. This study proposed a mathematical method for calculating key articulator parameters based on mandibular movement trajectory data, with a clear and traceable computational pathway. The proposed method showed acceptable consistency with the JMA Optic system algorithm in calculating TCI, ISS, and canine guidance inclination, but poor consistency in calculating SCI and incisal guidance inclination. The selection of reference points directly influenced the parameter calculation results. This mathematical method provides a reliable theoretical foundation for achieving precise, personalized articulator parameter settings.
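As a minimal illustration of how an inclination parameter can be derived from a reference-point trajectory, the sketch below computes a sagittal condylar inclination as the angle of a protrusive condylar path below a horizontal reference plane; the coordinates, evaluation span, and sign conventions are assumptions, not the paper's algorithm.

```python
# Illustrative calculation of a sagittal condylar inclination (SCI) from a
# condylar reference-point trajectory: the angle between the protrusive path
# (projected onto the sagittal plane) and the horizontal reference plane.
import numpy as np

# trajectory of one condylar reference point; columns = (x forward, y lateral, z up), mm
path = np.array([[0.0, 0.0, 0.0],
                 [2.0, 0.1, -1.1],
                 [4.0, 0.2, -2.3],
                 [5.0, 0.2, -3.0]])

start, end = path[0], path[-1]
dx = end[0] - start[0]                  # anterior travel in the sagittal plane
dz = end[2] - start[2]                  # inferior drop along the path
sci = np.degrees(np.arctan2(-dz, dx))   # inclination below the reference plane
print(f"SCI ≈ {sci:.1f}°")              # ≈ 31.0° for these sample points
```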
In gamma-ray spectrometry, efficiency calibration using geometry-matched certified reference materials is often impractical for irregularly shaped radioactive sources and off-axis setups. Semi-empirical efficiency calibration software packages have been developed to address this issue; however, because these tools rely on simplified assumptions and user-defined parameters, they have limitations in covering complex source-detector geometries, leading to significant deviations in activity estimation. This study presents a three-dimensional scanner-based method that directly models the complete source-detector geometry and incorporates their relative positions into Monte Carlo simulations for efficiency calibration. This framework enables precise activity estimation for complex geometries without relying on geometric simplifications. Experimental validations were performed using fabricated showerhead- and turbine-shaped reference materials, and the results were compared with those obtained using commercial efficiency transfer software. The proposed method reproduced certified activity values with deviations of up to ±15%, whereas the commercial software exhibited larger deviations depending on the assumed model dimensions. This result highlights the potential of the proposed method for reliable in-situ gamma spectrometry of irregularly shaped materials and nonstandardized measurement conditions.
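The full transport simulation is beyond a short example, but the toy Monte Carlo below conveys the underlying idea of simulation-based efficiency estimation: sample isotropic emission directions from a point source and count the fraction intersecting a disk detector. The geometry is invented, and no photon interactions are modeled.

```python
# Toy Monte Carlo estimate of geometric detection efficiency for a point
# source above a disk detector; a stand-in for the full transport simulation
# used in the paper (no attenuation or detector response is modeled).
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000
src_height = 5.0          # cm above the detector plane (assumed geometry)
det_radius = 3.0          # cm

# isotropic emission directions
cos_t = rng.uniform(-1, 1, n)
sin_t = np.sqrt(1 - cos_t**2)

down = cos_t < 0                            # photons heading toward the detector
scale = -src_height / cos_t[down]           # path length to the detector plane
r_hit = scale * sin_t[down]                 # radial distance at the plane
efficiency = np.count_nonzero(r_hit <= det_radius) / n
print(f"geometric efficiency ≈ {efficiency:.4f}")
```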
In Coiled Tubing (CT) acoustic telemetry, the reliability of surface signal reception is severely challenged by the "contact dead zone" of traditional probes and complex nonstationary environmental noise. To address these issues, this paper proposes a hardware-software integrated solution for high-fidelity signal extraction. In terms of hardware, a novel pickup probe based on the micro-lever principle is developed. By utilizing a pivoted lever structure with an optimized arm ratio of 2.6 to 1 and a fully pressure-balanced mechanism, the design physically overcomes the contact dead zone inherent in traditional pressure-compensating probes and effectively isolates low-frequency common-mode interference through a lateral floating architecture. In terms of software, a joint denoising model combining Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) and wavelet thresholding is proposed. A cross-correlation coefficient criterion is introduced to adaptively screen intrinsic mode functions and eliminate residual fluid turbulence noise. Field experiments on a 1500 ft full-scale circulation loop demonstrate that the proposed probe improves the detection sensitivity of the radial breathing mode by approximately 20.6 dB compared to the baseline, while effectively eliminating stick-slip friction noise during dynamic tripping. Furthermore, the joint algorithm increases the signal-to-noise ratio by an additional 16.9 dB under typical pumping conditions of 0.5 bpm, with a normalized cross-correlation exceeding 0.96. These results verify that the proposed method effectively resolves the bottleneck of weak-signal detection in deep wells, providing robust technical support for CT telemetry operations.
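The authors' code is not public; the following sketch approximates the joint denoiser they describe using PyEMD's CEEMDAN and PyWavelets, with a cross-correlation criterion for IMF screening. The synthetic signal, correlation threshold, and wavelet settings are all assumptions.

```python
# Sketch of the joint CEEMDAN + wavelet-threshold denoiser with correlation-based
# IMF screening (assumes PyEMD and PyWavelets; the noisy signal is synthetic).
import numpy as np
import pywt
from PyEMD import CEEMDAN

fs = 2000
t = np.arange(0, 1.0, 1 / fs)
clean = np.sin(2 * np.pi * 40 * t)                       # stand-in telemetry tone
noisy = clean + 0.8 * np.random.default_rng(1).standard_normal(t.size)

imfs = CEEMDAN()(noisy)                                  # ensemble decomposition

kept = []
for imf in imfs:
    # cross-correlation criterion: keep IMFs that resemble the observed signal
    rho = np.corrcoef(imf, noisy)[0, 1]
    if abs(rho) < 0.1:                                   # threshold is illustrative
        continue                                         # discard noise-dominated IMF
    # wavelet-threshold the retained IMF to suppress residual turbulence noise
    coeffs = pywt.wavedec(imf, "db4", level=4)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # robust noise estimate
    thr = sigma * np.sqrt(2 * np.log(imf.size))          # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, "soft") for c in coeffs[1:]]
    kept.append(pywt.waverec(coeffs, "db4")[: imf.size])

denoised = np.sum(kept, axis=0)
```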
Quantum key distribution (QKD) provides a foundation for information-theoretic security based on quantum mechanics, yet its practical deployment is often constrained by intrinsically low secure key generation rates, particularly in high-bandwidth or low-latency settings. This work introduces a hybrid cryptographic technique that integrates conventional QKD with deterministic chaos, modeled using the Lorenz attractor, to provide a software-based enhancement of the effective key expansion rate. From a short 20-bit QKD seed, the system generates long bitstreams within milliseconds; although these streams exhibit high empirical randomness, their fundamental entropy remains bounded by the seed, consistent with standard cryptographic principles. The method employs the exponential divergence of chaotic trajectories, such that even minute uncertainties in an adversary's estimate of the initial state lead to rapid desynchronization and, as established in Appendix A, an exponential decay of Eve's mutual information with respect to the expanded key. Simulation results confirm this theoretical behavior and demonstrate an effective rate amplification exceeding two orders of magnitude over the baseline QKD seed rate. The proposed chaotic expansion operates entirely in software and requires no modifications to existing QKD hardware, offering a practical pathway to enhance throughput for applications ranging from secure video communication to low-latency IoT and edge-computing environments.
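As a hedged sketch of the expansion idea (not the paper's exact construction), the following seeds the Lorenz system with a 20-bit value and thresholds the resulting trajectory into a long bitstream; the seed-to-state mapping, integration span, and bit-extraction rule are assumptions.

```python
# Hedged sketch of chaotic key expansion: a 20-bit seed perturbs the Lorenz
# initial state, and the trajectory is thresholded into a long bitstream.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

seed = 0b10110010111001010011                     # 20-bit QKD seed (example value)
x0 = 1.0 + seed / 2**20                           # map the seed into the initial state
sol = solve_ivp(lorenz, (0, 200), [x0, 1.0, 1.0],
                t_eval=np.linspace(5, 200, 100_000),  # skip transient, dense sampling
                rtol=1e-9, atol=1e-9)

bits = (sol.y[0] > np.median(sol.y[0])).astype(np.uint8)  # threshold x(t) into bits
print(f"expanded {bits.size} bits from a 20-bit seed; mean = {bits.mean():.3f}")
```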
Cast iron whose structure simultaneously contains graphite precipitates in various forms, with controlled proportions of the individual forms, has been named "Vari-Morph" (VM) cast iron by the authors. The authors have been researching the properties of such cast iron for many years, and the results are being published successively. This new type of cast iron, not covered by national (Polish) or European standards, is intended as a material for special-purpose castings. These castings have unique requirements for a set of properties: physical, mechanical, and functional. VM cast iron is characterized by a set of properties that cannot be achieved when the graphite is uniform in shape. The desired properties of VM cast iron are achieved by controlling the morphology of graphite precipitates and the proportion of individual forms in the structure, rather than by changing the matrix. To quantitatively describe graphite precipitates, a proprietary method for determining the graphite shape indicator (fK) was developed. Graphite precipitate analysis is performed by scanning a microscopic image of the metallographic specimen; then, using the Tescan ESSENCE™ Unified Control for Imaging and Analysis software, each precipitate is described using surface metrology parameters. The final value of the graphite shape indicator (fK) is calculated as a weighted average over all precipitates present in the analysis field. Empirical relationships between the fK indicator and a selected group of physical, mechanical, and functional properties of VM cast iron were determined. Studies have demonstrated a very well-correlated relationship between the fK indicator in VM cast iron and the ultrasonic wave velocity (CL). The relationship CL = f(fK) is characterized by a very high correlation coefficient of R > 0.90. In previous publications, the authors presented the relationships, as functions of the fK index, between the fK indicator and physical properties such as thermal conductivity (λ), density (ρ), strength (Rm), elongation (A5), and quality index (IQ), and functional properties such as low-cycle mechanical fatigue resistance (Zc), thermal fatigue resistance (N), and cast iron tightness (H). Those studies concerned VM cast iron with a ferritic matrix. This work contains new empirical relationships that extend the previous studies: the newly developed relationships replace the fK shape indicator with the ultrasonic wave velocity determined in cast iron with a specific fK indicator value. This resulted in a number of practical dependencies, including: λ = f(CL); ρ = f(CL); ED = f(CL); Rm = f(CL); A5 = f(CL); IQ = f(CL); N = f(CL); Zc = f(CL); H = f(CL). These relationships make it possible to measure the wave velocity in a Vari-Morph iron casting (containing various forms of graphite) and thereby determine a number of characteristics and properties of the iron from which the casting was made. It is thus possible to assess the suitability of a casting with a specific structure for operation under selected conditions.
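The proprietary fK method is not disclosed beyond the description above; the fragment below merely illustrates the stated weighted-average step, assuming area weighting and invented per-precipitate shape factors.

```python
# Illustrative computation of the graphite shape indicator fK as an
# area-weighted average over precipitates in one analysis field
# (the weighting scheme and values are assumptions, not the authors' method).
import numpy as np

areas = np.array([120.0, 85.0, 240.0, 60.0])      # precipitate areas, µm² (example)
shapes = np.array([0.95, 0.40, 0.80, 0.15])       # per-precipitate shape factors

fK = np.average(shapes, weights=areas)            # weighted average over the field
print(f"fK = {fK:.3f}")
```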
Vectorcardiography (VCG) provides the spatial orientation and magnitude of the electrical activity of the heart, but its complexity and the lack of interactive resources have limited its application in medical and bioengineering education. To bridge this pedagogical gap, we developed a low-cost, real-time vectorcardiography simulator integrating a physical wet-lab interface and digital signal processing. The hardware setup consists of a conductive medium and electrodes, enabling users to manually simulate a cardiac vector and observe the resulting electrophysiological signals in real time. The system was validated through three complementary methods: theoretical conformity analysis, emulation of real ECG data from a database, and user-driven waveform generation tests. A pilot study with medical students and instructors provided empirical evidence of the educational value of the device, indicating that the active, hands-on nature of the system might foster deeper cognitive engagement and facilitate the integration of complex electrophysiological concepts. By providing open-source software and cost-effective hardware, this simulator offers a scalable solution to enhance cardiac electrophysiology education and promote the broader adoption of VCG in clinical practice.
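To make the vector-to-lead relationship concrete, the toy sketch below projects a schematic rotating cardiac vector onto Einthoven frontal-plane lead axes; this is a simplified stand-in for the simulator's conductive-medium physics, and the waveform is invented.

```python
# Toy mapping from a rotating cardiac vector to limb-lead voltages via
# projection onto Einthoven lead axes (a simplified stand-in for the
# simulator's conductive-medium measurement).
import numpy as np

theta = np.linspace(0, 2 * np.pi, 500)            # one cardiac cycle (schematic)
heart_vec = np.stack([np.cos(theta), np.sin(theta)]) * np.exp(-((theta - 2.0) ** 2))

lead_axes = {                                      # unit vectors in the frontal plane
    "I": np.array([1.0, 0.0]),
    "II": np.array([np.cos(np.radians(60)), -np.sin(np.radians(60))]),
    "III": np.array([np.cos(np.radians(120)), -np.sin(np.radians(120))]),
}
leads = {name: ax @ heart_vec for name, ax in lead_axes.items()}
# Einthoven's law holds by construction: lead II ≈ lead I + lead III
assert np.allclose(leads["II"], leads["I"] + leads["III"], atol=1e-9)
```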
A wide-ranging model for the viscosity surface of methane (CH4) was developed with a range of validity from the triple-point temperature to 625 K and pressures up to 1000 MPa. An extensive literature survey was undertaken and, to the best of our knowledge, all available experimental data were considered in the development of the model. The correlation incorporates recent ab initio results for the dilute-gas contribution, Rainwater-Friend theory for the initial density dependence, and an empirical contribution for higher densities obtained using recently developed open-source symbolic regression software. The estimated uncertainty of the correlation (at k = 2) varies from a low of 0.13 % for the gas at pressures below 1 MPa over temperatures from 210 K to 392 K, to 0.8 % to 2 % depending on the temperature for the mid-pressure range of 1 MPa < p < 50 MPa, and is 4 % for pressures from 50 MPa to 1000 MPa for temperatures from 223 K to 625 K. In the liquid region at pressures up to 33 MPa, the estimated uncertainty is 3 %. The online version contains supplementary material available at 10.1007/s10765-025-03690-7.
Dynamic programming algorithms within the NUPACK software suite enable analysis of equilibrium base-pairing properties for complex and test tube ensembles containing arbitrary numbers of interacting nucleic acid strands. Currently, calculations are limited to single-material systems that are either all-RNA or all-DNA. Here, to enable analysis of mixed-material systems that are critical for modern applications in vitro, in situ, and in vivo, we develop physical models and dynamic programming algorithms that allow the material of the system to be specified at nucleotide resolution. Free energy parameter sets are constructed for both RNA/DNA and RNA/2'OMe-RNA mixed-material systems by combining available empirical mixed-material parameters with single-material parameter sets to enable treatment of the full complex and test tube ensembles. New dynamic programming recursions account for the material of each nucleotide throughout the recursive process. For a complex with N nucleotides, the mixed-material dynamic programming algorithms maintain the O(N3) time complexity of the single-material algorithms, enabling efficient calculation of diverse physical quantities over complex and test tube ensembles (e.g., complex partition function, equilibrium complex concentrations, equilibrium base-pairing probabilities, minimum free energy secondary structure(s), and Boltzmann-sampled secondary structures) at a cost increase of roughly 2.0-3.5×. The results of existing single-material algorithms are exactly reproduced when applying the new mixed-material algorithms to single-material systems. Accuracy is significantly enhanced using mixed-material models and algorithms to predict RNA/DNA and RNA/2'OMe-RNA duplex melting temperatures from the experimental literature as well as RNA/DNA melt profiles from new experiments. Mixed-material analyses can be performed online using the NUPACK web app (www.nupack.org) or locally using the NUPACK Python module.
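For orientation, the minimal call pattern below shows how ensemble quantities are computed with the NUPACK Python module for a single-material system, which the abstract states the new mixed-material algorithms reproduce exactly; the sequences are arbitrary, and the syntax for nucleotide-resolution material specification should be taken from the NUPACK documentation rather than from this sketch.

```python
# Minimal single-material NUPACK 4 usage sketch (sequences are arbitrary).
# The mixed-material, nucleotide-resolution specification described in the
# paper is a new feature; consult the NUPACK docs for its exact syntax.
from nupack import Model, pfunc, mfe

model = Model(material='rna', celsius=37)        # single-material ensemble

strands = ['GGGAAACCC', 'GGGUUUCCC']             # two interacting RNA strands
pf, dG = pfunc(strands=strands, model=model)     # complex partition function
structures = mfe(strands=strands, model=model)   # minimum free energy structure(s)

print(f"free energy of the ensemble: {dG:.2f} kcal/mol")
print(f"MFE structure: {structures[0].structure}, dG = {structures[0].energy:.2f}")
```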
Global urbanization has led to an increasing scale of construction projects, making the optimization of construction project organization design a critical task in engineering management. However, conventional methods relying on empirical decision-making suffer from low resource allocation efficiency, difficulty coordinating multi-objective conflicts, and insufficient dynamic adjustment capabilities. To address these issues, we propose the first multi-objective extension of the Animated Oat Optimization algorithm (MOAOO), a pioneering contribution that transforms the single-objective AOO into a multi-objective optimizer for construction project organization design. The developed algorithm fundamentally extends the biological mechanism of Animated Oat Optimization by introducing several key innovations: (a) a novel hybrid position update rule combining elite reference points and stochastic perturbations to prevent premature convergence; (b) an innovative three-layer constraint processing mechanism ensuring the generation of feasible solutions; and (c) a dual-threshold convergence monitoring system for early termination. Notably, we establish MOAOO as the inaugural multi-objective variant of AOO, integrating dynamic elite retention strategies, non-dominated sorting, and dynamic archive mechanisms to enable effective collaborative optimization of three conflicting goals. Extensive experiments on the ZDT test functions demonstrate that MOAOO is competitive with advanced algorithms such as Pre-DEMO, MOEA/D-OED, and Pi-MOEA in terms of the hypervolume, inverted generational distance, and spacing metrics, with improvements in certain configurations. In an engineering case study, MOAOO reduces resource fluctuation by 72.7% in the compromise solution while achieving a balanced duration (279 days) and cost ($1.34 M). Moreover, the proposed algorithm converges in 118 iterations on average, verifying its practical value in construction scheduling.
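MOAOO's internals are not reproduced here; the sketch below implements only the generic non-dominated sorting component that the algorithm integrates, applied to four invented two-objective points.

```python
# Sketch of the non-dominated sorting step that MOAOO integrates (a generic
# Pareto-front ranking, not the authors' implementation).
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return np.all(a <= b) and np.any(a < b)

def non_dominated_sort(objs):
    """Assign each solution a front index: 0 = Pareto-optimal set, etc."""
    n = len(objs)
    fronts, rank = [[]], np.full(n, -1)
    dominated_by = [set() for _ in range(n)]
    counts = np.zeros(n, dtype=int)       # how many solutions dominate i
    for i in range(n):
        for j in range(n):
            if dominates(objs[i], objs[j]):
                dominated_by[i].add(j)
            elif dominates(objs[j], objs[i]):
                counts[i] += 1
        if counts[i] == 0:
            rank[i] = 0
            fronts[0].append(i)
    f = 0
    while fronts[f]:                      # peel off successive fronts
        nxt = []
        for i in fronts[f]:
            for j in dominated_by[i]:
                counts[j] -= 1
                if counts[j] == 0:
                    rank[j] = f + 1
                    nxt.append(j)
        fronts.append(nxt)
        f += 1
    return rank

objs = np.array([[1.0, 5.0], [2.0, 3.0], [3.0, 4.0], [4.0, 1.0]])
print(non_dominated_sort(objs))   # [0 0 1 0]: only [3,4] is dominated (by [2,3])
```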