Today, in general, embedded software is distributed onto networks and structured into logical components that interact asynchronously by exchanging messages. The software system is connected to sensors, actuators, human-machine interfaces and networks. In this paper we study fundamental models of composed embedded software systems and their properties, identify and describe various basic views, and show how they are related. We consider, in particular, models of data, states, interfaces, functionality, hierarchically composed systems, and processes. We study relationships by abstraction and refinement as well as forms of composition and modularity. In particular, we introduce a comprehensive mathematical model and a corresponding mathematical theory for composed systems, their essential views and their relationships. We introduce two methodologically essential, complementary and orthogonal concepts for the structured modeling of multifunctional embedded systems in software and systems engineering and their scientific foundation. One approach mainly addresses tasks in requirements engineering and the specification of the comprehensive user functionality of multifunctional systems in terms of their functions, features and services. The other approach essentially addresses the design phase, with its task of developing logical architectures formed by networks of interactive components that are specified by their interface behavior.
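As a minimal illustration of components specified by interface behavior, the sketch below models components as functions on finite message streams and forms a network by connecting one component's output channel to another's input channel. The list-based streams and the component names are illustrative assumptions, not the paper's formal model.

```python
# Components as functions on message streams (finite lists here); composition
# connects an output channel to an input channel. Names are illustrative.
def doubler(xs):                 # component 1: maps each input message
    return [2 * x for x in xs]

def accumulator(ys):             # component 2: emits running sums
    out, total = [], 0
    for y in ys:
        total += y
        out.append(total)
    return out

def compose(f, g):
    """Pipeline composition: g's input channel is f's output channel."""
    return lambda xs: g(f(xs))

pipeline = compose(doubler, accumulator)
print(pipeline([1, 2, 3]))       # [2, 6, 12]
```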
Today, in silico studies and trial simulations already complement experimental approaches in pharmaceutical R&D and have become indispensable tools for decision making and communication with regulatory agencies. While biology is multiscale by nature, project work and software tools usually focus on isolated aspects of drug action, such as pharmacokinetics at the organism scale or pharmacodynamic interaction on the molecular level. We present a modeling and simulation software platform consisting of PK-Sim® and MoBi® capable of building and simulating models that integrate across biological scales. A prototypical multiscale model for the progression of a pancreatic tumor and its response to pharmacotherapy is constructed, and virtual patients are treated with a prodrug activated by hepatic metabolization. Tumor growth is driven by signal transduction leading to cell cycle transition and proliferation. Free tumor concentrations of the active metabolite inhibit Raf kinase in the signaling cascade and thereby cell cycle progression. In a virtual clinical study, the individual therapeutic outcome of the chemotherapeutic intervention is simulated for a large population with heterogeneous genomic background. The platform thereby allows efficient model building and integration of biological knowledge and prior data from all biological scales. Experimental in vitro model systems can be linked with observations in animal experiments and clinical trials. The interplay between patients, diseases, and drugs, and topics with high clinical relevance such as the role of pharmacogenomics and drug-drug or drug-metabolite interactions, can be addressed using this mechanistic, insight-driven multiscale modeling approach.
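A rough sketch of the coupling pattern described above: a one-compartment pharmacokinetic model for the active metabolite drives Emax-type inhibition of tumor proliferation. The structure and every parameter value are hypothetical placeholders for illustration, not PK-Sim/MoBi models or output.

```python
from scipy.integrate import solve_ivp

# Hypothetical parameters: metabolite elimination rate, tumor growth rate,
# maximal inhibition, and the concentration of half-maximal inhibition.
k_el, k_grow, imax, ic50 = 0.1, 0.05, 0.9, 1.0   # /h, /h, -, mg/L

def rhs(t, y):
    c, tumor = y                                  # free metabolite, tumor burden
    inhibition = imax * c / (ic50 + c)            # Emax block of the Raf cascade
    return [-k_el * c,                            # metabolite elimination
            k_grow * (1.0 - inhibition) * tumor]  # inhibited proliferation

sol = solve_ivp(rhs, (0.0, 72.0), [5.0, 1.0])     # c0 = 5 mg/L, tumor0 = 1
print(f"relative tumor burden after 72 h: {sol.y[1, -1]:.2f}")
```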
EQ3/6 is a software package for geochemical modeling of aqueous systems. This report describes version 7.0. The major components of the package include: EQ3NR, a speciation-solubility code; EQ6, a reaction path code which models water/rock interaction or fluid mixing in either a pure reaction progress mode or a time mode; EQPT, a data file preprocessor; EQLIB, a supporting software library; and five supporting thermodynamic data files. The software deals with the concepts of thermodynamic equilibrium, thermodynamic disequilibrium, and reaction kinetics. The five supporting data files contain both standard state and activity coefficient-related data. Three support the use of the Davies or B-dot equations for the activity coefficients; the other two support the use of Pitzer's equations. The temperature range of the thermodynamic data on the data files varies from 25 °C only to 0-300 °C. EQPT takes a formatted data file (a data0 file) and writes an unformatted near-equivalent called a data1 file, which is the form actually read by EQ3NR and EQ6. EQ3NR is useful for analyzing groundwater chemistry data, calculating solubility limits, and determining whether certain reactions are in states of partial equilibrium or disequilibrium. It is also required to initialize an EQ6 calculation. EQ6 models the consequences of reacting an aqueous solution with a set of reactants which react irreversibly. It can also model fluid mixing and the consequences of changes in temperature. This code operates both in a pure reaction progress frame and in a time frame.
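As an illustration of the activity-coefficient options mentioned above, the sketch below evaluates the B-dot equation, log10 γ_i = −A z_i²√I / (1 + å_i B √I) + Ḃ·I. In EQ3/6 the temperature-dependent coefficients come from the data files; the approximate 25 °C constants used here are assumptions for illustration only.

```python
import math

def bdot_log_gamma(z, a0, ionic_strength, A=0.509, B=0.328, bdot=0.041):
    """log10 activity coefficient from the B-dot equation.

    z: ion charge; a0: ion size parameter (Angstrom); A, B, bdot:
    temperature-dependent coefficients (approximate 25 degC values here;
    EQ3/6 reads them from its data files)."""
    sqrt_i = math.sqrt(ionic_strength)
    return -A * z**2 * sqrt_i / (1.0 + a0 * B * sqrt_i) + bdot * ionic_strength

# Example: a singly charged ion (a0 ~ 4 Angstrom) at ionic strength 0.1 molal.
log_g = bdot_log_gamma(z=1, a0=4.0, ionic_strength=0.1)
print(f"gamma ~ {10**log_g:.3f}")                  # roughly 0.78
```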
Quite often, failures in network-based services and server systems may not be accidental, but rather caused by deliberate security intrusions. We would like such systems to either completely preclude the possibility of a security intrusion or be designed to be robust enough to continue functioning despite security attacks. Not only is it important to prevent or tolerate security intrusions, it is equally important to treat security as a QoS attribute on par with, if not more important than, other QoS attributes such as availability and performability. This paper deals with various issues related to quantifying the security attribute of an intrusion tolerant system, such as the SITAR system. A security intrusion and the response of an intrusion tolerant system to the attack are modeled as a random process. This facilitates the use of stochastic modeling techniques to capture the attacker behavior as well as the system's response to a security intrusion. This model is used to analyze and quantify the security attributes of the system. The security quantification analysis is first carried out for steady-state behavior, leading to measures like steady-state availability. By transforming this model to a model with absorbing states, we compute a security measure called the "mean time (or effort) to security failure" and also compute probabilities of security failure due to violations of different security attributes.
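A minimal sketch of the two computations described above, using a hypothetical three-state intrusion model (good, under attack, security-failed) rather than the actual SITAR state space: steady-state probabilities of a continuous-time Markov chain, and mean time to security failure once the failed state is made absorbing.

```python
import numpy as np

# Hypothetical generator matrix (rows sum to 0); rates are for illustration.
# States: 0 = good, 1 = under attack, 2 = security-failed.
Q = np.array([[-0.2,  0.2,  0.0],
              [ 0.5, -0.6,  0.1],
              [ 0.3,  0.0, -0.3]])

# Steady state: solve pi Q = 0 subject to sum(pi) = 1.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
availability = pi[0] + pi[1]              # states where service is delivered

# Mean time to security failure: absorb state 2, then solve Qt m = -1
# over the transient states {0, 1}.
Qt = Q[:2, :2]
mttsf = np.linalg.solve(Qt, -np.ones(2))
print(f"steady-state availability: {availability:.3f}")   # ~0.923
print(f"MTTSF from the good state: {mttsf[0]:.1f}")       # ~40.0
```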
Variability models represent the common and variable features of products in a product line. Since the introduction of FODA in 1990, several variability modeling languages have been proposed in academia and industry, followed by hundreds of research papers on variability models and modeling. However, little is known about the practical use of such languages. We study the constructs, semantics, usage, and associated tools of two variability modeling languages, Kconfig and CDL, which were independently developed outside academia and are used in large and significant software projects. We analyze 128 variability models found in 12 open-source projects using these languages. Our study (1) supports variability modeling research with empirical data on the real-world use of its flagship concepts, (2) provides requirements for concepts and mechanisms that are not commonly considered in academic techniques, and (3) challenges assumptions about the size and complexity of variability models made in academic papers. These results are of interest to researchers working on variability modeling and analysis techniques and to designers of tools such as feature dependency checkers and interactive product configurators.
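To make the flagship concept concrete, here is a toy sketch of the kind of consistency check a feature dependency checker performs, using a hypothetical boolean feature model in the spirit of Kconfig's depends-on clauses (real Kconfig also has tristate values, choices, and selects, which this ignores).

```python
# Hypothetical feature model: each feature lists the features it depends on.
DEPENDS = {
    "USB_STORAGE": ["USB", "SCSI"],
    "USB": [],
    "SCSI": [],
    "NFS": ["NET"],
    "NET": [],
}

def invalid_selections(enabled):
    """Return enabled features whose dependencies are not all enabled."""
    return [f for f in enabled
            if any(dep not in enabled for dep in DEPENDS.get(f, []))]

config = {"USB_STORAGE", "USB"}           # SCSI missing -> violation
print(invalid_selections(config))         # ['USB_STORAGE']
```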
An integrated molecular modeling system for designing and studying organic and bioorganic molecules and their molecular complexes using molecular mechanics is described. The graphically controlled, atom-based system allows the construction, display and manipulation of molecules and complexes having as many as 10,000 atoms and provides interactive, state-of-the-art molecular mechanics on any subset of up to 1,000 atoms. The system semiautomates the graphical construction and analysis of complex structures ranging from polycyclic organic molecules to biopolymers to mixed molecular complexes. We have placed emphasis on providing effective searches of conformational space by a number of different methods and on highly optimized molecular mechanics energy calculations using widely used force fields which are supplied as external files. Little experience is required to operate the system effectively and even novices can use it to carry out sophisticated modeling operations. The software has been designed to run on Digital Equipment Corporation VAX computers interfaced to a variety of graphics devices ranging from inexpensive monochrome terminals to the sophisticated graphics displays of the Evans & Sutherland PS300 series.
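For readers unfamiliar with the mechanics behind such systems, the sketch below evaluates two representative molecular-mechanics energy terms, a harmonic bond stretch and a 12-6 Lennard-Jones nonbonded interaction. The functional forms are standard; the parameter values are illustrative, not taken from any of the force-field files the system supports.

```python
def bond_energy(r, r0=1.53, k=300.0):
    """Harmonic bond-stretch term E = k (r - r0)^2; illustrative parameters
    (equilibrium length in Angstrom, k in kcal/mol/A^2)."""
    return k * (r - r0) ** 2

def lj_energy(r, epsilon=0.1, sigma=3.5):
    """12-6 Lennard-Jones nonbonded term."""
    s6 = (sigma / r) ** 6
    return 4.0 * epsilon * (s6 ** 2 - s6)

# Steric energy of a toy fragment: one stretched C-C bond plus one
# nonbonded contact at 4.0 Angstrom.
print(bond_energy(1.55) + lj_energy(4.0))
```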
Over the years, a number of approaches have been proposed for describing systems and software in terms of multiple views represented by models. This branch of modelling, so-called multi-view software and system modelling, comprises a differentiated and complex scientific body of knowledge. With this study, we aimed at identifying, classifying, and evaluating existing solutions for multi-view modelling of software and systems. To this end, we conducted a systematic literature review of the existing state of the art related to the topic. More specifically, we selected and analysed 40 research studies from over 8600 entries. We defined a taxonomy for characterising solutions for multi-view modelling and applied it to the selected studies. Lastly, we analysed and discussed the data extracted from the studies. From the analysed data, we made several observations, among which: (i) there is no uniformity or agreement in the terminology when it comes to multi-view artefact types, (ii) multi-view approaches have not been evaluated in industrial settings, and (iii) there is a lack of support for semantic consistency management, and the community does not appear to consider this a priority. The study results provide an exhaustive overview of the state of the art in multi-view software and systems modelling, useful for both researchers and practitioners.
Component-based software structuring principles are now commonplace at the application level; but componentization is far less established when it comes to building low-level systems software. Although there have been pioneering efforts in applying componentization to systems-building, these efforts have tended to target specific application domains (e.g., embedded systems, operating systems, communications systems, programmable networking environments, or middleware platforms). They also tend to be targeted at specific deployment environments (e.g., standard personal computer (PC) environments, network processors, or microcontrollers). The disadvantage of this narrow targeting is that it fails to maximize the genericity and abstraction potential of the component approach. In this article, we argue for the benefits and feasibility of a generic yet tailorable approach to component-based systems-building that offers a uniform programming model that is applicable in a wide range of systems-oriented target domains and deployment environments. The component model, called OpenCom , is supported by a reflective runtime architecture that is itself built from components. After describing OpenCom and evaluating its performance and overhead characteristics, we present and evaluate two case studies of systems we have built using OpenCom technology, thus illustrating its benefits and its general applicability.
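The following sketch is not the OpenCom API, merely an illustration of the provided-interface/required-receptacle idea that such component models build on: a runtime binds one component's receptacle to another component's interface, after which invocation flows across the binding.

```python
# Illustrative component model: interfaces are provided callables,
# receptacles are required callables filled in by an explicit bind step.
class Component:
    def __init__(self, name):
        self.name = name
        self.interfaces = {}      # provided: name -> callable
        self.receptacles = {}     # required: name -> callable (bound later)

def bind(user, receptacle, provider, interface):
    """Connect a required receptacle to a provided interface."""
    user.receptacles[receptacle] = provider.interfaces[interface]

logger = Component("logger")
logger.interfaces["ILog"] = lambda msg: print(f"[log] {msg}")

app = Component("app")
bind(app, "ILog", logger, "ILog")
app.receptacles["ILog"]("component bound and invoked")
```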
MOTIVATION: Molecular simulation has historically been a low-throughput technique, but faster computers and increasing amounts of genomic and structural data are changing this by enabling large-scale automated simulation of, for instance, many conformers or mutants of biomolecules with or without a range of ligands. At the same time, advances in performance and scaling now make it possible to model complex biomolecular interaction and function in a manner directly testable by experiment. These applications share a need for fast and efficient software that can be deployed at massive scale in clusters, web servers, distributed computing or cloud resources. RESULTS: Here, we present a range of new simulation algorithms and features developed during the past 4 years, leading up to the GROMACS 4.5 software package. The software now automatically handles wide classes of biomolecules, such as proteins, nucleic acids and lipids, and comes with all commonly used force fields for these molecules built in. GROMACS supports several implicit solvent models, as well as new free-energy algorithms, and the software now uses multithreading for efficient parallelization even on low-end systems, including Windows-based workstations. Together with hand-tuned assembly kernels and state-of-the-art parallelization, this provides extremely high performance and cost efficiency for high-throughput as well as massively parallel simulations. AVAILABILITY: GROMACS is open source and free software available from http://www.gromacs.org. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
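None of the code below is GROMACS code; it is a bare velocity-Verlet step, the core per-timestep update an MD engine performs, with a harmonic force standing in for a real force field so the sketch stays self-contained.

```python
import numpy as np

def force(x, k=1.0):
    return -k * x                       # harmonic stand-in for a force field

def velocity_verlet(x, v, dt=0.001, m=1.0, steps=1000):
    """Integrate Newton's equations with the velocity-Verlet scheme."""
    f = force(x)
    for _ in range(steps):
        v_half = v + 0.5 * dt * f / m   # half-kick
        x = x + dt * v_half             # drift
        f = force(x)                    # recompute forces at new positions
        v = v_half + 0.5 * dt * f / m   # second half-kick
    return x, v

# One particle, unit frequency: after t = 1 expect x ~ cos(1), v ~ -sin(1).
print(velocity_verlet(np.array([1.0]), np.array([0.0])))
```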
GEPASI is a software system for modelling chemical and biochemical reaction networks on computers running Microsoft Windows. For any system of up to 45 metabolites and 45 reactions, each with any user-defined or one of 35 predefined rate equations, one can produce trajectories of the metabolite concentrations and obtain a steady state (if one exists). When steady-state solutions are produced, elasticity and control coefficients, as defined in metabolic control analysis, are calculated. GEPASI also allows the automatic generation of a sequence of simulations with different combinations of parameter values, effectively scanning a hyper-solid in parameter space. Together with the ability to produce user-defined columnar data files, these features allow for both very quick and systematic study of biochemical pathway models. The source code (in C) is available on request from the author, and while the user interface depends on having MS-Windows as the operating system, the numerical part is portable to other operating systems. GEPASI is suitable both for research and educational purposes. Although GEPASI was written with biochemical pathways in mind, it can equally be used to simulate other dynamical systems.
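A toy example of what such a simulation amounts to: a two-step pathway S → M → P with Michaelis-Menten kinetics, where S and P are fixed boundary metabolites. The rate laws are standard; all parameter values are made up, and this is not GEPASI's numerical code.

```python
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

S, VM1, KM1, VM2, KM2 = 10.0, 1.0, 0.5, 2.0, 1.0   # illustrative values

def dMdt(t, y):
    m = y[0]
    v1 = VM1 * S / (KM1 + S)          # supply of the intermediate M
    v2 = VM2 * m / (KM2 + m)          # consumption of M
    return [v1 - v2]

traj = solve_ivp(dMdt, (0.0, 200.0), [0.0])          # time course of [M]
m_ss = fsolve(lambda m: dMdt(0.0, m), x0=1.0)[0]     # steady state, if any
print(f"[M] at t = 200: {traj.y[0, -1]:.3f}; steady state: {m_ss:.3f}")
```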
The Sequence Alignment and Modeling system (SAM) is a collection of flexible software tools for creating, refining, and using linear hidden Markov models for biological sequence analysis. The model states can be viewed as representing the sequence of columns in a multiple sequence alignment, with provisions for arbitrary position-dependent insertions and deletions in each sequence. The models are trained on a family of protein or nucleic acid sequences using an expectation-maximization algorithm and a variety of algorithmic heuristics. A trained model can then be used to both generate multiple alignments and search databases for new members of the family. SAM is written in the C programming language for Unix machines and MasPar parallel computers, and includes extensive documentation. The algorithms and methods used by SAM have been described in several pioneering papers from the University of California, Santa Cruz. These papers, as well as the SAM software suite, are available via anonymous ftp to ftp.cse.ucsc.edu in the pub/protein directory, or via the World-Wide Web to http://www.cse.ucsc.edu/research/compbio/.
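The sketch below shows only the forward recursion that such linear HMMs build on, applied to a toy left-to-right model over a DNA alphabet; it omits SAM's match/insert/delete profile architecture and training, and all probabilities are invented.

```python
import numpy as np

alphabet = "ACGT"
T = np.array([[0.8, 0.2, 0.0],        # left-to-right transition matrix
              [0.0, 0.7, 0.3],
              [0.0, 0.0, 1.0]])
E = np.array([[0.7, 0.1, 0.1, 0.1],   # per-state emission probabilities
              [0.1, 0.7, 0.1, 0.1],
              [0.1, 0.1, 0.7, 0.1]])
start = np.array([1.0, 0.0, 0.0])

def forward(seq):
    """P(sequence | model) via the forward algorithm."""
    alpha = start * E[:, alphabet.index(seq[0])]
    for ch in seq[1:]:
        alpha = (alpha @ T) * E[:, alphabet.index(ch)]
    return alpha.sum()

print(forward("AACG"))    # likelihood used to score database sequences
```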
The book presents both the current state of the art in requirements engineering and a systematic method for engineering high-quality requirements, broken down into four parts. The first part introduces fundamental concepts and principles including the aim and scope of requirements engineering, the products and processes involved, requirements qualities to aim at and flaws to avoid, and the critical role of requirements engineering in system and software engineering. The second part of the book is devoted to system modeling in the specific context of engineering requirements. It presents a multi-view modeling framework that integrates complementary techniques for modeling the system-as-is and the system-to-be. The third part of the book reviews goal-based reasoning techniques to support the various steps of the KAOS method. The fourth part of the book goes beyond requirements engineering to discuss the mapping from goal-oriented requirements to software specifications and to software architecture. Online software will accompany the book and will add value to both classroom and self-study by enabling students to build the models and specifications involved in the book's exercises and case studies, helping them to discover the latest RE technology solutions. Instructor resources such as slides, solutions, models and animations will be available from an accompanying website.
Several methods for enterprise systems analysis rely on flow-oriented representations of business operations, otherwise known as business process models. The Business Process Modeling Notation (BPMN) is a standard for capturing such models. BPMN models facilitate communication between domain experts and analysts and provide input to software development projects. Meanwhile, there is an emergence of methods for enterprise software development that rely on detailed process definitions that are executed by process engines. These process definitions refine their counterpart BPMN models by introducing data manipulation, application binding, and other implementation details. The de facto standard for defining executable processes is the Business Process Execution Language (BPEL). Accordingly, a standards-based method for developing process-oriented systems is to start with BPMN models and to translate these models into BPEL definitions for subsequent refinement. However, instrumenting this method is challenging because BPMN models and BPEL definitions are structurally very different. Existing techniques for translating BPMN to BPEL only work for limited classes of BPMN models. This article proposes a translation technique that does not impose structural restrictions on the source BPMN model. At the same time, the technique emphasizes the generation of readable (block-structured) BPEL code. An empirical evaluation conducted over a large collection of process models shows that the resulting BPEL definitions are largely block-structured. Beyond its direct relevance in the context of BPMN and BPEL, the technique presented in this article addresses issues that arise when translating from graph-oriented to block-structured flow definition languages.
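To illustrate the structural gap in its simplest form, the sketch below folds a purely sequential BPMN-like fragment into a block-structured BPEL <sequence>. Handling gateways, parallelism, and unstructured loops requires the full technique of the article; the node names and successor map here are hypothetical.

```python
# Successor map of a trivially sequential process fragment.
edges = {"receive_order": "check_stock", "check_stock": "send_reply"}
start = "receive_order"

def to_bpel_sequence(start, edges):
    """Fold a linear task chain into a nested BPEL <sequence> block."""
    parts, node = [], start
    while node is not None:
        parts.append(f'  <invoke name="{node}"/>')
        node = edges.get(node)
    return "<sequence>\n" + "\n".join(parts) + "\n</sequence>"

print(to_bpel_sequence(start, edges))
```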
Software systems are known to suffer from outages due to transient errors. Recently, the phenomenon of "software aging", in which the state of the software system degrades with time, has been reported (S. Garg et al., 1998). The primary causes of this degradation are the exhaustion of operating system resources, data corruption and numerical error accumulation. This may eventually lead to performance degradation of the software or crash/hang failure, or both. Earlier work in this area to detect aging and to estimate its effect on system resources did not take into account the system workload. In this paper, we propose a measurement-based model to estimate the rate of exhaustion of operating system resources both as a function of time and the system workload state. A semi-Markov reward model is constructed based on workload and resource usage data collected from the UNIX operating system. We first identify different workload states using statistical cluster analysis and build a state-space model. Corresponding to each resource, a reward function is then defined for the model based on the rate of resource exhaustion in the different states. The model is then solved to obtain trends and the estimated exhaustion rates and the time-to-exhaustion for the resources. With the help of this measure, proactive fault management techniques such as "software rejuvenation" (Y. Huang et al., 1995) may be employed to prevent unexpected outages.
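A back-of-the-envelope version of the final step described above, with invented numbers: given steady-state probabilities for the identified workload states and a per-state resource depletion rate (the reward), the expected exhaustion rate and time-to-exhaustion follow directly.

```python
import numpy as np

pi = np.array([0.6, 0.3, 0.1])            # time fraction per workload state
slope = np.array([50.0, 200.0, 800.0])    # free-memory depletion (KB/h) per state

expected_rate = pi @ slope                # long-run KB lost per hour
free_now = 120_000.0                      # KB of free memory currently left
print(f"time to exhaustion ~ {free_now / expected_rate:.0f} h")   # ~706 h
```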
Efficient scheduling of resources is critical to the proper functioning of businesses in today's competitive environment. Scheduling focuses on theoretical as well as applied aspects of the scheduling of resources. It is unique in the range of problems and issues that it covers. A software package especially designed for the readers of this text is available with this book. Known as LEKIN, this system covers most of the machine environments discussed in this book and enables the user to test many of the algorithms and heuristics described. The book consists of three parts. The first part focuses on deterministic models and deals with single and parallel machine models. The second part covers stochastic models. The third part deals with scheduling in practice; it covers heuristics that are popular with practitioners and also delves into system design and development issues. Features include a discussion of the basic properties of scheduling models; computational as well as theoretical exercises at the end of each chapter; a thorough examination of numerous applications; an investigation of the latest developments in the field; and a discussion of future research directions. A CD containing the software is included with the book.
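As a flavor of the single-machine results such texts (and the LEKIN package) cover: sequencing jobs in shortest-processing-time (SPT) order minimizes total completion time on one machine. The job data below is made up for illustration.

```python
jobs = {"J1": 7, "J2": 2, "J3": 5, "J4": 3}    # processing times

order = sorted(jobs, key=jobs.get)              # SPT sequence
t, total = 0, 0
for j in order:
    t += jobs[j]                                # completion time of job j
    total += t                                  # accumulate sum of completions
print(order, "total completion time:", total)   # ['J2','J4','J3','J1'], 34
```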