ISBN: (print) 9780769551852
The design of an adequate test suite is usually guided by identifying test requirements that the selected set of test cases should satisfy. To reduce testing costs, test suite minimization heuristics aim at eliminating redundancy from existing test suites. However, recent test suite minimization approaches fail (1) to handle test suites commonly derived for families of similar software variants under test and (2) to incorporate fine-grained information concerning cost/profit goals for test case selection. In this paper, we propose a formal framework to optimize test suites designed for sets of software variants under test w.r.t. multiple conflicting cost/profit objectives. The problem representation is independent of the concrete testing methodology. We apply integer linear programming (ILP) to approximate optimal solutions. We further develop an efficient incremental heuristic that derives a sequence of representative software variants to be tested, approaching optimal profits at reduced cost. We evaluated the algorithm by comparing its outcome to the optimal solution.
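The abstract does not spell out the encoding; as a rough illustration of the single-objective core of ILP-based test suite minimization (the tests, costs, and coverage relation below are hypothetical, and the PuLP solver library is assumed), one might write:

```python
# Minimal single-objective sketch of ILP-based test suite minimization,
# assuming the PuLP library; the paper's multi-objective, variant-aware
# encoding is richer than this illustration.
import pulp

# Hypothetical data: per-test cost and the requirements each test covers.
cost = {"t1": 3, "t2": 2, "t3": 4}
covers = {"t1": {"r1", "r2"}, "t2": {"r2", "r3"}, "t3": {"r1", "r3"}}
requirements = {"r1", "r2", "r3"}

prob = pulp.LpProblem("test_suite_minimization", pulp.LpMinimize)
x = {t: pulp.LpVariable(t, cat="Binary") for t in cost}  # 1 = keep test t

# Objective: total cost of the retained tests.
prob += pulp.lpSum(cost[t] * x[t] for t in cost)

# Every requirement must stay covered by at least one retained test.
for r in requirements:
    prob += pulp.lpSum(x[t] for t in cost if r in covers[t]) >= 1

prob.solve()
print([t for t in cost if x[t].value() == 1])  # e.g. ['t1', 't2']
```

A multi-objective variant would replace the single cost objective with a weighted combination or a sequence of lexicographic solves over the conflicting cost/profit goals.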
This paper revisits the economic production quantity (EPQ) model with a rework process at a single-stage manufacturing system with planned backorders. It is well known that any real-life imperfect production system has random defective rates. In this direction, this paper extends an inventory model to allow random defective rates. Three different inventory models are developed for three different distribution density functions: uniform, triangular, and beta. The analytical derivation provides a closed-form solution for each inventory model. Comparison tables of optimal results among the distribution functions are provided, and numerical examples and a sensitivity analysis illustrate the inventory models. (C) 2014 The Society of Manufacturing Engineers. Published by Elsevier Ltd. All rights reserved.
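For context, the classical EPQ model with planned backorders (without rework) already admits a closed form; the paper's three models extend such formulas so that the random defective rate enters through its expectation under each distribution. A sketch in standard textbook notation, which is an assumption here rather than the paper's own:

```latex
% Classical EPQ with planned backorders (standard result, not the
% paper's extended model): D = demand rate, P = production rate,
% K = setup cost, h = holding cost, b = backorder cost.
\[
  Q^{*} \;=\; \sqrt{\frac{2KD}{h\,\rho}\cdot\frac{h+b}{b}},
  \qquad \rho \;=\; 1-\frac{D}{P}.
\]
% A random defective fraction p enters through its mean:
% p ~ U(a,b):            E[p] = (a+b)/2
% p ~ Triangular(a,m,b): E[p] = (a+m+b)/3
% p ~ Beta(alpha,beta):  E[p] = alpha/(alpha+beta)
```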
This paper describes an approach to implementing aspect-oriented programming (AOP) frameworks for C, outlines traditional AOP facilities in different programming languages, and shows how specific features of C and the build process of C programs affect AOP implementations. Next, we consider additional requirements imposed by practical applications of AOP implementations for C programs. Existing solutions are described and the possibility of their use is analyzed. The paper describes a new AOP tool for C that implements the proposed approach and demonstrates its capabilities.
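The paper targets C, but the underlying AOP vocabulary (join points, advice, weaving) is language-neutral; a minimal illustrative sketch of these notions using a Python decorator, not the paper's tool:

```python
# Language-neutral sketch of core AOP notions (join point, advice,
# weaving) via a Python decorator; the paper's tool instead weaves
# advice into C functions during the build process.
import functools

def around_advice(func):
    """Weave before/after advice around a join point (a function call)."""
    @functools.wraps(func)
    def woven(*args, **kwargs):
        print(f"before: entering {func.__name__}")   # before-advice
        result = func(*args, **kwargs)               # proceed()
        print(f"after: leaving {func.__name__}")     # after-advice
        return result
    return woven

@around_advice
def parse_config(path):
    return {"path": path}

parse_config("/etc/app.conf")
```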
The conceptualization of knowledge required for efficient processing of textual data is usually represented as ontologies. Depending on the knowledge domain and tasks, different types of ontologies are constructed: formal ontologies, which involve axioms and detailed relations between concepts; taxonomies, which are hierarchically organized concepts; and informal ontologies, such as Internet encyclopedias created and maintained by user communities. Manual construction of ontologies is a time-consuming and costly process requiring the participation of experts; therefore, in recent years many systems have appeared that automate this process to a greater or lesser degree. This paper provides an overview of methods for automatic construction and enrichment of ontologies, with the focus placed on informal ontologies.
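One widely used building block of automatic taxonomy enrichment is matching lexico-syntactic (Hearst-style) patterns such as "X such as Y and Z"; a minimal sketch, with the pattern and sample sentence invented for illustration:

```python
# Minimal sketch of Hearst-style pattern matching, a common building
# block of automatic taxonomy enrichment; the regex and sentence are
# illustrative, not taken from the surveyed systems.
import re

PATTERN = re.compile(r"(\w+) such as ([^.]+)")

def extract_isa(sentence):
    """Yield (hyponym, hypernym) pairs from 'X such as Y and Z'."""
    pairs = []
    match = PATTERN.search(sentence)
    if match:
        hypernym = match.group(1)
        for hyponym in re.split(r",\s*|\s+and\s+", match.group(2)):
            pairs.append((hyponym.strip(), hypernym))
    return pairs

print(extract_isa("ontologies such as taxonomies and thesauri"))
# [('taxonomies', 'ontologies'), ('thesauri', 'ontologies')]
```

Production systems combine many such patterns with statistical filtering; the sketch shows only the extraction step.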
The increasing density of NAND flash memory leads to a dramatic increase in its bit error rate, which greatly reduces the ability of error-correcting codes (ECC) to handle multibit errors. NAND flash memory normally stores the file system metadata and page mapping information, so a broken physical page containing metadata may cause an unintended and severe change in the functionality of the entire flash. This paper presents Meta-Cure, a novel hardware and file system interface that transparently protects metadata in the presence of multibit faults. Meta-Cure exploits built-in ECC and replication to protect pages containing critical data, such as file system metadata. Redundant pairs are formed at run time and distributed to different physical pages to protect against failures. Meta-Cure requires no changes to the file system, on-chip hierarchy, or hardware implementation of the flash memory chip. We evaluate Meta-Cure on a real embedded platform using a variety of I/O traces. The evaluation platform uses dual ARM Cortex-A9 processor cores with 64 Gb of NAND flash memory. We have evaluated the effectiveness of Meta-Cure on the NTFS (New Technology File System) file system. Experimental results show that the proposed technique reduces uncorrectable page errors by 70.38% with less than 7.86% time overhead compared with conventional error correction techniques.
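As described, the core mechanism pairs each critical page with a replica on a distinct physical page and falls back to the replica when ECC fails; a simplified behavioral model (all class and method names below are hypothetical):

```python
# Simplified model of the replication idea described for Meta-Cure:
# each metadata page gets a replica on a different physical page, and
# reads fall back to the replica when ECC reports an uncorrectable
# error. Names are hypothetical; a real placement policy must also
# respect wear leveling and bad-block management.
class ECCError(Exception):
    """Raised by the flash layer on an uncorrectable multibit error."""

class ReplicatedMetadataStore:
    def __init__(self, flash):
        self.flash = flash        # object exposing read(page) / write(page, data)
        self.replica_of = {}      # primary page -> replica page

    def write_metadata(self, primary, replica, data):
        self.flash.write(primary, data)
        self.flash.write(replica, data)      # redundant pair, distinct pages
        self.replica_of[primary] = replica

    def read_metadata(self, primary):
        try:
            return self.flash.read(primary)  # normal path: built-in ECC
        except ECCError:
            # Multibit error on the primary: serve the replica instead.
            return self.flash.read(self.replica_of[primary])
```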
Programming experience is an important confounding parameter in controlled experiments on program comprehension. In the literature, ways to measure or control programming experience vary; often, researchers neglect it or do not specify how they controlled for it. We set out to find a well-defined understanding of programming experience and a way to measure it. From published comprehension experiments, we extracted questions that assess programming experience. In a controlled experiment, we compared the answers of computer-science students to these questions with their performance in solving program-comprehension tasks. We found that self-estimation seems to be a reliable way to measure programming experience. Furthermore, we applied exploratory and confirmatory factor analyses to extract and evaluate a model of programming experience. With our analysis, we initiate a path toward validly and reliably measuring and describing programming experience to better understand and control its influence in program-comprehension experiments.
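One analysis step the abstract implies is correlating self-estimated experience with comprehension-task performance; a minimal sketch with invented data (the paper's actual analysis additionally uses exploratory and confirmatory factor analyses):

```python
# Sketch of one implied analysis step: rank correlation between
# self-estimated programming experience and comprehension-task scores.
# The numbers are made up for illustration.
from scipy.stats import spearmanr

self_estimate = [2, 4, 3, 5, 1, 4, 2, 5]   # e.g., 1-5 Likert self-rating
task_score    = [3, 7, 5, 9, 2, 8, 4, 9]   # correctly solved tasks

rho, p_value = spearmanr(self_estimate, task_score)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```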
The paper develops the approach to testing considered in [1]. A formal model of test interaction of the most general type and a reduction-type conformance are proposed, for which there is hardly any dependence between errors. It is shown that many known types of conformance in various interaction semantics are particular cases of this general model. The paper is devoted to the problem of dependence between errors defined by the specification and to the related problem of test optimization. There is dependence between errors if there exists a strict subset of errors such that any nonconformal implementation (i.e., an implementation that contains some error) contains an error from this subset. Accordingly, it is sufficient for the tests to detect errors only from this subset. In the proposed general model, dependence between errors may arise when one chooses, as the class of implementations under test, a strict subset of the class of all implementations. Partial interaction semantics and/or various implementation hypotheses (in particular, a safety hypothesis) mean precisely that an implementation under test is not arbitrary but belongs to some subclass of (safe) implementations.
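Viewed set-theoretically, exploiting dependence between errors amounts to finding a small subset of errors that "hits" every nonconformal implementation; a greedy hitting-set sketch over hypothetical data (the paper works with a formal interaction model, not plain sets):

```python
# Greedy sketch of the optimization the abstract describes: find a
# small subset of errors such that every nonconformal implementation
# contains at least one error from it (a hitting-set problem). The
# data and the greedy strategy are illustrative only.
def greedy_error_subset(implementations):
    """implementations: list of nonempty error sets, one per
    nonconformal implementation. Returns a hitting set of errors."""
    uncovered = list(implementations)
    chosen = set()
    while uncovered:
        # Pick the error occurring in the most still-uncovered implementations.
        counts = {}
        for errors in uncovered:
            for e in errors:
                counts[e] = counts.get(e, 0) + 1
        best = max(counts, key=counts.get)
        chosen.add(best)
        uncovered = [s for s in uncovered if best not in s]
    return chosen

print(greedy_error_subset([{"e1", "e2"}, {"e2", "e3"}, {"e3"}]))
# e.g. {'e2', 'e3'}: tests need only target errors from this subset
```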
The paper is concerned with the problem of load balancing for a set of parallel tasks on a group of geographically distributed clusters, aimed at reducing the energy consumed by computation. Several task allocation algorithms are put forward, and their efficiency is verified experimentally.
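The abstract does not name the algorithms; as one plausible shape of such a heuristic, a greedy largest-task-first allocation under an assumed linear energy-per-work cost model:

```python
# Illustrative greedy sketch in the spirit of the problem statement:
# assign each parallel task to the cluster with the lowest resulting
# energy-weighted load. The linear energy model is an assumption, not
# the paper's algorithm.
def allocate(task_sizes, energy_rate):
    """task_sizes: list of work units; energy_rate: cluster -> J per unit.
    Largest task first, to the cluster minimizing energy-weighted load."""
    load = {c: 0.0 for c in energy_rate}
    assignment = {}
    for t in sorted(range(len(task_sizes)), key=lambda i: -task_sizes[i]):
        best = min(energy_rate,
                   key=lambda c: (load[c] + task_sizes[t]) * energy_rate[c])
        assignment[t] = best
        load[best] += task_sizes[t]
    return assignment

print(allocate([5, 3, 8, 2], {"siteA": 1.0, "siteB": 1.4}))
# e.g. {2: 'siteA', 0: 'siteB', 1: 'siteA', 3: 'siteB'}
```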
In this paper, defects in Python program code are considered. It is shown that these defects differ from those in C/C++ code; hence, there is a need to study defects in large-scale open-source projects. A classification of the defects found is presented, based on whether type inference is required to detect the error. It is shown that only a small portion of the defects are "simple"; detecting the majority of them requires type inference. The question of which constructs of the Python language must be supported by type inference to find real defects is discussed.
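To make the classification concrete, two invented snippets: the first defect is visible syntactically, while the second is detectable only once the type of `items` has been inferred:

```python
# Two illustrative defects in the spirit of the classification; both
# snippets are made up, not taken from the studied projects.

def simple_defect(flag):
    if flag == flag:        # "simple": self-comparison, always true,
        return True         # flaggable without any type information

def needs_type_inference(n):
    items = list(range(n))  # inferred type: list[int]
    return items.split(",") # defect: list has no .split(); str does
```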
Prospects for applying virtualization technology to high-performance computing on x64 systems are studied. The principal reasons for performance degradation when parallel programs run in virtual environments are considered. The KVM/QEMU and Palacios virtualization systems are examined in detail, with the HPC Challenge and NAS Parallel Benchmarks suites used as benchmarks. A modern computing cluster built on the high-speed InfiniBand interconnect is used in testing. The results show that, in general, virtualization is reasonable for a wide class of high-performance applications. Fine tuning of the virtualization systems made it possible to reduce overheads from 10-60% to 1-5% on the majority of tests from the HPC Challenge and NAS Parallel Benchmarks suites. The main bottlenecks of virtualization systems are reduced memory-system performance (critical only for a narrow class of problems), costs associated with hardware virtualization, and increased noise caused by the host operating system and hypervisor. Noise can negatively affect the performance and scalability of fine-grained applications (applications with frequent small-scale communications), and its influence grows significantly with the number of nodes in the system.
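Overhead figures such as those quoted above are conventionally computed as the relative slowdown of a virtualized run against a native baseline; a trivial sketch with illustrative timings, not measurements from the paper:

```python
# Relative virtualization overhead from native vs. virtualized run
# times; the timings below are invented for illustration.
def overhead_pct(t_native, t_virtual):
    return (t_virtual - t_native) / t_native * 100.0

print(f"{overhead_pct(100.0, 103.5):.1f}%")  # 3.5%, within the tuned 1-5% band
```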