Authors:
Alberto Bosio, Giorgio Di Natale
Informatics Systems and Software Engineering Department, Faculty of Computer Science, Université Montpellier II, Montpellier, France
High-density components and process scaling increasingly lead to the occurrence of a new class of dynamic faults, especially in Static Random Access Memories (SRAMs), thus requiring more and more sophisticated test algorithms. Among the different types of algorithms proposed for testing SRAMs, March Tests have proven to be the best performing due to their low complexity, simplicity, and regular structure. Several March Tests for dynamic faults have been published, with different fault coverage. In this paper we propose March BDN, an extended version of March AB. We prove that it increases fault coverage so as to target the latest dynamic faults, and we show that the proposed March BDN has the highest known fault coverage among March Tests of the same complexity.
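A March Test is a fixed sequence of march elements, each applying read/write operations to every cell in ascending or descending address order. The exact March BDN element sequence is defined in the paper; as an illustration of the general March structure only, the sketch below runs the classic MATS+ test {⇕(w0); ⇑(r0,w1); ⇓(r1,w0)} against a simulated memory with an injected stuck-at-0 fault:

```python
# Illustrative March-test runner (MATS+, not March BDN) on a toy memory model.

class FaultyMemory:
    def __init__(self, size, stuck_at_zero=None):
        self.cells = [0] * size
        self.stuck = stuck_at_zero  # address of a stuck-at-0 cell, if any

    def write(self, addr, val):
        self.cells[addr] = 0 if addr == self.stuck else val

    def read(self, addr):
        return self.cells[addr]

def run_march(mem, elements):
    """Apply march elements; each is (direction, [ops]) where an op is
    ('w', value) or ('r', expected). Returns the failing addresses."""
    fails = []
    for direction, ops in elements:
        addrs = range(len(mem.cells))
        if direction == 'down':
            addrs = reversed(addrs)
        for a in addrs:
            for op, v in ops:
                if op == 'w':
                    mem.write(a, v)
                elif mem.read(a) != v:
                    fails.append(a)
    return fails

MATS_PLUS = [('up',   [('w', 0)]),
             ('up',   [('r', 0), ('w', 1)]),
             ('down', [('r', 1), ('w', 0)])]

mem = FaultyMemory(16, stuck_at_zero=5)
print(run_march(mem, MATS_PLUS))  # → [5]: the stuck cell fails the r1 read
```

The regular element structure is what gives March Tests their linear O(n) complexity; richer tests such as March AB and March BDN add more elements and operations per element to sensitize dynamic faults.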
Summary form only given. Grids and peer-to-peer (P2P) networks have emerged as popular platforms for the next generation parallel and distributed computing. In these environments, resources are geographically distributed, managed and owned by various organizations with different policies, and interconnected by wide-area networks or the Internet. This introduces a number of resource management and application scheduling challenges in the domain of security, resource and policy heterogeneity, fault tolerance, and dynamic resource conditions. In these dynamic distributed computing environments, it is challenging to carry out resource management design studies in a repeatable and controlled manner, as resources and users are autonomous and distributed across multiple organizations with their own policies. Therefore, simulations have emerged as the most feasible technique for analyzing policies for resource allocation. This paper presents emerging trends in distributed computing and their promises for revolutionizing the computing field, and identifies distinct characteristics and challenges in building them. We motivate opportunities for modeling and simulation communities and present our discrete-event grid simulation toolkit, called GridSim, used by researchers world-wide for investigating the design of utility-oriented computing systems such as data centers and grids. We present various case studies on the use of GridSim in modeling and simulation of business grids, parallel applications scheduling, workflow scheduling, and service pricing and revenue management.
The Gridbus workflow management system (GWMS) is designed to execute scientific applications, expressed in the form of workflows, on Grid and Cloud resources. With the help of this system, we demonstrate a computational and data-intensive Image Registration (IR) workflow for functional Magnetic Resonance Imaging (fMRI) applications on Clouds and global Grids. We also present a demonstration of the Aneka system. Aneka is a .NET-based Cloud software platform that provides: (i) a configurable service container hosting pluggable services for discovering and scheduling various types of workloads, and (ii) a flexible and extensible framework supporting various programming models. We use distributed Grid and Cloud resources from Australia, Austria, France, Japan, and the USA for the execution of the image registration workflow, and resources from Melbourne University for executing map-reduce applications in the Aneka cloud environment.
Integrated drive generators (IDGs) are the main source of electrical power for a number of critical systems in aircraft. Fast and accurate fault detection and isolation is a necessary component for safe and reliable operation of the IDG and the aircraft. IDGs are complex systems, and a majority of the existing fault detection and isolation techniques are based on signal analysis and heuristic methods derived from experience. Model-based fault diagnosis techniques are hypothesized to be more general and powerful in designing detection and isolation schemes, but building sufficiently accurate models of complex IDGs is a difficult task. dq0 models have been developed for design and control of generators, but these models are not suitable for fault situations, where the generator may become unbalanced. In this paper, we present a hybrid phase-domain model for the aircraft generator that accurately represents both nominal and parametric faulty behaviors. We present the details of the hybrid modeling approach and simulation results of nominal operation and fault behaviors associated with parametric faults in the aircraft generator. The simulation results show that the developed model is capable of accurately capturing the generator dynamics under a variety of normal and faulty configurations.
One of the primary problems with computing clusters is to ensure that they maintain a reliable working state most of the time to justify the economics of operation. In this paper, we introduce a model-based hierarchical reliability framework that enables periodic monitoring of vital health parameters across the cluster and provides for autonomic fault mitigation. We also discuss some of the challenges faced by autonomic reliability frameworks in cluster environments, such as non-determinism in task scheduling in standard operating systems such as Linux, and the need for synchronized execution of monitoring sensors across the cluster. Additionally, we present a solution to these problems in the context of our framework, which utilizes a feedback-controller-based approach to compensate for the scheduling jitter in non-real-time operating systems. Finally, we present experimental data that illustrates the effectiveness of our approach.
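The paper's controller design is not reproduced here; as a minimal sketch of the feedback idea, assuming a simple proportional term and millisecond timestamps (both illustrative), each cycle's sleep interval is trimmed by the measured lateness so the long-run monitoring period tracks the target:

```python
# Proportional compensation of scheduling jitter for a periodic sensor
# (illustrative sketch, not the paper's controller).

def corrected_sleeps(target, wakeups, kp=0.8):
    """Given actual wakeup times, compute the sleep interval to issue
    before each next cycle so the period tracks `target`."""
    sleeps = []
    expected = wakeups[0]
    for t in wakeups:
        error = t - expected          # positive error = we woke up late
        sleeps.append(target - kp * error)
        expected += target
    return sleeps

# 100 ms target period; the OS wakes the sensor progressively late:
print(corrected_sleeps(100.0, [0.0, 103.0, 205.0]))
# shorter sleeps are issued to pull the phase back toward the schedule
```

Without such compensation, jitter accumulates and sensors on different nodes drift out of the synchronized execution windows the framework relies on.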
The MapReduce programming model allows users to easily develop distributed applications in data centers. However, many applications cannot be exactly expressed with MapReduce due to their specific characteristics. For instance, genetic algorithms (GAs) naturally fit an iterative style that does not follow the two-phase pattern of MapReduce. This paper presents an extension to the MapReduce model featuring a hierarchical reduction phase. This model is called MRPGA (MapReduce for parallel GAs), which can automatically parallelize GAs. We describe the design and implementation of the extended MapReduce model on a .NET-based enterprise grid system in detail. The evaluation of this model with its runtime system is presented using example applications.
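The MRPGA runtime itself is .NET-based; the following Python sketch (names and genetic operators are illustrative, not the paper's API) shows the hierarchical reduction idea on a OneMax GA: a map phase evaluates fitness, a first reduction selects elites within each partition, and a second, global reduction merges the elites into the next generation:

```python
# Two-level (hierarchical) reduction for a parallel GA, sketched serially.
import random

random.seed(1)
GENOME = 20

def fitness(ind):                 # OneMax: maximize the number of 1 bits
    return sum(ind)

def map_phase(pop):               # "map": evaluate each individual
    return [(ind, fitness(ind)) for ind in pop]

def local_reduce(scored, k):      # first reduction: top-k per partition
    return sorted(scored, key=lambda t: t[1], reverse=True)[:k]

def global_reduce(groups, pop_size):  # second reduction: merge and breed
    elite = [ind for g in groups for ind, _ in g]
    children = []
    while len(children) < pop_size:
        a, b = random.sample(elite, 2)
        cut = random.randrange(1, GENOME)
        child = a[:cut] + b[cut:]          # one-point crossover
        child[random.randrange(GENOME)] ^= 1  # point mutation
        children.append(child)
    return children

pop = [[random.randint(0, 1) for _ in range(GENOME)] for _ in range(40)]
for gen in range(30):
    scored = map_phase(pop)
    groups = [local_reduce(scored[i:i + 10], 3) for i in range(0, 40, 10)]
    pop = global_reduce(groups, 40)
best = max(map_phase(pop), key=lambda t: t[1])[1]
```

The point of the extra reduction level is that local selection runs where the partitions live, so only small elite sets cross the network before the global merge.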
Visual comprehension is the characteristic that deals with how efficiently and effectively users are able to grasp the underlying design intent, along with the interactions to explore the visually represented information. To assess comprehension, i.e., to measure this seemingly immeasurable factor of visualization systems, we propose a set of criteria based on a detailed analysis of information flow from the raw data to the cognition of information in the human mind. Our comprehension criteria are adapted from the pioneering work of two eminent researchers, Donald A. Norman and Aaron Marcus, who have investigated the issues of human perception and cognition, and visual effectiveness, respectively. These proposed criteria are refined by experts' opinion in order to compose a minimal evaluation set, which is then applied to a bioinformatics visualization study tool to show the efficacy of the criteria in assessing comprehension in a more quantitative manner.
ISBN:
(Print) 9781424429141
Decision tree classification is one of the most practical and effective methods used in inductive learning. Many different approaches, usually applied to decision making and prediction, have been invented to construct decision tree classifiers. These approaches try to optimize parameters such as accuracy, classification speed, size of the constructed trees, learning speed, and memory usage. There is a trade-off between these parameters: optimizing one may hinder another, hence all existing approaches try to strike a balance. In this study, considering the effect of the whole data set on the class assignment of any record, we propose a new approach that constructs less complex, though not perfectly accurate, trees in a short time using a small amount of memory. To achieve this, a multi-step process is used. In each step we trace the training data set twice, from beginning to end and vice versa, to extract the class pattern for attribute selection. Using the selected attribute, we create new branches in the tree. After branching, the selected attribute and some records of the training data set are deleted at the end of the step. This process continues over several steps on the remaining data and attributes until the tree is completely constructed. To obtain an optimized tree, the parameters used in this algorithm are tuned with genetic algorithms. To compare this new approach with previous ones, we used well-known data sets from earlier research, comparing classification accuracy and decision tree size. Experimental results show that this approach is efficient, particularly in cases of massive data sets, memory restrictions, or short learning times.
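The paper's two-pass class-pattern extraction is specific to its algorithm and is not reproduced here. As background for the attribute-selection step it replaces, this is the standard information-gain criterion (ID3-style) on a toy data set; the data and attribute names are illustrative:

```python
# Standard information-gain attribute selection for decision tree induction.
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(rows, labels, attr):
    # entropy reduction from partitioning the rows by attribute `attr`
    parts = {}
    for row, y in zip(rows, labels):
        parts.setdefault(row[attr], []).append(y)
    remainder = sum(len(p) / len(labels) * entropy(p) for p in parts.values())
    return entropy(labels) - remainder

# toy data: (outlook, windy) -> play?
rows = [{'outlook': 'sunny', 'windy': False},
        {'outlook': 'sunny', 'windy': True},
        {'outlook': 'rain',  'windy': False},
        {'outlook': 'rain',  'windy': True}]
labels = ['no', 'no', 'yes', 'yes']

best = max(['outlook', 'windy'], key=lambda a: info_gain(rows, labels, a))
print(best)  # → 'outlook': it splits the classes perfectly (gain = 1 bit)
```

Whatever the selection criterion, the surrounding loop is the same: pick an attribute, branch on its values, and recurse on the reduced data, which is exactly the per-step structure the abstract describes.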
ISBN:
(Print) 9783540898931
Emerging deadline-driven Grid applications require a number of computing resources to be available over a time frame, starting at a specific time in the future. To enable these applications, it is important to predict the resource availability and utilise this information during provisioning because it affects their performance. It is impractical to request the availability information upon the scheduling of every job due to communication overhead. However, existing work has not considered how the precision of availability information influences the provisioning. As a result, limitations exist in developing advanced resource provisioning and scheduling mechanisms. This work investigates how the precision of availability information affects resource provisioning in multiple site environments. Performance evaluation is conducted considering both multiple scheduling policies in resource providers and multiple provisioning policies in brokers, while varying the precision of availability information. Experimental results show that it is possible to avoid requesting availability information for every Grid job scheduled, thus reducing the communication overhead. They also demonstrate that multiple resource partition policies improve the slowdown of Grid jobs.
ISBN:
(Print) 9781605580791
The Modeling in Software Engineering (MiSE) workshops are a collaboration between the ICSE and MoDELS research communities, with a focus on using models to facilitate software development.