Distributed synchronization for parallel simulation is generally classified as either optimistic or conservative. While considerable investigation has been conducted to analyze and optimize each of these synchronization strategies, very little study of the definition and strictness of causality has been conducted. Do we really need to preserve causality in all types of simulations? This paper attempts to answer this question. We argue that significant performance gains can be made by reconsidering this definition to decide whether a parallel simulation needs to preserve causality. We investigate the feasibility of unsynchronized parallel simulation using several queuing-model simulations and present a comparative analysis of unsynchronized and Time Warp simulation.
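The causality question above can be made concrete with a toy logical process: a Time Warp simulator must roll back whenever an event arrives in its simulated past (a "straggler"), while an unsynchronized simulator simply keeps going and counts the violation. A minimal sketch, with all class and method names assumed for illustration rather than taken from the paper:

```python
import heapq

class LogicalProcess:
    """Toy logical process for discrete-event simulation.
    Illustrative sketch only; not the paper's implementation."""

    def __init__(self, name, preserve_causality=True):
        self.name = name
        self.clock = 0.0                  # local virtual time
        self.event_queue = []             # pending (timestamp, payload)
        self.preserve_causality = preserve_causality
        self.causality_violations = 0

    def receive(self, timestamp, payload):
        # A straggler arrives in this process's simulated past.
        if timestamp < self.clock:
            self.causality_violations += 1
            if self.preserve_causality:
                # A Time Warp simulator would roll back state here;
                # this sketch only records that a rollback would occur.
                pass
        heapq.heappush(self.event_queue, (timestamp, payload))

    def step(self):
        timestamp, payload = heapq.heappop(self.event_queue)
        # Unsynchronized execution: advance the clock monotonically,
        # processing out-of-order events without rollback.
        self.clock = max(self.clock, timestamp)
        return payload
```

The `causality_violations` counter is exactly the quantity an unsynchronized simulator trades against rollback overhead: if violations are rare or harmless for the model, skipping synchronization can pay off.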
The SAVANT, QUEST II, and HEPE research programs at the University of Cincinnati include the development and distribution of VHDL analysis and simulation capabilities. These capabilities are being freely distributed for non-commercial use. The SAVANT project is underway specifically to develop a VHDL analyzer with a well-documented, extensible intermediate form; the main objective is to smooth the integration of VHDL technology into university and industrial research programs. The SAVANT project is funded through the Air Force SBIR program and is a joint activity between the University of Cincinnati and MTL Systems, Inc. The QUEST II program is investigating parallel algorithms and architectures for simulation, behavioral synthesis, and ATPG. The HEPE program is investigating (in part) novel strategies for relaxing causal orders in the parallel simulation of active networks. As part of the QUEST II/HEPE simulation activities, a VHDL simulation kernel is being developed that will operate with the SAVANT intermediate form for sequential or parallel execution of VHDL models (a C++ code generator from the SAVANT intermediate is being jointly developed by the SAVANT and QUEST II programs). All of the software from the QUEST and HEPE simulation programs is freely available for use (commercial or otherwise).
A framework for performance analysis of parallel discrete event simulators is presented. The centerpiece of this framework is a platform-independent Workload Specification Language (WSL). WSL allows the characterization of simulation models using a set of fundamental performance-critical parameters, and also provides a facility for representing real models. For each simulator to be tested, a WSL translator generates synthetic platform-specific simulation models that conform to the performance characteristics captured by the WSL description. Accordingly, sets of portable simulation models that explore the effects of the different parameters, individually or collectively, on performance can be constructed. The construction of these workload simulation models is assisted by a Synthetic Workload Generator (SWG). The utility of the system is demonstrated with the generation of a representative set of experiments. The described framework can be used to create a standard benchmark suite consisting of a mixture of real simulation models, selected from different application domains, and synthetic models generated by the SWG.
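The abstract's WSL syntax and parameter set are not shown, but the idea of a synthetic workload generator driven by a few performance-critical parameters can be sketched as follows. The parameter names here (LP count, event density, communication fan-out) are hypothetical stand-ins, not the actual WSL vocabulary:

```python
import random

def generate_synthetic_model(num_lps, event_density, fanout, seed=0):
    """Emit a platform-neutral synthetic model: a list of logical
    processes (LPs), each with an event-generation rate and a randomly
    chosen set of communication targets. Hypothetical sketch of an
    SWG-style generator."""
    rng = random.Random(seed)  # seeded for reproducible workloads
    model = []
    for lp in range(num_lps):
        # Each LP communicates with `fanout` distinct other LPs.
        targets = rng.sample(
            [i for i in range(num_lps) if i != lp],
            min(fanout, num_lps - 1),
        )
        model.append({
            "lp": lp,
            "events_per_unit_time": event_density,
            "targets": targets,
        })
    return model
```

A per-simulator translator would then lower such a neutral description into that simulator's native model format, keeping the performance characteristics fixed across platforms.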
Data center efficiency has quickly become a first-class design goal. In response, many studies have emerged from the academic community and industry using low-power design to help improve the energy efficiency of serv...
Side-channel attacks exploit the hardware implementation of processors to extract sensitive data. Attacks that target shared resources between the victim and the attacker are prominent. A shared cache (available in to...
Clear imagery of retinal vessels is one of the critical shreds of evidence in specific disease diagnosis and evaluation, including sophisticated hierarchical topology and plentiful-and-intensive capillaries. In this w...
ISBN: (Print) 9781450301466
Accurately modeling server power consumption is critical in designing data center power provisioning infrastructure. However, to date, most research proposals have used average CPU utilization to infer the power consumption of clusters, typically averaging over tens of minutes per observation. We demonstrate that average CPU utilization is not sufficient to predict peak power consumption accurately. By characterizing the relationship between server utilization and power supply behavior, we can more accurately model the actual peak power consumption. Finally, we introduce a new operating system metric that can capture the needed information to design for peak power with low overhead.
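The central claim, that average utilization under-predicts peak power, can be illustrated with a toy linear utilization-to-power model. The model form and the wattage figures below are assumptions for illustration, not measurements from the paper:

```python
# Hypothetical server power envelope: idle and full-load draw in watts.
IDLE_W, PEAK_W = 100.0, 250.0

def power(util):
    """Simple (assumed) linear CPU-utilization -> power model."""
    return IDLE_W + (PEAK_W - IDLE_W) * util

def avg_vs_peak(samples):
    """Compare power predicted from mean utilization against the true
    peak over the same samples. A bursty trace makes the mean-based
    estimate far lower than the actual peak draw."""
    predicted = power(sum(samples) / len(samples))
    actual_peak = max(power(u) for u in samples)
    return predicted, actual_peak
```

For a trace that idles at 10% utilization but briefly spikes to 100%, the mean-based estimate sits near idle power while the true peak reaches the full envelope, which is exactly the gap that matters when provisioning power infrastructure for the worst case.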
Recently, there has been an explosive growth in Internet services, greatly increasing the importance of data center systems. Applications served from the cloud are driving data center growth and quickly overtaking tra...
Perfect quality represents 100% conformance to specifications. Complex systems make it difficult and expensive to assure conformance to specifications by post-production testing alone. As a result, quality assurance processes are moving upstream in the life cycle, e.g., statistical process control and design for manufacturability. These processes try to avoid defects or make it easier to detect defects. In software systems, this move to upstream processes for quality control is still in its early stages. As with more traditional manufacturing systems, maturity of the software design and development process will dramatically reduce the cost of attaining conformance to specifications. Even so, it is difficult to imagine a 'bug-free' complex software system. Downstream processes will continue to play an important part in efforts to achieve defect-free software. This paper presents the results of a survey that used the defect detection tool Purify to examine off-the-shelf software products, showing that software errors continue to escape testing and threaten field failures. The errors detected are compiled and presented. All data were collected on C and C++ programs running in a UNIX operating system environment.
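The kind of defect a Purify-class tool catches is one that functional testing misses: the program's output is correct, yet resources are mishandled underneath. A toy leak detector conveys the idea (Purify itself instruments object code; this sketch, with all names invented, only tracks allocations at the application level):

```python
class LeakDetector:
    """Toy Purify-style tracker: record allocations and frees, then
    report blocks that were never released. Illustrative only."""

    def __init__(self):
        self.live = {}       # block id -> size in bytes
        self.next_id = 0

    def alloc(self, size):
        block = self.next_id
        self.next_id += 1
        self.live[block] = size
        return block

    def free(self, block):
        self.live.pop(block)

    def report(self):
        """Return (count, total_bytes) of blocks never freed."""
        return len(self.live), sum(self.live.values())
```

A program that allocates three blocks, frees two, and prints the right answer passes every output-based test; only the detector's report exposes the remaining leak, which is why such errors escape testing and surface as field failures.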
The Aizu supercomputer is a massively parallel system suited to the solution of virtual reality problems and the support of multimedia applications. It employs a highly parallel MIMD architecture using a conflict-free...