Ordering of simultaneous events in DES is an important issue, as it has an impact on modelling expressiveness, model correctness, and causal dependencies. In sequential DES this problem has attracted much attention over the years, and most systems provide the user with tools to deal with such issues. It has also attracted some attention within the PDES community, and we present an overview of these efforts. We have, however, not yet found a scheme which provides us with the desired functionality. Thus, we present and evaluate some simple schemes to achieve a well-defined ordering of events, along with means to identify both causally dependent and independent events with identical timestamps in the context of optimistic simulations. These schemes should also be applicable to conservative PDES.
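One common family of schemes for ordering simultaneous events extends each timestamp with deterministic tie-breaking fields. The following is a minimal sketch of that idea, not the paper's scheme; the field names (`sender_lp`, `seq`) and their priority order are illustrative assumptions.

```python
from dataclasses import dataclass, field
import heapq

# Sketch of a tie-breaking total order for events with identical timestamps:
# compare an extended key (timestamp, sender LP id, per-sender sequence number).
# The specific tie-break fields here are illustrative, not from the paper.

@dataclass(order=True)
class Event:
    timestamp: float
    sender_lp: int    # tie-break 1: deterministic across runs
    seq: int          # tie-break 2: preserves send order within one sender
    payload: str = field(compare=False)  # excluded from the ordering key

queue = []
heapq.heappush(queue, Event(5.0, sender_lp=2, seq=0, payload="b"))
heapq.heappush(queue, Event(5.0, sender_lp=1, seq=1, payload="a"))
heapq.heappush(queue, Event(3.0, sender_lp=9, seq=0, payload="first"))

# Events at time 5.0 are ordered by sender_lp, making the run repeatable.
order = [heapq.heappop(queue).payload for _ in range(len(queue))]
```

Because the extended key is totally ordered, two runs of the same model dequeue simultaneous events identically, which is one way to obtain a well-defined ordering.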
We present two GVT computation algorithms for PDES techniques with event-based activities, relying on a space-time memory abstraction. The second algorithm involves a modification of the activity control and is based on an epoch coloring scheme. The effect of the modification is assessed through an experimental study on a simulator implemented in the Linda coordination language. Experiments performed on a cluster of workstations show that the modified activity control discipline is able to enhance performance.
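For context, any GVT computation ultimately produces a lower bound on the timestamp of any future rollback: the minimum over all LPs' local virtual times and all in-transit message timestamps. The sketch below shows only that defining computation over an assumed consistent snapshot; it is not either of the paper's algorithms, and `gvt_snapshot` is a hypothetical name.

```python
# Illustrative definition of GVT (not the paper's algorithms): given a
# consistent snapshot, GVT is the minimum over every LP's local virtual
# time and the timestamp of every message still in transit.

def gvt_snapshot(local_clocks, in_transit_timestamps):
    """Lower bound on any future event/rollback time in the snapshot."""
    candidates = list(local_clocks) + list(in_transit_timestamps)
    return min(candidates) if candidates else float("inf")

# An in-transit message at 6.2 dominates all LP clocks here.
gvt = gvt_snapshot([12.0, 7.5, 20.0], [6.2])
```

The hard part, which the two algorithms address, is obtaining that consistent snapshot without stopping the simulation; the epoch coloring scheme is one way to account for messages in transit.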
This paper introduces the Critical Channel Traversing (CCT) algorithm, a new scheduling algorithm for both sequential and parallel discrete event simulation. CCT is a general conservative algorithm that is aimed at the simulation of low-granularity network models on shared-memory multi-processor computers. An implementation of the CCT algorithm within a kernel called TasKit has demonstrated excellent performance for large ATM network simulations when compared to previous sequential, optimistic and conservative kernels. TasKit has achieved two to three times speedup on a single processor with respect to a splay tree central-event-list based sequential kernel. On a 16 processor (R8000) Silicon Graphics PowerChallenge, TasKit has achieved an event-rate of 1.2 million events per second and a speedup of 26 relative to the sequential kernel for a large ATM network model. Performance is achieved through a multi-level scheduling scheme that supports the scheduling of large grains of computation even with low-granularity events. Performance is also enhanced by supporting good cache behavior and automatic load balancing. The paper describes the algorithm and its motivation, proves its correctness and briefly presents performance results for TasKit.
The High Level Architecture (HLA) provides the specification of a software architecture for distributed simulation. The baseline definition of the HLA includes the HLA Rules, the HLA Interface Specification, and the HLA Object Model Template (OMT). The HLA Rules are a set of 10 basic rules that define the responsibilities and relationships among the components of an HLA federation. The HLA Interface Specification provides a specification of the functional interfaces between HLA federates and the HLA Runtime Infrastructure. The HLA OMT provides a common presentation format for HLA Simulation and Federation Object Models. The HLA was developed over the past three years. It is in the process of being applied to simulations developed for analysis, training, and test and evaluation, and of being incorporated into industry standards for distributed simulation by both the Object Management Group and the IEEE. This paper provides a discussion of key areas where there are technology challenges in the future implementation and application of the HLA.
This paper presents a checkpointing scheme for optimistic simulation that mixes periodic and probabilistic checkpointing. The probabilistic part, based on statistical data collected during the simulation, aims at recording as checkpoints those states of a logical process that have a high probability of being restored due to rollback (this is done to make those states immediately available). The periodic part prevents performance degradation due to state reconstruction (coasting forward) whenever the collected statistics do not allow the identification of states highly likely to be restored. Hence, this scheme can be seen as a highly general solution to the checkpoint problem in optimistic simulation. A performance comparison with previous solutions is carried out through a simulation study of a store-and-forward communication network with a two-dimensional torus topology.
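The two components described above can be combined in a single per-state decision rule: checkpoint when the estimated restore probability is high, or unconditionally after a bounded number of uncheckpointed events. The sketch below is a hypothetical illustration of that structure; the threshold and period values, and the function name, are assumptions rather than the paper's parameters.

```python
# Hypothetical mixed periodic/probabilistic checkpointing rule.
# - probabilistic part: checkpoint states likely to be restored by rollback
# - periodic part: bound the coasting-forward distance regardless of the
#   statistics, so state reconstruction cost stays limited
# Threshold and period values below are illustrative only.

def should_checkpoint(restore_prob, events_since_last,
                      prob_threshold=0.3, period=8):
    """Decide whether to record the current LP state as a checkpoint."""
    likely_restored = restore_prob >= prob_threshold   # probabilistic part
    period_expired = events_since_last >= period       # periodic fallback
    return likely_restored or period_expired
```

The periodic fallback guarantees that even when the collected statistics are uninformative (all restore probabilities low), no rollback ever coasts forward over more than `period` events.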
A code for the simulation of the X-ray diffraction pattern of a powder has been implemented on a massively parallel SIMD platform developed in the framework of the PQE2000 Project. The code allows the evaluation of the diffraction pattern of atomic-scale models of both perfectly ordered and disordered structures. It has been used to investigate the structures resulting from the non-equilibrium alloying process of an immiscible metallic couple (Ag-Cu).
We show that the latest version of the massively parallel associative string processing architecture (System-V) is applicable to fast Monte Carlo simulation if an effective on-processor random number generator is implemented. Our lagged Fibonacci generator can produce 10^8 random numbers on a processor string of 12K PEs. The time-dependent Monte Carlo algorithm for the one-dimensional non-equilibrium kinetic Ising model runs 80 times faster than the corresponding serial algorithm on a 300 MHz UltraSparc.
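A lagged Fibonacci generator produces each value from two earlier values in the stream, x_n = (x_{n-s} op x_{n-r}) mod m. The sketch below shows the additive variant with common textbook lags (r, s) = (55, 24); these lags, the modulus, and the LCG-based seeding are illustrative assumptions, not the parameters used on System-V.

```python
from collections import deque

# Additive lagged Fibonacci generator: x_n = (x_{n-s} + x_{n-r}) mod m.
# Lags (55, 24), modulus 2**32, and the seeding scheme are illustrative.

def lagged_fibonacci(r=55, s=24, m=2**32, seed=12345):
    # Seed the first r values with a simple LCG (an illustrative choice;
    # the quality of the stream depends on good seeding in practice).
    state = deque(maxlen=r)
    x = seed
    for _ in range(r):
        x = (1103515245 * x + 12345) % m
        state.append(x)
    while True:
        nxt = (state[-s] + state[-r]) % m  # state[-r] is the oldest value
        state.append(nxt)                  # deque(maxlen=r) drops the oldest
        yield nxt

gen = lagged_fibonacci()
sample = [next(gen) for _ in range(3)]
```

The appeal on a SIMD string is that each PE only needs a small circular buffer of r words and a couple of adds per number, so every PE can run its own stream in lockstep.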
Behavioral simulation that includes the impact of architectural choices is required to help the designer reduce, as early in the product life cycle as possible, design ambiguities and errors in complex embedded systems that are distributed (for performance and reliability requirements) and present hard real-time features (time-critical avionics functions). This paper presents a modeling and simulation approach for real-time performance analysis of advanced modular avionics architectures that must operate under hard real-time constraints. The proposed models allow design capture at an increased level of abstraction, including the description of the avionics real-time software architecture, the target modular hardware architecture made up of available components, and the clustering and mapping of the software architecture onto the hardware architecture. These models have been coupled with a commercially available discrete event simulation environment that allows real-time performance evaluation.
Optimism is a technique used by the Time Warp paradigm to make decisions about event execution under uncertainty. While the benefits of throttling Time Warp's optimism have been studied, this paper presents the benefits of extending optimism to operations other than event execution. Specifically, we discuss how optimism can be mapped to fossil collection, which has traditionally been assumed to be a non-recoverable operation that must be performed under global control. Optimistic Fossil Collection (OFC) is a technique for fossil collection that does not require global control. However, as states are fossil collected optimistically, a recovery mechanism is required to recover from errors. Performance results show that Time Warp using OFC can improve the performance of simulations on a network of workstations. A further benefit of OFC is that its simulation checkpoints provide fault tolerance beyond fossil collection alone.
We have developed a set of performance prediction tools which help to estimate the achievable speedup from parallelizing a sequential simulation. The tools focus on two important factors in the actual speedup of a parallel simulation program: the simulation protocol used, and the inherent parallelism in the simulation model. The first two tools are a performance/parallelism analyzer for a conservative, asynchronous simulation protocol and a similar analyzer for a conservative, synchronous (super-step) protocol. Each analyzer allows us to study how the speedup of a model changes with an increasing number of processors when a specific protocol is used. The third tool, a critical path analyzer, gives an ideal upper bound on the model's speedup. This paper gives an overview of the prediction tools and reports the predictions from applying the tools to a discrete-event wafer fabrication simulation model. The predictions are close to the speedups from actual parallel implementations. These tools help us to set realistic expectations of the speedup from a parallel simulation program, and to focus our work on issues which are more likely to yield performance improvement.
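The critical-path bound mentioned above rests on a simple idea: treat executed events as a dependence DAG with per-event costs; the ideal parallel time is the most expensive path, and the speedup bound is total work divided by that path length. The sketch below illustrates only that idea with a made-up two-chain graph; it is not the paper's analyzer, and the function name and graph encoding are assumptions.

```python
# Illustrative critical-path speedup bound over an event-dependence DAG.
# costs: {event: execution time}; deps: {event: [predecessor events]}.
# Ideal speedup <= total work / critical path length (with infinite CPUs).

def critical_path_speedup(costs, deps):
    finish = {}  # memoized earliest finish time of each event

    def finish_time(e):
        if e not in finish:
            preds = deps.get(e, [])
            finish[e] = costs[e] + max((finish_time(p) for p in preds),
                                       default=0.0)
        return finish[e]

    critical = max(finish_time(e) for e in costs)  # longest weighted path
    return sum(costs.values()) / critical

# Two independent chains of two unit-cost events each:
# total work = 4, critical path = 2, so the bound is 2x.
bound = critical_path_speedup(
    {"a": 1.0, "b": 1.0, "c": 1.0, "d": 1.0},
    {"b": ["a"], "d": ["c"]})
```

This is why the bound is "ideal": it ignores protocol overhead, scheduling, and communication, which is exactly the gap the two protocol-specific analyzers quantify.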