In this paper we describe an approach to exploit temporal uncertainty in parallel and distributed simulation by utilizing time intervals rather than precise time stamps. Unlike previously published work that proposes new message ordering semantics, our approach is based on conservative, time stamp order execution, and it enhances the lookahead of the simulation by pre-drawing random numbers from a distribution that models temporal uncertainty. The advantages of this approach are that it allows time intervals to be exploited using a conventional Time Stamp Order (TSO) delivery mechanism, and it offers the modeler greater statistical control over the assigned time stamps. An implementation of this approach is described and initial performance measurements are presented.
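To make the pre-drawing idea concrete, the following minimal Python sketch (UncertainScheduler, schedule, and lookahead are illustrative names, not taken from the paper) draws the uncertain delay at send time, so that a conventional TSO mechanism sees a precise time stamp and conservative lookahead is bounded below by the interval's lower edge:

    import random

    class UncertainScheduler:
        # Sketch: pre-draw a delay from the temporal-uncertainty distribution
        # so the receiver gets a precise time stamp usable by ordinary
        # Time Stamp Order (TSO) delivery.
        def __init__(self, low, high, seed=None):
            self.low, self.high = low, high   # uncertainty interval [low, high]
            self.rng = random.Random(seed)

        def schedule(self, now, payload):
            # Draw the uncertain delay at send time rather than delivery time.
            delay = self.rng.uniform(self.low, self.high)
            return (now + delay, payload)     # precise time stamp for TSO

        def lookahead(self):
            # Conservative lookahead is at least the interval's lower edge.
            return self.low

For example, scheduling an event at simulation time 10.0 over the interval [1.0, 2.0] yields a precise time stamp somewhere in [11.0, 12.0] while guaranteeing a lookahead of 1.0 to the synchronization protocol.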
Can parallel simulations efficiently exploit a network of workstations? Why haven't PDES models followed standard modeling methodologies? Will the field of PDES survive, and if so, in what form? Researchers in the PDES field have addressed these questions and others in a series of papers published in the last few years [1,2,3,4]. The purpose of this paper is to shed light on these questions by documenting an actual case study of the development of an optimistically synchronized PDES application on a network of workstations. This paper is unique in that its focus is not necessarily on performance, but on the whole process of developing a model: from the physical system being simulated, through its conceptual design, validation, implementation, and, of course, its performance. This paper also presents the first reported performance results indicating the impact of risk on performance. The results suggest that the optimal value of risk is sensitive to the latency parameters of the communications network.
We propose a computing technique for efficient parallel simulation of compute-intensive DEVS models on the IBM Cell processor, combining multi-grained parallelism and various optimizations to speed up the event execut...
ISBN: (Print) 0769519709
This paper provides an overview of the SPEEDES persistence framework that is currently used to automate checkpoint/restart for the Joint Simulation System. The persistence framework interfaces are documented in this paper and are proposed standards for the Standard Simulation Architecture. The persistence framework fundamentally keeps track of memory allocations and pointer references within a high-speed internal database linked with applications. With persistence, an object, and the collection of objects it recursively references through pointers, can be automatically packed into a buffer that is written to disk or sent as a message to another machine. Later, that buffer can be used to reconstruct the object and all of its recursively referenced objects. The reconstructed objects will likely be instantiated at different memory locations. The persistence framework automatically updates all affected pointer references to account for this fact. The persistence framework is fully integrated with the SPEEDES rollback infrastructure and built-in container class libraries that include an implementation of the Standard Template Library. These utilities automate support for optimistic event processing required by many high-performance parallel and distributed time management algorithms. In the future, persistence will enable dynamic load balancing algorithms to migrate complex objects to different processors.
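As a rough illustration of the packing and pointer-update behavior described above (a hypothetical Python sketch, not the SPEEDES interface; Node, pack, and unpack are invented names), an object graph can be flattened into a buffer with references recorded as stable indices, then reconstructed at new locations with every reference patched:

    import pickle

    class Node:
        def __init__(self, name):
            self.name = name
            self.refs = []                    # recursively referenced objects

    def pack(root):
        # Walk the graph (cycles included), assign each object a stable
        # index, and record edges by index instead of by memory address.
        index, order, stack = {}, [], [root]
        while stack:
            obj = stack.pop()
            if id(obj) in index:
                continue
            index[id(obj)] = len(order)
            order.append(obj)
            stack.extend(obj.refs)
        table = [(o.name, [index[id(r)] for r in o.refs]) for o in order]
        return pickle.dumps(table)            # buffer for disk or a message

    def unpack(buf):
        # Reconstruct the objects (at new memory locations) and re-link
        # every reference, mirroring the framework's automatic pointer updates.
        table = pickle.loads(buf)
        objs = [Node(name) for name, _ in table]
        for obj, (_, ref_ids) in zip(objs, table):
            obj.refs = [objs[i] for i in ref_ids]
        return objs[0]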
ISBN: (Print) 0769521118
An important question for network simulation is what level of detail is required to obtain a desired level of accuracy. While in some networks the level of detail is an open research issue (for example, radio propagation models in wireless networks), it has long been assumed that wired networks could be accurately modeled by fairly simple queues with a bandwidth limit and propagation delay. To our knowledge this assumption has not been widely tested. In this paper we evaluate different levels of detail for an Ethernet simulation. We consider two models for Ethernet simulation: a detailed model based on the CSMA/CD protocol and a more abstract model using a DropTail, shared queue. Using web traffic with two different TCP simulation models, we evaluated the accuracy of these Ethernet models against testbed measurements. We observed that the DropTail Ethernet model requires significantly less execution time and can accurately model performance using a bandwidth normalization factor.
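The abstract DropTail model can be pictured as the following Python sketch (a hypothetical illustration under our own assumptions; DropTailLink and its parameters are invented, and the normalization factor's value is arbitrary): a shared FIFO with a bandwidth limit, a fixed propagation delay, and tail dropping when the backlog exceeds the queue limit:

    class DropTailLink:
        def __init__(self, bandwidth_bps, prop_delay_s, queue_limit, norm=1.0):
            self.bw = bandwidth_bps * norm    # normalized effective bandwidth
            self.delay = prop_delay_s         # propagation delay (seconds)
            self.limit = queue_limit          # queue limit, in packets
            self.busy_until = 0.0             # when the link drains its backlog

        def enqueue(self, now, size_bits):
            service = size_bits / self.bw     # transmission time of this packet
            start = max(now, self.busy_until)
            # Tail drop: refuse the packet if roughly `limit` packets' worth
            # of service time is already queued ahead of it.
            if (start - now) / service > self.limit:
                return None                   # packet dropped
            self.busy_until = start + service
            return self.busy_until + self.delay   # delivery time at receiver

    # e.g. a 10 Mb/s link, 50 us propagation delay, 20-packet queue:
    link = DropTailLink(10e6, 50e-6, 20, norm=0.95)
    print(link.enqueue(0.0, 12000))           # one 1500-byte packet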
In optimistic simulations, Global Virtual Time (GVT) is considered to be the fundamental synchronization concept. In recent years, a number of methods have been proposed for determining GVT; however, most of these methods either focus on specific types of simulation problems or assume specific hardware support. In this study, the GVT problem is addressed in the context of scalability, efficiency, portability, flow control, interactive support, and real-time use. Additionally, a new GVT algorithm called SPEEDES GVT is proposed; it provides flow control by processing events risk-free while flushing out messages during the GVT computation.
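The invariant underlying any such algorithm can be stated in a few lines of Python (an illustrative sketch of the general GVT definition, not the SPEEDES algorithm itself): GVT is the minimum over the local virtual times of all processors and the time stamps of messages still in transit, which is why flushing transient messages during the computation matters for flow control:

    def compute_gvt(local_min_times, in_transit_timestamps):
        # local_min_times: smallest unprocessed event time on each processor.
        # in_transit_timestamps: stamps of messages sent but not yet received.
        return min(list(local_min_times) + list(in_transit_timestamps),
                   default=float("inf"))

    # Three processors, one message still in flight with time stamp 12.5:
    print(compute_gvt([20.0, 15.0, 17.5], [12.5]))   # -> 12.5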
A master/worker paradigm for executing large-scale parallel discrete event simulation programs over network-enabled computational resources is proposed and evaluated. In contrast to conventional approaches to parallel...
This paper presents an analytical model for evaluating the performance of Time Warp simulators. The proposed model is formalized based on two important time components in parallel and distributed processing: computation time and communication time. The communication time is modeled by buffer access time and message transmission time. The logical processes of the Time Warp simulation, and the processors executing them, are assumed to be homogeneous. Performance metrics such as rollback probability, rollback distance, elapsed time, and Time Warp efficiency are derived. More importantly, we also analyze the impact of cascading rollback waves on overall Time Warp performance. By modeling the deviation in state numbers of sender-receiver pairs, we investigate the performance of the throttled Time Warp scheme. Our analytical model shows that the deviation in state numbers and the communication delay have a profound impact on Time Warp efficiency. The performance model has been validated against implementation results obtained on a Fujitsu AP3000 parallel computer. The analytical framework can be readily used to estimate performance before a Time Warp simulator is implemented.
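To fix intuition for the metrics named above, the following Python sketch shows the usual relationship between them (a hedged illustration; the paper's closed-form expressions are not reproduced here, and the independence assumption in expected_wasted_work is ours, not necessarily the paper's):

    def time_warp_efficiency(committed_events, rolled_back_events):
        # Fraction of executed events that survive, i.e. are never undone.
        total = committed_events + rolled_back_events
        return committed_events / total if total else 1.0

    def expected_wasted_work(rollback_probability, mean_rollback_distance):
        # Mean events undone per processed event, assuming independent
        # rollbacks (an assumption of this sketch, not the paper's model).
        return rollback_probability * mean_rollback_distance

    print(time_warp_efficiency(9000, 1000))   # -> 0.9
    print(expected_wasted_work(0.1, 5.0))     # -> 0.5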
The proceedings contain 76 articles. Topics discussed include systems, networking, distributed simulation, queueing systems, multiprocessor architecture, modeling techniques, parallel systems, tools, processors, network and system simulation, optimizing parallel programs, Petri nets, neural networks and genetic algorithms, real-time systems, and systems modeling.
ISBN: (Print) 0818681489
In this contribution an implemented prototype of an object server on a parallel computer is described. In the first phase, a model has been proposed and simulated in Ada'95. Based on experience with the run-time features of the simulation program, a corresponding prototype has been implemented in C++ and tested, and the obtained results have been compared with the results of the simulation.