ISBN (print): 9780769501550
In this paper, we present a parallel simulator (SWiMNet) for PCS networks using a combination of optimistic and conservative paradigms. The proposed methodology exploits event precomputation permitted by model independence within the PCS components. The low percentage of blocked calls is exploited in the channel allocation simulation of precomputed events by means of an optimistic approach. Experiments were conducted with various call arrival rates and mobile host densities on a cluster of Pentium workstations. Performance results indicate that SWiMNet achieves a speedup of 6 with 8 workstations and a speedup of 12 with 16 workstations.
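The two-phase structure suggested by the abstract lends itself to a small illustration: events that do not depend on channel occupancy (call starts and ends for each mobile host) can be generated up front, and only the channel-allocation outcome has to be simulated afterwards, optimistically assuming blocking is rare. The sketch below is a single-cell, sequential toy under assumed names and parameters (`precompute_call_events`, `simulate_channel_allocation`, an 8-channel pool); it is not SWiMNet's actual design.

```python
import heapq
import random

def precompute_call_events(num_hosts, call_rate, sim_end):
    """Phase 1: generate call start/end events independently of channel state."""
    events = []
    for host in range(num_hosts):
        t = random.expovariate(call_rate)
        while t < sim_end:
            duration = random.expovariate(1.0 / 120.0)   # assumed mean call length of 120 s
            heapq.heappush(events, (t, "start", host))
            heapq.heappush(events, (t + duration, "end", host))
            t += duration + random.expovariate(call_rate)
    return events

def simulate_channel_allocation(events, channels=8):
    """Phase 2: replay the precomputed events against the channel pool; blocking
    (the rare case that would trigger rollback in the parallel simulator) is the
    only outcome that depends on channel state."""
    free, blocked, active = channels, 0, set()
    while events:
        _, kind, host = heapq.heappop(events)
        if kind == "start":
            if free > 0:
                free -= 1
                active.add(host)
            else:
                blocked += 1
        elif host in active:
            active.discard(host)
            free += 1
    return blocked
```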
ISBN (print): 9780769501550
The High Level Architecture (HLA) provides the specification of a software architecture for distributed simulation. The baseline definition of the HLA includes the HLA Rules, the HLA Interface Specification, and the HLA Object Model Template (OMT). The HLA Rules are a set of 10 basic rules that define the responsibilities and relationships among the components of an HLA federation. The HLA Interface Specification provides a specification of the functional interfaces between HLA federates and the HLA Runtime Infrastructure. The HLA OMT provides a common presentation format for HLA Simulation and Federation Object Models. The HLA was developed over the past three years. It is currently being applied with simulations developed for analysis, training, and test and evaluation, and is being incorporated into industry standards for distributed simulation by both the Object Management Group and the IEEE. This paper provides a discussion of key areas where there are technology challenges in the future implementation and application of the HLA.
ISBN (print): 9780769501550
This paper describes a new, auto-adaptive algorithm for dead reckoning in DIS. In general, dead-reckoning algorithms use a fixed threshold to control extrapolation errors. Since a fixed threshold cannot adequately handle the dynamic relationships between moving entities, a multi-level threshold scheme is proposed. The definition of the threshold levels is based on the concepts of area of interest (AOI) and sensitive region (SR), and the levels are adaptively adjusted according to the relative distance between entities during the simulation. Various experiments were conducted, and the results show that the proposed auto-adaptive dead reckoning algorithm achieves a considerable reduction in update packets without sacrificing extrapolation accuracy.
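To make the threshold-level idea concrete, here is a minimal sketch (the radii, threshold values, and function names are illustrative assumptions, not the paper's parameters): the error threshold used by the dead-reckoning check is picked from the distance to the other entity, tightening inside the sensitive region and relaxing outside the area of interest.

```python
import math

SR_RADIUS = 50.0       # sensitive region: entities that interact closely
AOI_RADIUS = 300.0     # area of interest: entities that can still perceive each other
THRESHOLDS = {"sensitive": 0.5, "interest": 2.0, "far": 8.0}   # allowed position error (m)

def adaptive_threshold(own_pos, other_pos):
    """Choose the extrapolation-error threshold from the inter-entity distance."""
    d = math.dist(own_pos, other_pos)
    if d <= SR_RADIUS:
        return THRESHOLDS["sensitive"]
    if d <= AOI_RADIUS:
        return THRESHOLDS["interest"]
    return THRESHOLDS["far"]

def needs_update(true_pos, extrapolated_pos, threshold):
    """Issue an entity-state update only when the dead-reckoning error
    exceeds the currently selected threshold."""
    return math.dist(true_pos, extrapolated_pos) > threshold
```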
ISBN (print): 9780769501550
We have developed a set of performance prediction tools which help to estimate the achievable speedups from parallelizing a sequential simulation. The tools focus on two important factors in the actual speedup of a parallel simulation program: (a) the simulation protocol used, and (b) the inherent parallelism in the simulation model. The first two tools are a performance/parallelism analyzer for a conservative, asynchronous simulation protocol and a similar analyzer for a conservative, synchronous ("super-step") protocol. Each analyzer allows us to study how the speedup of a model changes with an increasing number of processors when a specific protocol is used. The third tool, a critical path analyzer, gives an ideal upper bound on the model's speedup. This paper gives an overview of the prediction tools and reports the predictions from applying them to a discrete-event wafer fabrication simulation model. The predictions are close to the speedups obtained from actual parallel implementations. These tools help us to set realistic expectations of the speedup from a parallel simulation program and to focus our work on issues which are more likely to yield performance improvement.
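The critical-path bound computed by the third tool can be illustrated in a few lines: if each event may start only after the events it causally depends on, the longest dependency chain limits any parallel execution, and total work divided by that chain length gives an ideal speedup bound. The event graph and costs below are made up for illustration; the tools' real inputs (traces of the wafer fabrication model) are of course far larger.

```python
def critical_path_speedup(events):
    """events: dict id -> (cost, [predecessor ids]); assumed to be acyclic."""
    finish = {}

    def earliest_finish(eid):
        if eid not in finish:
            cost, preds = events[eid]
            finish[eid] = cost + max((earliest_finish(p) for p in preds), default=0.0)
        return finish[eid]

    total_work = sum(cost for cost, _ in events.values())
    critical_path = max(earliest_finish(e) for e in events)
    return total_work / critical_path

# Example: three independent chains of work feeding one final event.
example = {
    "a1": (1.0, []), "a2": (1.0, ["a1"]),
    "b1": (1.0, []), "b2": (1.0, ["b1"]),
    "c1": (1.0, []),
    "end": (1.0, ["a2", "b2", "c1"]),
}
print(critical_path_speedup(example))   # 6 units of work over a path of length 3 -> 2.0
```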
ISBN (print): 9780769501550
This paper presents a checkpointing scheme for optimistic simulation which is a mixed approach between periodic and probabilistic checkpointing. The probabilistic part, based on statistical data collected during the simulation, aims at recording as checkpoints those states of a logical process that have a high probability of being restored due to rollback (this is done in order to make those states immediately available). The periodic part prevents the performance degradation due to state reconstruction (coasting forward) cost whenever the collected statistics do not allow the identification of states highly likely to be restored. Therefore, this scheme can be seen as a highly general solution to tackle the checkpoint problem in optimistic simulation. A performance comparison with previous solutions is carried out through a simulation study of a store-and-forward communication network with a two-dimensional torus topology.
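As a rough illustration of such a mixed rule (the probability estimate, its threshold, and the fallback period below are assumptions, not the paper's policy), a checkpoint is taken either because runtime statistics mark the state as likely to be restored by a rollback, or because too many events have passed since the last checkpoint, which bounds the coasting-forward cost.

```python
class MixedCheckpointer:
    def __init__(self, period=8, restore_prob_threshold=0.3):
        self.period = period                       # periodic fallback interval (events)
        self.threshold = restore_prob_threshold    # probabilistic trigger
        self.events_since_ckpt = 0

    def restore_probability(self, event):
        """Placeholder for statistics collected at runtime (e.g., how often a
        rollback has restored a state taken just before this kind of event)."""
        return event.get("observed_restore_freq", 0.0)

    def should_checkpoint(self, event):
        self.events_since_ckpt += 1
        likely_restored = self.restore_probability(event) >= self.threshold
        periodic_due = self.events_since_ckpt >= self.period
        if likely_restored or periodic_due:
            self.events_since_ckpt = 0
            return True
        return False
```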
ISBN (print): 9780769501550
Ordering of simultaneous events in DES is an important issue, as it has an impact on modelling expressiveness, model correctness, and causal dependencies. In sequential DES this problem has attracted much attention over the years, and most systems provide the user with tools to deal with such issues. It has also attracted some attention within the PDES community, and we present an overview of these efforts. We have, however, not yet found a scheme which provides us with the desired functionality. Thus, we present and evaluate some simple schemes to achieve a well-defined ordering of events, and means to identify both causally dependent and independent events with identical timestamps, in the context of optimistic simulations. These schemes should also be applicable to conservative PDES.
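One simple way to obtain a well-defined total order over simultaneous events, shown purely as an illustration (a common tie-breaking construction, not necessarily one of the schemes evaluated in the paper): extend the timestamp with an "age" counter that grows along zero-delay causal chains, plus stable tie-breakers, so a caused event always sorts after its cause while independent events at the same time still order deterministically.

```python
from dataclasses import dataclass

@dataclass(order=True, frozen=True)
class ExtTimestamp:
    time: float          # simulation time
    age: int             # incremented along zero-delay causal chains
    src_lp: int          # deterministic tie-break between independent events
    seq: int             # per-LP sequence number

def schedule_caused_event(cause, delay, src_lp, seq):
    """An event scheduled with zero delay inherits the time but a larger age,
    so it is ordered after its cause; with a positive delay the age resets."""
    if delay == 0.0:
        return ExtTimestamp(cause.time, cause.age + 1, src_lp, seq)
    return ExtTimestamp(cause.time + delay, 0, src_lp, seq)

# Example: e2 is caused by e1 with zero delay, so it sorts strictly after e1
# even though both carry time 5.0; e3 from another LP at the same time orders
# deterministically via (src_lp, seq).
e1 = ExtTimestamp(5.0, 0, src_lp=1, seq=42)
e2 = schedule_caused_event(e1, 0.0, src_lp=1, seq=43)
e3 = ExtTimestamp(5.0, 0, src_lp=2, seq=7)
assert e1 < e2 and sorted([e2, e3, e1]) == [e1, e3, e2]
```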
ISBN (print): 9780769501550
This paper introduces the Critical Channel Traversing (CCT) algorithm, a new scheduling algorithm for both sequential and parallel discrete event simulation. CCT is a general conservative algorithm that is aimed at the simulation of low-granularity network models on shared-memory multiprocessor machines. An implementation of the CCT algorithm within a kernel called TasKit has demonstrated excellent performance for large ATM network simulations when compared to previous sequential, optimistic, and conservative kernels. TasKit has achieved two to three times speedup on a single processor with respect to a splay-tree central-event-list based sequential kernel. On a 16-processor (R8000) Silicon Graphics PowerChallenge, TasKit has achieved an event rate of 1.2 million events per second and a speedup of 26 relative to the sequential kernel for a large ATM network model. This is achieved through a multi-level scheduling scheme that supports the scheduling of large grains of computation even with low-granularity events. Performance is also enhanced by supporting good cache behavior and automatic load balancing. The paper describes the algorithm and its motivation, proves its correctness, and briefly presents performance results for TasKit.
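The conservative intuition behind "critical channels" can be sketched compactly (this is a generic illustration of channel clocks and lookahead, not TasKit's data structures or CCT's multi-level scheduler): a logical process may only execute events up to the smallest bound over its input channels, and the channel attaining that bound, the critical one, is what must be advanced to unblock the receiver.

```python
class InputChannel:
    def __init__(self, name, lookahead=0.0):
        self.name = name
        self.clock = 0.0          # timestamp carried by the latest (possibly null) message
        self.lookahead = lookahead
        self.queue = []           # pending (timestamp, event) pairs, kept in timestamp order

    def bound(self):
        """Lower bound on the timestamp of any future arrival over this channel."""
        return self.clock + self.lookahead

def critical_channel(inputs):
    """The channel whose bound currently limits the receiving LP's progress."""
    return min(inputs, key=InputChannel.bound)

def safe_events(inputs):
    """Events the receiving LP may execute without risking causality errors
    (tie handling among simultaneous events is glossed over here)."""
    horizon = critical_channel(inputs).bound()
    ready = []
    for ch in inputs:
        while ch.queue and ch.queue[0][0] <= horizon:
            ready.append(ch.queue.pop(0))
    return sorted(ready, key=lambda e: e[0])
```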
ISBN (print): 9780769501550
This paper introduces a novel algorithm, the Active Virtual Network Management Protocol, for predictive network management. It explains how the Active Virtual Network Management Protocol facilitates the management of an active network by allowing future predicted state information within an active network to be available to network management algorithms. This is accomplished by coupling ideas from optimistic discrete event simulation with active networking. The optimistic discrete event simulation method used is a form of self-adjusting Time Warp. It is self-adjusting because the system adjusts for predictions which are inaccurate beyond a given tolerance. The concepts of the streptichron and autoanaplasis are introduced as mechanisms that take advantage of the enhanced flexibility and intelligence of active packets. Finally, it is demonstrated that the Active Virtual Network Management Protocol is a feasible concept.
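The self-adjusting behavior described above can be sketched as follows (the scalar state, the tolerance check, and every name here are illustrative assumptions, not the protocol's actual mechanisms): state is speculatively computed ahead of real time, and a measurement that drifts beyond the tolerance discards the speculative values from that point onward and recomputes them from the corrected value.

```python
class SelfAdjustingPredictor:
    def __init__(self, tolerance):
        self.tolerance = tolerance
        self.predictions = {}     # future_time -> predicted value

    def predict(self, now, horizon, step, model, current_value):
        """Optimistically precompute state up to `horizon` ahead of real time."""
        value = current_value
        for t in range(now + step, now + horizon + 1, step):
            value = model(value)
            self.predictions[t] = value

    def verify(self, t, measured, model, horizon, step):
        """Compare a measurement against the prediction; roll back if off-tolerance."""
        predicted = self.predictions.get(t, measured)
        if abs(predicted - measured) > self.tolerance:
            # Discard speculative state at and after t, then recompute from reality.
            self.predictions = {k: v for k, v in self.predictions.items() if k < t}
            self.predict(t, horizon, step, model, measured)
            return False
        return True
```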
ISBN (print): 9780769501550
In this paper we present a software approach, namely Fast-Software-Checkpointing (FSC), to reduce the running time of the state saving protocol in optimistic parallel discrete event simulation. The idea behind FSC is to use the instructions performed during the execution of an event as part of the state saving protocol, so that the total number of instructions due to state saving is reduced. Under FSC, saving the state of a logical process prior to the execution of an event e requires time proportional to the number of state variables not updated by e's execution, as only those variables must be copied. This highlights a certain dualism between FSC and incremental state saving. We show, however, that there is a basic difference between the two solutions: in FSC some of the state saving instructions are actually event routine instructions, while in incremental state saving they are only added to and mixed with the latter. We also present a simple software architecture to support FSC, and simulation results to demonstrate the effectiveness of this solution. The obtained data show that FSC, combined with a sparse state saving strategy, may represent the best checkpointing solution both for simulations with medium/small state granularity and for simulations with large state granularity, even when small (but non-minimal) portions of the state are updated by event execution. FSC may therefore be suited to a wide class of simulation problems.
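One plausible reading of the scheme, sketched under assumed details (the explicit write-set and dict-based buffers are illustrative, not the paper's mechanism): the pre-event buffer is left untouched and serves as the checkpoint, a new buffer receives copies of only the variables the event will not update, and the event's own writes then complete the new buffer, so they double as state-saving work.

```python
def execute_with_fsc(current_state, event_handler, write_set):
    # Copy only the variables this event leaves untouched into the new buffer.
    new_state = {k: v for k, v in current_state.items() if k not in write_set}
    # The event reads the old buffer and writes its results into the new one;
    # these writes are the "reused" instructions of the state-saving protocol.
    event_handler(old=current_state, new=new_state)
    checkpoint = current_state   # intact pre-event state, available for rollback
    return new_state, checkpoint

# Hypothetical event handler under this convention:
def arrival_event(old, new):
    new["queue_len"] = old["queue_len"] + 1   # updated variable: written, not copied
    # Variables this event does not touch were already copied into `new` above.
```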
Development of multiple robot systems which solve complex and dynamic problems in parallel and distributed manners is one of the key issues in robotics research. Multiple robot systems require robust methods for robots to identify one another in order to realize collaborative behaviors. This paper proposes a method using omnidirectional vision sensors for identification among the robots. In addition to its several advantages as a vision sensor for a mobile robot, the omnidirectional vision sensor brings a significant benefit for realizing collaborative behaviors in multiple robot systems. After discussing the algorithm, this paper shows several simulation results as well as experimental results in a real environment.