ISBN:
(Print) 9789549641523
The paper presents an approach for achieving better performance of Monte Carlo simulation by implementing the process in a distributed computing environment. The simulation process is analyzed, and subprocesses suitable for parallel execution are identified. The proposed approach is demonstrated by applying Monte Carlo simulation to evaluate the market risk of a financial position in an experimental distributed computing environment. Comparison data, obtained by carrying out the same simulation on a single computer and in the distributed environment, are presented and analyzed.
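The abstract gives no code, but the scenario-generation subprocess it identifies is embarrassingly parallel. A minimal sketch of that idea, assuming a normal one-day return model with hypothetical parameters (MU, SIGMA, POSITION_VALUE are illustrative, not from the paper), using Python's multiprocessing:

    # Parallel Monte Carlo estimation of Value-at-Risk: each worker simulates
    # an independent batch of portfolio P&L paths with its own RNG seed.
    import numpy as np
    from multiprocessing import Pool

    MU, SIGMA, HORIZON = 0.0005, 0.02, 1.0   # assumed daily return model
    POSITION_VALUE = 1_000_000.0

    def simulate_batch(args):
        seed, n_paths = args
        rng = np.random.default_rng(seed)
        returns = rng.normal(MU * HORIZON, SIGMA * np.sqrt(HORIZON), n_paths)
        return POSITION_VALUE * returns       # simulated P&L per path

    if __name__ == "__main__":
        n_workers, paths_per_worker = 4, 250_000
        with Pool(n_workers) as pool:
            batches = pool.map(simulate_batch,
                               [(s, paths_per_worker) for s in range(n_workers)])
        pnl = np.concatenate(batches)
        var_99 = -np.percentile(pnl, 1)       # 99% one-day VaR
        print(f"99% VaR over {pnl.size} paths: {var_99:,.0f}")

Because the batches are independent, the only serial steps are distributing seeds and aggregating the percentile, which is what makes the distributed version worthwhile.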
ISBN:
(Print) 0769519709
Packet-level discrete-event network simulators use an event to model the movement of each packet in the network. This results in accurate models, but requires that many events be executed to simulate large, high-bandwidth networks. Fluid-based network simulators abstract the model to consider only changes in the rates of traffic flows. This can yield large performance advantages, though information about individual packets is lost, making the approach inappropriate for many simulation and emulation studies. This paper presents a hybrid model in which packet flows and fluid flows coexist and interact. This enables studies in which background traffic is modeled using fluid flows and foreground traffic is modeled at the packet level. Results presented show up to a 20-times speedup using this technique. Accuracy is within 4% for latency and 15% for jitter in many cases.
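To make the packet/fluid interaction concrete, here is a toy sketch of one hybrid queue under our own simplifying assumptions (a piecewise-constant fluid rate profile and illustrative link parameters; the paper's actual model is not reproduced here). Foreground packets are discrete events whose service rate is whatever capacity the fluid background leaves free:

    LINK_RATE = 10e6                      # bits/s
    fluid_profile = [(0.0, 4e6), (0.5, 8e6), (1.0, 2e6)]   # (start_time, rate)

    def fluid_rate(t):
        rate = 0.0
        for start, r in fluid_profile:    # profile is sorted by start time
            if t >= start:
                rate = r
        return rate

    def transmit(packet_bits, arrival_time):
        """Departure time of a foreground packet through the hybrid queue."""
        t, remaining = arrival_time, packet_bits
        while remaining > 0:
            avail = max(LINK_RATE - fluid_rate(t), 1.0)   # leftover capacity
            # advance in small steps so fluid rate changes are observed
            dt = min(remaining / avail, 0.01)
            remaining -= avail * dt
            t += dt
        return t

    print(transmit(12_000, 0.49))  # a 1500-byte packet arriving near a rate change

The speedup comes from the background flows contributing only a handful of rate-change events rather than one event per packet.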
ISBN:
(Print) 9780769528984
High-fidelity simulations of mixed wired and wireless network systems depend on detailed simulation models, especially in the lower layers of the network stack. However, detailed modeling can result in prohibitive computation cost. In recent years, commercial graphics cards (GPUs) have drawn attention from the general computing community due to their superior computation capability. In this paper we present our experience with using commercial graphics cards to speed up the execution of network simulation models. First, we propose a general simulation framework supporting GPU-accelerated simulation models; a software abstraction is designed to facilitate the use and development of GPU-based models. Second, we implement and evaluate two simulation models using GPUs. We observed that GPUs can yield significant performance improvements for large configurations of the model, as compared with pure CPU-based computation, with no degradation in the accuracy of the results. This benefit is particularly impressive for models that include significant data-parallel computation. However, we also observed that the overhead introduced by GPUs makes them less effective in improving the execution time of other network models. This study suggests that, besides parallel computing and grid computing, network simulations can also be scaled by harnessing the computation capability of GPUs and, potentially, other external computational hardware.
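The paper's software abstraction is not specified in the abstract; the sketch below shows one plausible shape for it, with the model written once against an array API and CuPy used as a stand-in GPU backend (CuPy and the path-loss example are our assumptions, not the paper's framework):

    # Backend abstraction: the same data-parallel model runs on CPU (NumPy)
    # or GPU (CuPy) depending on what is available at import time.
    import numpy as np
    try:
        import cupy as cp     # assumed GPU backend; falls back to CPU below
        xp = cp
    except ImportError:
        xp = np

    def path_loss_matrix(positions, exponent=2.0):
        """Data-parallel pairwise path-loss computation for a wireless model."""
        diff = positions[:, None, :] - positions[None, :, :]
        dist = xp.sqrt((diff ** 2).sum(axis=-1)) + 1e-9
        return dist ** (-exponent)

    nodes = xp.asarray(np.random.rand(1024, 2) * 1000.0)  # node coordinates (m)
    loss = path_loss_matrix(nodes)
    print(loss.shape)   # (1024, 1024); large configurations favor the GPU

This mirrors the abstract's finding: the O(n^2) pairwise computation amortizes GPU transfer overhead at large n, while small or control-heavy models do not.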
ISBN:
(Print) 0769519709
Parallel computers are used to execute discrete-event simulations in contexts where a serial computer is unable to provide answers fast enough, and/or is unable to hold the simulation state in memory. Traditional research in parallel simulation has focused on the degree to which a parallel simulator provides speedup. This paper takes a different view and asks how a parallel simulator provides increased user-defined utility as a result of being able to simulate larger problem sizes. We develop a model in which the utility of running a particular simulation is an increasing function of the problem size, and ask whether overall utility accrues faster on a parallel computer if one uses it to simulate one large problem in parallel, several smaller problem instances concurrently and each in parallel, or many small problem instances concurrently on single processors. We show that under our model assumptions, utility is accrued faster either by running one large problem instance in parallel using all the available processors, or by running one small problem instance per processor, concurrently. When we consider how to optimize utility per unit cost, we find that one either runs a large problem using all available processors, multiple small problems with one per processor, or a small problem using exactly one processor. Determination of the optimal configuration depends on the user's assessment of how rapidly utility grows with the problem size. Our main contribution is to show the linkage between the effectiveness of parallel simulation and a user's perception of the value of larger problem sizes. We show that if that utility grows less than linearly in the problem size, then the use of parallelism is sub-optimal. We give precise relationships between our model parameters that govern when parallelism optimizes utility and when it optimizes price-performance. We see that when model parameters are in a "normal" range, a user's perception of utility must grow significantly with the problem size.
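The sub/super-linear threshold can be checked numerically under assumed functional forms (these are our assumptions, not necessarily the paper's exact model): take utility u(n) = n**alpha and linear speedup, so a size-n problem takes time n serially or n/p on p processors.

    # Utility accrual rate: one big parallel run vs. one problem per processor.
    def utility_rate_one_big(P, N, alpha):
        # one problem of size N*P on all P processors: time (N*P)/P = N
        return (N * P) ** alpha / N

    def utility_rate_many_small(P, N, alpha):
        # P problems of size N, one per processor, each taking time N
        return P * N ** alpha / N

    for alpha in (0.5, 1.0, 2.0):
        big = utility_rate_one_big(4, 100, alpha)
        small = utility_rate_many_small(4, 100, alpha)
        best = "one big parallel run" if big > small else "one problem per processor"
        print(f"alpha={alpha}: big={big:.1f}, small={small:.1f} -> {best}")

The ratio big/small equals P**(alpha - 1), so parallelism wins exactly when alpha > 1, matching the abstract's claim that sub-linear utility growth makes parallelism sub-optimal.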
This paper is a reminder of the danger of allowing 'risk' when synchronizing a parallel discrete-event simulation: a simulation code that runs correctly on a serial machine may, when run in parallel, fail catastrophically. This can happen when Time Warp presents an 'inconsistent' message to an LP, a message that makes absolutely no sense given the LP's state. Failure may result if the simulation modeler did not anticipate the possibility of this inconsistency. While the problem is not new, there has been little discussion of how to deal with it; furthermore, the problem may not be evident to new users or potential users of parallel simulation. This paper shows how the problem may occur, and the damage it may cause. We show how one may eliminate inconsistencies due to lagging rollbacks and stale state, but then show that, so long as risk is allowed, it is still possible for an LP to be placed in a state that is inconsistent with model semantics, again making it vulnerable to failure. We finally show how simulation code can be tested to ensure safe execution under a risk-free protocol. Whether risky or risk-free, we conclude that under current practice the development of correct and safe parallel simulation code is not transparent to the modeler; certain protections must be included in model code or model testing that are not rigorously necessary if the simulation were executed only serially.
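A tiny sketch of the kind of defensive guard the paper argues modelers must add (the server-queue example and names are ours, used only to illustrate an inconsistent message): under risky synchronization, Time Warp may deliver a departure event for a job the LP never queued, and naive code would crash before rollback repairs the state.

    # An LP whose handler tolerates messages that make no sense for its state.
    class ServerLP:
        def __init__(self):
            self.queue = []          # jobs waiting for service

        def on_arrive(self, job_id):
            self.queue.append(job_id)

        def on_depart(self, job_id):
            # Inconsistent under risk: a departure for a job we never queued.
            if job_id not in self.queue:
                return               # guard instead of crashing; the message
                                     # will be cancelled by a later rollback
            self.queue.remove(job_id)

    lp = ServerLP()
    lp.on_depart("job-42")   # stale/inconsistent message: safely ignored
    lp.on_arrive("job-42")
    lp.on_depart("job-42")   # consistent message: processed normally

On a serial simulator the guard is dead code; under a risky protocol its absence is exactly the catastrophic-failure mode the paper describes.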
ISBN:
(Print) 9780738143057
Peachy Parallel Assignments are high-quality assignments for teaching parallel and distributed computing. They are selected competitively for presentation at the Edu* workshops. All of the assignments have been used successfully in class, and they are selected for their ease of adoption by other instructors and for being cool and inspirational to students. This paper presents a paper-and-pencil assignment asking students to analyze the performance of different system configurations, and an assignment in which students parallelize a simulation of the evolution of simple living organisms.
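The abstract does not spell out the performance analysis in the paper-and-pencil assignment; as a guess at the flavor of such an exercise, configuration comparisons commonly reduce to Amdahl's law, where f is the parallelizable fraction of the work:

    # Amdahl's law: speedup on p processors for parallel fraction f.
    def speedup(f, p):
        return 1.0 / ((1.0 - f) + f / p)

    for p in (4, 16, 64):
        print(p, round(speedup(0.9, p), 2))  # 90%-parallel code tops out near 10x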
ISBN:
(Print) 0769515886
Composite events are required by many applications that need to obtain events from various sources, correlate them, and activate appropriate actions. One of the major issues in composite event systems is scalability. This paper reports on research that follows the situation concept of the Amit system and proposes a parallel execution model for event composition. The paper describes the model and its difficulties, and demonstrates its usefulness using simulation results.
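One plausible shape for such a parallel execution model (the partitioning rule and toy composition rule below are our illustration, not Amit's actual semantics) is to shard primitive events by correlation key, so each worker detects composite situations independently:

    # Partitioned composite-event detection: same key -> same worker.
    from collections import defaultdict
    from concurrent.futures import ThreadPoolExecutor

    def detect(key, events):
        """Toy composite rule: fire when both 'bid' and 'ask' occur for a key."""
        kinds = {e["kind"] for e in events}
        return f"match({key})" if {"bid", "ask"} <= kinds else None

    stream = [{"key": "IBM", "kind": "bid"}, {"key": "IBM", "kind": "ask"},
              {"key": "HPQ", "kind": "bid"}]
    partitions = defaultdict(list)
    for event in stream:
        partitions[event["key"]].append(event)

    with ThreadPoolExecutor(max_workers=4) as pool:
        results = pool.map(lambda kv: detect(*kv), partitions.items())
    print([r for r in results if r])   # ['match(IBM)']

Scalability then comes from adding workers, since situations with different keys never need to synchronize; the hard cases the paper alludes to are rules that correlate across keys.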
We simulate ballistic particle deposition wherein a large number of spherical particles are 'dropped' vertically over a planar horizontal surface. Upon first contact (with the surface or with a previously deposited particle) each particle stops. This model helps material scientists to study adsorption and sediment formation [1]. The model is sequential, with particles deposited one by one. We have found an equivalent formulation using a continuous-time random process, and we simulate the latter in parallel using a method similar to the one previously employed for simulating Ising spins [2]. We augment the parallel algorithm for simulating Ising spins with several techniques aimed at increasing the efficiency of producing the particle configuration and collecting statistics. Some of these techniques are similar to those in [3], [4], and [5]. We implement the resulting algorithm on a 16K-PE MasPar MP-1 and a 4K-PE MasPar MP-2. The parallel code runs on the MasPar computers two orders of magnitude faster than an optimized sequential code runs on a fast workstation.
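For readers unfamiliar with the model, here is a lattice caricature of the sequential "stick on first contact" rule (the paper simulates continuum spheres and parallelizes via a continuous-time formulation; this sketch illustrates only the underlying deposition dynamics):

    # 1D lattice ballistic deposition: a falling particle sticks at the first
    # contact with the surface column or either neighboring column.
    import random

    def deposit(width, n_particles, seed=0):
        random.seed(seed)
        h = [0] * width                     # column heights
        for _ in range(n_particles):
            i = random.randrange(width)
            left, right = h[(i - 1) % width], h[(i + 1) % width]
            h[i] = max(left, h[i] + 1, right)
        return h

    print(deposit(16, 100))

The sequential dependence (each particle's resting place depends on all earlier ones) is what makes the continuous-time reformulation necessary for parallel execution.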
In this paper we discuss new synchronization algorithms for parallel and distributed discrete-event simulation (PDES) which exploit the capabilities and behavior of the underlying communications network. Previous work in this area has assumed the network to be a black box that provides a one-to-one, reliable, in-order message-passing paradigm. In our work, we utilize the broadcast capability of the ubiquitous Ethernet for synchronization computations, and both unreliable and reliable protocols for message passing, to achieve more efficient communication between the participating systems. We describe two new algorithms for computing a distributed snapshot of global reduction operations on monotonically increasing values. The algorithms require O(N) messages (where N is the number of systems participating in the snapshot) in the normal case. We specifically target the use of these algorithms in distributed discrete-event simulation to determine a global lower bound on time-stamp (LBTS), but we expect the algorithms to have applicability outside the simulation community.
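A minimal sketch of why monotonicity gives an O(N)-message snapshot (the class names are ours, and the broadcast is modeled here as a simple function call; on Ethernet the request is a single frame, which is the efficiency the paper exploits): because each node's local bound only increases, a value read at any time during the snapshot remains a valid lower bound.

    # One broadcast request, one reply per node: N messages for a global min.
    class Node:
        def __init__(self, name, local_min):
            self.name = name
            self.local_min = local_min      # monotonically increasing over time

        def on_snapshot_request(self):
            return self.local_min           # one reply message per node

    def lbts_snapshot(nodes):
        replies = [n.on_snapshot_request() for n in nodes]   # N messages
        return min(replies)

    nodes = [Node("A", 12.0), Node("B", 9.5), Node("C", 17.2)]
    print(lbts_snapshot(nodes))   # 9.5: a safe lower bound on time-stamp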
Parallel simulation techniques are designed to increase simulation model performance by exploiting model concurrency. Unfortunately, designing efficient parallel simulations is not always an easy task. Most existing t...