Although a number of car-traffic simulators have been developed for various purposes, none of the existing simulators enhances the simulation accuracy using sensor data or allows the system structure to be re-configured depending on the application. Our goal was to develop a highly accurate, highly modular, flexible, and scalable micro-model car-traffic simulation system. The HLA (High Level Architecture) was applied to every system module as a standard interface between modules. This provides an efficient means for evaluating and validating a variety of micro-model simulation schemes. Our ongoing projects consist of running several identical simulations concurrently, with different parameter sets. By sending the results of these simulations to a manager module, which analyzes both the parameter sets and the simulated results, the manager module can evaluate the best simulated results and determine the next action by comparing these results with the sensor data. In this system, the sensor data or the statistical data on the flow of traffic, obtained by monitoring real roads, is used to improve the simulation accuracy. Future systems are planned to employ real-time sensor data, where the input of the data occurs at almost real-time speed. In this paper, we discuss the design of an HLA-based car-traffic simulation system and the construction of a sensor-data fusion algorithm. We also discuss our preliminary evaluation of the results obtained with this system. The results show that the proposed fusion algorithm can adjust the simulation accuracy to the logged sensor data within a difference of 5% (minimum 1.5%) in a specific time period. We also found that simulations with 500 different parameter sets can be executed within 5 minutes using 8 simulator modules.
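As a rough illustration of the manager module's selection step, the sketch below picks, from concurrently simulated parameter sets, the one whose flow series deviates least from the logged sensor data. It is written in Go (which the paper does not use), and all names and numbers (ParamSet, relDiff, the flow values) are hypothetical, chosen only for illustration.

```go
// Minimal sketch of a manager selecting the best of several concurrently
// simulated parameter sets by comparing each result with logged sensor data.
package main

import (
	"fmt"
	"math"
)

type ParamSet struct{ Headway, Accel float64 } // hypothetical tuning knobs

// relDiff returns the mean relative difference between simulated and logged
// flows over a time window (the paper reports deviations of 5% or better).
func relDiff(sim, logged []float64) float64 {
	var s float64
	for i := range logged {
		s += math.Abs(sim[i]-logged[i]) / logged[i]
	}
	return s / float64(len(logged))
}

func main() {
	sensorLog := []float64{120, 135, 150, 140} // vehicles per interval (stand-in data)
	candidates := map[ParamSet][]float64{      // results gathered from simulator modules
		{1.5, 2.0}: {110, 130, 155, 150},
		{1.2, 2.5}: {118, 136, 148, 141},
	}
	best, bestErr := ParamSet{}, math.Inf(1)
	for p, sim := range candidates {
		if e := relDiff(sim, sensorLog); e < bestErr {
			best, bestErr = p, e
		}
	}
	fmt.Printf("best params %+v, mean deviation %.1f%%\n", best, 100*bestErr)
}
```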
A social contact network (SCN) models the daily contacts between people in real life. It consists of agents and locations; when agents visit a location at the same time, social interactions can be established among them. Simulations over SCNs have been employed to study social dynamics such as disease spread in a population. Because of the scale of SCNs and execution-time requirements, the simulations are usually run in parallel. However, a challenge for parallel simulation is that the structure of an SCN is naturally skewed, with a few hub locations that have far more visitors than others. These hub locations can cause load imbalance and heavy communication between partitions, which in turn degrade simulation performance. This article proposes a comprehensive solution to address this challenge. First, the hub locations are decomposed into small locations, so that the SCN can be divided into partitions with better-balanced workloads. Second, the agents are decomposed to exploit data locality, so that the overall communication across partitions can be greatly reduced. Third, two enhanced execution mechanisms are designed for locations and agents, respectively, to improve simulation parallelism. To evaluate the efficiency of the proposed solution, an epidemic simulation was developed and extensive experiments were conducted on two computer clusters using three SCN datasets of different scales. The results demonstrate that our approach can significantly improve the execution performance of the simulation.
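The hub-decomposition step can be pictured with a small sketch. The Go code below is not from the article; splitLocation and the capacity threshold are hypothetical names chosen to illustrate the idea of breaking one oversized location into sub-locations that a partitioner can balance.

```go
// Minimal sketch of hub decomposition: a location with too many visitors is
// split into sub-locations of bounded size, enabling balanced partitioning.
package main

import "fmt"

// splitLocation decomposes a hub location (represented here as just its
// visitor list) into sub-locations of at most capSize visitors each.
func splitLocation(visitors []int, capSize int) [][]int {
	var subs [][]int
	for len(visitors) > capSize {
		subs = append(subs, visitors[:capSize])
		visitors = visitors[capSize:]
	}
	if len(visitors) > 0 {
		subs = append(subs, visitors)
	}
	return subs
}

func main() {
	hub := make([]int, 10) // agent IDs visiting one hub location
	for i := range hub {
		hub[i] = i
	}
	for i, sub := range splitLocation(hub, 4) {
		fmt.Printf("sub-location %d: agents %v\n", i, sub)
	}
}
```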
In this paper, we describe LUNES-Blockchain, an agent-based simulator of blockchains that relies on parallel and distributed simulation (PADS) techniques to obtain high scalability. The software is organized as a multi-level simulator that makes it possible to simulate a virtual environment made of many nodes running the protocol of a specific Distributed Ledger Technology (DLT), such as the Bitcoin or Ethereum blockchains. This virtual environment is executed on top of a lower-level Peer-to-Peer (P2P) network overlay, which can be structured with different topologies and with a given number of nodes and edges. Functionalities at different levels of abstraction are managed separately, by different software modules and with different time granularity. This allows for accurate simulations where (and when) accuracy is needed, and enhances the simulation performance. Using LUNES-Blockchain, it is possible to simulate different types of attacks on the DLT. In this paper, we specifically focus on the P2P layer, considering selfish mining, the 51% attack and the Sybil attack. Concerning selfish mining and the 51% attack, our aim is to understand how much the hash-rate (i.e. a general measure of the processing power in the blockchain network) of the attacker can influence the outcome of the misbehavior. On the other hand, in the filtering denial of service (i.e. Sybil attack), we investigate which dissemination protocol in the underlying P2P network makes the system more resilient to a varying number of nodes that drop the messages. The results confirm the viability of simulation-based techniques for the investigation of security aspects of DLTs.
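To give a feel for the hash-rate question that LUNES-Blockchain studies at a much finer grain, here is a stand-alone Monte Carlo toy in Go (not code from the simulator): an attacker holding a share q of the total hash-rate races the honest chain from z blocks behind, in the spirit of Nakamoto's catch-up analysis.

```go
// Toy Monte Carlo: how an attacker's hash-rate share affects the chance of
// winning a 51%-style chain race from a fixed deficit.
package main

import (
	"fmt"
	"math/rand"
)

// catchUp simulates one race: each step, the attacker mines the next block
// with probability q (its hash-rate share); otherwise the honest chain grows.
func catchUp(q float64, z int, rng *rand.Rand) bool {
	deficit := z
	for steps := 0; steps < 10000; steps++ {
		if rng.Float64() < q {
			deficit--
		} else {
			deficit++
		}
		if deficit < 0 {
			return true // attacker's private chain is now longer
		}
	}
	return false
}

func main() {
	rng := rand.New(rand.NewSource(1))
	const trials = 20000
	for _, q := range []float64{0.1, 0.3, 0.45} {
		wins := 0
		for i := 0; i < trials; i++ {
			if catchUp(q, 3, rng) {
				wins++
			}
		}
		fmt.Printf("hash-rate %.0f%%: success %.1f%%\n", 100*q, 100*float64(wins)/trials)
	}
}
```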
In scientific simulations, the results generated usually come from a stochastic process. New solutions with the aim of improving these simulations have been proposed, but the problem is how to compare these solutions, since the results are not deterministic, and consequently how to guarantee that the output results are statistically trustworthy. In this work we apply a statistical approach in order to define the transient and steady state in discrete-event distributed simulation. We used linear regression and the batch method to find the optimal simulation size. Our contributions are as follows: we have applied and adapted a simple statistical approach in order to define the optimal simulation length; we propose approximating the output with a normal distribution instead of generating a sufficiently large number of replications; and the method can be used in other kinds of non-terminating scientific simulations where the data either have a normal distribution or can be approximated by one.
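A minimal sketch of the statistical machinery, written in Go and assuming the batch method means averaging the output series in fixed-size batches and fitting a least-squares line through the batch means: a near-zero slope is then read as evidence that the transient has passed and steady state has been reached. The data and thresholds are illustrative only, not from the paper.

```go
// Batch the output series, fit a regression line through the batch means,
// and use the slope as a steady-state indicator.
package main

import "fmt"

func batchMeans(x []float64, batches int) []float64 {
	size := len(x) / batches
	means := make([]float64, batches)
	for b := 0; b < batches; b++ {
		var s float64
		for _, v := range x[b*size : (b+1)*size] {
			s += v
		}
		means[b] = s / float64(size)
	}
	return means
}

// slope returns the least-squares slope of y against 1..n.
func slope(y []float64) float64 {
	n := float64(len(y))
	var sx, sy, sxy, sxx float64
	for i, v := range y {
		x := float64(i + 1)
		sx, sy, sxy, sxx = sx+x, sy+v, sxy+x*v, sxx+x*x
	}
	return (n*sxy - sx*sy) / (n*sxx - sx*sx)
}

func main() {
	var out []float64 // a decaying transient followed by a flat steady state
	for i := 0; i < 400; i++ {
		out = append(out, 10.0/float64(i+1)+5.0)
	}
	m := batchMeans(out, 20)
	fmt.Printf("slope over batch means: %.4f (near zero => steady state)\n", slope(m))
}
```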
Exploiting the multi-core architecture is an important issue in obtaining high performance in parallel and distributed discrete-event simulations. However, the simulation features must fit the parallel programming model in order to increase performance. In this paper we describe our experience developing a hybrid MPI+OpenMP version of our parallel and distributed discrete-event individual-oriented fish-schooling simulator. In the hybrid approach, we map our simulation features as follows: communication between the Logical Processes happens via message passing, whereas the computation of the individuals is carried out by OpenMP threads. In addition, we propose a new data structure for partitioning the fish clusters which avoids the critical section in the OpenMP code. As a result, the hybrid version significantly improves the total execution time for very large numbers of individuals, because it decreases both the communication and process-management overhead while increasing the utilization of cores through resource sharing.
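The key data-structure idea, giving each thread a private bucket so that no critical section is needed, can be transposed to Go for illustration. The paper itself uses MPI+OpenMP; the goroutine version below is an analogue of the lock-free partitioning pattern, not the authors' code.

```go
// Each worker appends to its own slice, so no lock or critical section is
// needed during the parallel phase; results are merged afterwards.
package main

import (
	"fmt"
	"sync"
)

func main() {
	const workers = 4
	const fish = 1000
	local := make([][]int, workers) // one private bucket per worker
	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func(w int) {
			defer wg.Done()
			for i := w; i < fish; i += workers {
				// "compute" individual i, then record it lock-free
				local[w] = append(local[w], i)
			}
		}(w)
	}
	wg.Wait()
	total := 0
	for _, l := range local {
		total += len(l)
	}
	fmt.Println("individuals processed:", total)
}
```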
ISBN (print): 9781450315104
In this paper we deal with the impact of multi- and many-core processor architectures on simulation. Despite the fact that modern CPUs have an increasingly large number of cores, most software is still unable to take advantage of them. In recent years, many tools, programming languages and general methodologies have been proposed to help build scalable applications for multi-core architectures, but those solutions are somewhat limited. Parallel and distributed simulation is an interesting application area in which efficient and scalable multi-core implementations would be desirable. In this paper we investigate the use of the Go programming language to implement optimistic parallel simulations based on the Time Warp mechanism. Specifically, we describe the design, implementation and evaluation of a new parallel simulator. The scalability of the simulator is studied on a modern multi-core CPU, and the effects of Hyper-Threading technology on optimistic simulation are analyzed.
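For readers unfamiliar with Time Warp, the Go sketch below shows the optimistic core the paper builds on: state is saved before each event, and a straggler (an event arriving with a timestamp in the past) forces a rollback and re-execution in timestamp order. This is a single-process toy, not the paper's simulator; anti-messages and GVT are deliberately omitted.

```go
// Minimal Time Warp core: optimistic processing with state saving, rollback
// on a straggler, and re-execution of the undone events in timestamp order.
package main

import (
	"fmt"
	"sort"
)

type event struct{ ts, delta int }
type done struct {
	ev  event
	pre int // state before the event, kept for rollback
}

func main() {
	// Arrival order; the last event is a straggler (timestamp 2 < LVT 5).
	arrivals := []event{{1, 5}, {3, 2}, {5, 1}, {2, 7}}
	state := 0
	var processed []done
	for _, ev := range arrivals {
		redo := []event{ev}
		// Roll back optimistically processed events later than the straggler.
		for len(processed) > 0 && processed[len(processed)-1].ev.ts > ev.ts {
			last := processed[len(processed)-1]
			processed = processed[:len(processed)-1]
			state = last.pre
			redo = append(redo, last.ev)
		}
		// Re-execute the undone events plus the new one in timestamp order.
		sort.Slice(redo, func(i, j int) bool { return redo[i].ts < redo[j].ts })
		for _, e := range redo {
			processed = append(processed, done{e, state})
			state += e.delta
		}
	}
	fmt.Println("final state:", state) // matches strict timestamp-order execution
}
```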
In a Time-Warp-based distributed simulation system, a simulation process must save its states and events to handle rollbacks. Periodically, the global minimum of the timestamps of events and messages in the entire system is calculated. This value is known as the global virtual time (GVT), and it plays an important role in a Time Warp system. GVT is computed only periodically because of the computation overhead, so an important problem is to determine the optimal interval between two GVT computations. In this paper we present a new approach that uses a simple reinforcement learning technique to select the optimal GVT interval. Used in a Time-Warp-based distributed VLSI simulation system, our method was successful in selecting a good GVT interval and improving the system's performance.
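A minimal sketch of the idea, assuming an epsilon-greedy bandit over a handful of candidate intervals; the paper's actual reinforcement learning formulation and reward signal may differ. The reward function here is a stand-in that penalizes both frequent GVT computation (overhead) and rare computation (memory held and long rollbacks).

```go
// Epsilon-greedy selection of a GVT interval from a set of candidates,
// using a noisy stand-in reward in place of measured throughput.
package main

import (
	"fmt"
	"math/rand"
)

func main() {
	intervals := []int{50, 100, 200, 400} // candidate GVT periods (in events)
	value := make([]float64, len(intervals))
	count := make([]int, len(intervals))
	rng := rand.New(rand.NewSource(1))

	// Stand-in for a measured reward such as committed events per second.
	reward := func(iv int) float64 {
		return 100.0/(1000.0/float64(iv)+0.01*float64(iv)) + rng.NormFloat64()
	}

	for i, iv := range intervals { // try each arm once
		value[i], count[i] = reward(iv), 1
	}
	const eps = 0.1
	for step := 0; step < 2000; step++ {
		a := 0
		if rng.Float64() < eps { // explore
			a = rng.Intn(len(intervals))
		} else { // exploit the best estimate so far
			for i := range value {
				if value[i] > value[a] {
					a = i
				}
			}
		}
		r := reward(intervals[a])
		count[a]++
		value[a] += (r - value[a]) / float64(count[a]) // incremental mean
	}
	best := 0
	for i := range value {
		if value[i] > value[best] {
			best = i
		}
	}
	fmt.Printf("selected GVT interval: %d (estimated reward %.1f)\n", intervals[best], value[best])
}
```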
ISBN (print): 9780818675393
Most experimental studies of the performance of parallel simulation protocols use speedup or the number of events processed per unit time as the performance metric. Although helpful in evaluating the usefulness of parallel simulation for a given simulation model, these metrics tell us little about the efficiency of the simulation protocol used. In this paper, we describe an Ideal Simulation Protocol (ISP), based on the concept of the critical path, which experimentally computes the best possible execution time for a simulation model on a given parallel architecture. Since ISP computes the bound by actually executing the model on the given parallel architecture, it is much more realistic than a bound computed by uniprocessor critical-path analysis. The paper illustrates, using parameterized synthetic benchmarks, how an ISP-based performance evaluation can lead to much better insights into the performance of parallel simulation protocols than would be gained from speedup graphs alone.
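The critical-path bound that ISP realizes experimentally can be illustrated with a toy computation in Go (the event data and field names below are hypothetical): an event can start only after its predecessor on the same LP and the event that sent it a message have both finished, so the bound is the longest dependency chain of service times. ISP's contribution is to measure this bound on the real parallel hardware rather than analytically, as in this toy.

```go
// Toy critical-path bound over an event dependency graph, assuming events
// are listed in topological order.
package main

import "fmt"

type ev struct {
	lpPred, sender int     // indices of dependencies (-1 if none)
	service        float64 // time to process this event
}

func main() {
	evs := []ev{
		{-1, -1, 1.0}, // e0 on LP A
		{-1, -1, 2.0}, // e1 on LP B
		{0, 1, 1.5},   // e2 on LP A, triggered by a message from e1
		{2, -1, 0.5},  // e3 on LP A
	}
	finish := make([]float64, len(evs))
	var bound float64
	for i, e := range evs {
		start := 0.0
		if e.lpPred >= 0 && finish[e.lpPred] > start {
			start = finish[e.lpPred]
		}
		if e.sender >= 0 && finish[e.sender] > start {
			start = finish[e.sender]
		}
		finish[i] = start + e.service
		if finish[i] > bound {
			bound = finish[i]
		}
	}
	fmt.Printf("critical-path lower bound: %.1f time units\n", bound)
}
```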