ISBN (print): 0769521118
Computer simulation is the most common approach to studying wireless ad-hoc routing algorithms. The results, however, are only as good as the models the simulation uses. One should not underestimate the importance of validation, as inaccurate models can lead to wrong conclusions. In this paper we use direct-execution simulation to validate the radio models used by ad-hoc routing protocols against real-world experiments. The paper documents a common testbed that supports direct execution of a set of ad-hoc routing protocol implementations in a wireless network simulator. The testbed reads traces generated from real experiments and uses them to drive direct-execution implementations of the routing protocols. In doing so we reproduce the same network conditions as in the real experiments. By comparing routing behavior measured in real experiments with behavior computed by the simulation, we are able to validate the models of radio behavior upon which protocol behavior depends. We conclude that it is possible to obtain fairly accurate results using a simple radio model, but the routing behavior is quite sensitive to one of this model's parameters. The implication is that one should i) use a more complex radio model that explicitly models point-to-point path loss, ii) use measurements from an environment typical of the one of interest, or iii) study behavior over a range of environments to identify sensitivities.
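To illustrate how much a single parameter of a simple radio model can matter, the toy Python sketch below (not taken from the paper; the transmit power, receiver sensitivity, and reference-loss values are assumptions) uses a log-distance path-loss model and shows how the effective radio range, and hence the neighbor set a routing protocol sees, shifts with the path-loss exponent.

```python
import math

def received_power_dbm(tx_power_dbm, distance_m, path_loss_exponent,
                       ref_loss_db=40.0, ref_distance_m=1.0):
    """Log-distance path-loss model: loss grows as 10*n*log10(d/d0)."""
    loss_db = ref_loss_db + 10.0 * path_loss_exponent * math.log10(
        max(distance_m, ref_distance_m) / ref_distance_m)
    return tx_power_dbm - loss_db

def link_up(distance_m, path_loss_exponent,
            tx_power_dbm=15.0, rx_sensitivity_dbm=-81.0):
    """A link 'exists' if the received power clears the receiver sensitivity."""
    return received_power_dbm(tx_power_dbm, distance_m,
                              path_loss_exponent) >= rx_sensitivity_dbm

# A small change in the exponent flips which neighbors a node can reach,
# which in turn changes the routes a protocol would discover.
for n in (2.0, 2.4, 2.8):
    reach = max(d for d in range(1, 2000) if link_up(d, n))
    print(f"path-loss exponent {n}: approximate radio range {reach} m")
```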
ISBN (print): 0769522327
As Distributed Interactive Applications (DIAs) become increasingly prominent in the video game industry, they must scale to accommodate progressively more users while maintaining a globally consistent worldview. However, network constraints, such as bandwidth, limit the amount of communication allowed between users. Several methods exist for reducing the number of network packets while maintaining consistency. These include dead reckoning and the hybrid strategy-based modelling approach. The latter method combines a short-term model, such as dead reckoning, with a long-term strategy model of user behaviour. Employing the strategy that most closely represents user behaviour has been shown to reduce the number of network packets that must be transmitted to maintain consistency. In this paper, a novel method for constructing multiple long-term strategies using dead reckoning and polygons is described. Furthermore, the algorithms are implemented in an industry-proven game engine known as Torque. A series of experiments is executed to investigate the effects of varying the spatial density of strategy models on the number of packets that need to be transmitted to maintain the global consistency of the DIA. The results show that increasing the spatial density of strategy models allows a higher consistency to be achieved with fewer packets using the hybrid strategy-based model than with pure dead reckoning. In some cases, the hybrid strategy-based model completely replaces dead reckoning as a means of communicating updates.
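As context for the short-term model that the hybrid approach builds on, the Python sketch below illustrates plain dead reckoning: the sender transmits a state packet only when the receiver's linear extrapolation of the last sent state has drifted past an error threshold. This is an illustrative sketch, not the Torque implementation described in the paper; the threshold value and state layout are assumptions.

```python
from dataclasses import dataclass

@dataclass
class EntityState:
    x: float
    y: float
    vx: float
    vy: float

def extrapolate(state: EntityState, dt: float) -> tuple[float, float]:
    """Linear dead-reckoning prediction of an entity's position after dt seconds."""
    return state.x + state.vx * dt, state.y + state.vy * dt

def needs_update(true_x, true_y, last_sent: EntityState, dt,
                 error_threshold=0.5):
    """Send a state packet only when the remote prediction has drifted too far."""
    pred_x, pred_y = extrapolate(last_sent, dt)
    drift = ((true_x - pred_x) ** 2 + (true_y - pred_y) ** 2) ** 0.5
    return drift > error_threshold

# Example: a player who keeps moving on the last reported heading generates
# no packets; a sharp turn exceeds the threshold and triggers one update.
last = EntityState(x=0.0, y=0.0, vx=1.0, vy=0.0)
print(needs_update(true_x=2.0, true_y=0.0, last_sent=last, dt=2.0))   # False
print(needs_update(true_x=1.0, true_y=1.5, last_sent=last, dt=2.0))   # True
```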
ISBN (print): 0769522327
Successful development of large-scale, complex, and distributed real-time systems commonly relies on models developed separately for simulation studies and software implementation. Systems theory provides sound modeling principles to characterize structural and behavioral aspects of systems across time and space. The behavior of these models can be observed using simulation protocols that correctly interpret time-based logical dynamics. Similarly, object-orientation theories and software architecture principles enable modeling the static and dynamic behavior of systems. While models described in either system-theoretic or object-oriented languages may be used for both software design and simulation modeling, each has its own strengths and weaknesses. For example, a class of system-theoretic modeling approaches called the Discrete Event System Specification (DEVS) provides an appropriate basis for developing simulation models exhibiting concurrent and distributed behavior. Similarly, the Unified Modeling Language with real-time constructs (UML-RT) can be used to develop software design models that can be implemented and executed. Since software models are not well suited for use as simulation models, and simulation models may not adequately serve as software design blueprints, it is important to examine these approaches. We show some of the key shortcomings of these simulation and software design modeling approaches by developing detailed specifications and an implementation of a coffee machine, with a focus on their treatment of logical and physical time.
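To make the system-theoretic side concrete, the following Python sketch hand-rolls a minimal atomic-DEVS-style component with the usual elements: external transition, internal transition, output function, and time advance. It is only an illustration of the DEVS structure the abstract refers to, not the paper's coffee machine specification; the phase names and the 30-second brew time are assumptions.

```python
import math

class BrewUnit:
    """Minimal atomic DEVS-style model of a single brew cycle (illustrative only)."""

    def __init__(self):
        self.phase = "idle"       # state
        self.sigma = math.inf     # time advance: time until the next internal event

    def ext_transition(self, elapsed, event):
        """delta_ext: an external 'request' input starts a timed brewing phase."""
        if self.phase == "idle" and event == "request":
            self.phase, self.sigma = "brewing", 30.0  # assumed 30 s brew time
        else:
            self.sigma -= elapsed                     # otherwise keep counting down

    def output(self):
        """lambda: emitted just before the internal transition fires."""
        return "cup_ready" if self.phase == "brewing" else None

    def int_transition(self):
        """delta_int: after the brew interval expires, return to idle."""
        self.phase, self.sigma = "idle", math.inf

    def time_advance(self):
        """ta: how long the model stays in its current state."""
        return self.sigma

# A simulator calls ext_transition on inputs, then output/int_transition
# whenever time_advance() elapses, advancing logical time explicitly.
m = BrewUnit()
m.ext_transition(elapsed=0.0, event="request")
print(m.time_advance(), m.output())   # 30.0 cup_ready
```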
ISBN (print): 0769522106
The paper presents a new architecture for systems based on run-time reconfigured shared memory processor clusters, intended for implementation using network-on-chip technology. Clusters constitute local data exchange sub-networks, which dynamically connect processors with shared memory modules. The sub-networks enable exposure of data from one processor's data cache for reading by other processors into their data caches. This inter-processor data exchange paradigm, called "communication on the fly", enables direct communication between processor data caches. Dual-ported data caches are assumed, enabling parallel reading and writing of data between the caches and memory modules. In the proposed architecture, programs are executed according to a cache-controlled macro data flow execution model. Computational tasks are defined so as to eliminate reloading of data caches during task execution. A special program macro data flow graph representation enables modeling of program behaviour under different architectural and program structure assumptions. Simulation results of the symbolic execution of matrix multiplication program graphs are presented in the paper. They show the suitability of the proposed architecture for very fine-grain parallel computations.
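The idea of defining tasks so that data caches are not reloaded during task execution can be pictured as block-partitioning the computation so that each task's working set fits in a cache. The Python sketch below does this for matrix multiplication; it is a generic illustration, and the block size and cache capacity are assumed values, not figures from the paper.

```python
# Illustrative blocking of C = A * B so that each task's working set
# (one block of A, one block of B, one block of C) fits in a data cache.
BLOCK = 64                 # elements per block edge (assumed)
CACHE_BYTES = 256 * 1024   # assumed per-processor data cache capacity
assert 3 * BLOCK * BLOCK * 8 <= CACHE_BYTES, "working set must fit in cache"

def make_tasks(n):
    """Emit macro-dataflow tasks: each computes one (i, j, k) block update."""
    tasks = []
    for i in range(0, n, BLOCK):
        for j in range(0, n, BLOCK):
            for k in range(0, n, BLOCK):
                # The (i, j, k) task reads A[i:i+B, k:k+B] and B[k:k+B, j:j+B]
                # and accumulates into C[i:i+B, j:j+B]; tasks sharing the same
                # (i, j) block form a dependency chain, others may run in parallel.
                tasks.append((i, j, k))
    return tasks

print(len(make_tasks(512)))   # 8 * 8 * 8 = 512 block tasks
```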
ISBN (print): 3540240764
Simulation of large, complex systems in various fields of science and engineering requires tremendous computational resources; however, sequential execution algorithms severely limit its performance. Recently there has therefore been a great deal of interest in parallel and distributed simulation, which runs on multiple processors to accelerate the simulation. This paper begins with an introduction to synchronization mechanisms. The emphasis of the paper is to present and describe the implementation of the flexible cycle algorithm. This improved algorithm solves some fatal problems of conservative and optimistic algorithms, combining the best of both methods. Finally, we analyze in detail how to compute the performance parameter M of this algorithm.
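For background on the conservative side that such hybrid schemes build on (this is not the flexible cycle algorithm itself), the Python sketch below shows the two basic conservative-synchronization quantities: the safe execution bound derived from the input channel clocks, and the null-message timestamp a logical process promises to its neighbors using its lookahead. The channel names and lookahead value are illustrative assumptions.

```python
def safe_bound(channel_clocks):
    """A logical process can safely execute every pending event whose
    timestamp is at most the minimum time guaranteed on its input channels."""
    return min(channel_clocks.values())

def null_message_time(local_clock, lookahead):
    """Promise sent to neighbors: no future message will carry a smaller timestamp."""
    return local_clock + lookahead

channels = {"lp1": 12.0, "lp2": 15.5}   # latest guaranteed time per input channel
print(safe_bound(channels))                                  # 12.0
print(null_message_time(local_clock=12.0, lookahead=2.0))    # 14.0
```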
The paper deals with simulation and analysis tools for a control system with distributed inputs and outputs based on the TCP/IP and UDP/IP protocols. These protocols are not strictly designed for industrial control applicati...