ISBN (print): 9780897913157
This paper concerns the parallel simulation of queueing network models (QNMs) using the conservative (Chandy-Misra) paradigm. Most empirical studies of conservative parallel simulation have used QNMs as benchmarks. For the most part, these studies concluded that the conservative paradigm is unsuitable for speeding up the simulation of QNMs, or that it is only suitable for simulating a very limited subclass of these models (e.g., those containing only FCFS servers). In this paper we argue that these are unnecessarily pessimistic conclusions. On the one hand, we show that the structure of some QNMs inherently limits the attainable simulation speedup. On the other hand, we show that QNMs without such limitations can be efficiently simulated using some recently introduced implementation techniques. We present an analytic method for determining an upper bound on speedup, and use this method to identify QNM structures that will exhibit poor simulation performance. We then survey a number of promising implementation techniques, some of which are quite general in nature and others of which apply specifically to QNMs. We show how to extend the latter to a larger class of service disciplines than had been considered previously.
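The abstract does not reproduce the analytic bound itself. As a rough illustration of how a QNM's structure can cap attainable speedup, the following minimal sketch computes a workload-balance style upper bound (total event work divided by the heaviest logical process, further capped by the processor count); the function name and the per-server workloads are illustrative assumptions, not the paper's method.

```python
# Hypothetical sketch: a workload-balance style upper bound on speedup for a
# conservatively simulated QNM. Each logical process (LP) simulates one server;
# 'work' is the total event-processing cost charged to that LP.

def upper_bound_speedup(work_per_lp, num_processors):
    """Speedup cannot exceed total work / heaviest LP, nor the processor count."""
    total_work = sum(work_per_lp)
    bottleneck = max(work_per_lp)
    return min(total_work / bottleneck, num_processors)

# A QNM dominated by one central server is structurally limited to low speedup
# no matter how many processors are available.
balanced = [100, 100, 100, 100]            # four equally loaded servers
central = [400, 30, 30, 30]                # one dominant central server
print(upper_bound_speedup(balanced, 8))    # -> 4.0
print(upper_bound_speedup(central, 8))     # -> ~1.23
```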
ISBN (print): 9780897913157
Over the past fifteen years, empirical studies of the reference behavior of a number of database systems have produced seemingly contradictory results. The presence or absence of locality of reference and sequentiality have both been reported (or denied) in various papers. As such, the performance analyst or database implementor is left with little concrete guidance in the form of expected reference behavior of a database system under a realistic workload. We present empirical evidence that all of the previous results about database reference behavior are correct (or incorrect). That is, if the database reference sequence is viewed on a per-transaction-instance or per-database basis, almost any reference behavior is discernible. Previous results which report the absolute absence or presence of a certain form of reference behavior were almost certainly derived from reference traces which were dominated by transactions or databases exhibiting that behavior. Our sample consists of roughly twenty-five million block references, from 350,000 transaction executions, directed at 175 operational on-line databases at two major corporations. As such, the sample is an order of magnitude more comprehensive than any other reported in the literature. We also present evidence that reference behavior is predictable and exploitable when viewed on a per-transaction or per-database basis. The implications of this predictability for effective buffer management are discussed.
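As a minimal sketch of the per-transaction view of reference behavior described above, the snippet below classifies each transaction's block-reference string separately instead of analyzing the pooled trace; the classification rule and thresholds are illustrative assumptions, not taken from the study.

```python
# Hypothetical sketch: classify each transaction's block-reference string
# separately rather than analyzing the pooled trace. Thresholds are illustrative.

def classify(refs):
    """Label one transaction's block references as 'sequential', 'rereferencing', or 'other'."""
    if len(refs) < 2:
        return "other"
    steps = [b - a for a, b in zip(refs, refs[1:])]
    frac_sequential = sum(1 for s in steps if s == 1) / len(steps)
    frac_repeat = 1 - len(set(refs)) / len(refs)
    if frac_sequential > 0.8:
        return "sequential"
    if frac_repeat > 0.5:
        return "rereferencing"   # strong per-transaction locality
    return "other"

# Pooled, these traces look mixed; per transaction, each has a clear pattern.
trace_a = [10, 11, 12, 13, 14, 15]        # table scan
trace_b = [7, 42, 7, 42, 7, 42, 7]        # index root / leaf re-references
print(classify(trace_a), classify(trace_b))  # sequential rereferencing
```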
ISBN (print): 9780897913157
Recent advances in fiber optic technology (viz., its promise to provide information-carrying capacity in the Gbps range over long repeater-free distances) have triggered tremendous activity in the study of unidirectional bus networks (because signal flow in the fiber is unidirectional). A popular network structure that has received significant attention is the Dual-bus Unidirectional Broadcast System (DUBS) network topology. Most of the access mechanisms studied on this structure are based on round-robin scheduling (or some variation thereof). However, since round-robin schemes suffer a loss of channel capacity because of their inter-round overhead (which can be significant for long high-speed buses), a probabilistic scheduling strategy, called the pi-persistent protocol, has recently been proposed and studied for single-channel unidirectional bus systems. Our concern here is to apply this probabilistic scheduling strategy to each bus in DUBS, and to study the corresponding network performance. In so doing, we allow stations to buffer multiple packets, represent a station's queue size by a Markov chain model, and employ an independence assumption. We find that the average packet delay is bounded and the maximum network throughput approaches two pkt/slot with increasing buffer size. Further, the protocol's performance is insensitive to bus characteristics, and it appears to be particularly well suited for fiber-optic network applications requiring long distances and high bandwidth. Simulation results, which verify the analytical model, are also included.
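The analysis itself is not reproduced here; the following sketch merely simulates pi-persistent slot access on a single unidirectional bus (a backlogged station transmits into a passing empty slot with its own persistence probability), which is the behavior the dual-bus model applies to each bus. The arrival probabilities, persistence probabilities, and buffer size are illustrative assumptions.

```python
import random

# Hypothetical sketch of pi-persistent slot access on ONE unidirectional bus:
# in each slot, the first backlogged station to see the empty slot transmits
# into it with its own persistence probability p_i.

def simulate_bus(p, lam, B, slots=100_000, seed=1):
    rng = random.Random(seed)
    queue = [0] * len(p)
    delivered = 0
    for _ in range(slots):
        slot_busy = False
        for i in range(len(p)):                    # upstream-to-downstream order
            if rng.random() < lam[i] and queue[i] < B:
                queue[i] += 1                      # Bernoulli arrival, finite buffer
            if not slot_busy and queue[i] > 0 and rng.random() < p[i]:
                queue[i] -= 1                      # transmit into the empty slot
                slot_busy = True
                delivered += 1
    return delivered / slots                       # throughput in packets/slot (per bus)

print(simulate_bus(p=[0.5, 0.5, 0.5], lam=[0.2, 0.2, 0.2], B=4))
```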
ISBN (print): 9780897913157
When many or all of the recipients of a multicast message respond to the multicast's sender, their responses may overflow the sender's available buffer space. Buffer overflow is a serious, known problem of broadcast-based protocols, and can be troublesome when as few as three or four recipients respond. We develop analytical models that calculate the expected number of buffer overflows and that can be used to estimate the number of buffers necessary for an application. The common cure for buffer overflow requires that recipients delay their responses by some random amount of time in order to increase the minimum spacing between response messages, eliminate collisions on the network, and decrease the peak processing demand at the sender. In our table-driven algorithm, the sender tries to minimize the multicast's latency, the elapsed time between its initial transmission of the multicast and its reception of the final response, given the number of times (rounds) it is willing to retransmit the multicast. It includes in the multicast the time interval over which it anticipates receiving the responses, the round timeout. We demonstrate that the latency of single-round multicasts exceeds the latency of multiple-round multicasts. We show how recipients minimize the sender's buffer overflows by independently choosing their response times as a function of the round's timeout, the sender's buffer size, and the number of other recipients.
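A minimal sketch of the response-spreading idea, assuming recipients simply pick their response time uniformly over the advertised round timeout (the actual algorithm is table driven and also accounts for the sender's buffer size and the number of recipients):

```python
import random

# Hypothetical sketch: each recipient independently picks a response time
# uniformly over the advertised round timeout, which spreads responses out
# and keeps the number arriving in any buffer-drain interval small.
# The uniform rule and the parameters below are illustrative assumptions.

def response_times(num_recipients, round_timeout, seed=None):
    rng = random.Random(seed)
    return sorted(rng.uniform(0.0, round_timeout) for _ in range(num_recipients))

def max_outstanding(times, drain_interval):
    """Worst-case number of responses landing within any window of length drain_interval."""
    worst = 0
    for i, t in enumerate(times):
        in_window = sum(1 for u in times[i:] if u - t < drain_interval)
        worst = max(worst, in_window)
    return worst

times = response_times(num_recipients=50, round_timeout=1.0, seed=7)
print(max_outstanding(times, drain_interval=0.02))  # compare against the sender's buffer count
```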
In single-buffer models, the authors assume that a new message is generated by station k at an exponentially distributed time (with mean 1/λ_k) after the service to the previous message has been completed. They also assume that a single server visits M stations (indexed as station 1 through M) in cyclic order (called polling), in priority order, or in a composite order. The server moves from station to station instantaneously. In the case of composite order, each message at station k belongs to one of P priority classes. After the service of a message of a certain priority at station k, a message of the highest priority throughout the system at the station that is closest to station k in the cyclic order is selected for the next service. Applying the method of imbedded Markov chains, the authors analyze the stochastic process of buffer occupancy states in these systems.
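A small simulation of the single-buffer cyclic-polling case can serve as a sanity check on such an analysis; the sketch below assumes exponential service times and instantaneous switchover, and its generation rates and service mean are illustrative, not taken from the paper.

```python
import random

# Hypothetical sketch: M single-buffer stations served in cyclic (polling)
# order. Station k generates its next message an Exp(lambda_k) time after its
# previous message completes service; the server switches instantaneously.

def simulate_polling(lam, service_mean, horizon=100_000.0, seed=3):
    rng = random.Random(seed)
    M = len(lam)
    next_arrival = [rng.expovariate(lam[k]) for k in range(M)]   # one buffer per station
    t, served, k = 0.0, 0, 0
    while t < horizon:
        if next_arrival[k] <= t:                                 # a message waits at station k
            t += rng.expovariate(1.0 / service_mean)             # serve it (exponential service, an assumption)
            next_arrival[k] = t + rng.expovariate(lam[k])        # next message generated after service completes
            served += 1
        elif all(a > t for a in next_arrival):
            t = min(next_arrival)                                # system empty: jump to next generation time
        k = (k + 1) % M                                          # move to the next station in cyclic order
    return served / t                                            # throughput (messages / unit time)

print(simulate_polling(lam=[0.3, 0.3, 0.3, 0.3], service_mean=0.5))
```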
ISBN (print): 9780897913157
Threads (“lightweight” processes) have become a common element of new languages and operating systems. This paper examines the performance implications of several data structure and algorithm alternatives for thread management in shared-memory multiprocessors. Both experimental measurements and analytical model projections are presented. For applications with fine-grained parallelism, small differences in thread management are shown to have significant performance impact, often posing a tradeoff between throughput and latency. Per-processor data structures can be used to improve throughput, and in some circumstances to avoid locking, improving latency as well. The method used by processors to queue for locks is also shown to affect performance significantly. Normal methods of critical resource waiting can substantially degrade performance with moderate numbers of waiting processors. We present an Ethernet-style backoff algorithm that largely eliminates this effect.
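As an illustration of the kind of Ethernet-style backoff described above, the sketch below wraps a non-blocking lock attempt in a randomized, doubling delay; the delay constants and the use of Python threading primitives as a stand-in for hardware test-and-set are assumptions, not the paper's implementation.

```python
import random
import threading
import time

# Hypothetical sketch of an Ethernet-style backoff lock: a failed attempt to
# grab the lock is followed by a randomized delay whose upper bound doubles,
# so waiting processors do not all retry at once.

class BackoffLock:
    def __init__(self, base=1e-5, cap=1e-3):
        self._flag = threading.Lock()
        self._base, self._cap = base, cap

    def acquire(self):
        delay = self._base
        while not self._flag.acquire(blocking=False):   # try without blocking
            time.sleep(random.uniform(0.0, delay))      # randomized backoff
            delay = min(delay * 2, self._cap)           # double the bound up to a cap

    def release(self):
        self._flag.release()

lock = BackoffLock()
counter = 0

def worker():
    global counter
    for _ in range(1000):
        lock.acquire()
        counter += 1                                    # critical section
        lock.release()

threads = [threading.Thread(target=worker) for _ in range(4)]
for th in threads:
    th.start()
for th in threads:
    th.join()
print(counter)                                          # 4000
```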
ISBN (print): 0897912543
The proceedings incorporates 49 papers, of which 20 are presented in the form of abstracts only. The material covers a wide range of interest related to performance evaluation. The papers are arranged according to the following subjects: parallelism, techniques for benchmarking database systems, communication networks, load sharing, distributed systems, case studies, multiprocessor analysis, measurement techniques, program behavior, paging, and queueing network modeling. Topics considered include: stochastic Petri nets, local area networks (LANs), file management, protocols, expert systems, heterogeneous configurations, load balancing, algorithms, disk scheduling, network topology, task graphs, parallel processing, queueing models, symmetrical networks, file locking, Markov processes, resource allocation, microcode instrumentation, operating system kernels, reliability theory, computer architecture, data flow, system software, packet switching, interconnection networks, and graph theory.
The issue includes three conference papers. These papers discuss performance comparison of multimicro and mainframe database architectures, performance analysis of parallel processing systems, and parallel discrete event simulation using shared memory.
ISBN (print): 0911801421
A validation program has been developed to check the accuracy and validity of a simulation model being developed for the US Federal Aviation Administration. The program is composed of four components: conceptual model validation, input data validation, computerized model correctness checks, and operational validation. As part of the operational validation, model output is compared with three aviation delay databases which vary in their definitions and measurement of delay. Therefore, a major part of the validation program is to determine adjustments that must be made to model output in order to make valid comparisons.
ISBN (print): 9780897912549
A real storage management algorithm called Adaptive Control of Page-frame Supply (ACPS) is described. ACPS employs three strategies: prediction of the demand for real page frames, page replacement based on the prediction, and working set control. Together, these strategies constitute the real page frame allocation method, and contribute to short and stable response times in conversational processing. ACPS is experimentally applied to the VOS3 operating system. Evaluation of ACPS on a real machine shows that TSS response times are not affected too strongly by king-size jobs and that ACPS is successful in avoiding paging delay and thrashing. ACPS prevents extreme shortages of real storage in almost all cases.
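ACPS's internal formulas are not given in the abstract; as a rough sketch of the "predict page-frame demand, then control working sets" idea, the snippet below forecasts demand with exponential smoothing and trims the largest working sets when the forecast exceeds the frames available. The smoothing factor and trimming rule are illustrative assumptions.

```python
# Hypothetical sketch of "predict page-frame demand, then control working sets":
# demand is forecast with exponential smoothing, and when the total working-set
# demand exceeds the frames available, the largest working sets are trimmed first.

def predict_demand(prev_forecast, observed, alpha=0.5):
    """Exponentially smoothed forecast of next-interval page-frame demand."""
    return alpha * observed + (1 - alpha) * prev_forecast

def trim_working_sets(working_sets, frames_available):
    """Shrink the largest working sets until the total fits in real storage."""
    sizes = dict(working_sets)
    while sum(sizes.values()) > frames_available:
        biggest = max(sizes, key=sizes.get)          # e.g. a "king-size" batch job
        if sizes[biggest] == 1:
            break                                    # nothing left to trim
        sizes[biggest] -= 1                          # steal one frame at a time
    return sizes

print(predict_demand(prev_forecast=900, observed=1100))                             # 1000.0
print(trim_working_sets({"tss_user": 50, "batch_job": 400}, frames_available=300))  # batch job trimmed to 250
```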