We evaluate the data traffic performance of a centralized packet access protocol for microcellular radio systems supporting both speech and data users. A time-slotted radio channel is assumed. Speech contention is decoupled from data contention to give speech priority over data. A free access stack algorithm is used for handling data contention. An out-slot access scheme is used in which the slots are divided into user-information transmission slots and contention slots for sending transmission requests. The contention slots are subdivided into minislots to improve the access capacity. The performance of the out-slot algorithm is compared with that of a previously proposed in-slot scheme, in which all slots can be used for sending user information. A memoryless channel, with capture and errors, is considered. The effects of speech traffic on data performance are evaluated. Moreover, the paper presents a method for evaluating the packet error probability of a packet cellular system. This method is used for evaluating the proposed algorithm in a microcellular system. An access technique with coordinated operation among cochannel cells is studied. The effects of sectorization on data performance and protocol unfairness are investigated. Different frequency reuse factors are taken into consideration.
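For orientation, a minimal simulation sketch of the free access stack contention idea referred to above, assuming ternary (idle/success/collision) feedback, Bernoulli request arrivals, and a single stream of contention minislots; the speech/data slot structure, capture, and channel errors of the actual protocol are not modeled, and all parameter names are illustrative.

import random

def simulate_free_access_stack(arrival_prob=0.2, n_slots=100_000, p_stay=0.5, seed=1):
    """Mean resolution delay (in contention slots) of transmission requests."""
    rng = random.Random(seed)
    counters, births, delays = [], [], []
    for slot in range(n_slots):
        # Free access: a newly generated request joins immediately with counter 0.
        if rng.random() < arrival_prob:
            counters.append(0)
            births.append(slot)
        transmitters = [i for i, c in enumerate(counters) if c == 0]
        if len(transmitters) == 1:                  # success: the request leaves the system
            i = transmitters[0]
            delays.append(slot - births[i])
            del counters[i], births[i]
            counters = [c - 1 for c in counters]    # everyone moves one level toward the top
        elif not transmitters:                      # idle slot
            counters = [c - 1 for c in counters]    # everyone moves one level toward the top
        else:                                       # collision: colliders split, the rest move down
            counters = [(0 if rng.random() < p_stay else 1) if c == 0 else c + 1
                        for c in counters]
    return sum(delays) / max(len(delays), 1)

print(simulate_free_access_stack())

The free access property shows up in new requests entering the contention process with counter 0 as soon as they arrive, rather than waiting for the current collision-resolution interval to finish.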
This correspondence presents sequential algorithms for soft-decision decoding of linear block codes. They use a stack algorithm based on the trellis of the code. We are interested in the trellis as a means of avoiding path-decoding repetitions; in addition, the possibility of bidirectional decoding offers a chance to increase the likelihood of the explored paths. We have developed three successive algorithms that substantially reduce the overall complexity, most of all in the worst decoding case, while giving near-maximum-likelihood performance. This matters because the worst case determines the maximum buffer size necessary in the decoder.
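As a concrete illustration of stack decoding over a code trellis, the sketch below performs a priority-first search of the syndrome trellis defined by a parity-check matrix; it omits the path-merging and bidirectional decoding that the correspondence builds on, and the (7,4) Hamming code and noise level are arbitrary choices.

import heapq
import numpy as np

def stack_decode(H, r):
    """Priority-first (stack) soft-decision decoding on the syndrome trellis of H.
    H: (n-k) x n binary parity-check matrix; r: received values for BPSK with 0 -> +1, 1 -> -1."""
    H = np.asarray(H) % 2
    r = np.asarray(r, dtype=float)
    n = H.shape[1]
    # Non-negative branch penalty: extra squared distance over the per-symbol optimum.
    pen = {c: (r - (1 - 2 * c)) ** 2 for c in (0, 1)}
    best = np.minimum(pen[0], pen[1])
    pen = {c: pen[c] - best for c in (0, 1)}
    # Stack entries: (accumulated penalty, depth, partial syndrome, decided bits).
    stack = [(0.0, 0, tuple(np.zeros(H.shape[0], dtype=int)), ())]
    while stack:
        cost, depth, syn, bits = heapq.heappop(stack)     # most promising path first
        if depth == n:
            if not any(syn):                              # zero syndrome: a codeword was reached
                return np.array(bits), cost
            continue                                      # dead end: discard this path
        for c in (0, 1):                                  # extend by one trellis section
            new_syn = tuple((np.array(syn) + c * H[:, depth]) % 2)
            heapq.heappush(stack, (cost + pen[c][depth], depth + 1, new_syn, bits + (c,)))
    return None, float('inf')

# Example: (7,4) Hamming code, all-zero codeword (all +1 symbols) plus Gaussian noise.
H = [[1, 1, 0, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]
rng = np.random.default_rng(0)
received = 1.0 + 0.8 * rng.standard_normal(7)
print(stack_decode(H, received))

Because the branch penalties are non-negative, the first full-length zero-syndrome path taken off the stack is the maximum-likelihood codeword; the correspondence's algorithms aim at essentially the same decision while extending far fewer paths, which in turn bounds the stack (buffer) size the decoder needs.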
In memory-constrained algorithms, access to the input is restricted to be read-only, and the number of extra variables that the algorithm can use is bounded. In this paper we introduce the compressed stack technique, a method that allows us to transform algorithms whose main memory consumption takes the form of a stack into memory-constrained algorithms. Given an algorithm that runs in O(n) time using a stack of length at most n, we can modify it so that it runs in O(n^2 / 2^s) time using a workspace of O(s) variables (for any s in o(log n)), or in O(n^(1 + 1/log p)) time using O(p log_p n) variables (for any 2 <= p <= n). We also show how the technique can be applied to solve various geometric problems, namely computing the convex hull of a simple polygon, a triangulation of a monotone polygon, the shortest path between two points inside a monotone polygon, a 1-dimensional pyramid approximation of a 1-dimensional vector, and the visibility profile of a point inside a simple polygon. Our approach improves or matches, up to an O(log n) factor, the running time of the best-known results for these problems in constant-workspace models (when they exist), and gives a trade-off between the size of the workspace and the running time. To the best of our knowledge, this is the first general framework for obtaining memory-constrained algorithms.
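For context, a minimal example of the kind of algorithm the compressed stack technique applies to: a single pass over the input in which each element is pushed at most once and pop decisions depend only on the current element and the top of the stack. The instance below (upper convex hull of x-sorted points, a building block of the simple-polygon convex hull) is illustrative rather than taken from the paper; the transformation itself would replace the explicit stack with partially compressed blocks that are reconstructed on demand.

def cross(o, a, b):
    # > 0: left turn, < 0: right turn, 0: collinear.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def upper_hull(points):
    """Upper convex hull of points given in increasing x order."""
    stack = []                          # the only non-constant workspace
    for p in points:                    # single left-to-right pass over the input
        while len(stack) >= 2 and cross(stack[-2], stack[-1], p) >= 0:
            stack.pop()                 # pop decisions look only at the top of the stack
        stack.append(p)
    return stack

print(upper_hull([(0, 0), (1, 2), (2, 1), (3, 3), (4, 0)]))
# -> [(0, 0), (1, 2), (3, 3), (4, 0)]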
In this paper we consider the application of the stack algorithm concept to a two-level paged storage hierarchy in which both levels are directly addressable by the central processor. Since both levels are directly addressable, pages need not reside in the first level of memory for a reference to be completed. The effectiveness of a page replacement algorithm in such a storage hierarchy is measured by the frequency of references to the first level of memory and the number of page promotions. It is shown that the stack algorithm concept can easily be extended to apply to two-level directly addressable memory hierarchies. A class of page replacement algorithms called stack replacement algorithms is defined. An efficient procedure exists for collecting data on the performance of stack replacement algorithms.
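For reference, a sketch of classic single-level stack processing with an LRU stack (the concept the paper extends to two-level directly addressable hierarchies; promotions and second-level references are not modeled here). One pass over the trace yields the miss ratio at every first-level capacity C, because a reference hits a capacity-C memory exactly when its stack distance is at most C.

from collections import Counter

def lru_stack_distances(trace):
    stack, dists = [], []
    for page in trace:
        if page in stack:
            d = stack.index(page) + 1      # depth in the LRU stack = stack distance
            stack.remove(page)
        else:
            d = float('inf')               # first reference: a miss at any capacity
        stack.insert(0, page)              # the most recently used page goes on top
        dists.append(d)
    return dists

def miss_ratio_curve(trace, max_capacity):
    hist = Counter(lru_stack_distances(trace))
    total, hits = len(trace), 0
    curve = {}
    for c in range(1, max_capacity + 1):
        hits += hist.get(c, 0)             # references whose stack distance is exactly c
        curve[c] = 1.0 - hits / total
    return curve

print(miss_ratio_curve(list("abcabdabecfab"), 4))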
Web caching has been the solution of choice to web latency problems. The efficiency of a Web cache is strongly affected by the replacement algorithm used to decide which objects to evict once the cache is saturated. Numerous web cache replacement algorithms have appeared in the literature. Despite their diversity, a large number of them belong to a class known as stack-based algorithms. These algorithms are evaluated mainly via trace-driven simulation. The very few analytical models reported in the literature were targeted at one particular replacement algorithm, namely least recently used (LRU) or least frequently used (LFU). Further, they provide a formula for the evaluation of the Hit Ratio only. The main contribution of this paper is an analytical model for the performance evaluation of any stack-based web cache replacement algorithm. The model provides formulae for the prediction of the object Hit Ratio, the byte Hit Ratio, and the delay saving ratio. The model is validated against extensive discrete-event trace-driven simulations of the three popular stack-based algorithms LRU, LFU, and SIZE, using NLANR and DEC traces. Results show that the analytical model achieves very good accuracy: the mean error deviation between analytical and simulation results is at most 6% for LRU, 6% for LFU, and 10% for SIZE. Copyright (C) 2009 John Wiley & Sons, Ltd.
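A hypothetical trace-driven sketch of what "stack-based" means operationally: each policy ranks the cached objects by a priority function and evicts the lowest-ranked object on overflow, so LRU, LFU, and SIZE differ only in that function. The trace, capacity, and helper names are invented for illustration, and only the object and byte hit ratios are reported (not the delay saving ratio).

def simulate(trace, capacity, priority):
    cache = {}                                   # object id -> size in bytes
    freq, last, used = {}, {}, 0
    hits = byte_hits = bytes_total = 0
    for t, (obj, size) in enumerate(trace):
        freq[obj] = freq.get(obj, 0) + 1
        last[obj] = t
        bytes_total += size
        if obj in cache:
            hits += 1
            byte_hits += size
            continue
        while used + size > capacity and cache:  # evict lowest-priority objects until it fits
            victim = min(cache, key=lambda o: priority(o, cache[o], freq, last))
            used -= cache.pop(victim)
        if size <= capacity:
            cache[obj] = size
            used += size
    return hits / len(trace), byte_hits / bytes_total

lru = lambda o, s, freq, last: last[o]           # evict the least recently used object
lfu = lambda o, s, freq, last: freq[o]           # evict the least frequently used object
size_policy = lambda o, s, freq, last: -s        # evict the largest object first

trace = [("a", 4), ("b", 2), ("a", 4), ("c", 5), ("b", 2), ("a", 4)]
for name, p in [("LRU", lru), ("LFU", lfu), ("SIZE", size_policy)]:
    print(name, simulate(trace, capacity=8, priority=p))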
A procedure to efficiently obtain fault ratios for all levels in multilevel delayed-staging (D-S) storage hierarchies, without resorting to simulation runs for each desired configuration, is presented. The procedure is based on the applicability of stack processing techniques to D-S configurations and requires simulation runs for two-level hierarchies only.
This paper describes two approaches to the problem of determining exact optimal storage capacity for Web caches based on expected workload and the monetary costs of memory and bandwidth. The first approach considers memory/bandwidth tradeoffs in an idealized model. It assumes that workload consists of independent references drawn from a known distribution (e.g. Zipf) and caches employ a "Perfect LFU" removal policy. We derive conditions under which a shared higher-level "parent" cache serving several lower-level "child" caches is economically viable. We also characterize circumstances under which globally optimal storage capacities in such a hierarchy can be determined through a decentralized computation in which caches individually minimize local monetary expenditures. The second approach is applicable if the workload at a single cache is represented by an explicit request sequence and the cache employs any one of a large family of removal policies that includes LRU. The miss costs associated with individual requests may be completely arbitrary, and the cost of cache storage need only be monotonic. We use an efficient single-pass simulation algorithm to compute aggregate miss cost as a function of cache size in O(M log M) time and O(M) memory, where M is the number of requests in the workload. Because it allows us to compute arbitrarily weighted hit rates at all cache sizes with modest computational resources, this algorithm permits us to measure cache performance with no loss of precision. The same basic algorithm also permits us to compute complete stack distance transformations in O(M log N) time and O(N) memory, where N is the number of unique items referenced. Experiments on very large reference streams show that our algorithm computes stack distances more quickly than several alternative approaches, demonstrating that it is a useful tool for measuring temporal locality in cache workloads. (C) 2001 Elsevier Science B.V. All rights reserved.
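As a concrete point of comparison for the stack-distance computation (not the paper's algorithm), a standard single-pass approach uses a Fenwick tree indexed by reference position; it runs in O(M log M) time and O(M) memory, slightly weaker than the O(M log N) time and O(N) memory quoted above. A reference to an LRU cache holding C objects is a hit exactly when its stack distance is at most C.

class Fenwick:
    def __init__(self, n):
        self.t = [0] * (n + 1)
    def add(self, i, v):                 # point update at 1-based position i
        while i < len(self.t):
            self.t[i] += v
            i += i & -i
    def prefix(self, i):                 # sum over positions 1..i
        s = 0
        while i > 0:
            s += self.t[i]
            i -= i & -i
        return s

def stack_distances(trace):
    bit, last, dists = Fenwick(len(trace)), {}, []
    for i, x in enumerate(trace, start=1):
        if x in last:
            p = last[x]
            # Distinct items touched since the previous access to x, plus x itself.
            dists.append(bit.prefix(i - 1) - bit.prefix(p) + 1)
            bit.add(p, -1)               # x's old position is no longer its most recent access
        else:
            dists.append(float('inf'))   # first access: infinite stack distance
        bit.add(i, 1)                    # mark position i as x's most recent access
        last[x] = i
    return dists

print(stack_distances(list("abcba")))    # -> [inf, inf, inf, 2, 3]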
Maximum-likelihood (ML) decoding algorithms for Gaussian multiple-input multiple-output (MIMO) linear channels are considered. Linearity over the field of real numbers facilitates the design of ML decoders using number-theoretic tools for searching the closest lattice point. These decoders are collectively referred to as sphere decoders in the literature. In this paper, a fresh look at this class of decoding algorithms is taken. In particular, two novel algorithms are developed. The first algorithm is inspired by the Pohst enumeration strategy and is shown to offer a significant reduction in complexity compared to the Viterbo-Boutros sphere decoder. The connection between the proposed algorithm and the stack sequential decoding algorithm is then established. This connection is utilized to construct the second algorithm which can also be viewed as an application of the Schnorr-Euchner strategy to ML decoding. Aided with a detailed study of preprocessing algorithms, a variant of the second algorithm is developed and shown to offer significant reductions in the computational complexity compared to all previously proposed sphere decoders with a near-ML detection performance. This claim is supported by intuitive arguments and simulation results in many relevant scenarios.
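A generic depth-first sphere decoder sketch for the real-valued model y = Hx + n with a small PAM alphabet: after QR factorization the layers are searched from last to first, candidates closest to the per-layer zero-forcing estimate are tried first (in the spirit of the Schnorr-Euchner strategy), and branches whose partial Euclidean distance exceeds the best full solution found so far are pruned. This is not either of the paper's two algorithms; the channel size, constellation, and noise level are arbitrary.

import numpy as np

def sphere_decode(H, y, symbols=(-3, -1, 1, 3)):
    Q, R = np.linalg.qr(H)
    z = Q.T @ y                          # rotated problem: minimize ||z - R x||^2
    n = H.shape[1]
    best = {"x": None, "d2": np.inf}

    def search(level, x, partial_d2):
        if partial_d2 >= best["d2"]:     # prune: already outside the current sphere
            return
        if level < 0:                    # all layers fixed: shrink the sphere
            best["x"], best["d2"] = x.copy(), partial_d2
            return
        # Interference from the layers already decided below this one.
        resid = z[level] - R[level, level + 1:] @ x[level + 1:]
        center = resid / R[level, level]
        for s in sorted(symbols, key=lambda s: abs(s - center)):   # closest candidates first
            x[level] = s
            search(level - 1, x, partial_d2 + (resid - R[level, level] * s) ** 2)

    search(n - 1, np.zeros(n), 0.0)
    return best["x"], best["d2"]

# Example: 4x4 real channel, random 4-PAM transmit vector, moderate noise.
rng = np.random.default_rng(0)
H = rng.standard_normal((4, 4))
x_true = rng.choice((-3, -1, 1, 3), size=4)
y = H @ x_true + 0.1 * rng.standard_normal(4)
print(x_true, sphere_decode(H, y)[0])

Trying the closest candidate at each layer first makes the search radius shrink quickly, which is the main source of the complexity reductions the paper quantifies.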
Five types of anomalous behaviour which may occur in paged virtual memory operating systems are defined. One type of anomaly, for example, concerns the fact that, with certain reference strings and paging algorithms, an increase in mean memory allocation may result in an increase in fault rate. Two paging algorithms, the page fault frequency and working set algorithms, are examined in terms of their anomaly potential, and reference string examples of various anomalies are presented. Two paging algorithm properties, the inclusion property and the generalized inclusion property, are discussed and the anomaly implications of these properties presented.
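The first anomaly type has a classic fixed-allocation illustration (not one of the paper's page-fault-frequency or working-set examples): under FIFO replacement, the reference string below incurs more page faults with four frames than with three, i.e., Belady's anomaly.

from collections import deque

def fifo_faults(refs, frames):
    memory, faults = deque(), 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.popleft()         # evict the page that has been resident longest
            memory.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3), fifo_faults(refs, 4))   # -> 9 10

FIFO lacks the inclusion property discussed above, which is what makes this behaviour possible; stack algorithms such as LRU, which satisfy the inclusion property, cannot exhibit it.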