A space-efficient wait-free algorithm for implementing a shared buffer for real-time multiprocessor systems is presented in this paper. The commonly used method to implement shared buffers in real-time systems is based on mutual exclusion. Mutual exclusion incurs blocking, which typically makes it difficult to guarantee deadlines in real-time systems. Researchers have introduced non-blocking algorithms and data structures that address these problems. However, many of the non-blocking algorithms that have appeared in the literature have very high space demands, some of them even unbounded, which makes them unsuitable for real-time systems. In this paper, we look at a simple, elegant and easy-to-implement algorithm that implements a shared buffer but uses unbounded time-stamps, and we show how to bound the time-stamps using the timing information that is available in many real-time systems. Our analysis and calculations show that the algorithm resulting from our approach is space-efficient. The presented protocol can support an arbitrary number of concurrent read and write operations.
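The abstract does not give the bounding construction, but the general flavour of bounding timestamps with timing information can be sketched as wraparound-aware sequence-number comparison: if the timing analysis guarantees that any two timestamps compared while both are "live" differ by less than half the modulus, order is recoverable modulo a fixed bound M. The value of M and the helper names below are illustrative assumptions, not the paper's protocol.

```python
# Sketch: wraparound-aware comparison of bounded timestamps.
# Assumption (not from the paper): timing analysis guarantees that any two
# timestamps compared while live are fewer than M // 2 apart, so modular
# sequence-number arithmetic recovers their order despite wraparound.
M = 1 << 16  # hypothetical timestamp bound derived from timing analysis

def newer(a: int, b: int) -> bool:
    """True if timestamp a is logically newer than timestamp b, modulo M."""
    d = (a - b) % M
    return d != 0 and d < M // 2

def next_stamp(t: int) -> int:
    """The timestamp a writer attaches to its next write."""
    return (t + 1) % M

# A writer tags each buffer entry with next_stamp(...); a reader returns
# the entry whose tag is newest under `newer`.
```

Because the comparison is modular, 16 bits suffice forever, which is exactly the kind of space saving the paper's bounded-timestamp construction is after.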
ISBN:
(Print) 0780365429
This paper presents the first main task of a new system estimation framework for codesign, located before the partitioning and architectural estimation steps. The key issue of the proposed method is a dynamic estimation at an abstract level (without an architectural target) based on a global estimation algorithm. The proposed algorithm addresses a given function extracted from a complex application. It is based on function orientation and combines various efficient methods to quickly explore the design space, providing the subsequent codesign steps with a small set of candidate solutions.
ISBN:
(Print) 1581130848
The importance of multidimensional index structures to numerous emerging database applications is well established. However, before these index structures can be supported as access methods (AMs) in a "commercial-strength" database management system (DBMS), efficient techniques to provide transactional access to data via the index structure must be developed. Concurrent accesses to data via index structures introduce the problem of protecting ranges specified in the retrieval from phantom insertions and deletions (the phantom problem). This paper presents a dynamic granular locking approach to phantom protection in Generalized Search Trees (GiSTs), an index structure supporting an extensible set of queries and data types. The granular locking technique offers a high degree of concurrency and has a low lock overhead. Our experiments show that the granular locking technique (1) scales well under various system loads and (2), as in the B-tree case, provides a significantly more efficient implementation than predicate locking for multidimensional AMs as well. Since a wide variety of multidimensional index structures can be implemented using GiST, the developed algorithms provide a general solution to concurrency control in multidimensional AMs. To the best of our knowledge, this paper provides the first such solution based on granular locking.
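The core idea of granular phantom protection can be illustrated in miniature: a range scan locks every leaf granule whose bounding range overlaps the query, so a concurrent insert into that range must request a conflicting lock and fail (or, in a real system, wait). Everything below — the `Leaf` class, the single-holder lock, the 1-D ranges — is an illustrative toy, far simpler than the paper's protocol on real GiSTs.

```python
# Toy sketch of granular (node-level) phantom protection on a 1-D "tree":
# leaves hold key ranges; a scan locks every overlapping leaf, so an
# insert into the scanned range conflicts and cannot create a phantom.
class Leaf:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
        self.keys = []
        self.locked_by = None  # holding transaction id, or None

def overlaps(leaf, lo, hi):
    return leaf.lo <= hi and lo <= leaf.hi

def lock_range(leaves, lo, hi, txn):
    """Acquire locks on all leaves covering [lo, hi]; fail on conflict."""
    needed = [lf for lf in leaves if overlaps(lf, lo, hi)]
    if any(lf.locked_by not in (None, txn) for lf in needed):
        return False  # conflict: a real lock manager would block here
    for lf in needed:
        lf.locked_by = txn
    return True

def insert(leaves, key, txn):
    """Insert must lock the target leaf first, so phantoms are blocked."""
    for lf in leaves:
        if lf.lo <= key <= lf.hi:
            if not lock_range([lf], key, key, txn):
                return False  # a scanner holds the granule
            lf.keys.append(key)
            return True
    return False
```

The concurrency/overhead trade-off the paper measures comes from the granule size: coarser granules mean fewer locks but more false conflicts.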
We give a space-efficient, one-pass algorithm for approximating the L1 difference Σ_i |a_i - b_i| between two functions, when the function values a_i and b_i are given as data streams and their order is chosen by an adversary. Our main technical innovation is a method of constructing families {V_j} of limited-independence random variables that are range-summable, by which we mean that Σ_{j=0}^{c-1} V_j(s) is computable in time polylog(c), for all seeds s. These random-variable families may be of interest outside our current application domain, i.e., massive data streams generated by communication networks. Our L1-difference algorithm can be viewed as a 'sketching' algorithm, in the sense of [Broder, Charikar, Frieze, and Mitzenmacher, STOC '98, pp. 327-336], and our algorithm performs better than that of Broder et al. when used to approximate the symmetric difference of two sets with small symmetric difference.
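To illustrate what an L1 sketch buys, here is a few-line sketch using Indyk's 1-stable (Cauchy) projections — a related, later technique, NOT the range-summable construction of this paper, chosen only because it fits in a handful of lines. Linearity is the key shared property: sketches of a and b built from the same random matrix can be differenced to estimate ||a - b||_1 without revisiting the streams.

```python
import math
import random
import statistics

# Illustrative L1 sketching via 1-stable (Cauchy) projections (Indyk),
# a related technique -- not this paper's range-summable construction.
def cauchy(rng):
    # Inverse-CDF sampling of a standard Cauchy variate.
    return math.tan(math.pi * (rng.random() - 0.5))

def sketch(vec, rows, seed=0):
    # Same seed => same random projection matrix, so sketches are
    # comparable across vectors. One row = one Cauchy projection.
    rng = random.Random(seed)
    return [sum(cauchy(rng) * x for x in vec) for _ in range(rows)]

def estimate_l1(sa, sb):
    # A Cauchy projection of (a - b) is Cauchy scaled by ||a - b||_1,
    # and |Cauchy| has median 1, so the sample median estimates the L1 norm.
    return statistics.median(abs(x - y) for x, y in zip(sa, sb))
```

The paper's contribution is orthogonal to this toy: range-summability lets the per-item sketch update itself run in polylogarithmic time when one stream item implicitly covers a whole range of indices.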
ISBN:
(Print) 3540662790
The proceedings contain 35 papers. The special focus in this conference is on algorithms and data structures. The topics include: Optimization over k-set polytopes and efficient k-set enumeration;line simplification with restricted orientations;the T-join problem in sparse graphs;resizable arrays in optimal time and space;efficient evaluation of minimal perfect hash functions;design and analysis of algorithms for shared-memory multiprocessors;on the complexity of orthogonal compaction;optimizing constrained offset and scaled polygonal annuli;a generalization of the competitive ratio;performance guarantees for the TSP with a parameterized triangle inequality;robot map verification of a graph world;searching rectilinear streets completely;general multiprocessor task scheduling;the lazy bureaucrat scheduling problem;generating 3D virtual populations from pictures of a few individuals;testing the quality of manufactured balls;on an optimal split tree problem;representing trees of higher degree;indexing and dictionary matching with one error;new results on fault tolerant geometric spanners;efficient dynamic arrays for rank-based sequences;go-with-the-winner heuristic;2-point site Voronoi diagrams;a parallel algorithm for finding the constrained Voronoi diagram of line segments in the plane;position-independent street searching;approximation algorithms for 3-D common substructure identification in drug and protein molecules and a tight bound for beta-skeleton of minimum weight triangulations.
In this paper, an efficient embedded image compression algorithm is presented. It is based on the observation that the distribution of significant coefficients is intra-subband clustered and inter-subband self-similar. Morphological dilation is used to capture the clustered significant coefficients in each subband, resulting in the partitioning of each subband into significance clusters and insignificance space. Thus for entropy coding, different probability models are used for different regions according to their own probability distributions. When encoding the insignificance space, which contains mostly zeros, investigation reveals that the zerotree data structure is not very efficient at representing zeros across scales for texture images, and a more efficient method is presented. Experimental results show that the proposed algorithm is very effective, especially for texture images.
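The clustering step can be pictured as binary dilation of a significance map: dilating the set of significant coefficients grows each cluster to cover its immediate neighbours, which is what makes a single region model effective. A minimal pure-Python 4-neighbour dilation is sketched below; the paper's coder, structuring element, and probability models are of course more elaborate.

```python
# Sketch: one 4-connected morphological dilation of a 2-D boolean
# significance map (list of lists). Dilation grows each cluster of
# significant coefficients by one pixel in each axis direction.
def dilate(mask):
    h, w = len(mask), len(mask[0])
    out = [[False] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            if mask[i][j]:
                # Mark the pixel itself and its 4 neighbours as in-cluster.
                for di, dj in ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < h and 0 <= nj < w:
                        out[ni][nj] = True
    return out
```

Repeating the dilation enlarges the significance clusters further, trading a larger "cluster" region against a purer all-zero insignificance space.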
In the present paper a new fuzzy clustering algorithm is presented. It is a modified version of the min-max technique. By relying on principal component analysis, it overcomes some undesired properties of Simpson's original algorithm. In particular, a local rotation matrix is introduced for each hyperbox according to the data subset of the related cluster, so that it is possible to arrange the hyperbox orientation along any direction of the data space. Consequently, the new algorithm yields more efficient networks, improving the match between the resulting clusters and the local data structure.
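The local-rotation idea can be sketched in 2-D: the principal axis of a cluster's covariance matrix gives the angle by which its hyperbox should be rotated. For 2×2 covariances the leading eigenvector has a closed form, so no linear-algebra library is needed; the function below is an illustration of the PCA step only, not the full clustering algorithm.

```python
import math

# Sketch: angle of a 2-D cluster's principal axis, i.e. the rotation a
# hyperbox would need to align with the local data structure.
def principal_angle(points):
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    cxx = sum((p[0] - mx) ** 2 for p in points) / n
    cyy = sum((p[1] - my) ** 2 for p in points) / n
    cxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    # Closed-form angle of the leading eigenvector of
    # the covariance matrix [[cxx, cxy], [cxy, cyy]].
    return 0.5 * math.atan2(2 * cxy, cxx - cyy)
```

In higher dimensions the same role is played by a full eigendecomposition of the per-cluster covariance, yielding the local rotation matrix the abstract describes.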
Since volume rendering needs a great deal of computation time and memory space, many techniques have been proposed for accelerating rendering or reducing data size using compression. However, little progress has been made toward accomplishing both goals at once. This paper presents an efficient wavelet-based compression method providing fast visualization of large volume data, which is divided into individual blocks of regular resolution. Each wavelet-transformed block is run-length encoded in accordance with the reconstruction order, resulting in a fairly good compression ratio and fast reconstruction. A cache data structure is designed to speed up the reconstruction, and an adaptive compression scheme is proposed to produce a higher-quality rendered image. The proposed compression method is combined with several accelerated volume rendering algorithms, such as brute-force volume rendering with a min-max table and Lacroute's shear-warp factorization. Experimental results have shown the space requirement to be about 1/27 and the rendering time to be about 3 seconds for 512×512×512 data sets, while preserving image quality much like using the original data.
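The run-length step relies on most wavelet coefficients quantising to zero. A minimal sketch: emit coefficients in reconstruction order as (zero_run_length, value) pairs, with a trailing all-zero run encoded as (run, None). The pair format here is an illustrative assumption; the paper's encoder is tied to its specific reconstruction order and bit layout.

```python
# Sketch: run-length coding of mostly-zero wavelet coefficients, emitted
# as (preceding_zero_run, nonzero_value) pairs; a trailing all-zero run
# is stored as (run, None). Decoding is the exact inverse.
def rle_encode(coeffs):
    out, run = [], 0
    for c in coeffs:
        if c == 0:
            run += 1
        else:
            out.append((run, c))
            run = 0
    if run:
        out.append((run, None))
    return out

def rle_decode(pairs):
    out = []
    for run, c in pairs:
        out.extend([0] * run)
        if c is not None:
            out.append(c)
    return out
```

Because the pairs follow the reconstruction order, a decoder can stop early once it has the coefficients needed for the current block, which is what makes partial, cached reconstruction fast.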
The problem of inferring a finite binary sequence w* ∈ {-1, 1}^n is considered. It is supposed that at epochs t = 1, 2, ..., the learner is provided with random half-space data in the form of finite binary sequences u^(t) ∈ {-1, 1}^n which have positive inner product with w*. The goal of the learner is to determine the underlying sequence w* in an efficient, on-line fashion from the data {u^(t), t ≥ 1}. In this context, it is shown that the randomized, on-line directed drift algorithm produces a sequence of hypotheses {w^(t) ∈ {-1, 1}^n, t ≥ 1} which converges to w* in finite time with probability 1. It is shown that while the algorithm has a minimal space complexity of 2n bits of scratch memory, it has exponential time complexity with an expected mistake bound of order Ω(e^{0.139n}). Batch incarnations of the algorithm are introduced which allow for massive improvements in running time at a relatively small cost in space (batch size). In particular, using a batch of O(n log n) examples at each update epoch reduces the expected mistake bound of the (batch) algorithm to O(n) (in an asynchronous bit update mode) and O(1) (in a synchronous bit update mode). The problem considered here is related to binary integer programming and to learning in a mathematical model of a neuron. (C) 1999 John Wiley & Sons, Inc.
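One plausible reading of the directed drift rule can be simulated directly: when the current hypothesis w misclassifies an example u (non-positive inner product), flip one randomly chosen component where w and u disagree. The exact update rule and example distribution in the paper may differ in detail; this is a hedged sketch for small n only, since the expected mistake count grows exponentially.

```python
import random

# Sketch of (one reading of) randomized directed drift: on a mistake,
# flip a uniformly random bit of w among those that disagree with the
# example u. Examples are random vectors with positive inner product
# with the hidden target w_star, per the half-space data model.
def directed_drift(w_star, rng, max_steps=500_000):
    n = len(w_star)
    w = [rng.choice((-1, 1)) for _ in range(n)]  # random initial hypothesis
    steps = 0
    while w != w_star and steps < max_steps:
        # Rejection-sample an example consistent with w_star.
        while True:
            u = [rng.choice((-1, 1)) for _ in range(n)]
            if sum(a * b for a, b in zip(u, w_star)) > 0:
                break
        if sum(a * b for a, b in zip(u, w)) <= 0:  # mistake
            bad = [i for i in range(n) if w[i] * u[i] < 0]
            i = rng.choice(bad)  # nonempty whenever the product is <= 0
            w[i] = -w[i]
        steps += 1
    return w, steps
```

Note the 2n-bit space claim is visible here: the only mutable state is w itself plus the current example.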
Techniques for the computation of fixpoints are key to the success of many formal verification algorithms. To be efficient, these techniques must take into account how sets of states are represented. When BDDs are used, this means controlling, directly or indirectly, the size of the BDDs. Traditional fixpoint computations do little to keep BDD sizes small, apart from reordering variables. In this paper, we present a new strategy that attempts to keep the size of the BDDs under control at every stage of the computation. Our contribution also includes new techniques to compute partial images, and to speed up and test convergence. We present experimental results that prove the effectiveness of our strategy by demonstrating up to 40 orders of magnitude improvement in the number of states computed.
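The fixpoint skeleton underneath such strategies can be shown with explicit sets standing in for BDDs: iterate the image of the current frontier until no new states appear. In the paper the sets are BDDs and the image is computed partially to keep BDD sizes small, but the convergence test — an empty frontier — is the same shape.

```python
# Sketch: least-fixpoint reachability with explicit sets in place of BDDs.
# `image` maps a set of states to the set of their one-step successors.
def reachable(init, image):
    """Least fixpoint of R = init ∪ image(R), via frontier iteration."""
    reached = set(init)
    frontier = set(init)
    while frontier:
        new = image(frontier) - reached  # image of the frontier only
        reached |= new
        frontier = new                   # empty frontier => converged
    return reached
```

Restricting the image computation to the frontier (rather than all reached states) is the explicit-set analogue of the partial-image techniques the abstract mentions.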