A multiresolution source code is a single code giving an embedded source description that can be read at a variety of rates and thereby yields reproductions at a variety of resolutions. The resolution of a source reproduction here refers to the accuracy with which it approximates the original source. Thus, a reproduction with low distortion is a "high-resolution" reproduction while a reproduction with high distortion is a "low-resolution" reproduction. This paper treats the generalization of universal lossy source coding from single-resolution source codes to multiresolution source codes. Results described in this work include new definitions for weakly minimax universal, strongly minimax universal, and weighted universal sequences of fixed- and variable-rate multiresolution source codes that extend the corresponding notions from lossless coding and (single-resolution) quantization to multiresolution quantizers. A variety of universal multiresolution source-coding results follow, including necessary and sufficient conditions for the existence of universal multiresolution codes, rate-of-convergence bounds for universal multiresolution coding performance to the theoretical bound, and a new multiresolution approach to two-stage universal source coding.
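As a toy illustration of an embedded description, the following sketch (a plain bisection quantizer on [0, 1), an illustrative assumption, not one of the codes analyzed in the paper) shows how any prefix of a single bitstream yields a valid reproduction whose resolution grows with the rate read:

```python
# Minimal embedded (multiresolution) scalar quantizer: each additional
# bit halves the quantization cell, so any prefix of the bitstream
# decodes to a valid, progressively higher-resolution reproduction.

def embedded_encode(x, lo=0.0, hi=1.0, bits=8):
    """Return the embedded bit description of x in [lo, hi)."""
    out = []
    for _ in range(bits):
        mid = (lo + hi) / 2
        if x >= mid:
            out.append(1)
            lo = mid
        else:
            out.append(0)
            hi = mid
    return out

def embedded_decode(prefix, lo=0.0, hi=1.0):
    """Decode any prefix of the bitstream to a midpoint reproduction."""
    for b in prefix:
        mid = (lo + hi) / 2
        if b:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

bits = embedded_encode(0.7, bits=8)
coarse = embedded_decode(bits[:2])  # low rate, low resolution
fine = embedded_decode(bits)        # full rate, high resolution
```

Reading two bits already gives a reproduction within 1/8 of the source value; each further bit halves the worst-case error, which is the "one code, many resolutions" property in miniature.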
We propose a variation of the Context Tree Weighting algorithm for tree sources, modified so that the growth of the context resembles Lempel-Ziv parsing. We analyze this algorithm, give a concise upper bound on the individual redundancy for any tree source, and prove the asymptotic optimality of the data compression rate for any stationary and ergodic source.
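The building block that Context Tree Weighting mixes over contexts is the Krichevsky-Trofimov (KT) sequential estimator. A minimal sketch of the KT codelength at a single node (the standard estimator, not the modified context-growth rule proposed in the paper):

```python
from math import log2

def kt_codelength(bits):
    """Codelength (in bits) of the Krichevsky-Trofimov sequential
    probability assignment for a binary memoryless model:
    P(next = 1) = (ones + 1/2) / (zeros + ones + 1)."""
    zeros = ones = 0
    length = 0.0
    for b in bits:
        p1 = (ones + 0.5) / (zeros + ones + 1)
        p = p1 if b else 1.0 - p1
        length += -log2(p)
        if b:
            ones += 1
        else:
            zeros += 1
    return length
```

On a constant sequence of length n the KT codelength grows only like (1/2) log n bits, which is the per-node redundancy that the CTW mixture inherits.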
We investigate the task of compressing an image by using different probability models for compressing different regions of the image. In this task, using a larger number of regions would result in better compression, but would also require more bits for describing the regions and the probability models used in the regions. We discuss using quadtree methods for performing the compression. We introduce a class of probability models for images, the k-rectangular tilings of an image, that is formed by partitioning the image into k rectangular regions and generating the coefficients within each region by using a probability model selected from a finite class of N probability models. For an image of size n x n, we give a sequential probability assignment algorithm that codes the image with a code length which is within O(k log(Nn/k)) of the code length produced by the best probability model in the class. The algorithm has a computational complexity of O(Nn^3). An interesting subclass of the class of k-rectangular tilings is the class of tilings using rectangles whose widths are powers of two. This class is far more flexible than quadtrees and yet has a sequential probability assignment algorithm that produces a code length that is within O(k log(Nn/k)) of the best model in the class with a computational complexity of O(Nn^2 log n) (similar to the computational complexity of sequential probability assignment using quadtrees). We also consider progressive transmission of the coefficients of the image.
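A generic sketch of the quadtree recursion behind such methods, phrased as a minimum-description-length split/no-split decision (the one-bit flags and the cost interface are illustrative assumptions, not the paper's exact construction):

```python
def best_quadtree_cost(cost_leaf, x, y, size, min_size=1):
    """Minimum description length of the square block at (x, y) with
    side `size`: either code it as one leaf under its best single model
    (cost given by cost_leaf(x, y, size) plus a one-bit 'leaf' flag),
    or pay a one-bit 'split' flag and recurse on the four quadrants.
    Blocks of side min_size are always leaves, so no flag is needed."""
    leaf = cost_leaf(x, y, size)
    if size <= min_size:
        return leaf
    h = size // 2
    split = 1 + sum(best_quadtree_cost(cost_leaf, x + dx, y + dy, h, min_size)
                    for dx in (0, h) for dy in (0, h))
    return min(leaf + 1, split)
```

With a flat cost of 10 bits per block the recursion keeps the whole 4 x 4 block as one leaf (11 bits); with a cost of one bit per pixel it still prefers the single leaf (17 bits) over splitting (21 bits), showing how the side information for a finer tiling must pay for itself.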
In this correspondence we investigate the performance of the Lempel-Ziv incremental parsing scheme on nonstationary sources. We show that it achieves the best rate achievable by a finite-state block coder for the nonstationary source. We also show a similar result for a lossy coding scheme given by Yang and Kieffer which uses a Lempel-Ziv scheme to perform lossy coding.
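For reference, the incremental parsing rule analyzed here (LZ78-style: each new phrase is the shortest prefix of the remaining input that has not appeared as an earlier phrase) can be sketched in a few lines:

```python
def lz78_parse(s):
    """Lempel-Ziv incremental parsing: scan the input, extending the
    current phrase until it is new, then record it in the dictionary
    and start the next phrase."""
    dictionary = {"": 0}
    phrases = []
    current = ""
    for ch in s:
        current += ch
        if current not in dictionary:
            dictionary[current] = len(dictionary)
            phrases.append(current)
            current = ""
    if current:
        phrases.append(current)  # possibly repeated final phrase
    return phrases

lz78_parse("aaababbbaab")  # -> ['a', 'aa', 'b', 'ab', 'bb', 'aab']
```

The number of phrases produced by this parsing is what drives the codelength bounds against finite-state coders.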
The Karhunen-Loeve transform (KLT) is optimal for transform coding of a Gaussian source. This is established for all scale-invariant quantizers, generalizing previous results. A backward adaptive technique for combating the data dependence of the KLT is proposed and analyzed. When the adapted transform converges to a KLT, the scheme is universal among transform coders. A variety of convergence results are proven.
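A minimal sketch of the (forward, non-adaptive) KLT via eigendecomposition of the sample covariance; in the backward-adaptive scheme of the paper the covariance estimate would instead be built from previously quantized data available to both encoder and decoder. The synthetic source below is an illustrative assumption:

```python
import numpy as np

def klt(samples):
    """Karhunen-Loeve transform estimated from data: rows of the
    returned matrix are eigenvectors of the sample covariance,
    ordered by decreasing variance; applying the transform
    decorrelates the coefficients."""
    x = samples - samples.mean(axis=0)
    cov = x.T @ x / len(x)
    eigvals, eigvecs = np.linalg.eigh(cov)  # ascending eigenvalues
    return eigvecs[:, ::-1].T               # rows: descending variance

rng = np.random.default_rng(0)
# synthetic correlated 2-D Gaussian source (mixing matrix is arbitrary)
src = rng.standard_normal((10000, 2)) @ np.array([[1.0, 0.9], [0.0, 0.5]])
T = klt(src)
coeffs = (src - src.mean(axis=0)) @ T.T
```

After the transform, the coefficient covariance is diagonal (up to numerical error), which is exactly the property that makes independent scalar quantization of the coefficients effective.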
A new lossy variant of the Fixed-Database Lempel-Ziv coding algorithm for encoding at a fixed distortion level is proposed, and its asymptotic optimality and universality for memoryless sources (with respect to bounded single-letter distortion measures) is demonstrated: As the database size m increases to infinity, the expected compression ratio approaches the rate-distortion function. The complexity and redundancy characteristics of the algorithm are comparable to those of its lossless counterpart. A heuristic argument suggests that the redundancy is of order (log log m)/log m, and this is also confirmed experimentally; simulation results are presented that agree well with this rate. Also, the complexity of the algorithm is seen to be comparable to that of the corresponding lossless scheme. We show that there is a tradeoff between compression performance and encoding complexity, and we discuss how the relevant parameters can be chosen to balance this tradeoff in practice. We also discuss the performance of the algorithm when applied to sources with memory, and extensions to the cases of unbounded distortion measures and infinite reproduction alphabets.
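The core step of such a scheme is an approximate-match search: instead of an exact longest match, the encoder looks for the longest prefix of the source that some database substring reproduces within the distortion budget. A brute-force sketch, using a per-letter maximum-distortion criterion for simplicity (the paper works with average single-letter distortion, and a practical encoder would not scan the database linearly):

```python
def lossy_fdlz_match(source, database, dmax, dist=lambda a, b: abs(a - b)):
    """Greedy matching step of a fixed-database lossy LZ scheme
    (illustrative sketch): find the longest prefix of `source` that
    some database substring matches with per-letter distortion at
    most `dmax`; return (database position, match length)."""
    best_pos, best_len = 0, 0
    for pos in range(len(database)):
        length = 0
        while (pos + length < len(database) and length < len(source)
               and dist(source[length], database[pos + length]) <= dmax):
            length += 1
        if length > best_len:
            best_pos, best_len = pos, length
    return best_pos, best_len
```

The encoder would then emit the (position, length) pair and continue from the unmatched remainder; longer databases yield longer matches and hence lower rate, at the cost of search complexity, which is the tradeoff discussed above.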
Many image compression techniques require the quantization of multiple vector sources with significantly different distributions. With vector quantization (VQ), these sources are optimally quantized using separate codebooks, which may collectively require an enormous memory space. Since storage is limited in most applications, a convenient way to gracefully trade between performance and storage is needed. Earlier work addressed this problem by clustering the multiple sources into a small number of source groups, where each group shares a codebook. We propose a new solution based on a size-limited universal codebook that can be viewed as the union of overlapping source codebooks. This framework allows each source codebook to consist of any desired subset of the universal codevectors and provides greater design flexibility which improves the storage-constrained performance. A key feature of this approach is that no two sources need be encoded at the same rate. An additional advantage of the proposed method is its close relation to universal, adaptive, finite-state, and classified quantization. Necessary conditions for optimality of the universal codebook and the extracted source codebooks are derived. An iterative design algorithm is introduced to obtain a solution satisfying these conditions. Possible applications of the proposed technique are enumerated, and its effectiveness is illustrated for coding of images using finite-state vector quantization, multistage vector quantization, and tree-structured vector quantization.
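The encoding side of this framework is simple to sketch: each source owns an index list into the shared universal codebook, and a vector is encoded by nearest-neighbor search over that subset only. The codebook values and subsets below are illustrative assumptions, not a designed codebook:

```python
import numpy as np

def encode_with_subset(x, universal, subset):
    """Encode vector x for a source whose extracted codebook is the
    subset of the universal codebook given by index list `subset`;
    returns the index into `subset`, so the rate for this source is
    log2(len(subset)) bits and may differ from other sources' rates."""
    candidates = universal[subset]               # extracted source codebook
    d = ((candidates - x) ** 2).sum(axis=1)      # squared-error distortion
    return int(np.argmin(d))

universal = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0],
                      [1.0, 1.0], [0.5, 0.5]])
source_a = [0, 1, 4]   # one source's subset of the universal codevectors
source_b = [2, 3, 4]   # overlapping subset for another source
idx = encode_with_subset(np.array([0.1, 0.9]), universal, source_a)
```

Only the universal codebook and the index lists need be stored, so overlapping subsets share storage while each source keeps its own effective codebook and rate.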
The Gold-washing data compression algorithm is an adaptive vector quantization algorithm with vector dimension n. Its asymptotic optimality has been analyzed in Parts I and II of this series. In this paper, a redundancy problem of the Gold-washing data compression algorithm is considered. It is demonstrated that for any memoryless source with finite alphabet A and generic distribution p and for any R > 0, the redundancy of the Gold-washing data compression algorithm with dimension n (defined as the difference between the average performance of the algorithm and the distortion-rate function D(p, R) of p) is upper-bounded by a quantity of order (log n)/n depending on |∂D(p, R)/∂R|, |A|, and ζ, where ∂D(p, R)/∂R is the partial derivative of D(p, R) with respect to R, |A| is the cardinality of A, and ζ > 0 is a parameter used to control the threshold in the Gold-washing algorithm. In connection with the recent results of Zhang, Yang, and Wei on the redundancy of lossy source coding, this shows that the Gold-washing algorithm has the optimal convergence rate among all adaptive finite-state vector quantizers.
A random number generator generates fair coin flips by processing deterministically an arbitrary source of nonideal randomness. An optimal random number generator generates asymptotically fair coin flips from a stationary ergodic source at a rate of bits per source symbol equal to the entropy rate of the source. Since optimal noiseless data compression codes produce incompressible outputs, it is natural to investigate their capabilities as optimal random number generators. In this paper we show under general conditions that optimal variable-length source codes asymptotically achieve optimal variable-length random bit generation in a rather strong sense. In particular, we show in what sense the Lempel-Ziv algorithm can be considered an optimal universal random bit generator from arbitrary stationary ergodic random sources with unknown distributions.
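The classical example of deterministic extraction of fair bits is the von Neumann scheme. It is not the compression-based generator studied here, and it falls short of the entropy rate, but it illustrates the task in its simplest form:

```python
def von_neumann_extract(bits):
    """Von Neumann extractor: read input bits in pairs; output the
    first bit of each unequal pair ('01' -> 0, '10' -> 1) and discard
    equal pairs.  For i.i.d. input bits of any fixed bias, '01' and
    '10' are equally likely, so the output is exactly fair."""
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out
```

The rate loss from discarding equal pairs is what the compression-based generators above avoid: an optimal code's output is already incompressible, so essentially every output bit can be kept, achieving the entropy rate.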
Nonasymptotic coding and converse theorems are derived for universal data-compression algorithms in cases where the training sequence ("history") that is available to the encoder consists of the most recent segment of the input data string that has been processed, but is not large enough so as to yield the ultimate compression, namely, the entropy of the source.