We show that high-resolution images can be encoded and decoded efficiently in parallel. We present an algorithm based on the hierarchical MLP method, used either with Huffman coding or with a new variant of arithmetic coding called quasi-arithmetic coding. The coding step can be parallelized, even though the codes for different pixels are of different lengths; parallelization of the prediction and error modeling components is straightforward.
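The key obstacle to parallel coding is that variable-length codes must be concatenated at bit offsets that depend on all earlier codes. A minimal sketch of one standard resolution (illustrative, not the paper's algorithm): an exclusive prefix sum over the code lengths yields every code's bit offset, after which all codes can be written independently.

```python
# Sketch: packing variable-length codes via a prefix sum over code lengths.
# All names are illustrative; a parallel implementation would compute the
# prefix sum and the writes concurrently.

def exclusive_prefix_sum(lengths):
    """Bit offset of each code in the packed output stream."""
    offsets, total = [], 0
    for n in lengths:
        offsets.append(total)
        total += n
    return offsets, total

def pack_codes(codes, lengths):
    """Pack (value, bit-length) codes into one integer bitstream, MSB first."""
    offsets, total = exclusive_prefix_sum(lengths)
    stream = 0
    # Each write depends only on its own offset, so in a parallel setting
    # all writes are independent; here we simulate them sequentially.
    for code, n, off in zip(codes, lengths, offsets):
        stream |= code << (total - off - n)
    return stream, total
```

For example, packing the codes `1`, `01`, `110` (lengths 1, 2, 3) yields the 6-bit stream `101110`.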
ISBN:
(Print) 0819427802
To achieve both a high compression ratio and information preservation, it is efficient to combine segmentation with a lossy compression scheme. Microcalcification in mammograms is one of the most significant signs of early-stage breast cancer. Therefore, in coding, detection and segmentation of microcalcifications enable us to preserve them well by allocating more bits to them than to other regions. Segmentation of microcalcifications is performed both in the spatial domain and in the wavelet transform domain. A peak-error-controllable quantization step, designed off-line, is suitable for medical image compression. For region-adaptive quantization, block-based wavelet transform coding is adopted, and different peak-error-constrained quantizers are applied to blocks according to the segmentation result. In terms of preservation of microcalcifications, the proposed coding scheme shows better performance than JPEG.
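A peak-error-constrained quantizer can be sketched as follows (a minimal illustration, not the paper's exact off-line design): a uniform quantizer with step 2·delta + 1 guarantees that no integer pixel's reconstruction error exceeds delta, so blocks flagged as microcalcification would simply be assigned a smaller delta than other regions.

```python
# Sketch: uniform quantization with a guaranteed peak-error bound.
# `delta` is the region's peak-error budget (smaller for microcalcification
# blocks); the step 2*delta + 1 bounds the error of any integer pixel.

def quantize(pixel, delta):
    """Quantize an integer pixel; reconstruction error is <= delta."""
    step = 2 * delta + 1
    return round(pixel / step)

def dequantize(index, delta):
    step = 2 * delta + 1
    return index * step
```

For instance, one might use delta = 1 inside detected microcalcification blocks and delta = 7 elsewhere, trading bits for fidelity region by region.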
In this paper, an adaptive multi-dictionary model for data compression is proposed. Dictionary techniques applied in lossless compression coding can be modeled from the dictionary management point of view, which is similar to that of cache memory. The behavior of a compression technique can be described by nine parameters defined in the proposed model, which provides a unified framework to describe the behavior of lossless compression techniques including existing probability-based Huffman coding and arithmetic coding, and dictionary-based LZ-family coding and its variants. Those methods can be interpreted as special cases under the proposed model. New compression techniques can be developed by choosing proper management policies in order to meet special encoding/decoding software or hardware requirements, or to achieve better compression performance. (C) 1998 Elsevier Science B.V. All rights reserved.
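The dictionary-as-cache view can be illustrated with a toy example (assumed, not taken from the paper): a bounded phrase dictionary whose replacement policy, here LRU, is one of the management parameters such a model would expose.

```python
# Toy sketch: a phrase dictionary managed like a cache, with an LRU
# replacement policy. Capacity and policy correspond to two of the
# management parameters a unified dictionary model can vary.
from collections import OrderedDict

class LRUDictionary:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()   # phrase -> code
        self.next_code = 0

    def lookup(self, phrase):
        """Return the phrase's code, refreshing its recency, or None."""
        if phrase in self.entries:
            self.entries.move_to_end(phrase)
            return self.entries[phrase]
        return None

    def insert(self, phrase):
        """Add a new phrase, evicting the least recently used if full."""
        if phrase in self.entries:
            return
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)
        self.entries[phrase] = self.next_code
        self.next_code += 1
```

Swapping the eviction line for FIFO or random replacement changes the "management policy" without touching the rest of the coder, which is the point of the unified view.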
The robustness to transmission errors of JPEG coded images is investigated and techniques are proposed to reduce their effects. After an analysis of the JPEG transfer format, three main classes of transfer format constituents are distinguished, and a JPEG-compatible approach is proposed to stop error propagation in the entropy coded data, with encoder and decoder reset after fixed coding interval lengths. With the use of restart intervals, the propagation of errors is stopped but no correction has taken place. Therefore, a concealment procedure is defined and investigated. It consists of two steps. First, error detection must be performed; three different techniques are assessed and compared. Then, block error concealment is achieved. Simulation results are reported. Depending on the entropy coding and on the neighborhood templates used for detection and concealment, prediction based or interpolation based, improvements between 3 and 10 dB in peak-to-peak signal-to-noise ratio (PSNR) are provided by the robust decoder with respect to conventional JPEG decoders, for bit error rates around 10^-4. (C) 1998 Elsevier Science B.V. All rights reserved.
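The effect of restart intervals can be modeled in a few lines (a toy illustration, not part of the paper's simulations): a single bit error desynchronises the entropy decoder, so every block up to the next reset point is lost rather than the rest of the image.

```python
# Toy model: extent of error propagation with and without restart intervals.

def corrupted_blocks(error_block, total_blocks, restart_interval=None):
    """Indices of blocks lost after one error in block `error_block`."""
    if restart_interval is None:
        # No resets: the decoder stays desynchronised to the stream's end.
        return list(range(error_block, total_blocks))
    # With resets: the loss stops at the end of the current restart interval.
    end = (error_block // restart_interval + 1) * restart_interval
    return list(range(error_block, min(end, total_blocks)))
```

With 16 blocks and a restart interval of 4, for example, an error in block 5 loses only blocks 5 through 7 instead of blocks 5 through 15.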
ISBN:
(Print) 0819429597
Many image compression techniques have been developed for remote sensing imagery over the last thirty years. What are considered standard techniques, such as principal component analysis, the discrete cosine transform, predictive coding, etc., have shown their limitations. Wavelet transform techniques have been increasingly used in recent years. In this paper a new and efficient technique is presented that provides nearly lossless compression of multichannel remote sensing imagery by combining wavelet decomposition, non-uniform quantization, arithmetic coding, and a geometric vector quantizer (GVQ) to achieve the compression task with very minimal loss. The detailed procedures are illustrated with real remote sensing images.
ISBN:
(Print) 0819427446
In this paper, we address the problem of lossless and nearly-lossless multispectral compression of remote-sensing data acquired using SPOT satellites. Lossless compression algorithms classically have two stages: transformation of the available data, and coding. The purpose of the first stage is to express the data as uncorrelated data in an optimal way. In the second stage, coding is performed by means of an arithmetic coder. In this paper, we discuss two well-known approaches for spatial as well as multispectral compression of SPOT images: 1) the efficiency of several predictive techniques (MAP, CALIC, 3D predictors) is compared, and the advantages of 2D versus 3D error feedback and context modeling are examined; 2) the use of wavelet transforms for lossless multispectral compression is discussed. Then, applications of the above-mentioned methods to quincunx sampling are evaluated. Lastly, some results on how predictive and wavelet techniques behave when nearly-lossless compression is needed are given.
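A small sketch of why wavelets suit lossless coding (one common choice, not necessarily the authors' transform): the integer Haar, or S-, transform maps each sample pair to an integer mean and difference and is exactly invertible, so decorrelation costs no information.

```python
# Sketch: the integer Haar (S-) transform, a losslessly invertible
# decorrelating step used in integer wavelet decompositions.

def s_transform(pairs):
    """Forward step: (a, b) -> (floor((a+b)/2), a-b) for each sample pair."""
    return [((a + b) // 2, a - b) for a, b in pairs]

def s_inverse(coeffs):
    """Exact inverse: recover (a, b) from (mean, difference)."""
    out = []
    for s, d in coeffs:
        a = s + (d + 1) // 2
        out.append((a, a - d))
    return out
```

Because the round-trip is exact on integers, the transform can feed an entropy coder directly for lossless operation, or its coefficients can be coarsely quantized for the nearly-lossless case.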
The super high-definition images (SHDI) expected to be prominent in the systems of the 21st century consist of approximately 4000 scan lines and have extremely high fidelity. This paper describes a coding scheme for s...
The paper presents a novel software and hardware design of a universal arithmetic coding algorithm where the 256 ASCII codes of symbols, as a specific example, form the alphabet. Essentially, the two coding equations are modified by specifying the code values as the lower end-point value of the coding range and the width of this range. Therefore the procedures of sending output codes, solving the so-called underflow problem, and updating the coding range can be unified and simply controlled by the value of the coding range. As a result, a hardware architecture can be directly designed to implement the algorithm on a real-time basis, where the single operation of normalisation can be implemented in parallel. In addition, a specific design for decoding the compressed output, theoretical analysis, and real-time architectures of both encoding and decoding are described. Practical C source code of the main functions and experimental results are also reported.
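The low/width formulation can be sketched as follows (a simplified illustration restricted to a binary alphabet, not the paper's 256-symbol design): the coder's state is the range's lower end and width, renormalisation shifts out bits that are already settled, and the underflow case defers bits until the range resolves.

```python
# Sketch: integer arithmetic coding with the range kept as [low, high],
# standard renormalisation, and deferred (pending) underflow bits.
# Symbol 0 has probability p0_num / p_den; names are illustrative.

PRECISION = 32
TOP = 1 << PRECISION
HALF, QUARTER = TOP >> 1, TOP >> 2

class Encoder:
    def __init__(self):
        self.low, self.high = 0, TOP - 1
        self.pending = 0               # deferred underflow bits
        self.bits = []

    def _emit(self, bit):
        self.bits.append(bit)
        self.bits.extend([1 - bit] * self.pending)
        self.pending = 0

    def encode(self, symbol, p0_num, p_den):
        span = self.high - self.low + 1
        split = self.low + span * p0_num // p_den - 1
        if symbol == 0:
            self.high = split
        else:
            self.low = split + 1
        while True:                    # renormalise: shift out settled bits
            if self.high < HALF:
                self._emit(0)
            elif self.low >= HALF:
                self._emit(1)
                self.low -= HALF; self.high -= HALF
            elif self.low >= QUARTER and self.high < 3 * QUARTER:
                self.pending += 1      # underflow: defer until resolved
                self.low -= QUARTER; self.high -= QUARTER
            else:
                break
            self.low *= 2
            self.high = self.high * 2 + 1

    def finish(self):
        self.pending += 1
        self._emit(0 if self.low < QUARTER else 1)
        return self.bits

class Decoder:
    def __init__(self, bits):
        self.bits, self.pos = bits, 0
        self.low, self.high = 0, TOP - 1
        self.code = 0
        for _ in range(PRECISION):
            self.code = self.code * 2 + self._next()

    def _next(self):
        bit = self.bits[self.pos] if self.pos < len(self.bits) else 0
        self.pos += 1
        return bit

    def decode(self, p0_num, p_den):
        span = self.high - self.low + 1
        split = self.low + span * p0_num // p_den - 1
        symbol = 0 if self.code <= split else 1
        if symbol == 0:
            self.high = split
        else:
            self.low = split + 1
        while True:                    # mirror the encoder's renormalisation
            if self.high < HALF:
                pass
            elif self.low >= HALF:
                self.low -= HALF; self.high -= HALF; self.code -= HALF
            elif self.low >= QUARTER and self.high < 3 * QUARTER:
                self.low -= QUARTER; self.high -= QUARTER; self.code -= QUARTER
            else:
                break
            self.low *= 2
            self.high = self.high * 2 + 1
            self.code = self.code * 2 + self._next()
        return symbol
```

Note how the three renormalisation cases are the only places bits are produced or consumed; this is the unification the low/width view buys, and it is what makes a parallel hardware normaliser plausible.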
Arithmetic coding is a technique which converts a given probability distribution into an optimal code and is commonly used in compression schemes. The use of arithmetic coding as an encryption scheme is considered. The simple case of a single binary probability distribution with a fixed (but unknown) probability is considered. We show that for a chosen-plaintext attack, w + 2 symbols are sufficient to uniquely determine a w-bit probability. For many known plaintexts, w + m + O(log m) symbols, where m is the length of an initial sequence containing just one of the (two possible) symbols, are sufficient. It is noted that many extensions to this basic scheme are vulnerable to the same attack, provided the arithmetic coder can be repeatedly reset to its initial state. If it cannot be reset, their vulnerability remains an open question.
The problem of generating a random number with an arbitrary probability distribution by using a general biased M-coin is studied. An efficient and very simple algorithm based on the successive refinement of partitions of the unit interval [0, 1), which we call the interval algorithm, is proposed. A fairly tight evaluation of the efficiency is given. Generalizations of the interval algorithm to the following cases are investigated: 1) the output sequence is independent and identically distributed (i.i.d.); 2) the output sequence is Markov; 3) the input sequence is Markov; 4) the input sequence and output sequence are both subject to arbitrary stochastic processes.
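The successive-refinement idea can be sketched for a binary input coin (a minimal illustration with assumed parameters, not the paper's full M-coin construction): the target distribution partitions [0, 1) into cells, each coin flip shrinks a current subinterval according to the coin's own bias, and a symbol is output as soon as the subinterval fits entirely inside one cell.

```python
# Sketch: the interval algorithm for a binary coin. Exact rational
# arithmetic keeps the interval refinement free of rounding error.
from fractions import Fraction

def interval_sample(flips, target_probs, coin_bias=Fraction(1, 2)):
    """Map a stream of coin flips (0/1) to one target symbol, or None
    if the flips do not yet pin the interval inside a single cell."""
    low, high = Fraction(0), Fraction(1)
    # Cumulative boundaries of the target partition of [0, 1).
    bounds, acc = [Fraction(0)], Fraction(0)
    for p in target_probs:
        acc += p
        bounds.append(acc)
    for flip in flips:
        # Refine [low, high) according to the coin's own partition.
        mid = low + (high - low) * coin_bias
        low, high = (low, mid) if flip == 0 else (mid, high)
        for i in range(len(target_probs)):
            if bounds[i] <= low and high <= bounds[i + 1]:
                return i               # interval settled inside cell i
    return None
```

With a fair coin and target probabilities (1/4, 3/4), a single flip of 1 already settles the interval [1/2, 1) inside the second cell, while a flip of 0 needs at least one more flip, matching the intuition that likelier symbols need fewer flips.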