The principle of minimum entropy of error estimation (MEEE) is formulated for discrete random variables. In the case when the random variable to be estimated is binary, we show that the MEEE is given by a Neyman-Pearson-type strictly monotone test. In addition, the asymptotic behavior of the error probabilities is proved to be equivalent to that of the Bayesian test.
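As a concrete illustration of the binary case, the sketch below assumes a toy Gaussian observation model (the priors, means, and noise level are invented for the example, not taken from the paper) and sweeps a decision threshold, confirming that the threshold minimizing the entropy of the error indicator coincides with the Bayes (minimum-error-probability) threshold:

```python
import math

# Binary source X in {0,1} observed through Gaussian noise
# (illustrative assumption; the paper's setting is more general).
PI0, PI1 = 0.6, 0.4            # priors
MU0, MU1, SIGMA = 0.0, 1.0, 1.0

def Q(x):                      # Gaussian tail probability
    return 0.5 * math.erfc(x / math.sqrt(2))

def error_prob(t):
    # decide X=1 when Y > t
    p_fa = Q((t - MU0) / SIGMA)          # P(Y > t | X=0)
    p_md = 1 - Q((t - MU1) / SIGMA)      # P(Y <= t | X=1)
    return PI0 * p_fa + PI1 * p_md

def bin_entropy(p):
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Sweep thresholds; minimize the entropy of the error indicator (MEEE)
# and, separately, the error probability itself (Bayes).
ts = [i / 100 for i in range(-200, 300)]
t_meee = min(ts, key=lambda t: bin_entropy(error_prob(t)))
t_bayes = min(ts, key=error_prob)
print(t_meee, t_bayes)         # the two thresholds coincide here
```

Because binary error entropy is strictly increasing in the error probability below 1/2, the two criteria select the same likelihood-ratio threshold in this example.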
A novel image compression technique is presented for low-cost multimedia applications. The technique is based on quadtree-segmented two-dimensional predictive coding, which exploits the correlation between adjacent image blocks and the uniformity within variable-size image blocks. Low-complexity visual pattern block truncation coding (VP-BTC), defined with a set of sixteen visual patterns, is employed to code the high-activity image blocks. Simulation results show that the new technique achieves high performance with superior subjective quality at low bit rates.
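The abstract's VP-BTC builds on classic block truncation coding; the sketch below implements only plain moment-preserving two-level BTC (the visual-pattern extension is not modeled), showing that the reconstruction preserves the block's first two sample moments:

```python
import numpy as np

def btc_encode(block):
    """Classic two-level block truncation coding (a simpler cousin of
    the VP-BTC scheme in the abstract): keep mean/std and a bitmap."""
    mean, std = block.mean(), block.std()
    bitmap = block >= mean
    q, m = int(bitmap.sum()), block.size
    if q in (0, m):                  # uniform block: one level suffices
        return bitmap, mean, mean
    # moment-preserving output levels
    a = mean - std * np.sqrt(q / (m - q))
    b = mean + std * np.sqrt((m - q) / q)
    return bitmap, a, b

def btc_decode(bitmap, a, b):
    return np.where(bitmap, b, a)

block = np.array([[2., 9., 12., 15.],
                  [2., 11., 11., 9.],
                  [2., 3., 12., 15.],
                  [3., 3., 4., 14.]])
bitmap, a, b = btc_encode(block)
rec = btc_decode(bitmap, a, b)
# first two sample moments of the block are preserved
print(round(rec.mean(), 6), round(block.mean(), 6))
```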
The combination of speech coders and entropy coders is investigated for bit rate reduction. Three speech coders of the CELP (code excited linear prediction) type are considered, and the residual correlation in LSP (line spectrum pair) coefficients and gains within a speech frame is exploited. The lossless entropy coders use Huffman, LZW (Lempel-Ziv-Welch), and GZIP (LZ-Huffman) techniques. The greatest efficiency is provided by the adaptive Huffman approach, with a 15% gain for each type of compressed parameter and an overall average bit rate reduction of 7% for the FS1016 coder and 5% for the Tetra and LBC coders.
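To see where the entropy-coding gain comes from, the sketch below Huffman-codes a hypothetical run of quantizer indices for inter-parameter differences (the symbol statistics are invented, not taken from FS1016/Tetra/LBC): small differences dominate, so the average code length falls below that of a fixed-length code:

```python
import heapq
from collections import Counter

def huffman_lengths(freqs):
    """Code lengths of a Huffman code for the given symbol counts."""
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    nxt = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        # merging two subtrees adds one bit to every contained symbol
        merged = {s: l + 1 for s, l in {**c1, **c2}.items()}
        heapq.heappush(heap, (f1 + f2, nxt, merged))
        nxt += 1
    return heap[0][2]

# Hypothetical quantizer indices for successive LSP differences in a
# frame; small differences dominate, which the entropy coder exploits.
symbols = [0]*40 + [1]*20 + [-1]*20 + [2]*10 + [-2]*10
freqs = Counter(symbols)
lengths = huffman_lengths(freqs)
avg_bits = sum(freqs[s] * lengths[s] for s in freqs) / len(symbols)
print(avg_bits)   # below the 3 bits/symbol a fixed code for 5 symbols needs
```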
ISBN:
(Print) 0819421030
Although reversible predictive coding and reversible subband coding already exist as reversible coding methods for gray-level still images, almost no reversible methods have been proposed for transform coding. In this paper, we therefore propose several reversible transform coding methods. If conventional transform coding is used as is, the number of levels of the transform coefficients must be made very large in order to reconstruct the input signal without distortion. We therefore propose transform codings that are reversible while the number of levels of the transform coefficients remains moderate. We propose reversible coding methods that correspond to the discrete Walsh-Hadamard, Haar, and cosine transforms. Furthermore, we propose a method that uses the n-th order difference, a method in which the number of levels of the transform coefficients is the same as that of the input signal, and a reversible overlap transform coding method. Simulation shows that the compression efficiency of the proposed methods is almost the same as that of predictive coding.
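A standard way to obtain a reversible pair transform whose low-pass output keeps the input's number of levels is the integer S-transform (a Haar-like lifting step); this is one possible reading of the abstract's goal, sketched below:

```python
def s_transform(a, b):
    """Integer-reversible Haar-like pair transform (S-transform): the
    low-pass output has the same number of levels as the input."""
    low = (a + b) // 2
    high = a - b
    return low, high

def s_inverse(low, high):
    # the floor lost in (a + b) // 2 is recovered from the parity of high
    a = low + (high + 1) // 2
    b = a - high
    return a, b

# exact reconstruction over every pair of 8-bit samples
for a in range(256):
    for b in range(256):
        assert s_inverse(*s_transform(a, b)) == (a, b)
print("exact reconstruction over all 8-bit pairs")
```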
ISBN:
(Print) 0819422355
Lossy compression techniques provide far greater compression ratios than lossless and are, therefore, usually preferred in image processing applications. However, as more and more applications of digital image processing have to combine image compression and highly automated image analysis, it becomes of critical importance to study the interrelations existing between image compression and feature extraction. In this contribution we present a clear and systematic comparison of contemporary general purpose lossy image compression techniques with respect to fundamental features, namely lines and edges detected in images. To this end, a representative set of benchmark edge detection and line extraction operators is applied to original and compressed images. The effects are studied in detail, delivering clear guidelines which combination of compression technique and edge detection algorithm is best used for specific applications.
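A minimal version of such a comparison, assuming a synthetic step-edge image and using coarse requantization as a stand-in for a real lossy codec (the paper evaluates actual codecs and operators), might look like:

```python
import numpy as np

def sobel_mag(img):
    """Sobel gradient magnitude (no external imaging libraries)."""
    gx = np.zeros_like(img); gy = np.zeros_like(img)
    gx[1:-1, 1:-1] = (img[:-2, 2:] + 2*img[1:-1, 2:] + img[2:, 2:]
                      - img[:-2, :-2] - 2*img[1:-1, :-2] - img[2:, :-2])
    gy[1:-1, 1:-1] = (img[2:, :-2] + 2*img[2:, 1:-1] + img[2:, 2:]
                      - img[:-2, :-2] - 2*img[:-2, 1:-1] - img[:-2, 2:])
    return np.hypot(gx, gy)

# Synthetic image with one vertical step edge; heavy requantization
# stands in for a lossy coder.
img = np.zeros((32, 32)); img[:, 16:] = 100.0
compressed = np.round(img / 32) * 32

edges_orig = sobel_mag(img) > 50
edges_comp = sobel_mag(compressed) > 50
overlap = (edges_orig & edges_comp).sum() / max(edges_orig.sum(), 1)
print(overlap)   # fraction of original edge pixels surviving compression
```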
Presents a new image analysis technique for edge detection using multistage predictive coding (MPC). MPC is a progressive data compression technique which decomposes an image into a set of image components in multiple stages, from which the original image can be recovered. The proposed coding scheme to implement MPC is multistage delta modulation (MDM), which includes a multistage quantizer for image decomposition and a priority code for image reconstruction. The multistage quantizer decomposes an image into multiple stages progressively, in accordance with the significance of the image description. The priority code prioritizes all stages generated by the multistage quantizer: the lower the stage, the higher the priority. Using the priority code, features can be extracted from the most significant edges down to the least important details based on decreasing priorities. The experimental results are very encouraging and show that MDM is indeed a promising edge detection and feature extraction technique for real-time processing.
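The multistage decomposition idea can be sketched with plain successive residual quantization (the paper's MDM uses a one-bit delta modulator per stage; the step sizes and signal here are invented): each lower-priority stage refines the residual left by the previous, coarser stage:

```python
def multistage_decompose(signal, steps=(32, 8, 2)):
    """Successive-refinement sketch of a multistage quantizer: stage k
    requantizes the residual of stage k-1. Stage 1 = highest priority."""
    stages, residual = [], list(signal)
    for step in steps:
        q = [step * round(x / step) for x in residual]
        stages.append(q)
        residual = [x - y for x, y in zip(residual, q)]
    return stages

def reconstruct(stages, upto):
    """Progressive reconstruction from the first `upto` stages."""
    return [sum(vals) for vals in zip(*stages[:upto])]

signal = [17, 93, 40, 5, 66, 128]
stages = multistage_decompose(signal)
errs = []
for k in (1, 2, 3):
    rec = reconstruct(stages, k)
    errs.append(max(abs(s - r) for s, r in zip(signal, rec)))
print(errs)   # max error shrinks as lower-priority stages are added
```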
We point out that a prevalent form of fractal image coding can be viewed as a kind of generalized predictive coding. Several key issues in predictive coding are the prediction gain, the design of codebooks for predictors and prediction residuals, the shaping of reconstruction errors, and codec complexity. Fractal coding can yield higher prediction gains than conventional predictive coding through its use of noncausal predictors and long-term predictors. However, noncausal prediction necessitates iterative decoding, and long-term predictors require a search over a large area, both of which increase codec complexity. The design of predictors and prediction codebooks for fractal coding has relied largely on heuristics. Drawing on known results about predictive coding, we outline several directions for codec design, among which are short-term prediction and transform coding or vector quantization of prediction residuals. Shaping of reconstruction errors by noise-feedback or analysis-by-synthesis coding may also be beneficial.
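The need for iterative decoding under noncausal prediction can be shown in one toy experiment: with a contractive prediction matrix, the decoder recovers the signal as the fixed point of x = Ax + b, the same mechanism as iterative fractal decoding (the dimensions and contraction factor below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
A = rng.uniform(-1, 1, (n, n))
A *= 0.8 / np.linalg.norm(A, 2)        # enforce spectral norm < 1
x_true = rng.uniform(0, 255, n)
b = x_true - A @ x_true                # "transmitted" prediction residual

# The decoder cannot solve causally: every sample is predicted from
# others. Fixed-point iteration from any start converges instead.
x = np.zeros(n)
for _ in range(200):
    x = A @ x + b
print(np.max(np.abs(x - x_true)))      # contraction drives the error down
```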
In predictive coding for lossless image compression, full knowledge of the prediction error distribution and efficient coding with an arithmetic coding method is the best one can do under the zero-order model assumption. The zero-order error distributions are typically Laplacian with zero mean. Higher-order error distributions are often skewed, with a mean that may be positive or negative. Additional compression is achieved by an accurate characterization of context-dependent error distributions. This paper presents the results of a study of the different characteristics of the error distributions found in the higher-order conditioning contexts of the LOCO and CALIC algorithms. The study includes nonstationary behavior.
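The context-dependent skew can be reproduced on a deterministic toy image (the predictor and context rule below are simplifications, not the actual LOCO/CALIC definitions): conditioning the W-predictor's errors on a causal trend context yields strongly biased per-context means:

```python
import numpy as np

# Toy image: a ramp half (slope 3) and a flat half per row.
cols = np.concatenate([3 * np.arange(32), np.full(32, 3 * 31)])
img = np.tile(cols, (16, 1)).astype(float)

W, WW, cur = img[:, 1:-1], img[:, :-2], img[:, 2:]
err = cur - W                    # errors of the trivial predictor x̂ = W
d = W - WW                       # causal context: local horizontal trend

mean_ramp = err[d == 3].mean()   # context "rising trend"
mean_flat = err[d == 0].mean()   # context "flat"
print(mean_ramp, mean_flat)      # per-context means differ sharply
```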
We present a novel means for predicting the shape of a person's mouth from the corresponding speech signal and explore applications of this prediction to video coding. The prediction is accomplished by modeling the probability distribution of the audiovisual features by a Gaussian mixture density. The optimal estimate for the visual features given the acoustic features can then be computed using this probability distribution. The ability to predict a person's mouth shape from the corresponding audio leads to a number of interesting joint audio-video coding strategies. In the cross-modal predictive coding system described, a model-based video coder compares measured visual parameters with predicted visual parameters, and sends the difference between the two to the receiver. Since the decoder also receives the acoustic data, it can form the prediction and then reconstruct the original parameters by adding the transmitted error signal.
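The MMSE estimate of the visual feature under a Gaussian mixture model has a closed form; the sketch below uses an invented two-component scalar mixture (the paper fits the density to real audiovisual features):

```python
import numpy as np

# Joint (audio a, visual v) features modeled by a known two-component
# Gaussian mixture; all parameters are invented for this example.
weights = np.array([0.5, 0.5])
mu = np.array([[0.0, 1.0],       # [mean_a, mean_v] per component
               [4.0, 5.0]])
var_a = np.array([1.0, 1.0])     # Var(a) per component
cov_av = np.array([0.8, 0.8])    # Cov(a, v) per component
var_v = np.array([1.0, 1.0])

def predict_v(a):
    """MMSE estimate E[v | a] under the mixture model."""
    # responsibilities of each component given the audio feature
    lik = weights * np.exp(-0.5 * (a - mu[:, 0])**2 / var_a) / np.sqrt(var_a)
    resp = lik / lik.sum()
    # per-component conditional mean of v given a (joint-Gaussian rule)
    cond = mu[:, 1] + cov_av / var_a * (a - mu[:, 0])
    return float(resp @ cond)

print(predict_v(0.0), predict_v(4.0))
```

At each component's audio mean, the estimate lands near that component's visual mean, which is what lets the decoder form a usable prediction from audio alone.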
Abstract only given. Discusses the compression of an important class of computer images, called aerial ortho images, that result from geodetic transformation computations [Kinsner, 1994]. The computations introduce numerical noise, making the images nearly incompressible losslessly because of their high entropy. The use of classical lossy compression schemes is also not desirable because their effects on the original image are unknown. We therefore propose the use of image denoising coupled with lossless image compression, which preserves selected image characteristics. Two denoising schemes for a compression ratio of 2:1 are compared. The first scheme is based on Donoho's (1992) wavelet shrinkage scheme, which preserves image smoothness. We study the effect of various shrinkage parameter values on the compression ratio and image quality; a 35.5 dB peak signal-to-noise ratio (PSNR) is obtained for a compression ratio of 2.03:1. This approach preserves high-frequency information, so that sharp edges do not become blurred as in classical filtering methods. This is critically important, because the main feature of ortho images lies in their flatness and the precision of edge positions. The second scheme is based on preserving pixel predictability [Kostelich and Schreiber, 1993], leading to a variant of planar predictive coding. To the edge-preserving capability, this approach adds the constraint that the deviation between the original and denoised images stay within one grayscale level per pixel. As a result, two different predictive coding schemes achieve a compression ratio of 2:1 at 49.9 dB and 51.2 dB PSNR.
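A minimal sketch of the second scheme's pixel-deviation constraint, assuming a planar predictor W + N - NW and a clamp of one gray level (the actual denoising rule in the paper may differ):

```python
import numpy as np

def planar_denoise(img):
    """Predictability-preserving denoising sketch: replace a pixel by
    its planar prediction W + N - NW whenever that moves it by at most
    one gray level, so |denoised - original| <= 1 everywhere."""
    out = img.astype(np.int32).copy()
    h, w = out.shape
    for i in range(1, h):
        for j in range(1, w):
            pred = out[i, j-1] + out[i-1, j] - out[i-1, j-1]
            if abs(pred - img[i, j]) <= 1:
                out[i, j] = pred      # snap to the planar prediction
    return out

rng = np.random.default_rng(2)
plane = np.add.outer(np.arange(16), 2 * np.arange(16))
noisy = plane + rng.integers(-1, 2, plane.shape)   # ±1 numerical noise
den = planar_denoise(noisy)
print(np.max(np.abs(den - noisy)))   # deviation is <= 1 by construction
```

Snapping pixels onto the local plane makes the planar predictor's residuals mostly zero, which is what restores losslessly-compressible structure.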