ISBN (print): 0780385543
As a principal method for copyright protection of digital video data, video watermarking has been proposed and investigated. Unlike still-image watermarking, however, video watermarking must meet real-time requirements. In this paper a new real-time watermarking scheme, differential number watermarking (DNW), which can be performed directly in the VLC domain, is proposed. The label bits are embedded in a pattern of number differences between two subregions by selectively removing high-frequency components. The DNW algorithm has only half the complexity of other VLC-domain watermarking algorithms and, compared with the DEW algorithm, needs no quantization step. Experimental results show that the DNW algorithm outperforms the DEW algorithm in visual-quality impact, capacity and robustness.
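The differential-number idea can be illustrated with a toy sketch: a label bit is read from the sign of the difference between the numbers of VLC run-level codes kept in two subregions, and embedding removes high-frequency (trailing) codes until the sign matches the bit. This is only an illustration of the principle stated in the abstract; the code lists and the exact removal policy are hypothetical, not the authors' DNW algorithm.

```python
def embed_bit(region_a, region_b, bit):
    """Return copies of the two VLC code lists with `bit` embedded:
    bit 1 -> region_a keeps more codes, bit 0 -> region_b keeps more."""
    a, b = list(region_a), list(region_b)
    while True:
        diff = len(a) - len(b)
        if (bit == 1 and diff > 0) or (bit == 0 and diff < 0):
            return a, b
        # Drop one high-frequency (trailing) run-level code from the
        # side that must end up with fewer codes.
        if bit == 1:
            b.pop()
        else:
            a.pop()

def extract_bit(region_a, region_b):
    # The watermark bit is simply the sign of the count difference.
    return 1 if len(region_a) > len(region_b) else 0

# Hypothetical run-level codes for two subregions of a block.
a = [(0, 12), (1, 5), (0, 3), (2, 1)]
b = [(0, 9), (0, 4), (1, 2), (3, 1)]
wa, wb = embed_bit(a, b, 1)
print(extract_bit(wa, wb))   # -> 1
```

Because only code counts matter, the decoder needs no quantization step: it parses the VLC stream, counts codes per subregion, and compares.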
Scalable shape encoding is one of the important steps toward highly scalable object-based video coding. In this paper, a new scalable vertex-based shape intra-coding scheme is described. To improve the encoding performance, we propose a new vertex selection scheme, which reduces the number of approximation vertices. We also propose a new vertex encoding method, in which the information on the coarser layers and statistical entropy coding are exploited for high encoding efficiency. Experimental results show that the proposed scheme provides a 25-60% gain over the scalable encoding method of Buhan Jordan et al. (1998). For some sequences, it achieves a 5-10% gain in bit rate over the conventional non-scalable vertex-based coding method of O'Connell (1997), at the price of additional complexity.
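Vertex-based shape coding rests on approximating an object contour by a reduced set of vertices within a distortion bound. A generic vertex-reduction routine (Douglas-Peucker) sketches how approximation vertices can be selected; the paper's own selection and layered encoding scheme is more elaborate, so this is background illustration only, with a hypothetical contour.

```python
import math

def point_segment_dist(p, a, b):
    # Distance from point p to the segment a-b.
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def simplify(points, tol):
    # Keep endpoints; recurse on the farthest point if it exceeds `tol`.
    if len(points) < 3:
        return list(points)
    dmax, idx = max((point_segment_dist(points[i], points[0], points[-1]), i)
                    for i in range(1, len(points) - 1))
    if dmax <= tol:
        return [points[0], points[-1]]
    left = simplify(points[:idx + 1], tol)
    return left[:-1] + simplify(points[idx:], tol)

# Hypothetical contour samples; a larger tolerance keeps fewer vertices.
contour = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7)]
print(simplify(contour, 0.5))
```

In a scalable scheme, the vertices kept at a coarse tolerance form the base layer, and finer layers add the vertices recovered at smaller tolerances.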
In the skew-coordinates DCT coding method, an image is partitioned along its edges into variably shaped blocks and coded using the skew-coordinates DCT adapted to the edge direction. Compared with the conventional square-block DCT, the skew-coordinates DCT is known to improve power packing efficiency and to reduce mosquito noise in the reconstructed image. In this paper, the entropy coding method for the skew-coordinates DCT is studied in order to improve the compression ratio. As the entropy coding method, we adopt an adaptive code allocation method based on a Gaussian mixture distribution model and study the construction of the mixture model. Statistical characteristics of the DCT coefficients in real images are investigated, and it is shown that the warping effect of the skew-coordinates DCT reduces the local variation of the variance distribution of the DCT coefficients, so that a simple mean-power model is suitable as the mixture model. Finally, computer simulation results show that the proposed method is useful in improving coding performance. (C) 2001 Scripta Technica.
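A mean-power (variance) model drives code allocation through the classical high-rate rule: under a Gaussian model each coefficient receives b_i = b_avg + 0.5*log2(var_i / geometric mean of the variances). The sketch below uses hypothetical per-coefficient variances, not measured DCT statistics, and shows only this textbook allocation step, not the paper's full mixture-model construction.

```python
import math

def allocate_bits(variances, total_bits):
    """High-rate optimal bit allocation for Gaussian coefficients:
    b_i = b_avg + 0.5 * log2(var_i / geometric_mean(variances))."""
    n = len(variances)
    geo = math.exp(sum(math.log(v) for v in variances) / n)
    avg = total_bits / n
    return [avg + 0.5 * math.log2(v / geo) for v in variances]

# Hypothetical mean-power values for four DCT coefficient positions.
variances = [100.0, 25.0, 4.0, 1.0]
bits = allocate_bits(variances, 16)
print([round(b, 2) for b in bits])
print(round(sum(bits), 6))   # allocations sum to the total bit budget
```

Higher-variance coefficients receive proportionally longer codes; in practice the fractional allocations are realized by entropy coding rather than by rounding to integer word lengths.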
ISBN (print): 0780371232
We present a novel entropy coding technique which is adaptable in that each bit to be encoded may have an associated probability estimate that depends on previously encoded bits. The technique can achieve arbitrarily small redundancy, admits a simple and fast decoder, and may have advantages over arithmetic coding.
We present two methods of entropy coding for lattice codevectors. We compare our entropy coding methods with one method previously presented in the literature in terms of rate-distortion performance as well as computational complexity and memory requirements. Results are presented for artificial Laplacian and Gaussian data, as well as for LSF parameters of speech signals. In the latter case, multiple scale lattice VQ (MSLVQ) is used for quantization, which reduces the rate gain of the entropy coding method compared with the fixed-rate case, but allows dynamic allocation of the bits in the whole speech coding scheme.
Many modern analog media coders employ some form of entropy coding (EC). Usually, a simple per-letter EC is used to keep the coder's complexity and price low. In some coders, individual symbols are grouped into small fixed-size vectors before EC is applied. We extend this approach to form variable-size vector EC (VSVEC) in which vector sizes may be from 1 to several hundreds. The method is, however, complexity-constrained in the sense that the vector size is always as large as allowed by a pre-set complexity limit. The idea is studied in the framework of a modified discrete cosine transform (MDCT) coder. It is shown experimentally, using diverse audio material, that a rate reduction of about 37% can be achieved. The method is, however, not specific to MDCT coding but can be incorporated in various speech, audio, image and video coders.
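The gain from grouping symbols into vectors before entropy coding comes from source memory: for a correlated source, the per-letter entropy of symbol blocks is below the first-order entropy. The sketch below measures this on a toy sequence; it only demonstrates the rate motivation and is not the paper's complexity-constrained VSVEC algorithm.

```python
from collections import Counter
import math

def entropy_per_letter(blocks, block_len):
    """Empirical entropy of the block alphabet, in bits per source letter."""
    counts = Counter(blocks)
    total = sum(counts.values())
    h = -sum(c / total * math.log2(c / total) for c in counts.values())
    return h / block_len

# A strongly correlated toy sequence: long runs of repeated symbols.
seq = "aaaabbbb" * 50
singles = list(seq)
pairs = [seq[i:i + 2] for i in range(0, len(seq) - 1, 2)]
h1 = entropy_per_letter(singles, 1)
h2 = entropy_per_letter(pairs, 2)
print(round(h1, 3), round(h2, 3))   # pair coding needs fewer bits per letter
```

Larger vectors capture more of the correlation but enlarge the code table; a complexity-constrained scheme grows the vector size only as far as a pre-set table-size or search budget allows.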
In the absence of channel noise, variable-length quantizers perform better than fixed-rate Lloyd-Max quantizers for any source with a non-uniform density function. However, channel errors can lead to a loss of synchronization resulting in a propagation of error. To avoid having variable rate, one can use a vector quantizer selected as a sub-set of high probability points in the Cartesian product of a set of scalar quantizers and represent its elements with binary code-words of the same length (quantizer shaping). We choose these elements from a lattice, resulting in a higher quantization gain in comparison to simply using the Cartesian product of a set of scalar quantizers. We introduce a class of lattices which have a low encoding complexity, and at the same time result in a noticeable quantization gain. We combine the procedure of lattice encoding with that of quantizer shaping using hierarchical dynamic programming. In addition, by devising appropriate partitioning and merging rules, we obtain sub-optimum schemes of low complexity and small performance degradation. The proposed methods show a substantial improvement in performance and/or a reduction in the complexity with respect to the best known results. Copyright (C) 2003 AEI.
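A concrete example of a lattice with low encoding complexity and a quantization gain over the integer lattice is D_n, the set of integer vectors with even coordinate sum; its nearest-point search is a standard textbook routine. The sketch below shows that routine only, not the specific lattice class or the dynamic-programming shaping proposed in the paper.

```python
import math

def round_half_away(x):
    # Deterministic rounding (avoids Python's round-half-to-even).
    return math.floor(x + 0.5)

def nearest_Dn(x):
    """Nearest point to x in the D_n lattice (integer vectors, even sum)."""
    f = [round_half_away(v) for v in x]
    if sum(f) % 2 == 0:
        return f
    # Parity is odd: flip the coordinate with the largest rounding error
    # to its second-nearest integer, which restores an even sum.
    i = max(range(len(x)), key=lambda k: abs(x[k] - f[k]))
    f[i] += 1 if x[i] > f[i] else -1
    return f

print(nearest_Dn([0.9, 0.2, -1.1, 0.6]))   # coordinates sum to an even integer
```

The whole search costs one rounding pass plus at most one correction, which is the kind of low-complexity encoding the abstract refers to.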
An entropy-constrained quantizer Q is optimal if it minimizes the expected distortion D(Q) subject to a constraint on the output entropy H(Q). In this correspondence, we use the Lagrangian formulation to show the existence and study the structure of optimal entropy-constrained quantizers that achieve a point on the lower convex hull of the operational distortion-rate function D-h(R) = inf(Q){D(Q) : H(Q) less than or equal to R}. In general, an optimal entropy-constrained quantizer may have a countably infinite number of codewords. Our main results show that if the tail of the source distribution is sufficiently light (resp., heavy) with respect to the distortion measure, the Lagrangian-optimal entropy-constrained quantizer has a finite (resp., infinite) number of codewords. In particular, for the squared error distortion measure, if the tail of the source distribution is lighter than the tail of a Gaussian distribution, then the Lagrangian-optimal quantizer has only a finite number of codewords, while if the tail is heavier than that of the Gaussian, the Lagrangian-optimal quantizer has an infinite number of codewords.
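The Lagrangian formulation replaces the constrained problem by an unconstrained one: each sample x is mapped to the codeword c minimizing d(x, c) + lambda * len(c), where len(c) = -log2 p(c) is the ideal codeword length, and sweeping lambda traces the lower convex hull of D_h(R). A minimal sketch with hypothetical codewords and probabilities:

```python
import math

def encode(samples, codewords, probs, lam):
    """Map each sample to the codeword minimizing the Lagrangian cost
    (x - c)^2 + lam * (-log2 p(c)); returns codeword indices."""
    lengths = [-math.log2(p) for p in probs]
    out = []
    for x in samples:
        j = min(range(len(codewords)),
                key=lambda i: (x - codewords[i]) ** 2 + lam * lengths[i])
        out.append(j)
    return out

codewords = [-2.0, 0.0, 2.0]
probs = [0.2, 0.6, 0.2]              # hypothetical codeword-usage model
samples = [-1.2, 0.1, 0.9, 2.3]
print(encode(samples, codewords, probs, 0.0))  # lam = 0: pure nearest neighbour
print(encode(samples, codewords, probs, 5.0))  # large lam: rate-biased mapping
```

At lambda = 0 the rule reduces to nearest-neighbour quantization; as lambda grows, cheap (high-probability) codewords absorb more samples, trading distortion for rate. The finite-versus-infinite codebook result in the correspondence concerns how many codewords survive this rule under a given source tail.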
In this paper, we propose to employ predictive coding for lossy compression of synthetic aperture radar (SAR) raw data. We exploit the known result that a blockwise-normalized SAR raw signal is a Gaussian stationary process in order to design an optimal decorrelator for this signal. We show that, due to the statistical properties of the SAR signal, an along-range linear predictor with few taps is able to effectively capture most of the raw-signal correlation. The proposed predictive coding algorithm, which performs quantization of the prediction error, optionally followed by entropy coding, exhibits a number of advantages, notably an interesting performance/complexity trade-off, with respect to other techniques such as flexible block adaptive quantization (FBAQ) or methods based on transform coding; fractional output bit rates can also be achieved in the entropy-constrained mode. Simulation results on real-world SIR-C/X-SAR as well as simulated raw and image data show that the proposed algorithm outperforms FBAQ in SNR, at a computational cost compatible with modern SAR systems.
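The core loop of predictive coding with a short linear predictor is simple: predict each sample from previously reconstructed ones, uniformly quantize the prediction error, and (optionally) entropy-code the quantized indices. A one-tap DPCM sketch follows; the predictor coefficient and step size are hypothetical, not an optimal design for SAR statistics.

```python
def dpcm_encode(signal, a=0.9, step=0.5):
    """One-tap DPCM: returns quantizer indices and the reconstruction
    the decoder would produce from those indices."""
    indices, recon = [], []
    prev = 0.0
    for x in signal:
        pred = a * prev              # along-range linear prediction (1 tap)
        e = x - pred                 # prediction error
        q = round(e / step)          # uniform quantization of the error
        indices.append(q)
        prev = pred + q * step       # decoder-side reconstruction
        recon.append(prev)
    return indices, recon

sig = [1.0, 1.2, 0.8, 0.9, 1.1]
idx, rec = dpcm_encode(sig)
print(idx)
print([round(r, 2) for r in rec])
```

Because the error indices cluster around zero for a well-matched predictor, entropy coding the indices yields the fractional bit rates mentioned in the abstract; predicting from the *reconstructed* signal keeps encoder and decoder in sync.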
Context-based Adaptive Binary Arithmetic Coding (CABAC), a normative part of the new ITU-T/ISO/IEC standard H.264/AVC for video compression, is presented. By combining an adaptive binary arithmetic coding technique with context modeling, a high degree of adaptation and redundancy reduction is achieved. The CABAC framework also includes a novel low-complexity method for binary arithmetic coding and probability estimation that is well suited to efficient hardware and software implementations. CABAC significantly outperforms the baseline entropy coding method of H.264/AVC for the typical range of envisaged target applications. For a set of test sequences representing typical material used in broadcast applications, and for a range of acceptable video quality of about 30 to 38 dB, average bit-rate savings of 9%-14% are achieved.
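The context-modeling idea can be sketched in a few lines: each binary symbol is coded with a probability estimate drawn from a context (here simply the previous bit) and the estimate is updated adaptively after every symbol. The ideal code length -log2(p) stands in for the arithmetic coder; this is an illustration of the principle only, not the H.264/AVC probability state machine or binarization.

```python
import math

def adaptive_cost(bits):
    """Total ideal code length (in bits) of a binary sequence coded with a
    per-context adaptive probability model; context = previous bit."""
    ctx = {0: [1, 1], 1: [1, 1]}     # per-context (zeros, ones), Laplace-smoothed
    cost, prev = 0.0, 0
    for b in bits:
        zeros, ones = ctx[prev]
        p = (ones if b else zeros) / (zeros + ones)
        cost += -math.log2(p)        # bits an ideal arithmetic coder would spend
        ctx[prev][b] += 1            # adaptive update of the context model
        prev = b
    return cost

# Correlated toy data: runs of identical bits favour the context model.
bits = [0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1] * 20
print(round(adaptive_cost(bits) / len(bits), 3))  # below 1 bit per bin
```

On this correlated sequence the model learns that a bin usually repeats its predecessor, so the average cost drops below 1 bit per bin, which is the kind of redundancy reduction context modeling buys over a memoryless coder.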