We consider the problem of optimum joint information embedding and lossy compression with respect to a fidelity criterion. The goal is to find the minimum achievable compression (composite) rate R-c as a function of the embedding rate R-e and the allowed average distortion level Delta, such that the average probability of error in decoding the embedded message can be made arbitrarily small for sufficiently large block length. We characterize the minimum achievable composite rate for both the public and the private versions of the problem and demonstrate how this minimum can be approached in principle. We also provide an alternative single-letter expression for the maximum achievable embedding rate (the embedding capacity) as a function of R-c and Delta, above which no reliable embedding scheme exists.
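As background (this is the classical result such characterizations build on, not the paper's own single-letter expression), embedding capacity in the public setting is typically analyzed via the Gel'fand-Pinsker formula for a channel whose state S (here, the covertext) is known noncausally at the encoder:

```latex
C = \max_{p(u \mid s)} \bigl[ I(U;Y) - I(U;S) \bigr]
```

where U is an auxiliary random variable, Y the channel output, and the penalty term I(U;S) reflects the rate spent binning against the known covertext.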
The rapid development of wireless networks and mobile devices has made mobile video communication a particularly promising service. We previously proposed an effective video form, scalable portrait video. In low-bandwidth conditions, portrait video possesses clearer shape, smoother motion, and much lower computational cost than discrete cosine transform (DCT)-based schemes. However, the bit rate of portrait video cannot be accurately modeled by a rate-distortion function as in DCT-based schemes, so effectively controlling the bit rate is a hard challenge for portrait video. In this paper, we propose a novel model-based rate-control method. Although the coding parameters cannot be directly calculated from the target bit rate, we build a model between the bit-rate reduction and the percentage of less probable symbols (LPS) based on the principle of entropy coding, which is referred to as the LPS-rate model. We use this model to obtain the desired coding parameters. Experimental results show that the proposed method not only effectively controls the bit rate, but also significantly reduces the number of skipped frames. The principle of this method can also be applied to general bit-plane coding in other image processing and video compression technologies.
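As a rough illustration of the entropy-coding principle behind an LPS-rate style model (the paper's concrete model is not reproduced here), the ideal cost of a binary entropy coder is the binary entropy of the LPS probability, so driving down the LPS percentage directly drives down the achievable bit rate:

```python
import math

def binary_entropy(p_lps: float) -> float:
    """Ideal bits per binary symbol when the less probable symbol (LPS)
    occurs with probability p_lps (0 <= p_lps <= 0.5)."""
    if p_lps == 0.0:
        return 0.0
    p = p_lps
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Lowering the LPS percentage lowers the ideal entropy-coded rate.
rates = {p: binary_entropy(p) for p in (0.5, 0.3, 0.1, 0.01)}
```

A rate-control loop could then invert this monotone relation to pick coding parameters hitting a target rate; the mapping from parameters to LPS percentage is what the paper's model supplies.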
We consider the problem of optimum joint public information embedding and lossy compression with respect to a fidelity criterion. The decompressed composite sequence (stego-text) is distorted by a stationary memoryless attack, resulting in a forgery which in turn is fed into the decoder, whose task is to retrieve the embedded information. The goal of this paper is to characterize the maximum achievable embedding rate R-e (the embedding capacity C-e) as a function of the compression (composite) rate R-c and the allowed average distortion level Delta, such that the average probability of error in decoding the embedded message can be made arbitrarily small for sufficiently large block length. We characterize the embedding capacity and demonstrate how it can be approached in principle. We also provide a single-letter expression for the minimum achievable composite rate as a function of R-e and Delta, below which there exists no reliable embedding scheme.
We study the effect of the introduction of side information into the causal source coding setting of Neuhoff and Gilbert. We find that the spirit of their result, namely, the sufficiency of time-sharing scalar quantizers (followed by appropriate lossless coding) for attaining optimum performance within the family of causal source codes, extends to many scenarios involving availability of side information (at both encoder and decoder, or only on one side). For example, in the case where side information is available at both encoder and decoder, we find that time-sharing side-information-dependent scalar quantizers (at most two for each side-information symbol) attains optimum performance. This remains true even when the reproduction sequence is allowed noncausal dependence on the side information and even for the case where the source and the side information, rather than consisting of independent and identically distributed (i.i.d.) pairs, form, respectively, the output of a memoryless channel and its stationary ergodic input.
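A minimal sketch of the code structure the result endorses, with entirely hypothetical parameters (step sizes, time-sharing fraction lam, binary side information): for each side-information symbol, time-share between at most two scalar quantizers, then (not shown) losslessly code the outputs.

```python
import random

def scalar_quantize(x, step):
    """Uniform scalar quantizer with the given step size."""
    return step * round(x / step)

# Hypothetical setup, illustrative only: each side-information symbol
# selects a pair of step sizes, time-shared with fraction lam on the first.
steps = {0: (0.5, 1.0), 1: (0.25, 2.0)}
lam = 0.7

rng = random.Random(0)
source = [rng.gauss(0.0, 1.0) for _ in range(2000)]
side = [rng.randint(0, 1) for _ in range(2000)]  # known at both ends

recon = []
for x, z in zip(source, side):
    step_a, step_b = steps[z]
    step = step_a if rng.random() < lam else step_b
    recon.append(scalar_quantize(x, step))

mse = sum((x, y) == () or (x - y) ** 2 for x, y in zip(source, recon)) / len(source)
mse = sum((x - y) ** 2 for x, y in zip(source, recon)) / len(source)
```

The point of the theorem is that, within the family of causal codes, no more elaborate causal mapping can beat an optimized choice of such side-information-dependent quantizer pairs.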
In predictive image coding, the least squares (LS)-based adaptive predictor is noted as an efficient method to improve prediction results around edges. However, pixel-by-pixel optimization of the predictor coefficients leads to high coding complexity. To reduce computational complexity, we activate the LS optimization process only when the coding pixel is around an edge or when the prediction error is large. We propose a simple yet effective edge detector using only causal pixels. The system can look ahead to determine whether the coding pixel is around an edge and initiate the LS adaptation to prevent the occurrence of a large prediction error. Our experiments show that the proposed approach achieves a noticeable reduction in complexity with only a minor degradation in the prediction results.
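A toy sketch of this activation strategy, under assumed simplifications (only two causal neighbours as predictors, a plain |west - north| causal edge test, and an arbitrary window size and threshold; the paper's detector and predictor support are not reproduced):

```python
def solve_2x2(a11, a12, a22, b1, b2):
    """Solve the symmetric 2x2 normal equations [[a11,a12],[a12,a22]] w = b."""
    det = a11 * a22 - a12 * a12
    if abs(det) < 1e-12:
        return 0.5, 0.5  # degenerate window: fall back to fixed averaging
    return (a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det

def predict_pixel(img, r, c, window=4, thresh=16):
    """Predict img[r][c] from its west and north causal neighbours.

    A causal edge test (|west - north| > thresh) decides whether to run
    LS adaptation over a small causal training window; otherwise a cheap
    fixed predictor (the neighbour average) is used.
    """
    w, n = img[r][c - 1], img[r - 1][c]
    if abs(w - n) <= thresh:
        return 0.5 * (w + n)           # smooth region: fixed predictor
    # Edge region: LS-fit weights on causal samples above/left of (r, c).
    a11 = a12 = a22 = b1 = b2 = 0.0
    for i in range(max(1, r - window), r + 1):
        for j in range(max(1, c - window), c + 1):
            if i == r and j >= c:
                break                   # keep the window strictly causal
            xw, xn, y = img[i][j - 1], img[i - 1][j], img[i][j]
            a11 += xw * xw; a12 += xw * xn; a22 += xn * xn
            b1 += xw * y; b2 += xn * y
    ww, wn = solve_2x2(a11, a12, a22, b1, b2)
    return ww * w + wn * n
```

The complexity saving comes from the first branch: in smooth regions (the majority of pixels) no normal equations are formed or solved.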
H.264 is the newest video coding standard and achieves a significant improvement in coding efficiency. The entropy coding methods used in H.264 are CAVLC and CABAC. Although these two variable-length coding methods achieve high compression, they are very sensitive to channel errors. This paper presents a joint source-channel MAP (maximum a posteriori probability) decoding method to deal with this sensitivity and applies it to the decoding of motion vectors in H.264-coded video streams. Although the H.264 codec already provides several error-resilience tools, we believe this method can provide additional error resilience for H.264 streams. Experiments indicate that our JSCD achieves a significant improvement over a separate decoding scheme.
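A toy of the MAP principle involved (not H.264's actual codes or the paper's decoder; the codebook, prior, and equal-length restriction below are all invented for illustration): combine a source prior over candidate symbols with a channel likelihood and pick the posterior maximizer, rather than decoding the channel and source separately.

```python
import math

def map_decode(received_bits, codebook, prior, flip_prob):
    """Toy joint source-channel MAP decoder over a binary symmetric channel.

    codebook maps each candidate symbol (e.g. a motion-vector value) to a
    bit pattern; prior gives p(symbol). Returns
    argmax p(symbol) * p(received | pattern).
    """
    best, best_score = None, -math.inf
    for sym, bits in codebook.items():
        if len(bits) != len(received_bits):
            continue  # toy restriction: compare equal-length patterns only
        flips = sum(a != b for a, b in zip(bits, received_bits))
        score = (math.log(prior[sym])
                 + flips * math.log(flip_prob)
                 + (len(bits) - flips) * math.log(1 - flip_prob))
        if score > best_score:
            best, best_score = sym, score
    return best

# Hypothetical variable-length codebook: short codes for probable
# (small) motion-vector values, as in typical entropy codes.
codebook = {0: "1", 1: "010", -1: "011", 2: "00100", -2: "00101"}
prior = {0: 0.5, 1: 0.2, -1: 0.2, 2: 0.05, -2: 0.05}
```

With a noisy channel the prior term lets a probable symbol win even when its pattern is not the closest in Hamming distance, which is the error-resilience gain over separate decoding.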
We present the implementation of a lossless hyperspectral image compression method for novel parallel environments. The method is an interband version of a linear prediction approach for hyperspectral images. The interband linear prediction method consists of two stages: predictive decorrelation that produces residuals, and entropy coding of the residuals. The compression part is embarrassingly parallel, while the decompression part uses pipelining to parallelize the method. The results and comparisons with other methods are discussed. The speedup of the threaded version is almost linear in the number of processors. © 2005 SPIE and IS&T.
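A minimal sketch of the two-stage idea, under an assumed simplification (each band predicted from the previous one by a single affine fit per band; the paper's predictor is not reproduced, and the entropy-coding stage is left out):

```python
def fit_affine(x, y):
    """Closed-form least-squares fit y ~ a*x + b with one predictor band."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((u - mx) ** 2 for u in x)
    sxy = sum((u - mx) * (v - my) for u, v in zip(x, y))
    a = sxy / sxx if sxx else 0.0
    return a, my - a * mx

def interband_residuals(bands):
    """Residuals of predicting each band from the previous one.

    Each band is a flat list of pixel values; the first band is kept
    verbatim. The residuals would then go to the entropy coder. Because
    each band pair is independent, this stage parallelizes trivially.
    """
    out = [list(bands[0])]
    for prev, cur in zip(bands, bands[1:]):
        a, b = fit_affine(prev, cur)
        out.append([v - (a * u + b) for u, v in zip(prev, cur)])
    return out

bands = [[10, 20, 30, 40], [21, 41, 61, 81], [5, 6, 7, 8]]
res = interband_residuals(bands)
```

Decompression, by contrast, needs band k reconstructed before band k+1 can be predicted, which is why the paper pipelines it instead.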
This paper presents methods for performing steganography and steganalysis using a statistical model of the cover medium. The methodology is general, and can be applied to virtually any type of media. It provides answers for some fundamental questions that have not been fully addressed by previous steganographic methods, such as how large a message can be hidden without risking detection by certain statistical methods, and how to achieve this maximum capacity. Current steganographic methods have been shown to be insecure against simple statistical attacks. Using the model-based methodology, an example steganography method is proposed for JPEG images that achieves a higher embedding efficiency and message capacity than previous methods while remaining secure against first order statistical attacks. A method is also described for defending against "blockiness" steganalysis attacks. Finally, a model-based steganalysis method is presented for estimating the length of messages hidden with Jsteg in JPEG images.
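For context on the Jsteg scheme targeted by the steganalysis method, here is a minimal sketch of Jsteg-style embedding (to my understanding, Jsteg writes message bits into the LSBs of quantized DCT coefficients, skipping coefficients equal to 0 or 1; this toy operates on a plain coefficient list, not a real JPEG):

```python
def jsteg_embed(coeffs, bits):
    """Embed bits in the LSBs of quantized DCT coefficients, Jsteg-style:
    coefficients equal to 0 or 1 are left untouched."""
    out = list(coeffs)
    it = iter(bits)
    for i, c in enumerate(out):
        if c in (0, 1):
            continue
        try:
            b = next(it)
        except StopIteration:
            break                  # message exhausted
        out[i] = (c & ~1) | b      # overwrite the LSB
    return out

def jsteg_extract(coeffs, n_bits):
    """Read back the LSBs of the usable coefficients."""
    bits = []
    for c in coeffs:
        if c in (0, 1):
            continue
        bits.append(c & 1)
        if len(bits) == n_bits:
            break
    return bits
```

Such plain LSB overwriting perturbs the coefficient histogram in a detectable way, which is exactly the statistic the model-based length-estimation attack in the paper exploits.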
ISBN (print): 0769522882
In the RESUME project we explore the use of reconfigurable hardware for the design of portable multimedia systems by developing a scalable wavelet-based video codec. A scalable video codec provides the ability to produce a smaller video stream with reduced frame rate, resolution, or image quality starting from the original encoded video stream with almost no additional computation. This is important for portable devices that have different Quality of Service (QoS) requirements and power restrictions. Conventional video codecs do not possess this property; reduced quality is obtained through the arduous process of decoding the encoded video stream and recoding it at a lower quality, so producing such a smaller stream has a very high computational cost. In this article we present the results of our investigation into the hardware implementation of such a scalable video codec. In particular, we found that the implementation of the entropy codec is a significant bottleneck. We present an alternative, hardware-friendly algorithm for entropy coding with superior data locality (both temporal and spatial), a smaller memory footprint, and superior compression, while maintaining all required scalability properties.
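To see why wavelet coding gives resolution scalability almost for free, consider one level of the 1D Haar transform (a deliberately simple stand-in for the codec's actual filter bank): decoding only the low band yields a half-resolution signal, and the high band restores full resolution exactly.

```python
def haar_1d(signal):
    """One level of an (unnormalised) 1D Haar transform: pairwise averages
    give a half-resolution approximation, differences give the detail band."""
    low = [(a + b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    high = [(a - b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    return low, high

def haar_inverse(low, high):
    """Perfect reconstruction from both bands."""
    out = []
    for l, h in zip(low, high):
        out.extend((l + h, l - h))
    return out

x = [4, 2, 6, 6, 1, 3, 8, 0]
low, high = haar_1d(x)
# Resolution scalability: transmitting only `low` gives a half-size
# preview; appending `high` restores the original exactly.
```

A DCT-based codec has no such prefix structure in its bitstream, which is why extracting a reduced-quality stream from it requires a full decode and re-encode.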
Authors: Hu, R.M.; Chen, S.X.; Ai, H.J.; Xiong, N.X. (Wuhan Univ, Natl Engn Res Ctr Multimedia Software, Key Lab Multimedia & Network Commun Engn, Comp Sch, Wuhan 430072, Peoples R China)
ISBN (print): 0769524052
The AVS Audio Coding Standard is the first standard for Hi-Fi audio in China. The framework of AVS Audio is introduced, and key technologies are described in detail, including a long/short window switch decision based on energy and unpredictability, an integer MDCT for lossless time-frequency transform, square polar stereo coding, and context-dependent bit-plane coding for scalable entropy coding. Results of an informal subjective test comparing the AVS audio codec with several dominant audio codecs are given, showing that the AVS audio codec is adequate for Hi-Fi audio applications.
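A minimal sketch of the scalability that bit-plane coding enables (illustrative only; AVS additionally applies context-dependent entropy coding to the planes, which is omitted here): coefficients are sent plane by plane from the most significant bit down, so truncating the stream after any plane still yields a coarser but valid reconstruction.

```python
def bit_planes(values, n_planes):
    """Split non-negative integers into bit planes, MSB plane first."""
    return [[(v >> p) & 1 for v in values]
            for p in range(n_planes - 1, -1, -1)]

def reconstruct(planes, n_planes):
    """Rebuild from however many MSB planes were received; missing
    low-order planes are treated as zero (graceful degradation)."""
    vals = [0] * len(planes[0])
    for k, plane in enumerate(planes):
        shift = n_planes - 1 - k
        vals = [v | (b << shift) for v, b in zip(vals, plane)]
    return vals

samples = [13, 7, 2, 11]              # 4-bit coefficient magnitudes
planes = bit_planes(samples, 4)
coarse = reconstruct(planes[:2], 4)   # only the two MSB planes arrive
exact = reconstruct(planes, 4)
```

Each extra plane halves the quantization step, so the decoder's rate-distortion point is chosen simply by where the stream is cut.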