Correlated steganography considers the case in which the cover work is chosen to be correlated with the covert message to be hidden. The advantage is that, at least theoretically, the number of bits needed to encode the hidden message can be considerably reduced, since it is based on the conditional entropy of the message given the cover, which may be much less than the entropy of the message itself. And if the number of bits needed to embed the hidden message is significantly reduced, the steganographic algorithm is more likely to be secure, i.e., undetectable. In this paper, we describe an example of correlated steganography. Specifically, we are interested in embedding a covert image into a cover image. Comparative experiments indicate that selecting a cover work that is correlated with the covert message can reduce the number of bits needed to represent the covert image below that needed by standard JPEG compression, provided the two images are sufficiently correlated.
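The bit-count argument can be made concrete with a toy calculation. The sketch below (an illustrative example, not the paper's construction; the joint distribution is an assumption) computes the entropy H(M) of a binary message and the conditional entropy H(M|C) given a strongly correlated binary cover:

```python
import math

def entropy(probs):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Assumed joint distribution P(message, cover) over binary symbols,
# chosen so the cover is strongly correlated with the message.
joint = {(0, 0): 0.45, (0, 1): 0.05, (1, 0): 0.05, (1, 1): 0.45}

# Marginal entropy of the message: H(M)
p_m = [sum(p for (m, c), p in joint.items() if m == v) for v in (0, 1)]
h_m = entropy(p_m)

# Conditional entropy via the chain rule: H(M|C) = H(M, C) - H(C)
h_mc = entropy(list(joint.values()))
p_c = [sum(p for (m, c), p in joint.items() if c == v) for v in (0, 1)]
h_m_given_c = h_mc - entropy(p_c)

print(f"H(M)   = {h_m:.3f} bits")         # 1.000
print(f"H(M|C) = {h_m_given_c:.3f} bits")  # ~0.469
```

With this correlation, a message bit costs roughly 0.47 bits instead of 1, which is exactly the gap the abstract exploits.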
This paper presents a bi-level image compression method based on chain codes and entropy coders. In addition, the proposed method includes an order estimation process to estimate the order of dependencies that may exist among the chain code symbols prior to the entropy coding stage. For each bi-level image, the method first obtains its chain code representation and then estimates its order of symbol dependencies. This order value is used to find the conditional and joint symbol probabilities corresponding to our newly defined Markov model. Our order estimation process is based on the Bayesian information criterion (BIC), a statistically grounded model selection technique that has proved to be a consistent order estimator. In our experiments, we show how the order estimation process helps achieve more efficient compression levels, providing comparisons against some of the most commonly used image compression standards, such as the Graphics Interchange Format (GIF), Joint Bi-level Image Experts Group (JBIG), and JBIG2.
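BIC-based order estimation can be illustrated generically (this is a sketch, not the paper's implementation; the synthetic binary sequence and alphabet size are assumptions): fit Markov models of increasing order by maximum likelihood and pick the order with the lowest BIC score.

```python
import math
import random
from collections import Counter, defaultdict

def markov_bic(seq, order, alphabet_size):
    """BIC score of a fixed-order Markov model fit to seq by maximum likelihood."""
    ctx_counts = defaultdict(Counter)
    for i in range(order, len(seq)):
        ctx_counts[tuple(seq[i - order:i])][seq[i]] += 1
    log_lik = 0.0
    for counts in ctx_counts.values():
        total = sum(counts.values())
        for c in counts.values():
            log_lik += c * math.log(c / total)
    n = len(seq) - order
    k = len(ctx_counts) * (alphabet_size - 1)  # free parameters
    return -2 * log_lik + k * math.log(n)      # BIC: fit plus complexity penalty

random.seed(0)
# Synthetic first-order chain over {0,1}: repeat the previous symbol w.p. 0.9
seq = [0]
for _ in range(5000):
    seq.append(seq[-1] if random.random() < 0.9 else 1 - seq[-1])

scores = {r: markov_bic(seq, r, 2) for r in range(4)}
best = min(scores, key=scores.get)
print("estimated order:", best)  # 1
```

Higher orders always fit at least as well, but the k·log(n) penalty makes BIC reject them unless the extra contexts genuinely improve the likelihood, which is why it is a consistent estimator.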
Many important developments in video compression technology have occurred during the past two decades. The block-based discrete cosine transform with motion compensation hybrid coding scheme has been widely employed by most available video coding standards, notably the ITU-T H.26x and ISO/IEC MPEG-x families and the video part of the China Audio Video coding Standard (AVS). The objective of this paper is to review the development of the four basic building blocks of the hybrid coding scheme, namely predictive coding, transform coding, quantization, and entropy coding, and to give theoretical analyses and summaries of the technological advances. We further analyze development trends and perspectives in video compression, highlighting open problems and research directions.
In this paper, we propose a high-throughput binary arithmetic coding architecture for CABAC (Context-Adaptive Binary Arithmetic Coding), one of the entropy coding tools used in the H.264/AVC main and high profiles. The full set of CABAC encoding functions, including binarization, context model selection, arithmetic encoding, and bits generation, is implemented in this proposal. Binarization and context model selection are implemented in a proposed binarizer, in which a FIFO packs the binarization results and outputs 4 bins per clock. Arithmetic encoding and bits generation are implemented in a four-stage pipeline capable of encoding 4 bins per clock. To improve processing speed, the context variable accesses and updates for the 4 bins are parallelized and the pipeline path is balanced. In addition, to handle the outstanding-bits issue, a bits packing and generation strategy for 4-bin parallel processing is proposed. After implementation in Verilog HDL and synthesis with Synopsys Design Compiler using 90 nm libraries, the design runs at a clock frequency of 250 MHz and occupies about 58K standard cells, 3.2 Kbits of register files, and 27.6 Kbits of ROM. It achieves a throughput of 1000 Mbins per second, sufficient for HDTV applications.
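The binarization step the architecture pipelines can be illustrated in software. The sketch below shows two binarization schemes CABAC draws on, unary and order-0 Exp-Golomb; it is a didactic model of the bin strings, not the proposed hardware:

```python
def unary(n):
    """Unary binarization: n ones followed by a terminating zero."""
    return "1" * n + "0"

def exp_golomb0(n):
    """Order-0 Exp-Golomb binarization of a non-negative integer:
    write n+1 in binary, prefixed by (length-1) zeros."""
    b = bin(n + 1)[2:]
    return "0" * (len(b) - 1) + b

for v in range(5):
    print(v, unary(v), exp_golomb0(v))
# 0 -> "0" / "1", 1 -> "10" / "010", 3 -> "1110" / "00100", ...
```

Each syntax element is mapped to such a bin string, and the arithmetic coder then consumes the bins one (or, in this architecture, four) per clock.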
Context-based adaptive variable length coding (CAVLC) is an entropy coding scheme employed in H.264/AVC for transform coefficient compression. CAVLC encodes the levels of nonzero coefficients and then indicates their positions with run_before, the number of zeros preceding each nonzero coefficient in scan order. In H.264, run_before is coded using lookup tables that depend on the number of zero-valued coefficients not yet coded. This paper presents an improved run_before coding method that encodes run_before using tables which take both zero-valued and nonzero-valued coefficients into consideration. Simulation results show that the proposed method yields an average 4.40% bit-rate reduction for run_before coding over the H.264 baseline profile with an intra-only coding structure, corresponding to a 0.52% saving in total bit rate on average.
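The run_before values themselves are easy to illustrate. The sketch below is a simplified model: it reports the zero run preceding every nonzero coefficient in reverse scan order (the order CAVLC signals them) and ignores the cases the standard infers rather than codes; the example block is an assumption.

```python
def run_befores(scanned):
    """Zero run immediately preceding each nonzero coefficient,
    reported in reverse scan order."""
    nz = [i for i, c in enumerate(scanned) if c != 0]
    runs, prev = [], -1
    for p in nz:
        runs.append(p - prev - 1)  # zeros between prev nonzero and this one
        prev = p
    return list(reversed(runs))

# Example 4x4 block after zig-zag scan (assumed values)
block = [7, 0, 0, -2, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0]
print(run_befores(block))  # [3, 0, 2, 0]
```

The paper's observation is that the table used to code each such run can condition on the nonzero coefficients already seen, not only on the remaining zeros.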
This brief presents an innovative high-speed context-adaptive variable-length encoder. First, a direct forward algorithm, rather than backward tracking, is proposed to compute the coding parameters. The forward computation without data reordering shortens the latency and the processing cycle. Based on this algorithm, a real-time chip is designed with a parallel structure and pipelined control, which can encode one codeword per cycle. The maximum processing time for one block is NC + 4 cycles, where NC is the number of nonzero coefficients. The output bit rate reaches 125 Mbit/s when implemented in 0.18-µm CMOS technology. The chip occupies about 15k gates, and the power dissipation is about 5.38 mW.
ISBN: (Print) 9783642120190
This paper presents ACRIC (Adaptive Cross-point Regions for lossless Image Compression), a scheme for losslessly encoding and decoding images, especially medical images. Developed from the earlier scheme CRIC (Cross-point Regions for lossless Image Compression), ACRIC introduces new ideas for building adaptive cross-point regions without fixed region lengths. Because Gray coding affects cross points whose grey values lie around grey levels 2^n, the data bits of cross points tend toward the same states within cross-point regions after Gray coding, which allows the cross-point probabilities to be better modeled in the modeling step of entropy coding.
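The effect of Gray coding near a grey level 2^n can be checked directly. In natural binary, pixel values 127 and 128 differ in all eight bits, while their Gray codes differ in exactly one; the snippet below is an illustrative check, not part of ACRIC:

```python
def gray(b):
    """Standard binary-reflected Gray code of a pixel value."""
    return b ^ (b >> 1)

# Pixel values straddling grey level 2^7 = 128:
print(f"{127:08b} -> {gray(127):08b}")  # 01111111 -> 01000000
print(f"{128:08b} -> {gray(128):08b}")  # 10000000 -> 11000000

# Hamming distance drops from 8 bits to 1 bit after Gray coding
print(bin(127 ^ 128).count("1"), bin(gray(127) ^ gray(128)).count("1"))  # 8 1
```

This is why the bit planes of cross points become far more predictable, and hence cheaper to entropy-code, after the Gray transform.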
ISBN: (Print) 9780819482945
This article introduces a lossless encoding scheme for interleaved input from a fixed number of binary sources, each characterized by a known probability value. The algorithm achieves compression performance close to the entropy while providing very fast encoding and decoding. It can efficiently exploit independent parallel decoding units and is demonstrated to have significant advantages over previous technologies in hardware implementations.
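The entropy target such a scheme approaches is straightforward to compute. Assuming round-robin interleaving of sources with known probabilities (the example values below are assumptions), the achievable rate is the mean per-source binary entropy:

```python
import math

def h2(p):
    """Binary entropy function in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Assumed per-source probabilities of emitting a 1
sources = [0.5, 0.1, 0.02]

# With round-robin interleaving, the compression bound is the mean
# per-source entropy -- well below 1 bit/symbol for skewed sources.
rate = sum(h2(p) for p in sources) / len(sources)
print(f"entropy bound: {rate:.3f} bits/symbol")  # ~0.537
```

The skewed sources are where the gains come from; a fair source (p = 0.5) is incompressible on its own.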
ISBN: (Print) 9781424479948
Lossy compression of hyperspectral and ultraspectral images is traditionally performed using 3D transform coding. This approach yields good performance, but its complexity and memory requirements make it unsuitable for onboard compression. In this paper we propose a low-complexity lossy compression scheme based on prediction, quantization, and rate-distortion optimization. The scheme employs coset codes coupled with the new concept of "informed quantization" and requires no entropy coding. The performance of the resulting algorithm is competitive with that of state-of-the-art 3D transform coding schemes, while its complexity is dramatically lower, making it suitable for onboard compression at high throughputs.
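The prediction-plus-quantization core can be sketched as a one-dimensional closed-loop (DPCM-style) coder: the encoder quantizes the prediction residual and uses the decoder-side reconstruction as the next predictor, so encoder and decoder never drift apart. This is a generic illustration with assumed sample values, not the paper's coset-code scheme:

```python
def dpcm_encode(samples, step):
    """Closed-loop predictive coding: quantize the prediction residual
    and keep the decoder-side reconstruction in the prediction loop."""
    indices, recon = [], []
    pred = 0
    for x in samples:
        q = round((x - pred) / step)  # quantized residual index
        indices.append(q)
        rec = pred + q * step         # what the decoder reconstructs
        recon.append(rec)
        pred = rec                    # previous reconstruction as predictor
    return indices, recon

samples = [100, 104, 109, 111, 108, 102]  # assumed pixel trace along a band
idx, rec = dpcm_encode(samples, step=4)
print(idx)  # small-magnitude indices after the first sample
print(rec)
```

After the first sample, the indices stay tiny because neighbouring samples are highly correlated; schemes like the paper's then signal these small indices cheaply without a full entropy coder.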
ISBN: (Print) 9781424442966
In this paper we present new models of rate-distortion curves for entropy-coded lattice codevectors. Exact models of both rate and distortion are proposed for the lattice Z^n with generalized Gaussian sources. The resulting precision with respect to experimental values improves on previously proposed models by 50%. In addition, an approximate model for general lattices is proposed for Gaussian sources; its precision is verified against experimental values and shown to reduce the estimation error from 10% to 4%.
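For the scalar case Z^1, the rate-distortion behaviour such models capture can be measured empirically: quantize a unit Gaussian source with step Δ, take the index entropy as the rate, and compare the mean squared error against the high-rate approximation Δ²/12. A simulation sketch (sample count and step size are assumptions):

```python
import math
import random
from collections import Counter

random.seed(1)
N, step = 100_000, 0.5
xs = [random.gauss(0.0, 1.0) for _ in range(N)]
idx = [round(x / step) for x in xs]

# Empirical rate: entropy of the quantizer indices (bits/sample)
counts = Counter(idx)
rate = -sum(c / N * math.log2(c / N) for c in counts.values())

# Empirical distortion: mean squared reconstruction error
mse = sum((x - i * step) ** 2 for x, i in zip(xs, idx)) / N

# High-rate theory predicts D ~ step^2/12 and R ~ h(X) - log2(step)
print(f"rate={rate:.3f} b/sample, mse={mse:.4f}, step^2/12={step * step / 12:.4f}")
```

At this step size the measured distortion sits very close to Δ²/12 and the rate close to h(X) − log₂Δ ≈ 3.05 bits, which is the regime the paper's exact models refine.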