Block Truncation Coding (BTC) is a simple and fast image compression algorithm that achieves a constant bit rate of 2.0 bits per pixel. The method is, however, suboptimal. In the present paper we propose a modification of BTC in which the compression ratio is improved by coding the quantization data and the bit plane with arithmetic coding under an adaptive modelling scheme. The results compare favorably with other BTC variants. The bit rate for the test image Lena is 1.53 bits per pixel with a mean square error of 16.51.
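For reference, a minimal sketch of the baseline BTC block quantizer that this scheme builds on; the 4x4 block size and rounding are common conventions, and the paper's arithmetic-coding stage for the bit plane and quantization data is not reproduced here:

```python
import numpy as np

def btc_encode_block(block):
    """Baseline BTC for one 4x4 block: a 16-bit bit plane plus two
    8-bit reconstruction levels, i.e. 32 bits / 16 pixels = 2.0 bpp."""
    m = block.size
    mu, sigma = block.mean(), block.std()
    bitplane = block > mu                 # one bit per pixel
    q = int(bitplane.sum())               # pixels above the block mean
    if q in (0, m):                       # flat block: both levels = mean
        a = b = mu
    else:                                 # levels preserve mean and variance
        a = mu - sigma * np.sqrt(q / (m - q))
        b = mu + sigma * np.sqrt((m - q) / q)
    return bitplane, int(round(a)), int(round(b))

def btc_decode_block(bitplane, a, b):
    return np.where(bitplane, b, a)
```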
High-order entropy coding (HOEC) has the potential to provide higher compression ratios than the commonly used zero-order entropy coding (ZOEC) approaches. However, serious implementation difficulties severely limit the practical value of HOEC for grayscale image compression. We examine the bit-plane decomposition (BPD) representation as a simple alternative that bypasses some of the implementation difficulties of HOEC. We show, however, that BPD introduces undesired coding overhead when used to represent grayscale images. We therefore propose a new binary image representation, called magnitude-based binary decomposition (MBBD), which avoids any coding overhead when used to represent grayscale images. MBBD thus bypasses the implementation difficulties of HOEC without the drawbacks of BPD. We present numerical experiments that verify the theoretical analysis of the BPD and MBBD representations. In addition, our experiments demonstrate that MBBD-HOEC yields better results than ZOEC for lossy image compression and is also very effective for progressive image transmission.
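As a point of reference, plain bit-plane decomposition is straightforward to implement. The sketch below shows BPD only; the construction of the proposed MBBD representation is specific to the paper and is not reproduced:

```python
import numpy as np

def bit_plane_decompose(img, bits=8):
    """Split a grayscale image into `bits` binary images, MSB plane first.
    Each plane can then be fed to a binary (high-order) entropy coder."""
    return [((img >> k) & 1).astype(np.uint8) for k in range(bits - 1, -1, -1)]

def bit_plane_compose(planes):
    """Inverse of the decomposition: weight each plane by its bit position."""
    bits = len(planes)
    return sum(p.astype(np.uint32) << (bits - 1 - k) for k, p in enumerate(planes))
```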
The worldwide commercialization of fifth-generation (5G) wireless networks is pushing toward the deployment of immersive and high-quality VR-based telepresence systems. In such systems, a 3D object is generally digitized and represented as a point cloud. However, realistically reconstructed 3D point clouds typically contain thousands to millions of points, which amounts to a huge volume of data. Efficient point cloud compression is therefore essential to enable emerging immersive 3D visual communication. In point cloud compression, the graph transform is an effective tool for compacting the energy of the color signals on the voxels in 3D space. However, since the eigenbasis of the graph transform is obtained from the graph Laplacian of the constructed graph, the corresponding eigenvalues are related to the probability distributions of the transformed coefficients, which in turn affect the efficiency of entropy coding of the quantized coefficients. To overcome this interdependence between the graph transform and entropy coding, this paper proposes a jointly optimized graph transform and entropy coding scheme for compressing point clouds. First, we modify the traditional graph Laplacian, constructed on the geometry of the point cloud, by multiplying it by a color-signal-related matrix. Second, we theoretically derive the expected rate and distortion induced by quantization of the graph-transformed coefficients. Finally, we propose a Lagrangian-multiplier-based algorithm to derive the optimal scaling matrix for a given quantization parameter. Experimental results demonstrate that the proposed joint graph transform and entropy coding scheme significantly outperforms its transform-coding-based counterparts in compressing the color attributes of point clouds.
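For orientation, the sketch below shows the conventional geometry-only graph Fourier transform of a color channel over a small block of points. The paper's contributions, the color-signal-related weighting of the Laplacian and the rate-distortion-optimized scaling matrix, are not reproduced; the Gaussian edge weights and the parameter sigma are common baseline choices, not necessarily the paper's:

```python
import numpy as np

def gft_color(points, colors, sigma=1.0):
    """Baseline graph Fourier transform: build a geometry-based graph,
    take the Laplacian eigenbasis, and transform a color channel."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma**2))     # Gaussian weights from distances
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=1)) - W         # combinatorial graph Laplacian
    evals, evecs = np.linalg.eigh(L)       # eigenvectors = GFT basis
    coeffs = evecs.T @ colors              # transform coefficients to quantize
    return evals, coeffs
```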
ISBN (print): 9780780397361
Entropy coding based on k-th order Exp-Golomb (EGk) codes is a key part of the new AVS video coding standard issued by the Audio Video coding Standard Workgroup of China. An efficient design based on a code-value compact memory structure (CVCMS) is proposed to reduce computational complexity and memory requirements. Only 789 bytes of memory are required for the variable length coding (VLC) tables in CVCMS, about 5.92% of that used by the reference software. Furthermore, storing code values in memory reduces the computational complexity of EGk coding. Simulation results show that the proposed entropy coding for the AVS video coding standard reduces the computational cost by 26.48%.
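The EGk codeword construction itself is standard: the high-order part of the value is coded with a 0th-order Exp-Golomb prefix and the k low-order bits are appended verbatim. The sketch below shows that construction only; the paper's CVCMS table layout is its own contribution and is not reproduced here:

```python
def exp_golomb_k(n, k):
    """k-th order Exp-Golomb codeword (as a bit string) for unsigned n:
    EG0 code of n >> k, followed by the k least-significant bits of n."""
    info = bin((n >> k) + 1)[2:]           # binary of (high part + 1)
    prefix = "0" * (len(info) - 1)         # leading zeros signal the length
    suffix = format(n & ((1 << k) - 1), f"0{k}b") if k else ""
    return prefix + info + suffix

# exp_golomb_k(0, 0) -> "1", exp_golomb_k(3, 0) -> "00100",
# exp_golomb_k(2, 1) -> "0100"
```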
Context-adaptive binary arithmetic coding (CABAC) is a method of entropy coding first introduced in H.264/AVC and now used in the newest standard, High Efficiency Video Coding (HEVC). While it provides high coding efficiency, the data dependencies in H.264/AVC CABAC make it challenging to parallelize and thus limit its throughput. Accordingly, during the standardization of entropy coding for HEVC, both coding efficiency and throughput were considered. This paper highlights the key techniques that were used to enable HEVC to potentially achieve higher throughput while delivering coding gains relative to H.264/AVC. These techniques include reducing context-coded bins, grouping bypass bins, grouping bins with the same context, reducing context-selection dependencies, reducing total bins, and reducing parsing dependencies. It also describes reductions to memory requirements that benefit both throughput and implementation costs. Proposed and adopted techniques up to the draft international standard (test model HM-8.0) are discussed. In addition, analysis and simulation results are provided to quantify the throughput improvements and memory reduction compared with H.264/AVC. In HEVC, the maximum number of context-coded bins is reduced by 8x, and the context memory and line buffer are reduced by 3x and 20x, respectively. This paper illustrates that accounting for implementation cost when designing video coding algorithms can result in a design that enables higher processing speed and lower hardware costs while still delivering high coding efficiency.
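The throughput distinction between the two bin types can be seen in a toy model: a context-coded bin needs an adaptive probability read-modify-write on every bin, whereas a bypass bin uses a fixed 1/2 interval split with no state, which is why grouping bypass bins pays off in hardware. The exact-fraction coder below is purely illustrative and is not the normative, renormalizing CABAC engine:

```python
import math
from fractions import Fraction

class ToyBinCoder:
    """Interval-narrowing coder distinguishing context and bypass bins."""
    def __init__(self):
        self.low, self.width = Fraction(0), Fraction(1)

    def context_bin(self, bit, ctx):
        # Adaptive split: the probability state is updated after every
        # bin, a serial dependency that limits parallelism.
        p1 = ctx["p1"]
        if bit:
            self.width *= p1
        else:
            self.low += self.width * p1
            self.width *= 1 - p1
        ctx["p1"] = p1 + Fraction(1, 16) * ((1 if bit else 0) - p1)

    def bypass_bin(self, bit):
        # Fixed 1/2 split and no state update: runs of bypass bins can
        # be processed together in a single cycle.
        self.width /= 2
        if not bit:
            self.low += self.width

    def finish(self):
        # Emit enough bits of a dyadic fraction inside [low, low + width).
        n = 1
        while Fraction(1, 2**n) > self.width / 2:
            n += 1
        return format(math.ceil(self.low * 2**n), f"0{n}b")
```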
Run-length coding (RLC) and variable-length coding (VLC) are widely used techniques for lossless data compression. A high-speed entropy coding system using these two techniques is considered for digital high-definition television (HDTV) applications. Traditionally, VLC decoding is implemented through a tree-searching algorithm as the input bits are received serially. For HDTV applications, it is very difficult to implement a real-time VLC decoder of this kind because of the very high data rate required. In this paper, we introduce a parallel-structured VLC decoder which decodes each codeword in one clock cycle regardless of its length. The required clock rate of the decoder is thus lower, and parallel processing architectures become easy to adopt in the entropy coding system. The parallel entropy coder and decoder will be implemented in two experimental prototype chips designed to encode and decode more than 52 million samples/s. Related system issues, such as the synchronization of variable-length codewords and error concealment, are also discussed.
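The software analogue of such a one-codeword-per-cycle decoder is a lookup table indexed by a fixed-length window of the bitstream: one lookup yields the symbol and the codeword length, and the window is then barrel-shifted by that length. A small sketch under an assumed toy code table, not the HDTV system's actual tables:

```python
def build_lut(code_table, max_len):
    """Expand a prefix-free VLC table into a 2^max_len lookup table so the
    decoder emits one symbol per lookup, regardless of codeword length."""
    lut = [None] * (1 << max_len)
    for symbol, code in code_table.items():
        pad = max_len - len(code)
        base = int(code, 2) << pad
        for i in range(1 << pad):          # all windows starting with this code
            lut[base + i] = (symbol, len(code))
    return lut

def vlc_decode(bitstring, code_table):
    max_len = max(len(c) for c in code_table.values())
    lut = build_lut(code_table, max_len)
    out, pos = [], 0
    while pos < len(bitstring):
        window = bitstring[pos:pos + max_len].ljust(max_len, "0")
        symbol, length = lut[int(window, 2)]   # one lookup -> one codeword
        out.append(symbol)
        pos += length                          # barrel-shift by decoded length
    return out

# Example: vlc_decode("01011", {"a": "0", "b": "10", "c": "11"})
# returns ['a', 'b', 'c'].
```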
ISBN (print): 9780769533049
The aim of this research is to characterize the firing pattern of neurons in CA1 of dysfunctional-memory mice via entropy coding. Spike trains were recorded at CA1 of hippocampus slices for two groups: a senescence-accelerated-prone 8 (SAM-P/8) mice group and a normal control mice group. Shannon entropy based on the inter-spike-interval (ISI) histogram was used to measure the code of neuron activity at CA1 of the hippocampus slice for the two groups (10 samples per group). The difference between the entropy codes of the two groups was assessed with a t-test. The results show that the Shannon entropy for the SAM-P/8 group, 9.30±0.44 bits, is markedly greater than that for the normal group, 7.26±0.33 bits. The conclusion is that the higher entropy value for the SAM-P/8 group reveals a lower information level than in the normal group, which suggests a dysfunction of synaptic plasticity in senescence-accelerated-prone mice. These results may support research on memory dysfunction from the viewpoint of neural coding patterns.
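The entropy measure used here is the usual Shannon entropy H = -Σᵢ pᵢ log₂ pᵢ computed over the ISI histogram. A minimal sketch; the bin width and histogram range are assumptions, since the paper's binning is not given in the abstract:

```python
import numpy as np

def isi_entropy_bits(spike_times_ms, bin_width_ms=5.0):
    """Shannon entropy (in bits) of the inter-spike-interval histogram:
    H = -sum_i p_i * log2(p_i) over the occupied bins."""
    isis = np.diff(np.sort(np.asarray(spike_times_ms, dtype=float)))
    edges = np.arange(0.0, isis.max() + bin_width_ms, bin_width_ms)
    counts, _ = np.histogram(isis, bins=edges)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())
```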
The advantage of employing high-order conditional probabilities for entropy coding has been indicated in Shannon's information theory, and some studies have shown that the improvement in coding efficiency from high-order conditional entropy coding is substantial. Nevertheless, high-order conditional entropy coding has not been practical because of its high complexity and the lack of hardware to extract the conditioning state efficiently. In this paper, we adopt the recently developed incremental-tree-extension technique to design the conditional tree for high-order conditional entropy coding. To make a high-speed conditional entropy coder feasible, we introduce several key innovations in complexity reduction and hardware architecture. For complexity reduction, we develop two techniques: code table reduction and nonlinear quantization of conditioning pixels. For hardware architecture, we propose a pattern-matching technique for fast conditioning-state extraction and a multistage pipelined structure to handle the case of a large number of conditioning pixels. Using these complexity reduction techniques and hardware structures, we demonstrate that it is possible to implement practical high-order conditional entropy codecs using current low-cost very-large-scale integration (VLSI) technology.
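The flavor of the conditioning-state idea can be sketched in a few lines: quantizing each conditioning pixel nonlinearly to a couple of bits collapses the number of conditioning states, and the packed state then indexes a table of conditional symbol counts that drives the entropy coder. The neighbor set, thresholds, and bit widths below are illustrative assumptions, not the paper's design:

```python
import numpy as np

def quantize_ctx(v, thresholds=(8, 32, 96)):
    """Nonlinear quantization of one conditioning pixel to 2 bits;
    coarse high-value bins keep the state count small."""
    return int(np.searchsorted(thresholds, v, side="right"))

def conditioning_state(img, r, c):
    """Pack three quantized causal neighbors (W, N, NW) into a 6-bit state."""
    w  = img[r, c - 1] if c > 0 else 0
    n  = img[r - 1, c] if r > 0 else 0
    nw = img[r - 1, c - 1] if r > 0 and c > 0 else 0
    return (quantize_ctx(w) << 4) | (quantize_ctx(n) << 2) | quantize_ctx(nw)

def conditional_counts(img, levels=256):
    """Per-state symbol histograms; normalized rows give the conditional
    probabilities used by a high-order arithmetic coder."""
    counts = np.zeros((64, levels), dtype=np.int64)   # 64 = 4**3 states
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            counts[conditioning_state(img, r, c), img[r, c]] += 1
    return counts
```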
A novel secure arithmetic coding scheme based on a nonlinear dynamic filter (NDF) with changeable coefficients is proposed in this paper. The NDF is employed to build a pseudorandom number generator (NDF-PRNG), and its coefficients are derived from the plaintext for higher security. During the encryption process, the mapping interval in each iteration of arithmetic coding (AC) is decided by both the plaintext and the initial values of the NDF, while data compression with entropy optimality is achieved simultaneously. This modification of the arithmetic coding methodology, which also provides security, is easy to integrate into the major international image and video standards as the final entropy coding stage without changing the existing framework. Theoretical analysis and numerical simulations on both static and adaptive models show that the proposed encryption algorithm achieves high security without loss of compression efficiency or additional computational burden with respect to standard AC. (C) 2009 Elsevier B.V. All rights reserved.
We study the quantization problem for certain types of jump processes. The probabilities for the number of jumps are assumed to be bounded by Poisson weights; otherwise, jump positions and increments can be rather generally distributed and correlated. We show in particular that in many cases the entropy coding error and the quantization error have distinct rates. Finally, we investigate the quantization problem for the special case of R^d-valued compound Poisson processes. (C) 2008 Elsevier Inc. All rights reserved.
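For orientation, the two quantities being compared are usually defined as follows in the functional quantization literature (e.g. in the style of Luschgy and Pagès; the paper's own notation may differ). The quantization error constrains the codebook size, while the entropy coding error only constrains the entropy of the quantizer output:

```latex
% Quantization error of order r under a codebook-size constraint:
e_N(X) \;=\; \inf_{\mathcal{C}\subset E,\;|\mathcal{C}|\le N}
  \Bigl(\mathbb{E}\,\min_{c\in\mathcal{C}}\|X-c\|^{r}\Bigr)^{1/r}
% Entropy coding error: the size constraint is replaced by an entropy
% constraint on the quantizer output q(X):
d_s(X) \;=\; \inf_{H(q(X))\le s}
  \Bigl(\mathbb{E}\,\|X-q(X)\|^{r}\Bigr)^{1/r}
```

Distinct rates then mean that replacing the log-codebook-size budget log N by the entropy budget s changes the asymptotic order of the error.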