The windowed Huffman algorithm is introduced. In this algorithm, the Huffman code tree is constructed from the probabilities of symbol occurrences within a finite history: a window buffer stores the most recently processed symbols. Experimental results show that, with a suitable window size, the codes generated by the windowed Huffman algorithm are shorter than those generated by the static Huffman algorithm, dynamic Huffman algorithms, and the residual Huffman algorithm, and their average length can even fall below the first-order entropy. Furthermore, three policies for adjusting the window size dynamically are discussed. The windowed Huffman algorithm with an adaptive-size window performs as well as, or better than, the one with an optimal fixed-size window. The new algorithm is well suited for online encoding and decoding of data with varying probability distributions.
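For intuition, here is a minimal Python sketch of the windowed idea (the data structures and the escape convention for unseen symbols are my own simplifications, not the paper's algorithm): the last W symbols live in a bounded deque, and the Huffman code is rebuilt from the window counts before each symbol is coded.

```python
# Minimal sketch of windowed Huffman coding (simplified; not the paper's
# exact algorithm). Symbol frequencies are counted over a sliding window of
# the last W symbols, and the code is rebuilt from those counts each step.
import heapq
from collections import Counter, deque

def huffman_code(freqs):
    """Build a Huffman code {symbol: bitstring} from a frequency map."""
    if len(freqs) == 1:                          # degenerate one-symbol case
        return {next(iter(freqs)): "0"}
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tie = len(heap)                              # tie-breaker for equal weights
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

def windowed_encode(symbols, window_size):
    """Encode each symbol with a code built from the previous W symbols."""
    window = deque(maxlen=window_size)           # finite-history buffer
    out = []
    for s in symbols:
        freqs = Counter(window)
        freqs.setdefault(s, 1)                   # crude escape for unseen symbols
        out.append(huffman_code(freqs)[s])
        window.append(s)                         # oldest symbol falls out
    return "".join(out)

print(windowed_encode("aababcabcd" * 5, window_size=8))
```

Rebuilding the tree from scratch for every symbol is of course naive; the decoder can stay synchronized because it maintains the identical window of already-decoded symbols, though the unseen-symbol escape above would need a proper convention in a real codec.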
This brief presents a new context-based adaptive variable length coding (CAVLC) architecture. The prototype is designed for the H.264/AVC baseline profile entropy coder. The proposed design saves area by reducing the size of the statistics buffer, and an arithmetic-table-elimination technique reduces the area further. Split VLC tables simplify bit-stream generation and also yield additional area savings. The proposed architecture is implemented on a Xilinx Virtex-II field-programmable gate array (20000fg6764). Simulation results show that the architecture can process common/quarter-common intermediate format (CIF/QCIF) frame sequences in real time at a core speed of 50 MHz with 6.85-K logic gates.
It is known that the combination of uniform quantization and entropy coding performs optimally at asymptotically high rates. However, due to the synchronization problems caused by variable length codes, this holds only in the absence of transmission errors. Thus, performance over noisy channels can be improved either by using fixed length codes with Lloyd-Max quantization or by adding end-of-line symbols. In this paper, computer simulations are used to compare these two schemes for a variety of error rates, line lengths, and amounts of error protection. It is found that the "uniform" scheme gives the best SNR for a given rate in the case of memoryless Gaussian and Laplacian sources, provided that its parameters are chosen sensibly. These results are expected to generalize to other sources.
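As a toy illustration of the "uniform quantization plus entropy coding" scheme under comparison (my own sketch, not the paper's simulation setup), the following estimates the rate of an ideal entropy coder as the empirical entropy of the quantizer indices for a memoryless Gaussian source:

```python
# Toy version of the "uniform quantization + entropy coding" reference point
# (illustrative; not the paper's simulation setup). The achievable rate is
# estimated by the empirical entropy of the quantizer indices.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)             # memoryless Gaussian source

step = 0.5                               # uniform quantizer step size
idx = np.round(x / step).astype(int)     # mid-tread uniform quantizer
xhat = idx * step                        # reconstruction levels

_, counts = np.unique(idx, return_counts=True)
p = counts / counts.sum()
rate = -(p * np.log2(p)).sum()           # bits/sample for an ideal entropy coder

mse = np.mean((x - xhat) ** 2)
print(f"rate ~ {rate:.2f} bits/sample, SNR ~ {10*np.log10(x.var()/mse):.1f} dB")
```

Sweeping `step` traces out the operational rate-distortion curve of this scheme; the channel-error side of the comparison (loss of synchronization after bit errors) is what the paper's simulations add on top.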
In this paper, we propose a high-throughput binary arithmetic coding architecture for CABAC (Context Adaptive Binary Arithmetic Coding), one of the entropy coding tools used in the H.264/AVC main and high profiles. The full set of CABAC encoding functions, including binarization, context model selection, arithmetic encoding, and bit generation, is implemented in this proposal. Binarization and context model selection are implemented in a proposed binarizer, in which a FIFO packs the binarization results and outputs 4 bins per clock. Arithmetic encoding and bit generation are implemented in a four-stage pipeline capable of encoding 4 bins per clock. To improve processing speed, context variable access and update for the 4 bins are parallelized and the pipeline path is balanced. In addition, to handle the outstanding-bits issue, a bit packing and generation strategy for 4-bin parallel processing is proposed. Implemented in Verilog HDL and synthesized with Synopsys Design Compiler using 90-nm libraries, the design runs at a clock frequency of 250 MHz and occupies about 58 K standard cells, 3.2 Kbits of register files, and 27.6 Kbits of ROM. A throughput of 1000 Mbins per second is achieved, sufficient for HDTV applications.
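The heart of any CABAC-style coder is recursive interval subdivision. A simplified, non-normative sketch of a single encoding step is shown below; real CABAC replaces the multiplication with a rangeTabLPS lookup driven by context states, and the omitted renormalization step is exactly where the outstanding-bits issue mentioned above arises.

```python
# Simplified, non-normative sketch of one binary arithmetic coding step in
# the spirit of CABAC's recursive interval subdivision. Real CABAC replaces
# the multiplication with a rangeTabLPS lookup indexed by a context state.
def encode_bin(low, rng, bit, p_lps, lps):
    """Split the interval [low, low + rng) according to the LPS probability."""
    r_lps = max(1, int(rng * p_lps))     # sub-range of the less probable symbol
    if bit == lps:
        low += rng - r_lps               # take the LPS sub-interval
        rng = r_lps
    else:
        rng -= r_lps                     # take the MPS sub-interval
    # Renormalization (emitting settled bits and rescaling rng) is omitted;
    # it is where "outstanding bits" arise, and it is what a 4-bin/clock
    # pipeline must resolve for 4 bins in parallel.
    return low, rng

low, rng = 0, 510                        # CABAC-style 9-bit initial range
for bit in (0, 1, 0, 0):
    low, rng = encode_bin(low, rng, bit, p_lps=0.2, lps=1)
print(low, rng)
```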
Entropy coding is a widely used technique for lossless data compression. Entropy coding schemes that support direct access to the encoded stream have been investigated in recent years. However, all prior schemes require auxiliary space to support direct access. This paper proposes a rearranging method for prefix codes that supports a certain level of direct access to the encoded stream without requiring additional data space. An efficient decoding algorithm based on lookup tables is then proposed. Simulation results show that, when the encoded stream allows no additional space, the number of bits read per access with the proposed method is more than two orders of magnitude smaller than with the conventional method. The alternative solution, in contrast, consumes at least one more bit per symbol on average than the proposed method to support direct access. This indicates that the proposed scheme achieves a good trade-off between space usage and access performance. In addition, if a small amount of additional storage space is allowed (approximately 0.057% in the simulation), the number of bits read per access in our proposal is reduced by 90%.
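The lookup-table decoding idea is standard and easy to sketch (this is a generic illustration; the paper's rearranging method itself is not reproduced here): index a table with the next maxlen bits of the stream and read off the symbol and its code length.

```python
# Generic lookup-table decoder for a prefix code (a standard technique; the
# paper's rearranging method is not reproduced here). The table is indexed
# by the next maxlen bits and returns (symbol, code length).
def build_table(code):                   # code: {symbol: bitstring}
    maxlen = max(len(b) for b in code.values())
    table = [None] * (1 << maxlen)
    for sym, bits in code.items():
        pad = maxlen - len(bits)
        base = int(bits, 2) << pad       # all table entries sharing this prefix
        for i in range(1 << pad):
            table[base + i] = (sym, len(bits))
    return table, maxlen

def decode(bitstream, table, maxlen, n_symbols):
    out, pos = [], 0
    padded = bitstream + "0" * maxlen    # padding so the last lookup is safe
    for _ in range(n_symbols):
        sym, length = table[int(padded[pos:pos + maxlen], 2)]
        out.append(sym)
        pos += length                    # advance by the true code length
    return out

code = {"a": "0", "b": "10", "c": "11"}
table, maxlen = build_table(code)
print(decode("01011", table, maxlen, 3))   # -> ['a', 'b', 'c']
```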
Recently, deep learning-based image compression has made significant progress and has achieved better rate-distortion (R-D) performance than the latest traditional method, H.266/VVC, in both the MS-SSIM metric and the more challenging PSNR metric. However, a major problem is that the complexity of many leading learned schemes is too high. In this paper, we propose an efficient and effective image coding framework that achieves R-D performance similar to the state of the art at lower complexity. First, we develop an improved multi-scale residual block (MSRB) that expands the receptive field and captures global information more efficiently, further reducing the spatial correlation of the latent representations. Second, an importance scaling network is introduced to scale the latents directly, achieving content-adaptive bit allocation without sending side information; this is more flexible than previous importance map methods. Third, we apply a post-quantization filter (PQF) to reduce quantization error, motivated by the Sample Adaptive Offset (SAO) filter in video coding. Moreover, our experiments show that system performance is less sensitive to decoder complexity. We therefore design an asymmetric paradigm in which the encoder employs three stages of MSRBs to improve learning capacity, whereas the decoder uses only one MSRB stage, which reduces decoder complexity while still yielding satisfactory performance. Experimental results show that, compared to the state-of-the-art method, the encoding and decoding times of the proposed method are about 17 times faster, while R-D performance is reduced by only about 1% on both the Kodak and Tecnick-40 datasets, which is still better than H.266/VVC (4:4:4) and other leading learning-based methods. Our source code is publicly available at https://***/fengyurenpingsheng.
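As a rough sketch of what a multi-scale residual block can look like (my own PyTorch reading of the general idea, not the paper's exact MSRB), parallel convolutions at different kernel sizes enlarge the receptive field, and a residual connection keeps the block easy to train:

```python
# Hypothetical multi-scale residual block sketch (not the paper's exact
# MSRB): two parallel convolution branches with different kernel sizes see
# different spatial scales, and a residual connection preserves detail.
import torch
import torch.nn as nn

class MSRBSketch(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.branch3 = nn.Conv2d(channels, channels // 2, 3, padding=1)
        self.branch5 = nn.Conv2d(channels, channels // 2, 5, padding=2)
        self.fuse = nn.Conv2d(channels, channels, 1)   # merge the two scales
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        y = torch.cat([self.act(self.branch3(x)),
                       self.act(self.branch5(x))], dim=1)
        return x + self.fuse(y)                        # residual connection

x = torch.randn(1, 64, 32, 32)
print(MSRBSketch(64)(x).shape)    # torch.Size([1, 64, 32, 32])
```

The asymmetric paradigm described above would stack three such stages in the encoder but only one in the decoder.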
In this paper, we propose to employ predictive coding for lossy compression of synthetic aperture radar (SAR) raw data. We exploit the known result that a blockwise normalized SAR raw signal is a Gaussian stationary process in order to design an optimal decorrelator for this signal. We show that, due to the statistical properties of the SAR signal, an along-range linear predictor with few taps is able to effectively capture most of the raw signal correlation. The proposed predictive coding algorithm, which performs quantization of the prediction error, optionally followed by entropy coding, exhibits a number of advantages, and notably an interesting performance/complexity trade-off, with respect to other techniques such as flexible block adaptive quantization (FBAQ) or methods based on transform coding; fractional output bit-rates can also be achieved in the entropy-constrained mode. Simulation results on real-world SIR-C/X-SAR as well as simulated raw and image data show that the proposed algorithm outperforms FBAQ in terms of SNR, at a computational cost compatible with modern SAR systems.
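The core mechanism is classic closed-loop DPCM. A first-order Python sketch on a synthetic AR(1) source is given below (illustrative only; the paper designs an optimal along-range predictor for blockwise-normalized SAR raw data):

```python
# First-order DPCM sketch of the predictive coding idea (illustrative; the
# paper derives an optimal along-range predictor for blockwise-normalized
# SAR raw data, which behaves as a Gaussian stationary process).
import numpy as np

def dpcm_encode(x, a, step):
    """Predict from the previous reconstruction, quantize the prediction error."""
    prev, idx, rec = 0.0, np.empty(len(x), int), np.empty(len(x))
    for n in range(len(x)):
        pred = a * prev                             # one-tap linear predictor
        idx[n] = int(round((x[n] - pred) / step))   # uniform error quantizer
        rec[n] = pred + idx[n] * step               # closed-loop reconstruction
        prev = rec[n]
    return idx, rec                 # idx would feed an optional entropy coder

# AR(1) Gaussian process as a stand-in for a correlated raw signal
rng = np.random.default_rng(1)
x = np.empty(10_000)
x[0] = rng.normal()
for n in range(1, len(x)):
    x[n] = 0.9 * x[n - 1] + np.sqrt(1 - 0.9**2) * rng.normal()

idx, rec = dpcm_encode(x, a=0.9, step=0.3)
print("SNR (dB):", round(10 * np.log10(x.var() / np.mean((x - rec) ** 2)), 1))
```

Entropy coding the residual indices is what makes fractional output bit-rates possible in the entropy-constrained mode.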
We demonstrate high spectral efficiency transmission over 549 km of field-deployed single-mode fiber using probabilistically shaped 144QAM. We achieved 41.5 Tb/s over the C-band at a spectral efficiency of 9.02 b/s/Hz using 32-Gbaud channels at a channel spacing of 33.33 GHz, and 38.1 Tb/s at a spectral efficiency of 8.28 b/s/Hz using 48-Gbaud channels at a channel spacing of 50 GHz. To the best of our knowledge, these are the highest total capacities and spectral efficiencies reported in a metro field environment using the C-band only. In high spectral efficiency transmission, it is necessary to optimize back-to-back performance in order to maximize the link loss margin. Our results are enabled by the joint optimization of constellation shaping and coding overhead to minimize the gap to the Shannon capacity, transmitter- and receiver-side digital backpropagation, signal clipping optimization, and I/Q imbalance compensation.
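The shaping ingredient can be illustrated in a few lines (a generic sketch, not the experiment's transceiver): drawing constellation amplitudes from a Maxwell-Boltzmann distribution trades entropy, and hence rate, against average power, which is what lets shaped 144QAM operate close to the Shannon limit at a tunable spectral efficiency.

```python
# Generic sketch of probabilistic shaping (not the experiment's transceiver):
# PAM amplitudes are used with a Maxwell-Boltzmann distribution
# p(a) ~ exp(-lam * a**2), trading entropy (rate) against average power.
import numpy as np

def mb_distribution(amps, lam):
    w = np.exp(-lam * amps.astype(float) ** 2)
    return w / w.sum()

amps = np.arange(-11, 12, 2)                 # 12-PAM; 144QAM = 12-PAM x 12-PAM
for lam in (0.0, 0.01, 0.03):
    p = mb_distribution(amps, lam)
    H = -(p * np.log2(p)).sum()              # entropy per PAM symbol
    P = (p * amps ** 2).sum()                # average power
    print(f"lam={lam:.2f}: H={H:.2f} bit/PAM symbol, mean power={P:.1f}")
```

Two shaped PAM symbols form one 144QAM symbol, so the per-QAM entropy is 2H; tuning the shaping parameter together with the FEC overhead is the "joint optimization of constellation shaping and coding overhead" referred to above.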
This paper describes a low-complexity, high-efficiency, lossy-to-lossless 3D image coding system. The proposed system is based on a novel probability model for the symbols that are emitted by bitplane coding engines. This probability model uses partially reconstructed coefficients from previous components together with a mathematical framework that captures the statistical behavior of the image. An important aspect of this mathematical framework is its generality, which makes the proposed scheme suitable for different types of 3D images. The main advantages of the proposed scheme are competitive coding performance, low computational load, very low memory requirements, straightforward implementation, and simple adaptation to most sensors.
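The lossy-to-lossless property comes from the bitplane decomposition itself, which a toy example makes concrete (generic illustration, not the paper's coder): truncating the stream after any plane yields a coarser reconstruction, and keeping all planes is lossless.

```python
# Toy illustration of lossy-to-lossless bitplane coding (generic; not the
# paper's coder): coefficients are transmitted one bitplane at a time, so
# truncating after any plane gives a coarser reconstruction, and keeping
# all planes reproduces the coefficients exactly.
import numpy as np

coeffs = np.array([37, 5, 0, 18])             # toy non-negative coefficients
nplanes = int(coeffs.max()).bit_length()

planes = [(coeffs >> b) & 1 for b in range(nplanes - 1, -1, -1)]  # MSB first

recon = np.zeros_like(coeffs)
for k, plane in enumerate(planes):
    recon = (recon << 1) | plane              # append the next bitplane
    print(f"after plane {k + 1}:", recon << (nplanes - 1 - k))
```

The paper's contribution sits on top of this skeleton: a probability model, fed by partially reconstructed coefficients from previous components, that drives the entropy coder for each emitted bit.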
Optimal context quantizers for minimum conditional entropy can be constructed by dynamic programming in the probability simplex space. The main difficulty, operationally, is the resulting complex quantizer mapping function in the context space, in which the conditional entropy coding is conducted. To overcome this difficulty, we propose new algorithms for designing context quantizers in the context space based on the multiclass Fisher discriminant and the kernel Fisher discriminant (KFD). In particular, the KFD can describe linearly nonseparable quantizer cells by projecting input context vectors onto a high-dimensional curve, in which these cells become better separable. The new algorithms outperform the previous linear Fisher discriminant method for context quantization. They approach the minimum empirical conditional entropy context quantizer designed in the probability simplex space, but with a practical implementation that employs a simple scalar quantizer mapping function rather than a large lookup table.
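For reference, the linear-discriminant baseline that the new algorithms improve on is easy to sketch (an illustrative toy with made-up data): project the context vectors onto a discriminant axis, then scalar-quantize the projection into a handful of cells and measure the conditional entropy.

```python
# Toy sketch of the linear-discriminant baseline for context quantization
# (made-up data; the paper's contribution is the multiclass and kernel
# Fisher variants). Context vectors are projected onto a discriminant axis,
# and a simple scalar quantizer then forms the context cells.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
contexts = rng.normal(size=(5000, 4))            # toy 4-D modeling contexts
bits = (contexts.sum(axis=1) + rng.normal(size=5000) > 0).astype(int)

z = LinearDiscriminantAnalysis(n_components=1).fit_transform(contexts, bits).ravel()

edges = np.quantile(z, np.linspace(0, 1, 9)[1:-1])  # 8 cells via scalar quantization
cells = np.digitize(z, edges)

H = 0.0                                          # H(bit | quantized context)
for c in range(8):
    sel = bits[cells == c]
    if len(sel) == 0:
        continue
    p1 = sel.mean()
    if 0 < p1 < 1:
        H -= len(sel) / len(bits) * (p1 * np.log2(p1) + (1 - p1) * np.log2(1 - p1))
print(f"conditional entropy ~ {H:.3f} bits per coded bit")
```

The scalar quantizer on the projection is exactly the "simple mapping function" the abstract contrasts with a large lookup table in the context space.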