"Virtual Sliding Window" algorithm presented in this paper is an adaptive mechanism for estimating the probability of ones at the output of binary non-stationary sources. It is based on "Imaginary slidi...
详细信息
ISBN (print): 1424402158
"Virtual Sliding Window" algorithm presented in this paper is an adaptive mechanism for estimating the probability of ones at the output of binary non-stationary sources. It is based on "Imaginary sliding window" idea proposed by ***. The proposed algorithm was used as an alternative adaptation mechanism in Context-Based Adaptive binary arithmetic coding (CABAC) - an entropy coding scheme of H.264/AVC standard for video compression. The "virtual sliding window" algorithm was integrated into an open-source codec supporting H.264/AVC standard. Comparison of the "virtual sliding window" algorithm with the original adaptation mechanism from CABAC is presented. Test results for standard video sequences are included. These results indicate that using the proposed algorithm improves rate-distortion performance compared to the original CABAC adaptation mechanism. Besides improvement in rate-distortion performances the "Virtual Sliding Window" algorithm has one more advantage. CABAC uses a finite state machine (FSM) for estimation of the probability of ones at the output of a binary source. Transitions for FSM are defined by a table stored in memory. The disadvantage of CABAC consists infrequent reference to this table (one time for every binary symbol encoding), which is critical for DSP implementation. The "Virtual Sliding Window" algorithm allows to avoid using the table of transitions.
In this paper, we propose a high-throughput binary arithmetic coding architecture for CABAC (Context-based Adaptive Binary Arithmetic Coding), one of the entropy coding tools used in the H.264/AVC main and high profiles. The full set of CABAC encoding functions, including binarization, context model selection, arithmetic encoding, and bit generation, is implemented in this proposal. Binarization and context model selection are implemented in a proposed binarizer, in which a FIFO is used to pack the binarization results and output 4 bins per clock. Arithmetic encoding and bit generation are implemented in a four-stage pipeline with an encoding capability of 4 bins/clock. To improve processing speed, the context variable accesses and updates for the 4 bins are parallelized and the pipeline path is balanced. In addition, to handle the outstanding-bits issue, a bit packing and generation strategy for 4-bin parallel processing is proposed. After implementation in Verilog-HDL and synthesis with Synopsys Design Compiler using 90 nm libraries, the design works at a clock frequency of 250 MHz and occupies about 58K standard cells, 3.2 Kbit of register files, and 27.6 Kbit of ROM. A throughput of 1000 Mbins per second is achieved, sufficient for HDTV applications.
Context-Based Adaptive Binary Arithmetic Coding (CABAC) as a normative part of the new ITU-T/ISO/IEC standard H.264/AVC for video compression is presented. By combining an adaptive binary arithmetic coding technique with context modeling, a high degree of adaptation and redundancy reduction is achieved. The CABAC framework also includes a novel low-complexity method for binary arithmetic coding and probability estimation that is well suited for efficient hardware and software implementations. CABAC significantly outperforms the baseline entropy coding method of H.264/AVC for the typical area of envisaged target applications. For a set of test sequences representing typical material used in broadcast applications and for a range of acceptable video quality of about 30 to 38 dB, average bit-rate savings of 9%-14% are achieved.
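The recursive interval subdivision that binary arithmetic coding builds on can be illustrated with a toy floating-point coder. This sketch only shows the principle; real CABAC uses integer ranges, renormalization, and a table-based subdivision rather than floating-point arithmetic.

```python
def encode(bits, p1):
    """Toy binary arithmetic encoder: shrink [low, low + rng) by the
    probability of each symbol. Float precision limits this sketch
    to short inputs."""
    low, rng = 0.0, 1.0
    for b in bits:
        split = rng * p1
        if b:                  # a '1' takes the lower sub-interval
            rng = split
        else:
            low += split
            rng -= split
    return low + rng / 2       # any value inside the final interval

def decode(code, p1, n):
    """Mirror the encoder's subdivisions to recover the symbols."""
    low, rng = 0.0, 1.0
    out = []
    for _ in range(n):
        split = rng * p1
        if code < low + split:
            out.append(1)
            rng = split
        else:
            out.append(0)
            low += split
            rng -= split
    return out

msg = [1, 0, 1, 1, 0, 1, 1, 1]
assert decode(encode(msg, 0.7), 0.7, len(msg)) == msg
```

The adaptivity in CABAC comes from letting p1 vary per context model as symbols are observed, rather than fixing it as done here.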
We describe an alternative mechanism for approximate binary arithmetic coding. The quantity that is approximated is the ratio between the probabilities of the two symbols. Analysis is given to show that the inefficiency so introduced is less than 0.7% on average; in practice the compression loss is negligible.
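The cost of approximating a probability ratio can be checked numerically. The sketch below measures the coding redundancy when the symbol-probability ratio is rounded to the nearest power of two; this is a generic illustration of ratio approximation, not the paper's exact scheme or its averaging argument.

```python
import math

def redundancy(p):
    """Excess code length (bits/symbol) when a binary source with
    P(1) = p is coded as if the ratio P(0)/P(1) were the nearest
    power of two. Cross-entropy minus entropy gives the loss."""
    k = round(math.log2((1 - p) / p))   # approximate ratio as 2**k
    q = 1 / (1 + 2 ** k)                # implied P(1) of the coder
    h = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
    c = -(p * math.log2(q) + (1 - p) * math.log2(1 - q))
    return c - h

# ratio exactly a power of two: no loss at all
assert abs(redundancy(0.5)) < 1e-9
# a mismatched ratio costs only a small fraction of a bit per symbol
assert 0.0 <= redundancy(0.4) < 0.02
```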
ISBN (print): 9783540368397
Electrocardiogram (ECG) coding is required in several applications such as ambulatory recording, patient databases, medical education systems, and ECG data transmission over communication channels. Although digital storage media have become inexpensive in recent years, the growing need for ECG data in medical care creates an imperative need to develop effective ECG compression techniques. In general, signal compression involves four steps: preprocessing, signal decomposition or transformation, quantization, and coding. A new technique for ECG signal compression using the wavelet packet transform (WPT) is presented in this paper. By applying binary arithmetic coding to the rearranged sparse information among subbands of quantized wavelet coefficients, we obtain a compression ratio of more than 9 with a PRD of less than 4% for ECG signals extracted from the MIT-BIH database.
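The transform-quantize-code pipeline can be sketched in miniature with a single Haar step standing in for the wavelet packet transform; the signal values and quantizer step are invented for the example.

```python
def haar_step(x):
    """One level of the Haar transform: pairwise averages
    (approximation band) and differences (detail band). A smooth
    signal yields small details, which is where compression comes
    from after quantization."""
    avg = [(a + b) / 2 for a, b in zip(x[0::2], x[1::2])]
    det = [(a - b) / 2 for a, b in zip(x[0::2], x[1::2])]
    return avg, det

def quantize(coeffs, step):
    return [round(c / step) for c in coeffs]

def dequantize(q, step):
    return [v * step for v in q]

# a toy slowly varying "ECG" segment
sig = [0.0, 0.1, 0.2, 0.2, 0.3, 0.2, 0.1, 0.0]
avg, det = haar_step(sig)
q_det = quantize(det, 0.05)      # small integers, easy to entropy-code

# inverse step: a = avg + det, b = avg - det
rec = []
for a, d in zip(avg, dequantize(q_det, 0.05)):
    rec += [a + d, a - d]
```

The quantized detail band concentrates on a few small integer values; an entropy coder such as the binary arithmetic coder above then exploits exactly that sparsity.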
ISBN (print): 9781510685284; 9781510685291
We showcase INTERFERE, a hologram compression framework selected as the basis of the JPEG Pleno Holography standard. It supports view-dependent coding with simultaneous spatial and angular random access. INTERFERE utilizes an adaptive quantization mechanism that can assign a variable bit width across small phase-space regions. This ensures that the transform coefficients are compactly represented before entropy coding. In this work, we design a new entropy coding mechanism with division-free binary arithmetic coding, allowing us to better capitalize on the compact quantized representation. We also create new probability models for driving the binary arithmetic coder, which can fully harness SIMD instructions while exhibiting significantly smaller memory requirements, enabling them to reside in the CPU cache. Speed-ups ranging from 6x to 600x in decoding and encoding times on the CPU were achieved over our previous solution without compromising rate-distortion performance. With this work, we propose a practical hologram codec that is class-leading in rate-distortion performance, ease of random access, and encoding/decoding throughput, thereby providing a practical solution to one of digital holography's most pressing challenges.
This paper proposes a high-throughput, low-complexity decoder (D_LBAC) based on Logarithmic Binary Arithmetic Coding (LBAC). It can easily implement multiple-symbol decoding. The proposed scheme uses neither multiplication and division operations nor look-up tables (LUTs); it has a simple algorithmic structure and requires only additions and shift operations. Experimental results show that it achieves about 0.2-0.7% bit-rate savings and can decode 3.5 symbols per cycle on average. The hardware implementation described in this paper achieves high symbol-processing capability at lower hardware cost.
This paper presents a modification to Context-based Adaptive Binary Arithmetic Coding (CABAC) in High Efficiency Video Coding (HEVC), which includes improved context modeling for transform coefficient levels and a binary arithmetic coding (BAC) engine with low memory requirements. In the improved context modeling for transform coefficient levels, the context model index for the significance map depends on the number of significant neighbors covered by a local template and on its position within the transform block (TB). To limit the total number of context models for the significance map, TBs are split into different regions based on coefficient position, and the same region in different TBs shares the same context model set. For the first and second bins of the truncated unary scheme of absolute level minus one, the context model indices depend on the neighbors covered by a local template of the current transform coefficient level. Specifically, the context model index for the first bin is determined by the number of neighbors covered by the local template with absolute magnitude equal to 1 and larger than 1; for the second bin, the context model index is determined by the number of neighbors covered by the local template with absolute magnitude larger than 1 and larger than 2. Moreover, the TB is also split into different regions to incorporate coefficient position into the context modeling of the first bin in the luma component. In the BAC engine with low memory requirements, the probability is estimated with a multi-parameter probability update mechanism, in which the probability is updated at two different adaptation speeds and the average is used as the estimated probability for the next symbol. Moreover, a multiplication with low bit capacity is used in the coding interval subdivision to replace the large look-up table and reduce its memory consumption. According to the experiments conducted on HM14.0 under the HEVC main profile, the improved context modeling for trans
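The two-speed averaging idea in the multi-parameter probability update can be sketched as follows. The shift amounts and the 15-bit scale are illustrative values, not the parameters used in the paper.

```python
def make_two_speed(fast=4, slow=7, bits=15):
    """Two estimates of P(1), updated at different adaptation rates;
    their average serves as the working probability. The fast
    estimate tracks local statistics, the slow one smooths noise."""
    scale = 1 << bits
    p_fast = p_slow = scale >> 1        # both start at p = 0.5

    def update(bit):
        nonlocal p_fast, p_slow
        target = scale if bit else 0
        p_fast += (target - p_fast) >> fast   # adapts quickly
        p_slow += (target - p_slow) >> slow   # adapts smoothly
        return (p_fast + p_slow) / (2 * scale)

    return update

upd = make_two_speed()
for _ in range(20):
    p = upd(1)
# after a run of ones, the averaged estimate has risen well above 0.5
```

Averaging a fast and a slow estimator gives quick reaction to non-stationary statistics without the variance a single fast estimator would have.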
In this paper, a high-throughput context-based adaptive binary arithmetic coding decoder design is proposed. This decoder employs a syntax element prediction method to solve pipeline hazard problems. It also uses a new hybrid-memory two-symbol parallel decoding scheme to enhance performance as well as to reduce cost. The critical path delay of the two-symbol binary arithmetic decoding engine is improved by 28% with an efficient mathematical transform. Experimental results show that the throughput of the proposed design reaches 485.76 Mbins/s in high bit-rate coding and 446.2 Mbins/s on average at a 264 MHz operating frequency, which is sufficient to support H.264/AVC level 5.1 real-time decoding.
Context-based Adaptive Binary Arithmetic Coding (CABAC) is the entropy coding module in the HEVC/H.265 video coding standard. As in its predecessor, H.264/AVC, CABAC is a well-known throughput bottleneck due to its strong data dependencies. Besides other optimizations, the replacement of the context model memory by a smaller cache has been proposed for hardware decoders, resulting in an improved clock frequency. However, the effect of potential cache misses has not been properly evaluated. This work fills the gap by performing an extensive evaluation of different cache configurations. Furthermore, it demonstrates that application-specific context model prefetching can effectively reduce the miss rate and increase the overall performance. The best results are achieved with two cache lines consisting of four or eight context models. The 2 × 8 cache allows a performance improvement of 13.2 percent to 16.7 percent compared to a non-cached decoder due to a 17 percent higher clock frequency and highly effective prefetching. The proposed HEVC/H.265 CABAC decoder allows the decoding of high-quality Full HD videos in real-time using few hardware resources on a low-power FPGA.
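The hit/miss behavior of a small context-model cache can be modeled in a few lines. The configuration below (two lines of eight models, direct-mapped) mirrors the 2 × 8 shape discussed above, but the replacement policy and indexing are simplified assumptions, not the paper's design.

```python
class ContextCache:
    """Tiny model of a context-model cache: direct-mapped lines,
    each holding a group of consecutive context models. Counts
    demand hits and misses."""

    def __init__(self, num_lines=2, line_size=8):
        self.line_size = line_size
        self.tags = [None] * num_lines   # which line each slot holds
        self.hits = self.misses = 0

    def access(self, ctx):
        line = ctx // self.line_size
        slot = line % len(self.tags)
        if self.tags[slot] == line:
            self.hits += 1
        else:
            self.misses += 1
            self.tags[slot] = line       # fill on demand miss

    def prefetch(self, ctx):
        line = ctx // self.line_size
        slot = line % len(self.tags)
        if self.tags[slot] != line:
            self.tags[slot] = line       # fill early, no demand miss

cache = ContextCache()
for ctx in range(32):                    # sequential context accesses
    cache.access(ctx)
# 32 accesses spanning 4 lines: 4 demand misses, 28 hits
```

Issuing prefetch for the next expected context group before it is accessed converts those demand misses into hits, which is the effect the evaluated prefetching exploits.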