In this paper, we propose a new hardware-efficient adaptive binary range coder (ABRC) and its very-large-scale integration (VLSI) architecture. To achieve this, we follow an approach that reduces the bit width of the multiplication needed in the interval-division step and avoids the loop in the renormalization step of the ABRC. Probability estimation in the proposed ABRC is based on a lookup-table-free virtual sliding window. To improve compression performance, we propose a new adaptive window-size selection algorithm. Compared with an ABRC using a single window, the proposed system adapts its probability estimate faster at the initial encoding/decoding stage and estimates probabilities more accurately for very-low-entropy binary sources. We show that the VLSI architecture of the proposed ABRC achieves a throughput of 105.92 Msymbols/s on an FPGA platform and consumes 18.15 mW of dynamic power. Compared with the state-of-the-art MQ-coder (used in the JPEG2000 standard) and the M-coder (used in the H.264/Advanced Video Coding and H.265/High Efficiency Video Coding standards), the proposed ABRC architecture provides comparable throughput with lower memory and power consumption. Experimental results obtained for a wavelet video codec with a JPEG2000-like bit-plane entropy coder show that the proposed ABRC reduces the bit rate by 0.8%-8% relative to the MQ-coder and by 1.0%-24.2% relative to the M-coder.
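The abstract describes the virtual sliding window (VSW) estimator only at a high level. The following is a minimal sketch of a lookup-table-free sliding-window-style probability update; the 16-bit state width, the window parameterization as a power of two, and the initialization are illustrative assumptions, not the paper's exact constants.

```c
#include <stdint.h>

/* Sketch of a lookup-table-free "virtual sliding window" probability
 * estimator. The state s approximates P(bit == 1) * 2^16 and is updated
 * with shifts only, which is why no lookup table is required.
 * Constants here (16-bit scale, W = 2^log2_w) are assumptions. */
typedef struct {
    uint32_t s;       /* scaled probability of a '1', in (0, 2^16) */
    unsigned log2_w;  /* window size W = 2^log2_w */
} vsw_t;

static void vsw_init(vsw_t *m, unsigned log2_w) {
    m->s = 1u << 15;  /* start from P(1) = 0.5 */
    m->log2_w = log2_w;
}

/* After coding one bit, move s a fraction 1/W toward 0 or toward 2^16. */
static void vsw_update(vsw_t *m, int bit) {
    if (bit)
        m->s += ((1u << 16) - m->s) >> m->log2_w;
    else
        m->s -= m->s >> m->log2_w;
}
```

A small window adapts quickly (useful at the start of encoding), while a large window tracks very-low-entropy sources more accurately, which is the trade-off the adaptive window-size selection in the paper addresses.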
Large-scale scientific simulation codes typically run on a cluster of CPUs that write/read time steps to/from a single file system. As data sets constantly grow in size, this increasingly leads to I/O bottlenecks. When the rate at which data is produced exceeds the available I/O bandwidth, the simulation stalls and the CPUs sit idle. Data compression can alleviate this problem by using some CPU cycles to reduce the amount of data that needs to be transferred. Most compression schemes, however, are designed to operate offline and seek to maximize compression, not throughput. Furthermore, they often require quantizing floating-point values onto a uniform integer grid, which disqualifies their use in applications where exact values must be retained. We propose a simple scheme for lossless, online compression of floating-point data that transparently integrates into the I/O of many applications. A plug-in scheme for data-dependent prediction makes our scheme applicable to a wide variety of data used in visualization, such as unstructured meshes, point sets, images, and voxel grids. We achieve state-of-the-art compression rates and speeds, the latter in part due to an improved entropy coder. We demonstrate that this significantly accelerates I/O throughput in real simulation runs. Unlike previous schemes, our method also adapts well to variable-precision floating-point and integer data.
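To illustrate the general predict-and-map pattern such lossless floating-point compressors rely on (not this paper's exact algorithm), the sketch below predicts each value from the previous one, reinterprets both as ordered integers, and forms a small residual that an entropy coder would then compress. The 1D previous-value predictor and the bit-counting stub standing in for the real entropy coder are assumptions for brevity.

```c
#include <stdint.h>
#include <string.h>

/* Map an IEEE-754 float to an unsigned integer so that numerically close
 * floats yield close integers (standard sign-magnitude-to-ordered trick). */
static uint32_t float_to_ordered_bits(float f) {
    uint32_t u;
    memcpy(&u, &f, sizeof u);
    return (u & 0x80000000u) ? ~u : (u | 0x80000000u);
}

/* Placeholder for the entropy coder: just count bits in the residual. */
static unsigned residual_cost_bits(uint32_t residual) {
    unsigned n = 1;
    while (residual >>= 1) n++;
    return n;
}

/* Lossless 1D pipeline: predict, XOR against the prediction, "encode". */
unsigned compress_1d(const float *data, int n) {
    unsigned total_bits = 0;
    uint32_t prev = float_to_ordered_bits(0.0f);
    for (int i = 0; i < n; i++) {
        uint32_t cur = float_to_ordered_bits(data[i]);
        uint32_t residual = cur ^ prev;  /* small when prediction is good */
        total_bits += residual_cost_bits(residual);
        prev = cur;                      /* previous-value prediction */
    }
    return total_bits;
}
```

Because the residual is reconstructed exactly from the prediction and the coded bits, no quantization onto an integer grid is needed, which is the property the abstract emphasizes.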
ISBN (print): 9783642228759
This paper is devoted to comparing the complexity of integer software implementations of adaptive binary arithmetic coding. First, for binary memoryless sources with a known probability distribution, we prove that the arithmetic encoder's encoding time is a linear function of the number of input binary symbols and the source entropy. Second, we show that byte-oriented renormalization reduces encoding time by up to 40% compared with bit-oriented renormalization. Finally, we study the influence of the probability estimation algorithm on encoding time and show that the "Virtual Sliding Window" estimator has lower computational complexity than the state-machine-based probability estimation algorithm of the H.264/AVC standard.
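The following is a hedged sketch of why byte-oriented renormalization is faster: the coder state is wide enough that a whole byte can be emitted per loop pass instead of a single bit, so the renormalization loop runs roughly one-eighth as often. The 32-bit low/range layout and the TOP threshold are common range-coder conventions, not necessarily the implementation measured in the paper, and carry propagation is omitted for brevity.

```c
#include <stdint.h>
#include <stdio.h>

#define TOP (1u << 24)  /* renormalize once fewer than 8 precision bits remain */

typedef struct {
    uint32_t low, range;
    FILE *out;
} rc_t;

/* Bit-oriented style (for contrast): one output bit per iteration. */
static void renorm_bitwise(rc_t *rc) {
    while (rc->range < (1u << 31)) {
        /* ... emit the top bit of 'low' here ... */
        rc->low <<= 1;
        rc->range <<= 1;
    }
}

/* Byte-oriented style: one output byte per iteration. */
static void renorm_bytewise(rc_t *rc) {
    while (rc->range < TOP) {
        fputc((int)(rc->low >> 24), rc->out);  /* emit the top byte */
        rc->low <<= 8;
        rc->range <<= 8;
    }
}
```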