The geometry-based point cloud compression (G-PCC) standard is devoted to generic point clouds and offers excellent performance. However, redundancies remain in the original octree codec of G-PCC. This paper addresses these issues to make the octree codec of G-PCC more precise. First, uniform contexts based on the neighbouring child nodes and nodes are proposed for the bitwise mode. Second, we merge the single-child mode into the planar mode. A method of eligibility determination for this new planar mode, guided by the required number of coded bits, is proposed. In addition, the characteristics of each plane in the point cloud are utilized to design contexts for the new planar mode. Third, we use the exponential moving average to optimize the contexts for the bitwise and planar modes, guaranteeing low memory consumption and high compression performance. Experimental evaluation, performed on a diversity of point clouds under the common test conditions of G-PCC, demonstrates that the proposed methods enhance the compression performance of the octree codec in G-PCC with gains of −2.5% and −5.8% for lossless and lossy geometry compression, respectively. At the same time, the computational complexity is reduced. Part of this work has been adopted into the latest G-PCC Edition 2. Compared to methods published in recent years, the improved octree codec of G-PCC also shows advantages in compressing a diversity of point clouds.
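As an illustration of the low-memory context adaptation described in this abstract: an exponential moving average keeps a single probability estimate per context and nudges it toward each observed bit. This is a minimal sketch; the smoothing factor below is a hypothetical choice, as the abstract does not give the paper's constants.

```python
def ema_update(p, bit, alpha=1 / 32):
    """Exponential-moving-average update of one context's estimate
    of P(bit = 1).  alpha is a hypothetical smoothing factor; the
    abstract does not give the paper's value."""
    return (1.0 - alpha) * p + alpha * bit

# One context observing a ~90%-ones bit stream: the estimate
# drifts from 0.5 toward the source's true probability.
p = 0.5
for _ in range(50):
    for bit in [1] * 9 + [0]:
        p = ema_update(p, bit)
```

Because each context stores only one number, memory grows linearly with the number of contexts while the estimate still tracks slowly varying statistics.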
In this paper, we propose a new hardware-efficient adaptive binary range coder (ABRC) and its very-large-scale integration (VLSI) architecture. To achieve this, we follow an approach that reduces the bit capacity of the multiplication needed in the interval-division part and avoids the loop in the renormalization part of the ABRC. The probability estimation in the proposed ABRC is based on a lookup-table-free virtual sliding window. To obtain higher compression performance, we propose a new adaptive window-size selection algorithm. In comparison with an ABRC with a single window, the proposed system provides faster probability adaptation at the initial encoding/decoding stage and more accurate probability estimation for very-low-entropy binary sources. We show that the VLSI architecture of the proposed ABRC attains a throughput of 105.92 MSymbols/s on an FPGA platform and consumes 18.15 mW of dynamic power. In comparison with the state-of-the-art MQ-coder (used in the JPEG2000 standard) and the M-coder (used in the H.264/Advanced Video Coding and H.265/High Efficiency Video Coding standards), the proposed ABRC architecture provides comparable throughput with reduced memory and power consumption. Experimental results obtained for a wavelet video codec with a JPEG2000-like bit-plane entropy coder show that the proposed ABRC reduces the bit rate by 0.8%–8% in comparison with the MQ-coder and by 1.0%–24.2% in comparison with the M-coder.
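The lookup-table-free virtual sliding window mentioned above can be sketched as an integer-arithmetic moving average whose shift parameter w emulates a window of length 2^w: a larger w adapts more slowly but estimates more accurately, which is the trade-off an adaptive window-size selection exploits. The constants below are illustrative assumptions, not the paper's.

```python
def vsw_update(s, bit, w=5, bits=12):
    """One step of a lookup-table-free virtual-sliding-window
    estimator, sketched as an integer moving average.  s holds
    P(bit = 1) scaled to [0, 2**bits); 2**w acts as the virtual
    window length.  Constants are illustrative, not the paper's."""
    if bit:
        s += ((1 << bits) - s) >> w   # move the estimate toward 1
    else:
        s -= s >> w                   # move the estimate toward 0
    return s

# A virtual window of length 2**5 tracking a 75%-ones source.
s = 1 << 11                           # start at p = 0.5
for bit in [1, 1, 1, 0] * 200:
    s = vsw_update(s, bit)
prob = s / (1 << 12)
```

An adaptive window-size selection in the spirit of the paper would start with a small w for fast initial adaptation and switch to a larger w for accuracy on low-entropy sources.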
We propose an adaptive binary implementation of the range version of Asymmetric Numeral Systems (rANS). First, since the rANS encoder processes symbols in reverse order, we estimate the probabilities in forward order, store them in the encoder memory, and use them during the reverse encoding. This guarantees that the encoder and the decoder have exactly the same probability estimate for each symbol. Second, we show how this approach can be implemented using probability estimation via a Virtual Sliding Window (VSW). Finally, we demonstrate that, compared to rANS with a static model, the proposed adaptive binary rANS provides better compression performance with similar decoding complexity.
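A minimal sketch of the two-pass idea, assuming a VSW-style adaptive model and omitting renormalization (the rANS state x is kept as an unbounded Python integer): probabilities are estimated in a forward pass, then replayed in reverse during encoding, so the decoder, running forward with the same model, sees identical estimates.

```python
M = 1 << 12          # probability scale (illustrative)

def model_update(f1, bit, w=4):
    # adaptive scaled frequency of bit 1 (VSW-style update)
    f1 = f1 + ((M - f1) >> w) if bit else f1 - (f1 >> w)
    return min(max(f1, 1), M - 1)

def encode(bits):
    # Pass 1 (forward): record the model state seen before each symbol.
    f1, freqs = M // 2, []
    for b in bits:
        freqs.append(f1)
        f1 = model_update(f1, b)
    # Pass 2 (reverse): rANS pushes the last symbol first, reusing
    # the probabilities stored during the forward pass.
    x = 1
    for b, f1 in zip(reversed(bits), reversed(freqs)):
        f = f1 if b else M - f1      # frequency of this bit
        c = M - f1 if b else 0       # cumulative frequency
        x = (x // f) * M + c + (x % f)
    return x

def decode(x, n):
    # The decoder runs forward with the same adaptive model,
    # so its estimates match the encoder's exactly.
    f1, out = M // 2, []
    for _ in range(n):
        slot = x % M
        b = 1 if slot >= M - f1 else 0
        f = f1 if b else M - f1
        c = M - f1 if b else 0
        x = f * (x // M) + slot - c
        out.append(b)
        f1 = model_update(f1, b)
    return out
```

A round trip `decode(encode(bits), len(bits)) == bits` holds for any binary input, since both sides replay the identical probability sequence.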
Motivation: The past decade has seen the introduction of new technologies that have progressively lowered the cost of genomic sequencing. The cost of sequencing is now dropping significantly faster than the cost of storage and transmission. This motivates a need for continuous improvements in genomic data compression, not only at the level of effectiveness (compression rate), but also at the level of functionality (e.g. random access), configurability (effectiveness versus complexity, coding tool set...) and versatility (support for both sequenced reads and assembled sequences). In that regard, current approaches mostly do not support random access, requiring full files to be transmitted, and are restricted to either read or sequence compression. Results: We propose AFRESh, an adaptive framework for no-reference compression of genomic data with random-access functionality, targeting the effective representation of the raw genomic symbol streams of both reads and assembled sequences. AFRESh makes use of a configurable set of prediction and encoding tools, extended by a context-adaptive binary arithmetic coding (CABAC) scheme, to compress raw genetic codes. To the best of our knowledge, our paper is the first to describe an effective implementation of CABAC outside its original application. By applying CABAC, the compression effectiveness improves by up to 19% for assembled sequences and up to 62% for reads. By applying AFRESh to the genomic symbols of the MPEG genomic compression test set for reads, a compression gain is achieved of up to 51% compared to SCALCE, 42% compared to LFQC and 44% compared to ORCOM. When comparing to generic compression approaches, a compression gain is achieved of up to 41% compared to GNU Gzip and 22% compared to 7-Zip at the Ultra setting. Additionally, when compressing assembled sequences of the Human Genome, a compression gain is achieved of up to 34% compared
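The core CABAC idea, a context-adaptive probability model driving a binary arithmetic coder, can be sketched with exact rational arithmetic and a single context. This toy version is an illustration only: real CABAC uses many contexts, finite-precision integer state, and renormalization, none of which are reproduced here.

```python
from fractions import Fraction

def cabac_encode(bits):
    """Toy context-adaptive binary arithmetic coder: one context whose
    P(1) adapts via Krichevsky-Trofimov counts, with exact rational
    interval narrowing and no renormalization (illustration only)."""
    low, width = Fraction(0), Fraction(1)
    c0 = c1 = 1
    for b in bits:
        p1 = Fraction(c1, c0 + c1)
        if b:
            low += width * (1 - p1)   # the 1-branch sits on top
            width *= p1
            c1 += 1
        else:
            width *= 1 - p1
            c0 += 1
    return low, width                  # any point in [low, low+width) works

def cabac_decode(point, n):
    low, width = Fraction(0), Fraction(1)
    c0 = c1 = 1
    out = []
    for _ in range(n):
        p1 = Fraction(c1, c0 + c1)
        split = low + width * (1 - p1)
        if point >= split:
            out.append(1)
            low, width = split, width * p1
            c1 += 1
        else:
            out.append(0)
            width *= 1 - p1
            c0 += 1
    return out
```

The final interval width determines the code length (about -log2(width) bits), which is how the adaptive model's accuracy translates directly into compression effectiveness.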
The compression performance of grammar-based codes is revisited from a new perspective. Previously, the compression performance of grammar-based codes was evaluated against that of the best arithmetic coding algorithm with finite contexts. In this correspondence, we first define semifinite-state sources and finite-order semi-Markov sources. Based on the definitions of semifinite-state sources and finite-order semi-Markov sources, and the idea of run-length encoding (RLE), we then extend traditional RLE algorithms to context-based RLE algorithms: RLE algorithms with k contexts and RLE algorithms of order k, where k is a nonnegative integer. For each individual sequence x, let r(sr,k)*(x) and r(sr/k)*(x) be the best compression rates given by RLE algorithms with k contexts and by RLE algorithms of order k, respectively. It is proved that for any x, r(sr,k)*(x) is no greater than the best compression rate among all arithmetic coding algorithms with k contexts. Furthermore, it is shown that there exist stationary, ergodic semi-Markov sources for which the best RLE algorithms without any context outperform the best arithmetic coding algorithms with any finite number of contexts. Finally, we show that the worst-case redundancies of grammar-based codes against r(sr,k)*(x) and r(sr/k)*(x) among all length-n individual sequences x from a finite alphabet are upper-bounded by d(1) log log n / log n and d(2) log log n / log n, respectively, where d(1) and d(2) are constants. This redundancy result is stronger than all previous corresponding results.
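A toy instance of context-based RLE, assuming the run symbol itself serves as the context for its run-length statistics (the paper's k-context and order-k constructions are more general): the empirical rate is the per-context entropy of the run lengths.

```python
from itertools import groupby
from math import log2

def runs(seq):
    # classic run-length encoding: (symbol, run length) pairs
    return [(s, len(list(g))) for s, g in groupby(seq)]

def rle_rate_with_context(seq):
    """Empirical bits/symbol when each run length is coded with the
    run's symbol as its context -- a toy stand-in for the paper's
    k-context RLE.  The alternating run symbols themselves are
    nearly free to code and are ignored here."""
    by_context = {}
    for sym, length in runs(seq):
        by_context.setdefault(sym, []).append(length)
    bits = 0.0
    for lengths in by_context.values():
        n = len(lengths)
        for l in set(lengths):
            cnt = lengths.count(l)
            bits += cnt * -log2(cnt / n)   # empirical entropy cost
    return bits / len(seq)
```

On a perfectly periodic input every context sees a single run length, so the rate collapses to zero, which hints at why RLE with contexts can beat context-based arithmetic coding on semi-Markov sources with long deterministic runs.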
An improvement of a discrete cosine transform (DCT)-based method for electrocardiogram (ECG) compression is presented. The appropriate use of a block-based DCT combined with a uniform scalar dead-zone quantiser and arithmetic coding shows very good results, confirming that the proposed strategy exhibits competitive performance compared with the most popular compressors used for ECG compression.
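A minimal sketch of the two ingredients, assuming a 1-D orthonormal DCT-II and a dead-zone quantiser realized by truncation toward zero; the block size and quantisation step are illustrative, not the paper's.

```python
from math import cos, pi

def dct(block):
    # 1-D orthonormal DCT-II of one block of samples
    N = len(block)
    out = []
    for k in range(N):
        s = sum(x * cos(pi * (n + 0.5) * k / N) for n, x in enumerate(block))
        scale = (1 / N) ** 0.5 if k == 0 else (2 / N) ** 0.5
        out.append(scale * s)
    return out

def dead_zone_quantize(coeffs, step):
    # truncation toward zero gives a zero bin of width 2*step,
    # twice as wide as the other bins -- the "dead zone" that maps
    # small coefficients to 0 for the entropy coder
    return [int(c / step) for c in coeffs]
```

The dead zone concentrates the many small high-frequency DCT coefficients onto the zero symbol, which is exactly what makes the subsequent arithmetic coding effective.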
Distribution matching transforms independent Bernoulli(1/2)-distributed input bits into a sequence of output symbols with a desired distribution. Fixed-to-fixed-length, invertible, low-complexity encoders and decoders based on constant composition and arithmetic coding are presented. The encoder achieves the maximum rate, namely the entropy of the desired distribution, asymptotically in the blocklength. Furthermore, the normalized divergence between the encoder output and the desired distribution goes to zero in the blocklength.
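The fixed-to-fixed-length, invertible mapping at the heart of constant-composition distribution matching can be sketched by enumerative (arithmetic-coding-style) ranking of binary sequences with a fixed number of ones; the blocklength n and composition k below are illustrative.

```python
from math import comb

def ccdm_encode(index, n, k):
    """Map index in [0, C(n, k)) to the index-th binary sequence of
    length n with exactly k ones (lexicographic order).  This
    enumerative ranking is an invertible fixed-to-fixed mapping
    of the kind used in constant-composition distribution matching."""
    seq = []
    for pos in range(n):
        zeros_first = comb(n - pos - 1, k)   # sequences placing a 0 here
        if index < zeros_first:
            seq.append(0)
        else:
            index -= zeros_first
            seq.append(1)
            k -= 1
    return seq

def ccdm_decode(seq):
    # inverse mapping: recover the index from the sequence
    n, k = len(seq), sum(seq)
    index = 0
    for pos, bit in enumerate(seq):
        if bit:
            index += comb(n - pos - 1, k)
            k -= 1
    return index
```

Since log2(C(n, k))/n tends to the binary entropy H(k/n) as n grows, the rate of this mapping approaches the entropy of the desired distribution, matching the asymptotic optimality stated in the abstract.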
A signal-dependent wavelet transform based on the lifting scheme is proposed. The transform can be made reversible (i.e. an integer-to-integer transform). The reversible transform, followed by arithmetic coding, is applied to lossless image compression. Simulation results indicate that the proposed method is superior to the S+P method.
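The predict/update structure of a reversible lifting step can be sketched with the S-transform (integer Haar), the simplest integer-to-integer instance; the paper's signal-dependent transform adapts the predict filter to the signal, which this fixed sketch omits.

```python
def fwd_lifting(x):
    """One level of the S-transform (integer Haar), the simplest
    reversible predict/update lifting step; x has even length."""
    even, odd = x[0::2], x[1::2]
    d = [o - e for o, e in zip(odd, even)]            # predict step
    s = [e + (di >> 1) for e, di in zip(even, d)]     # update step
    return s, d                                       # coarse, detail

def inv_lifting(s, d):
    # undo the steps in reverse order -- exact integer reconstruction
    even = [si - (di >> 1) for si, di in zip(s, d)]
    odd = [di + e for di, e in zip(d, even)]
    out = []
    for e, o in zip(even, odd):
        out += [e, o]
    return out
```

Because every lifting step is undone exactly in integer arithmetic, the transform is lossless by construction, which is what allows the arithmetic coder to operate on the subbands without any reconstruction error.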
A threshold scheme for secret sharing can protect a secret with high reliability and flexibility. These advantages are achieved only when all the participants are honest, i.e. every participant willing to pool their shadow always presents the true one. Cheating detection is an important issue in secret sharing schemes; however, cheater identification is more effective than cheating detection in realistic applications. If some dishonest participants exist, the honest participants will obtain a false secret, while the cheaters may individually obtain the true one. This paper presents a method to strengthen the security of any threshold scheme with the ability to detect cheating and identify cheaters. By applying a one-way hash function along with arithmetic coding, the proposed method can deterministically detect cheating and identify the cheaters, no matter how many cheaters are involved in the secret reconstruction.
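A sketch of the idea, assuming Shamir's threshold scheme over a prime field with published one-way fingerprints of the shadows (the paper's arithmetic-coding component is not reproduced here): any presented shadow whose hash does not match the published fingerprint is deterministically flagged, identifying its presenter as a cheater.

```python
import hashlib

P = 2**61 - 1    # prime modulus (illustrative field size)

def make_shares(secret, n, coeffs):
    """Shamir shares of `secret` for n participants; `coeffs` are the
    higher-degree polynomial coefficients (random in practice, fixed
    here for reproducibility).  Published hashes of the shadows later
    expose any forged shadow."""
    poly = [secret] + list(coeffs)
    shares = {}
    for x in range(1, n + 1):
        y = 0
        for c in reversed(poly):              # Horner evaluation mod P
            y = (y * x + c) % P
        shares[x] = y
    checks = {x: hashlib.sha256(f"{x}:{y}".encode()).hexdigest()
              for x, y in shares.items()}
    return shares, checks

def identify_cheaters(presented, checks):
    # a shadow whose fingerprint mismatches identifies its presenter
    return [x for x, y in presented.items()
            if hashlib.sha256(f"{x}:{y}".encode()).hexdigest() != checks[x]]

def reconstruct(points):
    # Lagrange interpolation at 0 over GF(P)
    secret = 0
    items = list(points.items())
    for i, (xi, yi) in enumerate(items):
        num = den = 1
        for j, (xj, _) in enumerate(items):
            if i != j:
                num = num * -xj % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

The one-way property of the hash means a cheater cannot forge a shadow matching a published fingerprint, so identification works no matter how many participants collude.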
A new method for generating ternary spreading sequences with aperiodic zero correlation zones and large family sizes is presented. The sequences are proposed as spreading sequences to provide high capacity and cancel multipath interference in a multi-carrier DS-CDMA system.