ISBN (Digital): 9781728186351
ISBN (Print): 9781728186368
Context-based adaptive binary arithmetic coding (CABAC) is the only entropy coding method in HEVC. According to statistics, CABAC encoders account for more than 25% of the high efficiency video coding (HEVC) coding time. Therefore, improving the CABAC algorithm can effectively increase the coding speed of HEVC. On this basis, a selective encryption scheme based on an improved CABAC algorithm is proposed. First, the improved CABAC algorithm is used to optimize regular-mode encoding, and then a cryptographic algorithm is used to selectively encrypt the syntax elements coded in bypass mode. The experimental results show that the encoding time is reduced by nearly 10% while the video information is strongly scrambled. The scheme is both secure and effective.
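As an illustration of the selective-encryption step, the sketch below XORs only bypass-coded bins with a keystream and leaves regular-mode bins untouched. The keystream construction (counter-mode SHA-256) and the labelling of bins as bypass or regular are assumptions made for demonstration, not the paper's cryptographic algorithm.

```python
import hashlib

def keystream_bits(key: bytes, n_bits: int):
    """Derive a pseudo-random bit stream from `key` by counter-mode hashing."""
    bits, counter = [], 0
    while len(bits) < n_bits:
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        for byte in block:
            bits.extend((byte >> i) & 1 for i in range(8))
        counter += 1
    return bits[:n_bits]

def encrypt_bypass_bins(bins, modes, key):
    """XOR only the bins flagged as bypass-coded ('bp'); regular bins pass through."""
    ks = iter(keystream_bits(key, sum(m == "bp" for m in modes)))
    return [b ^ next(ks) if m == "bp" else b for b, m in zip(bins, modes)]

# Toy bin string with mixed regular ('re') and bypass ('bp') bins.
bins  = [1, 0, 1, 1, 0, 1]
modes = ["re", "bp", "bp", "re", "bp", "re"]
print(encrypt_bypass_bins(bins, modes, key=b"secret-key"))
```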
ISBN (Print): 9781538695333
Wearable devices have become widely used to monitor body signals for long-term health care and home care applications, owing to the rapid development of portable electronics. They detect vital signals through physiological sensors and then transmit them to a cloud database for analysis and monitoring purposes through wireless communication systems. A smart analog-to-digital converter (ADC) was realized as a mixed-signal application-specific integrated circuit (ASIC), designed around adaptive-resolution and lossless compression techniques for electrocardiogram (ECG) signal monitoring. The adaptive-resolution technique is used to select different sampling frequencies of the ADC according to amplitude changes in the input signals; the sampling clock for the ADC can be adaptively selected according to the characteristics of the signal. Data compression is an efficient way to save transmission power by reducing the amount of data transmitted over the network. The signal transmission rate is reduced while high-quality detection is maintained, meeting the low power consumption target of the design. The mixed-signal ADC design is implemented using VHDL coding.
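A minimal sketch of the adaptive-resolution idea, assuming a simple amplitude-change rule: the ADC sampling clock is switched to a faster rate when the input changes rapidly (as during a QRS complex). The thresholds and rates below are illustrative assumptions; the paper's design is a mixed-signal ASIC described in VHDL, not software.

```python
def select_sampling_rate(prev_sample, cur_sample,
                         slow_rate=125, fast_rate=500, threshold=0.05):
    """Return a faster ADC clock when the signal amplitude changes rapidly."""
    return fast_rate if abs(cur_sample - prev_sample) > threshold else slow_rate

# Toy ECG-like trace (mV): mostly flat with one sharp excursion.
trace = [0.00, 0.01, 0.02, 0.40, 0.10, 0.02, 0.01]
rates = [select_sampling_rate(a, b) for a, b in zip(trace, trace[1:])]
print(rates)  # the sharp excursion triggers the fast sampling clock
```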
ISBN (Print): 9781450376822
In order to deal with the context dilution problem introduced in the lossless compression of M-ary sources, a lossless compression algorithm based on a context tree model is proposed. By making use of the principle that conditioning reduces entropy, the algorithm constructs a context tree model to exploit the correlation among adjacent image pixels. Meanwhile, the M-ary tree is transformed into a binary tree to analyze the statistical information of the source in more detail. In addition, an escape symbol is introduced to deal with the zero-frequency symbol problem when the model is used by an arithmetic encoder, and the increment in description length is introduced as the criterion for merging tree nodes. The experimental results show that the proposed algorithm achieves better compression results.
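A minimal sketch of the escape-symbol mechanism mentioned above, in the spirit of PPM-style modeling: per-context symbol counts, with probability mass reserved for an escape whenever a symbol has not yet been seen in that context. The paper's binary context tree and description-length-based node merging are not reproduced here; this only illustrates the zero-frequency handling.

```python
from collections import defaultdict

class ContextModel:
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def probability(self, context, symbol):
        """P(symbol | context); unseen symbols receive the escape probability."""
        ctx = self.counts[context]
        total, distinct = sum(ctx.values()), len(ctx)
        if total == 0:
            return 1.0                              # empty context: pure escape
        if symbol in ctx:
            return ctx[symbol] / (total + distinct)
        return distinct / (total + distinct)        # escape mass (PPM method C)

    def update(self, context, symbol):
        self.counts[context][symbol] += 1

model = ContextModel()
pixels = "ABABBA"
for prev, cur in zip(pixels, pixels[1:]):
    print(f"P({cur}|{prev}) =", round(model.probability(prev, cur), 3))
    model.update(prev, cur)
```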
ISBN (Digital): 9781728171685
ISBN (Print): 9781728171692
We present a novel deep compression algorithm to reduce the memory footprint of LiDAR point clouds. Our method exploits the sparsity and structural redundancy between points to reduce the bitrate. Towards this goal, we first encode the point cloud into an octree, a data-efficient structure suitable for sparse point clouds. We then design a tree-structured conditional entropy model that can be directly applied to octree structures to predict the probability of a symbol's occurrence. We validate the effectiveness of our method over two large-scale datasets. The results demonstrate that our approach reduces the bitrate by 10-20% at the same reconstruction quality, compared to the previous state-of-the-art. Importantly, we also show that for the same bitrate, our approach outperforms other compression algorithms when performing downstream 3D segmentation and detection tasks using compressed representations. This helps advance the feasibility of using point cloud compression to reduce the onboard and offboard storage for safety-critical applications such as self-driving cars, where a single vehicle captures 84 billion points per day.
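The first step, serializing the point cloud into octree occupancy symbols, can be sketched as below: each internal node contributes one byte whose bits mark occupied child octants, and these symbols are what a tree-structured entropy model would predict. The bounding cube, depth, and toy coordinates are assumptions; the learned conditional entropy model itself is not reproduced.

```python
def octree_occupancy(points, origin=(0.0, 0.0, 0.0), size=1.0, depth=3):
    """Serialize a point cloud into one occupancy byte per internal octree node."""
    symbols = []

    def recurse(pts, origin, size, level):
        if level == depth or not pts:
            return
        half = size / 2.0
        children = [[] for _ in range(8)]
        for p in pts:
            idx = sum(((p[d] >= origin[d] + half) << d) for d in range(3))
            children[idx].append(p)
        symbols.append(sum(1 << i for i, c in enumerate(children) if c))
        for i, child in enumerate(children):
            child_origin = tuple(origin[d] + half * ((i >> d) & 1) for d in range(3))
            recurse(child, child_origin, half, level + 1)

    recurse(list(points), origin, size, 0)
    return symbols  # these bytes are what the conditional entropy model predicts

print(octree_occupancy([(0.1, 0.2, 0.3), (0.8, 0.9, 0.1), (0.81, 0.88, 0.12)]))
```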
ISBN (Digital): 9781728197999
ISBN (Print): 9781728198002
A fundamentally new approach to data restructuring is proposed, namely internal data restructuring, the essence of which is to identify patterns in the internal data structure of an information resource based on a quantitative attribute. Here the quantitative characteristic, namely the number of series units (NSU), is the instrument used to carry out the data restructuring. The effectiveness of the developed method of internal restructuring is evaluated from the standpoint of a more compact representation of the encoded data. A statistical approach based on the classical Huffman algorithm is used as the coding tool. The developed method of internal restructuring solves an urgent scientific and applied problem associated with increasing the efficiency of information resource data (IRD) entropy coding in terms of reducing the length of the information representation.
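A rough sketch of the coding side, assuming the "series units" can be approximated by run-length grouping: the data is first regrouped into runs (a stand-in for the NSU-based restructuring, which the paper defines more precisely), and a classical Huffman code is then built over the resulting units.

```python
import heapq
from collections import Counter
from itertools import groupby

def series_units(data):
    """Group the input into (symbol, run length) 'series units'."""
    return [(sym, len(list(grp))) for sym, grp in groupby(data)]

def huffman_code(symbols):
    """Build a prefix code {symbol: bitstring} from symbol frequencies."""
    heap = [[freq, i, {sym: ""}]
            for i, (sym, freq) in enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        merged = {s: "0" + c for s, c in lo[2].items()}
        merged.update({s: "1" + c for s, c in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], next_id, merged])
        next_id += 1
    return heap[0][2]

data = "AAABBBBCCAAB"
units = series_units(data)
print(units)
print(huffman_code(units))
```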
ISBN (Digital): 9781728171685
ISBN (Print): 9781728171692
We leverage the powerful lossy image compression algorithm BPG to build a lossless image compression system. Specifically, the original image is first decomposed into the lossy reconstruction obtained after compressing it with BPG and the corresponding residual. We then model the distribution of the residual with a convolutional neural network-based probabilistic model that is conditioned on the BPG reconstruction, and combine it with entropy coding to losslessly encode the residual. Finally, the image is stored using the concatenation of the bitstreams produced by BPG and the learned residual coder. The resulting compression system achieves state-of-the-art performance in learned lossless full-resolution image compression, outperforming previous learned approaches as well as PNG, WebP, and JPEG2000.
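A minimal sketch of the decomposition, with a coarse uniform quantizer standing in for BPG and an empirical-entropy estimate standing in for the CNN-conditioned residual coder; both stand-ins are assumptions made purely to illustrate the lossy-plus-residual split.

```python
import numpy as np

def lossy_reconstruction(img, step=16):
    """Stand-in for BPG: quantize pixel values to a coarse grid."""
    return (np.round(img / step) * step).clip(0, 255).astype(np.int16)

def empirical_entropy_bits(values):
    """Bits per symbol under the empirical distribution (ideal entropy coder)."""
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(np.int16)
recon = lossy_reconstruction(img)
residual = img - recon  # what the CNN-conditioned coder would model losslessly
print("residual entropy (bits/pixel):", empirical_entropy_bits(residual))
```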
ISBN (Digital): 9781728182315
ISBN (Print): 9781728182322
The idea behind the suggested image compression system is to apply the DWT to prediction-error values (the residual image) instead of the original image. The prediction-error values are simpler than the original image values, which reduces the coefficient values in all the wavelet bands. As a result, the detail bands can be discarded for the first decomposition level, and all or some of the detail bands for the second decomposition level; in addition, the effect of the quantization process is smaller. The system has been applied to natural and gray-scale images, and the results obtained were similar for both image types. Seven different wavelets from four families were compared: db1, db3, db5, sym1, coif1, bior2.2, and bior2.4. The db3 wavelet produced the best PSNR, closest to that of the coif1 wavelet.
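A small sketch of the pipeline using PyWavelets: a prediction residual is formed first (here with a simple left-neighbour predictor, an assumption) and the 2-D DWT is then applied to that residual rather than to the original image.

```python
import numpy as np
import pywt  # PyWavelets

def prediction_residual(img):
    """Residual against a simple left-neighbour predictor."""
    pred = np.zeros_like(img)
    pred[:, 1:] = img[:, :-1]
    return img - pred

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(128, 128)).astype(np.int32)
residual = prediction_residual(img)

# Two-level DWT of the residual; the detail-band coefficients of a residual are
# typically small, which is the effect the abstract relies on.
cA1, details1 = pywt.dwt2(residual, "db3")
cA2, details2 = pywt.dwt2(cA1, "db3")
print([d.shape for d in details1], [d.shape for d in details2])
```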
ISBN (Digital): 9781728133201
ISBN (Print): 9781728133218
Surveillance video applications are growing dramatically in public safety and daily life, and they often detect and recognize moving objects inside video signals. Existing surveillance video compression schemes are still based on traditional hybrid coding frameworks that handle temporal redundancy with a block-wise motion compensation mechanism, lacking the extraction and utilization of inherent structural information. In this paper, we alleviate this issue by decomposing surveillance video signals into a global spatio-temporal feature (memory) and a skeleton for each frame (clue). The memory is abstracted by a recurrent neural network across a Group of Pictures (GoP) inside one video sequence, representing the appearance of the elements that appear inside the GoP. The skeleton, obtained by a dedicated pose estimator, serves as a clue for recalling the memory. In addition, we introduce an attention mechanism to learn the relationships between appearance and skeletons, and we reconstruct each frame with an adversarial training process. Experimental results demonstrate that our approach can effectively generate realistic frames from appearance and skeleton. Compared with the latest video compression standard H.265, it shows much higher compression performance on surveillance video.
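A minimal sketch of the attention step, assuming plain scaled dot-product attention: skeleton keypoint features act as queries against the GoP-level appearance memory. The feature sizes and the attention form are illustrative assumptions, not the paper's exact network.

```python
import numpy as np

def attend(skeleton_q, memory_kv):
    """Scaled dot-product attention: queries from the skeleton, keys/values from memory."""
    d = skeleton_q.shape[-1]
    scores = skeleton_q @ memory_kv.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ memory_kv          # appearance features recalled for this frame

rng = np.random.default_rng(2)
memory   = rng.normal(size=(16, 64))    # 16 memory slots summarizing the GoP
skeleton = rng.normal(size=(17, 64))    # 17 keypoint features for one frame
print(attend(skeleton, memory).shape)   # (17, 64): per-keypoint recalled appearance
```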
ISBN (Digital): 9781728163956
ISBN (Print): 9781728163963
Recently, learned image compression has made great progress, exemplified by the hyperprior model and its variants based on convolutional neural networks (CNNs). However, CNNs are not well suited to scalable coding, and multiple models need to be trained separately to achieve variable rates. In this paper, we incorporate differentiable quantization and accurate entropy models into recurrent neural network (RNN) architectures to achieve scalable learned image compression. First, we present an RNN architecture with quantization and entropy coding. To realize scalable coding, we allocate the bits to multiple layers by adjusting the layer-wise lambda values in the Lagrangian-multiplier-based rate-distortion optimization function. Second, we add an RNN-based hyperprior to improve the accuracy of the entropy models for the multiple-layer residual representations. Experimental results demonstrate that our performance is comparable with recent CNN-based hyperprior methods on the Kodak dataset. Moreover, our method is a scalable and flexible coding approach that achieves multiple rates using one single model, which is very appealing.
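A small sketch of the layer-wise bit allocation, assuming the usual learned-compression Lagrangian of rate plus lambda times distortion per layer: raising a layer's lambda penalizes its distortion more strongly, so an optimizer spends more bits on that layer. All numbers are illustrative.

```python
def scalable_rd_loss(rates, distortions, lambdas):
    """Per-layer Lagrangian objective: sum of (rate + lambda * distortion)."""
    assert len(rates) == len(distortions) == len(lambdas)
    return sum(r + lam * d for r, d, lam in zip(rates, distortions, lambdas))

# Three layers of a scalable bitstream (all numbers illustrative): a larger
# lambda weights that layer's distortion more heavily, pulling bits toward it.
rates       = [0.20, 0.35, 0.50]     # bits per pixel spent by each layer
distortions = [0.010, 0.004, 0.002]  # e.g. MSE of each layer's reconstruction
lambdas     = [64.0, 128.0, 256.0]
print(scalable_rd_loss(rates, distortions, lambdas))
```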
ISBN (Digital): 9781728163956
ISBN (Print): 9781728163963
Recent approaches to compression of deep neural networks, like the emerging standard on compression of neural networks for multimedia content description and analysis (MPEG-7 part 17), apply scalar quantization and entropy coding of the quantization indexes. In this paper we present an advanced method for quantization of neural network parameters, which applies dependent scalar quantization (DQ) or trellis-coded quantization (TCQ), and an improved context modeling for the entropy coding of the quantization indexes. We show that the proposed method achieves a 5.778% bitrate reduction and virtually no loss (0.37%) of network performance on average, compared to the baseline methods of the second test model (NCTM) of MPEG-7 part 17, for relevant working points.
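A simplified sketch of dependent scalar quantization: two interleaved quantizers Q0 (even multiples of the step size) and Q1 (odd multiples), selected by a small state machine driven by the parity of the previous index. This greedy version omits the trellis search used in TCQ, and the 4-state transition table and quantizer definitions are simplified assumptions for illustration.

```python
STATE_TRANSITION = {   # (state, parity of index) -> next state
    (0, 0): 0, (0, 1): 2,
    (1, 0): 2, (1, 1): 0,
    (2, 0): 1, (2, 1): 3,
    (3, 0): 3, (3, 1): 1,
}
QUANTIZER_OF_STATE = [0, 0, 1, 1]  # states 0,1 use Q0; states 2,3 use Q1

def dq_quantize(values, step):
    """Encoder: the state machine picks Q0 or Q1 for each parameter in turn."""
    state, indexes = 0, []
    for v in values:
        q = QUANTIZER_OF_STATE[state]
        # Q0 reconstructs even multiples of `step`, Q1 odd multiples (simplified).
        k = round(v / (2 * step)) if q == 0 else round((v - step) / (2 * step))
        indexes.append(k)
        state = STATE_TRANSITION[(state, k & 1)]
    return indexes

def dq_dequantize(indexes, step):
    """Decoder: replays the same state machine to map indexes back to levels."""
    state, out = 0, []
    for k in indexes:
        q = QUANTIZER_OF_STATE[state]
        out.append((2 * k + q) * step)
        state = STATE_TRANSITION[(state, k & 1)]
    return out

weights = [0.53, -0.12, 0.98, 0.07, -0.61]
idx = dq_quantize(weights, step=0.25)
print(idx, dq_dequantize(idx, step=0.25))
```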