ISBN:
(Print) 9798350359329; 9798350359312
Data compression plays a key role in efficient data storage, transmission, and processing. With the rapid development of deep learning techniques, deep neural networks have been applied in this field to achieve higher compression rates. Deep learning-based general-purpose lossless compression techniques are typically formulated as an autoregressive sequential prediction problem. These methods are state-of-the-art in terms of compression ratio but impractical due to runtime and resource constraints. Recent advances in lossless image compression using non-autoregressive probability modeling have proven to be a faster and more practical approach. In this paper, we propose ByteZip, a lossless compression method based on the non-autoregressive approach for known or defined structured byte streams. ByteZip performs hierarchical probabilistic modeling using autoencoders and density mixture models, which reduces the overhead of sequential processing. The goal is a practical lossless compressor with faster compression and decompression and a competitive compression ratio. Experiments show that the proposed approach achieves 64x faster compression than the state-of-the-art transformer-based model TRACE at the cost of only 5% less size reduction on average. Our approach outperforms general-purpose compressors such as Gzip (23% more size reduction on average) and 7z (16% more size reduction on average).
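ByteZip's implementation is not reproduced here, but the core idea of non-autoregressive entropy coding can be made concrete with a minimal sketch: a model predicts a distribution over all 256 byte values for every position in one parallel pass, and an ideal arithmetic coder then spends about −log₂ p(byte) bits per position. All names below are illustrative, and the toy "model" is just the empirical byte histogram, not the paper's autoencoder and density mixture pipeline.

```python
import numpy as np

def ideal_code_length_bits(data: bytes, probs: np.ndarray) -> float:
    """probs has shape (len(data), 256): one distribution per byte position,
    produced in a single parallel pass rather than one symbol at a time."""
    symbols = np.frombuffer(data, dtype=np.uint8)
    # Shannon code length: the bits an ideal arithmetic coder would emit.
    return float(-np.log2(probs[np.arange(len(data)), symbols]).sum())

# Toy stand-in for the learned model: the empirical byte histogram.
data = b"structured byte streams compress well when the model fits them"
hist = np.bincount(np.frombuffer(data, dtype=np.uint8), minlength=256) + 1.0
probs = np.tile(hist / hist.sum(), (len(data), 1))
print(f"~{ideal_code_length_bits(data, probs) / 8:.1f} bytes vs {len(data)} raw")
```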
ISBN:
(Print) 9798350343205; 9798350343199
Deep neural networks make it possible to learn key characteristics of data without having to assume mathematically tractable models. This in turn enables compressing data in a model-free way. One of the promising areas for applying deep learning to the physical layer of communication networks is compression. Hundred-fold compression of the CSI in massive MIMO systems with only a small loss has been shown to be both feasible and necessary. However, model-free, data-driven compression comes with a downside: the encoding and decoding models need to be trained on a large set of CSI arrays representative of a wide spectrum of propagation and environmental conditions. As a result, in the early stages of deploying deep CSI compression models, it is necessary to detect if and when users' channels have drifted significantly away from the distribution of the CSI data on which the deep compression model was trained. In this paper, we present both 1) a technique for detecting harmful channel drift and 2) a lightweight scheme for fine-tuning the deep compression models to adjust to such shifts. Using public-domain synthetic channel data as well as 3GPP-compliant simulated data, we demonstrate the practicality of the proposed deep compression and detection framework. We close with recommendations for a viable implementation of the proposed drift detection by the standards bodies.
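The abstract does not give the detector itself; one plausible, minimal realization flags drift when the deep compressor's reconstruction error on recent CSI arrays exceeds a threshold calibrated on training-time errors. The NMSE criterion, the function names, and the gamma-distributed toy errors below are all assumptions for illustration.

```python
import numpy as np

def nmse(x: np.ndarray, x_hat: np.ndarray) -> float:
    # Normalized MSE between a CSI array and its decoded reconstruction.
    return float(np.sum(np.abs(x - x_hat) ** 2) / np.sum(np.abs(x) ** 2))

def calibrate_threshold(train_errors: np.ndarray, q: float = 0.99) -> float:
    # q-quantile of reconstruction errors observed on the training set.
    return float(np.quantile(train_errors, q))

def drift_detected(recent_errors: np.ndarray, threshold: float,
                   frac: float = 0.5) -> bool:
    # Declare harmful drift when most recent CSI arrays reconstruct poorly;
    # this would trigger the paper's lightweight fine-tuning stage.
    return bool(np.mean(recent_errors > threshold) > frac)

rng = np.random.default_rng(0)
thr = calibrate_threshold(rng.gamma(2.0, 0.01, size=5000))  # in-distribution
print(drift_detected(rng.gamma(2.0, 0.01, size=200), thr))  # False: no drift
print(drift_detected(rng.gamma(2.0, 0.05, size=200), thr))  # True: drifted
```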
ISBN:
(Print) 9798350385885; 9798350385878
An interval graph is the intersection graph of intervals on the real line. We consider the problem of constructing space-efficient data structures for two subclasses of interval graphs: those with maximum degree σ₁ and those with chromatic number at most σ₂. We show that both bounded-degree and bounded-chromatic-number interval graphs have a tight lower bound of n lg σᵢ − o(n lg σᵢ) bits (i = 1, 2). This improves the lower bound of Chakraborty and Jo from (1/6)n lg σᵢ − O(n). For bounded-chromatic-number interval graphs, we give the first succinct data structure occupying n lg σ₂ + O(n) bits that supports navigational operations and distance queries in O(σ₂ lg n) time. To match Chakraborty and Jo's time complexity of O(lg lg σ₂), which uses (σ₂ − 1)n + O(n) bits, we use 2n lg σ₂ + O(n) bits instead.
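The n lg σ₂ space term can be made concrete with a toy packing argument. This is only an illustration of the information bound, not the paper's succinct structure (which must also support navigation and distance queries): a proper coloring assigns each of the n vertices a value in [0, σ₂), and radix packing stores the whole sequence in ⌈n lg σ₂⌉ bits.

```python
from math import ceil, log2

def pack(values: list[int], sigma: int) -> int:
    # Treat the sequence as one big base-sigma number: exactly
    # ceil(n * lg(sigma)) bits, beating per-value ceil(lg sigma) rounding.
    code = 0
    for v in values:
        code = code * sigma + v
    return code

colors = [0, 2, 1, 1, 3, 0, 2]   # a proper coloring with sigma_2 = 4
sigma = 4
code = pack(colors, sigma)
print(code.bit_length(), "bits used vs bound",
      ceil(len(colors) * log2(sigma)))
```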
The two meta-algorithms Concat Compress and Cross Compress, which can be used to measure the similarity of files, were subjected to an extensive practical test together with the compression algorithms Re-Pair, gzip an...
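The meta-algorithms themselves are not reproduced in this preview, but the concatenate-and-compress idea behind such similarity measures can be sketched in the NCD style. This is an assumption about the flavor of Concat Compress, using gzip as the underlying compressor: if two files are similar, compressing their concatenation costs little more than compressing one alone.

```python
import gzip

def c(b: bytes) -> int:
    return len(gzip.compress(b, compresslevel=9))

def ncd(x: bytes, y: bytes) -> float:
    # Normalized compression distance: near 0 for similar inputs.
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"the quick brown fox jumps over the lazy dog" * 20
b_ = b"the quick brown fox leaps over the lazy cat" * 20
print(f"similar: {ncd(a, b_):.2f}, dissimilar: {ncd(a, bytes(880)):.2f}")
```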
This paper conducts a comparative analysis of the impact of unsparsing by data scaling on lossless data compression methods. The most commonly used data compression algorithms such as gzip, zlib, bzip2, and lzma are t...
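The scaling transform itself is not specified in this preview, but the measurement harness is straightforward; the sketch below (synthetic sparse data and illustrative parameters, not the paper's datasets) shows how the four compressors would be compared on the same byte stream.

```python
import bz2, gzip, lzma, zlib
import numpy as np

# Synthetic sparse signal: mostly zeros with occasional Gaussian samples.
sparse = np.zeros(100_000, dtype=np.float64)
sparse[::97] = np.random.default_rng(0).normal(size=len(sparse[::97]))
raw = sparse.tobytes()

for name, fn in [("gzip", gzip.compress), ("zlib", zlib.compress),
                 ("bzip2", bz2.compress), ("lzma", lzma.compress)]:
    print(f"{name:5s} ratio: {len(raw) / len(fn(raw)):6.1f}x")
```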
ISBN:
(Print) 9798350374995; 9798350374988
To improve automated AI-based partial discharge (PD) analysis, we investigate whether using the entire PD current waveform offers advantages over using only the apparent charge and phase reference. As part of this research, this paper investigates the use of compression algorithms to reduce the amount of data when transmitting, storing, and analyzing high-resolution PD current waveforms. For this purpose, traditional compression methods are extended with an AI-based approach. This leads to significantly higher compression rates, which makes it possible to reduce the complexity of AI-based analysis systems.
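As a hedged illustration of the hybrid idea, the sketch below uses PCA as a stand-in for the paper's AI stage: a learned basis reduces each PD current waveform to a few coefficients, which a traditional coder then packs. The toy waveforms, basis size, and float16 quantization are all illustrative assumptions.

```python
import lzma
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 512)
# Toy "PD pulses": damped oscillations with varying amplitude and decay.
waves = np.stack([a * np.exp(-d * t) * np.sin(80 * t)
                  for a, d in rng.uniform(0.5, 5.0, size=(200, 2))])

mean = waves.mean(axis=0)
_, _, vt = np.linalg.svd(waves - mean, full_matrices=False)
basis = vt[:8]                              # learned basis (AI stand-in)
coeffs = ((waves - mean) @ basis.T).astype(np.float16)  # 512 samples -> 8
packed = lzma.compress(coeffs.tobytes())    # traditional coder on top
recon = coeffs.astype(np.float64) @ basis + mean        # decoder side
print(f"ratio: {waves.nbytes / len(packed):.0f}x, rel. error: "
      f"{np.linalg.norm(recon - waves) / np.linalg.norm(waves):.3f}")
```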
The smart grid integrates digital technologies for efficient power delivery and real-time monitoring, but managing its large data volumes remains challenging. This study investigates Compressed Sensing (CS) as a solut...
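The study's actual setup is not given in this preview; the minimal compressed-sensing sketch below assumes a k-sparse signal, Gaussian measurements, and orthogonal matching pursuit recovery, which is one standard CS pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, k = 256, 80, 5            # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)   # sensing matrix
y = A @ x                       # compressed measurements sent over the grid

def omp(A, y, k):
    # Orthogonal matching pursuit: greedily pick best-correlated columns.
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

print("recovery error:", np.linalg.norm(omp(A, y, k) - x))
```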
ISBN:
(Print) 9798350354966; 9798350354959
Due to the bandwidth and power constraints of implantable brain-computer interfaces, large amounts of neural data cannot easily be transferred over a wireless link. Wavelet-based compression before transmission is a popular technique for reducing the data rate, but it struggles to achieve a well-balanced trade-off between compression ratio and reconstruction error. In this article, we propose an iterative spike compression method based on the discrete wavelet transform and a heuristic algorithm. It automatically searches for the optimal combination of wavelet coefficients, trading longer processing time for a higher compression ratio and lower reconstruction error. Simulations validate that the proposed algorithm achieves a compression ratio of 9.25 on detected spike data with a reconstruction error of 3.63%. The proposed processor has been validated in the Cadence Virtuoso environment using X-FAB's 180 nm CMOS process. It occupies a chip area of 2.56 mm² and consumes 385 μW of power.
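The heuristic search itself is not detailed in the abstract; the simplified sketch below keeps only the K largest DWT coefficients and sweeps candidate budgets, which conveys the compression-ratio/error trade-off such a search navigates. The synthetic spike and all parameters are illustrative; requires PyWavelets.

```python
import numpy as np
import pywt

rng = np.random.default_rng(3)
t = np.arange(64)
# Synthetic extracellular spike: sharp negative peak, slower rebound, noise.
spike = (-np.exp(-((t - 20) ** 2) / 18.0)
         + 0.4 * np.exp(-((t - 30) ** 2) / 60.0)
         + 0.01 * rng.normal(size=t.size))

coeffs, slices = pywt.coeffs_to_array(pywt.wavedec(spike, "db4", level=4))
for keep in (4, 8, 16):          # candidate coefficient budgets
    threshold = np.sort(np.abs(coeffs))[-keep]
    pruned = np.where(np.abs(coeffs) >= threshold, coeffs, 0.0)
    recon = pywt.waverec(
        pywt.array_to_coeffs(pruned, slices, output_format="wavedec"), "db4")
    err = np.linalg.norm(recon[:t.size] - spike) / np.linalg.norm(spike)
    print(f"keep {keep:2d}/{coeffs.size} coeffs -> error {100 * err:.2f}%")
```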
ISBN:
(Print) 9798350318920; 9798350318937
In the blind single image super-resolution (SISR) task, existing works have been successful in restoring image-level unknown degradations. However, when a single video frame becomes the input, these works usually fail to address degradations caused by video compression, such as mosquito noise, ringing, blockiness, and staircase noise. In this work, we present, for the first time, a video compression-based degradation model to synthesize low-resolution image data for the blind SISR task. Our proposed image synthesizing method is widely applicable to existing image datasets, so that a single degraded image can contain distortions caused by lossy video compression algorithms. This overcomes the lack of feature diversity in video data and thus retains training efficiency. By introducing video coding artifacts into SISR degradation models, neural networks can super-resolve images with the ability to restore video compression degradations, and they achieve better results on restoring generic distortions caused by image compression as well. Our proposed approach achieves superior performance on SOTA no-reference Image Quality Assessment metrics and shows better visual quality on various datasets. In addition, we evaluate the SISR neural network trained with our degradation model on video super-resolution (VSR) datasets. Compared to architectures specifically designed for VSR, our method exhibits similar or better performance, evidencing that the presented strategy of infusing video-based degradation generalizes to more complicated compression artifacts even without temporal cues. The code is available at https://***/Kiteretsu77/VCISR-official.
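The full synthesis pipeline is not reproduced here; the sketch below shows just a video-codec round-trip that injects coding artifacts into a single training image, assuming an ffmpeg binary on PATH and H.264 with a randomized CRF as one plausible codec setting (real pipelines of this kind also mix in blur, noise, and resizing stages).

```python
import os
import random
import subprocess
import tempfile

def degrade_with_video_codec(png_in: str, png_out: str) -> None:
    crf = random.randint(28, 40)   # higher CRF -> stronger codec artifacts
    with tempfile.TemporaryDirectory() as d:
        clip = os.path.join(d, "clip.mp4")
        # Encode the frame as a one-frame H.264 clip, then decode it back.
        subprocess.run(["ffmpeg", "-y", "-i", png_in, "-c:v", "libx264",
                        "-crf", str(crf), "-pix_fmt", "yuv420p", clip],
                       check=True, capture_output=True)
        subprocess.run(["ffmpeg", "-y", "-i", clip, png_out],
                       check=True, capture_output=True)

# Hypothetical file names, for illustration only:
# degrade_with_video_codec("hr_crop.png", "lr_with_codec_artifacts.png")
```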
ISBN:
(Print) 9798350303582; 9798350303599
Navigating challenging conditions characterized by stringent bandwidth constraints and noisy channels, Distributed Sensor Networks (DSNs) demand robust feature compression techniques for accurate data fusion. The advent of Supervised Contrastive Learning with Mask-Sparsity (SCL-MS) presents a neural approach to feature compression, offering compact and semantically aligned continuous representations tailored for DSNs. However, the unconventional structure of SCL-MS poses a challenge for traditional neural scalar quantization. This paper introduces an innovative quantization method, Doubly Progressive Quantization, specifically crafted for SCL-MS. Experiments on distributed image classification tasks show that, with a marginal reduction in accuracy, the proposed method achieves a compression gain of 37 times compared with the unquantized baseline. In addition, the proposed method outperforms decision-level data fusion in terms of noise resilience and node scaling.
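Doubly Progressive Quantization itself is not specified in the abstract; the generic uniform scalar quantizer below only makes the compression-gain arithmetic concrete (fp32 features quantized to b bits give a 32/b reduction before any entropy coding). All parameters are illustrative.

```python
import numpy as np

def quantize(features: np.ndarray, bits: int):
    # Map each float into one of 2**bits uniformly spaced levels.
    lo, hi = float(features.min()), float(features.max())
    levels = 2 ** bits - 1
    q = np.round((features - lo) / (hi - lo) * levels).astype(np.uint16)
    return q, (lo, hi, levels)

def dequantize(q: np.ndarray, params) -> np.ndarray:
    lo, hi, levels = params
    return q.astype(np.float32) / levels * (hi - lo) + lo

feats = np.random.default_rng(4).normal(size=(32, 128)).astype(np.float32)
for bits in (8, 4, 2, 1):
    q, p = quantize(feats, bits)
    err = np.abs(dequantize(q, p) - feats).mean()
    print(f"{bits}-bit: {32 / bits:.0f}x smaller than fp32, MAE {err:.3f}")
```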