ISBN:
(Print) 9798350344868; 9798350344851
In this paper, we propose a source coding scheme that represents data from unknown distributions through frequency and support information. Existing encoding schemes often compress data by sacrificing computational efficiency or by assuming the data follows a known distribution. We take advantage of the structure that arises within the spatial representation and utilize it to encode run-lengths within this representation using Golomb coding. Through theoretical analysis, we show that our scheme yields an overall bit rate that nears entropy without a computationally complex encoding algorithm and verify these results through numerical experiments.
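The Golomb run-length step described above can be sketched as follows. This is a standard, illustrative Golomb encoder (unary quotient plus truncated-binary remainder), not the authors' implementation:

```python
import math

def golomb_encode(n, m):
    """Golomb-encode a non-negative integer n with parameter m.
    Output: unary-coded quotient, then truncated-binary remainder."""
    q, r = divmod(n, m)
    code = "1" * q + "0"            # unary quotient, terminated by a 0
    if m == 1:
        return code                 # remainder is always 0
    b = math.ceil(math.log2(m))     # remainder needs b-1 or b bits
    cutoff = (1 << b) - m           # number of short (b-1 bit) remainder codes
    if r < cutoff:
        code += format(r, "b").zfill(b - 1)
    else:
        code += format(r + cutoff, "b").zfill(b)
    return code

# Example: with m = 4 (a Rice code), 9 = 2*4 + 1 -> unary "110" + binary "01"
print(golomb_encode(9, 4))  # "11001"
```

For geometrically distributed run-lengths, choosing m near the mean run-length makes this code close to optimal, which is why it pairs naturally with run-length representations.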
ISBN:
(Print) 9798350347951
In Versatile Video Coding (VVC), the Cross-Component Linear Model (CCLM) predicts chroma samples by assuming a linear relationship between luma and chroma components. When performing CCLM for video in the YUV 4:2:0 chroma format, collocated luma samples are first downsampled by a low-pass filter to match the luma resolution with chroma, and one linear model of luma-chroma sample pairs is applied to the reconstructed luma samples to generate the predicted chroma samples. However, the low-pass downsampling procedure ignores relative spatial variations among nearby luma samples, such as edge and gradient information. To solve this issue, a new coding technique, namely the gradient linear model (GLM), is proposed for further compression-efficiency exploration beyond VVC. Instead of using a low-pass filter as in CCLM, the GLM utilizes high-pass gradient filters to generate the downsampled luma values. In this paper, two GLM schemes are provided with different trade-offs between coding gain and complexity: 1) a 2-parameter scheme that shares the CCLM module framework but replaces the downsampling filter with high-pass gradient filters; 2) a 3-parameter scheme that further combines the luma gradients with the low-pass downsampled luma values. Based on the enhanced compression model (ECM-5.0) software from the Joint Video Experts Team (JVET), simulation results show that the 2-parameter GLM achieves average Bjøntegaard delta-rate (BD-rate) savings of {1.01%, 1.66%, 1.81%} and {0.69%, 0.95%, 1.12%} for the {Y, U, V} components under the All Intra and Random Access configurations, respectively, and the 3-parameter GLM provides {1.28%, 3.23%, 3.28%} and {0.92%, 2.19%, 2.26%} BD-rate savings for the {Y, U, V} components under the All Intra and Random Access configurations, respectively. Both proposed GLM schemes have been adopted into the ECM software platform.
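A rough sketch of the 2-parameter idea: downsample each 2x2 luma block with a high-pass (gradient) response instead of a low-pass average, then fit one linear luma-to-chroma model. The filter taps and the least-squares fit below are illustrative assumptions; the actual GLM taps and the min/max model derivation in ECM differ:

```python
import numpy as np

def horizontal_gradient_downsample(luma):
    """Illustrative high-pass downsampling for 4:2:0: for each 2x2 luma
    block, output a horizontal difference instead of a low-pass average,
    preserving local edge/gradient information."""
    h, w = luma.shape
    blocks = luma.reshape(h // 2, 2, w // 2, 2)  # (rows, 2, cols, 2)
    # left column minus right column within each 2x2 block, summed vertically
    return (blocks[:, :, :, 0] - blocks[:, :, :, 1]).sum(axis=1).astype(float)

def fit_linear_model(x, y):
    """Least-squares fit y ~ a*x + b over reference sample pairs
    (shown for clarity; VVC-family codecs derive a, b from extreme pairs)."""
    a, b = np.polyfit(x.ravel(), y.ravel(), 1)
    return a, b
```

With the model (a, b) in hand, the predicted chroma block is simply `a * grad + b` evaluated on the gradient-filtered luma of the current block.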
Using the orthogonal transformation method, this paper presents a performance analysis of FFT, DHT, DCT, and DST for ECG data compression. When the heart muscle contracts or expands as a response to the command sign...
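A minimal sketch of transform-based compression of the kind such comparisons evaluate, using an explicit orthonormal DCT-II matrix (real codecs use fast transforms and quantize coefficients rather than hard-thresholding them, so this is only illustrative):

```python
import numpy as np

def dct_compress(signal, keep):
    """Transform the signal with an orthonormal DCT-II, zero all but the
    `keep` largest-magnitude coefficients, and invert. Energy compaction
    means few coefficients suffice for smooth signals such as ECG."""
    N = len(signal)
    n = np.arange(N)
    # Orthonormal DCT-II basis: rows = time index, columns = frequency index
    C = np.cos(np.pi * (n[:, None] + 0.5) * n[None, :] / N) * np.sqrt(2.0 / N)
    C[:, 0] /= np.sqrt(2.0)        # DC column scaling for orthonormality
    coeffs = signal @ C
    drop = np.argsort(np.abs(coeffs))[:-keep]   # all but the `keep` largest
    coeffs[drop] = 0.0
    return coeffs @ C.T, coeffs     # reconstruction, sparse coefficients
```

Because C is orthogonal, `coeffs @ C.T` exactly inverts the transform; the compression comes entirely from storing only the retained coefficients.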
Light field hyperspectral imaging captures the spatial, spectral and angular information of a scene, resulting in a massive amount of high-dimensional data. The storage and transmission of this data require high bandw...
In numerous application areas such as medical imaging and remote sensing, image compression methods play an important role in managing data storage and transmission costs. In this paper, we present an empirical study ...
ISBN:
(Print) 9798350307184
Neural image compression methods have seen increasingly strong performance in recent years. However, they suffer from orders of magnitude higher computational complexity than traditional codecs, which hinders their real-world deployment. This paper takes a step toward closing this gap in decoding complexity by adopting shallow or even linear decoding transforms. To compensate for the resulting drop in compression performance, we exploit the often asymmetrical computation budget between encoding and decoding by adopting more powerful encoder networks and iterative encoding. We theoretically formalize the intuition behind this approach, and our experimental results establish a new frontier in the trade-off between rate-distortion and decoding complexity for neural image compression. Specifically, we achieve rate-distortion performance competitive with the established mean-scale hyperprior architecture of Minnen et al. (2018) at less than 50K decoding FLOPs/pixel, reducing the baseline's overall decoding complexity by 80%, or over 90% for the synthesis transform alone. Our code can be found at https://***/mandt-lab/shallow-ntc.
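The idea of a linear synthesis transform can be sketched as a single strided transposed convolution, i.e. one matrix multiply per latent position. The channel counts and patch size below are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

def linear_synthesis(latents, weight):
    """One-layer linear decoder: each C-dim latent vector maps to an s x s
    RGB patch via a single matrix multiply (a transposed convolution whose
    stride equals its kernel size), so decoding costs only about 3*C
    multiply-adds per output pixel."""
    C, H, W = latents.shape
    out_dim, _ = weight.shape              # weight shape: (3 * s * s, C)
    s = int((out_dim // 3) ** 0.5)         # side length of each output patch
    # Project every latent column through the shared weight matrix
    patches = np.einsum("oc,chw->ohw", weight, latents)
    # Rearrange per-position patches into one (3, H*s, W*s) image
    img = patches.reshape(3, s, s, H, W).transpose(0, 3, 1, 4, 2)
    return img.reshape(3, H * s, W * s)
```

The FLOP count is independent of any encoder-side cost, which is what lets a heavy, iterative encoder pay for the shallow decoder's lost expressiveness.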
ISBN:
(Print) 9781665478939
Re-pair is a grammar-based compression algorithm. In the poster session of the Data Compression Conference 2021, we proposed the basic concepts of Parallel Re-pair, a parallel variant of Re-pair that achieves shorter compression time on multi-core CPUs. However, our experimental results showed that it runs only 1.6 to 2.4 times faster with 32 processors. In this poster session, we propose a practical implementation of Parallel Re-pair with Intel Threading Building Blocks.
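For reference, sequential Re-pair itself can be sketched as below; the parallel variant distributes this work across threads, whereas this quadratic toy version only illustrates the grammar construction:

```python
from collections import Counter

def repair(seq):
    """Minimal sequential Re-pair sketch: repeatedly replace the most
    frequent adjacent pair with a fresh nonterminal until no pair repeats.
    Returns the compressed sequence and the grammar rules. (Production
    implementations use priority queues and careful pair-count updates to
    run in linear time; overlapping-pair counting is also simplified here.)"""
    seq = list(seq)
    rules = {}
    next_id = 0
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        pair, freq = pairs.most_common(1)[0]
        if freq < 2:
            break                       # no pair occurs twice: done
        nt = f"R{next_id}"
        next_id += 1
        rules[nt] = pair                # new rule: nt -> pair
        out, i = [], 0
        while i < len(seq):             # left-to-right replacement pass
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(nt)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return seq, rules
```

For example, `repair("abababab")` first rewrites every `ab` as `R0`, then every `R0 R0` as `R1`, leaving the two-symbol sequence `[R1, R1]` plus two grammar rules.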
At present, when user-side distributed PV participates in the multi-level aggregation regulation of the power grid, it involves the collection and interaction of massive heterogeneous data, and the data redundanc...
Recent advancements in underwater multimedia compression, particularly deep learning techniques, are explored to address the increasing data demands of the Internet of Underwater Things (IoUT). Efficient compression i...
LiDAR is a key sensor for 3D mapping, and as the measurement accuracy of the sensor continues to improve, the volume of acquired 3D point cloud data and the memory it requires grow ever larger. Ho...