ISBN (Print): 9781665478939
A universal scheme is proposed for the lossless compression of two-dimensional tables and matrices. Instead of standard row- or column-based compression, we propose to sort each column first and record both the sorted table and the corresponding permutation table of the sorting permutations. These two tables are then compressed separately. In this new scheme, both intra- and inter-column correlations can be captured efficiently, giving rise to an improved compression ratio, in particular when column-wise and row-wise dependencies co-occur. This scheme reduces the compression of an arbitrary two-dimensional table to that of a 'permutation table' together with a 'sorted table', where the former depends only on the table dimensions and the latter can be compressed effectively column by column using predictive methods. Based on this scheme, a new algorithm, SortComp (sort-and-compress), is proposed. For correlated columns, we give an estimate of the asymptotic bit rate of the algorithm and compare it to column-oriented compression schemes. Numerical experiments on real-life CSV datasets validate the advantages of SortComp compared to existing row- and column-oriented compression algorithms.
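A minimal sketch of the sort-and-compress idea described in this abstract, assuming the table is given as a list of equal-length columns. The helper names (sort_and_split, delta_encode) and the use of zlib as a back-end coder are illustrative assumptions, not the paper's actual implementation of the permutation-table coder.

```python
# Minimal sketch: sort each column, keep the sorting permutations separately,
# then compress the delta-coded sorted columns and the permutation table.
import zlib

def sort_and_split(columns):
    """Sort each column; return (sorted_table, permutation_table)."""
    sorted_table, perm_table = [], []
    for col in columns:
        order = sorted(range(len(col)), key=lambda i: col[i])  # sorting permutation
        sorted_table.append([col[i] for i in order])
        perm_table.append(order)
    return sorted_table, perm_table

def delta_encode(sorted_col):
    """Predictive (first-difference) coding: sorted columns yield non-negative deltas."""
    return [sorted_col[0]] + [b - a for a, b in zip(sorted_col, sorted_col[1:])]

def compress_table(columns):
    sorted_table, perm_table = sort_and_split(columns)
    # The sorted columns compress well after delta coding; the permutation table
    # depends only on the table dimensions and is compressed on its own.
    payload = repr([delta_encode(c) for c in sorted_table]).encode()
    perms = repr(perm_table).encode()
    return zlib.compress(payload), zlib.compress(perms)

# Example with two correlated columns.
data = [[5, 1, 3, 2, 4], [50, 10, 30, 20, 40]]
blob_sorted, blob_perm = compress_table(data)
```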
Progressive compression allows images to start loading as low-resolution versions, becoming clearer as more data is received. This improves the user experience when, for example, network connections are slow. Today, most...
The unique methodologies introduced in this work greatly improve both the computational speed of the Advanced Encryption Standard (AES) and its flexibility for systems with limited resources. We speed up encryption...
ISBN (Print): 9798350383225
Today's scientific high-performance computing (HPC) applications often run in large-scale environments, producing extremely large volumes of data that need to be compressed effectively for efficient storage or data transfer. Error-bounded lossy compression is arguably the most efficient way to this end, because it can achieve very high compression ratios while strictly controlling the data distortion according to user-specified error bounds. However, error-bounded lossy compressors may suffer from serious artifacts at relatively large error bounds or high compression ratios, which is highly undesirable to users. In this paper, we comprehensively characterize the artifacts of multiple state-of-the-art error-bounded lossy compressors (including SZ-1.4, SZ-2.1, SZ-3.0, FPZIP, ZFP, and MGARD) and provide an in-depth analysis of their root causes. We classify the artifacts into three types and develop an efficient detection algorithm for each type. We finally evaluate our artifact detection methods on four scientific datasets, demonstrating that the proposed methods can detect artifact issues with linear time complexity.
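A hypothetical sketch of one linear-time check in the spirit of this abstract: scanning a decompressed 2-D field for unusually sharp jumps along fixed block boundaries, a common symptom of block-based predictors at large error bounds. The block size, threshold factor, and function name are assumptions; the paper's actual detectors are not reproduced here.

```python
# Flag block-boundary columns whose mean jump far exceeds the global baseline.
import numpy as np

def detect_block_artifacts(decompressed, block=64, factor=5.0):
    """Return column indices at block boundaries with suspiciously large jumps."""
    jumps = np.abs(np.diff(decompressed, axis=1))        # horizontal first differences
    baseline = jumps.mean() + 1e-12                      # global average jump size
    boundary_cols = np.arange(block - 1, jumps.shape[1], block)
    return [int(c) for c in boundary_cols
            if jumps[:, c].mean() > factor * baseline]   # empty list -> no blocking found

# Usage: run on the decompressed field produced by an error-bounded compressor.
field = np.random.rand(256, 256).astype(np.float32)
print(detect_block_artifacts(field))   # uniform noise: boundary jumps match baseline, likely []
```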
ISBN (Print): 9798350334722
The distributed filtering problem based on data compression is studied for linear discrete time-varying systems over sensor networks. At each sensor node, a sequential inverse covariance intersection (SICI) fusion algorithm is used to compress the estimates received from neighboring nodes. A distributed recursive filter is presented based on the compressed data. The filtering gain (FG) and consensus gain (CG) are obtained by locally minimizing an upper bound of the filtering error covariance matrix (FECM). This avoids the calculation of cross-covariance matrices (CCMs) and reduces the computational burden. Simulation results verify the effectiveness of the algorithm.
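A minimal sketch of the covariance-intersection step that a sequential scheme like SICI applies pairwise to the neighbor estimates. The scalar-weight grid search and function names are illustrative assumptions; the paper's bound-minimizing gain design and the inverse-CI variant are not reproduced here.

```python
# Standard covariance intersection: fuse two estimates without their cross-covariance.
import numpy as np

def covariance_intersection(x1, P1, x2, P2, grid=50):
    """Fuse (x1, P1) and (x2, P2); pick the weight minimizing trace of the fused covariance."""
    best = None
    for w in np.linspace(0.0, 1.0, grid):
        P_inv = w * np.linalg.inv(P1) + (1.0 - w) * np.linalg.inv(P2)
        P = np.linalg.inv(P_inv)
        if best is None or np.trace(P) < np.trace(best[1]):
            x = P @ (w * np.linalg.inv(P1) @ x1 + (1.0 - w) * np.linalg.inv(P2) @ x2)
            best = (x, P)
    return best  # (fused state, consistent covariance upper bound)

def sequential_ci(estimates):
    """Compress a list of (x, P) neighbor estimates one at a time, as in a sequential scheme."""
    x, P = estimates[0]
    for xi, Pi in estimates[1:]:
        x, P = covariance_intersection(x, P, xi, Pi)
    return x, P
```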
Image sensors used in real camera systems are equipped with colour filter arrays which sample the light rays in different spectral bands. Each colour channel can thus be obtained separately by considering the corresp...
ISBN (Print): 9781665468916
Data-free knowledge distillation (DFKD) aims to obtain a lightweight student model without the original training data. Existing works generally synthesize data from the pre-trained teacher model to replace the original training data for student learning. To train the student model more effectively, the synthetic data should be customized to the current student's learning ability. However, this is ignored in existing DFKD methods and thus negatively affects student training. To address this issue, we propose Customizing Synthetic Data for Data-Free Student Learning (CSD), which achieves adaptive data synthesis by using a self-supervised augmented auxiliary task to estimate the student's learning ability. That is, data synthesis is dynamically adjusted to enlarge the cross-entropy between the labels and the predictions of the self-supervised augmented task, thus generating hard samples for the student model. Experiments on various datasets and teacher-student models show the effectiveness of the proposed method. Code is available at: https://***/luoshiya/CSD
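A hypothetical PyTorch-style sketch of the customization signal described above: the generator is updated to increase the student's cross-entropy on a self-supervised rotation-prediction task, so the synthesized samples stay hard for the current student. The network handles (generator, student.features, rot_head) and the choice of rotation as the augmented task are assumptions based only on the abstract; the usual DFKD losses against the teacher are omitted.

```python
# Generator update that maximizes the student's self-supervised cross-entropy.
import torch
import torch.nn.functional as F

def rotate_batch(x):
    """Return the four rotated copies of x and their rotation labels (0-3)."""
    rots = [torch.rot90(x, k, dims=(2, 3)) for k in range(4)]
    labels = torch.arange(4, device=x.device).repeat_interleave(x.size(0))
    return torch.cat(rots, dim=0), labels

def generator_step(generator, student, rot_head, z, opt_g):
    x = generator(z)                               # synthesize a batch (generator is a placeholder)
    x_rot, rot_labels = rotate_batch(x)
    logits = rot_head(student.features(x_rot))     # student's prediction on the auxiliary task (assumed API)
    ss_ce = F.cross_entropy(logits, rot_labels)
    loss_g = -ss_ce                                # maximize the cross-entropy -> harder samples
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    return x.detach()
```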
ISBN (Print): 9781665478939
We propose a structured pruning method to obtain a lightweight decoder for learned image compression, so as to accommodate various terminals. The structured pruning method identifies the effectiveness of each decoder channel via gradient ascent and gradient descent while keeping the encoder and entropy model fixed. To the best of our knowledge, this paper is the first attempt to design a structured pruning method for a universal pretrained learned image compression model. Experimental results demonstrate that the proposed method reduces parameters by about 40% and inference time by 25% at the cost of 0.23 dB BD-PSNR and a 4.33% BD-rate change.
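A hedged sketch of scoring decoder channels for structured pruning, assuming a frozen encoder and entropy model. The paper alternates gradient ascent and descent to identify channel effectiveness; the snippet below only shows a common first-order (Taylor-style) per-channel proxy as a stand-in, and `decoder` plus the rate-distortion loss are placeholders, not the paper's implementation.

```python
# Score each output channel of the decoder's conv layers, then keep the top fraction.
import torch
import torch.nn as nn

def channel_importance(decoder, loss):
    """Accumulate |w * dL/dw| per output channel of every Conv2d in the decoder."""
    loss.backward()
    scores = {}
    for name, m in decoder.named_modules():
        if isinstance(m, nn.Conv2d) and m.weight.grad is not None:
            # Conv2d weight shape is (out, in, kH, kW); sum over the last three dims
            scores[name] = (m.weight * m.weight.grad).abs().sum(dim=(1, 2, 3))
    return scores

def prune_mask(scores, keep_ratio=0.6):
    """Keep the top keep_ratio channels per layer (roughly 40% of channels removed)."""
    return {name: s >= torch.quantile(s.float(), 1.0 - keep_ratio)
            for name, s in scores.items()}
```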
The crux of this study is to support browser-based visualizations of spatiotemporally evolving phenomena. Such phenomena arise in myriad domains spanning terrestrial, oceanic, and atmospheric processes. The data are v...
The possibility of using wavelet transformations for bidirectional resizing (scaling) of TV images is considered in this article. The approach involves a preliminary twofold reduction of the image size before compression and restoration of the...