Recent advancements in information technology and the development of smart infrastructure within smart cities have led to an unprecedented increase in data generation. Such large volumes of data can easily overburden ...
ISBN:
(Print) 9798350349405; 9798350349399
Although ultra-high-resolution point clouds can now be acquired with ease, their sheer volume makes them challenging to store and transmit, and difficult to use on lightweight terminals. To address this challenge, we propose an Adaptive Downsampling and Spatial Upconversion framework for point cloud compression (ADSU). In the proposed adaptive downsampling, we introduce two key components, a feature-aware augmented graph convolution (FAGC) and an adaptive-sampling-based global graph aggregation module (ASGGA), to capture correlations between local and global features. For upsampling, we propose a high-frequency feature generation module (HFFG) to generate detailed information, which plays a crucial role in precise reconstruction. Experimental results demonstrate that combining our proposed ADSU with popular point cloud compression methods significantly improves compression performance.
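The paper's learned FAGC/ASGGA modules are not reproduced here, but a classical baseline, farthest point sampling, sketches the downsampling operation that an adaptive, learned sampler is meant to improve on (a hypothetical sketch, not the paper's method):

```python
import numpy as np

def farthest_point_sample(points, n_samples):
    """Classical baseline downsampler: iteratively pick the point
    farthest from all points already chosen, giving good coverage.
    points: (N, 3) or (N, 2) array; returns indices of chosen points."""
    chosen = [0]                              # seed with the first point
    dists = np.full(len(points), np.inf)      # distance to nearest chosen point
    for _ in range(n_samples - 1):
        d = np.linalg.norm(points - points[chosen[-1]], axis=1)
        dists = np.minimum(dists, d)          # update nearest-chosen distance
        chosen.append(int(np.argmax(dists)))  # farthest remaining point
    return np.array(chosen)

pts = np.array([[0.0, 0.0], [1.0, 0.0], [10.0, 0.0]])
idx = farthest_point_sample(pts, 2)  # picks the two mutually farthest points
```

Unlike this geometry-only heuristic, the learned approach described above conditions sampling on features, which is what allows the decoder's upsampling stage to recover high-frequency detail.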
A rollup is a scaling solution built on top of an existing blockchain. Rollups separate execution from consensus, but are required to post the data used for state updates to the underlying blockchain. This data is req...
ISBN:
(Print) 9798350347951
Recently, cross-modal compression (CMC) has been proposed to compress highly redundant visual data into a compact, common, human-comprehensible domain (such as text) in order to preserve semantic fidelity for semantics-related applications. However, CMC achieves only a certain level of semantic fidelity at a constant rate, and the model optimizes the probability of the ground-truth text rather than semantic fidelity directly. To tackle these problems, we propose a novel scheme named rate-distortion optimized CMC (RDO-CMC). Specifically, we model text generation as a Markov decision process and propose a rate-distortion reward, which is used in reinforcement learning to optimize text generation. In the rate-distortion reward, the distortion measures both the semantic fidelity and the naturalness of the encoded text. The rate of the text is estimated as the sum of the information content of all its tokens, since each token's information content is a lower bound on its coding bits. Experimentally, RDO-CMC effectively controls the rate in the CMC framework and achieves competitive performance on the MSCOCO dataset.
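The rate estimate described above follows directly from information theory: a token with model probability p carries -log2(p) bits of self-information, and an ideal entropy coder cannot do better. A minimal sketch (the per-token probabilities would come from the language model; the values below are illustrative):

```python
import math

def estimate_rate_bits(token_probs):
    """Estimate the coding rate of a text as the sum of each token's
    self-information, -log2 p, a lower bound on its coding bits."""
    return sum(-math.log2(p) for p in token_probs)

# Hypothetical model probabilities for three generated tokens:
probs = [0.5, 0.25, 0.125]
rate = estimate_rate_bits(probs)  # 1 + 2 + 3 = 6 bits
```

In RDO-CMC this quantity plays the role of the "rate" term in the reinforcement-learning reward, so the policy is pushed toward text that is both semantically faithful and cheap to encode.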
ISBN:
(Print) 9798350381993; 9798350382006
Federated learning obtains a global model through the cooperative training of participating devices while protecting data privacy. However, the huge communication cost incurred when each client sends a complete model update is a major obstacle to its wide application. Model sparsification and quantization are practical solutions for reducing single-round uplink communication. However, as the compression ratio increases, (1) current fine-grained sparsification methods that upload significant gradients struggle to select important parameters effectively using thresholds; and (2) quantizing the parameters of layers of differing importance with a uniform bit width leads to greater information loss. In this paper, we propose a joint compression framework based on functional structure sparsity and hybrid quantization sensing that compresses the communication cost by more than two orders of magnitude. First, motivated by a visual analysis of parameter update patterns during training, we take filters as the sparsity granularity and search for the filters with strong representational ability among the filters of each convolutional layer. Second, we propose an adaptive layer-sensing quantization method that, according to the parameter distribution of each layer, assigns different bit widths to different layers of the structurally sparsified model. Finally, we design a model aggregation and update scheme based on this joint compression framework, which reduces compression error through model reconstruction and parameter reuse. Experiments show that the proposed framework compresses single-round communication by more than two orders of magnitude across different tasks while preserving convergence speed and final model performance.
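The two ingredients above can be sketched with simple stand-ins: filter-level sparsification via L1 norms (a common importance proxy, not necessarily the paper's selection rule) and uniform quantization with a per-layer bit width:

```python
import numpy as np

def sparsify_filters(conv_update, keep_ratio=0.25):
    """Filter-granularity sparsification: keep only the conv filters with
    the largest L1 norms, zeroing the rest. conv_update: (out_ch, in_ch, k, k).
    L1 norm as the importance score is an illustrative choice."""
    norms = np.abs(conv_update).reshape(conv_update.shape[0], -1).sum(axis=1)
    k = max(1, int(len(norms) * keep_ratio))
    mask = np.zeros(conv_update.shape[0], dtype=bool)
    mask[np.argsort(norms)[-k:]] = True       # top-k filters by L1 norm
    return np.where(mask[:, None, None, None], conv_update, 0.0), mask

def quantize_layer(update, bits):
    """Uniform symmetric quantization with a per-layer bit width, so more
    important layers can be assigned more bits."""
    scale = float(np.abs(update).max())
    if scale == 0.0:
        return update
    levels = 2 ** (bits - 1) - 1              # e.g. 7 levels each side for 4 bits
    return np.round(update / scale * levels) * scale / levels
```

A client would sparsify each convolutional layer's update, then quantize each surviving layer at its assigned bit width before upload; the server reverses the process during aggregation.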
Due to the limitations of spacecraft data processing capabilities and the data transmission rate to the ground, the image compression system is essential for imaging payloads on satellites. JPEG2000, with its advantag...
Data compression is used to reduce the size of data stored in digital documents. We have proposed a block adaptive model (BAM) compression strategy to reduce image size while keeping the decompression...
ISBN:
(Print) 9798350386066; 9798350386059
With the promise of federated learning (FL) to allow for geographically distributed and highly personalized services, the efficient exchange of model updates between clients and servers becomes crucial. FL, though decentralized, often faces communication bottlenecks, especially in resource-constrained scenarios. Existing data-compression techniques like gradient sparsification, quantization, and pruning offer some solutions, but may compromise model performance or necessitate expensive retraining. In this paper, we introduce FedSZ, a specialized lossy-compression algorithm designed to minimize the size of client model updates in FL. FedSZ incorporates a comprehensive compression pipeline featuring data partitioning, lossy and lossless compression of model parameters and metadata, and serialization. We evaluate FedSZ using a suite of error-bounded lossy compressors, ultimately finding SZ2 to be the most effective across various model architectures and datasets, including AlexNet, MobileNetV2, ResNet50, CIFAR-10, Caltech101, and Fashion-MNIST. Our study reveals that a relative error bound of 10^-2 achieves an optimal trade-off, compressing model states by 5.55-12.61x while maintaining inference accuracy within <0.5% of uncompressed results. Additionally, the runtime overhead of FedSZ is <4.7% of the wall-clock communication-round time, a worthwhile trade-off for reducing network transfer times by an order of magnitude at network bandwidths <350 Mbps. Intriguingly, we also find that the error introduced by FedSZ could potentially serve as a source of differentially private noise, opening up new avenues for privacy-preserving FL.
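The core idea of an error-bounded lossy compressor, that every reconstructed value is guaranteed to lie within a stated error of the original, can be sketched with simple uniform quantization under a relative error bound (a toy illustration, not SZ2's predictor-based algorithm):

```python
import numpy as np

def lossy_compress(params, rel_err=1e-2):
    """Quantize values so the pointwise error never exceeds rel_err times
    the value range, mimicking a relative-error-bounded compressor.
    The integer codes are then amenable to lossless entropy coding."""
    lo, hi = float(params.min()), float(params.max())
    step = 2 * rel_err * (hi - lo) if hi > lo else 1.0
    codes = np.round((params - lo) / step).astype(np.int32)
    return codes, lo, step

def decompress(codes, lo, step):
    """Reconstruct values; rounding guarantees error <= step / 2."""
    return codes.astype(np.float64) * step + lo

x = np.linspace(-1.0, 1.0, 101)
codes, lo, step = lossy_compress(x, rel_err=1e-2)
xr = decompress(codes, lo, step)   # max error <= 1e-2 * (max - min)
```

Because rounding to a grid of spacing `step` bounds the error by `step / 2`, choosing `step = 2 * rel_err * range` enforces exactly the relative bound; real compressors like SZ2 add prediction and entropy coding on top to shrink the codes further.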
Effective medical image management requires a balance between compression and security without compromising quality. This study presents a two-step strategy: first, using lossless JPEG compression to reduce storage re...
ISBN:
(Print) 9798350323658
As LiDAR sensors have become ubiquitous, the need for an efficient LiDAR data compression algorithm has increased. Modern LiDARs produce gigabytes of scan data per hour (Fig. 1) and are often used in applications with limited compute, bandwidth, and storage resources. We present a fast, lossless compression algorithm for LiDAR range and attribute scan sequences, including multiple-return range, signal, reflectivity, and ambient infrared. Our algorithm, dubbed "Jiffy", achieves substantial compression by exploiting spatiotemporal redundancy and sparsity. Speed is achieved by maximizing the use of single-instruction-multiple-data (SIMD) instructions. In autonomous driving, infrastructure monitoring, drone inspection, and handheld mapping benchmarks, the Jiffy algorithm consistently out-compresses competing lossless codecs while operating at speeds in excess of 65M points/sec on a single core. In a typical autonomous vehicle use case, single-threaded Jiffy achieves 6x compression of centimeter-precision range scans at 500+ scans per second. To ensure reproducibility and enable adoption, the software is freely available as an open-source library.
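The temporal-redundancy part of such a codec can be illustrated with delta encoding between consecutive range scans: since most of the scene is static, residuals are small and mostly zero, which downstream entropy coding exploits (a minimal sketch of the general principle, not Jiffy's SIMD implementation):

```python
import numpy as np

def delta_encode(prev_scan, curr_scan):
    """Exploit temporal redundancy between consecutive range scans by
    storing only the difference; a mostly static scene yields sparse,
    near-zero residuals that compress well losslessly."""
    return curr_scan.astype(np.int32) - prev_scan.astype(np.int32)

def delta_decode(prev_scan, residual):
    """Exact (lossless) reconstruction of the current scan."""
    return (prev_scan.astype(np.int32) + residual).astype(prev_scan.dtype)

prev = np.array([100, 200, 300], dtype=np.uint16)  # range values, e.g. in cm
curr = np.array([101, 200, 295], dtype=np.uint16)
res = delta_encode(prev, curr)                     # [1, 0, -5]
```

The widening to int32 before subtraction avoids unsigned wrap-around, and the round trip is bit-exact, matching the lossless guarantee the abstract describes.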