Image compression reduces the amount of data needed to represent an image by applying a particular algorithm. It addresses the problem of transmitting and storing large volumes of digital image data. ...
Over the past two decades, hardware and software have advanced rapidly, making global data distribution and storage far simpler. However, the available bandwidth cannot handle the vol...
Authors:
He, Runzhu; Qu, Youli (Beijing Jiaotong University)
Key Laboratory of Big Data & Artificial Intelligence in Transportation (Ministry of Education), School of Computer and Information Technology, Beijing Jiaotong University, Beijing, China
In large-scale search engines, the core data structure is the inverted index, essentially a collection of integer sequences within inverted lists. Precise partitioning of these sequences enables efficient query proces...
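The abstract does not specify the partitioning scheme, but the standard starting point for compressing the integer sequences in inverted lists is delta-gap encoding followed by variable-byte (varint) coding. A minimal sketch, not taken from the paper:

```python
def varint_encode(doc_ids):
    """Delta-encode a sorted posting list, then varint-encode each gap
    (7 payload bits per byte, high bit = continuation)."""
    out, prev = bytearray(), 0
    for n in doc_ids:
        gap = n - prev
        prev = n
        while True:
            byte = gap & 0x7F
            gap >>= 7
            if gap:
                out.append(byte | 0x80)  # more bytes follow
            else:
                out.append(byte)
                break
    return bytes(out)

def varint_decode(data):
    """Invert varint_encode, recovering the original doc IDs."""
    ids, cur, shift, prev = [], 0, 0, 0
    for byte in data:
        cur |= (byte & 0x7F) << shift
        if byte & 0x80:
            shift += 7
        else:
            prev += cur
            ids.append(prev)
            cur, shift = 0, 0
    return ids
```

Because doc IDs are sorted, the gaps are small and most fit in a single byte, e.g. `[3, 7, 21, 150]` encodes to 5 bytes instead of four fixed-width integers.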
ISBN:
(Print) 9798350351712; 9798350351729
The need to trust data has become a key requirement in modern distributed systems. To facilitate measurable trust and confidence in data and applications spanning heterogeneous systems, the emerging concept of a Data Confidence Fabric (DCF) offers a compelling solution. Data producers and processors provide metadata, known as annotations, recording trust insertion along the data delivery chain. Thus, it is possible to assess the trustworthiness of data before processing it. While a DCF, such as the Alvarium framework, enables this management of trust, there is a cost in terms of the overheads associated with the security annotations themselves. To improve the efficiency of a DCF, we therefore propose a set of techniques for making annotations more compact and for reducing the number of DCF transactions. Our work shows that transaction efficiency gains of up to 93% can be achieved in our considered use cases.
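The abstract does not detail the compaction techniques, but one generic way to cut per-datum DCF transactions is to batch many annotations under a single digest, trading N on-fabric transactions for one. A minimal sketch; the field names are assumptions for illustration, not Alvarium's actual API:

```python
import hashlib
import json

def compact_annotations(annotations):
    """Replace N per-datum trust annotations with one batch record
    carrying a single SHA-256 digest over the canonicalized batch.
    Illustrative only: 'kind'/'count'/'digest' are hypothetical fields."""
    payload = json.dumps(annotations, sort_keys=True).encode()
    return {
        "kind": "batch-annotation",
        "count": len(annotations),
        "digest": hashlib.sha256(payload).hexdigest(),
    }
```

Any verifier holding the original annotations can recompute the digest, so trust evidence survives while the number of fabric writes drops from N to 1.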
ISBN:
(Print) 9798350349405; 9798350349399
A point cloud (PC) is a popular 3D data representation that poses challenges due to its size, dimensionality, and unstructured nature. This paper introduces the Residual Neural Radiance Field for Point Cloud Attribute Coding (ResNeRF-PCAC), a novel approach for point cloud attribute compression. ResNeRF-PCAC combines sparse convolutions with neural radiance fields to create a highly efficient attribute coding solution. It initially downscales the point cloud to generate a coarse thumbnail point cloud and encodes it using the G-PCC attribute encoder. The thumbnail PC is upsampled using a super-resolution network to generate a recolored PC. Color attribute residuals are then computed between the original and the super-resolved recolored PC. A ResNeRF network is employed to predict these residuals. The trained ResNeRF weights are compressed into a bitstream. The thumbnail bitstream and the compressed model weights are then transmitted to the decoder. The sparse convolution-based super-resolution network weights are shared and common across all content and need not be signaled. Experiments on the MPEG-8i dataset demonstrate superior performance in terms of reconstruction quality and compression ratio compared to G-PCC-RAHT and G-PCC-Predlift for both v14 and v21.
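The residual step in the pipeline above is straightforward to sketch. Assuming per-point RGB attributes stored as 8-bit arrays (the array layout is an assumption, not the paper's data format), the encoder computes and the decoder reverses:

```python
import numpy as np

def color_residuals(original_rgb, superres_rgb):
    """Per-point color residuals between the original point cloud and
    the super-resolved recolored one -- the signal the ResNeRF-style
    network is trained to predict. Widened to int16 so negative
    residuals are representable."""
    return original_rgb.astype(np.int16) - superres_rgb.astype(np.int16)

def reconstruct(superres_rgb, predicted_residuals):
    """Decoder side: add predicted residuals back onto the
    super-resolved colors and clip to the valid 8-bit range."""
    out = superres_rgb.astype(np.int16) + predicted_residuals
    return np.clip(out, 0, 255).astype(np.uint8)
```

If the network predicted the residuals exactly, reconstruction would be lossless relative to the original colors; in practice the prediction error is what remains after decoding.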
Lossless data compression is necessary to reduce transmission costs while maintaining data integrity. This paper describes Kompressor, a Huffman and Lempel-Ziv (LZ) compression project implemented in Java. All majo...
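As a sketch of the Huffman half of such a project (shown here in Python rather than the paper's Java), a code table can be built by repeatedly merging the two lowest-frequency subtrees:

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a Huffman code table: symbols seen more often
    receive shorter bit strings."""
    freq = Counter(text)
    if len(freq) == 1:  # degenerate single-symbol input
        return {next(iter(freq)): "0"}
    # Heap entries: (frequency, tiebreak, partial code table).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        # Prefix '0' onto the left subtree's codes, '1' onto the right's.
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]
```

For `"aaabbc"` the most frequent symbol `a` gets a 1-bit code while `b` and `c` get 2-bit codes, and the resulting codes are prefix-free by construction.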
Phasor Measurement Units (PMUs) generate massive data volumes, making data compression essential in modern power systems. PMUs collect high-frequency voltage and current phasor readings from across the grid in real time...
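The abstract does not name the paper's method, but a common baseline for PMU streams exploits the fact that successive phasor readings are highly correlated: quantize each sample, then store only the first value plus small sample-to-sample deltas. A minimal sketch, with the scale factor chosen arbitrarily for illustration:

```python
def delta_encode(samples, scale=1000):
    """Quantize phasor magnitudes to integers, then keep only
    sample-to-sample deltas; for slowly varying PMU data the
    deltas stay small and compress well downstream."""
    q = [round(s * scale) for s in samples]
    return [q[0]] + [b - a for a, b in zip(q, q[1:])]

def delta_decode(deltas, scale=1000):
    """Cumulative sum of deltas, then rescale back to floats."""
    out, acc = [], 0
    for d in deltas:
        acc += d
        out.append(acc / scale)
    return out
```

The scheme is lossy only through quantization; for per-unit voltage magnitudes near 1.0, the deltas are typically single-digit integers.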
Vision Transformers (ViTs) have emerged as powerful models in computer vision but their large parameter counts and substantial memory access costs pose challenges for deployment on resource-constrained platforms. This...
ISBN:
(Print) 9798350304060; 9798350304053
The integration of artificial intelligence (AI) into wireless communication systems is set to profoundly transform the design and optimization of emerging sixth-generation (6G) networks. The success of AI-driven wireless systems hinges on the quality of the air-interface data, which is fundamental to the performance of AI algorithms. Within data quality assessment (DQA), the measurement of similarity and diversity stands as crucial. Similarity assesses the consistency of datasets in mirroring their intrinsic statistical properties, which is essential for AI model accuracy. In contrast, diversity relates to the models' ability to generalize across various contexts. This paper concentrates on these aspects of DQA and proposes a comprehensive framework for analyzing similarity and diversity in wireless air-interface data. Catering to various data types, such as channel state information (CSI), signal-to-interference-plus-noise ratio (SINR), and reference signal received power (RSRP), the framework is validated using CSI data. Through this validation, we demonstrate the framework's efficacy in improving CSI compression and recovery in Massive Multiple-Input Multiple-Output (MIMO) systems, highlighting its significance and versatility in complex wireless network environments.
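The abstract does not give the framework's actual metrics, but the two notions can be illustrated with simple proxies over flattened CSI samples: similarity as agreement of low-order moments between two datasets, and diversity as mean pairwise distance within one dataset. These definitions are assumptions for illustration, not the paper's:

```python
import numpy as np

def similarity(real, synthetic):
    """Crude distributional similarity between two datasets
    (rows = samples): 1 when means and stds match, lower otherwise."""
    mu_gap = np.linalg.norm(real.mean(0) - synthetic.mean(0))
    sd_gap = np.linalg.norm(real.std(0) - synthetic.std(0))
    return 1.0 / (1.0 + mu_gap + sd_gap)

def diversity(data):
    """Mean pairwise Euclidean distance between samples:
    0 for identical samples, higher for more varied data."""
    d = np.linalg.norm(data[:, None, :] - data[None, :, :], axis=-1)
    n = len(data)
    return d.sum() / (n * (n - 1))
```

A dataset of identical samples scores diversity 0; a dataset identical to the reference scores similarity 1, matching the intuition that AI models need data that is both representative and varied.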
ISBN:
(Print) 9798350344868; 9798350344851
Recent advances in Learned Image Compression (LIC) utilize the powerful Masked Image Modeling (MIM) framework (e.g., MAE). However, random masking treats all patches equally, which could lead to (i) loss of critical information and (ii) poor robustness, since different patches may contribute differently to image reconstruction. A preliminary experiment shows that patch-level inherent-feature evaluation is highly correlated with the quality of the reconstructed image. Based on this key insight, we propose a simple mask selection approach based on each patch's inherent features. Specifically, we design plug-and-play frequency and spatial inherent-feature modules to select masks, enhancing various MIMs based on the MAE model and achieving high-quality image reconstruction and strong robustness at high mask rates for data-intensive compression tasks.
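As an illustration of feature-based mask selection (a simplified stand-in, not the paper's actual modules), patches can be scored by their high-frequency energy and the least informative ones masked first, so the encoder keeps detail-rich patches visible:

```python
import numpy as np

def frequency_scores(patches):
    """Score each flattened patch by its high-frequency energy:
    total FFT magnitude excluding the DC term. Flat patches score
    near 0; textured patches score high."""
    spec = np.abs(np.fft.fft(patches, axis=1))
    return spec[:, 1:].sum(axis=1)  # drop the DC component

def select_masks(patches, mask_ratio=0.75):
    """Return the indices of patches to mask: the lowest-scoring
    (least informative) fraction, instead of a random subset."""
    scores = frequency_scores(patches)
    n_mask = int(len(patches) * mask_ratio)
    order = np.argsort(scores)  # ascending: flattest patches first
    return set(order[:n_mask].tolist())
```

With a constant patch and a high-frequency alternating patch, the constant one is selected for masking, which is the behavior the insight above calls for.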