Author affiliations: Department of Information Science and Electronic Engineering, Zhejiang University, China; State Key Laboratory of Internet of Things for Smart City, Department of Electrical and Computer Engineering, University of Macau, China; School of Computer Science and Engineering, Nanyang Technological University, Singapore
Publication: arXiv
Year/Volume/Issue: 2024
Core indexing:
Subject: Spatio-temporal data
Abstract: Point clouds have gained prominence in numerous applications due to their ability to accurately depict 3D objects and scenes. However, compressing unstructured, high-precision point cloud data effectively remains a significant challenge. In this paper, we propose NeRC3, a novel point cloud compression framework leveraging implicit neural representations to handle both geometry and attributes. Our approach employs two coordinate-based neural networks to implicitly represent a voxelized point cloud: the first determines the occupancy status of a voxel, while the second predicts the attributes of occupied voxels. By feeding voxel coordinates into these networks, the receiver can efficiently reconstruct the original point cloud's geometry and attributes. The neural network parameters are quantized and compressed alongside auxiliary information required for reconstruction. Additionally, we extend our method to dynamic point cloud compression with techniques to reduce temporal redundancy, including a 4D spatio-temporal representation termed 4D-NeRC3. Experimental results validate the effectiveness of our approach: for static point clouds, NeRC3 outperforms octree-based methods in the latest G-PCC standard. For dynamic point clouds, 4D-NeRC3 demonstrates superior geometry compression compared to state-of-the-art G-PCC and V-PCC standards and achieves competitive results for joint geometry and attribute compression. Copyright © 2024, The Authors. All rights reserved.
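The abstract describes a decoder that queries two coordinate-based networks over the voxel grid: one for occupancy, one for attributes of occupied voxels. The following is a minimal, hypothetical sketch of that decoding loop only; the stub functions `f_geo` and `f_attr` are placeholders for the paper's trained networks (their actual architectures, and names, are not given in this record) and here simply mark a diagonal of voxels as occupied.

```python
# Hypothetical sketch of a NeRC3-style decoding loop (not the authors' code).
# Two coordinate-based functions stand in for the trained neural networks:
# f_geo predicts voxel occupancy, f_attr predicts attributes of occupied voxels.
from itertools import product

def f_geo(x, y, z):
    # Stub occupancy "network": marks voxels on the main diagonal as occupied.
    return 1.0 if x == y == z else 0.0

def f_attr(x, y, z):
    # Stub attribute "network": derives an RGB triple from the coordinates.
    return (x / 3, y / 3, z / 3)

def decode(resolution, threshold=0.5):
    """Reconstruct (coordinate, attribute) pairs for all occupied voxels
    by querying the two networks at every voxel coordinate."""
    points = []
    for x, y, z in product(range(resolution), repeat=3):
        if f_geo(x, y, z) >= threshold:          # voxel predicted occupied
            points.append(((x, y, z), f_attr(x, y, z)))
    return points

cloud = decode(resolution=4)
print(len(cloud))  # 4 occupied voxels on the diagonal of a 4x4x4 grid
```

In the actual framework only the (quantized) network parameters and auxiliary information are transmitted; the receiver regenerates geometry and attributes by evaluating the networks over voxel coordinates as above.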