Owing to the continuous expansion of data scale, the computation, storage, and transmission of 3D data face numerous issues. Point cloud data, in particular, often contain duplicated and anomalous points, which can hinder tasks such as measurement. To address this issue, it is crucial to apply point cloud pre-processing methods that combine subsampling and denoising; these help obtain clean, evenly distributed, and compact points and enhance data accuracy. In this study, an efficient point cloud subsampling method with built-in denoising is proposed that effectively preserves salient features while improving the quality of the point cloud data. By constructing the octree structure of the point cloud, the corresponding node code is obtained from the spatial coordinates of each point, and the feature vector of each node is computed from a covariance analysis. Node feature similarity is introduced to classify nodes into feature and non-feature nodes, forming the node feature code, and a layer threshold is introduced to filter outliers. Experimental results demonstrate that the proposed algorithm runs more than four times faster than a curvature-based algorithm and yields an average grey entropy 1.6×10⁻³ lower than that of random sampling. Considering both time cost and subsampling effectiveness, the proposed algorithm outperforms state-of-the-art subsampling strategies such as Approximate Intrinsic Voxel Structure and SampleNet. The approach removes noise while preserving important features, thereby reducing the overall size of the point cloud. Its high computational efficiency makes it a valuable reference for fast and precise measurements that require timeliness, successfully addressing the challenges posed by the continuous expansion of data scale and offering significant advantages over existing approaches.
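The node coding and covariance steps described above can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function names (`morton_code`, `node_feature`) and the choice of normalized covariance eigenvalues as the node feature vector are assumptions.

```python
import numpy as np

def morton_code(pt, origin, size, depth):
    """Interleave a point's x/y/z cell indices at the given octree depth
    into a single node code (Morton/Z-order)."""
    idx = np.floor((pt - origin) / size * (2 ** depth)).astype(int)
    idx = np.clip(idx, 0, 2 ** depth - 1)
    code = 0
    for level in range(depth):
        for axis in range(3):
            code |= ((idx[axis] >> level) & 1) << (3 * level + axis)
    return code

def node_feature(points):
    """Covariance-based feature vector of one octree node: the eigenvalues
    of the local covariance matrix, sorted ascending and normalized.
    The smallest component approximates the local surface variation."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    eigvals = np.linalg.eigvalsh(cov)           # ascending order
    return eigvals / max(eigvals.sum(), 1e-12)  # normalized feature
```

Nearly planar nodes yield a smallest normalized eigenvalue close to zero, which is one plausible way to separate feature from non-feature nodes by thresholding.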
ISBN (digital): 9783319633091
ISBN (print): 9783319633091; 9783319633084
3D laser scanning, a relatively new technology in the field of surveying and mapping, has clear advantages over traditional techniques and has become an important method for acquiring three-dimensional data of objects. At the same time, with the development of scanning technology, we obtain more and more point cloud data. Therefore, how to streamline point cloud data, remove invalid data, and retain the necessary data has become an important research topic. In this paper, we focus on point cloud data preprocessing, analyze the shortcomings of current point cloud data reduction methods, and propose a uniform and robust algorithm based on octree coding. The octree coding method divides the point cloud neighborhood space into multiple sub-cubes with a specified side length, and the point of each sub-cube nearest to its center is kept, realizing the simplification of the point cloud.
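The keep-nearest-to-center reduction can be sketched as below; this is a minimal sketch assuming a uniform grid of sub-cubes of the specified side length (the function name `octree_simplify` is hypothetical).

```python
import numpy as np

def octree_simplify(points, cell_size):
    """Reduce a point cloud by dividing its bounding box into cubes of
    side length cell_size and keeping, per occupied cube, the single
    point closest to the cube center."""
    origin = points.min(axis=0)
    keys = np.floor((points - origin) / cell_size).astype(np.int64)
    kept = {}
    for p, k in zip(points, map(tuple, keys)):
        center = origin + (np.array(k) + 0.5) * cell_size
        d = np.sum((p - center) ** 2)
        if k not in kept or d < kept[k][0]:
            kept[k] = (d, p)
    return np.array([p for _, p in kept.values()])
```

Because at most one point survives per cube, the output is evenly distributed at the chosen resolution, which matches the uniformity goal stated in the abstract.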
ISBN (print): 9798350374223; 9798350374216
Point clouds are a widely used format for representing 3D objects and scenes. With the aim of enhancing the streamlined transfer and storage of such data, extensive attention has been directed towards point cloud compression (PCC), leading to a significant research emphasis on advancing PCC methods. The Audio Video coding Standard (AVS) workgroup of China has launched a PCC project, which employs the octree representation to compress the geometry information of point cloud data. This paper aims to improve the coding efficiency of AVS PCC. Specifically, neighbouring occupancy information is used in various ways to construct efficient contexts that drive geometry entropy coding. To reduce the memory footprint of context construction, a context reduction mechanism is proposed that utilizes historical coding information. Moreover, an adaptive context switching method is proposed to fit the diversity of point cloud distributions. Experimental results show that the proposed octree coding method achieves coding gains of over 3.0% and 8.0% for lossless and lossy coding conditions, respectively, without a runtime increase. Owing to its superiority, the proposed method has recently been adopted as the geometry coding method in AVS PCC.
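As a much-simplified illustration of occupancy-driven contexts (not the AVS context design itself), the sketch below builds a context index for an octree node from the occupancy of its six face neighbours; the context index would then select a probability model for the arithmetic coder.

```python
def neighbour_context(occupied, node):
    """Bitmask over the six face neighbours of an octree node: bit i is
    set when the i-th neighbour is occupied. Such masks can index the
    context models that drive geometry entropy coding (hypothetical
    simplification of the scheme described above)."""
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    ctx = 0
    for i, (dx, dy, dz) in enumerate(offsets):
        if (node[0] + dx, node[1] + dy, node[2] + dz) in occupied:
            ctx |= 1 << i
    return ctx
```

A context reduction mechanism of the kind the abstract mentions would merge rarely used masks into shared models to cut memory; that step is omitted here.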
In this paper we propose a new paradigm for encoding the geometry of dense point cloud sequences, where a convolutional neural network (CNN), which estimates the encoding distributions, is optimized on several frames of the sequence to be compressed. We adopt lightweight CNN structures, we perform training as part of the encoding process and the CNN parameters are transmitted as part of the bitstream. The newly proposed encoding scheme operates on the octree representation for each point cloud, consecutively encoding each octree resolution level. At every octree resolution level, the voxel grid is traversed section-by-section (each section being perpendicular to a selected coordinate axis), and in each section, the occupancies of groups of two-by-two voxels are encoded at once in a single arithmetic coding operation. A context for the conditional encoding distribution is defined for each two-by-two group of voxels based on the information available about the occupancy of the neighboring voxels in the current and lower resolution layers of the octree. The CNN estimates the probability mass functions of the occupancy patterns of all the voxel groups from one section in four phases. In each new phase, the contexts are updated with the occupancies encoded in the previous phase, and each phase estimates the probabilities in parallel, providing a reasonable trade-off between the parallelism of the processing and the informativeness of the contexts. The CNN training time is comparable to the time spent in the remaining encoding steps, leading to competitive overall encoding times. The bitrates and encoding-decoding times compare favorably with those of recently published compression schemes.
Recently, 3D visual representation models such as light fields and point clouds have become popular due to their capability to represent the real world in a more complete and immersive way, paving the road for new and more advanced visual experiences. The point cloud representation model efficiently represents the surface of objects/scenes by means of a set of 3D points and associated attributes and is increasingly used in applications ranging from autonomous cars to augmented reality. Emerging imaging sensors have made it easier to perform richer and denser point cloud acquisitions, notably with millions of points, making it impossible to store and transmit these very high amounts of data without appropriate coding. This bottleneck has raised the need for efficient point cloud coding solutions that offer more immersive visual experiences and better quality of experience to users. In this context, this paper proposes an efficient lossy coding solution for the geometry of static point clouds. The proposed solution uses an octree-based approach for a base layer and a graph-based transform approach for the enhancement layer, where an inter-layer residual is coded. The performance assessment shows very significant compression gains over the state-of-the-art, especially at the most relevant lower and medium rates.
ISBN (print): 9781538662496
The recent diffusion of wearable and portable Augmented and Mixed Reality devices has highlighted some open problems in the transmission and visualization of three-dimensional point clouds, such as the adaptation of the bit stream to different devices and networks or the optimization of rendering/coding complexity. This paper proposes a rendering-aware approach for the compression of static point cloud models that employs a multi-resolution representation of the model in spherical coordinates. The approach proves extremely effective in shaping the transmitted bit stream and rendering operations according to the complexity of the 3D model and the available computational resources. Experimental results show that the proposed solution outperforms one of the most recent state-of-the-art solutions in terms of rate-distortion performance and computational effort.
ISBN (print): 9783030061791; 9783030061784
Aiming at the high workload, low precision, and strong stress of traditional manual measurement of sheep growth parameters, a novel measurement technology was proposed. Sunite sheep about 2-3 years old were chosen for the study. Using reverse engineering technology, point cloud data of the sheep were captured with a 3D laser scanner. Because the point cloud data contained noise, an improved k-nearest-neighbor algorithm was used to process them. To improve subsequent processing time and efficiency, octree coding was employed to reduce the data, yielding an evenly distributed point cloud while retaining the sheep's features. A 3D surface model of the sheep body was then reconstructed using Delaunay triangulation, and parameters including body length, body height, hip height, hip width, and chest width were extracted. Comparing the actual parameter values with those computed in two ways, on the Geomagic platform and with the proposed algorithms in Matlab, the average relative errors were 1.23% and 1.01%, respectively, so the results of the proposed algorithm fell within a small error range. Point clouds can thus be used to reconstruct the sheep surface and compute body size without stress.
ISBN (print): 9781509057436
This work is dedicated to the compression of 3D point clouds to allow efficient and quick transmission of point cloud datasets (PCD) for visualization over the internet. Standard methods include quantization of vectors or cloud simplification via octree structures. While quantization into a bit representation transforms vectors into discrete positions, we add octree structures at a fixed level to re-index the quantized points relative to the local position of the subdivisions. Each subdivision multiplies the resolution of one coordinate by two, adding a "virtual" bit that can then be removed from the quantization bytes. The combination of quantization and fixed octree structures thus decreases the number of quantization bits needed without losing resolution. Conversely, it is possible to increase the resolution of a PCD by adding the "virtual" bit to the quantized data without significantly changing the size of the dataset. We demonstrated the feasibility of this technique for the web by developing a lightweight framework running in a browser-based environment.
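The bit accounting behind the "virtual" bits can be sketched as follows, with hypothetical helper names: a fixed octree of depth L supplies the L high bits of each quantized coordinate through the cell index, so only the remaining low bits need to be stored per point, and concatenating both parts reproduces the quantized value exactly.

```python
def split_quantized(coord_bits, octree_levels, value):
    """Split a quantized coordinate into the 'virtual' high bits implied
    by a fixed-depth octree subdivision (the cell index) and the residual
    low bits stored per point."""
    residual_bits = coord_bits - octree_levels
    cell = value >> residual_bits            # octree cell index (virtual bits)
    residual = value & ((1 << residual_bits) - 1)
    return cell, residual

def merge_quantized(coord_bits, octree_levels, cell, residual):
    """Recombine cell index and residual, recovering the original
    quantized value without any loss of resolution."""
    residual_bits = coord_bits - octree_levels
    return (cell << residual_bits) | residual
```

Since the cell index is shared by every point in the same sub-cube, storing it once per cube rather than per point is where the size reduction comes from.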
This paper proposes novel scalable mesh coding designs exploiting the intraband or composite statistical dependencies between wavelet coefficients. A Laplacian mixture model is proposed to approximate the distribution of the wavelet coefficients; this model proves more accurate than the commonly employed single Laplacian or generalized Gaussian distribution models. Using the mixture model, we theoretically determine the optimal embedded quantizers for scalable wavelet-based coding of semiregular meshes. In this sense, it is shown that the commonly employed successive approximation quantization is an acceptable but, in general, not an optimal solution. Novel scalable intraband and composite mesh coding systems are proposed, following an information-theoretic analysis of the statistical dependencies between the coefficients. The wavelet subbands are independently encoded using octree-based coding techniques. Furthermore, context-based entropy coding employing either intraband or composite models is applied. The proposed codecs provide both resolution and quality scalability, in contrast to the state-of-the-art interband zerotree-based semiregular mesh coding technique, which supports only quality scalability. Additionally, the experimental results show that, on average, the proposed codecs outperform the interband state-of-the-art for both normal and nonnormal meshes. Finally, compared with a zerotree coding system, the proposed coding schemes are better suited for software/hardware parallelism, due to the independent processing of wavelet subbands.
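For reference, a K-component Laplacian mixture of the kind the abstract describes has the density below; the symbols π_k, μ_k, b_k are generic mixture weights, locations, and scales, not necessarily the paper's notation.

```latex
p(x) = \sum_{k=1}^{K} \pi_k \, \frac{1}{2 b_k}
       \exp\!\left( -\frac{|x - \mu_k|}{b_k} \right),
\qquad \pi_k \ge 0, \quad \sum_{k=1}^{K} \pi_k = 1 .
```

With K = 1 this reduces to the single Laplacian model the paper compares against, which is why the mixture can only fit the coefficient histogram at least as well.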