Author affiliations: Seoul Natl Univ, Dept Mech Engn, IAMD, Seoul 08826, South Korea; Seoul Natl Univ, IOER, Seoul 08826, South Korea
Publication: IEEE ROBOTICS AND AUTOMATION LETTERS (IEEE Robot. Autom. Lett.)
Year/Volume/Issue: 2025, Vol. 10, No. 2
Pages: 1601-1608
Subject classification: 0808 [Engineering - Electrical Engineering]; 08 [Engineering]; 0811 [Engineering - Control Science and Engineering]
Funding: Technology Innovation Program - Ministry of Trade, Industry & Energy (MOTIE, South Korea) [1415187329, 20024355]; Green Venture Program - Ministry of SMEs & Startups (MSS, South Korea) [1425173737, S3214972]
Keywords: Point cloud compression; Geometry; Feature extraction; Shape; Sensor phenomena and characterization; Robot kinematics; Noise; Iterative algorithms; Azimuth; Translation; Aerial Systems: applications; field robots; mapping; point cloud registration
Abstract: We propose a novel cross-source point cloud registration (CSPR) method for USV-AAV cooperation in lentic environments. In the wild outdoors, the typical working domain of a USV-AAV team, CSPR faces significant challenges from platform-domain problems (complex unstructured surroundings and viewing-angle differences) in addition to sensor-domain problems (varying density, noise patterns, and scale). These characteristics create large discrepancies in local geometry, causing existing CSPR methods that rely on point-to-point correspondences based on local geometry around key points (e.g., surface normals, shape functions, angles) to struggle. To address this challenge, we propose the novel concept of a directional-correspondence-based iterative cross-source point cloud registration algorithm. Instead of using point-to-point correspondences under large discrepancies in local geometry, we build correspondences over directions to enable robust registration in the wild outdoors. Moreover, since the proposed directional correspondence uses bearing angles and normalized coordinates, we can separate scale estimation from transformation estimation, effectively resolving the problem of different scales between the two point clouds. Our algorithm outperforms state-of-the-art methods, achieving an average error of 1.60 degrees in rotation and 1.83% in translation. Additionally, we demonstrate a USV-AAV team operation with enhanced visual information achieved with the proposed method.
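The abstract notes that bearing angles and normalized coordinates let scale be estimated separately from the rigid transform. As a minimal illustration of why that separation is possible (not the paper's actual algorithm), the sketch below estimates scale from centroid-normalized RMS distances, which are invariant to rotation and translation, and computes azimuth (bearing) angles of points relative to an origin. The function names and the RMS-ratio estimator are assumptions for illustration only.

```python
import numpy as np

def estimate_scale(src: np.ndarray, tgt: np.ndarray) -> float:
    """Scale ratio between two corresponding point clouds (N x 3).

    Uses RMS distance from the centroid, which is unchanged by
    rotation and translation, so scale can be resolved before
    estimating the rigid transform.
    """
    src_c = src - src.mean(axis=0)
    tgt_c = tgt - tgt.mean(axis=0)
    return float(np.sqrt((tgt_c ** 2).sum() / (src_c ** 2).sum()))

def bearing_angles(points: np.ndarray, origin: np.ndarray) -> np.ndarray:
    """Azimuth of each point as seen from `origin`, in the x-y plane."""
    d = points - origin
    return np.arctan2(d[:, 1], d[:, 0])
```

Because the scale estimate depends only on distances from the centroid, it is the same whatever rotation and translation separate the two clouds, which is the sense in which scale estimation decouples from the transformation.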