Author affiliation: Shenzhen Technology University, College of Big Data and Internet, Shenzhen 518118, People's Republic of China
Publication: IEEE ROBOTICS AND AUTOMATION LETTERS (IEEE Robot. Autom. Lett.)
Year/Volume/Issue: 2024, Vol. 9, No. 7
Pages: 6512-6519
Subject classification: 0808 [Engineering - Electrical Engineering]; 08 [Engineering]; 0811 [Engineering - Control Science and Engineering]
Funding: Shenzhen Science and Technology
Keywords: Deep learning for visual perception; computer vision for automation; perception for grasping and manipulation
Abstract: Object grasping in cluttered scenes is a practical robotic skill with a wide range of applications. In this paper, we propose a novel maximum graspness metric that helps extract high-quality scene grasp points effectively. The graspness scores of a single-view point cloud are generated using the proposed interpolation approach. The graspness model is implemented as a compact encoder-decoder network that takes a depth image as input. In parallel, grasp point features are extracted, then grouped and sampled to predict the approaching vectors and in-plane rotations of the grasp poses using residual point blocks. The proposed model is evaluated on the large-scale GraspNet-1Billion benchmark and outperforms the prior state-of-the-art method by a margin of +4.91 AP across all camera types. In real-world cluttered-scene testing, our approach achieves a grasp success rate of 89.60% with a UR-5 robotic arm and a RealSense camera.
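Note: The abstract mentions generating graspness scores for a single-view point cloud by interpolation but gives no implementation details. The following is a minimal sketch of one plausible scheme, inverse-distance weighting over the k nearest annotated grasp points; the function name, the weighting rule, and the choice of k are illustrative assumptions, not the paper's actual method.

import numpy as np
from scipy.spatial import cKDTree

def interpolate_graspness(scene_points, anchor_points, anchor_scores, k=3, eps=1e-8):
    # Propagate graspness scores from sparse annotated grasp points to every
    # point of a single-view cloud via inverse-distance weighting over the
    # k nearest annotated neighbors (an assumed scheme, not the paper's).
    dists, idx = cKDTree(anchor_points).query(scene_points, k=k)  # both (N, k)
    weights = 1.0 / (dists + eps)                  # nearer annotations weigh more
    weights /= weights.sum(axis=1, keepdims=True)  # convex combination per point
    return (weights * anchor_scores[idx]).sum(axis=1)  # (N,) dense graspness

# Toy usage: 1000 scene points, 50 annotated grasp points with scores in [0, 1].
rng = np.random.default_rng(0)
scene = rng.uniform(size=(1000, 3))
anchors = rng.uniform(size=(50, 3))
scores = rng.uniform(size=50)
dense = interpolate_graspness(scene, anchors, scores)
print(dense.shape)  # (1000,)

Because the weights form a convex combination, each interpolated score stays within the range of the annotated scores, which keeps the dense graspness map well behaved as a supervision target.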