
Robot Grasp in Cluttered Scene Using a Multi-Stage Deep Learning Model

Authors: Wei, Dujia; Cao, Jianmin; Gu, Ye

Affiliation: Shenzhen Technology University, College of Big Data & Internet, Shenzhen 518118, People's Republic of China

Published in: IEEE ROBOTICS AND AUTOMATION LETTERS (IEEE Robot. Autom. Lett.)

Year/Volume/Issue: 2024, Vol. 9, No. 7

Pages: 6512-6519


Subject Classification: 0808 [Engineering - Electrical Engineering]; 08 [Engineering]; 0811 [Engineering - Control Science and Engineering]

Funding: Shenzhen Science and Technology

Keywords: Deep learning for visual perception; computer vision for automation; perception for grasping and manipulation

Abstract: Object grasping in cluttered scenes is a practical robotic skill with a wide range of applications. In this paper, we propose a novel maximum graspness metric that helps extract high-quality scene grasp points effectively. The graspness scores of a single-view point cloud are generated using the proposed interpolation approach. The graspness model is implemented as a compact encoder-decoder network that takes a depth image as input. In parallel, grasp point features are extracted, then grouped and sampled to predict the approaching vectors and in-plane rotations of the grasp poses using residual point blocks. The proposed model is evaluated on the large-scale GraspNet-1Billion benchmark dataset and outperforms the prior state-of-the-art method by a clear margin (+4.91 AP) across all camera types. In real-world cluttered-scene testing, our approach achieves a grasp success rate of 89.60% using a UR-5 robotic arm and a RealSense camera.
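
To make the abstract's first stage concrete, the sketch below is a minimal, hypothetical PyTorch rendition of a compact encoder-decoder that maps a single-channel depth image to a per-pixel graspness score map. The class name GraspnessNet, the layer counts, channel widths, and sigmoid output head are all illustrative assumptions; they are not the architecture actually used in the paper.

# Illustrative sketch only: a compact depth-to-graspness encoder-decoder,
# loosely following the abstract's description. All names (GraspnessNet,
# depth, heatmap) and hyperparameters are hypothetical.
import torch
import torch.nn as nn

class GraspnessNet(nn.Module):
    """Maps a single-channel depth image to a per-pixel graspness score map."""
    def __init__(self, base_channels: int = 32):
        super().__init__()
        c = base_channels
        # Encoder: two stride-2 convolutions downsample the depth image 4x.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, c, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(c, 2 * c, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        # Decoder: two transposed convolutions restore the input resolution.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(2 * c, c, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(c, 1, kernel_size=4, stride=2, padding=1),
            nn.Sigmoid(),  # graspness scores in [0, 1]
        )

    def forward(self, depth: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(depth))

if __name__ == "__main__":
    net = GraspnessNet()
    depth = torch.rand(1, 1, 224, 224)  # dummy depth image, batch size 1
    heatmap = net(depth)                # (1, 1, 224, 224) graspness map
    print(heatmap.shape)

In the pipeline the abstract describes, such a score map would feed the second stage, where high-graspness points are grouped and sampled to regress approach vectors and in-plane rotations; that stage is not sketched here.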
