Author affiliation: FPT Univ, IT Dept, Hanoi 10000, Vietnam
Publication: IEEE ROBOTICS AND AUTOMATION LETTERS (IEEE Robot. Autom. Lett.)
Year/Volume/Issue: 2024, Vol. 9, Issue 4
Pages: 3124-3130
Subject classification: 0808 [Engineering - Electrical Engineering]; 08 [Engineering]; 0811 [Engineering - Control Science and Engineering]
Keywords: Pose estimation; Feature extraction; Robots; Three-dimensional displays; Solid modeling; Point cloud compression; Geometry; 6D object pose estimation; grasp detection; robot manipulation
Abstract: Object recognition and pose estimation are critical components in autonomous robot manipulation systems, playing a crucial role in enabling robots to interact effectively with the environment. During actual execution, the robot must recognize the object in the current scene, estimate its pose, and then select a feasible grasp pose from the pre-defined grasp configurations. While most existing methods primarily focus on pose estimation, they often neglect graspability and reachability. This oversight can lead to inefficiencies and failures during execution. In this study, we introduce an innovative graspability-aware object pose estimation framework. Our proposed approach not only estimates the poses of multiple objects in cluttered scenes but also identifies graspable areas. This enables the system to concentrate its efforts on specific points or regions of an object that are suitable for grasping. It leverages both depth and color images to extract geometric and appearance features. To effectively combine these diverse features, we have developed an adaptive fusion module. In addition, the fused features are further enhanced through a graspability-aware feature enhancement module. The key innovation of our method lies in improving the discriminability and robustness of the features used for object pose estimation. We have achieved state-of-the-art results on public datasets when compared to several baseline methods. In real robot experiments conducted on a Franka Emika robot arm equipped with an Intel RealSense camera and a two-finger gripper, we consistently achieved high success rates, even in cluttered scenes.
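To make the adaptive fusion idea mentioned in the abstract concrete, the following is a minimal toy sketch (not the authors' implementation): a per-channel sigmoid gate decides how much weight the geometric feature receives versus the appearance feature. The function name `adaptive_fuse` and the parameters `gate_w`/`gate_b` are hypothetical stand-ins for learned weights.

```python
import math

def adaptive_fuse(geom_feat, app_feat, gate_w, gate_b):
    """Fuse geometric and appearance features with a per-channel sigmoid gate.

    A toy stand-in for an adaptive fusion module: for each channel, the gate
    alpha in [0, 1] blends the geometric feature (weight alpha) with the
    appearance feature (weight 1 - alpha). gate_w and gate_b play the role
    of learned gating parameters.
    """
    fused = []
    for g, a, w, b in zip(geom_feat, app_feat, gate_w, gate_b):
        alpha = 1.0 / (1.0 + math.exp(-(w * (g + a) + b)))  # sigmoid gate
        fused.append(alpha * g + (1.0 - alpha) * a)
    return fused

# Example: fusing two 4-channel feature vectors
geom = [0.2, 0.8, -0.1, 0.5]
app  = [0.6, 0.1,  0.4, 0.3]
print(adaptive_fuse(geom, app, gate_w=[1.0] * 4, gate_b=[0.0] * 4))
```

Because the gate is a convex weight, each fused channel always lies between the two input channels; a strongly positive bias drives the gate toward the geometric feature, a strongly negative one toward the appearance feature.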