Author affiliations: School of Artificial Intelligence, Anhui University; Engineering Research Center of Autonomous Unmanned System Technology, Ministry of Education; Anhui Provincial Key Laboratory of Security Artificial Intelligence, Anhui University; School of Automation, Southeast University
Published in: Science China (Information Sciences) (Chinese title: 中国科学:信息科学(英文版))
Year/Volume/Issue: 2025, Vol. 68, No. 3
Pages: 242-257
Subject classification: 12 [Management]; 1201 [Management: Management Science and Engineering (degrees awardable in Management or Engineering)]; 080202 [Engineering: Mechatronic Engineering]; 081104 [Engineering: Pattern Recognition and Intelligent Systems]; 08 [Engineering]; 0804 [Engineering: Instrument Science and Technology]; 0835 [Engineering: Software Engineering]; 0802 [Engineering: Mechanical Engineering]; 0811 [Engineering: Control Science and Engineering]; 0812 [Engineering: Computer Science and Technology (degrees awardable in Engineering or Science)]
Funding: Supported in part by the National Natural Science Foundation of China (Grant Nos. 62388101, 62303010); the University Synergy Innovation Program of Anhui Province (Grant No. GXXT-2023-039); and the Anhui Provincial Key Research Program of Universities (Grant No. 2022AH050087)
Keywords: generative ResNet; meta light block; coordinate attention; feature resolution; robot grasping
Abstract: Robotic grasping presents significant challenges due to variations in object properties, environmental complexities, and the demand for real-time operation. This study proposes MetaCoorNet (MCN), a novel deep learning architecture designed to address these challenges in robotic grasping pose estimation. The MetaCoor block combines spatial and channel operators to extract features efficiently. The architecture enhances feature selectivity by embedding location information into channel attention through a positional embedding technique within the coordinate attention mechanism, allowing the proposed MCN to focus on pertinent grasp-related regions. Furthermore, convolutional fusion blocks seamlessly integrate spatial and channel features, improving feature resolution and representation. This design enables the proposed MCN to achieve state-of-the-art performance on the Cornell and Jacquard datasets, attaining accuracies of 98% and 91.2%, respectively. The effectiveness and robustness of MCN are further validated through real-world experiments conducted using a seven-degree-of-freedom Kinova manipulator.
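For readers unfamiliar with the coordinate attention mechanism the abstract builds on, the following is a minimal PyTorch sketch of such a block, following the published coordinate-attention formulation (direction-wise pooling that embeds positional information into channel attention). The class name, reduction ratio, pooling choice, and activation are illustrative assumptions, not the authors' MCN implementation.

```python
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Coordinate attention: factorizes channel attention into two 1-D
    pooling steps so positional information along the height and width
    axes is embedded into the resulting channel weights."""

    def __init__(self, channels: int, reduction: int = 32):
        super().__init__()
        mid = max(8, channels // reduction)  # reduced bottleneck width
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn = nn.BatchNorm2d(mid)
        self.act = nn.Hardswish()
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        # Average-pool along width and height to get directional descriptors.
        x_h = x.mean(dim=3, keepdim=True)                      # (n, c, h, 1)
        x_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)  # (n, c, w, 1)
        # Shared 1x1 transform over the concatenated descriptors.
        y = self.act(self.bn(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        y_w = y_w.permute(0, 1, 3, 2)                          # (n, mid, 1, w)
        # Direction-aware attention maps, applied multiplicatively.
        a_h = torch.sigmoid(self.conv_h(y_h))                  # (n, c, h, 1)
        a_w = torch.sigmoid(self.conv_w(y_w))                  # (n, c, 1, w)
        return x * a_h * a_w

# Usage: attn = CoordinateAttention(64); out = attn(torch.randn(2, 64, 32, 32))
```

Because the two attention maps retain one spatial axis each, the block can highlight where (not just which channels) grasp-relevant features occur, which is the property the abstract attributes to MCN's feature selectivity.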