PRGFlow: Unified SWAP-aware deep global optical flow for aerial robot navigation


Authors: Sanket, Nitin J.; Singh, Chahat Deep; Fermuller, Cornelia; Aloimonos, Yiannis

Affiliation: Perception & Robotics Group, University of Maryland, College Park, MD 20742, USA

Published in: ELECTRONICS LETTERS

Year/Volume/Issue: 2021, Vol. 57, Issue 16

Pages: 614-617


Subject classification: 0808 [Engineering - Electrical Engineering]; 0809 [Engineering - Electronic Science and Technology (Engineering or Science degree)]; 08 [Engineering]

Funding: Brin Family Foundation; Northrop Grumman Mission Systems University Research Program; ONR [N00014-17-1-2622]; National Science Foundation [BCS 1824198]

Keywords: Optical, image and video signal processing; Image recognition; Optimisation techniques; Spatial variables control; Transducers and sensing devices; Aerospace control; Mobile robots; Telerobotics; Sensor fusion; Computer vision and image processing techniques; File organisation; Control engineering computing; Other topics in statistics

Abstract: Global optical flow estimation is the cornerstone for obtaining the odometry used to enable aerial robot navigation. However, such a method must have low latency and high robustness while also respecting the size, weight, area and power (SWAP) constraints of the robot. A combination of cameras and inertial measurement units (IMUs) has proven the best pairing for obtaining such low-latency odometry on resource-constrained aerial robots. Recently, deep learning approaches to visual-inertial fusion have gained momentum due to their high accuracy and robustness. Equally noteworthy benefits of these techniques for robotics are their inherent scalability (adaptation to different-sized aerial robots) and unification (the same method works on different-sized aerial robots). To this end, we present a deep learning approach called PRGFlow for obtaining global optical flow, which is then loosely fused with an IMU for full 6-DoF (Degrees of Freedom) relative pose estimation and integrated to obtain odometry. The network is evaluated on the MSCOCO dataset, and the dead-reckoned odometry on multiple real-flight trajectories, without any fine-tuning or re-training. A detailed benchmark comparing different network architectures and loss functions to enable scalability is also presented. The method is shown to outperform classical feature-matching methods by 2x under noisy data. The supplementary material and code can be found at .
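The pipeline the abstract describes (a deep network estimates global image-plane flow, which is loosely fused with IMU attitude and then integrated into odometry) can be sketched in a minimal dead-reckoning form. Everything below is a hypothetical illustration, not the paper's implementation: the function names, the downward-facing-camera geometry, and the known-depth scale source (e.g., an altimeter) are all assumptions.

```python
import numpy as np

def flow_to_translation(flow_px, depth, focal_px):
    """Convert a global image flow (pixels/frame) to metric camera
    translation, assuming a downward-facing camera at a known depth."""
    return np.array(flow_px, dtype=float) * depth / focal_px

def dead_reckon(flows_px, yaws_rad, depth, focal_px):
    """Loose fusion sketch: rotate each per-frame metric translation
    into the world frame using the IMU yaw, then accumulate the 2D
    position to obtain a dead-reckoned trajectory."""
    pos = np.zeros(2)
    traj = [pos.copy()]
    for flow, yaw in zip(flows_px, yaws_rad):
        t_cam = flow_to_translation(flow, depth, focal_px)
        c, s = np.cos(yaw), np.sin(yaw)
        R = np.array([[c, -s],
                      [s,  c]])  # world-from-camera yaw rotation
        pos = pos + R @ t_cam
        traj.append(pos.copy())
    return np.array(traj)

# Usage: five frames of constant 10 px/frame flow at 2 m depth with a
# 100 px focal length give 0.2 m per frame, i.e. 1.0 m travelled.
traj = dead_reckon([(10.0, 0.0)] * 5, [0.0] * 5, depth=2.0, focal_px=100.0)
```

Note that errors accumulate without bound in pure dead-reckoning, which is why the latency and robustness of the per-frame flow estimate matter so much in this setting.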
