FFPA-Net: Efficient Feature Fusion with Projection Awareness for 3D Object Detection

Authors: Jiang, Chaokang; Wang, Guangming; Wu, Jinxing; Miao, Yanzi; Wang, Hesheng

Author Affiliations: Engineering Research Center of Intelligent Control for Underground Space, Ministry of Education, School of Information and Control Engineering, Advanced Robotics Research Center, China University of Mining and Technology, Xuzhou 221116, China; Department of Automation, Key Laboratory of System Control and Information Processing of Ministry of Education, Key Laboratory of Marine Intelligent Equipment and System of Ministry of Education, Shanghai Engineering Research Center of Intelligent Control and Management, Shanghai Jiao Tong University, Shanghai 200240, China; The Department of Engineering Mechanics, Shanghai Jiao Tong University, Shanghai 200240, China

Publication: arXiv

Year: 2022

Subject: Object detection

Abstract: Promising complementarity exists between the texture features of color images and the geometric information of LiDAR point clouds. However, many challenges remain for efficient and robust feature fusion in the field of 3D object detection. In this paper, unstructured 3D point clouds are first filled into the 2D image plane, and 3D point cloud features are extracted faster using projection-aware convolution layers. Furthermore, the corresponding indexes between different sensor signals are established in advance during data preprocessing, which enables faster cross-modal feature fusion. To address the misalignment between LiDAR points and image pixels, two new plug-and-play fusion modules, LiCamFuse and BiLiCamFuse, are proposed. In LiCamFuse, soft query weights that perceive the Euclidean distance between bimodal features are proposed. In BiLiCamFuse, a fusion module with dual attention is proposed to deeply correlate the geometric and textural features of the scene. Quantitative results on the KITTI dataset demonstrate that the proposed method achieves better feature-level fusion. In addition, the proposed network shows a shorter running time compared to existing methods. Copyright © 2022, The Authors. All rights reserved.
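The abstract gives no implementation details, so the following is only a minimal, hypothetical PyTorch sketch of two of the ideas it names: gathering image features through precomputed point-to-pixel indexes, and weighting them with a soft query weight derived from the Euclidean distance between bimodal features (in the spirit of LiCamFuse). All class and variable names, tensor shapes, and the exp(-distance) weighting are assumptions for illustration, not the authors' actual code.

```python
# Hypothetical sketch of distance-aware soft-query fusion; shapes, names,
# and the weighting function are assumptions, not the paper's implementation.
import torch
import torch.nn as nn

class SoftQueryFusion(nn.Module):
    """Fuses per-point LiDAR features with image features gathered via
    point-to-pixel indexes assumed to be precomputed in preprocessing."""

    def __init__(self, channels: int):
        super().__init__()
        # Project both modalities into a shared embedding space.
        self.lidar_proj = nn.Linear(channels, channels)
        self.image_proj = nn.Linear(channels, channels)
        self.out = nn.Linear(2 * channels, channels)

    def forward(self, lidar_feat, image_feat, point2pixel):
        # lidar_feat:  (N, C) per-point features
        # image_feat:  (P, C) per-pixel features, flattened image plane
        # point2pixel: (N,) long tensor of precomputed pixel indexes
        img_at_pts = image_feat[point2pixel]            # (N, C) gathered pixels
        q = self.lidar_proj(lidar_feat)
        k = self.image_proj(img_at_pts)
        # Soft query weight from the Euclidean distance between the two
        # embeddings: closer (better aligned) features get weights near 1.
        dist = torch.norm(q - k, dim=-1, keepdim=True)  # (N, 1)
        w = torch.exp(-dist)                            # weight in (0, 1]
        fused = torch.cat([lidar_feat, w * img_at_pts], dim=-1)
        return self.out(fused)

# Usage with random tensors (shapes are illustrative only):
fusion = SoftQueryFusion(channels=64)
pts = torch.randn(1024, 64)
pix = torch.randn(120 * 160, 64)
idx = torch.randint(0, 120 * 160, (1024,))
out = fusion(pts, pix, idx)  # (1024, 64)
```

Because the pixel index for each point is looked up rather than computed per forward pass, the gather step is a single indexing operation, which is consistent with the abstract's claim that precomputed cross-sensor indexes enable faster cross-modal fusion.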
