YOLO-CCA: A Context-Based Approach for Traffic Sign Detection

Authors: Jiang, Linfeng; Zhan, Peidong; Bai, Ting; Yu, Haoyong

Affiliations: School of Artificial Intelligence, Chongqing University of Technology, Chongqing 404100, China; School of Computing, College of Design and Engineering, National University of Singapore, 119077, Singapore; School of Civil and Environmental Engineering, Cornell University, Ithaca, NY, United States

Publication: arXiv

Year: 2024

Subject: Deep learning

Abstract: Traffic sign detection is crucial for improving road safety and advancing autonomous driving technologies. Due to the complexity of driving environments, traffic sign detection frequently encounters a range of challenges, including low resolution, limited feature information, and small object sizes. These challenges significantly hinder the effective extraction of features from traffic signs, resulting in false positives and false negatives during detection. Addressing them requires more efficient and accurate approaches to traffic sign detection. This paper proposes a context-based algorithm for traffic sign detection that uses YOLOv7 as the baseline model. First, we propose an adaptive local context feature enhancement (LCFE) module that uses multi-scale dilated convolution to capture potential relationships between an object and its surrounding areas, supplementing the network with additional local context information. Second, we propose a global context feature collection (GCFC) module to extract key location features from the entire image scene as global context information. Finally, we build a Transformer-based context collection augmentation (CCA) module to process the collected local and global context, achieving superior multi-level feature fusion for YOLOv7 without introducing additional complexity. Extensive experiments on the Tsinghua-Tencent 100K dataset show that our method reaches 92.1% mAP, improving on YOLOv7 by 3.9% while reducing the parameter count by 2.7M. On the CCTSDB2021 dataset, mAP is improved by 0.9%. These results show that our approach achieves higher detection accuracy with fewer parameters. The source code is available at https://***/zippiest/yolo-cca. Copyright © 2024, The Authors. All rights reserved.
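
The abstract describes the LCFE module as using multi-scale dilated convolution to capture relationships between an object and its surroundings. As a rough illustration only, here is a minimal PyTorch sketch of such a block; the class name, dilation rates, and fusion details are assumptions for demonstration, not the authors' implementation (which is in the linked repository):

```python
# Hypothetical sketch of a multi-scale dilated-convolution local-context
# block in the spirit of the LCFE module described in the abstract.
# Names and design details are assumptions, not the authors' code.
import torch
import torch.nn as nn


class LocalContextBlock(nn.Module):
    """Gathers local context with parallel dilated 3x3 convolutions at
    several rates, then fuses the branches back to the input width."""

    def __init__(self, channels: int, dilations=(1, 2, 3)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                # padding = dilation keeps the spatial size unchanged
                nn.Conv2d(channels, channels, kernel_size=3,
                          padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(channels),
                nn.SiLU(inplace=True),
            )
            for d in dilations
        )
        # 1x1 conv fuses the concatenated branches back to `channels`.
        self.fuse = nn.Conv2d(channels * len(dilations), channels,
                              kernel_size=1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        ctx = torch.cat([branch(x) for branch in self.branches], dim=1)
        # Residual connection: context supplements, rather than
        # replaces, the original features.
        return x + self.fuse(ctx)


if __name__ == "__main__":
    feat = torch.randn(1, 256, 40, 40)  # a YOLO-style feature map
    out = LocalContextBlock(256)(feat)
    print(out.shape)  # torch.Size([1, 256, 40, 40])
```

Increasing dilation rates enlarge the receptive field without adding parameters or reducing resolution, which is why this pattern is a common way to inject surrounding context for small objects such as distant traffic signs.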
