Author Affiliations: Xi'an Jiaotong University, School of Software Engineering, Xi'an 710049, China; Xi'an Jiaotong University, School of Information and Communication Engineering, Xi'an 710049, China; Xi'an Jiaotong University, Ministry of Education Key Laboratory for Intelligent Networks and Network Security, School of Information and Communication Engineering, SMILES LAB, Xi'an 710049, China; Shaanxi Yulan Jiuzhou Intelligent Optoelectronic Technology Co., Ltd, Xi'an 710000, China
Publication: IEEE Transactions on Multimedia (IEEE Trans Multimedia)
Year/Volume/Issue: 2024
Subject Classification: 0810 [Engineering - Information and Communication Engineering]; 0808 [Engineering - Electrical Engineering]; 08 [Engineering]
Subject: Cameras
Abstract: With the development of deep learning in recent years, the performance of object detection under conventional cameras has improved significantly. Nevertheless, due to the distortion introduced by fisheye cameras, detecting objects in this scenario remains a significant challenge. The dominant approaches focus on modifying the shape of the bounding box to better align with the boundaries of the distorted object. However, these methods neglect the learning of spatial distortion information, which prevents them from achieving satisfactory results. In this paper, we propose a novel fisheye camera detection network, dubbed SDANet, to better learn distortion features. SDANet is composed of a series of SDABlocks, which are designed to learn spatial distortion features. Each SDABlock consists of multiple convolution kernels of different sizes and can generate the most suitable kernel based on the distortion characteristics of the current input. Moreover, to address the performance limitations imposed by the scarcity and uneven spatial distribution of fisheye image datasets, we propose a dedicated data augmentation strategy called Prominent Fisheye Distortion Augmentation (PFDAug). PFDAug further introduces distortions to fisheye images, effectively alleviating these problems. Experimental results on the CEPDOF, MW-R, HABBOF, LOAF, and FishEye8K fisheye image datasets demonstrate that our method achieves state-of-the-art performance.
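The abstract describes each SDABlock as holding multiple convolution kernels of different sizes and producing the most suitable kernel per input. The paper's exact design is not given in this record; the sketch below assumes a selective-kernel-style soft mixture, where a small gating branch pooled from the input weights parallel convolutions of different sizes. The class name SDABlockSketch and all hyperparameters are hypothetical.

```python
# Hedged sketch only: assumes a soft-attention mixture over multi-size
# kernels; not the authors' confirmed SDABlock architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SDABlockSketch(nn.Module):
    def __init__(self, channels, kernel_sizes=(3, 5, 7), reduction=4):
        super().__init__()
        # Parallel convolutions with different receptive fields.
        self.branches = nn.ModuleList(
            [nn.Conv2d(channels, channels, k, padding=k // 2)
             for k in kernel_sizes]
        )
        # Gating branch: global context -> one weight per kernel size.
        hidden = max(channels // reduction, 8)
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(channels, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, len(kernel_sizes)),
        )

    def forward(self, x):
        # Stack branch outputs: (B, K, C, H, W).
        feats = torch.stack([branch(x) for branch in self.branches], dim=1)
        # Input-dependent mixture weights over the K kernel sizes: (B, K).
        weights = F.softmax(self.gate(x), dim=1)
        out = (feats * weights[:, :, None, None, None]).sum(dim=1)
        return out + x  # residual connection (assumed)
```

Because the gate is computed from the input itself, regions with stronger distortion characteristics can shift weight toward a different effective kernel, which is one plausible way to realize the input-adaptive kernel generation the abstract describes.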
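For PFDAug, the record only states that it further introduces distortions to fisheye images. As a minimal illustration of that idea, the sketch below applies an extra radial (barrel-like) warp via a sampling grid; the function name pfd_aug_sketch, the quadratic radial model, and the strength parameter are assumptions, not the paper's mapping.

```python
# Hedged sketch of a PFDAug-style augmentation: adds extra radial
# distortion to an already-fisheye image. The warp model is assumed.
import torch
import torch.nn.functional as F

def pfd_aug_sketch(img, strength=0.3):
    """img: (B, C, H, W) float tensor; strength > 0 adds barrel-like
    curvature by sampling from further out as the radius grows."""
    b, _, h, w = img.shape
    ys = torch.linspace(-1.0, 1.0, h, device=img.device)
    xs = torch.linspace(-1.0, 1.0, w, device=img.device)
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")
    r2 = gx ** 2 + gy ** 2
    # Radial scale > 1 away from the center compresses content inward,
    # mimicking stronger fisheye distortion toward the rim.
    scale = 1.0 + strength * r2
    grid = torch.stack((gx * scale, gy * scale), dim=-1)  # (H, W, 2), xy order
    grid = grid.unsqueeze(0).expand(b, -1, -1, -1)
    return F.grid_sample(img, grid, align_corners=True, padding_mode="zeros")
```

In a detection setting, the ground-truth annotations would need to be remapped under the same warp so boxes stay aligned with the distorted objects; how the paper handles this is not specified in the record.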