Author Affiliations: Univ Birmingham, Sch Elect Elect & Syst Engn, Birmingham B15 2TT, W Midlands, England; Aberystwyth Univ, Dept Comp Sci, Aberystwyth SY23 3FL, Dyfed, Wales
Publication: IEEE TRANSACTIONS ON RADAR SYSTEMS (IEEE Trans. Radar Syst.)
Year/Volume/Issue: 2025, Vol. 3
Pages: 115-123
Core Indexing:
Funding: U.K. Government Department for Transport (DfT); U.K. National Quantum Technology Hub in Sensing and Timing [EP/T001046/1]; Engineering and Physical Sciences Research Council, Mapping and Enabling Future Airspace (EPSRC MEFA) [EP/T011068/1]
Keywords: Autoencoder; classification; convolutional autoencoder (CAE); latent variables; spectrograms
Abstract: Deep learning with convolutional neural networks (CNNs) has been widely utilized in radar research concerning automatic target recognition. Maximizing numerical metrics to gauge the performance of such algorithms does not necessarily correspond to model robustness against untested targets, nor does it lead to improved model interpretability. Approaches designed to explain the mechanisms behind the operation of a classifier on radar data are proliferating, but bring with them a significant computational and analysis overhead. This work uses an elementary unsupervised convolutional autoencoder (CAE) to learn a compressed representation of a challenging dataset of urban bird and drone targets, and subsequently examines whether the quality of that representation, assessed through the preservation of class labels in the latent space, leads to better classification performance after a separate supervised training stage. It is shown that a CAE that reduces the features output after each layer of the encoder gives rise to the best drone versus bird classifier. A clear connection between unsupervised evaluation via label preservation in the latent space and subsequent classification accuracy after supervised fine-tuning is demonstrated, supporting further efforts to optimize latent representations of radar data for both performance and model interpretability.
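The abstract describes a two-stage pipeline: an unsupervised CAE trained to reconstruct spectrograms while reducing the number of feature maps at each encoder layer, followed by a separate supervised classifier operating on the latent vector for the drone-versus-bird task. The following is a minimal PyTorch sketch of that pattern; the layer widths, latent dimension, 128x128 spectrogram size, and training details are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch (assumed architecture): a CAE whose encoder shrinks the
# number of feature maps at each layer (32 -> 16 -> 8), plus a separate
# supervised classification head on the latent vector. All sizes are
# illustrative, not the paper's configuration.
import torch
import torch.nn as nn


class CAE(nn.Module):
    def __init__(self, latent_dim: int = 32):
        super().__init__()
        # Encoder: strided convolutions, feature maps reduced layer by layer.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 8, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(8 * 16 * 16, latent_dim),  # assumes 128x128 input spectrograms
        )
        # Decoder mirrors the encoder to reconstruct the input spectrogram.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 8 * 16 * 16),
            nn.Unflatten(1, (8, 16, 16)),
            nn.ConvTranspose2d(8, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 32, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z


# Stage 1 (unsupervised): train the CAE on reconstruction loss alone.
cae = CAE()
recon_loss = nn.MSELoss()
opt = torch.optim.Adam(cae.parameters(), lr=1e-3)

x = torch.rand(4, 1, 128, 128)       # stand-in batch of spectrograms
x_hat, z = cae(x)
loss = recon_loss(x_hat, x)
loss.backward()
opt.step()

# Stage 2 (supervised): a small head on the latent vector for drone vs. bird.
classifier = nn.Linear(32, 2)
logits = classifier(z.detach())      # latent features taken from the trained encoder
```

In this sketch the latent vector produced by the encoder is what would be inspected for class-label preservation before any supervised training, matching the unsupervised evaluation the abstract refers to.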