Author Affiliations: Nanjing Univ, Sch Elect Sci & Engn, Nanjing 210093, Peoples R China; Hong Kong Polytech Univ, Dept Comp, Hong Kong, Peoples R China; Peking Univ, Sch Comp Sci, Natl Key Lab Multimedia Informat Proc, Beijing 100871, Peoples R China; Hong Kong Univ Sci & Technol, Acad Interdisciplinary Studies, Hong Kong, Peoples R China
Publication: IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY (IEEE Trans Circuits Syst Video Technol)
Year/Volume/Issue: 2025, Vol. 35, No. 5
Pages: 5109-5122
Funding: National Key Research and Development Program of China [2021YFA0717700]; National Natural Science Foundation of China; CCF-Ant Research Fund [CCF-AFSG RF20230403]
Keywords: Three-dimensional displays; Feature extraction; Object detection; Solid modeling; Adaptation models; Transformers; Laser radar; Reliability; Circuits and systems; Cameras; Autonomous driving; object recognition; uncertainty
Abstract: Vision-centric Bird's Eye View (BEV) perception holds considerable promise for autonomous driving. Recent studies have prioritized efficiency or accuracy enhancements, yet the issue of domain shift has been overlooked, leading to substantial performance degradation upon transfer. We identify major domain gaps in real-world cross-domain scenarios and initiate the first effort to address the Domain Adaptation (DA) challenge in multi-view 3D object detection for BEV perception. Given the complexity of BEV perception approaches with their multiple components, domain shift accumulation across multi-geometric spaces (e.g., 2D, 3D Voxel, BEV) poses a significant challenge for BEV domain adaptation. In this paper, we introduce an innovative geometric-aware teacher-student framework, BEVUDA++, to mitigate this issue, comprising a Reliable Depth Teacher (RDT) and a Geometric Consistent Student (GCS) model. Specifically, RDT blends target LiDAR with dependable depth predictions to generate depth-aware information based on uncertainty estimation, enhancing the extraction of Voxel and BEV features that are essential for understanding the target domain. To collaboratively reduce the domain shift, GCS maps features from multiple spaces into a unified geometric embedding space, thereby narrowing the gap in data distribution between the two domains. Additionally, we introduce a novel Uncertainty-guided Exponential Moving Average (UEMA) to further reduce error accumulation due to domain shifts, informed by previously obtained uncertainty guidance. To demonstrate the superiority of our proposed method, we conduct comprehensive experiments in four cross-domain scenarios, securing state-of-the-art performance in BEV 3D object detection tasks, e.g., a 12.9% NDS and 9.5% mAP improvement on Day-Night adaptation.
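The record does not spell out how RDT fuses the two depth sources. As a minimal illustrative sketch only (not the paper's method): if the network predicts a per-pixel depth and log-variance, one can trust projected LiDAR returns where they exist and otherwise keep only low-uncertainty predictions. The function name blend_depth, the log-variance parameterization, and the mean-confidence threshold are all assumptions for illustration.

```python
import torch

def blend_depth(pred_depth, pred_logvar, lidar_depth, lidar_mask):
    """Hypothetical uncertainty-aware depth fusion (not the paper's exact rule).

    pred_depth, pred_logvar: (B, H, W) predicted depth and its log-variance
    lidar_depth: (B, H, W) sparse depth from target LiDAR projected to the image
    lidar_mask: (B, H, W) bool, True where a LiDAR return exists
    """
    # High confidence where predicted variance is small.
    confidence = torch.exp(-pred_logvar)
    # Assumed threshold: keep predictions above the per-image mean confidence.
    keep = confidence > confidence.mean(dim=(1, 2), keepdim=True)
    # Prefer LiDAR wherever it is available, fall back to predictions.
    fused = torch.where(lidar_mask, lidar_depth, pred_depth)
    # Zero out pixels with neither LiDAR nor a confident prediction.
    valid = lidar_mask | keep
    return fused * valid
```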
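Likewise, UEMA's exact update is not given here. A common reading of an uncertainty-guided EMA, sketched under that assumption, is to modulate the teacher's momentum by a scalar uncertainty score in [0, 1] (e.g., a normalized mean predictive entropy), so that unreliable target-domain student updates flow into the teacher more slowly. The names uema_update and base_momentum are hypothetical.

```python
import torch

@torch.no_grad()
def uema_update(teacher, student, uncertainty, base_momentum=0.999):
    """Hypothetical uncertainty-guided EMA teacher update.

    uncertainty: assumed scalar in [0, 1]; 0 means fully confident,
    1 means fully uncertain (teacher is left unchanged).
    """
    # Interpolate between the base momentum (confident) and 1.0 (uncertain).
    momentum = base_momentum + (1.0 - base_momentum) * float(uncertainty)
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        # teacher <- momentum * teacher + (1 - momentum) * student
        t_param.mul_(momentum).add_(s_param, alpha=1.0 - momentum)
```

With uncertainty = 0 this reduces to a standard mean-teacher EMA; with uncertainty = 1 the teacher is frozen for that step, which matches the abstract's stated goal of limiting error accumulation under domain shift.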