Author affiliation: Khalifa University, Khalifa University Center for Autonomous Robotic Systems (KUCARS), Abu Dhabi, United Arab Emirates
Publication: IEEE Access
Year/Volume: 2025, Vol. 13
Pages: 17830-17867
Funding: Khalifa University of Science and Technology [8434000534, CIRA-2021-085, RC1-2018-KUCARS]; KU-Stanford
Keywords: Reviews; Object recognition; Computer vision; Computational modeling; Imaging; Image segmentation; Deep learning; Surveys; Optical imaging; Oceans; Underwater computer vision; deep learning; underwater robotics; ocean research; underwater image enhancement; object tracking; object detection
Abstract: Underwater computer vision plays a vital role in ocean research, enabling autonomous navigation, infrastructure inspections, and marine life monitoring. However, the underwater environment presents unique challenges, including color distortion, limited visibility, and dynamic lighting conditions, which hinder the performance of traditional image processing methods. Recent advancements in deep learning (DL) have demonstrated remarkable success in overcoming these challenges by enabling robust feature extraction, image enhancement, and object recognition. This review provides a comprehensive analysis of cutting-edge deep learning architectures designed for underwater object detection, segmentation, and tracking. State-of-the-art (SOTA) models, including AGW-YOLOv8, Feature-Adaptive FPN, and Dual-SAM, have shown substantial improvements in addressing occlusions, camouflage, and small underwater object detection. For tracking tasks, transformer-based models like SiamFCA and FishTrack leverage hierarchical attention mechanisms and convolutional neural networks (CNNs) to achieve high accuracy and robustness in dynamic underwater environments. Beyond optical imaging, this review explores alternative modalities such as sonar, hyperspectral imaging, and event-based vision, which provide complementary data to enhance underwater vision systems. These approaches improve performance under challenging conditions, enabling richer and more informative scene interpretation. Promising future directions are also discussed, emphasizing the need for domain adaptation techniques to improve generalizability, lightweight architectures for real-time performance, and multi-modal data fusion to enhance interpretability and robustness. By critically evaluating current methodologies and highlighting gaps, this review provides insights for advancing underwater computer vision systems to support ocean exploration, ecological conservation, and disaster management.
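As a purely illustrative companion to the abstract's mention of underwater image enhancement, the sketch below applies a classical gray-world white-balance correction to an underwater photo; it is not a method from the reviewed paper, and the input/output file names are hypothetical.

```python
# Minimal gray-world color-correction sketch (illustrative only; not the
# reviewed paper's method). Assumes OpenCV and NumPy are installed and an
# underwater photo exists at the hypothetical path "underwater.jpg".
import numpy as np
import cv2

def gray_world_balance(img_bgr: np.ndarray) -> np.ndarray:
    """Scale each color channel so its mean matches the global mean intensity."""
    img = img_bgr.astype(np.float32)
    channel_means = img.reshape(-1, 3).mean(axis=0)       # per-channel means (B, G, R)
    gains = channel_means.mean() / (channel_means + 1e-6)  # gains that equalize the means
    balanced = img * gains                                  # broadcast gains over all pixels
    return np.clip(balanced, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    raw = cv2.imread("underwater.jpg")                      # hypothetical input image
    corrected = gray_world_balance(raw)
    cv2.imwrite("underwater_balanced.png", corrected)       # hypothetical output path
```

Gray-world balancing only compensates the color cast; the deep-learning enhancement models surveyed in the review additionally address contrast loss and scattering.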