arXiv

Contrastive Learning for Self-Supervised Pre-Training of Point Cloud Segmentation Networks With Image Data

Authors: Janda, Andrej; Wagstaff, Brandon; Ng, Edwin G.; Kelly, Jonathan

Affiliation: Space & Terrestrial Autonomous Robotic Systems Laboratory, University of Toronto, Canada

Publication: arXiv

Year: 2023

Subject: Semantic Segmentation

Abstract: Reducing the quantity of annotations required for supervised training is vital when labels are scarce and costly. This reduction is particularly important for semantic segmentation tasks involving 3D datasets, which are often significantly smaller and more challenging to annotate than their image-based counterparts. Self-supervised pre-training on unlabelled data is one way to reduce the amount of manual annotations needed. Previous work has focused on pre-training with point clouds exclusively. While useful, this approach often requires two or more registered views. In the present work, we combine image and point cloud modalities by first learning self-supervised image features and then using these features to train a 3D model. By incorporating image data, which is often included in many 3D datasets, our pre-training method only requires a single scan of a scene and can be applied to cases where localization information is unavailable. We demonstrate that our pre-training approach, despite using single scans, achieves comparable performance to other multi-scan, point cloud-only methods. Copyright © 2023, The Authors. All rights reserved.
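The record contains no code; the sketch below is only a minimal, hypothetical illustration of the kind of 2D-to-3D contrastive objective the abstract describes, where frozen self-supervised image features supervise a trainable point-feature encoder. The function name point_pixel_info_nce, the tensor shapes, the point-to-pixel pairing, and the use of PyTorch are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): an InfoNCE-style loss that pulls each
# 3D point feature toward the image feature at the pixel it projects to, and
# pushes it away from the image features of other points in the batch.
import torch
import torch.nn.functional as F


def point_pixel_info_nce(point_feats: torch.Tensor,
                         pixel_feats: torch.Tensor,
                         temperature: float = 0.07) -> torch.Tensor:
    """InfoNCE loss over N matched (point, pixel) feature pairs.

    point_feats: (N, D) features from a trainable 3D backbone.
    pixel_feats: (N, D) frozen self-supervised image features sampled at the
                 pixels the N points project to (assumed precomputed).
    """
    p = F.normalize(point_feats, dim=1)
    q = F.normalize(pixel_feats, dim=1)
    logits = p @ q.t() / temperature                      # (N, N) similarities
    targets = torch.arange(p.size(0), device=p.device)    # positives on the diagonal
    return F.cross_entropy(logits, targets)


if __name__ == "__main__":
    # Toy usage with random tensors standing in for real encoder outputs.
    torch.manual_seed(0)
    pts = torch.randn(128, 64, requires_grad=True)   # 3D features (trainable)
    imgs = torch.randn(128, 64)                        # 2D features (frozen)
    loss = point_pixel_info_nce(pts, imgs)
    loss.backward()
    print(f"contrastive loss: {loss.item():.4f}")
```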
