Author Affiliations: Univ Michigan, Dept Elect Engn & Comp Sci, Ann Arbor, MI 48109, USA; Daegu Gyeongbuk Inst Sci & Technol, Dept Informat & Commun Engn, Daegu, South Korea; Inria Paris, WILLOW Team, Paris, France; Ecole Normale Super, Paris, France; POSTECH, Dept Comp Sci & Engn, Pohang, South Korea
Publication: IEEE SIGNAL PROCESSING MAGAZINE
Year/Volume/Issue: 2017, Vol. 34, No. 6
Pages: 39-49
Core Indexing:
Funding: Institute for Information and Communications Technology Promotion [2016-0-00563]; National Research Foundation [NRF-2011-0031648]; DGIST R&D Program funded by the Korean government [17-ST-02]; Ministry of Culture, Sports, and Tourism of Korea; Korea Creative Content Agency
Keywords: Machine learning; Semantics; Visualization; Image segmentation; Image recognition; Neural networks; Benchmark testing
Abstract: Semantic segmentation is a popular visual recognition task whose goal is to estimate pixel-level object class labels in images. This problem has recently been handled by deep convolutional neural networks (DCNNs), and state-of-the-art techniques achieve impressive results on public benchmark data sets. However, learning DCNNs demands a large amount of annotated training data, while segmentation annotations in existing data sets are significantly limited in both quantity and diversity due to the heavy annotation cost. Weakly supervised approaches tackle this issue by leveraging weak annotations such as image-level labels and bounding boxes, which are either readily available in existing large-scale data sets for image classification and object detection or easily obtained thanks to their low annotation cost. The main challenge in weakly supervised semantic segmentation is then that the annotations are incomplete and lack the accurate object boundary information required to learn segmentation. This article provides a comprehensive overview of weakly supervised approaches for semantic segmentation. Specifically, we describe how these approaches overcome the limitations and discuss research directions worth investigating to improve performance.
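As a rough illustration of the supervision gap the abstract describes, the sketch below contrasts a dense pixel-wise cross-entropy loss (full pixel-level supervision) with a pooling-based image-level classification loss (weak supervision from image tags only). It is a minimal PyTorch-style sketch under assumed shapes and a generic global-average-pooling baseline; it is not the specific methods surveyed in the article, and the function names are hypothetical.

```python
import torch
import torch.nn.functional as F

# Assumed shape: logits (N, C, H, W) produced by any segmentation network.

def fully_supervised_loss(logits, pixel_labels):
    """Per-pixel cross-entropy; requires dense ground-truth masks (N, H, W)."""
    return F.cross_entropy(logits, pixel_labels)

def weakly_supervised_loss(logits, image_labels):
    """Image-level supervision only: pool pixel scores into a per-image class
    score and apply multi-label binary cross-entropy against image tags
    (N, C) in {0, 1}. Note that no object boundary information is used."""
    image_scores = logits.mean(dim=(2, 3))  # global average pooling over H, W
    return F.binary_cross_entropy_with_logits(image_scores, image_labels)

# Toy usage with random tensors standing in for network outputs and labels.
N, C, H, W = 2, 21, 64, 64
logits = torch.randn(N, C, H, W)
pixel_labels = torch.randint(0, C, (N, H, W))                      # dense masks
image_labels = torch.zeros(N, C).scatter_(1, torch.randint(0, C, (N, 3)), 1.0)  # image tags

print(fully_supervised_loss(logits, pixel_labels).item())
print(weakly_supervised_loss(logits, image_labels).item())
```

The contrast makes the abstract's point concrete: the weak loss only constrains which classes appear somewhere in the image, so any boundary-level localization must be recovered by additional mechanisms, which is what the surveyed approaches address.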