Author affiliations: Chinese Acad Sci, Northeast Inst Geog & Agroecol, Changchun 130000, Jilin, Peoples R China; Univ Lancaster, Lancaster Environm Ctr, Lancaster LA1 4YQ, England; Ctr Ecol & Hydrol, Lib Ave, Lancaster LA1 4AP, England; Univ Lancaster, Fac Sci & Technol, Lancaster LA1 4YR, England
Publication: REMOTE SENSING
Year/Volume/Issue: 2019, Vol. 11, No. 20
Pages: 2370
Core indexing:
Subject classification: 0830 [Engineering - Environmental Science and Engineering (degrees awardable in Engineering, Science, or Agriculture)]; 1002 [Medicine - Clinical Medicine]; 070801 [Science - Solid Earth Geophysics]; 07 [Science]; 08 [Engineering]; 0708 [Science - Geophysics]; 0816 [Engineering - Surveying and Mapping Science and Technology]
Funding: National Key Research and Development Program of China [2017YFB0503602]; National Natural Science Foundation of China [41301465, 41671397]; Jilin Province Science and Technology Development Program [20170204025SF]
Keywords: crop mapping; object-based image classification; deep learning; decision fusion; FSR remotely sensed imagery
Abstract: Accurate information on crop distribution is of great importance for a range of applications, including crop yield estimation, greenhouse gas emission measurement and management policy formulation. Fine spatial resolution (FSR) remotely sensed imagery provides new opportunities for crop mapping at a detailed level. However, crop classification from FSR imagery is known to be challenging due to the great intra-class variability and low inter-class disparity in the data. In this research, a novel hybrid method (OSVM-OCNN) was proposed for crop classification from FSR imagery, which combines a shallow-structured object-based support vector machine (OSVM) with a deep-structured object-based convolutional neural network (OCNN). Unlike pixel-wise classification methods, the OSVM-OCNN method operates on objects as the basic units of analysis and, thus, classifies remotely sensed images at the object level. The proposed OSVM-OCNN harvests the complementary characteristics of the two sub-models: the OSVM effectively extracts low-level within-object features, while the OCNN captures and exploits high-level between-object information. By using a rule-based fusion strategy based primarily on the OCNN's prediction probability, the two sub-models were fused in a concise and effective manner. We investigated the effectiveness of the proposed method over two test sites (i.e., S1 and S2) with distinctive and heterogeneous patterns of different crops in the Sacramento Valley, California, using FSR Synthetic Aperture Radar (SAR) and FSR multispectral data, respectively. Experimental results illustrated that the proposed OSVM-OCNN approach markedly increased the classification accuracy for most crop types in S1 and all crop types in S2, and it consistently achieved the highest accuracy in comparison with its two object-based sub-models (OSVM and OCNN) as well as the pixel-wise SVM (PSVM) and CNN (PCNN) methods. Our findings, thus, suggest that the proposed OSVM-OCNN method offers an effective means of crop classification from FSR remotely sensed imagery.
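The abstract describes a rule-based decision fusion driven primarily by the OCNN's prediction probability, with the OSVM supplying complementary low-level within-object evidence. The Python sketch below illustrates one such object-level rule under stated assumptions: the function name fuse_osvm_ocnn, the 0.9 confidence threshold, and the fall-back-to-OSVM behaviour are hypothetical, since the paper's exact fusion rules are not reproduced in this record.

    import numpy as np

    def fuse_osvm_ocnn(ocnn_labels, ocnn_probs, osvm_labels, threshold=0.9):
        # Rule-based decision fusion over image objects (illustrative sketch).
        # ocnn_labels: per-object class predicted by the OCNN
        # ocnn_probs:  per-object maximum class probability from the OCNN
        # osvm_labels: per-object class predicted by the OSVM
        # threshold:   illustrative confidence cut-off (not taken from the paper)
        ocnn_labels = np.asarray(ocnn_labels)
        ocnn_probs = np.asarray(ocnn_probs)
        osvm_labels = np.asarray(osvm_labels)
        # Keep the OCNN label where it is confident; otherwise fall back to the OSVM.
        return np.where(ocnn_probs >= threshold, ocnn_labels, osvm_labels)

    # Example: three image objects, the second with a low-confidence OCNN prediction.
    fused = fuse_osvm_ocnn(ocnn_labels=[2, 0, 1],
                           ocnn_probs=[0.97, 0.55, 0.92],
                           osvm_labels=[2, 3, 1])
    print(fused)  # -> [2 3 1]

The design choice mirrored here is that the deep OCNN is treated as the primary classifier and the shallow OSVM as a corrective fallback for low-confidence objects, which is one concise way to realise the probability-driven fusion the abstract mentions.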