Author affiliations: Univ Adelaide, Sch Civil Environm & Min Engn, Adelaide, SA, Australia; Univ Newcastle, Sch Engn, Callaghan, NSW, Australia; Hohai Univ, Ctr Global Change & Water Cycle, State Key Lab Hydrol Water Resources & Hydraul Engn, Nanjing, Jiangsu, Peoples R China
Publication: WATER RESOURCES RESEARCH
Year/Volume/Issue: 2018, Vol. 54, No. 11
Pages: 9432-9455
Subject classification: 0710 [Science - Biology]; 0830 [Engineering - Environmental Science & Engineering (degrees may be conferred in engineering, science, or agriculture)]; 08 [Engineering]; 081501 [Engineering - Hydrology & Water Resources]; 0815 [Engineering - Hydraulic Engineering]
Funding: Fundamental Research Funds for the Central Universities [2018B55414]; National Natural Science Foundation of China [41561134016, 51879068]
Keywords: optimization; benchmarking; model calibration; performance trade-offs; algorithm efficiency; robustness; multistart optimization
Abstract: Environmental modelers using optimization algorithms for model calibration face an ambivalent choice. Some algorithms, for example, Newton-type methods, are fast but struggle to consistently find global parameter optima; other algorithms, for example, evolutionary methods, boast better global convergence but at much higher cost (e.g., requiring more objective function calls). Trade-offs between accuracy/robustness versus cost are ubiquitous in numerical computation, yet environmental modeling studies have lacked a systematic framework for quantifying these trade-offs. This study develops a framework for benchmarking stochastic optimization algorithms in the context of environmental model calibration, where multiple algorithm invocations are typically necessary. We define reliability as the probability of finding the desired (global or tolerable) optimum from random initial points and estimate the number of invocations to find the desired optimum with prescribed confidence (here 95%). A robust algorithm should achieve consistently high reliability across many problems. A characteristic efficiency metric for algorithm benchmarking is defined as the total cost (objective function calls over multiple invocations) to find the desired optimum with prescribed confidence. This approach avoids the pitfalls of existing approaches that compare costs without controlling the confidence in algorithm success. A case study illustrates the framework by benchmarking the Levenberg-Marquardt and Shuffled Complex Evolution (SCE) algorithms over three catchments and four hydrological models. In 8 of 12 scenarios, Levenberg-Marquardt is more efficient than SCE by sacrificing some of its speed advantage to match SCE reliability through more invocations. The proposed framework is easy to apply and can help guide algorithm selection in environmental model calibration.
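Illustration: the following short Python sketch shows one plausible reading of the abstract's reliability/efficiency calculation, assuming each invocation succeeds independently with probability equal to the reliability. The function names and the numeric values are hypothetical and are not taken from the paper.

import math

def invocations_for_confidence(reliability: float, confidence: float = 0.95) -> int:
    # Number of independent invocations needed so that the probability of
    # finding the desired optimum at least once reaches the prescribed
    # confidence, i.e. the smallest n with 1 - (1 - reliability)**n >= confidence.
    if not 0.0 < reliability < 1.0:
        raise ValueError("reliability must lie strictly between 0 and 1")
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - reliability))

def characteristic_efficiency(reliability: float, cost_per_invocation: float,
                              confidence: float = 0.95) -> float:
    # Total cost (objective function calls summed over the multiple
    # invocations) to find the desired optimum with prescribed confidence.
    return invocations_for_confidence(reliability, confidence) * cost_per_invocation

# Hypothetical example values: a fast, low-reliability algorithm versus a
# slow, high-reliability one (numbers invented for illustration only).
fast = characteristic_efficiency(reliability=0.4, cost_per_invocation=200)
slow = characteristic_efficiency(reliability=0.95, cost_per_invocation=5000)
print(fast, slow)  # 1200.0 vs 5000.0 objective function calls

Under these invented numbers the fast but unreliable algorithm still wins on total cost, which mirrors the abstract's finding that Levenberg-Marquardt can trade speed for extra invocations and remain more efficient than SCE in many scenarios.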