Author affiliations: Department of Chemical Engineering, Indian Institute of Technology (IIT) Madras; Department of Biotechnology, Bhupat and Jyoti Mehta School of Biosciences, IIT Madras; Robert Bosch Centre for Data Science and Artificial Intelligence (RBC-DSAI), IIT Madras; Initiative for Biological Systems Engineering, IIT Madras, Chennai 600 036, India
Publication: IFAC-PapersOnLine
Year/Volume/Issue: 2020, Vol. 53, No. 1
Pages: 634-639
Keywords: Information Gain; Bhattacharyya Coefficient; Approximate Bayesian Computation (ABC); Model Sloppiness; Practical Identifiability; Model Selection
Abstract: In data-driven dynamical modeling, precise estimation of the parameters of large models from limited data is a challenging task. The precision of the parameter estimates depends strongly on the information contained in the data; loss of practical identifiability and sloppiness in the model structure are major obstacles to precise parameter estimation, and both are closely related to the information content of the data. Quantifying information is therefore an important step in data-driven modeling. It is a well-studied problem in the frequentist setting, where the Fisher Information is one of the most widely used metrics. However, Fisher Information computed via maximum likelihood estimation cannot accommodate prior knowledge about the parameters, even though such prior knowledge, combined with informative experiments, improves the precision of the estimates. Bayesian estimation accommodates prior information in the form of a probability density function, yet there has been very little work on quantifying information in the Bayesian framework. In this work, we introduce a new method for estimating information gain in the Bayesian framework using the Bhattacharyya coefficient. The bounds of the coefficient admit a natural and insightful interpretation in terms of information gain on the parameter of interest. We also demonstrate through case studies that the information gain of each parameter indicates loss of practical identifiability and sloppy parameters. It is also shown that the proposed information gain can be used as a model selection tool in black-box identification.
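The Bhattacharyya coefficient referenced in the abstract is, for two densities p and q on a common grid, BC = Σᵢ √(pᵢ qᵢ), which lies in [0, 1]: a value near 1 means the posterior has barely moved from the prior (little information gained from the data), while a value near 0 means the data were highly informative about that parameter. A minimal sketch of this interpretation, assuming a discretized prior and posterior (the Gaussian example below is illustrative, not one of the paper's case studies):

```python
import numpy as np

def bhattacharyya_coefficient(p, q):
    """Bhattacharyya coefficient of two densities tabulated on the same grid.

    Both inputs are normalized to sum to 1; the result is in [0, 1],
    with 1 meaning the two distributions are identical.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(np.sqrt(p * q)))

# Illustrative example: a broad Gaussian prior N(0, 1) versus a
# narrower, shifted posterior N(0.5, 0.25) for a single parameter.
x = np.linspace(-6.0, 6.0, 2001)
prior = np.exp(-0.5 * x**2)
posterior = np.exp(-0.5 * ((x - 0.5) / 0.25) ** 2)

bc = bhattacharyya_coefficient(prior, posterior)
# bc well below 1 here: the data concentrated the posterior relative
# to the prior, i.e. a nonzero information gain on this parameter.
```

In the paper's setting, a parameter whose posterior remains close to its prior (BC ≈ 1) is a candidate for practical non-identifiability or sloppiness, since the data contributed almost no information about it.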