Optimal subset selection for causal inference using machine learning ensembles and particle swarm optimization

Authors: Sharma, Dhruv; Willy, Christopher; Bischoff, John

Author affiliations: George Washington Univ, 2023 N Cleveland St, Arlington, VA 22201, USA; George Washington Univ, 21610 South Essex Dr, Lexington Pk, MD 20653, USA; George Washington Univ, 203 Greenhow Ct SE, Leesburg, VA, USA

Published in: COMPLEX & INTELLIGENT SYSTEMS (Complex Intell. Syst.)

Year/Volume/Issue: 2021, Vol. 7, No. 1

Pages: 41-59

Keywords: Analytics; Evolutionary computing; Swarm optimization; Machine learning

Abstract: We suggest and evaluate a method for the optimal construction of synthetic treatment and control samples for the purpose of drawing causal inference. The balance optimization subset selection problem, which formulates the minimization of aggregate imbalance in covariate distributions to reduce bias in data, is a new area of study in operations research. We investigate a novel metric, the cross-validated area under the receiver operating characteristic curve (AUC), as a measure of balance between treatment and control groups. The proposed approach provides direct and automatic balancing of covariate distributions. In addition, the AUC-based approach is able to detect subtler distributional differences than existing measures, such as simple empirical mean/variance and count-based metrics. Thus, optimizing AUC achieves greater balance than the existing methods. Using 5 widely used real data sets and 7 synthetic data sets, we show that optimization of samples using existing methods (Chi-square, mean/variance differences, Kolmogorov-Smirnov, and Mahalanobis) results in samples containing imbalance that is detectable using machine learning ensembles. We minimize covariate imbalance by minimizing the absolute value of the distance of the maximum cross-validated AUC on M folds from 0.50, using evolutionary optimization. We demonstrate that particle swarm optimization (PSO) outperforms modified cuckoo search (MCS) for a gradient-free, non-linear, noisy cost function. To compute AUCs, we use supervised binary classification approaches from the machine learning and credit scoring literature. Using superscore ensembles adds to the classifier-based two-sample testing literature. If the mean cross-validated AUC based on machine learning is 0.50, the two groups are indistinguishable and suitable for causal inference.
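
The abstract's core idea can be illustrated with a minimal Python sketch. This is not the authors' implementation: GradientBoostingClassifier stands in for the paper's superscore ensemble, a random search stands in for PSO, and all names (imbalance_cost, candidate_mask, X_ctrl_pool) are hypothetical. It shows how a candidate control subset can be scored by labeling treatment rows 1 and the candidate subset 0, cross-validating a classifier, and taking |max fold AUC - 0.50| as the imbalance cost to be minimized.

```python
# Minimal sketch of the AUC-based imbalance cost described in the abstract.
# Assumptions: GradientBoostingClassifier replaces the superscore ensemble,
# random search replaces PSO, and all data and names are illustrative.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

def imbalance_cost(X_treat, X_ctrl_pool, candidate_mask, n_folds=5, seed=0):
    """Return |max cross-validated AUC - 0.5| for a candidate control subset."""
    X_ctrl = X_ctrl_pool[candidate_mask]
    X = np.vstack([X_treat, X_ctrl])
    y = np.concatenate([np.ones(len(X_treat)), np.zeros(len(X_ctrl))])
    clf = GradientBoostingClassifier(random_state=seed)  # stand-in ensemble
    aucs = cross_val_score(clf, X, y, cv=n_folds, scoring="roc_auc")
    return abs(aucs.max() - 0.5)  # 0 means the groups are indistinguishable

# Toy usage: random search stands in for the paper's particle swarm optimizer.
rng = np.random.default_rng(0)
X_treat = rng.normal(0.3, 1.0, size=(80, 4))       # synthetic treatment group
X_ctrl_pool = rng.normal(0.0, 1.0, size=(400, 4))  # synthetic control pool

best_mask, best_cost = None, np.inf
for _ in range(20):  # PSO would search this subset space instead
    mask = rng.choice(len(X_ctrl_pool), size=80, replace=False)
    cost = imbalance_cost(X_treat, X_ctrl_pool, mask)
    if cost < best_cost:
        best_mask, best_cost = mask, cost
print(f"best |max AUC - 0.5| over folds: {best_cost:.3f}")
```

A cost near zero indicates the classifier cannot distinguish the treatment group from the selected control subset, which is the paper's criterion for a sample suitable for causal inference.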
