
Stability-Adjusted Cross-Validation for Sparse Linear Regression

Authors: Cory-Wright, Ryan; Gómez, Andrés

Author affiliations: Department of Analytics, Marketing and Operations, Imperial College Business School, London, United Kingdom; Department of Industrial and Systems Engineering, Viterbi School of Engineering, University of Southern California, CA, United States

Publication: arXiv

Year: 2023

Subject: Mixed integer linear programming

Abstract: Given a high-dimensional covariate matrix and a response vector, ridge-regularized sparse linear regression selects a subset of features that explains the relationship between covariates and the response in an interpretable manner. To select the sparsity and robustness of linear regressors, techniques like k-fold cross-validation are commonly used for hyperparameter tuning. However, cross-validation substantially increases the computational cost of sparse regression, as it requires solving many mixed-integer optimization problems (MIOs) for each hyperparameter combination. Additionally, validation metrics often serve as noisy estimators of test set errors, with different hyperparameter combinations leading to models with different noise levels. Therefore, optimizing over these metrics is vulnerable to out-of-sample disappointment, especially in underdetermined settings. To improve upon this state of affairs, we make two key contributions. First, motivated by the generalization theory literature, we propose selecting hyperparameters that minimize a weighted sum of a cross-validation metric and a model's output stability, thus reducing the risk of poor out-of-sample performance. Second, we leverage ideas from the mixed-integer optimization literature to obtain computationally tractable relaxations of k-fold cross-validation metrics and the output stability of regressors, facilitating hyperparameter selection after solving fewer MIOs. These relaxations result in an efficient cyclic coordinate descent scheme, achieving lower validation errors than traditional methods such as grid search. On synthetic datasets, our confidence adjustment procedure improves out-of-sample performance by 2%–5% compared to minimizing the k-fold error alone. On 13 real-world datasets, our confidence adjustment procedure reduces test set error by 2%, on average. Copyright © 2023, The Authors. All rights reserved.
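
As a minimal sketch of the selection criterion described in the abstract (the notation below is assumed for illustration and is not taken from the paper): write \(\mathrm{CV}_K(k,\gamma)\) for the K-fold cross-validation error of a k-sparse, ridge-regularized regressor with ridge parameter \(\gamma\), and \(\mathcal{S}(k,\gamma)\) for a measure of that regressor's output stability. The stability-adjusted procedure then selects hyperparameters via

\[
(k^\star,\gamma^\star) \in \arg\min_{k,\,\gamma}\; \mathrm{CV}_K(k,\gamma) + \mu\,\mathcal{S}(k,\gamma), \qquad \mu \ge 0,
\]

where \(\mu\) weights the stability term against the raw validation error; setting \(\mu = 0\) recovers ordinary cross-validation-based selection.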
