Improving Generalization of Complex Models under Unbounded Loss Using PAC-Bayes Bounds

Authors: Zhang, Xitong; Ghosh, Avrajit; Liu, Guangliang; Wang, Rongrong

Affiliations: Department of Computational Mathematics, Science and Engineering, Michigan State University, United States; Department of Computer Science and Engineering, Michigan State University, United States; Department of Computational Mathematics, Science and Engineering, and Department of Mathematics, Michigan State University, United States

Published in: arXiv

Year: 2023


Subject: Network architecture

Abstract: Previous research on PAC-Bayes learning theory has focused extensively on establishing tight upper bounds for test errors. A recently proposed training procedure, called PAC-Bayes training, updates the model toward minimizing these bounds. Although this approach is theoretically sound, in practice it has not achieved a test error as low as that obtained by empirical risk minimization (ERM) with carefully tuned regularization hyperparameters. Additionally, existing PAC-Bayes training algorithms (e.g., Pérez-Ortiz et al. (2021)) often require bounded loss functions and may need a search over priors with additional datasets, which limits their broader applicability. In this paper, we introduce a new PAC-Bayes training algorithm with improved performance and reduced reliance on prior tuning. This is achieved by establishing a new PAC-Bayes bound for unbounded loss and a theoretically grounded approach that jointly trains the prior and posterior using the same dataset. Our comprehensive evaluations across various classification tasks and neural network architectures demonstrate that the proposed method not only outperforms existing PAC-Bayes training algorithms but also approximately matches the test accuracy of ERM optimized by SGD/Adam with various regularization methods and optimal hyperparameters.

MSC Codes: 62C12

© 2023, CC BY.
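To make the idea of PAC-Bayes training concrete, here is a minimal illustrative sketch of the generic procedure the abstract refers to: learn a Gaussian posterior Q = N(mu, diag(sigma^2)) over model weights by minimizing a McAllester-style bound surrogate, E_Q[empirical loss] + sqrt(KL(Q || P) / (2n)), with a fixed Gaussian prior P. This is not the paper's algorithm (which handles unbounded losses and learns the prior jointly with the posterior on the same data); the model, names, and hyperparameters below are all illustrative assumptions.

```python
import numpy as np

# Generic PAC-Bayes training sketch (assumption: logistic regression,
# McAllester-style objective, fixed isotropic Gaussian prior).
rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
y = (X @ rng.normal(size=d) > 0).astype(float)   # synthetic binary labels

mu = np.zeros(d)              # trainable posterior mean
rho = np.full(d, -3.0)        # sigma = softplus(rho) keeps the std positive
s0 = 1.0                      # prior std (fixed here; the paper learns the prior)
lr, steps, mc = 0.1, 300, 8   # step size, iterations, Monte Carlo samples

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softplus(z):
    return np.log1p(np.exp(z))

for t in range(steps):
    sigma = softplus(rho)
    g_mu, g_rho = np.zeros(d), np.zeros(d)
    for _ in range(mc):                        # reparameterization: w = mu + sigma * eps
        eps = rng.normal(size=d)
        w = mu + sigma * eps
        p = sigmoid(X @ w)
        dl_dw = X.T @ (p - y) / n              # grad of mean logistic loss w.r.t. w
        g_mu += dl_dw / mc
        g_rho += dl_dw * eps * sigmoid(rho) / mc   # chain rule through softplus
    # Complexity term sqrt(KL / (2n)) for diagonal-Gaussian Q vs N(0, s0^2 I).
    kl = 0.5 * np.sum(sigma**2 / s0**2 + mu**2 / s0**2 - 1.0
                      - 2.0 * np.log(sigma / s0))
    c = 1.0 / (2.0 * np.sqrt(2.0 * n * max(kl, 1e-8)))
    g_mu += c * (mu / s0**2)
    g_rho += c * (sigma / s0**2 - 1.0 / sigma) * sigmoid(rho)
    mu -= lr * g_mu
    rho -= lr * g_rho

print("final KL:", kl, "train error at posterior mean:",
      np.mean((sigmoid(X @ mu) > 0.5) != y))
```

The reparameterization trick makes the Monte Carlo estimate of the expected loss differentiable in (mu, rho), so the bound surrogate can be minimized with plain gradient descent; the square-root KL term plays the role of the regularizer that ERM would otherwise need to have tuned by hand.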
