
Variable selection using MM algorithms

Authors: Hunter, D.R.; Li, R.Z.

Affiliation: Penn State Univ, Dept Stat, University Park, PA 16802, USA

Journal: Annals of Statistics (Ann. Stat.)

Year/Volume/Issue: 2005, Vol. 33, No. 4

Pages: 1617-1642


Subject classification: 07 [Science] 0714 [Science - Statistics (degrees awardable in Science or Economics)] 0701 [Science - Mathematics] 070101 [Science - Pure Mathematics]

Funding: NIDA NIH HHS [P50 DA010075, P50 DA010075-100008]. Funding source: Medline.

Keywords: AIC; BIC; EM algorithm; LASSO; MM algorithm; penalized likelihood; oracle estimator; SCAD

Abstract: Variable selection is fundamental to high-dimensional statistical modeling. Many variable selection techniques may be implemented by maximum penalized likelihood using various penalty functions. Optimizing the penalized likelihood function is often challenging because it may be nondifferentiable and/or nonconcave. This article proposes a new class of algorithms for finding a maximizer of the penalized likelihood for a broad class of penalty functions. These algorithms operate by perturbing the penalty function slightly to render it differentiable, then optimizing this differentiable function using a minorize-maximize (MM) algorithm. MM algorithms are useful extensions of the well-known class of EM algorithms, a fact that allows us to analyze the local and global convergence of the proposed algorithm using some of the techniques employed for EM algorithms. In particular, we prove that when our MM algorithms converge, they must converge to a desirable point; we also discuss conditions under which this convergence may be guaranteed. We exploit the Newton-Raphson-like aspect of these algorithms to propose a sandwich estimator for the standard errors of the estimators. Our method performs well in numerical tests.
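The perturbed-penalty MM idea described in the abstract can be illustrated in the simplest setting. For LASSO-penalized least squares, majorizing the (perturbed) L1 penalty by a quadratic at the current iterate turns each MM step into a weighted ridge solve. The sketch below is an illustration under that assumption, not the paper's general likelihood framework; the function name `mm_lasso` and the choice of starting value are ours:

```python
import numpy as np

def mm_lasso(X, y, lam, eps=1e-6, n_iter=200):
    """Perturbed-penalty MM iteration for LASSO-penalized least squares.

    The L1 penalty lam*|t| is perturbed by eps to make it differentiable,
    then majorized at the current iterate theta_k by a quadratic with
    curvature lam / (eps + |theta_k|).  Maximizing the resulting surrogate
    is a weighted ridge regression, solved in closed form each step.
    """
    n, p = X.shape
    XtX, Xty = X.T @ X, X.T @ y
    # ridge start (an arbitrary but convenient initial value)
    theta = np.linalg.solve(XtX + lam * np.eye(p), Xty)
    for _ in range(n_iter):
        # quadratic-majorizer weights; for the L1 penalty p'_lam(|t|) = lam
        w = lam / (eps + np.abs(theta))
        theta = np.linalg.solve(XtX + np.diag(w), Xty)
    return theta
```

Each update can only increase the surrogate and hence the perturbed penalized objective, which is the ascent property that MM inherits from EM; coefficients of irrelevant predictors are driven toward zero (to within the eps perturbation) rather than set exactly to zero.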
