The focus of this paper is on Q-Lasso, introduced by Alghamdi et al. (2013) as an extension of the Lasso of Tibshirani (1996). Here $Q$ is a closed convex subset of a Euclidean $m$-space, $m \in \mathbb{N}$, representing the set of errors when linear measurements are taken to recover a signal/image via the Lasso. Motivated by recent work of Wang (2013), we study two new penalty methods for Q-Lasso based on difference-of-convex-functions (DC, for short) programming, in which the DC objectives are the difference of the $\ell_1$ and $\ell_{\sigma_q}$ norms and the difference of the $\ell_1$ and $\ell_r$ norms with $r > 1$. Exploiting the special structure of the $\ell_{\sigma_q}$ norm by means of a generalized $q$-term shrinkage operator, we design a proximal gradient algorithm for the DC $\ell_1$-$\ell_{\sigma_q}$ model. Then, based on a majorization scheme, we develop a majorized penalty algorithm for the DC $\ell_1$-$\ell_r$ model. Convergence results for both new algorithms are presented as well. We emphasize that extensive simulations in the case $Q = \{b\}$ show that the two new algorithms offer improved signal-recovery performance at reduced computational cost relative to state-of-the-art $\ell_1$ and $\ell_p$ ($p \in (0,1)$) models; see Wang (2013). We also devise two DC algorithms in the spirit of Jun-ya et al. (2017), where an exact DC representation of the cardinality constraint is investigated and the largest-$q$ norm $\ell_{\sigma_q}$ is likewise employed, and we present numerical results showing the efficiency of our DC algorithms in comparison with methods using other penalty terms in the context of quadratic programming.
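To fix notation, the following display records the natural reading of the models described above; the precise formulations (weights, handling of the distance term) in Alghamdi et al. (2013) and Wang (2013) may differ in detail, so this should be read as an assumed sketch rather than a quotation:

```latex
% Q-Lasso (assumed form): d(., Q) denotes the Euclidean distance to Q
\min_{x \in \mathbb{R}^n} \; \tfrac{1}{2}\, d\big(Ax, Q\big)^2 + \gamma \|x\|_1

% DC-penalized variants (assumed forms), where \|x\|_{\sigma_q} is the
% largest-q norm, i.e. the sum of the q largest entries of |x|:
\min_{x} \; \tfrac{1}{2}\, d\big(Ax, Q\big)^2
          + \gamma \big( \|x\|_1 - \|x\|_{\sigma_q} \big)
\qquad
\min_{x} \; \tfrac{1}{2}\, d\big(Ax, Q\big)^2
          + \gamma \big( \|x\|_1 - \|x\|_r \big), \quad r > 1
```

As a minimal numerical sketch, assuming $Q = \{b\}$ and a standard DCA-type linearization of the concave term (the paper's generalized $q$-term shrinkage operator is not reproduced here), the $\ell_1$-$\ell_{\sigma_q}$ iteration can be prototyped as follows; all function names, the step size, and the toy data are illustrative, not taken from the paper:

```python
import numpy as np

def largest_q_norm(x, q):
    """Largest-q norm ||x||_{sigma_q}: the sum of the q largest entries of |x|."""
    return np.sort(np.abs(x))[::-1][:q].sum()

def soft_threshold(z, tau):
    """Componentwise soft-thresholding: the proximal operator of tau * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def dc_l1_sigmaq(A, b, gamma, q, n_iter=500):
    """DCA-style proximal gradient sketch for
        min_x  0.5 * ||A x - b||^2 + gamma * (||x||_1 - ||x||_{sigma_q}),
    i.e. the Q = {b} case. At each iterate the concave part
    -gamma * ||x||_{sigma_q} is linearized via a subgradient w (signs of the
    q largest-magnitude components, zero elsewhere), after which a standard
    ISTA step handles the remaining convex l1-regularized subproblem."""
    n = A.shape[1]
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant of the smooth part
    x = np.zeros(n)
    for _ in range(n_iter):
        w = np.zeros(n)                       # subgradient of ||.||_{sigma_q} at x
        top = np.argsort(np.abs(x))[::-1][:q]
        w[top] = np.sign(x[top])
        grad = A.T @ (A @ x - b) - gamma * w  # gradient of the smooth + linearized part
        x = soft_threshold(x - step * grad, step * gamma)
    return x

# Toy usage: recover a 5-sparse signal from 40 noisy linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[rng.choice(100, 5, replace=False)] = rng.standard_normal(5)
b = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = dc_l1_sigmaq(A, b, gamma=0.1, q=5)
```

The $\ell_1$-$\ell_r$ model fits the same skeleton: one would replace `w` by the gradient of the $\ell_r$ norm, $\operatorname{sign}(x) \odot |x|^{r-1} / \|x\|_r^{r-1}$ for $x \neq 0$, whose linearization majorizes the concave term $-\gamma\|x\|_r$ in the spirit of the majorization scheme mentioned above; the paper's actual majorized penalty algorithm may differ in its inner solves and penalty updates.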