Subspace clustering has been widely applied to detect meaningful clusters in high-dimensional data spaces, and sparse subspace clustering (SSC) achieves superior clustering performance by relaxing the l0-minimization problem to an l1-norm one. Although replacing the l0-norm with the l1-norm makes the objective function convex, it can incur large errors on large coefficients in some cases. In this paper, we study a sparse subspace clustering algorithm based on a nonconvex modeling formulation. Specifically, we introduce a nonconvex pseudo-norm that approximates l0-minimization more closely than the traditional l1-minimization framework and consequently yields a better affinity matrix. However, this formulation makes the optimization challenging, because the traditional alternating direction method of multipliers (ADMM) runs into trouble on the nonconvex subproblems. In view of this, reweighting techniques are employed to make these subproblems convex and easily solvable. We provide several guarantees to derive the convergence results, proving that the nonconvex algorithm converges globally to a critical point. Experiments on two real-world problems, motion segmentation and face clustering, show that our method outperforms state-of-the-art techniques.
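The reweighting idea in the abstract can be illustrated with a generic iteratively reweighted l1 scheme (in the style of Candès, Wakin, and Boyd) on a small sparse-recovery problem. This is a minimal sketch, not the paper's exact pseudo-norm or ADMM solver: the weight update w_i = 1/(|x_i| + eps), the ISTA inner solver, and all parameters below are illustrative assumptions.

```python
import numpy as np

def weighted_ista(A, b, weights, lam=0.05, iters=500):
    """Proximal gradient (ISTA) for min_x 0.5*||Ax - b||^2 + lam * sum_i w_i |x_i|."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - step * (A.T @ (A @ x - b))          # gradient step on the smooth term
        thresh = lam * step * weights               # per-coordinate soft threshold
        x = np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)
    return x

def reweighted_l1(A, b, lam=0.05, eps=1e-2, outer=5):
    """Iteratively reweighted l1: small coefficients get large weights w_i = 1/(|x_i| + eps),
    so the weighted l1 penalty mimics the l0 penalty more closely with each outer pass."""
    w = np.ones(A.shape[1])
    for _ in range(outer):
        x = weighted_ista(A, b, w, lam=lam)
        w = 1.0 / (np.abs(x) + eps)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 60))                   # underdetermined system
x_true = np.zeros(60)
x_true[[3, 17, 42]] = [1.5, -2.0, 1.0]              # 3-sparse ground truth
b = A @ x_true
x_hat = reweighted_l1(A, b)
print(np.flatnonzero(np.abs(x_hat) > 0.1))          # recovered support
```

In SSC each data point would play the role of b and the remaining points the role of A, and the recovered coefficients would populate one column of the affinity matrix; the toy problem above only shows the reweighting mechanism itself.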
Best sparse tensor rank-1 approximation consists of finding a projection of a given data tensor onto the set of sparse rank-1 tensors, which is important in sparse tensor decomposition and related problems. Existing models use l0 or l1 norms to pursue sparsity. In this work, we first construct a truncated exponential induced regularizer to encourage sparsity, and prove that this regularizer admits a reweighted property. Lower bounds for the nonzero entries and upper bounds for the number of nonzero entries of the stationary points of the associated optimization problem are studied. By using the reweighted property of the regularizer, we develop an iteratively reweighted algorithm for solving the problem, and establish its convergence to a stationary point without any assumption. In particular, we show that if the parameter of the regularizer is small enough, then the support of the iterates becomes fixed after finitely many steps. Numerical experiments illustrate the effectiveness of the proposed model and algorithm. (C) 2022 Elsevier Inc. All rights reserved.
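The "reweighted property" of an exponential-type regularizer can be shown on a toy one-dimensional proximal problem. The penalty phi(t) = 1 - exp(-|t|/sigma) used below is a generic stand-in for the paper's truncated exponential induced regularizer (the truncation and all parameter values are illustrative assumptions): since phi is concave in |t|, it is majorized at the current iterate by a weighted l1 term, and each weighted-l1 subproblem has a closed-form soft-thresholding solution.

```python
import numpy as np

def reweighted_denoise(y, lam=0.3, sigma=0.3, outer=20):
    """Majorize-minimize / iteratively reweighted scheme for
         min_x 0.5*||x - y||^2 + lam * sum_i phi(x_i),
    with phi(t) = 1 - exp(-|t|/sigma). phi is concave in |t|, so at the
    current iterate x^k it is majorized by w_i * |x_i| with weight
    w_i = phi'(|x_i^k|) = exp(-|x_i^k|/sigma)/sigma; each resulting
    weighted-l1 subproblem is solved exactly by soft-thresholding."""
    x = y.copy()
    for _ in range(outer):
        w = np.exp(-np.abs(x) / sigma) / sigma          # reweighting from the majorizer
        x = np.sign(y) * np.maximum(np.abs(y) - lam * w, 0.0)
    return x

y = np.array([2.0, -1.5, 0.05, -0.02, 0.8, 0.01])
x = reweighted_denoise(y)
print(x)  # large entries barely shrink, small entries are set exactly to zero
```

The effect matches the abstract's support-fixing result in miniature: entries driven to zero receive the maximal weight 1/sigma on every subsequent pass, so once an entry leaves the support it stays out, while large entries get exponentially small weights and are almost unbiased.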