Code summarization facilitates program comprehension and software maintenance by converting code snippets into natural-language descriptions. Over the years, numerous methods have been developed for this task, but a k...
As the Internet gradually penetrates people's daily lives, individual citizens are empowered to express and exchange opinions and sentiments anytime, anywhere. Online communities are increasingly participat...
Sparse subspace learning has been demonstrated to be effective in data mining and machine learning. In this paper, we cast unsupervised feature selection as a matrix factorization problem from the viewpoint of sparse subspace learning. By minimizing the reconstruction residual, the learned feature weight matrix, constrained by the ℓ2,1-norm and non-negativity, not only removes irrelevant features but also captures the underlying low-dimensional structure of the data points. Meanwhile, to enhance the model's robustness, an ℓ1-norm error function is used to resist outliers and sparse noise. An efficient iterative algorithm is introduced to optimize this non-convex, non-smooth objective function, and a proof of its convergence is given. Although there is a subtraction term in our multiplicative update rule, we verify that the updates preserve non-negativity. The superiority of our model is demonstrated by comparative experiments on various original datasets, both with and without injected contamination.
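To make the optimization scheme concrete, here is a minimal NumPy sketch of a multiplicative-update loop for a model of this family. It is not the paper's algorithm: it swaps the robust ℓ1 error for the squared Frobenius loss (so the updates have no subtraction term), assumes a non-negative data matrix X so every update factor stays non-negative, fixes the factorization shape to X ≈ XWH, and handles the ℓ2,1 regularizer with the standard IRLS diagonal reweighting. The function name, the λ parameter, and the non-negativity of H are all illustrative assumptions, not details from the abstract.

```python
import numpy as np

def sparse_subspace_feature_selection(X, k, lam=0.1, n_iter=200, eps=1e-12):
    """
    Simplified multiplicative-update sketch (assumptions noted above) for
        min_{W >= 0, H >= 0}  ||X - X W H||_F^2 + lam * ||W||_{2,1}
    X : (n_samples, d_features) non-negative data matrix (assumption).
    k : dimension of the learned subspace.
    Rows of W act as feature weights: large row norms mark kept features.
    """
    n, d = X.shape
    rng = np.random.default_rng(0)
    W = rng.random((d, k))   # feature weight matrix, kept non-negative
    H = rng.random((k, d))   # subspace coefficients, kept non-negative
    G = X.T @ X              # Gram matrix (d x d), non-negative if X >= 0

    for _ in range(n_iter):
        # IRLS treatment of ||W||_{2,1}: its gradient is W scaled row-wise
        # by 1 / ||w_i||_2, written here as (lam/2) * W / row_norms.
        row_norms = np.sqrt((W ** 2).sum(axis=1, keepdims=True)) + eps
        DW = W / (2.0 * row_norms)

        # Multiplicative update for W: ratio of the negative gradient part
        # G H^T to the positive part G W H H^T + (lam/2) D W.
        W *= (G @ H.T) / (G @ W @ (H @ H.T) + lam * DW + eps)

        # Multiplicative update for H: W^T G over W^T G W H.
        H *= (W.T @ G) / (W.T @ G @ W @ H + eps)

    return W, H

# Hypothetical usage: rank features by the l2 norms of the rows of W.
# X = np.abs(np.random.randn(100, 50))
# W, H = sparse_subspace_feature_selection(X, k=10, lam=0.5)
# top_features = np.argsort(-np.linalg.norm(W, axis=1))[:20]
```

Because both the data term and the reweighted regularizer split into non-negative factors under these assumptions, each update multiplies the current iterate by a non-negative ratio, so W and H stay non-negative without any projection step; recovering the paper's robust ℓ1 variant would add a reweighting of the residual on top of this loop.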