Due to the need to protect personal information and the impracticality of exhaustive data collection, there is an increasing need to deal with datasets of various levels of granularity, such as user-individual data and user-group data. In this study, we propose a new method for jointly analyzing multiple datasets with different granularity. The proposed method is a probabilistic model based on nonnegative matrix factorization, derived by introducing latent variables that represent the high-resolution data underlying the low-resolution data. Experiments on purchase logs show that the proposed method performs better than existing methods. Furthermore, by deriving an extension of the proposed method, we show that it offers a new fundamental approach to analyzing datasets with different granularity.
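The abstract does not reproduce the model equations, but the central idea, that low-resolution observations are aggregates of latent high-resolution data sharing one set of factors, can be illustrated with a simple deterministic least-squares analogue. The sketch below is only illustrative: the matrix names (Xf, Xc, G), the aggregation-through-membership assumption and the projected-gradient solver are ours, not the paper's probabilistic formulation.

```python
import numpy as np

def joint_nmf(Xf, Xc, G, k, lam=1.0, eta=1e-3, iters=2000, seed=0):
    """Sketch: factorize fine-grained data Xf (users x items) and
    coarse-grained data Xc (groups x items) with a shared item factor V.
    Group profiles are modelled as aggregates G @ U of the latent user
    factors U, where G (groups x users) encodes group membership.
    Solved by projected gradient descent with a small fixed step size
    (a line search would be more robust); the paper instead derives a
    probabilistic NMF with latent high-resolution variables."""
    rng = np.random.default_rng(seed)
    U = rng.random((Xf.shape[0], k))
    V = rng.random((k, Xf.shape[1]))
    for _ in range(iters):
        Rf = Xf - U @ V          # fine-grained residual
        Rc = Xc - G @ U @ V      # coarse-grained residual
        gU = -Rf @ V.T - lam * G.T @ Rc @ V.T
        gV = -U.T @ Rf - lam * (G @ U).T @ Rc
        U = np.maximum(U - eta * gU, 0.0)   # projection keeps factors nonnegative
        V = np.maximum(V - eta * gV, 0.0)
    return U, V
```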
In order to obtain a discriminative, compact and robust data representation, a discriminative and robust nonnegative matrix factorization method with soft label constraint (DRNMF_SLC) is proposed. By minimizing the objective function, a data representation that incorporates the soft label constraint is learned. To further acquire a more hierarchical and discriminative data representation, a deep discriminative and robust nonnegative matrix factorization network method with soft label constraint (Deep DRNMFN_SLC) is constructed. To improve the feature expression ability of deep neural networks (DNNs), a deep discriminative and robust nonnegative matrix factorization network method with soft label constraint based on a DNN (Deep DRNMFN_SLC_DNN) is further proposed, which yields a more discriminative, robust and generalized feature representation while greatly reducing the dimensionality of the data features. The objective function of DRNMF_SLC is constructed by introducing both a global loss term and a center loss term on the soft label constraint matrix, and the optimization procedure and convergence proof for this objective are given. When the proposed DRNMF_SLC and Deep DRNMFN_SLC_DNN methods are applied to face recognition under occlusions and illumination variations, the corresponding frameworks are given as Algorithm 1 and Algorithm 2, respectively. Extensive experiments demonstrate the effectiveness of the proposed methods.
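The abstract names the ingredients of the objective (reconstruction error, a global loss on the soft label constraint matrix, and a center loss) without stating it. Purely to fix notation, one generic member of this family, not claimed to be the exact DRNMF_SLC objective, is

\[
\min_{W,H,B \ge 0}\ \lVert X - WH \rVert_F^2 \;+\; \alpha\,\lVert Y - BH \rVert_F^2 \;+\; \beta \sum_{i} \lVert h_i - c_{y_i} \rVert_2^2 ,
\]

where the columns h_i of H are the new representations, Y is a (soft) label indicator matrix regressed from H through B (the global term), and c_{y_i} is the center of class y_i, so the last term pulls same-class representations together.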
Nonnegative matrix factorization (NMF) is a well-known paradigm for data representation. Traditional NMF-based classification methods first perform NMF or one of its variants on the input data samples to obtain their low-dimensional representations, which are then classified by a standard classifier [e.g., k-nearest neighbors (KNN) or support vector machine (SVM)]. Such a stepwise procedure may overlook the dependency between the two processes, compromising classification accuracy. In this paper, we unify the two processes by formulating a novel constrained optimization model, dual embedding regularized NMF (DENMF), which is semi-supervised. DENMF finds the low-dimensional representations and the assignment matrix simultaneously via joint optimization for better classification. Specifically, input data samples are projected onto a pair of low-dimensional spaces (a feature space and a label space), and locally linear embedding is employed to preserve the same local geometric structure in both spaces. Moreover, we propose an alternating iteration algorithm to solve the resulting DENMF model, whose convergence is theoretically proven. Experimental results on five benchmark datasets demonstrate that DENMF achieves higher classification accuracy than state-of-the-art algorithms.
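As a rough illustration of the dual-embedding idea, with symbols of our own choosing rather than the paper's exact formulation, the reconstruction term can be coupled with a label-space factor F and a locally linear embedding regularizer that imposes the same reconstruction weights S_{ij} in both spaces:

\[
\min_{W,H,F \ge 0}\ \lVert X - WH \rVert_F^2
+ \alpha \sum_i \Big\lVert h_i - \sum_j S_{ij}\, h_j \Big\rVert_2^2
+ \beta \sum_i \Big\lVert f_i - \sum_j S_{ij}\, f_j \Big\rVert_2^2
+ \gamma\, \mathcal{L}_{\text{label}}(F),
\]

where h_i and f_i are the feature-space and label-space embeddings of sample i, S holds the LLE weights computed on the input samples, and the last term ties the labelled columns of F to their known assignments.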
This paper proposes a novel dimensionality reduction method, called discriminant graph nonnegative matrix factorization (DGNMF), for image representation. Inspired by manifold learning and linear discriminant analysis, DGNMF provides a compact representation that respects the original data space. In addition, the within-class distance of each class in the representation is very small. Based on these characteristics, the proposed method can be viewed as a supervised learning method, and it outperforms several existing dimensionality reduction methods, including PCA, LPP, LDA, NMF and GNMF. Experiments on image recognition show that our approach provides a better representation than these classic methods.
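DGNMF itself is supervised, but its unsupervised backbone, graph-regularized NMF (GNMF), is standard and easy to sketch. The minimal NumPy version below is our own illustration, with the supervised within-class term omitted; it shows the multiplicative updates that keep neighbouring samples' representations close.

```python
import numpy as np

def gnmf(X, A, k, lam=1.0, iters=300, eps=1e-9, seed=0):
    """Plain graph-regularized NMF: X (features x samples) is factored as
    U @ V.T while the term tr(V.T L V), with Laplacian L = D - A built from
    the sample affinity matrix A, keeps neighbouring samples' rows of V
    close. DGNMF builds on this by additionally shrinking within-class
    scatter using label information."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    U = rng.random((m, k))
    V = rng.random((n, k))
    D = np.diag(A.sum(axis=1))
    for _ in range(iters):
        U *= (X @ V) / (U @ V.T @ V + eps)
        V *= (X.T @ U + lam * A @ V) / (V @ U.T @ U + lam * D @ V + eps)
    return U, V
```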
As a commonly used dimensionality reduction method, nonnegative matrix factorization (NMF), whose goal is to learn parts-based representations, has been widely studied and applied in various areas. However, classical NMF does not utilize any label information. In this paper, we propose a novel semi-supervised NMF algorithm named hyperplane-based nonnegative matrix factorization (HNMF). HNMF constructs a hyperplane for each cluster such that the labelled points are close to the corresponding hyperplane and far away from the others, which greatly enhances the discriminative ability of the representation vectors. Clustering experiments on five publicly available databases demonstrate the effectiveness of the proposed HNMF compared to state-of-the-art methods. (C) 2019 Elsevier Inc. All rights reserved.
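In terms of the usual point-to-hyperplane distance, the constraint described in the abstract amounts to the following, where h_i is the representation of a labelled sample of class y_i and (w_c, b_c) parameterizes the hyperplane of cluster c (the notation is ours, not necessarily the paper's):

\[
d\big(h_i, \mathcal{H}_c\big) = \frac{\lvert w_c^{\top} h_i + b_c \rvert}{\lVert w_c \rVert_2},
\qquad
d\big(h_i, \mathcal{H}_{y_i}\big)\ \text{small}, \qquad
d\big(h_i, \mathcal{H}_c\big)\ \text{large for } c \neq y_i .
\]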
Nonnegative matrix factorization (NMF) has received intensive attention because it produces a parts-based representation of the data. However, owing to the non-convexity of NMF models, these methods easily get stuck in poor local solutions. To alleviate this deficiency, this paper presents a novel NMF method that gradually includes data points into NMF from easy to complex, in the spirit of self-paced learning (SPL), which is shown to help avoid bad local solutions. Furthermore, instead of the conventional hard weighting scheme, we adopt the soft weighting strategy of SPL to further improve the performance of our model. An iterative updating algorithm is proposed to solve the optimization problem of our method, and the convergence of the updating rules is theoretically guaranteed. Experiments on both toy data and real-world benchmark datasets demonstrate the effectiveness of the proposed method. (C) 2018 Elsevier B.V. All rights reserved.
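A compact way to see the scheme is to alternate weighted NMF updates with a recomputation of per-sample weights from the current losses, gradually raising the SPL "age" parameter. The sketch below uses the linear soft-weighting rule common in the SPL literature, which is not necessarily the exact rule of the paper, together with standard weighted multiplicative updates.

```python
import numpy as np

def spl_nmf(X, k, n_rounds=10, inner=100, lam0=None, growth=1.3, eps=1e-9, seed=0):
    """Sketch of self-paced NMF: alternate (a) weighted multiplicative NMF
    updates with per-sample weights and (b) recomputing the weights from the
    current per-sample losses, raising the age parameter lam so that harder
    samples are admitted later. The first pass uses uniform weights as a
    warm start; the soft rule v_i = max(0, 1 - loss_i / lam) is one common
    SPL choice, not necessarily the paper's."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, k)); H = rng.random((k, n))
    v = np.ones(n)                        # per-sample weights
    lam = lam0
    for _ in range(n_rounds):
        M = np.tile(v, (m, 1))            # broadcast sample weights over features
        for _ in range(inner):            # weighted multiplicative updates
            W *= ((M * X) @ H.T) / ((M * (W @ H)) @ H.T + eps)
            H *= (W.T @ (M * X)) / (W.T @ (M * (W @ H)) + eps)
        loss = ((X - W @ H) ** 2).sum(axis=0)        # per-sample residuals
        if lam is None:
            lam = np.median(loss)         # start by favouring the easier half
        # soft weights; small floor keeps columns of H from locking at zero
        v = np.clip(1.0 - loss / lam, 1e-3, 1.0)
        lam *= growth                     # admit harder samples over time
    return W, H
```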
Matrix decomposition is ubiquitous and has applications in various fields such as speech processing, data mining and image processing. Within matrix decomposition, nonnegative matrix factorization decomposes a nonnegative matrix into a product of two nonnegative matrices, which gives a meaningful interpretation of the data; this gives nonnegative matrix factorization an edge over other decomposition techniques. In this paper, we propose two novel iterative algorithms based on Majorization-Minimization (MM), in which we formulate a novel upper bound and minimize it to obtain a closed-form solution at every iteration. Since the algorithms are based on MM, the proposed methods are guaranteed to be monotonic. The two algorithms differ in how they update the two nonnegative matrices: the first, Iterative nonnegative matrix factorization (INOM), updates them sequentially, while the second, Parallel Iterative nonnegative matrix factorization (PARINOM), updates them in parallel. We also prove that the proposed algorithms converge to a stationary point of the problem. Simulations comparing the proposed methods with existing ones show that the proposed algorithms perform better in terms of computational speed and convergence.
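For readers unfamiliar with the MM recipe, the classical Lee-Seung multiplicative updates are themselves an MM algorithm: each step minimizes a separable quadratic majorizer of the Frobenius objective, which is why the objective is monotonically non-increasing. The sketch below shows only that baseline; INOM and PARINOM derive different upper bounds and differ in whether the two factors are updated sequentially or in parallel.

```python
import numpy as np

def mm_nmf(X, k, iters=500, eps=1e-9, seed=0):
    """Classical Lee-Seung multiplicative updates for min ||X - WH||_F^2.
    Each update minimizes a separable quadratic upper bound (majorizer) of
    the objective, so the objective never increases; this is the same MM
    recipe the paper follows with different bounds and update orders."""
    rng = np.random.default_rng(seed)
    W = rng.random((X.shape[0], k))
    H = rng.random((k, X.shape[1]))
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # minimize the majorizer in H
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # then the majorizer in W
    return W, H
```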
Network embedding, which aims to learn low-dimensional representations of nodes in networks, is very useful for many vector-based machine learning algorithms and has become a hot research topic in network analysis. Although many network embedding methods have been proposed, most of them are unsupervised and ignore prior information available in the network. In this paper, we propose a novel method for network embedding using semi-supervised kernel nonnegative matrix factorization (SSKNMF), which can incorporate prior information and thus learn more useful features from the network by introducing kernel methods. In addition, it improves robustness against noise by using an objective function based on the L_{2,1} norm. Efficient iterative update rules are derived to solve the network embedding model, and the convergence of these rules is rigorously proven. Results from extensive experiments on several real-world networks show that the proposed algorithm is effective and performs better than existing representative methods.
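For reference, with the column-wise convention common in robust NMF, the L_{2,1} norm of a matrix A with columns a_j is

\[
\lVert A \rVert_{2,1} \;=\; \sum_{j} \lVert a_j \rVert_2 \;=\; \sum_{j} \sqrt{\sum_{i} A_{ij}^2},
\]

i.e., an L1 sum of per-sample (per-column) L2 residuals. Because each sample's residual enters unsquared, a few heavily corrupted samples cannot dominate the loss the way they do under the squared Frobenius norm, which is the source of the robustness claimed above.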
Nonnegative matrix factorization (NMF) is a powerful tool for hyperspectral unmixing (HU), factorizing a hyperspectral cube into constituent endmembers and their fractional abundances. In this paper, we propose a two-stage nonnegative matrix factorization algorithm. In the first stage, k-means clustering is employed to obtain an estimated endmember matrix. This matrix serves as the initial matrix for NMF in the second stage, where we design a new cost function to refine the NMF solutions. The two-stage NMF model is solved with multiplicative update rules, and the monotonic convergence of the algorithm is proven using an auxiliary function. Numerical tests demonstrate that our two-stage NMF algorithm achieves accurate and stable solutions.
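The two-stage pipeline, cluster first and warm-start NMF from the centroids, can be sketched with off-the-shelf components. The version below uses scikit-learn's KMeans and NMF with the plain Frobenius loss as a stand-in for the paper's customized cost function and update rules, and it omits the sum-to-one abundance constraint usual in unmixing.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import NMF

def two_stage_unmix(Y, n_endmembers, iters=500, seed=0):
    """Sketch of the two-stage idea on a pixels x bands matrix Y:
    stage 1 uses k-means centroids as an initial endmember estimate,
    stage 2 refines endmembers E and abundances A with NMF warm-started
    from that estimate (plain Frobenius loss, no sum-to-one constraint)."""
    km = KMeans(n_clusters=n_endmembers, n_init=10, random_state=seed).fit(Y)
    E0 = np.maximum(km.cluster_centers_, 1e-6)           # initial endmembers (k x bands)
    A0 = np.full((Y.shape[0], n_endmembers), 1.0 / n_endmembers)  # flat initial abundances
    model = NMF(n_components=n_endmembers, init="custom",
                solver="mu", max_iter=iters, random_state=seed)
    A = model.fit_transform(Y, W=A0, H=E0)               # abundances (pixels x k)
    E = model.components_                                 # refined endmembers
    return E, A
```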
This paper studies the influence of various NMF algorithms on the classification accuracy of several classifiers and compares the classifiers among themselves. We focus on a fast nonnegative matrix factorization (NMF) algorithm based on a discrete-time projection neural network (DTPNN). The NMF algorithm is combined with three classifiers in order to determine how the dimensionality reduction performed by the NMF algorithm affects their accuracy. The convergent objective-function values, in terms of two popular objective functions, the Frobenius norm and the Kullback-Leibler (K-L) divergence, are reported for different NMF-based algorithms on a wide range of data sets, along with the corresponding CPU running times for different combinations of NMF algorithms and data sets. The convergence behavior of the different NMF methods is also illustrated. To test the effect on classification accuracy, a performance study of three well-known classifiers is carried out and the influence of the NMF algorithm on their accuracy is evaluated. Furthermore, a confusion-matrix module is incorporated to provide an additional comparison of classification accuracy.
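The evaluation protocol, reduce with NMF and then classify and inspect the confusion matrix, is standard. A minimal stand-in using scikit-learn's NMF in place of the DTPNN-based algorithm might look as follows (the digits dataset and the parameter choices are ours, for illustration only).

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import NMF
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Generic NMF -> classifier pipeline of the kind compared in the paper;
# scikit-learn's NMF stands in for the DTPNN-based algorithm studied there.
X, y = load_digits(return_X_y=True)          # nonnegative pixel intensities
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

nmf = NMF(n_components=30, max_iter=500, random_state=0).fit(Xtr)
Htr, Hte = nmf.transform(Xtr), nmf.transform(Xte)   # reduced representations

for clf in (KNeighborsClassifier(n_neighbors=5), SVC(kernel="rbf")):
    pred = clf.fit(Htr, ytr).predict(Hte)
    print(type(clf).__name__, accuracy_score(yte, pred))
    print(confusion_matrix(yte, pred))               # per-class error breakdown
```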