Kernel Probabilistic K-Means Clustering

Authors: Liu, Bowen; Zhang, Ting; Li, Yujian; Liu, Zhaoying; Zhang, Zhilin

Affiliations: Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China; School of Artificial Intelligence, Guilin University of Electronic Technology, Guilin 541004, China

Published in: SENSORS

Year/Volume/Issue: 2021, Vol. 21, No. 5

Pages: 1892-1892

Subject classification: 0710 [Science: Biology]; 071010 [Science: Biochemistry and Molecular Biology]; 0808 [Engineering: Electrical Engineering]; 07 [Science]; 0804 [Engineering: Instrument Science and Technology]; 0703 [Science: Chemistry]

Funding: National Natural Science Foundation of China [61876010, 61806013, 61906005]

Keywords: fuzzy c-means; kernel probabilistic k-means; nonlinear programming; fast active gradient projection

Abstract: Kernel fuzzy c-means (KFCM) is a significantly improved version of fuzzy c-means (FCM) for processing linearly inseparable datasets. However, for the fuzzification parameter m=1, the KFCM problem cannot be solved by Lagrangian optimization. To solve this problem, an equivalent model, called kernel probabilistic k-means (KPKM), is proposed here. The novel model relates KFCM to kernel k-means (KKM) in a unified mathematical framework. Moreover, the proposed KPKM can be addressed by the active gradient projection (AGP) method, a nonlinear programming technique with constraints of linear equalities and linear inequalities. To accelerate the AGP method, a fast AGP (FAGP) algorithm was designed. The proposed FAGP uses a maximum-step strategy to estimate the step length and an iterative method to update the projection matrix. Experiments demonstrated the effectiveness of the proposed method through a performance comparison of KPKM with KFCM, KKM, FCM, and k-means, and showed that KPKM is able to find nonlinearly separable structures in synthetic datasets. On the ten real UCI datasets used in this study, KPKM had better clustering performance on at least six. The proposed fast AGP requires less running time than the original AGP, reducing running time by 76-95% on the real datasets.
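
The abstract describes KPKM as a constrained nonlinear program: minimize a kernel-space clustering cost over a membership matrix U subject to linear equalities (each row of U sums to 1) and linear inequalities (U >= 0). The Python sketch below is not the paper's AGP or FAGP solver; it is only a minimal illustration of that problem shape, assuming an RBF kernel and using a generic projected-gradient step with a per-row simplex projection in place of the active gradient projection machinery. All function names and parameters are illustrative.

import numpy as np

def rbf_kernel(X, gamma=1.0):
    # K_ij = exp(-gamma * ||x_i - x_j||^2); the kernel choice is an assumption.
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def kernel_distances(K, U):
    # Squared kernel-space distances ||phi(x_i) - v_k||^2, where v_k is the
    # membership-weighted mean of the mapped samples (expressed via K only).
    w = U / np.maximum(U.sum(axis=0, keepdims=True), 1e-12)  # column-normalized weights
    cross = K @ w                                            # <phi(x_i), v_k>
    cc = np.einsum('jk,jl,lk->k', w, K, w)                   # <v_k, v_k>
    return np.diag(K)[:, None] - 2.0 * cross + cc[None, :]

def project_simplex_rows(U):
    # Euclidean projection of each row of U onto {u >= 0, sum(u) = 1},
    # i.e. the linear equality/inequality constraints of the KPKM model.
    n, c = U.shape
    s = -np.sort(-U, axis=1)                      # rows sorted in descending order
    css = np.cumsum(s, axis=1)
    rho = np.sum(s * np.arange(1, c + 1) > css - 1.0, axis=1)
    theta = (css[np.arange(n), rho - 1] - 1.0) / rho
    return np.maximum(U - theta[:, None], 0.0)

def kpkm_sketch(X, c=3, gamma=1.0, lr=0.5, iters=200, seed=0):
    # Generic projected-gradient loop on the memberships (NOT the paper's AGP/FAGP).
    rng = np.random.default_rng(seed)
    K = rbf_kernel(X, gamma)
    U = project_simplex_rows(rng.random((X.shape[0], c)))
    for _ in range(iters):
        D = kernel_distances(K, U)                # per-sample, per-cluster costs
        U = project_simplex_rows(U - lr * D)      # gradient step, then project back
    return U.argmax(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(size=(50, 2)), rng.normal(size=(50, 2)) + 5.0])
    print(kpkm_sketch(X, c=2, gamma=0.5))

The simplex projection above handles the same linear equality and inequality constraints that the AGP method enforces; the paper's contribution, per the abstract, is a faster way to take those constrained steps (a maximum-step estimate of the step length and iterative updates of the projection matrix), which this sketch does not reproduce.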
