Recent years have witnessed tremendous growth in the publication of research articles as well as the rise of new research topics. Articles are published in a streaming manner, and therefore continuously retrieving and recommending trending topics, updated over time, will be beneficial for young researchers. The proposed topic recommendation system is a clustering-based approach that utilizes an autoencoder framework for the generation of clusters. The autoencoder framework takes articles as input in a multiview framework and produces latent data in a lower-dimensional space using graph attention-based encoder and decoder networks. The latent data are then partitioned with respect to the different views and, finally, a single consensus overlapping partitioning is produced that satisfies all the views. Since the publication of articles is a continuous process, the clustering-based approach is applied iteratively in a sliding-window manner to keep the trending topics up to date. The generated clusters are analyzed to extract the trending topics and recommendations for future scope. An article can belong to multiple topics; the proposed method is developed with this criterion in mind and is therefore tested on a modified version of the multilabel scientific article dataset ArXiv, named ***. The superiority of the proposed method can be observed from its comparisons with existing methods and a few baseline methods with respect to cluster formation, topic extraction, and trending topic evaluation.
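The sliding-window iteration described above can be sketched as follows. This is a minimal illustration of overlapping time windows over an article stream; the `sliding_windows` helper, the window length, and the step size are assumptions for illustration, not the paper's actual settings.

```python
import numpy as np

def sliding_windows(timestamps, window, step):
    """Yield (window start, article index array) for overlapping time windows.

    Hypothetical helper: window/step are illustrative parameters."""
    start, end = timestamps.min(), timestamps.max()
    while start <= end:
        idx = np.where((timestamps >= start) & (timestamps < start + window))[0]
        if idx.size:
            yield start, idx
        start += step

# toy stream: 10 articles published at times 0..9
ts = np.arange(10.0)
wins = list(sliding_windows(ts, window=4.0, step=2.0))
# each window's articles would be clustered; consecutive windows overlap,
# so topic trends can be tracked as new articles arrive
```

Because the step is smaller than the window, each article appears in several consecutive windows, which is what allows the trend of a topic to be followed over time.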
Skeleton-based human action recognition (HAR) is being utilized in various fields such as action classification and abnormal behavior detection. Accurate coordinates of the human joints are a crucial factor for high performance in skeleton-based HAR. However, missing joints caused by occlusion and invisibility result in performance degradation. Hence, in this paper, a missing-joint reconstruction model is proposed to improve the performance of skeleton-based HAR. The proposed model, based on a denoising graph autoencoder (DGAE), regards missing joints as noise-corrupted information and aims to reconstruct them to be close to their original coordinates. When the encoder of the proposed model compresses the noisy input into a latent vector, a masking Laplacian matrix is introduced to reduce the effect of the missing joints' features. The masking Laplacian matrix adjusts the influence between a missing joint and its adjacent joints by altering the weights of the adjacency matrix. In the decoder, a Laplacian matrix, which represents the connections among the joints, is utilized to reconstruct an output from the latent vector. The experimental results show that the proposed model reconstructs the coordinates of missing joints with marginal error. In addition, the performance of skeleton-based HAR is enhanced by reconstructing the missing joints.
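The masking idea can be sketched with a toy skeleton graph. Fully suppressing the weights of edges incident to a missing joint before forming the Laplacian L = D - A is an illustrative masking choice; the paper's actual re-weighting scheme may differ.

```python
import numpy as np

def masking_laplacian(A, missing):
    """Build L = D - A after suppressing edges incident to missing joints.

    Assumption: full suppression (weights set to 0); the paper may instead
    down-weight rather than zero out these entries."""
    A = A.astype(float).copy()
    A[missing, :] = 0.0   # drop influence flowing out of missing joints
    A[:, missing] = 0.0   # drop influence flowing into missing joints
    D = np.diag(A.sum(axis=1))
    return D - A

# 3-joint chain skeleton 0-1-2, with joint 2 missing
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
L = masking_laplacian(A, [2])
```

After masking, the missing joint's row and column vanish from the Laplacian, so its (unreliable) features no longer propagate to its neighbors during encoding.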
In this work, we introduce the graph regularized autoencoder. We propose three variants. The first is the unsupervised version. The second is tailored for clustering by incorporating subspace clustering terms into the autoencoder formulation. The third is a supervised label-consistent autoencoder suitable for single-label and multi-label classification problems. Each of these has been compared with the state-of-the-art on benchmark datasets. The problems addressed here are image denoising, clustering, and classification. Our proposed methods outperform the existing techniques on all of these problems. (c) 2018 Elsevier Ltd. All rights reserved.
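The generic graph regularization term that such models add to the autoencoder loss can be written as tr(H^T L H), which equals the pairwise smoothness 0.5 * sum_ij w_ij ||h_i - h_j||^2. The sketch below verifies this identity on a toy affinity graph; it is the standard regularizer, not necessarily the paper's exact formulation.

```python
import numpy as np

def graph_penalty(H, W):
    """Graph regularization tr(H^T L H) with L = D - W.

    Rows of H are latent embeddings; W is the sample affinity graph."""
    L = np.diag(W.sum(axis=1)) - W
    return np.trace(H.T @ L @ H)

# two samples connected with weight 1, embeddings 0 and 2:
# penalty = 0.5 * (1*|0-2|^2 + 1*|2-0|^2) = 4
W = np.array([[0.0, 1.0], [1.0, 0.0]])
H = np.array([[0.0], [2.0]])
penalty = graph_penalty(H, W)
```

Minimizing this term pulls the embeddings of strongly connected samples together, which is what makes the latent space respect the graph structure.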
For predicting the value of the quality variable in fermentation processes, traditional data-driven methods do not use the information in large amounts of unlabelled data. To solve this data-rich but information-poor (DRIP) problem, a teacher-student stacked sparse recurrent autoencoder (TS-SSRAE) model is proposed. Compared with traditional data-driven methods, the proposed method has three main advantages. First, an autoencoder is an unsupervised method which can effectively extract rich information from unlabelled data. The proposed stacked recurrent autoencoder (SRAE) with long short-term memory (LSTM) recurrent units is superior to traditional autoencoders when extracting the dynamic correlation information in the fermentation process. Second, sparse constraints make it much easier for hidden neurons to obtain useful information at a single moment. Finally, the LSTM recurrent unit is complex and the input of an SRAE must be a sequence, which increases the complexity of the model to a certain extent. Therefore, knowledge distillation is employed to simplify the model and reduce the computing time. To demonstrate its effectiveness, the proposed method is applied to a simulated penicillin fermentation process and to interleukin-2 production by Escherichia coli. The results show that the proposed TS-SSRAE-based method performs better than conventional methods.
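The teacher-student step can be sketched as a combined objective in which a small student model both imitates the teacher's latent features and fits the quality-variable targets. This is a generic knowledge-distillation sketch; the weighting `alpha` and the MSE imitation term are assumptions, not the paper's exact loss.

```python
import numpy as np

def distillation_loss(student_feat, teacher_feat, y_true, y_pred, alpha=0.5):
    """Generic teacher-student objective (illustrative, not TS-SSRAE's exact form):
    imitate the teacher's features while still predicting the quality variable."""
    imitation = np.mean((student_feat - teacher_feat) ** 2)  # match teacher
    task = np.mean((y_true - y_pred) ** 2)                   # fit targets
    return alpha * imitation + (1 - alpha) * task

loss = distillation_loss(
    student_feat=np.array([[1.0, 0.0]]),
    teacher_feat=np.array([[0.0, 0.0]]),
    y_true=np.array([1.0]),
    y_pred=np.array([0.0]),
)
```

Once trained this way, only the compact student is needed at prediction time, which is how distillation reduces computing cost.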
Rolling bearings are critical components of machinery widely applied in the manufacturing, transportation, aerospace, and power and energy industries. Timely and accurate bearing fault detection is thus of vital importance. Computational data-driven deep learning has recently become a prevailing approach for bearing fault detection. Despite this progress, deep learning performance hinges upon the size of the labeled dataset, the acquisition of which is expensive in actual implementation. Unlabeled data, on the other hand, are inexpensive. In this research, we develop a new semi-supervised learning method built upon the autoencoder to fully utilize a large amount of unlabeled data together with limited labeled data to enhance fault detection performance. Compared with state-of-the-art semi-supervised learning methods, the proposed method can be implemented more conveniently, with fewer hyperparameters to tune. In this method, a joint loss is established to account for the effects of labeled and unlabeled data, which is subsequently used to direct the backpropagation training. Systematic case studies using the Case Western Reserve University (CWRU) rolling bearing dataset are carried out, in which the effectiveness of this new method is verified by comparing it with other well-established baseline methods. Specifically, nearly all emulation runs using the proposed methodology lead to an accuracy increase of around 2%-5%, indicating its robustness in performance enhancement.
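A joint loss of the kind described can be sketched as a supervised cross-entropy term on the labeled subset plus a reconstruction term over all samples. The weighting `lam` and this particular combination are illustrative assumptions, not necessarily the paper's exact objective.

```python
import numpy as np

def joint_loss(x_all, x_rec, y_true, y_prob, lam=1.0, eps=1e-12):
    """Illustrative semi-supervised joint objective:
    cross-entropy on labeled samples + reconstruction MSE on all samples."""
    recon = np.mean((x_all - x_rec) ** 2)                       # all data
    ce = -np.mean(np.sum(y_true * np.log(y_prob + eps), axis=1))  # labeled only
    return ce + lam * recon

# 1 labeled sample (2 classes), 2 total samples of dimension 2
loss = joint_loss(
    x_all=np.array([[1.0, 0.0], [0.0, 1.0]]),
    x_rec=np.array([[1.0, 0.0], [0.0, 0.0]]),   # one entry off by 1
    y_true=np.array([[1.0, 0.0]]),
    y_prob=np.array([[1.0, 0.0]]),               # labeled sample predicted perfectly
)
```

Backpropagating through one scalar loss that mixes both terms is what lets the unlabeled data shape the features while the few labels anchor the classifier.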
Detecting anomalies such as breakage and excessive wear of cutting tools in the machining process is crucial to prevent damage and improve productivity. Data-driven anomaly detection (AD) methods suffer from the limited availability of anomaly samples, which is unavoidable in practice owing to strict reliability restrictions. Therefore, we propose a semisupervised AD approach in which only failure-free samples are required to establish an AD model. The key strategy is to learn the characteristics of failure-free samples using an improved autoencoder (AE) and to discern observations by their deviations from these characteristics. We rebuild the loss function of the AE to impel the model to learn the common characteristics in the latent space, and we propose a factor that reflects the anomaly degree as the decision-making function to implement AD. The proposed approach is verified on an experimental cutting tool breakage dataset and a public cutting tool wear dataset. The experimental results demonstrate the validity of the proposed approach, and comparisons with conventional methods substantiate that it outperforms existing AD methods.
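The decision step can be sketched as follows: score each observation by its reconstruction error, then set a threshold from failure-free scores alone. The mean + k*std rule is an illustrative decision rule; the paper's actual anomaly-degree factor may be defined differently.

```python
import numpy as np

def anomaly_factor(x, x_rec):
    """Per-sample reconstruction error used as the anomaly degree."""
    return np.mean((x - x_rec) ** 2, axis=1)

def fit_threshold(normal_scores, k=3.0):
    """Illustrative mean + k*std threshold fitted on failure-free scores only."""
    return normal_scores.mean() + k * normal_scores.std()

# failure-free samples reconstruct well; an anomaly reconstructs poorly
normal = anomaly_factor(np.zeros((4, 2)), np.full((4, 2), 0.1))
thr = fit_threshold(normal)
anomalous = anomaly_factor(np.zeros((1, 2)), np.full((1, 2), 2.0))
```

Because the AE is trained only on failure-free data, a worn or broken tool produces signals the model cannot reconstruct, so its score exceeds the threshold.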
Deep learning has been developed to generate promising super-resolution hyperspectral imagery by fusing hyperspectral imagery with the panchromatic ***. It is still challenging to maintain edge spectral information in the necessary upsampling processes of these approaches, and difficult to guarantee effective feature ***. This study proposes a pansharpening network denoted as HyperRefiner that consists of: (1) a well-performing upsampling network, SRNet, in which the dual attention block and refined attention block are cascaded to accomplish the extraction and fusion of features; (2) a spectral autoencoder that is embedded to perform dimensionality reduction under constrained feature extraction; and (3) an optimization module which performs self-attention at the pixel and feature levels. A comparison with several state-of-the-art models reveals that HyperRefiner can improve the quality of the fused ***. Compared to the single-head HyperTransformer and with the Chikusei dataset, our network improved the Peak Signal-to-Noise Ratio, Erreur Relative Globale Adimensionnelle de Synthese, and Spectral Angle Mapper by 0.86%, 3.62%, and 2.09%, and reduced the total memory, floating point operations, model parameters, and computation time by 41%, 75%, 86%, and 46%, *** experimental results show that HyperRefiner outperforms several networks and demonstrates its usefulness in hyperspectral image *** code is publicly available at https://***/zsspo/Fusion_HyperRefiner.
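Two of the evaluation metrics named above have compact standard definitions, sketched here for reference: PSNR compares a fused image to a reference in dB, and SAM measures the mean angle between spectral vectors. These are the standard formulas, not implementation details from the paper.

```python
import numpy as np

def psnr(ref, est, peak=1.0):
    """Peak Signal-to-Noise Ratio in dB (higher is better)."""
    mse = np.mean((ref - est) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def sam(ref, est, eps=1e-12):
    """Spectral Angle Mapper: mean angle (radians) between spectral vectors
    along the last (band) axis (lower is better)."""
    num = np.sum(ref * est, axis=-1)
    den = np.linalg.norm(ref, axis=-1) * np.linalg.norm(est, axis=-1) + eps
    return np.mean(np.arccos(np.clip(num / den, -1.0, 1.0)))

ref = np.ones((2, 2, 3))          # tiny 2x2 image with 3 spectral bands
est = np.full((2, 2, 3), 0.9)     # uniformly scaled estimate
```

A uniformly scaled estimate keeps every spectral vector parallel to the reference, so SAM stays near zero even though PSNR registers the intensity error, which is why both metrics are reported together.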
Authors: Zhang, Donglin; Wu, Xiao-Jun
Affiliations: Jiangnan Univ, Sch Artificial Intelligence & Comp Sci, Wuxi 214122, Jiangsu, Peoples R China; Jiangnan Univ, Jiangsu Prov Engn Lab Pattern Recognit & Computat, Wuxi 214122, Jiangsu, Peoples R China
Hashing methods have sparked great attention in multimedia tasks due to their effectiveness and efficiency. However, most existing methods generate binary codes by relaxing the binary constraints, which may cause large quantization error. In addition, most supervised cross-modal approaches preserve the similarity relationship by constructing a large n x n similarity matrix, which requires huge computation, making these methods unscalable. To address the above challenges, this article presents a novel algorithm, called the scalable discrete matrix factorization and semantic autoencoder method (SDMSA). SDMSA is a two-stage method. In the first stage, a matrix factorization scheme is utilized to learn the latent semantic information, and the label matrix is incorporated into the loss function instead of the similarity matrix. Thereafter, the binary codes can be generated from the latent representations. During optimization, we can avoid manipulating a large n x n similarity matrix, and the hash codes can be generated directly. In the second stage, a novel hash function learning scheme based on the autoencoder is proposed. The encoder-decoder paradigm aims to learn projections: the feature vectors are projected to code vectors by the encoder, and the code vectors are projected back to the original feature vectors by the decoder. The encoder-decoder scheme ensures the embedding can well preserve both the semantic and feature information. Specifically, two algorithms, SDMSA-lin and SDMSA-ker, are developed under the SDMSA framework. Owing to the merits of SDMSA, we can obtain more semantically meaningful binary hash codes. Extensive experiments on several databases show that SDMSA-lin and SDMSA-ker achieve promising performance.
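The second-stage encoder-decoder idea can be sketched with a tied-weight linear autoencoder: a projection P encodes features into codes, its transpose decodes them back, and binary codes come from the sign of the latent representation. The tied weights and this loss form are assumptions for illustration, not the exact SDMSA objective.

```python
import numpy as np

def semantic_autoencoder(X, P):
    """Tied-weight linear encoder-decoder sketch (assumption, not exact SDMSA):
    codes S = P @ X, reconstruction P.T @ S, hash codes sign(S)."""
    S = P @ X                                  # encoder: features -> codes
    recon = np.linalg.norm(X - P.T @ S) ** 2   # decoder: codes -> features
    return recon, np.sign(S)                   # discrete codes without relaxation

# identity projection reconstructs perfectly and binarizes the latent codes
X = np.array([[0.5, -2.0],
              [-1.0, 3.0]])
recon, B = semantic_autoencoder(X, np.eye(2))
```

Requiring the codes to reconstruct the original features is what forces the hash codes to retain feature information on top of the label semantics learned in stage one.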
X-ray inspection by control officers is not always consistent when inspecting baggage, since this task is monotonous, tedious, and tiring for human inspectors. Thus, semi-automatic inspection makes sense as a solution in this case. In this perspective, the study presents a novel feature learning model for object classification in dual X-ray images of luggage in order to detect explosive objects and firearms. We propose a supervised feature learning approach based on autoencoders. Object detection is performed by a modified YOLOv3 that detects all the presented objects without classification. Feature learning is carried out by labeled adversarial autoencoders. Classification is performed by a support vector machine, which classifies a new object as explosive, firearm, or non-threatening. To show the superiority of our proposed system, a comparative analysis was carried out against several deep learning methods. The results indicate that the proposed system leads to efficient object classification in complex environments, achieving accuracies of 98.00% and 96.50% in the detection of firearms and explosive objects, respectively.
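The final classification stage can be sketched as a one-vs-rest linear decision over the learned features: each class gets a margin score and the largest wins. The weights, bias, and feature vectors below are hypothetical placeholders, not values from the system.

```python
import numpy as np

def classify(features, W, b, classes):
    """One-vs-rest linear SVM-style decision: pick the class with the
    largest margin score (W, b are hypothetical trained parameters)."""
    scores = features @ W + b
    return [classes[i] for i in np.argmax(scores, axis=1)]

classes = ["explosive", "firearm", "non-threatening"]
W = np.array([[ 1.0, -1.0, 0.0],    # toy weights: feature 0 -> explosive,
              [-1.0,  1.0, 0.0]])   #              feature 1 -> firearm
b = np.array([0.0, 0.0, 0.1])
labels = classify(np.array([[2.0, 0.0], [0.0, 2.0]]), W, b, classes)
```

In the described pipeline these features would come from the labeled adversarial autoencoder, so the linear decision only has to separate already well-structured embeddings.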
Recently, clustering algorithms based on deep autoencoders have attracted much attention due to their excellent clustering performance. On the other hand, the success of PCA-Kmeans and spectral clustering corroborates that the orthogonality of the embedding is beneficial for increasing clustering accuracy. In this paper, we propose a novel dimensionality reduction model, called the Orthogonal autoencoder (OAE), which encourages the orthogonality of the learned embedding. Furthermore, we propose a joint deep Clustering framework based on the Orthogonal autoencoder (COAE); this new framework is capable of extracting the latent embedding and predicting the clustering assignment simultaneously. The COAE stacks a fully connected clustering layer on top of the OAE, where the activation function of the clustering layer is the multinomial logistic regression function. The loss function of the COAE contains two terms: the reconstruction loss and the clustering-oriented loss. The first is a data-dependent term that helps prevent overfitting. The other is the cross-entropy between the predicted assignment and the auxiliary target distribution. The network parameters of the COAE can be effectively updated by the mini-batch stochastic gradient descent algorithm and the back-propagation approach. Experiments on benchmark datasets empirically demonstrate that the COAE achieves clustering performance superior or comparable to state-of-the-art deep clustering frameworks. The implementation of our algorithm is available at https://***/WangDavey/COAE
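The two distinctive ingredients can be sketched directly: an orthogonality penalty ||H^T H - I||_F^2 on the embedding, and a softmax clustering layer on top of it. The `||.||_F^2` penalty is a common way to encourage orthogonality; whether OAE uses exactly this form is an assumption.

```python
import numpy as np

def orthogonality_penalty(H):
    """||H^T H - I||_F^2: zero when the embedding columns are orthonormal
    (an illustrative penalty; OAE's exact constraint may differ)."""
    G = H.T @ H
    return float(np.sum((G - np.eye(H.shape[1])) ** 2))

def clustering_layer(H, W):
    """Multinomial logistic regression (softmax) over cluster logits H @ W."""
    Z = H @ W
    E = np.exp(Z - Z.max(axis=1, keepdims=True))  # numerically stable softmax
    return E / E.sum(axis=1, keepdims=True)

H = np.array([[1.0, 0.0], [0.0, 1.0]])           # already orthonormal embedding
penalty = orthogonality_penalty(H)
Q = clustering_layer(H, np.array([[5.0, 0.0], [0.0, 5.0]]))  # toy cluster weights
```

The rows of Q are soft cluster assignments; in COAE they would be matched against the auxiliary target distribution via cross-entropy while the penalty keeps the embedding close to orthogonal.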