Predicting novel uses for approved drugs reduces the cost of drug development and accelerates the development process. Most previous methods exploit multi-source data related to drugs and diseases to predict candidate drug-disease associations. There are multiple kinds of similarities between drugs, each reflecting how similar two drugs are from a different view, yet most previous methods fail to deeply integrate these similarities. In addition, the topological structures of the multiple drug-disease heterogeneous networks constructed from the different kinds of drug similarities are not fully exploited. We therefore propose GFPred, a method based on a graph convolutional autoencoder and a fully-connected autoencoder with an attention mechanism, to predict drug-related diseases. GFPred integrates drug-disease associations, disease similarities, three kinds of drug similarities, and the attributes of the drug nodes. Three drug-disease heterogeneous networks are constructed, one for each kind of drug similarity. A graph convolutional autoencoder module integrates the attributes of the drug and disease nodes in each network to learn the topology representation of each drug and disease node. As the different kinds of drug attributes contribute differently to the prediction of drug-disease associations, an attribute-level attention mechanism is constructed. A fully-connected autoencoder module is established to learn the attribute representations of the drug and disease nodes. Finally, since the original features of the drug-disease node pairs are also important auxiliary information for association prediction, a combined strategy based on a convolutional neural network is proposed to fully integrate the topology representations, the attribute representations, and the original features of the drug-disease pairs. The ablation studies showed the contributions of the data related ...
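The abstract describes the GFPred architecture in words only. The PyTorch sketch below shows one plausible reading of the pipeline: a graph convolutional autoencoder for topology representations, an attribute-level attention followed by a fully-connected autoencoder, and a CNN that fuses both representations with the pair's original features. Class names, layer widths, and the fusion layout are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNLayer(nn.Module):
    """One graph convolution: H' = ReLU(A_hat @ H @ W), with A_hat a normalized adjacency."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, a_hat, h):
        return F.relu(self.lin(a_hat @ h))

class GFPredSketch(nn.Module):
    """Minimal sketch of one heterogeneous network's branch (GFPred builds three,
    one per drug-similarity view); all dimensions are placeholder assumptions."""
    def __init__(self, attr_dim, topo_dim=64, attr_rep_dim=64):
        super().__init__()
        self.gcn_enc = GCNLayer(attr_dim, topo_dim)    # topology encoder
        self.gcn_dec = GCNLayer(topo_dim, attr_dim)    # topology decoder (reconstruction)
        self.attn = nn.Linear(attr_dim, attr_dim)      # attribute-level attention scores
        self.fc_enc = nn.Sequential(nn.Linear(attr_dim, attr_rep_dim), nn.ReLU())
        self.fc_dec = nn.Linear(attr_rep_dim, attr_dim)  # attribute decoder
        self.cnn = nn.Sequential(                      # CNN combiner over the fused pair vector
            nn.Conv1d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveMaxPool1d(32), nn.Flatten(), nn.Linear(8 * 32, 1))

    def forward(self, a_hat, attrs, drug_idx, dis_idx):
        topo = self.gcn_enc(a_hat, attrs)              # (n, topo_dim) topology representations
        alpha = torch.softmax(self.attn(attrs), dim=-1)  # per-attribute attention weights
        attr_rep = self.fc_enc(alpha * attrs)          # (n, attr_rep_dim) attribute representations
        # Fuse topology reps, attribute reps, and the pair's original features.
        pair = torch.cat([topo[drug_idx], topo[dis_idx],
                          attr_rep[drug_idx], attr_rep[dis_idx],
                          attrs[drug_idx], attrs[dis_idx]], dim=-1)
        score = self.cnn(pair.unsqueeze(1))            # (b, 1) association score
        recon = (self.gcn_dec(a_hat, topo), self.fc_dec(attr_rep))  # autoencoder targets
        return torch.sigmoid(score), recon

# Toy usage with random stand-ins for the normalized adjacency and node attributes.
n, d = 10, 32
model = GFPredSketch(attr_dim=d)
score, (topo_recon, attr_recon) = model(torch.eye(n), torch.randn(n, d),
                                        drug_idx=torch.tensor([0, 1]),
                                        dis_idx=torch.tensor([5, 6]))
```

In this reading, the reconstruction outputs would drive the two autoencoder losses while the CNN score drives the association loss; how GFPred actually weights these objectives is not stated in the abstract.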
Identifying new disease indications for approved drugs can help reduce the cost and time of drug development. Most recent methods exploit various information related to drugs and diseases to predict candidate drug-disease associations. However, these methods fail to deeply integrate the neighborhood topological structure and the node attributes of a drug-disease node pair of interest. We propose a new prediction method, ANPred, to learn and integrate pairwise attribute information and neighbor topology information from the similarities and associations related to drugs and diseases. First, a bi-layer heterogeneous network with intra-layer and inter-layer connections is established to combine the drug similarities, the disease similarities, and the drug-disease associations. Second, the embedding of a drug-disease pair is constructed by integrating multiple biological premises about drugs and diseases. A learning framework based on multi-layer convolutional neural networks is designed to learn the attribute representation of the pair from its embedding. Sequences of neighbor nodes are formed by random walks on the heterogeneous network, and a framework based on a fully-connected autoencoder and a skip-gram module is constructed to learn the neighbor topology representations of the nodes. The cross-validation results indicate that ANPred outperforms several state-of-the-art methods, and case studies on 5 drugs further confirm its ability to discover potential drug-disease association candidates.
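The topology branch of ANPred (random walks feeding a fully-connected autoencoder with a skip-gram objective) can likewise be sketched in PyTorch. The walk generator, layer widths, and the joint reconstruction/skip-gram head below are illustrative assumptions, not ANPred's published implementation.

```python
import random
import torch
import torch.nn as nn

def random_walks(adj, walk_len=10, walks_per_node=5):
    """Uniform random walks over the bi-layer heterogeneous network; drugs and
    diseases are assumed to share one index space in the adjacency matrix."""
    n = adj.shape[0]
    neighbors = [torch.nonzero(adj[i]).flatten().tolist() for i in range(n)]
    walks = []
    for _ in range(walks_per_node):
        for start in range(n):
            walk, cur = [start], start
            for _ in range(walk_len - 1):
                if not neighbors[cur]:
                    break
                cur = random.choice(neighbors[cur])
                walk.append(cur)
            walks.append(walk)
    return walks

class TopologyEncoder(nn.Module):
    """Fully-connected autoencoder whose code vectors double as skip-gram
    embeddings: a center node's code should predict its walk context."""
    def __init__(self, n_nodes, code_dim=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_nodes, 128), nn.ReLU(),
                                 nn.Linear(128, code_dim))
        self.dec = nn.Sequential(nn.Linear(code_dim, 128), nn.ReLU(),
                                 nn.Linear(128, n_nodes))
        self.context = nn.Embedding(n_nodes, code_dim)  # skip-gram output embeddings

    def forward(self, x, center, context):
        z = self.enc(x)                 # (n, code_dim) neighbor-topology representations
        recon = self.dec(z)             # autoencoder reconstruction of the input rows
        sg = (z[center] * self.context(context)).sum(-1)  # skip-gram compatibility score
        return z, recon, torch.sigmoid(sg)

# Toy usage: adjacency rows serve as node inputs; training would sample
# (center, context) pairs from `walks` within a window and combine the
# skip-gram loss with the reconstruction loss.
n = 12
adj = ((torch.rand(n, n) > 0.7).float() + torch.eye(n)).clamp(max=1.0)
adj = ((adj + adj.t()) > 0).float()     # symmetrize the toy network
walks = random_walks(adj)
encoder = TopologyEncoder(n_nodes=n)
z, recon, p = encoder(adj, torch.tensor([0, 3]), torch.tensor([1, 4]))
```

The pairwise attribute branch (multi-layer CNN over the pair embedding) would run alongside this encoder; the abstract does not specify how the two representations are finally combined for scoring.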