ISBN (digital): 9783031189166
ISBN (print): 9783031189159; 9783031189166
Recent backbones built on transformers capture context within a significantly larger area than CNNs and greatly improve performance on semantic segmentation. However, the fact that the decoder utilizes features from different stages, including shallow layers, indicates that local context is still important. Instead of simply incorporating features from different stages, we propose a cross-stage class-specific attention designed mainly for transformer-based backbones. Specifically, given a coarse prediction, we first employ the final-stage features to aggregate a class center over the whole image. High-resolution features from an earlier stage are then used as queries to absorb the semantics of the class centers. To eliminate irrelevant classes within a local area, we build the context for each query position according to the classification scores from the coarse prediction and remove the redundant classes, so that only relevant classes provide keys and values in the attention and participate in value routing. We validate the proposed scheme on several datasets, including ADE20K, Pascal Context, and COCO-Stuff, showing that the proposed model improves performance compared with other works.
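The abstract's two steps (aggregating a class center from final-stage features under a coarse prediction, then letting high-resolution queries attend only to the most likely classes per position) can be sketched in plain NumPy. The exact layer dimensions, the `top_k` pruning rule, and all function names here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def class_centers(feats, coarse_logits):
    # feats: (C, H, W) final-stage features; coarse_logits: (K, H, W) coarse prediction.
    # Each class center is a score-weighted average of features over the whole image.
    C = feats.shape[0]
    K = coarse_logits.shape[0]
    scores = softmax(coarse_logits.reshape(K, -1), axis=1)  # per-class spatial weights
    f = feats.reshape(C, -1)                                # (C, H*W)
    return scores @ f.T                                     # (K, C): one center per class

def class_specific_attention(queries, centers, coarse_scores, top_k=3):
    # queries: (N, C) high-resolution features from an earlier stage;
    # centers: (K, C); coarse_scores: (N, K) per-position class probabilities.
    # Only the top_k most likely classes at each position provide keys/values
    # (an assumed stand-in for the paper's redundant-class removal).
    N, C = queries.shape
    out = np.zeros_like(queries)
    for i in range(N):
        keep = np.argsort(coarse_scores[i])[-top_k:]        # retain likely classes only
        attn = softmax(queries[i] @ centers[keep].T / np.sqrt(C))
        out[i] = attn @ centers[keep]                       # absorb class-center semantics
    return out
```

A real model would run this per batch with learned query/key/value projections; the per-position loop is kept only for readability.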
Background: The open and accessible regions of the chromosome are more likely to be bound by transcription factors, which are important for nuclear processes and biological functions. Studying changes in chromosome flexibility can help to discover and analyze disease markers and improve the efficiency of clinical diagnosis. Current methods for predicting chromosome flexibility based on Hi-C data include the Flexibility-Rigidity Index (FRI) and the Gaussian Network Model (GNM), which have been proposed to characterize chromosome flexibility. However, these methods require chromosome structure data from 3D biological experiments, which are time-consuming and expensive. Objective: Generally, the folding and curling of the DNA double helix have a great impact on chromosome flexibility and function. Motivated by the success of genomic sequence analysis in biomolecular function analysis, we aim to propose a method that predicts chromosome flexibility from genomic sequence data alone. Methods: We propose a new method (named "DeepCFP") using deep learning models to predict chromosome flexibility based only on genomic sequence features. The model has been tested in the GM12878 cell line. Results: The maximum accuracy of our model reaches 91%. The performance of DeepCFP is close to that of FRI and GNM. Conclusion: DeepCFP can achieve high performance based only on genomic sequence.
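The abstract only states that DeepCFP predicts flexibility from sequence features; its actual architecture is not described here. A minimal sketch of the general approach (one-hot encoding a DNA sequence, convolving learned motif detectors over it, and applying a logistic head) might look as follows, with every layer size, kernel shape, and function name being an assumption:

```python
import numpy as np

BASES = "ACGT"

def one_hot(seq):
    # (len, 4) one-hot encoding of a DNA string
    idx = {b: i for i, b in enumerate(BASES)}
    out = np.zeros((len(seq), 4))
    for i, b in enumerate(seq):
        out[i, idx[b]] = 1.0
    return out

def conv1d(x, kernels):
    # x: (L, 4); kernels: (F, w, 4) motif detectors -> (L - w + 1, F) activations
    F, w, _ = kernels.shape
    L = x.shape[0]
    out = np.zeros((L - w + 1, F))
    for f in range(F):
        for i in range(L - w + 1):
            out[i, f] = np.sum(x[i:i + w] * kernels[f])
    return out

def predict_flexibility(seq, kernels, weights, bias):
    # Max-pool activations over positions, then a logistic head
    # producing a flexibility score in (0, 1).
    acts = conv1d(one_hot(seq), kernels).max(axis=0)
    return 1.0 / (1.0 + np.exp(-(acts @ weights + bias)))
```

In practice such a model would be trained end-to-end in a deep learning framework on labeled genomic windows; this sketch only illustrates the data flow from raw sequence to a prediction.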
ISBN (print): 9781728143231; 9781728143224
Machine translation is a natural language processing task that uses computers to convert text between different languages. This article presents experiments with an original seq2seq model on an English-Vietnamese dataset. By adding an attention mechanism and comparing the models' results, we find that attention can greatly improve machine translation. The seq2seq model with attention achieves the basic functions of machine translation and performs well in our experiments. Evaluation with multi-bleu.perl shows that the attention mechanism performs well on Vietnamese machine translation tasks.
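The core of the attention mechanism described above is a weighted sum over encoder states, where the weights are derived from the current decoder state. A minimal dot-product (Luong-style) version in NumPy, with all names chosen for illustration since the abstract does not specify the scoring function:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def dot_attention(decoder_state, encoder_states):
    # decoder_state: (d,) hidden state at the current target step;
    # encoder_states: (T, d) one vector per source token.
    scores = encoder_states @ decoder_state   # alignment score per source position
    weights = softmax(scores)                 # attention distribution over the source
    context = weights @ encoder_states        # context vector fed to the decoder
    return context, weights
```

At each decoding step the context vector is concatenated with the decoder state before predicting the next target token, which lets the model focus on the relevant source words rather than a single fixed sentence encoding.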