In recent years, deep learning approaches have gained great attention due to their superior performance and the availability of high-speed computing resources. These approaches have also been extended to the real-time processing of multimedia content, exploiting its spatial and temporal structure. In this paper, we propose a deep learning-based video description framework which first extracts visual features from video frames using deep convolutional neural networks (CNNs) and then passes the derived representations to a long short-term memory (LSTM)-based language model. To capture accurate information about human presence, a fine-tuned multi-task CNN is presented. The proposed pipeline is end-to-end trainable and capable of learning dense visual features along with an accurate framework for the generation of natural language descriptions of video streams. The evaluation is done by calculating Metric for Evaluation of Translation with Explicit ORdering (METEOR) and Recall-Oriented Understudy for Gisting Evaluation (ROUGE) scores between system-generated and human-annotated video descriptions for a carefully designed dataset. The video descriptions generated by the traditional feature learning and the proposed deep learning frameworks are also compared through their ROUGE scores.
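A minimal sketch of this kind of CNN-plus-LSTM captioning pipeline, written in PyTorch with illustrative layer sizes; the stock ResNet-18 backbone stands in for the paper's fine-tuned multi-task CNN, and the mean-pooling over frames is an assumption, not the authors' design:

```python
import torch
import torch.nn as nn
from torchvision import models

class VideoCaptioner(nn.Module):
    """CNN frame encoder followed by an LSTM language model (illustrative)."""
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        cnn = models.resnet18(weights=None)  # stand-in for the paper's fine-tuned CNN
        self.cnn = nn.Sequential(*list(cnn.children())[:-1])  # drop the classifier head
        self.feat_proj = nn.Linear(512, hidden_dim)  # map frame features to LSTM state size
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, frames, captions):
        # frames: (batch, n_frames, 3, H, W); captions: (batch, seq_len) token ids
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).flatten(1)    # (b*t, 512) per-frame features
        feats = feats.view(b, t, -1).mean(dim=1)             # average-pool over frames (assumption)
        h0 = torch.tanh(self.feat_proj(feats)).unsqueeze(0)  # init LSTM state from the video
        c0 = torch.zeros_like(h0)
        hidden, _ = self.lstm(self.embed(captions), (h0, c0))
        return self.out(hidden)                              # (batch, seq_len, vocab_size)
```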
ISBN (print): 9781450360142
In previous work on text summarization, encoder-decoder architectures and attention mechanisms have both been widely used. Attention-based encoder-decoder approaches typically focus on the sentences preceding a given sentence when building the document representation, failing to capture, in the encoder, the relationships between a sentence and the sentences that follow it. We propose an attentive encoder-based summarization (AES) model to generate article summaries. AES can generate a rich document representation by considering both the global information of a document and the relationships among its sentences. A unidirectional recurrent neural network (RNN) and a bidirectional RNN are considered to construct the encoders, giving rise to unidirectional attentive encoder-based summarization (Uni-AES) and bidirectional attentive encoder-based summarization (Bi-AES), respectively. Our experimental results show that Bi-AES outperforms Uni-AES, and we obtain substantial improvements over a relevant state-of-the-art baseline.
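A hedged sketch of the bidirectional attentive encoder idea in PyTorch: a GRU (an assumption; the abstract only specifies an RNN) runs over precomputed sentence vectors in both directions, so each state sees both preceding and following sentences, and attention pools the states into a global document vector. Names and dimensions are illustrative:

```python
import torch
import torch.nn as nn

class BiAttentiveEncoder(nn.Module):
    """Bidirectional GRU over sentence vectors plus attention, so each
    sentence representation reflects preceding and following sentences."""
    def __init__(self, sent_dim=128, hidden_dim=128):
        super().__init__()
        self.rnn = nn.GRU(sent_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden_dim, 1)

    def forward(self, sents):
        # sents: (batch, n_sentences, sent_dim) precomputed sentence vectors
        states, _ = self.rnn(sents)             # (batch, n, 2*hidden)
        scores = self.attn(states).squeeze(-1)  # one attention score per sentence
        weights = torch.softmax(scores, dim=-1)
        doc = (weights.unsqueeze(-1) * states).sum(dim=1)  # global document vector
        return doc, states
```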
With the rise in popularity of artificial intelligence, the technology of verbal communication between humans and machines has received increasing attention, but generating a good conversation remains a difficult task. The key factor in human-machine conversation is whether the machine can give good responses that are appropriate not only at the content level (relevant and grammatical) but also at the emotion level (consistent emotional expression). In this paper, we propose a new model based on long short-term memory that realizes an encoder-decoder framework, and we address the emotional factor of conversation generation by changing the model's input through a series of input transformations: a sequence without an emotional category, a sequence with an emotional category for the input sentence, and a sequence with an emotional category for the output responses. Comparing our work with related work, we obtain slightly better results with respect to emotion consistency. Although our content coherence results are lower than those of related work, at the present stage of research our method can generally generate emotional responses in order to control and improve the user's emotion. Our experiments show that, through the introduction of emotional intelligence, our model can generate responses appropriate not only in content but also in emotion.
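One plausible reading of the "input transformation" idea is to prepend an emotion-category embedding to the encoder input; the sketch below (PyTorch, with hypothetical sizes and a six-emotion assumption) shows that variant, not the authors' exact code:

```python
import torch
import torch.nn as nn

class EmotionConditionedEncoder(nn.Module):
    """Seq2seq encoder whose input is transformed by prepending an
    emotion-category embedding, one reading of the paper's input variants."""
    def __init__(self, vocab_size, n_emotions=6, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.word_embed = nn.Embedding(vocab_size, embed_dim)
        self.emo_embed = nn.Embedding(n_emotions, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)

    def forward(self, tokens, emotion):
        # tokens: (batch, seq_len) word ids; emotion: (batch,) category ids
        emo = self.emo_embed(emotion).unsqueeze(1)          # (batch, 1, embed)
        seq = torch.cat([emo, self.word_embed(tokens)], 1)  # prepend the emotion "token"
        _, (h, c) = self.lstm(seq)
        return h, c  # initial state for the response decoder
```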
We propose a repeated-review deep learning model for image captioning in the image evidence review process. It consists of two subnetworks: a convolutional neural network, employed to extract image features, and a recurrent neural network, used to decode the image features into captions. Our model combines the advantages of the two subnetworks by recalling visual information, unlike the traditional encoder-decoder model, and then introduces a multimodal layer to fuse the image and caption effectively. The proposed model has been validated on benchmark datasets (MSCOCO, Flickr). The results show that it performs well on BLEU-3 and BLEU-4, to some extent even surpassing the best models available today (such as NIC and m-RNN).
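The multimodal fusion step can be sketched roughly as follows (PyTorch; the tanh fusion and dimensions are assumptions in the spirit of m-RNN-style multimodal layers, not this paper's exact formulation):

```python
import torch
import torch.nn as nn

class MultimodalLayer(nn.Module):
    """Fuses the RNN hidden state with the (re-visited) image feature,
    in the spirit of m-RNN-style multimodal layers."""
    def __init__(self, hidden_dim=512, img_dim=2048, mm_dim=512, vocab_size=10000):
        super().__init__()
        self.from_hidden = nn.Linear(hidden_dim, mm_dim)
        self.from_image = nn.Linear(img_dim, mm_dim)
        self.out = nn.Linear(mm_dim, vocab_size)

    def forward(self, hidden, img_feat):
        # hidden: (batch, seq_len, hidden_dim); img_feat: (batch, img_dim)
        img = self.from_image(img_feat).unsqueeze(1)        # broadcast over time steps
        fused = torch.tanh(self.from_hidden(hidden) + img)  # element-wise fusion
        return self.out(fused)                              # per-step vocabulary logits
```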
Class imbalance is a key issue in the application of deep learning to remote sensing image classification, because a model trained on imbalanced samples has low classification accuracy for minority classes. In this study, we propose an accurate classification approach for imbalanced data using a multistage sampling method and deep neural networks. We first balance the samples by multistage sampling to obtain the training sets. Then, a state-of-the-art model is adopted for pixel-wise classification by combining the advantages of atrous spatial pyramid pooling (ASPP) and the encoder-decoder structure, two different types of fully convolutional networks (FCNs): contextual information at multiple levels is obtained in the encoder stage, and the details and spatial dimensions of targets are restored using this information during the decoder stage. We employ four deep learning-based classification algorithms (basic FCN, FCN-8S, ASPP, and our encoder-decoder with ASPP) on multistage training sets (original, MUS1, and MUS2) of WorldView-3 images of the southeastern Qinghai-Tibet Plateau and GF-2 images of northeastern Beijing for comparison. The experiments show that, compared with the existing training sets (original, MUS1, and identical) and an existing method (cost weighting), the MUS2 training set produced by multistage sampling significantly enhances classification performance for minority classes. Our approach shows distinct advantages for imbalanced data.
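The paper's multistage sampling procedure is not spelled out in the abstract, so the sketch below substitutes a simple random-oversampling stand-in (NumPy) that only illustrates the balancing goal: every class is resampled up to the majority-class count:

```python
import numpy as np

def balance_by_oversampling(samples, labels, rng=None):
    """Toy stand-in for multistage sampling: oversample each minority
    class until all classes match the majority-class count."""
    rng = rng or np.random.default_rng(0)
    labels = np.asarray(labels)
    counts = {c: int(np.sum(labels == c)) for c in np.unique(labels)}
    target = max(counts.values())            # majority-class count
    idx = []
    for c in counts:
        members = np.flatnonzero(labels == c)
        extra = rng.choice(members, size=target - len(members), replace=True)
        idx.extend(members.tolist() + extra.tolist())
    idx = np.array(idx)
    return [samples[i] for i in idx], labels[idx]
```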
Dense semantic labeling is significant in high-resolution remote sensing imagery research, and it has been widely used in land-use analysis and environmental protection. With the recent success of fully convolutional networks (FCN), various network architectures have greatly improved performance. Among them, atrous spatial pyramid pooling (ASPP) and the encoder-decoder structure are two successful ones. The former is able to extract multi-scale contextual information across multiple effective fields-of-view, while the latter can recover spatial information to obtain sharper object boundaries. In this study, we propose a more efficient fully convolutional network by combining the advantages of both structures. Our model uses a deep residual network (ResNet) followed by ASPP as the encoder, and combines two scales of high-level features with the corresponding low-level features as the decoder at the upsampling stage. We further develop a multi-scale loss function to enhance the learning procedure. In postprocessing, a novel superpixel-based dense conditional random field is employed to refine the predictions. We evaluate the proposed method on the Potsdam and Vaihingen datasets, and the experimental results demonstrate that our method performs better than other machine learning and deep learning methods. Compared with the state-of-the-art DeepLab_v3+, our model gains improvements of 0.4% and 0.6% in overall accuracy on these two datasets, respectively.
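An illustrative ASPP block in PyTorch: parallel dilated (atrous) convolutions with different rates see different effective fields-of-view over the same feature map, and a 1x1 convolution projects their concatenation. Rates and channel sizes follow common DeepLab settings, not necessarily this paper's:

```python
import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Atrous spatial pyramid pooling: parallel dilated convolutions
    capture context at multiple effective fields-of-view."""
    def __init__(self, in_ch=2048, out_ch=256, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=3 if r > 1 else 1,
                      padding=r if r > 1 else 0, dilation=r)
            for r in rates
        ])
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x):
        # padding == dilation keeps every branch at the same spatial size
        feats = [branch(x) for branch in self.branches]
        return self.project(torch.cat(feats, dim=1))
```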
Haze is a natural phenomenon in which dust, smoke, and other particles alter the view of the sky and reduce visibility. Hazy images cause various visibility problems for traffic users and tourists, especially in hilly areas where haze and fog are very common. In this paper, a method for single-image dehazing using a convolutional neural network is proposed. Outdoor images are used, to which particular filters are applied to find the haze in the image. In hazy images, some pixels have a small value in at least one of the red, green, and blue (RGB) color channels, and the intensity of these pixels is mainly contributed by the airlight depth map. Estimating these low-value points of the haze transmission map is useful for obtaining a high-quality dehazed image. An end-to-end encoder-decoder training model is utilized to achieve a high-quality dehazed image. The approach is validated on a dataset consisting of around 1500 outdoor images. The method also yields the transmission map of the hazy image, which can further be used to enhance the visibility of the scene.
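The low-value-channel observation the abstract alludes to resembles the dark channel prior; a minimal NumPy sketch (the patch size and the brute-force min-filter are illustrative choices, not the paper's method):

```python
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel minimum over RGB, min-filtered over a local patch.
    In haze-free outdoor regions this value is typically near zero;
    haze lifts it, which is what transmission estimation exploits."""
    mins = img.min(axis=2)                  # (H, W) minimum over color channels
    h, w = mins.shape
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')
    out = np.empty_like(mins)
    for i in range(h):                      # brute-force local min-filter
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out
```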
Biomedical events play a key role in advancing biomedical research. Event trigger identification, i.e., extracting the words describing event types, is a crucial prerequisite step in the pipeline of biomedical event extraction. Previous methods suffer from two main problems: (1) the associations among contextual trigger labels, which can provide significant clues, are ignored; (2) the weight between word embeddings and contextual features needs to be adjusted dynamically according to the trigger candidate. In this paper, we propose a novel contextual label-sensitive gated network for biomedical event trigger extraction that solves both problems: it mixes the two parts dynamically and captures contextual label clues automatically. Furthermore, we introduce dependency-based word embeddings to represent dependency-based semantic information, as well as an attention mechanism to obtain more focused representations. Experimental results show that our approach advances the state of the art and achieves the best F1-score on the commonly used Multi-Level Event Extraction (MLEE) corpus.
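A minimal sketch of the gating idea in PyTorch: a learned sigmoid gate, conditioned on both inputs, dynamically weights the word embedding against the contextual feature. Dimensions and the exact mixing form are assumptions, not the paper's equations:

```python
import torch
import torch.nn as nn

class GatedMix(nn.Module):
    """Learns a per-candidate gate that dynamically weights word
    embeddings against contextual features, as the abstract motivates."""
    def __init__(self, word_dim=200, ctx_dim=200):
        super().__init__()
        self.gate = nn.Linear(word_dim + ctx_dim, word_dim)
        self.ctx_proj = nn.Linear(ctx_dim, word_dim)

    def forward(self, word, ctx):
        # word: (batch, word_dim); ctx: (batch, ctx_dim)
        g = torch.sigmoid(self.gate(torch.cat([word, ctx], dim=-1)))
        return g * word + (1 - g) * self.ctx_proj(ctx)  # dynamic convex-style mix
```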
The automatic generation of legal texts can alleviate the shortage of human resources in China's legal services industry, and the emergence of generative adversarial network models offers a new approach to it. This paper proposes an automatic text generation model based on generative adversarial networks, ED-GAN (Generative Adversarial Networks based on encoder-decoder). In the model's generator, the keyword sequence of case elements is first fed into the LSTM of the encoder stage and encoded into a hidden vector; this hidden vector is then fed into the LSTM of the decoder, where the output at each time step is combined to produce the hidden vector of the next time step, yielding the outputs at each time step and thereby generating the text sequence. Finally, the model uses a CNN to discriminate between generated text and real text. Experiments show that the proposed model can generate reasonably good legal texts.
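A rough PyTorch sketch of the generator side of ED-GAN as described: an encoder LSTM compresses the case-element keyword sequence into a hidden vector, and a decoder LSTM unrolls it step by step into a text sequence. Greedy decoding, the BOS token, and all sizes are illustrative; the CNN discriminator is omitted:

```python
import torch
import torch.nn as nn

class EDGenerator(nn.Module):
    """Generator in the ED-GAN spirit: encoder LSTM over keywords,
    decoder LSTM unrolled into a text sequence."""
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.decoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, keywords, max_len=50, bos_id=1):
        # keywords: (batch, kw_len) ids of case-element keywords
        _, state = self.encoder(self.embed(keywords))  # hidden vector from keywords
        token = torch.full((keywords.size(0), 1), bos_id,
                           dtype=torch.long, device=keywords.device)
        outputs = []
        for _ in range(max_len):                       # greedy step-by-step unrolling
            h, state = self.decoder(self.embed(token), state)
            logits = self.out(h[:, -1])
            token = logits.argmax(dim=-1, keepdim=True)
            outputs.append(logits)
        return torch.stack(outputs, dim=1)             # (batch, max_len, vocab)
```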
Background: In recent years, quantitative analysis of root growth has become increasingly important as a way to explore the influence of abiotic stresses such as high temperature and drought on a plant's ability to take up water and nutrients. Segmentation and feature extraction of plant roots from images present a significant computer vision challenge: root images contain complicated structures, variations in size, background, occlusion, clutter, and lighting conditions. We present a new image analysis approach that provides fully automatic extraction of complex root system architectures from a range of plant species in varied imaging set-ups. Driven by modern deep-learning approaches, RootNav 2.0 replaces previously manual and semi-automatic feature extraction with an extremely deep multi-task convolutional neural network architecture. The network also locates seeds and first-order and second-order root tips to drive a search algorithm seeking optimal paths throughout the image, extracting accurate architectures without user interaction. Results: We develop and train a novel deep network architecture to explicitly combine local pixel information with global scene information in order to accurately segment small root features across high-resolution images. The proposed method was evaluated on images of wheat (Triticum aestivum L.) from a seedling assay. Compared with semi-automatic analysis via the original RootNav tool, the proposed method demonstrated comparable accuracy with a 10-fold increase in speed. The network was able to adapt to different plant species via transfer learning, offering similar accuracy when transferred to an Arabidopsis thaliana plate assay. A final instance of transfer learning, to images of Brassica napus from a hydroponic assay, still demonstrated good accuracy despite far fewer training images. Conclusions: We present RootNav 2.0, a new approach to root image analysis driven by a deep neural network. The tool can be adapted to
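A toy illustration of the multi-task output structure such a network might use (hypothetical shallow encoder; RootNav 2.0's actual architecture is far deeper): one shared encoder feeds a segmentation head for root masks and a heatmap head for seed and tip localization:

```python
import torch
import torch.nn as nn

class MultiTaskRootNet(nn.Module):
    """Illustrative multi-task head: a shared encoder, a segmentation
    head for root masks, and a heatmap head for seed/tip localization."""
    def __init__(self, n_classes=3, n_keypoints=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(64, n_classes, 1)   # per-pixel class logits
        self.kp_head = nn.Conv2d(64, n_keypoints, 1)  # seed / 1st- / 2nd-order tip heatmaps

    def forward(self, x):
        shared = self.encoder(x)
        return self.seg_head(shared), self.kp_head(shared)
```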