We perform a comparative study of automatic term extraction from domain-specific language using a PageRank model with different edge-weighting methods. We vary vector space representations within the PageRank graph a...
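The entry above describes ranking term candidates with PageRank over a graph whose edge weights are derived from vector representations of terms. A minimal sketch of that general setup, assuming cosine similarity between term vectors and using networkx for the weighted PageRank (both are illustrative assumptions, not the study's exact pipeline):

```python
# Hedged sketch: terms are graph nodes, edge weights come from similarity of their
# vector representations, and PageRank scores rank the term candidates.
import itertools
import networkx as nx
import numpy as np

def rank_terms(term_vectors: dict[str, np.ndarray], threshold: float = 0.0) -> list[tuple[str, float]]:
    graph = nx.Graph()
    graph.add_nodes_from(term_vectors)
    for a, b in itertools.combinations(term_vectors, 2):
        va, vb = term_vectors[a], term_vectors[b]
        sim = float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb) + 1e-9))
        if sim > threshold:
            graph.add_edge(a, b, weight=sim)        # cosine similarity as edge weight
    scores = nx.pagerank(graph, weight="weight")    # weighted PageRank over the term graph
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Toy usage with random vectors standing in for domain-specific term embeddings.
rng = np.random.default_rng(0)
vecs = {t: rng.normal(size=50) for t in ["neural network", "gradient", "loss", "banana"]}
print(rank_terms(vecs)[:3])
```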
Short text classification is a fundamental task in natural language processing. It is difficult in practice due to the lack of context information and labeled data. In this paper, we propose a new method called SHINE, whic...
ISBN: (Print) 9781950737901
User and item representation learning is critical for recommendation. Many existing recommendation methods learn representations of users and items based on their ratings and reviews. However, the user-user and item-item relatedness is usually not considered in these methods, which may be insufficient. In this paper, we propose a neural recommendation approach which can utilize useful information from both review content and user-item graphs. Since reviews and graphs have different characteristics, we propose to use a multi-view learning framework to incorporate them as different views. In the review content-view, we propose to use a hierarchical model to first learn sentence representations from words, then learn review representations from sentences, and finally learn user/item representations from reviews. In addition, we propose to incorporate a three-level attention network into this view to select important words, sentences and reviews for learning informative user and item representations. In the graph-view, we propose a hierarchical graph neural network to jointly model the user-item, user-user and item-item relatedness by capturing the first- and second-order interactions between users and items in the user-item graph. In addition, we apply an attention mechanism to model the importance of these interactions to learn informative user and item representations. Extensive experiments on four benchmark datasets validate the effectiveness of our approach.
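The review view stacks attention pooling at the word, sentence and review levels. Below is a minimal sketch of one such pooling level (words into a sentence vector); the layer names, dimensions and the additive-attention form are assumptions for illustration, and the paper's model stacks this pattern and adds the graph view on top.

```python
# Minimal sketch (not the authors' code) of one attention-pooling level from the
# review view: a sequence of word vectors is pooled into a single sentence vector.
import torch
import torch.nn as nn

class AttentionPooling(nn.Module):
    """Pools a sequence of vectors into one vector with learned attention weights."""
    def __init__(self, dim: int, att_dim: int = 64):
        super().__init__()
        self.proj = nn.Linear(dim, att_dim)             # project inputs to attention space
        self.query = nn.Linear(att_dim, 1, bias=False)  # score each position

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim)
        scores = self.query(torch.tanh(self.proj(x))).squeeze(-1)  # (batch, seq_len)
        weights = torch.softmax(scores, dim=-1).unsqueeze(-1)      # (batch, seq_len, 1)
        return (weights * x).sum(dim=1)                            # (batch, dim)

# Usage: pool word embeddings (e.g. from a CNN/LSTM encoder) into sentence vectors;
# the same module would be reused for sentences -> review and reviews -> user/item.
words = torch.randn(8, 20, 128)          # 8 sentences, 20 words, 128-dim embeddings
sentence_vecs = AttentionPooling(128)(words)
print(sentence_vecs.shape)               # torch.Size([8, 128])
```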
ISBN: (Print) 9781950737901
Syntactic relations are broadly used in many NLP tasks. For event detection, syntactic relation representations based on dependency trees can better capture the interrelations between candidate trigger words and related entities than sentence representations. However, existing studies only use first-order syntactic relations (i.e., the arcs) in dependency trees to identify trigger words. For this reason, this paper proposes a new method for event detection, which uses a dependency tree based graph convolution network with aggregative attention to explicitly model and aggregate multi-order syntactic representations in sentences. Experimental comparison with state-of-the-art baselines shows the superiority of the proposed method.
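A hedged sketch of the multi-order idea: stack GCN layers over the dependency adjacency matrix so that layer k reflects k-th order syntactic relations, then attend over the per-order outputs. The layer shapes and this simplified form of the aggregative attention are assumptions, not the paper's implementation.

```python
# Sketch: multi-order dependency-tree GCN with attention over the per-order outputs.
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, h, adj):
        # adj: (n, n) dependency adjacency with self-loops; h: (n, dim) token states
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)
        return torch.relu(self.linear(adj @ h) / deg)   # degree-normalized propagation

class MultiOrderGCN(nn.Module):
    def __init__(self, dim: int, orders: int = 3):
        super().__init__()
        self.layers = nn.ModuleList([GCNLayer(dim) for _ in range(orders)])
        self.att = nn.Linear(dim, 1)                    # scores each order's representation

    def forward(self, h, adj):
        outs = []
        for layer in self.layers:                       # k-th layer ~ k-th order relations
            h = layer(h, adj)
            outs.append(h)
        stacked = torch.stack(outs, dim=1)              # (n, orders, dim)
        w = torch.softmax(self.att(stacked), dim=1)     # attention over orders
        return (w * stacked).sum(dim=1)                 # (n, dim) aggregated token states

# Toy usage: 5 tokens, 64-dim embeddings, a symmetric dependency adjacency.
h = torch.randn(5, 64)
adj = torch.eye(5)
adj[0, 1] = adj[1, 0] = 1.0
print(MultiOrderGCN(64)(h, adj).shape)  # torch.Size([5, 64])
```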
ISBN: (Print) 9781950737901
Short text classification has found rich and critical applications in news and tweet tagging to help users find relevant information. Due to the lack of labeled training data in many practical use cases, there is a pressing need for studying semi-supervised short text classification. Most existing studies focus on long texts and achieve unsatisfactory performance on short texts due to the sparsity and limited labeled data. In this paper, we propose a novel heterogeneous graph neural network based method for semi-supervised short text classification, taking full advantage of the few labeled data and large unlabeled data through information propagation along the graph. In particular, we first present a flexible HIN (heterogeneous information network) framework for modeling the short texts, which can integrate any type of additional information as well as capture their relations to address the semantic sparsity. Then, we propose Heterogeneous graph ATtention networks (HGAT) to embed the HIN for short text classification based on a dual-level attention mechanism, including node-level and type-level attentions. The attention mechanism can learn the importance of different neighboring nodes as well as the importance of different node (information) types to the current node. Extensive experimental results demonstrate that our proposed model significantly outperforms state-of-the-art methods across six benchmark datasets.
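A minimal sketch of the dual-level attention idea, assuming additive scoring functions and toy shapes; the actual HGAT layer also involves type-specific transformations and the full HIN construction described above.

```python
# Sketch: node-level attention within each neighbor type, then type-level attention
# across the per-type summaries, to update a short-text node's representation.
import torch
import torch.nn as nn

class DualLevelAttention(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.node_att = nn.Linear(2 * dim, 1)   # scores a (center, neighbor) pair
        self.type_att = nn.Linear(2 * dim, 1)   # scores a (center, type-summary) pair

    def forward(self, center, neighbors_by_type):
        # center: (dim,); neighbors_by_type: list of (n_t, dim) tensors, one per node type
        type_vecs = []
        for nbrs in neighbors_by_type:
            pairs = torch.cat([center.expand_as(nbrs), nbrs], dim=-1)
            a = torch.softmax(self.node_att(pairs).squeeze(-1), dim=0)  # node-level weights
            type_vecs.append((a.unsqueeze(-1) * nbrs).sum(dim=0))
        types = torch.stack(type_vecs)                                  # (num_types, dim)
        pairs = torch.cat([center.expand_as(types), types], dim=-1)
        b = torch.softmax(self.type_att(pairs).squeeze(-1), dim=0)      # type-level weights
        return (b.unsqueeze(-1) * types).sum(dim=0)                     # updated node vector

# Toy usage: a short-text node with entity and topic neighbors in the HIN.
center = torch.randn(64)
entity_nbrs, topic_nbrs = torch.randn(3, 64), torch.randn(2, 64)
print(DualLevelAttention(64)(center, [entity_nbrs, topic_nbrs]).shape)  # torch.Size([64])
```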
While automated question answering systems are increasingly able to retrieve answers to natural language questions, their ability to generate detailed human-readable explanations for their answers is still quite limit...
Multi-hop reasoning is an effective and interpretable approach for query answering, as it finds reasoning paths over knowledge graphs (KGs) to enhance interpretability. Recent studies have applied reinforcement learning-based (RL-based) methods with policy agents to solve this problem. However, these methods primarily focus on sequential reasoning paths and candidate nodes, while overlooking the local structural information of adjacency subgraphs and the semantic correlations between relations at each decision step. In this paper, we propose a novel RL-based multi-hop reasoning model, Look One Step Ahead (LOSA), which leverages first-order aggregation to treat more expressive adjacency subgraphs, rather than single nodes, as candidate actions and pays attention to the semantic correlations between relations. Specifically, an adjacency aggregation module encodes the local subgraph information and feeds the representations into the policy network to guide decision-making, thereby reducing backtracking and mitigating sparse-reward issues. Furthermore, a semantic matching module is designed to emphasize the semantic correlations of relations to improve the rationality of the reasoning paths. Extensive experiments on benchmark datasets demonstrate the effectiveness of the proposed approach.
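A hedged sketch of the look-one-step-ahead policy step: each candidate action is represented by a first-order aggregate of its adjacency subgraph before scoring against the current state. The mean-pooling aggregator, the state encoding and all dimensions are assumptions, and the semantic matching module is omitted.

```python
# Sketch: score candidate actions using first-order aggregates of their adjacency
# subgraphs rather than the candidate entity embeddings alone.
import torch
import torch.nn as nn

class SubgraphPolicy(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.agg = nn.Linear(2 * dim, dim)   # aggregates (relation, neighbor) edge pairs
        self.score = nn.Linear(3 * dim, 1)   # scores (state, candidate) pairs

    def aggregate(self, cand_ent, nbr_rels, nbr_ents):
        # Mean-pool the candidate entity's outgoing edges, then mix with the entity itself.
        edges = torch.relu(self.agg(torch.cat([nbr_rels, nbr_ents], dim=-1)))  # (n, dim)
        return cand_ent + edges.mean(dim=0)

    def forward(self, state, candidates):
        # state: (2*dim,) query + history encoding; candidates: list of
        # (cand_ent (dim,), nbr_rels (n, dim), nbr_ents (n, dim)) triples.
        reps = torch.stack([self.aggregate(*c) for c in candidates])       # (k, dim)
        pairs = torch.cat([state.expand(len(candidates), -1), reps], dim=-1)
        return torch.softmax(self.score(pairs).squeeze(-1), dim=0)         # action distribution

# Toy usage: 2 candidate actions, 32-dim embeddings, 64-dim state encoding.
policy = SubgraphPolicy(32)
cands = [(torch.randn(32), torch.randn(4, 32), torch.randn(4, 32)) for _ in range(2)]
print(policy(torch.randn(64), cands))   # probabilities over the 2 candidate actions
```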
ISBN: (Print) 9781950737901
Semantic parses are directed acyclic graphs (DAGs), so semantic parsing should be modeled as graph prediction. But predicting graphs presents difficult technical challenges, so it is simpler and more common to predict the linearized graphs found in semantic parsing datasets using well-understood sequence models. The cost of this simplicity is that the predicted strings may not be well-formed graphs. We present recurrent neural network DAG grammars, a graph-aware sequence model that ensures only well-formed graphs while sidestepping many difficulties in graph prediction. We test our model on the Parallel Meaning Bank, a multilingual semantic graphbank. Our approach yields competitive results in English and establishes the first results for German, Italian and Dutch.
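A simplified sketch of the general mechanism behind such well-formedness guarantees: decode the linearization with an explicit stack of open constituents and mask out symbols that would break well-formedness at each step. The toy symbol inventory below is an assumption and is far simpler than the paper's DAG grammar.

```python
# Sketch: constrained greedy decoding that can only ever produce balanced
# (well-formed) linearizations, regardless of what the decoder scores prefer.
import torch

VOCAB = ["OPEN", "CLOSE", "node"]          # toy symbols: open a subgraph, close it, emit a node
OPEN, CLOSE, NODE = 0, 1, 2

def legal_mask(stack_depth: int, max_len: int, step: int) -> torch.Tensor:
    """Return a 0/1 mask over VOCAB given the current decoder state."""
    mask = torch.ones(len(VOCAB))
    if stack_depth == 0:
        mask[CLOSE] = 0.0                  # nothing open, so nothing to close
    if max_len - step <= stack_depth:
        mask[OPEN] = 0.0                   # must leave room to close everything
        mask[NODE] = 0.0
    return mask

def decode(logits_fn, max_len: int = 10):
    """Greedy decoding under the mask; logits_fn stands in for the RNN decoder."""
    out, depth = [], 0
    for step in range(max_len):
        masked = logits_fn(out) + torch.log(legal_mask(depth, max_len, step))
        sym = int(masked.argmax())
        out.append(VOCAB[sym])
        depth += 1 if sym == OPEN else -1 if sym == CLOSE else 0
        if depth == 0 and step > 0:
            break
    return out

# Toy usage with random "decoder" scores: the output is always balanced.
print(decode(lambda prefix: torch.randn(len(VOCAB))))
```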
Knowledge graph Question Answering (KGQA) has become a prominent area in natural language processing due to the emergence of large-scale Knowledge graphs (KGs). Recently, Neural Machine Translation based approaches are...