Entity linking (also called entity disambiguation) aims to map the mentions in a given document to their corresponding entities in a target knowledge base. Building a high-quality entity linking system requires effort in three parts: encoding of the entity, encoding of the mention context, and modeling of the coherence among mentions. For entity encoding, we use a long short-term memory (LSTM) network and a convolutional neural network (CNN) to encode the entity context and the entity description, respectively. We then design a function that combines the different aspects of entity information into unified, dense entity embeddings. For mention-context encoding, unlike standard attention mechanisms, which can only capture important individual words, we introduce a novel attention-based LSTM model that, with a conditional random field (CRF) layer, effectively captures the important text spans around a given mention. In addition, we take the coherence among mentions into consideration with a forward-backward algorithm, which is less time-consuming than previous methods. Our experimental results show that our model achieves competitive or even better performance than state-of-the-art models across different datasets.
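The forward-backward algorithm the abstract mentions computes, for each position in a chain, the posterior marginal over candidate labels in time linear in the chain length. A minimal sketch on a generic label chain (the function name, score matrices, and their shapes are illustrative assumptions, not the paper's actual interface):

```python
import numpy as np

def forward_backward(emissions, transitions):
    """Posterior marginals over a label chain via the forward-backward
    algorithm. `emissions` is (T, K): per-position scores for K candidate
    labels (e.g. candidate entities per mention); `transitions` is (K, K):
    pairwise compatibility scores. Unnormalized non-negative scores are fine.
    """
    T, K = emissions.shape
    alpha = np.zeros((T, K))  # forward messages
    beta = np.zeros((T, K))   # backward messages
    alpha[0] = emissions[0]
    for t in range(1, T):
        # alpha[t, k] = e[t, k] * sum_j alpha[t-1, j] * transitions[j, k]
        alpha[t] = emissions[t] * (alpha[t - 1] @ transitions)
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        # beta[t, j] = sum_k transitions[j, k] * e[t+1, k] * beta[t+1, k]
        beta[t] = transitions @ (emissions[t + 1] * beta[t + 1])
    marginals = alpha * beta
    return marginals / marginals.sum(axis=1, keepdims=True)
```

The two passes reuse each other's partial sums, which is why a single forward-backward sweep is cheaper than scoring all joint label assignments.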
We show that simultaneous precision measurements of the CP-violating phase in the time-dependent Bs→J/ψϕ analysis and of the Bs→μ+μ− rate, together with a measurement of mt′ by direct search at the LHC, would determine Vt′s*Vt′b and therefore the b→s quadrangle in the four-generation standard model. The forward-backward asymmetry in B→K*ℓ+ℓ− provides further discrimination.
ISBN: (print) 9789544520106
We introduce a number of novel techniques for lexical substitution, including an application of the forward-backward algorithm, a grammatical-relation-based similarity measure, and a modified form of n-gram matching. We test these techniques on the SemEval-2007 lexical substitution data [McCarthy and Navigli, 2007] and demonstrate their competitive performance. We also create a similar (small-scale) dataset for Czech, and our evaluation demonstrates the language independence of the techniques.
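The abstract does not specify how its n-gram matching is modified, but the unmodified idea is to score a candidate substitute by counting the n-grams its sentence shares with reference text. A plain overlap count, as a sketch only (the function names and the `max_n` parameter are illustrative assumptions):

```python
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def ngram_overlap(candidate_tokens, reference_tokens, max_n=3):
    """Count n-grams (n = 1..max_n) shared between a candidate sentence
    (target word replaced by a substitute) and a reference sentence,
    using multiset intersection so repeats are not over-counted."""
    score = 0
    for n in range(1, max_n + 1):
        c = Counter(ngrams(candidate_tokens, n))
        r = Counter(ngrams(reference_tokens, n))
        score += sum((c & r).values())
    return score
```

A substitute that preserves more of the surrounding word sequence thus scores higher than one that disrupts it.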
In this article, we offer two modifications of the modified forward-backward splitting method, based on the inertial Tseng method and the viscosity method, for inclusion problems in real Hilbert spaces. Under standard assumptions, such as Lipschitz continuity and (maximal) monotonicity, we establish weak and strong convergence of the proposed algorithms. We present numerical experiments to show the efficiency and advantages of the proposed methods, and we also apply the proposed algorithms to image deblurring and image recovery problems. Our results extend some related works in the literature, and the preliminary experiments also suggest the methods' potential applicability.
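To illustrate the family of schemes involved (not the paper's exact algorithm): a forward-backward splitting step alternates an explicit gradient ("forward") step with a proximal ("backward") step, and an inertial term extrapolates from the previous iterate. A sketch on the lasso problem, where the backward step is soft-thresholding; the function names and the inertial parameter `theta` are illustrative assumptions:

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 -- the "backward" step.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def inertial_forward_backward(A, b, lam, steps=500, theta=0.3):
    """Forward-backward splitting with a simple inertial extrapolation,
    applied to min_x 0.5 * ||Ax - b||^2 + lam * ||x||_1.
    A basic sketch of the scheme family, not the article's method."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    step = 1.0 / L
    x = x_prev = np.zeros(A.shape[1])
    for _ in range(steps):
        y = x + theta * (x - x_prev)     # inertial extrapolation
        grad = A.T @ (A @ y - b)         # forward (explicit gradient) step
        x_prev, x = x, soft_threshold(y - step * grad, step * lam)
    return x
```

The inertial term reuses momentum from the previous iterate, which is what typically accelerates convergence over the plain forward-backward iteration.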