Relevance ranking is key to Web search, determining how results are retrieved and ordered. Because keyword-based search does not guarantee relevance in meaning, semantic search has been put forward as an attractive and promising approach. Several kinds of semantic information have recently been adopted in search individually, such as thesauri, ontologies and semantic markup, as well as folksonomies and social annotations. However, although integrating more semantics should logically yield better search results, a search mechanism that fully exploits different kinds of semantic information is still missing and remains to be researched. To this end, an integrated semantic search mechanism is proposed that combines textual information and keyword search with heterogeneous semantic information and semantic search. A statistics-based measure of semantic relevance, defined as semantic probabilities, is introduced to integrate keywords with four kinds of semantic information: thesauri, categories, ontologies and folksonomies. It is calculated from all textual and semantic information and stored in a newly proposed index structure called the semantic-keyword dual index. Based on this uniform measure, a search mechanism is developed that fully reuses existing keyword and semantic search mechanisms to support heterogeneous semantic search. Experiments show that the proposed approach can effectively integrate both keyword-based information and heterogeneous semantic information in search.
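The abstract does not spell out the layout of the semantic-keyword dual index or how its semantic probabilities are estimated. The following minimal sketch only illustrates the general idea of an index that stores both keyword and semantic postings and ranks documents by a mixed relevance score; the mixing weights, normalisation, and toy data are assumptions for illustration, not the paper's method.

```python
# Illustrative sketch of a dual index holding keyword and semantic postings.
# Weights, normalisation, and data below are assumed for illustration only.
from collections import defaultdict

class DualIndex:
    def __init__(self, keyword_weight=0.6, semantic_weight=0.4):
        self.kw_postings = defaultdict(dict)   # keyword  -> {doc_id: weight}
        self.sem_postings = defaultdict(dict)  # sem term -> {doc_id: weight}
        self.kw_w, self.sem_w = keyword_weight, semantic_weight

    def add_document(self, doc_id, keywords, semantic_terms):
        # Normalise term counts so each posting weight behaves like a probability.
        for term, count in keywords.items():
            self.kw_postings[term][doc_id] = count / sum(keywords.values())
        for term, count in semantic_terms.items():
            self.sem_postings[term][doc_id] = count / sum(semantic_terms.values())

    def search(self, query_keywords, query_semantic_terms):
        scores = defaultdict(float)
        for term in query_keywords:
            for doc_id, w in self.kw_postings.get(term, {}).items():
                scores[doc_id] += self.kw_w * w
        for term in query_semantic_terms:
            for doc_id, w in self.sem_postings.get(term, {}).items():
                scores[doc_id] += self.sem_w * w
        return sorted(scores.items(), key=lambda kv: -kv[1])

index = DualIndex()
index.add_document("d1", {"jaguar": 3, "speed": 1}, {"category:Animal": 2})
index.add_document("d2", {"jaguar": 2, "engine": 2}, {"category:Car": 2})
print(index.search(["jaguar"], ["category:Animal"]))  # d1 ranked above d2
```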
The semantic gap has become a bottleneck of content-based image retrieval. To bridge the gap and improve retrieval accuracy, a mapping from low-level visual features to high-level semantics should be formulated. This paper provides a comprehensive survey of semantic mapping. First, an image retrieval framework integrated with high-level semantics is presented. Second, image semantic description is introduced from two aspects: image content-level models and semantic representations. Third, as the emphasis of this paper, semantic mapping approaches and techniques are investigated by classifying them into four main categories according to their characteristics. The various ideas and models proposed in these approaches are analyzed, and the advantages and limitations of each category are discussed. Finally, based on the state of the art and the demands of real-world applications, several important issues related to semantic image retrieval are identified and some promising research directions are suggested.
As an important step in pattern recognition, pattern feature extraction and selection has received close attention from many scholars and has become one of the research hot spots in the field. Its main purpose is "low-loss dimensionality reduction", and it is generally divided into two parts: linear and nonlinear pattern feature extraction and selection. This article gives a general discussion of pattern feature extraction and selection, introduces the frontier methods of the field, and finally discusses its development trends.
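As one concrete instance of the linear side of this taxonomy, the sketch below shows principal component analysis (PCA), a classic linear feature-extraction method of the kind such surveys cover. It is not claimed to be the article's own method, and the data and dimensions are made up.

```python
# PCA as an example of linear, "low-loss" dimensionality reduction.
import numpy as np

def pca_reduce(X, n_components):
    """Project samples (rows of X) onto the top principal components."""
    X_centered = X - X.mean(axis=0)
    # Eigen-decomposition of the covariance matrix gives the projection axes.
    cov = np.cov(X_centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1][:n_components]  # largest-variance axes first
    return X_centered @ eigvecs[:, order]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))          # 200 samples, 10 original features
Z = pca_reduce(X, n_components=3)       # reduced to 3 features
print(Z.shape)                          # (200, 3)
```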
3D visualization of large-scale virtual crowds is a very important and interesting problem in research fields of virtual reality. For reasons of efficiency and visual realism, it is very difficult to populate, animate and render large-scale virtual crowds with hundreds of thousand individually animated virtual characters in real-time applications;especially for the scene which has more than ten thousands of individuals with different shapes and motions. In this paper, an efficient method to visualize large-scale virtual crowds is presented. Firstly, using model variation technique, many different models can be derived from a small number of model templates. Secondly, the crowds can be animated individually by deforming a small number of elements in motion database. Thirdly, using a developed point sample rendering algorithm, large-scale crowds can be displayed in real-time. This method can be used to visualize different dynamic crowds which require both real-time efficiency and large number of virtual individuals support. Based on this work, an efficient and readily usable 3D visualization system is presented. It can provide very high visual realism for large crowds' visualization in interactive frame rates on a regular PC. The 3D visualization of 60000 people evacuating from a building is also realized in real-time based on the system.
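The abstract only names the model-variation, motion-deformation and point-sample rendering stages. The sketch below illustrates just the template-based instancing idea behind "model variation": each crowd member reuses one of a few templates plus cheap per-instance parameters. The fields and random ranges are assumptions for illustration; the paper's actual variation, motion-database deformation and rendering are not shown.

```python
# Sketch of template-based crowd population: thousands of visually distinct
# agents share a handful of mesh templates and motion clips.
import random
from dataclasses import dataclass

@dataclass
class CrowdInstance:
    template_id: int      # which of the few mesh templates to reuse
    motion_clip: int      # which shared motion clip drives the animation
    scale: float          # per-instance body scale
    tint: tuple           # per-instance colour variation (r, g, b)
    phase_offset: float   # desynchronises identical motion clips

def populate_crowd(n_agents, n_templates=8, n_clips=16, seed=42):
    rng = random.Random(seed)
    return [
        CrowdInstance(
            template_id=rng.randrange(n_templates),
            motion_clip=rng.randrange(n_clips),
            scale=rng.uniform(0.9, 1.1),
            tint=tuple(round(rng.uniform(0.7, 1.0), 2) for _ in range(3)),
            phase_offset=rng.uniform(0.0, 1.0),
        )
        for _ in range(n_agents)
    ]

crowd = populate_crowd(60_000)   # scale of the evacuation scenario in the paper
print(len(crowd), crowd[0])
```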
Topic models have been successfully applied to information classification and retrieval. These models capture word correlations in a collection of textual documents with a low-dimensional set of multinomial distributions, called "topics". It is important but difficult to select an appropriate number of topics for a specific dataset. This paper proposes a theorem stating that the model reaches its optimum when the average similarity among topics reaches its minimum, and, based on this theorem, proposes a density-based method for adaptively selecting the best LDA model. Experiments show that the proposed method matches the performance of the best LDA model without manually tuning the number of topics.
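A hedged sketch of the selection criterion this abstract describes: fit LDA for several topic counts and keep the model whose topics are, on average, least similar to each other. The abstract does not specify the similarity measure or the density-based procedure; cosine similarity, scikit-learn's LDA, and the candidate K values below are assumptions for illustration.

```python
# Pick the number of topics K that minimises average pairwise topic similarity.
import numpy as np
from itertools import combinations
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = ["apple banana fruit", "banana orange fruit", "cpu gpu chip",
        "gpu memory chip", "fruit salad apple", "chip wafer cpu"]
X = CountVectorizer().fit_transform(docs)

def avg_topic_similarity(lda):
    # Row-normalise topic-word weights into distributions, then average
    # pairwise cosine similarity over all topic pairs.
    topics = lda.components_ / lda.components_.sum(axis=1, keepdims=True)
    sims = [np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
            for a, b in combinations(topics, 2)]
    return float(np.mean(sims))

best_k, best_sim = None, float("inf")
for k in (2, 3, 4, 5):
    lda = LatentDirichletAllocation(n_components=k, random_state=0).fit(X)
    sim = avg_topic_similarity(lda)
    if sim < best_sim:
        best_k, best_sim = k, sim
print("selected number of topics:", best_k)
```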
The limit behaviors of computations have not been fully studied. It is necessary to consider such limit behaviors when we consider the properties of infinite objects in computer science, such as infinite logic programs and the symbolic solutions of infinite sets of polynomials. Usually, we can use finite objects to approximate infinite objects, and we should know what kinds of infinite objects are approximable and how to approximate them effectively. A sequence {R_κ : κ ∈ ω} of term rewriting systems has the well limit behavior if, under the condition that the sequence has the set-theoretic limit or the distance-based limit, the sequence {Th(R_κ) : κ ∈ ω} of the corresponding theoretic closures of the R_κ has the set-theoretic or distance-based limit, and lim_{κ→∞} Th(R_κ) is equal to the theoretic closure of the limit of {R_κ : κ ∈ ω}. Two kinds of limits of term rewriting systems are considered: one is based on the set-theoretic limit, the other on the distance-based limit. It is proved that, given a sequence {R_κ : κ ∈ ω} of term rewriting systems R_κ, if there is a well-founded ordering ≺ on terms such that every R_κ is ≺-well-founded, and the set-theoretic limit of {R_κ : κ ∈ ω} exists, then {R_κ : κ ∈ ω} has the well limit behavior; and if (1) there is a well-founded ordering ≺ on terms such that every R_κ is ≺-well-founded, (2) there is a distance d on terms which is closed under substitutions and contexts, and (3) {R_κ : κ ∈ ω} is Cauchy under d, then {R_κ : κ ∈ ω} has the well limit behavior. The results are used to approximate the least Herbrand models of infinite Horn logic programs and real Horn logic programs, and the solutions and Gröbner bases of (infinite) sets of real polynomials by sequences of (finite) sets of rational polynomials.
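For reference, the standard definition of the set-theoretic limit that the abstract relies on, stated for a general sequence of sets A_κ, together with the commutation property that "well limit behavior" demands; the paper's precise formulation may differ in detail.

```latex
% Set-theoretic limit of a sequence of sets A_kappa.
\[
  \liminf_{\kappa\to\infty} A_\kappa = \bigcup_{n\in\omega}\bigcap_{\kappa\ge n} A_\kappa,
  \qquad
  \limsup_{\kappa\to\infty} A_\kappa = \bigcap_{n\in\omega}\bigcup_{\kappa\ge n} A_\kappa,
\]
\[
  \lim_{\kappa\to\infty} A_\kappa = A
  \quad\text{iff}\quad
  \liminf_{\kappa\to\infty} A_\kappa = \limsup_{\kappa\to\infty} A_\kappa = A .
\]
% The well limit behavior requires that taking theoretic closures commutes
% with taking the limit of the rewriting systems:
\[
  \lim_{\kappa\to\infty} \mathrm{Th}(R_\kappa)
  \;=\;
  \mathrm{Th}\Bigl(\lim_{\kappa\to\infty} R_\kappa\Bigr).
\]
```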
ISBN: (Print) 9780819469533
In this paper, a fast global motion estimation method is proposed for video coding. The method accommodates not only a translational motion model but also a polynomial motion model. It speeds up the global motion estimation (GME) procedure by pre-analyzing the characteristics of the blocks. In the first stage, smooth-region blocks, which contribute little to the GME, are filtered out using a threshold method based on image intensity. Next, a threshold method based on the discrepancy of the motion vectors is used to exclude foreground blocks from the GME. The experimental results show that the proposed fast global motion estimation method speeds up the estimation of the motion field while maintaining the coding performance.
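An illustrative sketch of the two-stage block pre-filtering described above, followed by a simple translational estimate. The intensity and discrepancy thresholds, the use of the median motion vector as reference, and the translational-only final step are assumptions for illustration; the paper also supports polynomial motion models, which are not shown here.

```python
# Two-stage block pre-filtering before global motion estimation (sketch).
import numpy as np

def estimate_global_translation(frame, block_mvs, block_size=16,
                                var_thresh=25.0, mv_thresh=2.0):
    """frame: 2D intensity image; block_mvs: (H/bs, W/bs, 2) per-block motion vectors."""
    h_blocks, w_blocks, _ = block_mvs.shape
    kept = []
    for by in range(h_blocks):
        for bx in range(w_blocks):
            block = frame[by*block_size:(by+1)*block_size,
                          bx*block_size:(bx+1)*block_size]
            # Stage 1: drop smooth blocks (low intensity variance) -- they
            # contribute little to global motion estimation.
            if block.var() < var_thresh:
                continue
            kept.append(block_mvs[by, bx])
    kept = np.asarray(kept)
    # Stage 2: drop likely foreground blocks whose motion vector deviates
    # strongly from the dominant (median) motion.
    median_mv = np.median(kept, axis=0)
    inliers = kept[np.linalg.norm(kept - median_mv, axis=1) < mv_thresh]
    # Global translation as the mean of the surviving background vectors.
    return inliers.mean(axis=0)

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(128, 128)).astype(float)
mvs = np.full((8, 8, 2), (1.0, -0.5)) + rng.normal(0, 0.1, size=(8, 8, 2))
print(estimate_global_translation(frame, mvs))   # close to (1.0, -0.5)
```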
This paper describes a novel model using dependency structures on the source side for syntax-based statistical machine translation: Dependency Treelet String Correspondence Model (DTSC). The DTSC model maps source dep...
In the past decade, many papers about granular computing (GrC) have been published, but the key points of GrC are still unclear. In this paper, we try to find the key points of GrC in the informat...
Computational cognitive modeling has recently emerged as one of the hottest issues in AI. Both symbolic and connectionist approaches have their merits and demerits. Although the Bayesian method has been suggested as a way to combine the advantages of both kinds of approaches, no feasible Bayesian computational model covering the entire cognitive process exists so far. In this paper, we propose a variation of the traditional Bayesian network, namely the Globally Connected and Locally Autonomic Bayesian Network (GCLABN), to formally describe a plausible cognitive model. The model adopts a unique knowledge representation strategy that enables it to encode both symbolic concepts and their relationships within a graphical structure, and to generate cognition via a dynamic oscillating process rather than the straightforward reasoning process of traditional approaches. A simple simulation is then employed to illustrate the properties and dynamic behaviors of the model. These traits of the model are consistent with recently discovered properties of the human cognitive process.