Block transform coding using discrete cosine transform is the most popular approach for image compression. However, many annoying blocking artifacts are generated due to coarse quantization on transform coefficients i...
Complex networks are extensively studied in various areas such as social networks, biological networks, and the Internet. These networks share many characteristics such as small diameter, high clustering, and power-law degree distribution. Small-world structure evolved for efficient information transfer, and navigation is an important functional character of networks. Previous research mostly focuses on understanding the navigability of small-world networks by analyzing the diameter and the routing efficiency. In this paper, we use navigability to model the basic structural complexity of a network. That is, given a network topology, we need a model to evaluate how complex the topology is. Many network complexity models have been proposed, but none of them consider the navigability of the network topology. We believe that using the navigability factor to evaluate the network structural complexity is a feasible and reasonable approach. We use the adjacency matrix to build a navigation transition matrix and evaluate the randomness of random walks on the transition matrix by defining a navigation entropy on it. We use iterations of the random-walk matrix to evaluate the navigability of a network and hence its complexity: the higher the navigation entropy, the higher the randomness of a network; the lower the navigation entropy, the more structured the network. We apply the navigation entropy model to a set of structured and random network topologies to show how the model distinguishes networks of different complexity.
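The navigation entropy idea can be sketched as a row-stochastic random walk whose per-node transition distributions are scored by Shannon entropy. The function name, the `steps` parameter, and the simple averaging below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def navigation_entropy(A, steps=1):
    """Average Shannon entropy (bits) of a `steps`-step random walk on a
    graph with adjacency matrix A (assumes every node has degree > 0)."""
    # Row-normalize the adjacency matrix into a transition matrix
    P = A / A.sum(axis=1, keepdims=True)
    # Iterate the walk: k-step transition probabilities
    Pk = np.linalg.matrix_power(P, steps)
    # Entropy of each node's transition distribution (0 * log 0 := 0)
    H = -np.sum(Pk * np.log2(np.where(Pk > 0, Pk, 1.0)), axis=1)
    return float(H.mean())
```

For instance, a 4-cycle yields 1 bit per node while the complete graph K4 yields log2(3) ≈ 1.585 bits, reflecting the less constrained, more random navigation of the denser topology.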
With the success of the Internet, more and more companies have recently started to run web-based businesses. While running e-business sites, many companies have encountered unexpected degradation of their web server applications ...
Searching frequent patterns in transactional databases is considered as one of the most important data mining problems and Apriori is one of the typical algorithms for this task. Developing fast and efficient algorith...
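As a reminder of how Apriori's level-wise search works, here is a minimal sketch with transactions as Python sets and an absolute support count; all names are illustrative, and real implementations add hash-tree counting and other optimizations:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return all itemsets appearing in at least `min_support` transactions."""
    # Frequent 1-itemsets
    items = {frozenset([i]) for t in transactions for i in t}
    freq = []
    level = {s for s in items
             if sum(s <= t for t in transactions) >= min_support}
    while level:
        freq.extend(level)
        k = len(next(iter(level)))
        # Join step: build size-(k+1) candidates from frequent size-k sets
        candidates = {a | b for a in level for b in level if len(a | b) == k + 1}
        # Prune step: every size-k subset must itself be frequent,
        # then verify support against the database
        level = {c for c in candidates
                 if all(frozenset(s) in level for s in combinations(c, k))
                 and sum(c <= t for t in transactions) >= min_support}
    return freq
```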
In cognitive radio (CR) networks, unlicensed secondary users need to conduct spectrum sensing to gain access to a licensed spectrum band, and cooperation among CR users can solve the problems caused by multipath fading and shadowing. In this paper, we propose a multi-threshold method at local nodes to cope with noise of great uncertainty. Functions of the distance between bodies of evidence are used at the fusion centre to make a synthetic judgment. To guarantee security, which is an essential component of basic network functions, we also take into account selfish nodes that try to occupy channels exclusively. The proposed technique shows better performance than conventional algorithms without increasing the system overhead.
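A multi-threshold local detector of the kind the abstract describes can be sketched as a three-way decision on the sensed energy; the two-threshold band and the ternary report format below are illustrative assumptions, not the paper's exact scheme:

```python
def local_decision(energy, low, high):
    """Three-way local sensing decision under noise uncertainty.
    Returns 1 (primary user present), 0 (channel free), or None
    (uncertain -- defer a soft report to the fusion centre)."""
    if energy >= high:
        return 1
    if energy <= low:
        return 0
    return None  # inside the uncertainty band between the two thresholds
```

With a single threshold, samples in the ambiguous band force a hard guess; the band lets the fusion centre weigh uncertain reports separately.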
This paper proposes a method which is not for summarization but for extracting multiple facets from a text according to keyword sets representing readers' interests, so that readers can obtain the facets they are interested in and carry out faceted navigation on the text. A facet is a meaningful combination of subsets of the text. Current text processing technologies are mostly based on text features such as word frequency, sentence location, syntax analysis, and discourse structure. These approaches neglect the cognitive process of human reading. The proposed method takes human reading cognition into account. Experiments show that the facet extraction is effective and robust.
ISBN: (Print) 9781622761715
We present a hierarchical chunk-to-string translation model, which can be seen as a compromise between the hierarchical phrase-based model and the tree-to-string model, combining the merits of the two. With the help of shallow parsing, our model learns rules consisting of words and chunks while introducing syntactic cohesion. Under the weighted synchronous context-free grammar defined by these rules, our model searches for the best translation derivation and yields the target translation simultaneously. Our experiments show that our model significantly outperforms the hierarchical phrase-based model and the tree-to-string model on English-Chinese translation tasks.
We study visual learning models that can work efficiently with little ground-truth annotation and a mass of noisy unlabeled data for large-scale Web image applications, following the subroutine of semi-supervised learning (SSL) that has been deeply investigated in various visual classification tasks. However, most previous SSL approaches are not able to incorporate multiple descriptions to enhance the model capacity. Furthermore, sample selection on unlabeled data was not advocated in previous studies, which may lead to unpredictable risk brought by real-world noisy data corpora. We propose a learning strategy for solving these two problems. As a core contribution, we propose a scalable semi-supervised multiple kernel learning method (S3MKL) to deal with the first problem. The aim is to minimize an overall objective function composed of a log-likelihood empirical loss, a conditional expectation consensus (CEC) on the unlabeled data, and a group LASSO regularization on the model coefficients. We further adapt CEC into a group-wise formulation so as to better deal with the intrinsic visual properties of real-world images. We propose a fast block coordinate gradient descent method with several acceleration techniques for model solution. Compared with previous approaches, our model makes better use of large-scale unlabeled images with multiple feature representations at lower time complexity. Moreover, to reduce the risk of using unlabeled data, we design a multiple kernel hashing scheme to identify an "informative" and "compact" unlabeled training data subset. Comprehensive experiments show that the proposed learning framework provides promising power for real-world image applications, such as image categorization and personalized Web image re-ranking with very little user interaction.
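At the heart of any multiple kernel learning method, including the S3MKL formulation summarized above, is a non-negative linear combination of base kernel Gram matrices. The sketch below shows only that combination step (the simplex normalization is an assumption), not the full CEC/group-LASSO objective:

```python
import numpy as np

def combine_kernels(kernels, weights):
    """Non-negative linear combination of base kernel Gram matrices.
    A group-LASSO penalty, as in the method above, would drive some
    weights to exactly zero, selecting a sparse subset of kernels."""
    w = np.clip(np.asarray(weights, dtype=float), 0.0, None)
    w /= w.sum()  # assumed: weights constrained to the simplex
    return sum(wi * K for wi, K in zip(w, kernels))
```

Each base kernel typically encodes one feature description (e.g., color, texture), which is how multiple descriptions enter a single learner.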
Geographic objects with descriptive text are gaining in prevalence in many web services such as Google Maps. Spatial keyword queries, which combine both location information and textual description, stand out in recent research. Previous works mainly focus on finding the top-k nearest neighbours, where each node has to match the whole set of query keywords. A collective query has been proposed to retrieve a group of objects nearest to the query object such that the group's keywords cover the query's keywords and the group has the shortest inner-object distance. However, the previous method does not consider the density of data objects in the spatial space. In practice, a group of dense data objects around a query point will be more interesting than sparse data objects, and the inner-object distance of a group alone cannot reflect the density of the group. To overcome this shortcoming, we propose an approximate algorithm to process the collective spatial keyword query based on both density and inner distance. The empirical study shows that our algorithm can effectively retrieve the data objects in dense areas.
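For contrast with the density-aware approach, a distance-only greedy baseline for the collective query might look like the following; the tuple layout and names are illustrative, and the paper's algorithm additionally scores density and inner-object distance:

```python
import math

def greedy_collective(query_pt, query_kws, objects):
    """Greedy baseline: repeatedly add the object nearest to the query
    point that covers at least one uncovered query keyword.
    `objects` is a list of (x, y, keyword_set) tuples."""
    uncovered = set(query_kws)
    group = []
    while uncovered:
        best = None
        for obj in objects:
            if obj in group or not (uncovered & obj[2]):
                continue  # already chosen, or contributes nothing
            d = math.dist(query_pt, obj[:2])
            if best is None or d < best[0]:
                best = (d, obj)
        if best is None:
            return None  # query keywords cannot be covered
        group.append(best[1])
        uncovered -= best[1][2]
    return group
```

This baseline can pick an isolated near object over a slightly farther but dense cluster, which is exactly the behaviour the abstract argues against.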
Concept learning in information systems is actually performed in the knowledge granular space of those systems. However, not much attention has been paid to studying such a knowledge granular space and its structure, and its structural characteristics are still poorly understood. In this paper, the granular space is first topologized and decomposed into granular worlds. It is then modeled as a bounded lattice. Finally, using graph theory, the bounded lattice is expressed as a Hasse diagram, so that the mechanism of concept learning in information systems can be explained visually. With the related properties of topological spaces, bounded lattices, and graph theory, the "mysterious" granular space can be delved into more deeply. This work can form a basis for designing concept learning algorithms and can enrich the theory of granular computing.
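The lattice structure mentioned above rests on the refinement partial order between partitions (knowledge granulations); a minimal check of that order, from which a Hasse diagram could be drawn, might be sketched as follows (representation of partitions as lists of blocks is an assumption):

```python
def refines(p, q):
    """True if partition p refines partition q, i.e. every block of p
    is contained in some block of q. This is the partial order under
    which a family of granulations forms a bounded lattice."""
    return all(any(b <= c for c in q) for b in p)
```

A Hasse diagram would then connect p upward to q exactly when `refines(p, q)` holds and no third partition sits strictly between them.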