ISBN (Print): 9781424416301
As a fundamental problem in pattern recognition, graph matching has found a variety of applications in the field of computer vision. In graph matching, patterns are modeled as graphs and pattern recognition amounts to finding a correspondence between the nodes of different graphs. There are many ways in which the problem has been formulated, but most can be cast in general as a quadratic assignment problem, where a linear term in the objective function encodes node compatibility functions and a quadratic term encodes edge compatibility functions. The main research focus in this theme is on designing efficient algorithms for approximately solving the quadratic assignment problem, since it is NP-hard. In this paper, we turn our attention to the complementary problem: how to estimate compatibility functions such that the solution of the resulting graph matching problem best matches the expected solution that a human would manually provide. We present a method for learning graph matching: the training examples are pairs of graphs and the "labels" are matchings between them. We present experimental results with real image data which give evidence that learning can improve the performance of standard graph matching algorithms. In particular, it turns out that linear assignment with such a learning scheme may improve over state-of-the-art quadratic assignment relaxations. This finding suggests that for a range of problems where quadratic assignment was thought to be essential for securing good results, linear assignment, which is far more efficient, could be sufficient if learning is performed. This enables speed-ups of graph matching by up to four orders of magnitude while retaining state-of-the-art accuracy.
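To make the linear-assignment baseline concrete, the hedged sketch below pairs a learned linear node-compatibility score with the Hungarian method via scipy's linear_sum_assignment. The feature map and the weight vector w are illustrative placeholders; the paper's actual procedure for learning w from example matchings is not reproduced here.

```python
# Minimal sketch of a linear-assignment graph matcher: node compatibilities
# are a weighted combination of unary features, and the weights `w` are what
# a learning scheme would fit from example matchings. The feature map is an
# illustrative choice, not the paper's.
import numpy as np
from scipy.optimize import linear_sum_assignment

def node_compatibility(feats_g1, feats_g2, w):
    """Linear compatibility score for every node pair (i in G1, j in G2)."""
    # Here phi(i, j) is the absolute feature difference (an assumption).
    diff = np.abs(feats_g1[:, None, :] - feats_g2[None, :, :])
    return -(diff @ w)  # higher score = more compatible

def match_linear(feats_g1, feats_g2, w):
    score = node_compatibility(feats_g1, feats_g2, w)
    rows, cols = linear_sum_assignment(score, maximize=True)
    return list(zip(rows, cols))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    g1 = rng.normal(size=(5, 8))        # 5 nodes, 8-dim descriptors
    g2 = g1[rng.permutation(5)] + 0.01 * rng.normal(size=(5, 8))
    w = np.ones(8)                      # weights a learner would estimate
    print(match_linear(g1, g2, w))
```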
ISBN (Digital): 9783540734994
ISBN (Print): 9783540734987
In recent years there has been a tremendous increase in the number of users maintaining online blogs on the Internet. Companies, in particular, have become aware of this medium of communication and have taken a keen interest in what is being said about them through such personal blogs. This has given rise to a new field of research directed towards mining useful information from the large amount of unformatted data present in online blogs and online forums. We discuss an implementation of such a blog mining application. The application is broadly divided into two parts, the indexing process and the search module. Blogs pertaining to different organizations are fetched from a particular blog domain on the Internet. After their textual content is analyzed, the blogs are assigned a sentiment rating. Specific data from these blogs, along with their sentiment ratings, are then indexed on the physical hard drive. The search module searches through these indexes at run time for the input organization name and produces a list of blogs conveying both positive and negative sentiments about the organization.
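A toy sketch of that two-part design follows: an indexing pass that assigns each blog post a lexicon-based sentiment score, and a search module that returns posts mentioning an organization, split by polarity. The lexicon, the scoring rule, and the in-memory "index" are illustrative stand-ins for the on-disk indexes and sentiment analysis used in the paper.

```python
# Simplified blog-mining pipeline: index posts with a sentiment score, then
# search by organization name and split hits by polarity.
from collections import defaultdict

POSITIVE = {"good", "great", "excellent", "love"}
NEGATIVE = {"bad", "poor", "terrible", "hate"}

def sentiment(text):
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def build_index(posts):
    """posts: list of (post_id, text). Returns term -> [(post_id, score)]."""
    index = defaultdict(list)
    for post_id, text in posts:
        score = sentiment(text)
        for term in set(text.lower().split()):
            index[term].append((post_id, score))
    return index

def search(index, organization):
    hits = index.get(organization.lower(), [])
    positive = [pid for pid, s in hits if s > 0]
    negative = [pid for pid, s in hits if s < 0]
    return positive, negative

if __name__ == "__main__":
    posts = [(1, "Acme support was great"), (2, "Acme shipping is terrible")]
    print(search(build_index(posts), "Acme"))
```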
ISBN (Print): 9781424410651
An approximate inference approach for Credal networks based on Ant Colony Algorithms is put forward. Considering goal-oriented network inference in Bayesian networks for the variables the decision-maker is interested in, given some evidence, the paper presents an algorithm for acquiring an equivalent goal-oriented Credal network structure. The selection of vertexes in the Credal network is treated as a multistage decision-making problem. Based on this, Ant Colony Algorithms are applied to approximate Credal network inference, reusing the high-probability vertexes of each variable in order to improve the efficiency of the inference algorithm and avoid unnecessary computation. Finally, the validity of the approach is demonstrated by a simple analysis of a complex Credal network model.
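The sketch below shows a generic ant-colony selection loop of the kind the abstract alludes to: each ant builds a multistage selection guided by per-stage pheromone, and pheromone is reinforced for high-scoring selections. The score function here is a toy placeholder; the paper's actual Credal-network inference objective is not reproduced.

```python
# Generic ant-colony skeleton for a multistage selection problem.
import random

def ant_colony_select(n_stages, n_options, score, n_ants=20, n_iters=50,
                      evaporation=0.1):
    pheromone = [[1.0] * n_options for _ in range(n_stages)]
    best, best_score = None, float("-inf")
    for _ in range(n_iters):
        for _ in range(n_ants):
            # Build one candidate selection stage by stage.
            choice = [random.choices(range(n_options), weights=pheromone[s])[0]
                      for s in range(n_stages)]
            value = score(choice)
            if value > best_score:
                best, best_score = choice, value
        # Evaporate, then reinforce the best selection found so far.
        for stage in range(n_stages):
            pheromone[stage] = [(1 - evaporation) * p for p in pheromone[stage]]
            pheromone[stage][best[stage]] += best_score
    return best, best_score

if __name__ == "__main__":
    # Toy score: prefer option index equal to the stage index modulo 3.
    toy = lambda choice: sum(1.0 for s, c in enumerate(choice) if c == s % 3)
    print(ant_colony_select(n_stages=5, n_options=3, score=toy))
```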
ISBN (Print): 9781424410651
Non-negative matrix factorization (NMF) as a part-based representation method allows only additive combinations of non-negative basis components to represent the original data, so it provides a realistic approximation to the original data. However, NMF does not work well when directly applied to face recognition due to its global linear decomposition; this intuitively results in degraded recognition performance and a lack of robustness to variations in illumination, expression and occlusion. In this paper, we propose a robust method, random subspace sub-pattern NMF (RS-SpNMF), especially for face recognition. Unlike the traditional random subspace method (RSM), which selects features completely at random from the whole original pattern feature set, the proposed method randomly samples features from each local region (or sub-image) partitioned from the original face image and performs NMF decomposition on each sampled feature set. More specifically, we first divide a face image into several sub-images in a deterministic way, then construct a component classifier on the sampled feature subset from each sub-image set, and finally combine all of the component classifiers for the final decision. Experiments on three benchmark face databases (ORL, Yale and AR) show that the proposed method is effective, especially for occluded face images.
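The sketch below compresses that pipeline into a few functions: a deterministic partition of each face into sub-images, random sampling of pixels within each sub-image, an NMF decomposition per sampled subset, a nearest-neighbour component classifier per subset, and majority voting. The block size, sampling rate, and base classifier are illustrative assumptions; images are assumed to be non-negative pixel arrays with integer class labels.

```python
# Random-subspace sub-pattern NMF, schematically.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.neighbors import KNeighborsClassifier

def split_blocks(images, block=16):
    """images: (n, H, W) -> list of (n, block*block) sub-pattern matrices."""
    n, H, W = images.shape
    blocks = []
    for r in range(0, H - block + 1, block):
        for c in range(0, W - block + 1, block):
            blocks.append(images[:, r:r + block, c:c + block].reshape(n, -1))
    return blocks

def fit_rs_spnmf(images, labels, n_components=10, sample_rate=0.5, seed=0):
    rng = np.random.default_rng(seed)
    ensemble = []
    for X in split_blocks(images):
        # Randomly sample a feature subset inside this sub-image.
        idx = rng.choice(X.shape[1], int(sample_rate * X.shape[1]), replace=False)
        nmf = NMF(n_components=n_components, init="nndsvda", max_iter=500)
        coeffs = nmf.fit_transform(X[:, idx])     # NMF coefficients per image
        clf = KNeighborsClassifier(1).fit(coeffs, labels)
        ensemble.append((idx, nmf, clf))
    return ensemble

def predict_rs_spnmf(ensemble, images, n_classes):
    votes = np.zeros((images.shape[0], n_classes))
    for (idx, nmf, clf), X in zip(ensemble, split_blocks(images)):
        pred = clf.predict(nmf.transform(X[:, idx]))
        votes[np.arange(len(pred)), pred] += 1    # majority vote across blocks
    return votes.argmax(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Tiny synthetic stand-in for a face dataset: 20 images, 32x32, 2 classes.
    faces = rng.random((20, 32, 32))
    labels = np.repeat([0, 1], 10)
    model = fit_rs_spnmf(faces, labels, n_components=5)
    print(predict_rs_spnmf(model, faces, n_classes=2))
```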
ISBN (Digital): 9783540734994
ISBN (Print): 9783540734987
Outlier detection has recently become an important problem in many industrial and financial applications. In this paper, a novel unsupervised algorithm for outlier detection with a solid statistical foundation is proposed. First, we modify a nonparametric density estimate with a variable kernel to yield a robust local density estimate. Outliers are then detected by comparing the local density of each point to the local density of its neighbors. Our experiments performed on several simulated data sets demonstrate that the proposed approach can outperform two widely used outlier detection algorithms (LOF and LOCI).
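A minimal sketch of the density-comparison idea, assuming a Gaussian kernel whose bandwidth adapts to the k-th nearest-neighbour distance and a simple ratio threshold; the paper's exact variable-kernel estimator is not reproduced here.

```python
# Flag points whose variable-kernel local density is much lower than the
# average density of their neighbours.
import numpy as np
from scipy.spatial import cKDTree

def variable_kernel_outliers(X, k=10, ratio_threshold=3.0):
    tree = cKDTree(X)
    dists, idx = tree.query(X, k=k + 1)       # the first neighbour is the point itself
    dists, idx = dists[:, 1:], idx[:, 1:]
    bandwidth = dists[:, -1]                   # distance to the k-th neighbour
    # Local density: average Gaussian kernel over the k neighbours.
    density = np.mean(np.exp(-(dists / bandwidth[:, None]) ** 2), axis=1) / bandwidth
    neighbour_density = density[idx].mean(axis=1)
    outlier_score = neighbour_density / density
    return outlier_score > ratio_threshold, outlier_score

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(size=(200, 2)), [[8.0, 8.0]]])  # one far outlier
    flags, scores = variable_kernel_outliers(X)
    print(np.where(flags)[0])   # the isolated point should be flagged
```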
ISBN (Digital): 9783540742609
ISBN (Print): 9783540742586
In this paper, we introduce new adaptive learning algorithms to extract linear discriminant analysis (LDA) features from multidimensional data in order to reduce the dimension of the data space. For this purpose, new adaptive algorithms for the computation of the square root of the inverse covariance matrix, Sigma^(-1/2), are introduced. The proof of convergence of the new adaptive algorithm is given by presenting the related cost function and discussing its initial conditions. The new adaptive algorithms are used before an adaptive principal component analysis algorithm in order to construct an adaptive multivariate multi-class LDA algorithm. The adaptive nature of the new optimal feature extraction method makes it appropriate for on-line pattern recognition applications. Both adaptive algorithms in the proposed structure are trained simultaneously, using a stream of input data. Experimental results using synthetic and real multi-class, multi-dimensional data sequences demonstrate the effectiveness of the new adaptive feature extraction algorithm.
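One commonly cited adaptive rule of this general form, shown purely for illustration, is the stochastic recursion W <- W + eta (I - W x x^T W) over a stream of zero-mean samples x; its fixed point satisfies W Sigma W = I, i.e. W = Sigma^(-1/2). Whether this coincides with the paper's exact algorithm is not claimed; the sketch only demonstrates the mechanics of an online Sigma^(-1/2) estimate.

```python
# Online estimate of the inverse square root of a covariance matrix.
import numpy as np

def adaptive_inv_sqrt_cov(samples, eta=0.01):
    d = samples.shape[1]
    W = np.eye(d)
    for x in samples:
        x = x[:, None]                       # column vector
        W = W + eta * (np.eye(d) - W @ x @ x.T @ W)
    return W

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])
    L = np.linalg.cholesky(Sigma)
    stream = (L @ rng.normal(size=(2, 20000))).T     # zero-mean samples
    W = adaptive_inv_sqrt_cov(stream)
    # Compare against the batch inverse square root of the sample covariance.
    vals, vecs = np.linalg.eigh(np.cov(stream.T))
    print(np.round(W, 2))
    print(np.round(vecs @ np.diag(vals ** -0.5) @ vecs.T, 2))
```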
ISBN (Print): 9781424410651
Based on all-phase theory, this paper designs three kinds of true 2-D all-phase filter banks (true 2-D APFBs), which can be used to decompose and reconstruct image data directly in true 2-D. If the quantization error of the filters is ignored, the true 2-D APFBs have the perfect reconstruction property. To reduce computation, they are implemented with the lifting scheme. Simulations show that true 2-D APFBs have good data compression properties. At the same compression rate, the PSNR of IDCT_AFB7.7 is at most 0.7 dB lower than that of the Daubechies 9/7 wavelet, because the true 2-D APFBs adopt the quadtree SPIHT coding method, which is suited to separable 2-D wavelet transforms. For true 2-D filter banks, binary-tree SPIHT coding should be adopted to get better compression performance.
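The lifting implementation mentioned above reduces computation by factoring a filter bank into predict and update steps. The sketch below illustrates the lifting mechanics on the much simpler LeGall 5/3 wavelet, chosen only as a stand-in; it is not the all-phase filter bank designed in the paper, but it does show the perfect reconstruction that the lifting structure gives by construction.

```python
# One level of the 5/3 lifting transform: predict the odd samples from the
# even ones, then update the even samples; the inverse simply undoes the
# steps in reverse order, so reconstruction is exact.
import numpy as np

def lifting_53_forward(x):
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    # Predict: remove from the odd samples what the even neighbours explain.
    d = odd - 0.5 * (even + np.roll(even, -1))
    # Update: adjust the even samples to form the low-pass band.
    s = even + 0.25 * (d + np.roll(d, 1))
    return s, d          # approximation and detail coefficients

def lifting_53_inverse(s, d):
    even = s - 0.25 * (d + np.roll(d, 1))
    odd = d + 0.5 * (even + np.roll(even, -1))
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x

if __name__ == "__main__":
    x = np.arange(16, dtype=float)
    s, d = lifting_53_forward(x)
    print(np.allclose(lifting_53_inverse(s, d), x))   # perfect reconstruction
```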
ISBN (Digital): 9783540734994
ISBN (Print): 9783540734987
CoreWar is a computer simulation in which two programs written in an assembly language called redcode compete in a virtual memory array. These programs are referred to as warriors. Over more than twenty years of development a number of different battle strategies have emerged, making it possible to identify different warrior types. Systems for automatic warrior creation appeared more recently, evolvers being the dominant kind. This paper describes an attempt to analyze the output of the CCAI evolver, and explores the possibilities for performing automatic categorization by warrior type using representations based on redcode source, as opposed to instruction execution frequency. The analysis was performed using EM clustering, as well as information gain and gain ratio attribute evaluators, and revealed that mainly brute-force types of warriors were being generated. This, along with the observed correlation between the clustering and the workings of the evolutionary algorithm, justifies our approach and calls for more extensive experiments based on annotated warrior benchmark collections.
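As a schematic of source-based categorization, the sketch below counts redcode opcodes in each warrior's source, normalizes the counts into a feature vector, and clusters the vectors with EM via scikit-learn's GaussianMixture. The opcode list, feature choice, and toy warriors are illustrative only, not the paper's feature set or evolver output.

```python
# Cluster warriors by relative opcode frequencies in their redcode source.
import numpy as np
from sklearn.mixture import GaussianMixture

OPCODES = ["mov", "add", "sub", "jmp", "spl", "dat", "djn", "cmp"]

def source_features(source):
    """Relative opcode frequencies in one warrior's redcode source."""
    tokens = source.lower().split()
    counts = np.array([tokens.count(op) for op in OPCODES], dtype=float)
    return counts / max(counts.sum(), 1.0)

def cluster_warriors(sources, n_clusters=3, seed=0):
    X = np.vstack([source_features(s) for s in sources])
    gmm = GaussianMixture(n_components=n_clusters, covariance_type="diag",
                          random_state=seed)
    return gmm.fit_predict(X)          # EM clustering of the feature vectors

if __name__ == "__main__":
    warriors = ["mov 0 1 mov 0 1 spl 2", "dat 0 0 add 4 3 jmp -2", "spl 0 mov 0 1"]
    print(cluster_warriors(warriors, n_clusters=2))
```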
ISBN (Digital): 9783540734994
ISBN (Print): 9783540734987
Advances in wireless and mobile technology flood us with amounts of moving object data that preclude all means of manual data processing. The volume of data gathered from position sensors of mobile phones, PDAs, or vehicles defies human ability to analyze the stream of input data. On the other hand, these vast amounts of gathered data hide interesting and valuable knowledge patterns describing the behavior of moving objects. Thus, new algorithms for mining moving object data are required to unearth this knowledge. An important function of a mobile object management system is the prediction of the unknown location of an object. In this paper we introduce a data mining approach to the problem of predicting the location of a moving object. We mine the database of moving object locations to discover frequent trajectories and movement rules. Then, we match the trajectory of a moving object with the database of movement rules to build a probabilistic model of the object's location. Experimental evaluation of the proposal reveals prediction accuracy close to 80%. Our original contribution includes the elaboration of the location prediction model, the design of an efficient mining algorithm, the introduction of movement rule matching strategies, and a thorough experimental evaluation of the proposed model.
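The rule-matching step can be pictured as below: movement rules are (trajectory prefix -> next region, confidence) triples mined offline, and the predicted location is a distribution over regions built from the rules whose antecedent matches the tail of the object's current trajectory. The rules, region names, and the specificity weighting are assumptions for illustration, not the paper's mining algorithm or matching strategies.

```python
# Match a trajectory against movement rules to get a location distribution.
from collections import defaultdict

RULES = [
    (("A", "B"), "C", 0.7),   # objects going A -> B continue to C 70% of the time
    (("A", "B"), "D", 0.3),
    (("B",), "C", 0.5),
    (("B",), "E", 0.5),
]

def predict_location(trajectory, rules):
    """Weight each candidate region by the confidence of matching rules,
    preferring longer (more specific) antecedents."""
    scores = defaultdict(float)
    for antecedent, region, confidence in rules:
        if tuple(trajectory[-len(antecedent):]) == antecedent:
            scores[region] += confidence * len(antecedent)
    total = sum(scores.values())
    return {r: s / total for r, s in scores.items()} if total else {}

if __name__ == "__main__":
    print(predict_location(["X", "A", "B"], RULES))
```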
ISBN (Digital): 9783540734994
ISBN (Print): 9783540734987
Data perturbation with random noise signals has been shown to be useful for data hiding in privacy-preserving data mining. Perturbation methods based on additive randomization allow accurate estimation of the Probability Density Function (PDF) via the Expectation-Maximization (EM) algorithm, but it has been shown that noise-filtering techniques can be used to reconstruct the original data in many cases, leading to security breaches. In this paper, we propose a generic PDF reconstruction algorithm that can be used on non-additive (and additive) randomization techniques for the purpose of privacy-preserving data mining. This two-step reconstruction algorithm is based on Parzen-Window reconstruction and Quadratic Programming over a convex set - the probability simplex. Our algorithm eliminates the usual need for the iterative EM algorithm and is generic for most randomization models. The simplicity of our two-step reconstruction algorithm, without iteration, also makes it attractive for use when dealing with large datasets.
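A stripped-down sketch of the two-step idea: (1) a Parzen-window (Gaussian KDE) estimate of the perturbed data's density on a grid, followed by (2) a quadratic program over the probability simplex that inverts an assumed known randomization channel A, where A[i, j] approximates P(perturbed bin i | original bin j). The channel, grid, and toy data are assumptions for illustration; they are not the randomization models analyzed in the paper.

```python
# Two-step PDF reconstruction: Parzen-window estimate, then a QP on the simplex.
import numpy as np
from scipy.stats import gaussian_kde
from scipy.optimize import minimize

def reconstruct_pdf(perturbed_samples, A, bin_centers):
    # Step 1: Parzen-window (Gaussian KDE) estimate of the perturbed density.
    kde = gaussian_kde(perturbed_samples)
    q = kde(bin_centers)
    q = q / q.sum()
    # Step 2: QP over the simplex: min ||A p - q||^2, p >= 0, sum(p) = 1.
    n = A.shape[1]
    res = minimize(lambda p: np.sum((A @ p - q) ** 2),
                   x0=np.full(n, 1.0 / n),
                   bounds=[(0.0, None)] * n,
                   constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1.0}],
                   method="SLSQP")
    return res.x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    bins = np.linspace(-3, 3, 25)
    # Toy channel: additive Gaussian noise with sigma = 0.4, discretized on the grid.
    A = np.exp(-(bins[:, None] - bins[None, :]) ** 2 / (2 * 0.4 ** 2))
    A = A / A.sum(axis=0, keepdims=True)
    original = rng.normal(0.0, 0.8, size=2000)
    perturbed = original + rng.normal(0.0, 0.4, size=2000)
    print(np.round(reconstruct_pdf(perturbed, A, bins), 3))
```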