ISBN (print): 0780386531
We model the range image segmentation problem in the framework of Bayesian inference and Markov random fields, and present an extended ICM algorithm. We introduce an additional estimate (the surface parameter set A) into the basic ICM algorithm and propose using an edge-based segmentation as the initial state for the subsequent energy-minimization procedure. A theoretical and experimental analysis of convergence is given. We show that, in this way, a small amount of computation yields a high-quality segmentation.
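As a rough illustration of the iterative energy minimization behind ICM, the sketch below runs one basic ICM sweep with a simple quadratic data term and a Potts smoothness prior; the paper's extension (the surface parameter set A and the edge-based initial state) is not reproduced, and the class means and the smoothness weight beta are assumed for the toy example.

```python
# A minimal sketch of one basic ICM sweep on a label image (not the paper's
# extended algorithm): quadratic data term plus a Potts prior over 4-neighbours.
import numpy as np

def icm_sweep(labels, observed, means, beta=1.0):
    """One ICM pass: each pixel takes the label that minimises its local energy."""
    h, w = labels.shape
    new = labels.copy()
    for y in range(h):
        for x in range(w):
            best_lab, best_e = new[y, x], np.inf
            for lab, mu in enumerate(means):
                data = (observed[y, x] - mu) ** 2            # data (likelihood) term
                smooth = 0.0                                  # Potts smoothness term
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and new[ny, nx] != lab:
                        smooth += beta
                if data + smooth < best_e:
                    best_lab, best_e = lab, data + smooth
            new[y, x] = best_lab
    return new

# Toy usage: a noisy two-region image, initial labels from thresholding
# (standing in for the edge-based initial segmentation).
rng = np.random.default_rng(0)
img = np.hstack([np.full((8, 4), 1.0), np.full((8, 4), 5.0)]) + rng.normal(0, 0.5, (8, 8))
labels = (img > 3.0).astype(int)
labels = icm_sweep(labels, img, means=[1.0, 5.0], beta=0.8)
```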
We study Archimedean, finite, negative totally ordered monoids and describe an algorithm that generates structures of this type in a step-wise fashion. Our approach benefits from the level-set representation of monoids and is inspired by web geometry.
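To make the objects concrete, here is a small brute-force axiom checker rather than the paper's step-wise generator: given a Cayley table on the chain 0 < 1 < ... < n-1 with the top element as identity, it tests one plausible reading of the axioms (identity, associativity, monotonicity, negativity, and an Archimedean condition); the exact axiom formulation used in the paper may differ.

```python
# Brute-force check of a candidate finite, negative, Archimedean totally ordered
# monoid given as a Cayley table `op` on {0, ..., n-1} with identity n-1.
from itertools import product

def is_negative_archimedean_tomonoid(op):
    n = len(op)
    e = n - 1                                              # identity = top element
    elems = range(n)
    if any(op[a][e] != a or op[e][a] != a for a in elems):
        return False                                       # identity law
    if any(op[a][op[b][c]] != op[op[a][b]][c] for a, b, c in product(elems, repeat=3)):
        return False                                       # associativity
    if any(op[a][b] > op[a][c] or op[b][a] > op[c][a]
           for a, b, c in product(elems, repeat=3) if b <= c):
        return False                                       # compatibility with the order
    if any(op[a][b] > min(a, b) for a, b in product(elems, repeat=2)):
        return False                                       # negativity
    for a in elems:                                        # Archimedean: powers of a < e reach 0
        if a < e:
            p = a
            for _ in range(n):
                p = op[p][a]
            if p != 0:
                return False
    return True

# Example: the 4-element Lukasiewicz chain, a*b = max(a + b - 3, 0).
table = [[max(a + b - 3, 0) for b in range(4)] for a in range(4)]
print(is_negative_archimedean_tomonoid(table))             # True
```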
This paper presents an approach for parallelizing the K-medoid clustering algorithm. The algorithm is divided into tasks, which are mapped onto a multiprocessor system. We present a control structure for expressing the tasks in parallel form and a communication model that provides the mechanism for interaction between these tasks. A data-parallel model is built by decomposing the tasks among the processors. The parallel model was implemented and tested using the SESE academic simulator under Fedora 11 on Linux.
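The sketch below only illustrates the general idea of data-parallel decomposition for K-medoid clustering: the point-to-medoid assignment step is split into chunks processed by a pool of workers. It does not reproduce the paper's task and communication model or its SESE simulation; the chunking scheme and worker count are assumptions.

```python
# Rough data-parallel sketch of the K-medoid assignment step with a process pool.
import numpy as np
from multiprocessing import Pool

def assign_chunk(args):
    """Assign each point of a chunk to its nearest medoid and return the chunk cost."""
    chunk, medoids = args
    d = np.linalg.norm(chunk[:, None, :] - medoids[None, :, :], axis=2)
    return d.argmin(axis=1), d.min(axis=1).sum()

def parallel_assign(points, medoids, n_workers=4):
    chunks = np.array_split(points, n_workers)             # decompose data across workers
    with Pool(n_workers) as pool:
        results = pool.map(assign_chunk, [(c, medoids) for c in chunks])
    labels = np.concatenate([r[0] for r in results])
    return labels, sum(r[1] for r in results)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    pts = rng.normal(size=(1000, 2))
    meds = pts[rng.choice(len(pts), size=3, replace=False)]
    labels, cost = parallel_assign(pts, meds)
    print(labels[:10], cost)
```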
Tridiagonal system solvers are an important kernel in many scientific and engineering applications. Even though quite a few parallel algorithms and implementations have been proposed in recent years, challenges remain when solving large-scale tridiagonal systems on heterogeneous supercomputers. In this paper, a hierarchical algorithm framework SPIKE² (pronounced 'SPIKE squared') is proposed to minimize the parallel overhead and to achieve the best utilization of CPU-GPU hybrid systems. In these systems, a layered and adaptive partitioning based on the SPIKE algorithm is presented to effectively control the sequential parts while efficiently exploiting the overlap of computation and communication within each heterogeneous computing node. Moreover, the SPIKE algorithm is reformulated to reduce the matrix computations to only one third in our hierarchical framework. Meanwhile, an improved implementation of the tiled PCR-pThomas algorithm is employed for the GPU architecture, and the shared-memory usage on the GPU is reduced by one third through careful dependence analysis when solving unit-vector tridiagonal systems. Our experiments on Tianhe-1A show ideal weak scalability on up to 128 nodes when solving a tridiagonal system of size 1920M in the largest run, and good strong scalability (70%) from 32 to 256 nodes when solving a tridiagonal system of size 480M. Furthermore, the adaptive task partitioning across the CPU and GPU yields over a 10% performance improvement in the strong-scaling test with 256 nodes. In one computing node of Tianhe-1A, our GPU-only code outperforms the CUSPARSE version (non-pivoting tridiagonal solver) by 30%, and our hybrid code is about 6.7 times faster than the Intel SPIKE multi-process version for tridiagonal systems of size 3M, 5M, and 15M.
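For readers unfamiliar with the building block behind SPIKE-style partitioning, the sketch below is a plain Thomas-algorithm solve in NumPy; in a SPIKE-type framework each partition is reduced to local solves like this one plus a small coupling system, but the CPU-GPU partitioning and the tiled PCR-pThomas GPU kernel of the paper are not shown here.

```python
# Minimal Thomas algorithm for a tridiagonal system with sub-, main- and
# super-diagonals a, b, c and right-hand side d (a[0] and c[-1] are unused).
import numpy as np

def thomas(a, b, c, d):
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Quick check against a dense solve.
n = 6
a = np.r_[0.0, np.full(n - 1, -1.0)]
b = np.full(n, 4.0)
c = np.r_[np.full(n - 1, -1.0), 0.0]
d = np.arange(1.0, n + 1)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
print(np.allclose(thomas(a, b, c, d), np.linalg.solve(A, d)))   # True
```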
ISBN (print): 9781479939329
We consider the problem of dynamic coalition formation among a set of agents where the value function of the agents is constrained by the size of a coalition: larger coalitions obtain higher value, but only up to a fixed coalition size, denoted by n_max. The objective of the coalition formation problem is to determine a partition of the agent set that gives the highest utility to the agents. This problem is non-trivial, as the set of partitions that needs to be explored grows exponentially with the number of agents, and an exhaustive search in the space of partitions makes the problem intractable. To address this problem, we first provide a formal framework that models the coalition formation problem using a coalition structure graph (CSG). We then propose a branch-and-bound algorithm called Bottom-Up CSG Search that searches for the optimal partitions, or coalition structures, among the nodes of the CSG while pruning nodes that cannot lead to the optimal coalition structure. We provide analytical results on the completeness, anytime nature, and time complexity of our algorithm. We also verify the performance of our algorithm on a dynamic reformation problem in which a set of physical e-puck robots, starting from arbitrary positions, form sub-teams that maximize their utility. Our experimental results show that our proposed algorithm performs better than existing CSG-search algorithms in terms of the number of nodes generated and the time required to find the optimal coalition structure or partition.
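To show the search space the paper's algorithm prunes, the sketch below is a brute-force baseline (not the Bottom-Up CSG Search itself): it enumerates every partition whose coalitions contain at most n_max agents and keeps the highest-value one. The value function used in the example is hypothetical, rewarding coalition size up to n_max.

```python
# Exhaustive baseline: best partition with coalitions of size <= n_max.
from itertools import combinations

def best_partition(agents, value, n_max):
    if not agents:
        return 0.0, []
    first, rest = agents[0], agents[1:]
    best_val, best_cs = float("-inf"), None
    for k in range(min(n_max, len(agents))):       # size of `first`'s coalition is k + 1
        for extra in combinations(rest, k):
            coalition = (first,) + extra
            remaining = [a for a in rest if a not in extra]
            sub_val, sub_cs = best_partition(remaining, value, n_max)
            total = value(coalition) + sub_val
            if total > best_val:
                best_val, best_cs = total, [coalition] + sub_cs
    return best_val, best_cs

# Hypothetical value function: rewards coalition size, saturating at n_max = 3.
value = lambda c: min(len(c), 3) ** 2
print(best_partition(list(range(6)), value, n_max=3))
```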
For the past two decades, fractals (e.g., the Hilbert and Peano space-filling curves) have been considered the natural method for providing a locality-preserving mapping. The idea behind a locality-preserving mapping is to map points that are nearby in the multidimensional space into points that are nearby in the one-dimensional space. We argue against the use of fractals in locality-preserving mapping algorithms, and present examples with experimental evidence to show why fractals produce poor locality-preserving mappings. In addition, we propose an optimal locality-preserving mapping algorithm, termed the spectral locality-preserving mapping algorithm (Spectral LPM, for short), that makes use of the spectrum of the multidimensional space. We give a mathematical proof for the optimality of Spectral LPM, and also demonstrate its practical use.
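The following sketch illustrates the spectral idea on a small 2-D grid: build the grid-graph Laplacian, take the Fiedler vector (the eigenvector of the second-smallest eigenvalue), and sort the cells by it to obtain a one-dimensional ordering. It is only an illustration of using the spectrum of the space; the exact construction of Spectral LPM in the paper may differ.

```python
# Spectral ordering of grid cells via the Fiedler vector of the grid Laplacian.
import numpy as np

def spectral_order(rows, cols):
    n = rows * cols
    idx = lambda r, c: r * cols + c
    L = np.zeros((n, n))
    for r in range(rows):
        for c in range(cols):
            for dr, dc in ((0, 1), (1, 0)):            # 4-neighbour grid edges
                rr, cc = r + dr, c + dc
                if rr < rows and cc < cols:
                    i, j = idx(r, c), idx(rr, cc)
                    L[i, j] = L[j, i] = -1.0
                    L[i, i] += 1.0
                    L[j, j] += 1.0
    vals, vecs = np.linalg.eigh(L)                     # eigenvalues in ascending order
    fiedler = vecs[:, 1]                               # second-smallest eigenvector
    return np.argsort(fiedler)                         # 1-D order of the grid cells

print(spectral_order(4, 3))                            # locality-aware order of the 12 cells
```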
ISBN (print): 9781509020942
This paper applies four centrality measures, the Newman grouping algorithm, the central graph and a depth study to delineate the characteristics of two sets of spatial networks. One dataset contains the spatial networks of three Chinese-Indonesian houses in Surabaya, Indonesia, which are three merchants' houses built in different time periods over the past 100 years. The other dataset contains the spatial networks of six farmers' houses in the village of JinHsing, Taiwan, which were also built in different time periods over the past 100 years. The houses in Surabaya have been adapted for commerce-based urban life. The houses in JinHsing are part of the fabric of an agricultural settlement. The two groups of houses, both rooted in Chinese culture, are differentiated by their natural environments, lifestyles and social conditions. While the appearance, materials, construction and even floor plans of these houses have changed dramatically over the past century, certain patterns in their spatial organization persist. Using in-group and cross-group comparisons, the paper illustrates how the development of the spatial organization of these houses reflects the cultural, social and economic status of the residents and the community.
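As a brief illustration of this kind of analysis, the sketch below computes common centrality measures and a Newman-style modularity grouping on a hypothetical room-adjacency graph; the actual house graphs and the exact four measures used in the paper are not reproduced here.

```python
# Centralities and a modularity-based grouping on a toy room-adjacency graph.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.Graph([("entry", "hall"), ("hall", "shop"), ("hall", "court"),
              ("court", "kitchen"), ("court", "bedroom"), ("bedroom", "altar")])

print(nx.degree_centrality(G))
print(nx.closeness_centrality(G))
print(nx.betweenness_centrality(G))
print(nx.eigenvector_centrality(G, max_iter=1000))
print(list(greedy_modularity_communities(G)))   # modularity grouping of the rooms
```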
Many combinatorial problems can be efficiently solved for partial k-trees (graphs of treewidth bounded by k). The edge-coloring problem is one of the well-known combinatorial problems for which no NC algorithms had been obtained for partial k-trees. This paper gives the first NC parallel algorithm, which is moreover optimal, for finding an edge-coloring of any given partial k-tree using the minimum number of colors when k and the maximum degree Δ are bounded.
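For contrast with the paper's result, here is a simple sequential greedy edge-coloring; it uses at most 2Δ - 1 colors on any graph, whereas the paper's NC algorithm achieves the minimum number of colors on partial k-trees with bounded k and Δ. The example graph is a small, hypothetical partial 2-tree.

```python
# Greedy edge-coloring: each edge takes the smallest color unused at its endpoints.
def greedy_edge_coloring(edges):
    used = {}                                   # vertex -> set of colors on incident edges
    coloring = {}
    for u, v in edges:
        taken = used.setdefault(u, set()) | used.setdefault(v, set())
        c = next(i for i in range(len(taken) + 1) if i not in taken)
        coloring[(u, v)] = c
        used[u].add(c)
        used[v].add(c)
    return coloring

edges = [(0, 1), (1, 2), (0, 2), (2, 3), (1, 3)]   # a small series-parallel graph
print(greedy_edge_coloring(edges))
```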
ISBN (print): 9781424421749
Most clustering algorithms operate by optimizing (either implicitly or explicitly) a single measure of cluster solution quality. Such methods may perform well on some data sets but lack robustness with respect to variations in cluster shape, proximity, evenness and so forth. In this paper, we propose a multiobjective clustering technique that simultaneously optimizes two objectives, one reflecting the total symmetry present in the data set and the other reflecting the stability of the obtained partitions over different bootstrap samples of the data set. The proposed algorithm utilizes a recently developed simulated-annealing-based multiobjective optimization technique, AMOSA, as the underlying optimization method. Here, points are assigned to clusters based on the point-symmetry-based distance rather than the Euclidean distance. Results on several artificial and real-life data sets show that the proposed technique is well suited to detecting the number of clusters in data sets having point-symmetric clusters.
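The sketch below shows one common formulation of the point-symmetry-based distance (the exact variant used in the paper may differ): reflect the point about the candidate center, average the distances from the reflection to its two nearest data points, and weight the Euclidean distance to the center by that average.

```python
# Point-symmetry-based distance of a point to a candidate cluster center.
import numpy as np

def ps_distance(x, centre, data, knear=2):
    reflected = 2.0 * centre - x                     # mirror image of x about the center
    d = np.linalg.norm(data - reflected, axis=1)
    d_sym = np.sort(d)[:knear].mean()                # symmetry term
    return d_sym * np.linalg.norm(x - centre)        # weighted by the Euclidean distance

theta = np.linspace(0, 2 * np.pi, 50)
ring = np.c_[np.cos(theta), np.sin(theta)]           # a point-symmetric cluster
x, centre = ring[0], np.zeros(2)
print(ps_distance(x, centre, ring))                  # small: the ring is symmetric about 0
```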
The Boltzmann distribution is a good candidate for a search distribution for optimization problems. We compare two methods of approximating the Boltzmann distribution: Estimation of Distribution Algorithms (EDAs) and Markov Chain Monte Carlo (MCMC) methods. It turns out that, in the space of binary functions, even blocked MCMC methods outperform EDAs only on a small class of problems. In these cases, a temperature of T = 0 performed best.
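As a minimal example of sampling from the Boltzmann distribution p(x) proportional to exp(f(x)/T) over binary strings, the sketch below uses a single-bit-flip Metropolis sampler; the blocked MCMC variants and the EDAs compared in the paper are not shown, and the OneMax fitness is a stand-in.

```python
# Single-bit-flip Metropolis sampler for the Boltzmann search distribution.
import random, math

def metropolis_boltzmann(f, n_bits, T, steps, seed=0):
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n_bits)]
    for _ in range(steps):
        i = rng.randrange(n_bits)
        y = x.copy()
        y[i] ^= 1                                   # propose a single bit flip
        if T == 0:
            accept = f(y) >= f(x)                   # T = 0 reduces to hill climbing
        else:
            accept = rng.random() < math.exp(min(0.0, (f(y) - f(x)) / T))
        if accept:
            x = y
    return x

onemax = sum                                        # hypothetical fitness: count of ones
print(metropolis_boltzmann(onemax, n_bits=20, T=0.5, steps=2000))
```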