Multidimensional scaling (MDS) is a widely used technique for mapping data from a high-dimensional to a lower-dimensional space and for visualizing data. Recently, a new method, known as Geometric MDS, has been developed to minimize the MDS stress function by an iterative procedure, in which the coordinates of a particular point of the projected space are moved to a new position defined analytically. Such a change in position is easily interpreted geometrically. Moreover, the coordinates of points of the projected space may be recalculated simultaneously, i.e. in parallel, independently of each other. This paper has several objectives. Two implementations of Geometric MDS are suggested and analysed experimentally. The parallel implementation of Geometric MDS is developed for multithreaded multi-core processors. The sequential implementation is optimized for computational speed, enabling it to solve large data problems; it is compared with the SMACOF version of MDS. Python codes for both Geometric MDS and SMACOF are presented to highlight the differences between the two implementations. The comparison covers several aspects: the relative performance of Geometric MDS and SMACOF depending on the projection dimension, data size and computation time. Geometric MDS usually finds lower stress when the dimensionality of the projected space is smaller.
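The analytic update at the heart of Geometric MDS can be sketched as follows. This is a minimal NumPy reading of the abstract, not the authors' code: we assume the usual Geometric MDS step, in which point y_i is moved to the mean of the positions lying at the original dissimilarity d_ij from every other projected point y_j, along the direction from y_j to y_i (the names `geometric_mds_sweep` and `stress` are ours).

```python
import numpy as np

def geometric_mds_sweep(Y, D):
    """One sweep of analytic single-point moves over all projected points.

    Y : (n, p) current low-dimensional coordinates (updated in place)
    D : (n, n) matrix of original high-dimensional dissimilarities
    """
    n = Y.shape[0]
    for i in range(n):
        diffs = Y[i] - Y                      # vectors y_i - y_j, shape (n, p)
        dists = np.linalg.norm(diffs, axis=1)
        dists[i] = 1.0                        # avoid division by zero for j == i
        # "target" positions: at distance D[i, j] from y_j, toward y_i
        targets = Y + D[i][:, None] * diffs / dists[:, None]
        mask = np.arange(n) != i
        Y[i] = targets[mask].mean(axis=0)     # move y_i to the mean target
    return Y

def stress(Y, D):
    """Raw MDS stress: sum of squared pairwise distance errors."""
    dY = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
    return float(np.sum(np.triu(dY - D, 1) ** 2))

# toy run: embed 6 points with 3-D-derived dissimilarities into the plane
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 3))
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
Y = rng.normal(size=(6, 2))
s0 = stress(Y, D)
for _ in range(50):
    Y = geometric_mds_sweep(Y, D)
s1 = stress(Y, D)
```

Because each point's new position depends only on the current positions of the others, the per-point moves can also be dispatched to separate threads, which is the parallelization opportunity the abstract describes.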
ISBN:
(Print) 9781467364195
Ant colony optimization (ACO) is a successful swarm intelligence method for solving various combinatorial optimization problems. It uses a population-based meta-heuristic inspired by the foraging behavior of real ant colonies, in which ants communicate indirectly with each other through pheromones. As the scale of the problem increases, ACO requires much more time and resources to solve the optimization problem. Two main approaches to this bottleneck can be used: distributed implementations and parallel implementations. The rapid development of computer architecture makes parallel implementation platforms easily accessible through multi-core processors. In this paper, we present the performance increase of two main ACO algorithms on multi-core processors with parallel programming. Parallelization is done on a single ant colony by using the Java thread programming approach with minimal communication and coordination between threads. The paper also outlines future work on this topic.
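The coarse structure of the single-colony parallelization described above can be sketched in Python (the paper uses Java threads; this `ThreadPoolExecutor` version, with hypothetical names such as `aco_tsp`, only illustrates the pattern of independent ants synchronizing at a pheromone-update barrier; in CPython, real speedup would require processes or a GIL-free runtime):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def tour_length(tour, dist):
    return sum(dist[tour[k], tour[(k + 1) % len(tour)]] for k in range(len(tour)))

def construct_tour(rng, pher, dist, alpha=1.0, beta=2.0):
    """One ant builds a TSP tour using pheromone and inverse-distance weights."""
    n = len(dist)
    current = int(rng.integers(n))
    tour, visited = [current], {current}
    while len(tour) < n:
        i = tour[-1]
        cand = [j for j in range(n) if j not in visited]
        w = np.array([pher[i, j] ** alpha / dist[i, j] ** beta for j in cand])
        nxt = cand[int(rng.choice(len(cand), p=w / w.sum()))]
        tour.append(nxt)
        visited.add(nxt)
    return tour

def aco_tsp(dist, n_ants=8, n_iter=20, rho=0.5, seed=1):
    n = len(dist)
    pher = np.ones((n, n))
    rngs = [np.random.default_rng(seed + a) for a in range(n_ants)]
    best, best_len = None, float("inf")
    with ThreadPoolExecutor(max_workers=4) as pool:
        for _ in range(n_iter):
            # all ants build tours concurrently against a frozen pheromone matrix
            tours = list(pool.map(lambda r: construct_tour(r, pher, dist), rngs))
            pher *= (1.0 - rho)  # evaporation at the synchronization barrier
            for t in tours:
                length = tour_length(t, dist)
                if length < best_len:
                    best, best_len = t, length
                for k in range(n):
                    a, b = t[k], t[(k + 1) % n]
                    pher[a, b] += 1.0 / length
                    pher[b, a] += 1.0 / length
    return best, best_len

# toy instance: 6 cities on a unit circle; the optimal tour (length 6) visits
# them in circular order
angles = 2 * np.pi * np.arange(6) / 6
pts = np.stack([np.cos(angles), np.sin(angles)], axis=1)
dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
best, best_len = aco_tsp(dist)
```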
ISBN:
(Print) 9783319946498; 9783319946481
The performance of an application can be significantly improved by using parallelization, as well as by defining micro-services which allow the distribution of the work into several independent tasks. In this paper, we show how a micro-service architecture can be used to develop an efficient and flexible application for the nearest neighbor classification problem. Several dissimilarity measures are compared, in terms of both accuracy and computational time, for sequential as well as parallel executions. In addition, a web-based interface was developed to facilitate interaction with the user and to easily monitor the progress of the experiments.
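A minimal sketch of nearest-neighbor classification with pluggable dissimilarity measures and chunk-parallel queries, assuming nothing about the paper's micro-service code (all names here, e.g. `nn_classify`, are ours):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# three interchangeable dissimilarity measures: each maps a query vector x and
# a training matrix B to a vector of dissimilarities
def euclidean(x, B):
    return np.linalg.norm(B - x, axis=1)

def manhattan(x, B):
    return np.abs(B - x).sum(axis=1)

def cosine_dissim(x, B):
    eps = 1e-12  # guards against zero-norm vectors
    return 1.0 - (B @ x) / (np.linalg.norm(B, axis=1) * np.linalg.norm(x) + eps)

def nn_classify(X_train, y_train, X_test, dissim, workers=2):
    """1-NN labels for X_test; queries are split into chunks handled in parallel."""
    def classify_chunk(chunk):
        return [y_train[int(np.argmin(dissim(x, X_train)))] for x in chunk]
    chunks = np.array_split(X_test, workers)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(classify_chunk, chunks)
    return [label for part in parts for label in part]

# toy data: two well-separated classes
X_train = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]])
y_train = np.array([0, 0, 1, 1])
X_test = np.array([[0.2, 0.5], [0.1, 0.2], [9.8, 10.5], [10.2, 10.2]])
pred_e = nn_classify(X_train, y_train, X_test, euclidean)
pred_m = nn_classify(X_train, y_train, X_test, manhattan)
```

Because each query chunk is independent, the same structure maps directly onto separate micro-service workers.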
ISBN:
(Print) 9783319624105
Automatic pattern recognition is often based on similarity measures between objects which are sometimes represented as high-dimensional feature vectors - for instance, raw digital signals or high-resolution spectrograms. Depending on the application, when feature vectors become extremely long, computing the similarity measure might become impractical or even prohibitive. Fortunately, multi-core computer architectures are widely available nowadays and can be efficiently exploited to speed up computations of similarity measures. In this paper, a block-separable version of the so-called Weighted Distribution Matching similarity measure is presented. This measure was recently proposed but had not previously been analyzed for parallel implementation. Our analysis shows that this similarity measure can be easily decomposed into subproblems such that its parallel implementation provides a significant acceleration in comparison with its corresponding serial version. Both implementations are presented as Python programs for the sake of readability of the codes and reproducibility of the experiments.
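The abstract does not give the Weighted Distribution Matching formula itself, so as a generic illustration of block separability we assume a similarity that decomposes into a sum of per-block terms (histogram intersection is used here as a stand-in term); the point is only that the parallel, per-block evaluation reproduces the serial result exactly:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def intersection(xb, yb):
    # per-block partial similarity (a stand-in term, not the WDM formula)
    return float(np.minimum(xb, yb).sum())

def block_similarity(x, y, block_size, pool):
    """Evaluate a block-separable similarity by summing per-block partials
    computed concurrently."""
    blocks = [(x[i:i + block_size], y[i:i + block_size])
              for i in range(0, len(x), block_size)]
    partials = pool.map(lambda xy: intersection(*xy), blocks)
    return sum(partials)

rng = np.random.default_rng(3)
x = rng.random(1000)   # long feature vectors
y = rng.random(1000)
serial = float(np.minimum(x, y).sum())          # reference serial value
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = block_similarity(x, y, 128, pool)
```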
ISBN:
(Print) 9781509049516
Artificial neural networks (ANNs) have been widely used in the analysis of remotely sensed imagery. In particular, convolutional neural networks (CNNs) are gaining more and more attention. Unlike traditional CNN methods, where the information relevant to classifying the elements of a remotely sensed image is extracted only from the last fully-connected layer, the new adaptive deep pyramid matching (ADPM) model [1] takes advantage of the features from all of the convolutional layers. This model allows the optimal fusing weights for the different convolutional layers to be learned from the data itself. In addition, combining CNNs with spatial pyramid pooling (SPP-net) to create the basic deep network allows the use of images at multiple scales, which results in a better learning process thanks to the complementary information. The original ADPM method is divided into two parts: the multi-scale deep feature extraction and the ADPM core. In this paper we present a computational improvement of the ADPM core, implementing a parallel multi-core version. This strategy is shown to significantly enhance performance in the analysis of remotely sensed data.
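The spatial pyramid pooling idea mentioned above, which yields a fixed-length descriptor from convolutional feature maps of any spatial size, can be sketched as follows (a generic max-pooling version with our own names, not the ADPM authors' code):

```python
import numpy as np

def spatial_pyramid_pool(feature_map, levels=(1, 2, 4)):
    """Max-pool a (C, H, W) conv feature map over l x l grids and concatenate.

    The output length is C * sum(l * l for l in levels), independent of H, W,
    which is what lets SPP-style networks accept images at multiple scales.
    """
    C, H, W = feature_map.shape
    pooled = []
    for l in levels:
        row_bins = np.array_split(np.arange(H), l)
        col_bins = np.array_split(np.arange(W), l)
        for rb in row_bins:
            for cb in col_bins:
                # max over one spatial cell, one value per channel
                pooled.append(feature_map[:, rb][:, :, cb].max(axis=(1, 2)))
    return np.concatenate(pooled)

# feature maps of different spatial sizes yield descriptors of the same length
v1 = spatial_pyramid_pool(np.random.default_rng(0).random((3, 8, 8)))
v2 = spatial_pyramid_pool(np.random.default_rng(1).random((3, 13, 9)))
```

Each pyramid cell is independent of the others, so this loop is also a natural target for the kind of multi-core parallelization the paper applies to the ADPM core.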
We propose an extension to multiple dimensions of the univariate index of agreement between Probability Density Functions (PDFs) used in climate studies. We also provide a set of high-performance programs targeted both to single and multi-core processors. They compute multivariate PDFs by means of kernels, the optimal bandwidth using smoothed bootstrap, and the index of agreement between multidimensional PDFs. Their use is illustrated with two case studies. The first one assesses the ability of seven global climate models to reproduce the seasonal cycle of zonally averaged temperature. The second case study analyzes the ability of an oceanic reanalysis to reproduce global Sea Surface Temperature and Sea Surface Height. Results show that the proposed methodology is robust to variations in the optimal bandwidth used. The technique is able to process multivariate datasets corresponding to different physical dimensions. The methodology is very sensitive to the existence of a bias in the model with respect to observations. (C) 2014 Elsevier Ltd. All rights reserved.
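The paper's index is computed from kernel-based PDFs with a smoothed-bootstrap bandwidth; as a simplified stand-in, the overlap idea behind such agreement indices can be illustrated with multivariate histograms (in the spirit of the univariate Perkins-type overlap score; this is our assumption, not the paper's exact index):

```python
import numpy as np

def overlap_index(a, b, bins, ranges):
    """Histogram-based overlap between two multivariate samples.

    Both samples are binned on the same grid; the index is the summed
    bin-wise minimum of the two normalized histograms: 1 for identical
    distributions, 0 for fully disjoint ones.
    """
    pa, _ = np.histogramdd(a, bins=bins, range=ranges)
    pb, _ = np.histogramdd(b, bins=bins, range=ranges)
    pa = pa / pa.sum()
    pb = pb / pb.sum()
    return float(np.minimum(pa, pb).sum())

# toy check: a sample against itself, and against a strongly biased sample
rng = np.random.default_rng(0)
a = rng.normal(loc=0.0, size=(2000, 2))
b = rng.normal(loc=10.0, size=(2000, 2))   # large "model bias"
ranges = [(-5.0, 15.0), (-5.0, 15.0)]
same = overlap_index(a, a, 10, ranges)
disjoint = overlap_index(a, b, 10, ranges)
```

The collapse of the index for the shifted sample mirrors the paper's observation that the methodology is very sensitive to a bias between model and observations.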
ISBN:
(Print) 9781479938247
SLAM algorithms are widely used by autonomous robots operating in unknown environments. Several works have presented optimizations mainly focused on the algorithm's complexity. New computing technologies (SIMD coprocessors, multi-core architectures) can greatly reduce processing time but require rethinking the algorithm's implementation. This paper presents an efficient implementation of the EKF-SLAM algorithm on an embedded system based on an OMAP multi-core architecture with SIMD optimizations. The aim is to optimize the algorithm implementation to improve the localization quality. Results demonstrate that an optimized implementation is always needed to achieve efficient performance and can help in designing embedded systems that implement a low-cost multi-core architecture operating under real-time constraints.
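EKF-SLAM is built on the standard EKF predict/update cycle, which can be sketched generically (this is textbook machinery with our own names, not the paper's OMAP/SIMD implementation; in SLAM the state x stacks the robot pose and landmark positions, and f, h are nonlinear motion and observation models linearized via their Jacobians F, H):

```python
import numpy as np

def ekf_predict(x, P, f, F, Q):
    # propagate the state through the motion model f; F is its Jacobian,
    # Q the process-noise covariance
    return f(x), F @ P @ F.T + Q

def ekf_update(x, P, z, h, H, R):
    # correct with measurement z; h is the observation model, H its Jacobian,
    # R the measurement-noise covariance
    y = z - h(x)                          # innovation
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# toy sanity check: estimate a static scalar state from repeated direct
# measurements (with identity models the EKF reduces to a linear KF)
x = np.array([0.0])
P = np.array([[10.0]])
F = H = np.eye(1)
Q = np.zeros((1, 1))
R = np.eye(1)
for _ in range(10):
    x, P = ekf_predict(x, P, lambda s: s, F, Q)
    x, P = ekf_update(x, P, np.array([5.0]), lambda s: s, H, R)
```

The O(n^2) matrix products in the update step are exactly where SIMD and multi-core optimizations of the kind the paper describes pay off, since the covariance P grows with the number of mapped landmarks.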