ISBN (print): 9781538631232
Large-scale graph analysis, also called network analysis, is supported by different algorithms, among the most relevant being PageRank (Web page ranking), betweenness centrality (centrality in a graph) and community detection. Because of their complexity and the large amount of data they process, the diverse applications that use them increasingly need computational resources such as processor, memory and storage. For these reasons it is necessary to apply high performance computing (HPC), but applying HPC would not be useful without having designed these algorithms with parallel programming, and there have been many studies on its application and on methodologies for doing so. The purpose of this work is to create a framework that allows computer science students to abstract a computer system based on the parallel programming paradigm, which means that students become acquainted with solving algorithmic problems in a more natural way, away from typical sequential thinking. The development of a graph analysis design pattern oriented to parallel programming on HPC, complemented with the design of networked didactic learning techniques such as laboratories and/or simulators, is key to the development of this framework.
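As an illustration of the kind of parallel graph kernel such a framework could expose to students, the sketch below distributes one PageRank iteration over worker processes, each computing the new ranks of a chunk of nodes. The toy graph, the process-pool decomposition and the damping factor are assumptions of the example, not material from the paper.

```python
# Minimal parallel PageRank sketch: each worker computes the new rank
# for a chunk of nodes from the current rank vector.
from concurrent.futures import ProcessPoolExecutor
from functools import partial

def rank_chunk(chunk, in_links, out_degree, ranks, n, d=0.85):
    # New rank of v: teleport term plus damped contributions of its in-neighbours.
    return {v: (1 - d) / n + d * sum(ranks[u] / out_degree[u] for u in in_links.get(v, []))
            for v in chunk}

def parallel_pagerank(in_links, out_degree, nodes, iters=20, workers=4):
    n = len(nodes)
    ranks = {v: 1.0 / n for v in nodes}
    chunks = [nodes[i::workers] for i in range(workers)]   # vertex partitioning
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for _ in range(iters):
            worker = partial(rank_chunk, in_links=in_links, out_degree=out_degree,
                             ranks=ranks, n=n)
            new = {}
            for part in pool.map(worker, chunks):
                new.update(part)
            ranks = new
    return ranks

if __name__ == "__main__":
    # Toy graph with edges a->b, a->c, b->c, c->a, expressed as in-links.
    in_links = {"a": ["c"], "b": ["a"], "c": ["a", "b"]}
    out_degree = {"a": 2, "b": 1, "c": 1}
    print(parallel_pagerank(in_links, out_degree, ["a", "b", "c"]))
```

On a classroom-sized graph the process overhead dominates, but the vertex-partitioned decomposition is the same one that scales out on HPC clusters, which is the didactic point such an exercise would make.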
ISBN (print): 9781889335513
High performance computing (HPC) means the aggregation of computational power to increase the ability to process large problems in science, engineering, and business. HPC on the cloud allows on-demand HPC tasks to be performed by high performance clusters in a cloud environment. The connection structure of the nodes in HPC clusters should provide fast internode communication, and it is important that scalability is preserved as well. This paper proposes a hypercube topology for connecting the nodes in an HPC cluster that facilitates fast communication between nodes. In addition, the proposed hypercube topology provides the ability to scale, which is needed for high performance computing on the cloud.
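A minimal sketch of the general property that makes the hypercube attractive here: label the 2^d nodes with d-bit integers, connect nodes whose labels differ in exactly one bit, and any message can be routed in at most d = log2(N) hops. These are standard hypercube facts, not code from the paper.

```python
# d-dimensional hypercube: node labels are 0 .. 2**d - 1, and two nodes are
# linked iff their labels differ in exactly one bit.
def neighbors(node, d):
    return [node ^ (1 << bit) for bit in range(d)]

def route(src, dst, d):
    # Correct the differing bits one at a time (dimension-ordered routing),
    # so the path length never exceeds d hops.
    path, cur = [src], src
    for bit in range(d):
        if (cur ^ dst) & (1 << bit):
            cur ^= 1 << bit
            path.append(cur)
    return path

if __name__ == "__main__":
    d = 4                            # a 16-node cluster
    print(neighbors(0b0000, d))      # [1, 2, 4, 8]
    print(route(0b0000, 0b1011, d))  # at most d = 4 hops
```

Doubling the cluster to 2^(d+1) nodes adds only one link per node and one hop to the diameter, which is the scalability argument the abstract refers to.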
ISBN (print): 9781467387767
The importance of optimization and of solving NP problems cannot be overemphasized. The usefulness and popularity of evolutionary computing methods are also well established. There are various types of evolutionary methods; most are sequential, and some have parallel implementations. We propose a method to parallelize the Imperialist Competitive Algorithm (multi-population). The algorithm has been implemented with MPI on two platforms, and we tested it on a shared-memory and a message-passing architecture. An outstanding performance is obtained, which indicates that the method is efficient with respect to speed and accuracy. In a second step, the proposed algorithm is compared with a set of existing, well-known parallel algorithms and is shown to obtain more accurate solutions in less time.
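A minimal multi-population sketch of this coarse-grained MPI parallelism, assuming mpi4py and a toy objective; the real work of the Imperialist Competitive Algorithm (assimilation, revolution, imperialistic competition) would replace the placeholder local_step below.

```python
# Multi-population sketch with mpi4py: each rank evolves its own sub-population
# and periodically exchanges its best individual with all other ranks.
import random
from mpi4py import MPI

def cost(x):                       # toy objective: sphere function
    return sum(v * v for v in x)

def local_step(pop):
    # Placeholder for one ICA iteration: here just a greedy Gaussian
    # perturbation of every individual, kept only if it improves.
    new = [[v + random.gauss(0, 0.1) for v in x] for x in pop]
    return [n if cost(n) < cost(x) else x for n, x in zip(new, pop)]

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
random.seed(rank)                  # a different sub-population on every rank
pop = [[random.uniform(-5, 5) for _ in range(10)] for _ in range(20)]

for it in range(100):
    pop = local_step(pop)
    if it % 10 == 0:                        # periodic migration
        best = min(pop, key=cost)
        migrants = comm.allgather(best)     # every rank receives every rank's best
        pop.sort(key=cost)
        pop[-len(migrants):] = migrants     # replace the worst individuals

print(rank, min(cost(x) for x in pop))
```

Launched with, e.g., mpiexec -n 4 python sketch.py, the same source runs unchanged on a shared-memory node or across a message-passing cluster, which mirrors the two test platforms mentioned in the abstract.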
ISBN (print): 9781450347556
C. elegans is a primitive multicellular organism (worm) that shares many important biological characteristics that arise as complications within human beings. [1] It begins as a single cell and then undergoes a complex embryogenesis to form a complete animal. Using experimental data, the early stages of life of the cells are simulated by computers. The goal of this project is to use this simulation to compare the embryogenesis stage of C. elegans cells with that of human cells. Since the simulation involves the manipulation of many files and large amounts of data, the power provided by supercomputers and parallel programming is required.
We present a cellular automata model to simulate, with a parallelized code, nanostructured alumina formation during anodization. The model is based on the Field Assisted Dissolution approach to anodization. The parallel code for model simulation is run on Nvidia Tesla GPU cards. We verify that the parallel algorithm yields correct analytical results for simple exclusion diffusion between emitting and absorbing walls in 3D. We identify the model parameters that have a strong impact on the nanostructures obtained in simulations and present the diagram of prevalence of these structures. We also simulate in our model the so-called "two-step anodization" and find agreement with experimental findings.
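The validation case mentioned above, diffusion between an emitting and an absorbing wall, has a simple analytical steady state. The sketch below reproduces it with a much-simplified deterministic 1D cellular automaton in which every cell updates independently from its neighbours; NumPy vectorisation stands in for the per-cell GPU parallelism, and none of this is the paper's CUDA code or its stochastic exclusion model.

```python
# 1D stand-in for the wall-to-wall diffusion validation case: the emitting
# wall is held at concentration 1, the absorbing wall at 0, and every
# interior cell is updated independently from its two neighbours.
import numpy as np

def step(c):
    new = c.copy()
    new[1:-1] = 0.5 * (c[:-2] + c[2:])   # each cell averages its neighbours
    new[0], new[-1] = 1.0, 0.0           # emitting / absorbing boundary cells
    return new

c = np.zeros(64)
c[0] = 1.0
for _ in range(5000):
    c = step(c)

# The steady state should approach the analytical linear profile between the walls.
expected = np.linspace(1.0, 0.0, 64)
print(np.max(np.abs(c - expected)))      # small residual after many steps
```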
ISBN (print): 9781479948741
In this study we propose a parallel implementation of the combinatorial-type artificial bee colony algorithm, which has an efficient neighbour-production mechanism. Running time and performance tests of the proposed parallel model were carried out on the traveling salesman problem. Results show that the parallel artificial bee colony algorithm decreases the running time by exploiting the computational power of parallel computing systems. Besides, better-quality solutions are obtained compared to the reproduction methods of the genetic algorithm.
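A generic master/worker view of the employed-bee phase for the TSP, with each food source (tour) perturbed and evaluated in a separate process; the 2-opt style move and the small random instance are illustrative assumptions, not the paper's neighbour-production mechanism.

```python
# One employed-bee phase per loop iteration: every food source is perturbed
# and evaluated concurrently by the process pool.
import random
from concurrent.futures import ProcessPoolExecutor

random.seed(0)   # so worker processes re-importing the module build the same cities
CITIES = [(random.random(), random.random()) for _ in range(50)]

def tour_length(tour):
    return sum(((CITIES[tour[i]][0] - CITIES[tour[i - 1]][0]) ** 2 +
                (CITIES[tour[i]][1] - CITIES[tour[i - 1]][1]) ** 2) ** 0.5
               for i in range(len(tour)))

def bee_step(tour):
    # Neighbour production: reverse a random segment (2-opt style move),
    # keeping the candidate only if it shortens the tour.
    i, j = sorted(random.sample(range(len(tour)), 2))
    cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
    return cand if tour_length(cand) < tour_length(tour) else tour

if __name__ == "__main__":
    colony = [random.sample(range(50), 50) for _ in range(8)]   # 8 food sources
    with ProcessPoolExecutor() as pool:
        for _ in range(200):
            colony = list(pool.map(bee_step, colony))
    print(min(tour_length(t) for t in colony))
```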
ISBN (print): 9781479954964
Fast Oil Paint Image filter is a performance-optimized oil paint algorithm. Current oil paint algorithms are CPU intensive and take a long time to produce the output; further, the time taken increases exponentially with increasing quality. One of the main causes is re-computation. The proposed algorithm significantly reduces re-computation, cutting the processing time by approximately 86%. The research also evaluates the algorithm using the parallel programming approach.
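The abstract does not spell out where the re-computation is removed; one common trick in windowed filters of this kind is a sliding histogram, sketched below for a grayscale image (illustrative only, not necessarily the paper's optimisation). Each row sweep is independent, which is the natural unit for the parallel-programming evaluation the abstract mentions.

```python
# Classic oil-paint filter: each output pixel takes the mean intensity of the
# dominant quantised bin in its neighbourhood. The histogram slides along each
# row (drop the leaving column, add the entering column) instead of being
# rebuilt per pixel; image borders are left untouched for brevity.
import numpy as np

def oil_paint_row_sweep(img, radius=3, levels=16):
    h, w = img.shape
    q = img.astype(np.int32) * levels // 256            # quantised intensity bin
    out = np.zeros_like(img)
    for y in range(radius, h - radius):                 # rows are independent -> parallelizable
        hist = np.zeros(levels, dtype=np.int32)
        sums = np.zeros(levels, dtype=np.int64)
        win = (slice(y - radius, y + radius + 1), slice(0, 2 * radius + 1))
        np.add.at(hist, q[win].ravel(), 1)
        np.add.at(sums, q[win].ravel(), img[win].astype(np.int64).ravel())
        for x in range(radius, w - radius):
            if x > radius:                               # slide the window one column
                rows = slice(y - radius, y + radius + 1)
                old, new = (rows, x - radius - 1), (rows, x + radius)
                np.add.at(hist, q[old], -1)
                np.add.at(sums, q[old], -img[old].astype(np.int64))
                np.add.at(hist, q[new], 1)
                np.add.at(sums, q[new], img[new].astype(np.int64))
            b = int(hist.argmax())                       # dominant bin in the window
            out[y, x] = sums[b] // hist[b]               # mean intensity of that bin
    return out

if __name__ == "__main__":
    img = (np.random.rand(64, 64) * 255).astype(np.uint8)
    print(oil_paint_row_sweep(img).shape)
```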
The Multidimensional Knapsack Problem (MKP) is a generalization of the basic Knapsack Problem, with two or more constraints. It is an important optimization problem with many real-life applications. To solve this NP-hard problem we use a metaheuristic algorithm based on ant colony optimization (ACO). Since several steps of the algorithm can be carried out concurrently, we propose a parallel implementation under the GPGPU paradigm (General Purpose Graphics Processing Units) using CUDA. To use the algorithm presented in this paper, it is necessary to balance the number of ants, number of rounds used, and whether local search is used or not, depending on the quality of the solution desired. In other words, there is a compromise between time and quality of solution. We obtained very promising experimental results and we compared our implementation with those in the literature. The results obtained show that ant colony optimization is a viable approach to solve MKP efficiently, even for large instances, with the parallel approach.
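A sketch of the per-ant solution construction that makes the GPGPU mapping natural: every ant builds its solution independently, so one CUDA thread or block can be assigned per ant (here CPU processes stand in for those threads). The tiny instance, the profit-times-pheromone choice rule and the pheromone update are illustrative assumptions, not the paper's parameters.

```python
# Each ant repeatedly picks a feasible item with probability proportional to
# pheromone * profit until no item fits; ants of one round run in parallel.
import random
from concurrent.futures import ProcessPoolExecutor

PROFIT = [10, 13, 7, 8, 9, 4]
WEIGHT = [[2, 3, 1, 4, 3, 1],        # one row per constraint (dimension)
          [3, 1, 2, 2, 1, 2]]
CAP = [7, 6]

def build_solution(pheromone):
    remaining, chosen = list(CAP), []
    while True:
        feasible = [j for j in range(len(PROFIT)) if j not in chosen and
                    all(WEIGHT[k][j] <= remaining[k] for k in range(len(CAP)))]
        if not feasible:
            break
        weights = [pheromone[j] * PROFIT[j] for j in feasible]
        j = random.choices(feasible, weights=weights)[0]
        chosen.append(j)
        for k in range(len(CAP)):
            remaining[k] -= WEIGHT[k][j]
    return chosen, sum(PROFIT[j] for j in chosen)

if __name__ == "__main__":
    pheromone = [1.0] * len(PROFIT)
    with ProcessPoolExecutor() as pool:
        for _ in range(20):                                          # rounds
            ants = list(pool.map(build_solution, [pheromone] * 16))  # 16 ants per round
            best_items, best_profit = max(ants, key=lambda a: a[1])
            pheromone = [0.9 * p for p in pheromone]                 # evaporation
            for j in best_items:                                     # reinforce the round's best
                pheromone[j] += 0.1
    print(best_profit, sorted(best_items))
```

The ants/rounds/local-search balance the abstract describes corresponds here to the number of tasks per round, the outer loop count, and an optional improvement step applied to each constructed solution.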
Both the size and the resolution of images have always been key topics in the graphical computing area, and they have become even more relevant in the big data era. Huge amounts of image data are often exchanged over medium/low-bandwidth networks or need to be stored on devices with limited memory. In this context, the present paper shows the use of the Fractal method for image compression. It is a lossy method known for providing high file-reduction ratios at the cost of a highly time-consuming encoding phase. We therefore developed a parallel application model that exploits the power of multiprocessor architectures in order to obtain the advantages of the Fractal method in a feasible time. The evaluation was done with different-sized images as well as on two types of machines, one with two and another with four cores. The results demonstrated that both the speedup and the efficiency are highly dependent on the number of cores, and they emphasized that a large number of threads does not always yield better performance.
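A simplified grayscale sketch of where the time goes in Fractal encoding and why it parallelizes well: each range block independently searches all downsampled domain blocks for the best contrast/brightness match, so the searches can be farmed out to worker processes or threads. The block sizes, the omission of the eight isometries and the random test image are simplifying assumptions, not the paper's implementation.

```python
# Costly phase of Fractal encoding: per range block, find the domain block and
# affine map (contrast s, brightness o) that best reproduce it. Range blocks
# are independent, so they are distributed over a process pool.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

R = 4                                   # range block size; domain blocks are 2R x 2R
np.random.seed(0)                       # same test image in every worker process
IMG = np.random.rand(64, 64)

def domain_pool(img):
    pools = []
    for y in range(0, img.shape[0] - 2 * R + 1, R):
        for x in range(0, img.shape[1] - 2 * R + 1, R):
            d = img[y:y + 2 * R, x:x + 2 * R]
            pools.append(((y, x), d.reshape(R, 2, R, 2).mean(axis=(1, 3))))  # downsample to R x R
    return pools

DOMAINS = domain_pool(IMG)

def best_match(range_pos):
    y, x = range_pos
    r = IMG[y:y + R, x:x + R].ravel()
    best = None
    for pos, d in DOMAINS:
        dflat = d.ravel()
        s, o = np.polyfit(dflat, r, 1)                  # least-squares r ~ s*d + o
        err = np.sum((s * dflat + o - r) ** 2)
        if best is None or err < best[0]:
            best = (err, pos, float(s), float(o))
    return range_pos, best

if __name__ == "__main__":
    ranges = [(y, x) for y in range(0, 64, R) for x in range(0, 64, R)]
    with ProcessPoolExecutor() as pool:
        codebook = dict(pool.map(best_match, ranges))   # one transform per range block
    print(len(codebook), "range blocks encoded")
```

Because the per-block searches are short, spawning far more workers than cores only adds scheduling overhead, which is consistent with the paper's observation that more threads do not always mean better performance.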