ISBN (Print): 9781728131795
This paper presents a solution to the problem of edge coloring for a sizable set of cubic graphs, together with an examination of the relations between these graphs. We solved this problem on various computing systems and for various problem sizes (various numbers of graphs). For the computations we used a High-Performance Computing cluster and the Amazon Web Services cloud environment. We measured and analyzed the computation time of edge coloring and other properties. The largest set we worked with contained almost 10 million graphs. We created a new methodology that can be used to find an ordering of the edges which optimizes the computation time of edge coloring for a certain subset of graphs. On the basis of this methodology, we implemented an algorithm for parallel edge coloring of a set of graphs. To test the methodology, we designed 8 experiments. The results showed that the worst edge-coloring time for a graph from a set of 19 935 graphs before applying the methodology was 1260 ms. After applying our methodology, we found a single edge ordering for the whole group of 19 935 graphs, and the highest coloring time was 10 ms.
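The core search this abstract refers to can be illustrated with a minimal backtracking edge colorer whose runtime depends heavily on the order in which edges are tried — the effect the paper's methodology exploits. This is an illustrative sketch, not the authors' implementation:

```python
from itertools import combinations

def edge_coloring(edges, num_colors=3):
    """Backtracking proper edge coloring: edges sharing a vertex must get
    distinct colors. The order of `edges` strongly affects how much the
    search backtracks, hence the paper's interest in edge orderings."""
    coloring = {}

    def conflicts(e, c):
        u, v = e
        return any(c == coloring[f] for f in coloring if u in f or v in f)

    def solve(i):
        if i == len(edges):
            return True
        e = edges[i]
        for c in range(num_colors):
            if not conflicts(e, c):
                coloring[e] = c
                if solve(i + 1):
                    return True
                del coloring[e]
        return False

    return coloring if solve(0) else None

# K4 is the smallest complete cubic graph; by Vizing's theorem a cubic
# graph needs 3 or 4 colors, and K4 needs exactly 3.
k4 = list(combinations(range(4), 2))
result = edge_coloring(k4)
```

Reordering `k4` before the call changes how many branches the search explores while producing an equally valid coloring, which is the behaviour the paper optimizes across a whole set of graphs.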
ISBN (Print): 9780769544151
In this paper, an original parallel domain decomposition method for ray-tracing is proposed to solve numerical acoustic problems on multi-core and multi-processor computers. A hybrid method between ray-tracing and beam-tracing is first introduced. Then, a new parallel method based on domain decomposition principles is proposed. This method handles large-scale open domains for parallel computing purposes better than other existing methods. Parallel numerical experiments, carried out on a real-world problem, namely the analysis of acoustic pollution within a large city, illustrate the performance of this new domain decomposition method.
We continue to study various properties of the generalized exchanged hypercube (GEH) structure. We also derive the g-good-neighbour diagnosability of the exchanged hypercube structure, a special case of GEH, i.e. the ...
ISBN (Print): 9781467377010
Complex Event Processing (CEP) and Mobile Ad hoc Networks (MANETs) are two technologies that can be used to enable monitoring applications for Emergency and Rescue (ER) missions. MANETs are characterized by energy limitations, and in-network processing, i.e. distributed CEP, is one possible solution. The operator placement mechanism for distributed CEP has a direct impact on energy consumption. Existing operator placement mechanisms focus on static network topologies and are therefore inappropriate for MANET scenarios. We propose a novel, energy-efficient, decentralized placement mechanism for distributed CEP, designed to achieve fast convergence with minimal data transmission cost while achieving a near-optimal placement assignment. We compare our decentralized placement mechanism with a centralized approach under different mobility scenarios. Furthermore, we evaluate the distributed CEP under different workload scenarios to gain additional insight into the performance characteristics of the system. Finally, we measure the impact of a simple placement replication scheme on overall system performance in terms of delay and message overhead. Our decentralized placement mechanism achieves up to almost 50% lower message overhead than the centralized approach, and maintains lower overhead across the different mobility scenarios. The placement replication scheme achieves up to 51% lower delay compared to the decentralized placement mechanism without replication.
ISBN (Print): 9781450390705
We use "unplugged" activities to introduce parallel concepts in a first-year seminar for Computer Science majors. Student teams explore parallel approaches to computational tasks. Pre- and post-activity surveys, and a reflection paper, measure the impact of these activities on students' views about parallel programming. Our goal is to encourage parallel thinking about programming tasks before sequential approaches become ingrained. Computer Science curricula have traditionally focused on sequential approaches to programming, which were well matched to earlier computer systems. However, current systems almost all use multi-core CPUs, and are frequently deployed in clusters or networks of multiple computers. Recent curricular guidelines from organizations such as ACM and ABET recommend exposure to parallel computing concepts.
ISBN (Print): 9783319681795; 9783319681788
In this paper, we present a contribution to the Single Source Shortest Path Problem (SSSPP) in large-scale graphs with the A* algorithm. A* is one of the most efficient graph traversal algorithms because it is driven by a heuristic that guides it toward the optimal path. However, the A* approach is not efficient when the graph is too large to be processed, due to its exponential time complexity. We propose a MapReduce-based approach called MRA* (MapReduce-A*), which combines the A* algorithm with the MapReduce paradigm to compute the shortest path in a parallel and distributed environment. We performed experiments on a Hadoop multi-node cluster, and our results show that the proposed approach outperforms the A* algorithm and significantly reduces the computation time.
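For context, the sequential baseline that MRA* distributes is standard A*: expand the node minimizing f(n) = g(n) + h(n), where g is the path cost so far and h an admissible heuristic. A minimal sketch of that baseline (the toy graph is hypothetical, and this does not model the MapReduce distribution itself):

```python
import heapq

def a_star(graph, start, goal, h):
    """Sequential A*: pop the frontier node with the lowest f = g + h.
    `graph` maps a node to a list of (neighbor, edge_cost) pairs;
    `h` must never overestimate the remaining cost (admissibility)."""
    open_heap = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return g, path
        for nbr, cost in graph.get(node, []):
            ng = g + cost
            if ng < best_g.get(nbr, float("inf")):   # found a cheaper route
                best_g[nbr] = ng
                heapq.heappush(open_heap, (ng + h(nbr), ng, nbr, path + [nbr]))
    return None

# Hypothetical toy graph; with h = 0 the heuristic is trivially admissible
# and A* degrades to Dijkstra's algorithm.
toy = {"A": [("B", 1), ("C", 4)], "B": [("C", 2), ("D", 5)], "C": [("D", 1)]}
dist, path = a_star(toy, "A", "D", lambda n: 0)
```

MRA*'s contribution is to partition this frontier expansion across mappers and reducers so that very large graphs, where the single-process version above becomes intractable, can still be searched.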
ISBN (Print): 9781728199245
In heterogeneous computing systems, general-purpose CPUs are coupled with co-processors of different architectures, such as GPUs and FPGAs. Applications may take advantage of this heterogeneous device ensemble to accelerate execution. However, developing heterogeneous applications requires specific programming models, under which applications unfold into code components targeting different computing devices. OpenCL is one of the main programming models for heterogeneous applications, set apart from others by its openness, vendor independence, and support for different co-processors. In the original OpenCL application model, a heterogeneous application starts on a certain host node and then resorts to the local co-processors attached to that host. Co-processors at other nodes networked with the host node are therefore inaccessible and cannot be used to accelerate the application. rOpenCL (remote OpenCL) overcomes this limitation for a significant subset of the OpenCL 1.2 API, offering OpenCL applications transparent access to remote devices through a TCP/IP-based network. This paper presents the architecture and the most relevant implementation details of rOpenCL, together with the results of a preliminary set of reference benchmarks. These prove the stability of the current prototype and show that, in many scenarios, the network overhead is smaller than expected.
ISBN (Print): 9781538650356
Spatial association mining, one of the important techniques in spatial data mining, is used to discover interesting relationship patterns among spatial features, based on spatial proximity, from a large spatial database. The explosive growth of georeferenced data has emphasized the need for computationally efficient methods of analyzing big spatial data. Parallel and distributed computing is an effective and widely used strategy for speeding up algorithms on large-scale datasets. This work presents parallel spatial association mining on the Spark RDD framework, an in-memory parallel computing model specially designed to support iterative algorithms. Initial experimental results show that the Spark-based algorithm performs significantly better in spatial association pattern mining than the MapReduce-based method.
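The proximity relation that spatial association patterns are built from can be sketched as a pairwise distance check between labeled points. This is a deliberately tiny sequential sketch with hypothetical data; the paper performs this kind of counting at scale over Spark RDDs, which the sketch does not model:

```python
from itertools import combinations
from math import hypot

def colocated_pairs(points, threshold):
    """Count how often pairs of distinct spatial feature types occur
    within `threshold` distance of each other. `points` is a list of
    (feature_label, x, y) tuples; the returned dict maps a sorted
    feature pair to its co-location count."""
    counts = {}
    for (fa, xa, ya), (fb, xb, yb) in combinations(points, 2):
        if fa != fb and hypot(xa - xb, ya - yb) <= threshold:
            key = tuple(sorted((fa, fb)))
            counts[key] = counts.get(key, 0) + 1
    return counts

# Hypothetical point data: a school and a park close together,
# plus two features too far away to co-locate.
data = [("school", 0, 0), ("park", 1, 0),
        ("school", 10, 10), ("factory", 50, 50)]
near = colocated_pairs(data, threshold=2.0)
```

Frequent pairs found this way are the candidate antecedents/consequents from which spatial association rules are then derived.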
ISBN (Print): 9783540734345
A process of Knowledge Discovery in Databases (KDD) involving large amounts of data requires a considerable amount of computational power. The process may be run on dedicated and expensive machinery or, for some tasks, on a network of affordable machines using distributed computing techniques. In either approach, it is usual for the user to specify the workflow of the sub-tasks composing the whole KDD process before execution starts. In this paper we propose a technique that we call Distributed Generative Data Mining. The generative feature of the technique is its capability of generating new sub-tasks of the Data Mining analysis process at execution time; the workflow of DM sub-tasks is therefore dynamic. To deploy the proposed technique, we extended the distributed Data Mining system HARVARD and adapted an Inductive Logic Programming system (IndLog) used in a Relational Data Mining task. As a proof of concept, the extended system was used to analyse an artificial dataset of a credit scoring problem with eighty million records.
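The "generative" idea, that the task graph is not fixed before execution but grows as tasks run, can be sketched with a plain work queue where each task may emit new sub-tasks. This is a hypothetical toy model of the concept only; the paper's HARVARD/IndLog system distributes such tasks across a network of machines:

```python
from collections import deque

def run_generative_workflow(initial_tasks):
    """Each task is a zero-argument callable returning (result, new_tasks).
    Sub-tasks are discovered and enqueued at execution time, so the
    workflow is dynamic rather than declared up front."""
    queue = deque(initial_tasks)
    results = []
    while queue:
        result, new_tasks = queue.popleft()()
        results.append(result)
        queue.extend(new_tasks)   # sub-tasks generated at run time
    return results

# Toy example: recursively split a dataset range until chunks are
# small enough to "mine" directly.
def analyse(lo, hi):
    def task():
        if hi - lo <= 2:                      # small enough: mine it
            return (f"mined[{lo},{hi})", [])
        mid = (lo + hi) // 2                  # otherwise spawn two sub-tasks
        return (f"split[{lo},{hi})", [analyse(lo, mid), analyse(mid, hi)])
    return task

log = run_generative_workflow([analyse(0, 8)])
```

In a distributed deployment, the queue becomes a scheduler handing generated sub-tasks to idle worker machines, which is where the dynamic workflow pays off.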