The processing of microscopic tissue images is nowadays increasingly done with specialized immunodiagnostic-evaluation software products. To evaluate a sample, the first step is often to determine the number and location of cell nuclei. One of the most promising methods for this is region growing, but the algorithm is very sensitive to how its various parameters are set. Because of the large number of parameters and the large set of possible values, setting them manually is quite a hard task, so we developed a genetic algorithm to optimize these values. The first step of the development is a statistical analysis of the parameters and the determination of the important features, to extract valuable information for a to-be-implemented genetic algorithm that will perform the optimization.
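A minimal sketch of intensity-based region growing makes the parameter sensitivity concrete: even this stripped-down version (the function name, the 4-connectivity choice, and the single tolerance parameter are illustrative assumptions, not taken from the paper) depends on a seed point and a tolerance, and a real nucleus-detection pipeline exposes many more such values for a genetic algorithm to tune.

```python
from collections import deque

def region_grow(image, seed, tolerance):
    """Grow a region from `seed`, absorbing 4-connected pixels whose
    intensity differs from the seed's intensity by at most `tolerance`.
    `image` is a list of lists of intensities; returns the set of
    (row, col) coordinates belonging to the grown region."""
    rows, cols = len(image), len(image[0])
    seed_val = image[seed[0]][seed[1]]
    region = {seed}
    frontier = deque([seed])
    while frontier:
        r, c = frontier.popleft()
        # Examine the four axis-aligned neighbours.
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in region
                    and abs(image[nr][nc] - seed_val) <= tolerance):
                region.add((nr, nc))
                frontier.append((nr, nc))
    return region
```

Raising or lowering `tolerance` by a few intensity levels can merge neighbouring nuclei or split one nucleus into fragments, which is exactly the kind of sensitivity the statistical analysis has to characterize.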
ISBN:
(Print) 9781618397881
The matrix chain order problem (MCOP) is an important example of a dynamic programming problem. It has a simple definition and is frequently used to represent a larger class of dynamic programming problems called Non-serial Polyadic Problems. This paper presents a coarse-grained solution to the MCOP with O(n^3/p^3) time and O(1) communication steps in the worst case. The paper also presents experimental results from an implementation of the algorithm; in our experiments we obtained speedups of up to 6.61 with 32 processes. As far as we know, this is the fastest solution to the problem in distributed-memory computing, from both a theoretical and a practical point of view.
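For reference, the serial O(n^3) dynamic program that the coarse-grained O(n^3/p^3) algorithm parallelizes can be sketched as follows (a standard textbook formulation, not the paper's distributed code):

```python
def matrix_chain_order(dims):
    """Minimum number of scalar multiplications needed to evaluate the
    product A1*A2*...*An, where Ai has shape dims[i-1] x dims[i].
    Classic O(n^3)-time, O(n^2)-space dynamic program."""
    n = len(dims) - 1          # number of matrices
    INF = float("inf")
    # m[i][j] = min cost of computing Ai..Aj (1-indexed).
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):           # chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = INF
            for k in range(i, j):            # split point Ai..Ak | Ak+1..Aj
                cost = m[i][k] + m[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                if cost < m[i][j]:
                    m[i][j] = cost
    return m[1][n]
```

The non-serial, polyadic character of the recurrence (each cell `m[i][j]` depends on a whole diagonal of earlier cells) is what makes the table's distribution across p processes nontrivial.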
Routine operations of emergency first responders are usually well managed. The situation is different for mass casualty emergencies, where more people and property are threatened. In such situations there are no predefined plans in place, and mitigation is handled mostly through crisis management. Teams managing such acute incidents often work with insufficient information. Timely information exchange between the agencies involved, a common understanding of data, and fast provision of knowledge can save lives and protect property. Useful information is heterogeneous and distributed across many organizations, in disparate information sources, in many formats, with different access policies and varying quality. The information sources range from sensors deployed at incident sites, publicly available data sources, corporate legacy systems, and documents stored at remote locations to human end-users providing information via mobile devices. This article addresses the operational challenges of first responders and the complementary challenges of accessing and analyzing information from multiple sources to provide advanced command-and-control capabilities in emergency response. We propose an agent-based infrastructure to support such interoperability, building the framework on an agent infrastructure developed within the scope of the SECRICOM EU integrated project. In this article we focus mainly on the conceptual architecture of such an integration framework.
Cloud computing is expanding widely in the world of IT infrastructure, due in part to the cost-saving effect of economies of scale. Fair market conditions can in theory provide a healthy environment that reflects the most reasonable costs of computations. While fixed cloud pricing offers an attractive low entry barrier for compute-intensive applications, both the consumer and the supplier of computing resources can achieve high efficiency for their investments by participating in auction-based exchanges. The cloud provider has strong incentives to offer auctioned resources; from the consumer perspective, however, using these resources is a sparsely discussed challenge. This paper presents a methodology and framework designed to address the challenges of running HPC (High Performance Computing) applications on auction-based cloud clusters. The authors focus on HPC applications and describe a method for determining bid-aware checkpointing intervals, extending a theoretical model for determining checkpoint intervals with statistical analysis of pricing histories. The latest developments in the SpotHPC framework, which aims to facilitate the managed execution of real MPI applications in auction-based cloud environments, are also introduced. The authors use their model to simulate a set of algorithms with different computation and communication densities. The results show the complex interactions between optimal bidding strategies and parallel application performance.
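The bid-aware interval extends a classical baseline; one widely used first-order approximation is Young's formula, T_opt ≈ sqrt(2·C·MTBF), sketched below. The paper's model additionally folds in statistics of spot-price histories, which this sketch deliberately omits.

```python
import math

def young_checkpoint_interval(checkpoint_cost, mtbf):
    """Young's first-order optimal checkpoint interval:
    T_opt ~= sqrt(2 * C * MTBF), where C is the cost (seconds) of
    writing one checkpoint and MTBF is the mean time between
    failures (seconds) -- here, failures would include out-bid
    terminations of auctioned instances."""
    return math.sqrt(2.0 * checkpoint_cost * mtbf)
```

For example, a 60-second checkpoint cost and a 3-hour mean time between (price-driven) interruptions yields an interval of roughly 19 minutes; shortening the observed MTBF, as volatile spot prices do, pushes the optimal interval down accordingly.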
The exponential growth in user and application data entails new means for providing fault tolerance and protection against data loss. High Performance Computing (HPC) storage systems, which are at the forefront of handling the data deluge, typically employ hardware RAID at the backend. However, such solutions are costly, do not ensure end-to-end data integrity, and can become a bottleneck during data reconstruction. In this paper, we design an innovative solution to achieve a flexible, fault-tolerant, and high-performance RAID-6 solution for a parallel file system (PFS). Our system utilizes low-cost, strategically placed GPUs, on both the client and server sides, to accelerate parity computation. In contrast to hardware-based approaches, we provide full control over the size, length, and location of a RAID array on a per-file basis, end-to-end data integrity checking, and parallelization of RAID array reconstruction. We have deployed our system in conjunction with the widely used Lustre PFS, and show that our approach is feasible and imposes acceptable overhead.
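As a rough illustration of the parity arithmetic being accelerated, the P block of RAID-6 is a bytewise XOR of the data blocks (the Q block, omitted here, requires a second, Reed-Solomon-coded parity); any single lost data block is then recoverable from the survivors. Function names are illustrative, not the paper's API.

```python
def xor_parity(blocks):
    """Compute the P parity block as the bytewise XOR of equal-sized
    data blocks. This is the component of RAID-6 parity that maps
    naturally onto a GPU as an embarrassingly parallel kernel."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

def reconstruct_block(surviving_blocks, parity):
    """Rebuild a single lost data block: XORing the surviving data
    blocks with P cancels them out, leaving the missing block."""
    return xor_parity(list(surviving_blocks) + [parity])
```

Because XOR is associative and commutative, reconstruction of different stripe ranges is independent, which is what allows the paper's parallelized RAID array rebuild.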
ISBN:
(Print) 9781467350648
We present some unique challenges in cognitive radio ad-hoc networks (CRAHNs) that are not present in conventional single-channel or multi-channel wireless ad-hoc networks. We first briefly survey these challenges and their potential impact on the design of efficient algorithms for several fundamental problems in CRAHNs. We then describe our recent contributions to the capacity maximization problem [29] and the connectivity problem [32]. The capacity maximization problem is to maximize the overall throughput utility among multiple unicast sessions; the connectivity problem is to find a connected subgraph of a given cognitive radio network in which each secondary node is equipped with multiple radios. Assuming the physical interference model and asynchronous communications, we reformulate the two problems: capacity maximization becomes finding the maximum number of simultaneously transmitting links in the secondary network, and connectivity becomes constructing a spanning tree over the secondary network using the fewest timeslots. We discuss the challenges of designing distributed approximation algorithms and give a preliminary framework for solving these two problems.
For mining sequential patterns on massive data sets, a distributed sequential pattern mining algorithm (MR-PrefixSpan) based on the MapReduce programming model and PrefixSpan is proposed. Large tasks are decomposed into many small tasks: the Map function is used to mine each prefix-projected sequential pattern, and the projected databases are constructed in parallel. This simplifies the search space and achieves higher mining efficiency. The intermediate values are then passed to a Reduce function, which merges all these values together to produce a possibly smaller set of values. Theoretical analysis and experimental results show that MR-PrefixSpan reduces the time spent scanning the database. It solves the problem of mining massive data effectively, and shows considerable speedup and scale-up performance as the number of processors on the Hadoop platform increases.
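A minimal serial PrefixSpan sketch over sequences of single items shows the projection step that MR-PrefixSpan distributes across Map tasks (the paper handles general sequential patterns; names and the single-item simplification are illustrative assumptions):

```python
def prefixspan(sequences, min_support):
    """Minimal PrefixSpan for sequences of single items. Returns a
    dict mapping each frequent sequential pattern (as a tuple) to
    its support count."""
    patterns = {}

    def project(db, item):
        # Projected database: the suffix of each sequence after the
        # first occurrence of `item`. This per-item projection is the
        # unit of work MR-PrefixSpan hands to a Map task.
        projected = []
        for seq in db:
            for pos, it in enumerate(seq):
                if it == item:
                    projected.append(seq[pos + 1:])
                    break
        return projected

    def mine(prefix, db):
        # Count, per sequence, which items occur in the projected db.
        counts = {}
        for seq in db:
            for it in set(seq):
                counts[it] = counts.get(it, 0) + 1
        # Extend the prefix by every frequent item and recurse.
        for it, sup in counts.items():
            if sup >= min_support:
                pattern = prefix + (it,)
                patterns[pattern] = sup
                mine(pattern, project(db, it))

    mine((), [tuple(s) for s in sequences])
    return patterns
```

Because each prefix's projected database is independent of the others, the recursive calls partition cleanly into parallel tasks, which is the property the MapReduce decomposition exploits.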
Traditionally, the Logical Processes (LPs) forming a simulation model store their execution information in disjoint simulation states, forcing them to exchange events in order to communicate data with each other. In this work we propose the design and implementation of an extension to the traditional Time Warp (optimistic) synchronization protocol for parallel/distributed simulation, targeted at shared-memory/multicore machines, allowing LPs to share parts of their simulation states through global variables. In order to preserve optimism's intrinsic properties, global variables are transparently mapped to multi-version variables, so as to avoid any form of safety-predicate verification upon updates. Execution consistency is ensured by a new rollback scheme triggered upon detection of an incorrect read of a global variable. At the same time, execution efficiency is guaranteed by exploiting non-blocking algorithms to manage the multi-version variable lists. Furthermore, our proposal is integrated with the simulation model's code through software instrumentation, so that the application-level programmer does not need any specific API to mark global variables or to inform the simulation kernel of their updates; thus we support full transparency. An assessment of our proposal, comparing it with a traditional message-passing implementation of multi-version variables, is provided as well.
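The read semantics of a multi-version global variable can be sketched as a timestamp-ordered version list, where a read at simulation time t returns the latest write not after t. This is a simplified, single-threaded sketch with hypothetical names; the paper's implementation manages these lists with non-blocking algorithms and rolls back an LP when a read is later found to have missed a straggler write.

```python
import bisect

class MultiVersionVar:
    """A shared variable kept as a timestamp-ordered list of versions,
    so optimistic LPs reading at different simulation times each see
    the value that was current at their own timestamp."""

    def __init__(self, initial):
        self.timestamps = [0.0]
        self.values = [initial]

    def write(self, ts, value):
        # Insert the new version in timestamp order; out-of-order
        # (straggler) writes simply land at the right position.
        idx = bisect.bisect_right(self.timestamps, ts)
        self.timestamps.insert(idx, ts)
        self.values.insert(idx, value)

    def read(self, ts):
        # Latest version with timestamp <= ts.
        idx = bisect.bisect_right(self.timestamps, ts) - 1
        return self.values[idx]
```

A straggler write with a timestamp earlier than an already-served read is exactly the situation that triggers the new rollback scheme in the paper.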
ISBN:
(Print) 9780769544151
A parallel robot has many desirable features, such as high rigidity, large load capacity, and high precision. For a parallel robot mechanism with AC servo-motor drives, a model of the control system was established. A dynamic sliding mode control algorithm was designed, and the stability of this control algorithm was analyzed. A simulation experiment of trajectory tracking was carried out in Matlab/Simulink. The simulation results demonstrate the validity of the variable structure controller and its good tracking performance, and high-accuracy real-time control of the parallel robot mechanism is achieved.
ISBN:
(Print) 9780769544151
In this paper, an original parallel domain decomposition method for ray tracing is proposed to solve numerical acoustic problems on multi-core and multi-processor computers. A hybrid method between ray tracing and beam tracing is first introduced. Then, a new parallel method based on domain decomposition principles is proposed. This method makes it possible to handle large-scale open domains for parallel computing purposes better than other existing methods. Parallel numerical experiments, carried out on a real-world problem, namely the analysis of acoustic pollution within a large city, illustrate the performance of this new domain decomposition method.