ISBN: (Print) 9781467392860
Object detection (e.g., face detection) using supervised learning often requires extensive training, resulting in long execution times. If the system must be retrained to accommodate a missed detection, waiting several hours, or in some cases days, before the system is ready may not be acceptable in practical implementations. This paper presents a generalized object detection framework in which the system can efficiently adapt to misclassified data and be retrained within a few minutes. The methodology developed here is based on the popular AdaBoost algorithm for object detection. To reduce the learning time, we develop a highly efficient, parallel, and distributed AdaBoost algorithm that achieves a training execution time of only 1.4 seconds per feature on 25 workstations. Further, we incorporate this parallel object detection algorithm into an adaptive framework in which a much smaller, optimized training subset is used to yield high detection rates while further reducing the retraining execution time. We demonstrate the usefulness of our adaptive framework on face and car detection.
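The per-round bottleneck that this kind of system distributes is the weak-learner search: every candidate feature is scored against the current sample weights, and those evaluations are mutually independent. A minimal sketch of that step using decision stumps (all names are illustrative; the split across 25 workstations is only indicated by a comment):

```python
import numpy as np

def best_stump(X, y, w):
    """Select the decision stump (feature, threshold, polarity) with the
    lowest weighted error -- the search AdaBoost repeats once per round.
    Each feature's evaluation is independent, which is what makes the
    outer loop easy to distribute (one feature slice per workstation)."""
    n, d = X.shape
    best = (None, None, 1, np.inf)  # (feature, threshold, polarity, error)
    for j in range(d):  # in the distributed version, each worker takes a subset of features
        for t in np.unique(X[:, j]):
            for polarity in (1, -1):
                pred = np.where(polarity * (X[:, j] - t) >= 0, 1, -1)
                err = np.sum(w[pred != y])
                if err < best[3]:
                    best = (j, t, polarity, err)
    return best
```

Each worker would return its local best stump, and a reduction step keeps the global minimum-error one before the usual AdaBoost weight update.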
ISBN: (Print) 9781467377010
Complex Event Processing (CEP) and Mobile Ad hoc Networks (MANETs) are two technologies that can be used to enable monitoring applications for Emergency and Rescue (ER) missions. MANETs are characterized by energy limitations, and in-network processing, i.e., distributed CEP, is one possible solution. The operator placement mechanism for distributed CEP has a direct impact on energy consumption. Existing operator placement mechanisms focus on static network topologies and are therefore inappropriate for MANET scenarios. We propose a novel, energy-efficient, decentralized placement mechanism designed to achieve fast convergence with minimal data transmission cost while achieving a near-optimal placement assignment. We compare our decentralized placement mechanism with a centralized approach under different mobility scenarios. Furthermore, we evaluate the distributed CEP under different workload scenarios to gain additional insight into the performance characteristics of the system. Finally, we measure the impact of a simple placement replication scheme on overall system performance in terms of delay and message overhead. Our decentralized placement mechanism achieves nearly 50% lower message overhead than the centralized approach, and its message overhead remains lower across different mobility scenarios. The placement replication scheme achieves up to 51% lower delay compared to the decentralized placement mechanism with no replication.
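As a rough illustration of the cost model behind operator placement (not the paper's decentralized mechanism, which avoids the exhaustive scan below), a candidate host can be scored by the expected transmission cost of pulling events from each source and pushing results to the sink; minimizing this cost is what a placement mechanism approximates:

```python
from collections import deque

def hop_counts(adj, src):
    """BFS hop distance from src to every node of an unweighted graph."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def place_operator(adj, sources, sink, rates):
    """Pick the host whose total transmission cost (event rate x hop count
    from each source, plus one output stream to the sink) is minimal.
    A centralized brute-force baseline; a decentralized mechanism would
    approximate the same objective with local information only."""
    best, best_cost = None, float("inf")
    for cand in adj:
        d = hop_counts(adj, cand)
        cost = sum(rates[s] * d[s] for s in sources) + d[sink]
        if cost < best_cost:
            best, best_cost = cand, cost
    return best, best_cost
```

On a line topology A-B-C-D with sources A and C and sink D, the scan places the operator at C, next to the heavier data paths.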
ISBN: (Print) 9781467381147
This article presents an implementation of the control volume method for solving the non-stationary heat conduction problem. The method can be applied to simulating the solidification of metal castings in sand molds, among other applications. A distinctive feature of the developed method is its support for distributed computing, which yields good gains in calculation speed but requires a larger amount of random access memory. In the calculation example, for a grid of 1 million elements, calculation speed increased by 15-20% when using 2 cores of an Intel Core i5 processor and by 30% when using 3 cores. The method also makes it possible to increase calculation accuracy by increasing the number of computed elements for the same calculation time.
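The core of a control-volume (finite-volume) scheme is a flux balance per cell, and within a time step every cell's update depends only on its neighbors, which is what makes the grid easy to split across cores. A minimal 1-D sketch with insulated ends (illustrative only; the paper's solver is 3-D and handles solidification):

```python
import numpy as np

def step_heat_1d(T, alpha, dx, dt):
    """One explicit control-volume update of the 1-D heat equation on a
    uniform grid with insulated ends. Heat crossing each interior face is
    added to one cell and removed from its neighbor, so total energy is
    conserved exactly. Stability requires dt <= dx**2 / (2 * alpha)."""
    flux = alpha * np.diff(T) / dx   # flux through each interior face
    Tn = T.copy()
    Tn[:-1] += dt / dx * flux        # each cell gains flux through its right face
    Tn[1:] -= dt / dx * flux         # and its right neighbor loses the same amount
    return Tn
```

In a distributed run, each core owns a contiguous slab of cells and only the face values on slab boundaries need to be exchanged per step, at the cost of every core holding its slab in RAM, consistent with the memory trade-off the abstract mentions.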
The physics goals of the next Large Hadron Collider run include high-precision tests of the Standard Model and searches for new physics. These goals require detailed comparison of data with computational models simulating the expected data behavior. To highlight the role that modeling and simulation play in future scientific discovery, we report on use cases and experience with a unified system built to process both real and simulated data of growing volume and variety.
ISBN: (Print) 9781479989386
Generally, image rendering requires high computing capacity, and rendering a movie on a single machine is very time-consuming. Using multiple machines to render a movie requires considerable effort to control the workflow and data. With the emergence of cloud computing, more and more scientists and engineers are moving their tasks from laboratories to public clouds. This migration requires some experience with both cloud architecture and coding in the cloud. This paper proposes a simple service, called AzureRender, that accelerates movie rendering on Microsoft Azure. The service also introduces task parallelism and cache management to improve performance and reduce cost. A comparative study of image rendering performance and cost between Microsoft Azure and desktop machines is given at the end of the paper.
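The task parallelism such a service exploits comes from the fact that movie frames are mutually independent. A minimal sketch of the pattern (hypothetical function names; a thread pool stands in for the pool of cloud instances, and a real CPU-bound renderer would use processes or separate VMs):

```python
from concurrent.futures import ThreadPoolExecutor

def render_frame(i):
    """Stand-in for an expensive per-frame render (hypothetical workload)."""
    return sum(j * j for j in range(1000)) + i

def render_movie(n_frames, workers=4):
    """Frames are independent, so movie rendering is embarrassingly
    parallel: farm frames out to a worker pool and collect the results
    in frame order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(render_frame, range(n_frames)))
```

A cache layer on top of this pattern (skipping frames whose inputs have not changed between renders) is what lets repeated renders of a mostly-unchanged movie cost less.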
ISBN: (Print) 9781479979363
Nearest neighbor search is a key technique used in hierarchical clustering. The time complexity of standard agglomerative hierarchical clustering is O(n³), while the time complexity of more advanced hierarchical clustering algorithms (such as nearest neighbor chain) is O(n²). This paper presents a new nearest neighbor search method called nearest neighbor boundary (NNB), which first divides a large dataset into independent subsets and then finds the nearest neighbor of each point within the subsets. When NNB is used, the time complexity of hierarchical clustering can be reduced to O(n log² n). Based on NNB, we propose a fast hierarchical clustering algorithm called nearest-neighbor boundary clustering (NBC), which can also be adapted to parallel and distributed computing frameworks. The experimental results demonstrate that our proposed algorithm is practical for large datasets.
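The idea of restricting nearest-neighbor search to independent subsets can be sketched very simply: cut the sorted data into contiguous blocks and search only a point's own block and its boundary-adjacent blocks. This is an illustrative simplification, not the paper's NNB partitioning, and it is only exact when true nearest neighbors do not lie more than one block away:

```python
import numpy as np

def nearest_in_blocks(points, n_blocks):
    """Block-restricted nearest-neighbor search: sort points along one
    axis, split them into contiguous blocks, and search each point's own
    block plus the two adjacent ones (so points near a block boundary
    still see their close neighbors). Cost per point drops from O(n) to
    roughly O(n / n_blocks)."""
    order = np.argsort(points[:, 0])
    blocks = np.array_split(order, n_blocks)
    nn = {}
    for b, idx in enumerate(blocks):
        # candidate pool: this block and its immediate neighbors
        cand = np.concatenate(blocks[max(0, b - 1):b + 2])
        for i in idx:
            d = np.linalg.norm(points[cand] - points[i], axis=1)
            d[cand == i] = np.inf  # a point is not its own neighbor
            nn[int(i)] = int(cand[np.argmin(d)])
    return nn
```

Because each block (with its fixed boundary context) is processed independently, the outer loop maps directly onto parallel or distributed workers, mirroring how NBC adapts to such frameworks.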
This paper introduces the development of an asynchronous approach coupled with a cascade optimisation algorithm. The approach incorporates concepts of asynchronous Markov processes and introduces a search process that benefits from distributed computing infrastructures. The algorithm uses partitions and pools to store intermediate solutions and their corresponding objectives. Population inflections are performed periodically to ensure that the Markov processes, while still independent and asynchronous, make arbitrary use of intermediate solutions. Tested against complex optimisation problems and compared with the commonly used Tabu Search, the asynchronous cascade algorithm demonstrates significant potential in distributed operations, with favourable comparisons drawn against synchronous and quasi-asynchronous versions of conventional algorithms. (c) 2014 Elsevier Ltd. All rights reserved.
In this paper, the authors investigate the ability of Schwarz relaxation (SR) methods to deal with large systems of differential algebraic equations (DAEs) and assess their respective efficiency. Since the number of iterations required to achieve convergence of the classical SR method is strongly related to the number of subdomains and the time step size, two new preconditioning techniques are developed here. A preconditioner based on a correction using the algebraic equations is introduced first and leads to a number of iterations independent of the number of subdomains. A second preconditioner based on a correction using the Schur complement matrix makes the convergence independent of both the number of subdomains and the integration step size. An application to the European electricity network is presented to outline the performance, efficiency, and robustness of the proposed preconditioning techniques for the solution of DAEs.
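The classical SR iteration the preconditioners improve on can be sketched on a linear system: solve the residual equation restricted to each overlapping subdomain and add the correction back. A bare two-subdomain sketch on a 1-D Poisson matrix (unpreconditioned, so its iteration count grows with the number of subdomains, which is exactly the weakness the paper targets):

```python
import numpy as np

def schwarz_solve(A, b, doms, iters=200):
    """Multiplicative Schwarz relaxation for A x = b: sweep over the
    overlapping subdomains, solving the restricted residual equation on
    each and refreshing the global residual in between. Convergent for
    symmetric positive definite A."""
    x = np.zeros_like(b)
    for _ in range(iters):
        r = b - A @ x
        for idx in doms:                          # one local solve per subdomain
            Aii = A[np.ix_(idx, idx)]
            x[idx] += np.linalg.solve(Aii, r[idx])
            r = b - A @ x                         # multiplicative variant: refresh residual
    return x
```

The paper's first preconditioner augments each local solve with a correction from the algebraic equations; the second replaces that correction with one built from the Schur complement, decoupling the iteration count from both the subdomain count and the step size.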
We propose a new distributed heuristic for approximating the Pareto set of bi-objective optimization problems. Our approach is at the crossroads of parallel cooperative computation, objective space decomposition, and adaptive search. Given a number of computing nodes, we self-coordinate them locally, in order to cooperatively search different regions of the Pareto front. This offers a trade-off between a fully independent approach, where each node would operate independently of the others, and a fully centralized approach, where a global knowledge of the entire population is required at every step. More specifically, the population of solutions is structured and mapped onto computing nodes. As local information, every node uses only the positions of its neighbors in the objective space and evolves its local solution based on what we term a 'localized fitness function'. This has the effect of making the distributed search evolve, over all nodes, toward a high-quality approximation set with minimal communication. We deploy our distributed algorithm on a computer cluster of hundreds of cores and study its properties and performance on ρMNK-landscapes. Through extensive large-scale experiments, our approach is shown to be very effective in terms of approximation quality, computational time, and scalability. (C) 2014 Elsevier B.V. All rights reserved.
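The objective-space decomposition underlying this family of methods can be sketched with plain weighted-sum scalarization: each node gets its own weight, so its "direction" covers a different region of the front. This is an illustrative stand-in only; the paper's localized fitness additionally uses the neighbors' positions in objective space, which this sketch omits:

```python
def decompose_search(candidates, k):
    """Objective-space decomposition sketch for a bi-objective problem
    (minimization): node i keeps the candidate that is best under its own
    scalarized fitness w_i*f1 + (1 - w_i)*f2, with weights spread evenly
    over [0, 1]. Run independently per node, this yields an archive of
    solutions spread along the Pareto front."""
    archive = []
    for i in range(k):
        w = i / (k - 1)  # each node's private search direction
        best = min(candidates, key=lambda f: w * f[0] + (1 - w) * f[1])
        archive.append(best)
    return archive
```

Making each node's fitness depend also on where its neighbors sit, as the paper does, is what keeps the nodes from collapsing onto the same region without requiring any global view of the population.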