ISBN (print): 9798350320107
As the availability of SAR images continues to grow, efficient coregistration of massive SAR image sets presents an increasing challenge. Traditional serial coregistration methods impose an unbearable time overhead. To reduce this overhead and make full use of computing resources, a parallel coregistration strategy based on Hadoop is proposed for SAR images. The Hadoop Distributed File System (HDFS) is used to store SAR image data in chunks, and Hadoop's distributed computing framework, MapReduce, is used to realize distributed parallel processing of SAR images. Two distributed parallel coregistration methods are presented with the proposed parallel strategy: one based on the maximum correlation method and the other on the DEM-assisted coregistration method. These methods are evaluated through coregistration experiments on the same dataset and verified by comparing the coregistration results and processing time.
ISBN (print): 9781467387767
Analysis of the existing techniques for approximate query processing of Big Data, based on sampling, histograms and wavelets, demonstrates that wavelet-based methods can be effectively utilized for OLAP purposes due to their advantages in handling multidimensional data and querying single cells as well as aggregate values from a data warehouse. At the same time, the current wavelet-based methods for approximate query processing have certain deficiencies that make them difficult to implement in practice. In particular, most of the techniques struggle with arbitrarily sized data, either imposing a restriction that a dimension length be a multiple of a power of two, or complicating the decomposition algorithms, which increases construction time and makes error estimation difficult. Also, there is a lack of wavelet-based methods for approximate processing with a bounded error and a confidence interval for both single and aggregate values. Our contribution in this paper is the introduction of a new wavelet method for approximate query processing which handles arbitrarily sized multidimensional datasets with minor extra calculations and provides a bounded error for the reconstruction of single or aggregate values. It is demonstrated that the new method allows evaluating a confidence interval of the query error for a given compression ratio of a data warehouse, or performing the inverse task, i.e. evaluating the data warehouse compression ratio required for a given allowable error. The introduced method was applied and verified over real epidemiological datasets to support research in finding correlations and patterns in disease spread and correlations among clinical signs. It was demonstrated that the accuracy of the estimated error is acceptable for retrieving single and aggregate values, and that the query-processing time advantage depends on the compression ratio and the volume of the processed data.
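The core operation behind such wavelet synopses can be illustrated with one level of the 1-D Haar transform: pairwise averages form a coarser signal and pairwise differences form detail coefficients, and dropping small details yields the compressed, approximately reconstructable representation. This is a minimal sketch of the standard Haar step, not the paper's arbitrary-length multidimensional method; names are illustrative.

```python
def haar_step(values):
    """One level of the 1-D Haar transform (illustrative sketch).

    Returns (averages, details); the original even-length signal can be
    reconstructed exactly as values[2i] = avg[i] + det[i],
    values[2i+1] = avg[i] - det[i]. Requires an even-length input.
    """
    averages = [(values[i] + values[i + 1]) / 2 for i in range(0, len(values), 2)]
    details = [(values[i] - values[i + 1]) / 2 for i in range(0, len(values), 2)]
    return averages, details

avg, det = haar_step([8, 4, 6, 2])
# avg == [6.0, 4.0], det == [2.0, 2.0]
```

Discarding the smallest detail coefficients (here both equal 2.0) is what trades reconstruction error for compression ratio in wavelet-based approximate query processing.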
ISBN (print): 0769507689
With increasing bandwidth available to clients and the number of users growing at an exponential rate, the Web server can become a performance bottleneck. This paper considers the parallelization of requests to Web pages, each of which is composed of a number of embedded objects. The performance of systems in which the embedded objects are distributed across multiple backend servers is analyzed. Parallelization of Web requests gives rise to a significant improvement in performance. Replication of servers is observed to be beneficial, especially when the embedded objects in a Web page are not evenly distributed across servers. Load balancing policies used by the dispatcher of Web page requests are investigated. A simple round-robin policy for backend server selection gives better performance than the default random policy used by the Apache server.
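The round-robin dispatch policy the abstract favors can be sketched in a few lines. This is a hypothetical illustration of the policy itself, not the authors' dispatcher; the class and server names are made up.

```python
from itertools import cycle


class RoundRobinDispatcher:
    """Selects backend servers in a fixed cyclic order (illustrative sketch)."""

    def __init__(self, servers):
        self._ring = cycle(servers)  # endless iterator over the server list

    def pick(self):
        """Return the next backend in the rotation."""
        return next(self._ring)


dispatcher = RoundRobinDispatcher(["backend-1", "backend-2", "backend-3"])
picks = [dispatcher.pick() for _ in range(6)]
# each backend receives exactly two of the six requests
```

Unlike random selection, this deterministic rotation spreads consecutive requests evenly, which matters most when per-object work is uneven across servers.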
Grid computing presents a new trend to distributed computing and Internet applications, which can construct a virtual single image of heterogeneous resources, provide uniform application interface and integrate widesp...
ISBN (print): 9781479928934
The distributed resampling algorithm with non-proportional allocation (RNA) [1] is key to implementing particle filtering applications on parallel computer systems. We extend the original work by Bolic et al. by introducing an adaptive RNA (ARNA) algorithm, improving RNA by dynamically adjusting the particle-exchange ratio and randomizing the process ring topology. This improves the runtime performance of ARNA by about 9% over RNA with 10% particle exchange. ARNA also significantly improves the speed at which information is shared between processing elements, leading to about 20-fold faster convergence. The ARNA algorithm requires only a few modifications to the original RNA, and is hence easy to implement.
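The two ARNA ingredients named above, a tunable particle-exchange ratio and a randomized ring topology, can be sketched as a single exchange step. This is an illustrative simplification under assumed data structures (one particle list per processing element), not the authors' implementation.

```python
import random


def ring_exchange(particles_per_pe, exchange_ratio, rng):
    """One ARNA-style exchange step (illustrative sketch).

    Shuffles the order of processing elements to form a randomized ring,
    then each PE passes a fraction `exchange_ratio` of its particles to
    its successor in that ring. Total particle count is preserved.
    """
    n_pe = len(particles_per_pe)
    order = list(range(n_pe))
    rng.shuffle(order)  # randomized ring topology, re-drawn every step

    # Each PE sets aside its outgoing share before any transfers happen.
    outgoing = []
    for pe in range(n_pe):
        k = int(len(particles_per_pe[pe]) * exchange_ratio)
        outgoing.append(particles_per_pe[pe][:k])
        particles_per_pe[pe] = particles_per_pe[pe][k:]

    # Deliver each PE's share to its successor in the shuffled ring.
    for i, pe in enumerate(order):
        successor = order[(i + 1) % n_pe]
        particles_per_pe[successor].extend(outgoing[pe])
    return particles_per_pe
```

Re-drawing the ring every step is what lets information from any one processing element reach all others in few iterations, the source of the faster convergence the abstract reports.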
ISBN (print): 9781538637906
Huffman encoding provides a simple approach for lossless compression of sequential data. The length of encoded symbols varies, and these symbols are tightly packed in the compressed data; thus, Huffman decoding is not easily parallelisable. This is unfortunate, since it is desirable to have a parallel algorithm that scales with the increasing core count of modern systems. This paper presents a parallel approach for decoding Huffman codes which works by decoding from every location in the bit sequence and then concurrently combining the results into the uncompressed sequence. Although it requires more operations than serial approaches, the presented approach produces results marginally faster, on sufficiently large data sets, than a simple serial implementation. This is achieved by using the large number of threads available on modern GPUs. A variety of implementations, primarily in OpenCL, are presented to demonstrate the scaling of this algorithm on CPU and GPU hardware in response to the number of cores available. As devices with more cores become available, the importance of such an algorithm will increase.
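The decode-from-every-offset idea can be sketched serially: the embarrassingly parallel step computes, for every bit offset, the symbol and next offset that decoding would produce if a codeword started there; the combining step then follows that jump chain from the true start. This is an assumed simplification for illustration (the paper's GPU combining step is concurrent, and the code table here is invented).

```python
def parallel_huffman_decode(bits, code_table):
    """Sketch of decoding from every bit offset (illustrative, not the paper's code).

    bits: string of '0'/'1' characters; code_table: symbol -> prefix-free codeword.
    """
    inv = {code: sym for sym, code in code_table.items()}
    max_len = max(len(c) for c in inv)

    # Data-parallel step: independently of all others, each offset records
    # (decoded symbol, offset where the next codeword would begin).
    step = {}
    for start in range(len(bits)):
        for length in range(1, max_len + 1):
            chunk = bits[start:start + length]
            if chunk in inv:
                step[start] = (inv[chunk], start + length)
                break

    # Combining step: only the chain reachable from offset 0 is the real decode.
    out, pos = [], 0
    while pos in step:
        sym, pos = step[pos]
        out.append(sym)
    return "".join(out)


codes = {"a": "0", "b": "10", "c": "11"}  # hypothetical prefix-free table
decoded = parallel_huffman_decode("0100110", codes)  # -> "abaca"
```

The per-offset work is fully independent, which is what maps onto thousands of GPU threads; the extra decodes at offsets the chain never visits are the "more operations than serial" cost the abstract mentions.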
ISBN (print): 9780819489326
A unified framework for integrating PC-cluster-based parallel rendering with distributed virtual environments (DVEs) is presented in this paper. While various scene graphs have been proposed in DVEs, it is difficult to enable collaboration between different scene graphs. This paper proposes a technique that equips non-distributed scene graphs with the capability of object and event distribution. With the growth of graphics data, DVEs require more powerful rendering ability, but general scene graphs are inefficient at parallel rendering. The paper therefore also proposes a technique to connect a DVE to a PC-cluster-based parallel rendering environment. A distributed multi-player video game is developed to show the interaction of different scene graphs and the parallel rendering performance on a large tiled display wall.
ISBN (print): 9781424456536
This work aims at improving the quality of the side information in distributed video coding. Based on genetic algorithms, our proposed technique combines several frames, interpolated using previously developed methods, in a fusion-based approach. Simulation results show a significant improvement in side-information quality compared to other interpolation techniques available in the literature, which greatly improves the rate-distortion performance of a distributed video codec; the gain in PSNR can reach 6 dB.
ISBN (print): 9781538635346
Synchronous Dataflow (SDF), a popular subset of the dataflow programming paradigm, gives a well-structured formalism to capture signal and stream processing applications. With data-parallel architectures becoming ubiquitous, several frameworks leverage the SDF formalism to map applications to parallel architectures. However, these frameworks assume that the Synchronous Dataflow graphs (SDFGs) under consideration are already data-parallel. In this paper, we address the lack of mechanisms for detecting whether an SDFG can be executed in a data-parallel fashion. We develop necessary and sufficient conditions that an SDFG must satisfy for data-parallel execution. In addition, we develop methods that detect and transform SDFGs that cannot be determined to be data-parallel through visual graph inspection alone. We report on a prototype implementation of the developed conditions as a compiler pass in the PREESM framework and test it against several useful applications expressed as SDFGs.
ISBN (print): 9780769534435
This paper presents the design and evaluation of a stream processing implementation of the Integral Image algorithm. The Integral Image is a key component of many image processing algorithms, in particular Haar-like feature based systems. Modern GPUs provide a large number of processors with a peak floating-point performance that is significantly higher than that of current general-purpose CPUs. This results in a significant performance improvement when the Integral Image calculation for large input images is offloaded onto the GPU of the system.
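For reference, the integral image (summed-area table) stores at each pixel the sum of all pixels above and to its left, inclusive. Below is a minimal serial sketch; the GPU stream-processing version described in the abstract would instead compute it as parallel prefix sums over rows and then over columns.

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[j][i] for j <= y, i <= x.

    Serial sketch over a list-of-lists image; assumes a non-empty
    rectangular input.
    """
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]  # running sum of the current row
            ii[y][x] = row_sum + (ii[y - 1][x] if y > 0 else 0)
    return ii


ii = integral_image([[1, 2], [3, 4]])
# ii == [[1, 3], [4, 10]]
```

Once built, the sum over any rectangle costs four lookups, which is why Haar-like feature evaluation relies on this table.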