Analysis of biomedical images requires attention to image features that represent a small fraction of the total image size. A rapid method for eliminating unnecessary detail, analogous to pre-attentive processing in biological vision, allows computational resources to be applied where they are most needed for higher-level analysis. In this report we describe a method for bottom-up merging of pixels into larger units based on flexible saliency criteria, similar to the structured adaptive grid methods used for solving differential equations on physical domains. While creating a multiscale quadtree representation of the image, a saliency test is applied to prune the tree and eliminate unneeded detail, resulting in an image with adaptive resolution. This method may be used as a first step for image segmentation and analysis and is inherently parallel, enabling implementation on programmable hardware or distributed memory clusters.
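As an illustration of the pruning idea, the sketch below builds a quadtree over a square image and stops subdividing wherever a saliency test fails, yielding the adaptive-resolution representation the abstract describes. The variance-based saliency test and the NumPy representation are illustrative assumptions, not the paper's criteria, and recursive subdivision with early stopping is used as the top-down equivalent of bottom-up merging.

```python
import numpy as np

def build_pruned_quadtree(img, x, y, size, saliency_threshold=25.0):
    """Recursively subdivide a square image block, keeping detail only where
    a saliency test fires. Returns a nested dict representing the pruned
    quadtree; a leaf stores the mean intensity of its block."""
    block = img[y:y + size, x:x + size]
    # Saliency test: intensity variance here, but the paper allows flexible criteria.
    if size == 1 or block.var() < saliency_threshold:
        return {"x": x, "y": y, "size": size, "value": float(block.mean())}
    half = size // 2
    return {"x": x, "y": y, "size": size, "children": [
        build_pruned_quadtree(img, x,        y,        half, saliency_threshold),
        build_pruned_quadtree(img, x + half, y,        half, saliency_threshold),
        build_pruned_quadtree(img, x,        y + half, half, saliency_threshold),
        build_pruned_quadtree(img, x + half, y + half, half, saliency_threshold),
    ]}

# Usage: the image side length must be a power of two for this simple sketch.
img = np.random.rand(256, 256) * 255
tree = build_pruned_quadtree(img, 0, 0, 256)
```

Because each child subtree is built independently, the four recursive calls can be dispatched to separate workers, which is the parallelism the abstract alludes to.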
This paper addresses advanced Quasi-Monte Carlo methods for realistic image synthesis. We propose and consider a new Quasi-Monte Carlo solution of the rendering equation based on a uniform quadrangle separation of the integration domain. The hemispherical integration domain is uniformly separated into 12 equal-size, symmetric sub-domains. Each sub-domain represents a solid angle, subtended by a spherical quadrangle that is very similar in form to the plane unit square. Every spherical quadrangle has fixed vertices and computable parameters. A bijection from the unit square to the spherical quadrangle is found, and a symmetric sampling scheme is applied to generate sampling points uniformly distributed over the hemispherical integration domain. We then apply the stratified Quasi-Monte Carlo integration method to solve the rendering equation. An estimate of the rate of convergence is obtained. We prove the superiority of the proposed Quasi-Monte Carlo solution of the rendering equation for arbitrary dimension of the sampling points. The uniform separation improves the convergence of the plain (crude) Quasi-Monte Carlo method.
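The paper's exact 12-quadrangle partition and its unit-square bijection are not reproduced here; as a hedged sketch of the stratified sampling idea only, the code below stratifies the unit square, places one low-discrepancy (Halton) point per stratum, and maps it to the hemisphere through a standard cylindrical equal-area parameterization rather than the paper's spherical-quadrangle bijection.

```python
import numpy as np

def halton(i, base):
    """Radical inverse of integer i in the given base (Halton sequence)."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def stratified_hemisphere_samples(n_strata_u, n_strata_v):
    """One low-discrepancy sample per stratum of the unit square, pushed
    through an area-preserving map onto the unit hemisphere."""
    dirs = []
    k = 0
    for i in range(n_strata_u):
        for j in range(n_strata_v):
            # Low-discrepancy offset inside the stratum (Halton in bases 2 and 3).
            u = (i + halton(k + 1, 2)) / n_strata_u
            v = (j + halton(k + 1, 3)) / n_strata_v
            k += 1
            # Equal-area map: uniform solid-angle density over the hemisphere.
            z = u                                   # cos(theta), uniform in [0, 1)
            r = np.sqrt(max(0.0, 1.0 - z * z))
            phi = 2.0 * np.pi * v
            dirs.append((r * np.cos(phi), r * np.sin(phi), z))
    return np.array(dirs)

samples = stratified_hemisphere_samples(4, 3)   # 12 strata, echoing the 12 sub-domains
```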
We develop a novel approach for computing the circle Hough transform entirely on graphics hardware (GPU). A primary role is assigned to the vertex processors and the rasterizer, overshadowing the traditional foreground of the pixel processors and enhancing parallel processing. Resources such as the vertex cache and blending units are also studied, with our set of optimizations leading to peak gain factors exceeding 358x over a typical CPU execution. Software optimizations, such as precomputed tables and gradient information, and hardware improvements, such as hyper-threading and multicores, are explored on CPUs as well. Overall, the GPU exhibits better scalability and much greater parallel performance, making it a solid alternative to optimized methods running on emerging multicore architectures for computing the classical circle Hough transform. (c) 2008 Elsevier Inc. All rights reserved.
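For reference, a minimal CPU formulation of the classical single-radius circle Hough transform is sketched below, including the precomputed offset table mentioned among the CPU optimizations; the GPU mapping onto vertex processors and the rasterizer is the paper's contribution and is not reproduced here.

```python
import numpy as np

def circle_hough(edges, radius, n_angles=360):
    """Classical circle Hough transform for a single radius: each edge pixel
    votes for every candidate centre lying on a circle of that radius
    around it. Peaks in the returned accumulator mark likely centres."""
    h, w = edges.shape
    acc = np.zeros((h, w), dtype=np.int32)
    # Precomputed offset table: one of the CPU optimizations the paper studies.
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    dx = np.round(radius * np.cos(angles)).astype(int)
    dy = np.round(radius * np.sin(angles)).astype(int)
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        cx, cy = x + dx, y + dy
        ok = (cx >= 0) & (cx < w) & (cy >= 0) & (cy < h)
        np.add.at(acc, (cy[ok], cx[ok]), 1)   # accumulate votes in bounds
    return acc
```

The voting loop is embarrassingly parallel across edge pixels, which is what both the multicore CPU and GPU versions exploit.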
Parallelization of operations is of utmost importance for the efficient implementation of public key cryptography algorithms. Starting with a classification of parallelization methods at different abstraction levels of public key algorithms, we propose a novel memory architecture for elliptic curve implementations with multiple modular multiplier units. This architecture is well suited to different point addition and doubling algorithms over GF(p) implemented on FPGAs. It allows the execution time to scale with the number of modular multipliers and exhibits almost no overhead beyond the bare runtime of the multipliers. The advantages of this distributed memory architecture are demonstrated by means of two different point addition and doubling algorithms.
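As a software analogue of the idea only (the actual design targets FPGA modular multiplier units with a distributed memory, and the two-level schedule and names below are hypothetical), the sketch shows how the independent modular multiplications inside a point addition/doubling formula can be grouped into dependency levels and issued to multiple multiplier units.

```python
from concurrent.futures import ThreadPoolExecutor

P = 0xFFFFFFFF00000001000000000000000000000000FFFFFFFFFFFFFFFFFFFFFFFF  # NIST P-256 prime

def modmul(a, b):
    """Stand-in for one hardware modular multiplier unit."""
    return (a * b) % P

def run_schedule(levels, units):
    """Execute a multiplication schedule level by level: products within a
    level are mutually independent, so with `units` multipliers a level of
    m products costs ceil(m / units) multiplier passes."""
    results = {}
    with ThreadPoolExecutor(max_workers=units) as pool:
        for level in levels:
            futures = {name: pool.submit(modmul,
                                         results.get(a, a),   # name of an earlier result, or a constant
                                         results.get(b, b))
                       for name, a, b in level}
            for name, fut in futures.items():
                results[name] = fut.result()
    return results

# Hypothetical 2-level schedule: T1..T3 are independent; T4 depends on T1 and T2.
X, Y = 12345, 67890
out = run_schedule([
    [("T1", X, X), ("T2", Y, Y), ("T3", X, Y)],   # level 1: three parallel products
    [("T4", "T1", "T2")],                          # level 2: consumes level-1 results
], units=3)
```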
ISBN:
(print) 9781424414833
This paper addresses the problem of unmixing hyperspectral images contaminated by additive colored noise. Each pixel of the image is modeled as a linear combination of pure materials (denoted as endmembers) corrupted by an additive zero-mean Gaussian noise sequence with unknown covariance matrix. Appropriate priors are defined to ensure the positivity and additivity constraints on the mixture coefficients (denoted as abundances). These coefficients, as well as the noise covariance matrix, are then estimated from their joint posterior distribution. A Gibbs sampling strategy generates abundances and noise covariance matrices distributed according to the joint posterior. These samples are then averaged to obtain minimum mean square error estimates.
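A toy sketch of the estimation principle, not the paper's sampler: the simplified Metropolis chain below enforces the positivity and sum-to-one constraints through a softmax parameterization, assumes white noise with known variance in place of the unknown colored covariance (which would receive its own Gibbs step), and averages post-burn-in draws for the MMSE estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

def mmse_unmix(y, M, sigma2=1e-3, n_iter=6000, burn_in=1000, step=0.1):
    """Toy Metropolis sketch for one pixel: y = M @ a + noise, with the
    abundances `a` kept positive and summing to one via softmax."""
    R = M.shape[1]
    softmax = lambda w: np.exp(w - w.max()) / np.exp(w - w.max()).sum()
    log_lik = lambda a: -np.sum((y - M @ a) ** 2) / (2.0 * sigma2)
    w = np.zeros(R)
    a, ll, draws = softmax(w), log_lik(softmax(w)), []
    for it in range(n_iter):
        w_prop = w + step * rng.standard_normal(R)   # symmetric random walk
        a_prop = softmax(w_prop)
        ll_prop = log_lik(a_prop)
        if np.log(rng.random()) < ll_prop - ll:       # Metropolis accept/reject
            w, a, ll = w_prop, a_prop, ll_prop
        if it >= burn_in:
            draws.append(a)
    return np.mean(draws, axis=0)                     # MMSE estimate by sample averaging

# Toy check: 3 endmembers, 50 spectral bands.
M = rng.random((50, 3))
y = M @ np.array([0.6, 0.3, 0.1]) + np.sqrt(1e-3) * rng.standard_normal(50)
a_hat = mmse_unmix(y, M)
```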
ISBN:
(print) 9780769534039
An automatic task-to-processor mapping algorithm is analyzed for parallel systems that run over loosely coupled distributed architectures. This research is based on the TTIGHa model, which allows predicting the performance of parallel applications running over heterogeneous architectures. In particular, the heterogeneity of both processors and communications is taken into consideration. Building on the results obtained with the TTIGHa model, the MATEHa algorithm for task-to-processor assignment is presented and its implementation is analyzed. Experimental results on subsets of two-cluster heterogeneous machines are presented, comparing the mapping scheme produced by MATEHa with two previous mapping methods, MATE and HEFT. Finally, the algorithm's robustness is assessed by varying model parameters: inter-process communication times and processing times.
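MATEHa itself builds on the TTIGHa model and is not reproduced here; as a point of comparison, the sketch below implements the well-known HEFT baseline named in the abstract, in a simplified form without insertion-based scheduling, with heterogeneous per-processor execution times and inter-processor communication costs.

```python
from collections import defaultdict

def upward_rank(task, succ, comp_cost, comm_cost, memo):
    """Average-cost upward rank used by HEFT to prioritize tasks."""
    if task in memo:
        return memo[task]
    avg_comp = sum(comp_cost[task]) / len(comp_cost[task])
    memo[task] = avg_comp + max(
        (comm_cost.get((task, s), 0) + upward_rank(s, succ, comp_cost, comm_cost, memo)
         for s in succ[task]), default=0)
    return memo[task]

def heft_like_map(tasks, edges, comp_cost, comm_cost, n_procs):
    """Greedy HEFT-style mapping: visit tasks in decreasing upward rank and
    assign each to the processor giving the earliest finish time.
    comp_cost[t][p] = execution time of task t on processor p;
    comm_cost[(t, s)] = transfer time if t and s land on different processors."""
    succ, pred = defaultdict(list), defaultdict(list)
    for t, s in edges:
        succ[t].append(s)
        pred[s].append(t)
    memo = {}
    order = sorted(tasks, key=lambda t: -upward_rank(t, succ, comp_cost, comm_cost, memo))
    proc_free = [0.0] * n_procs
    finish, placed = {}, {}
    for t in order:
        best = None
        for p in range(n_procs):
            ready = max((finish[u] + (comm_cost.get((u, t), 0) if placed[u] != p else 0)
                         for u in pred[t]), default=0.0)
            eft = max(ready, proc_free[p]) + comp_cost[t][p]
            if best is None or eft < best[0]:
                best = (eft, p)
        finish[t], placed[t] = best
        proc_free[best[1]] = best[0]
    return placed, finish

# Hypothetical 3-task graph on 2 heterogeneous processors.
placement, finish_times = heft_like_map(
    tasks=["A", "B", "C"], edges=[("A", "B"), ("A", "C")],
    comp_cost={"A": [2, 3], "B": [4, 2], "C": [3, 3]},
    comm_cost={("A", "B"): 1, ("A", "C"): 2}, n_procs=2)
```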
ISBN:
(print) 1601320841
In recent years, Wireless Sensor Networks technology has matured. The continuous production of data by a wide set of versatile applications drives researchers to consider different methods of storing and retrieving data that can provide an efficient abstraction for persisting the data generated on the sensor node. This paper focuses on the problem of local storage in sensor nodes using a flash memory chip. We propose a local storage management system and the Sensor Node File System (SENFIS), a file system for sensor nodes of Mica-family motes developed over TinyOS. Finally, we provide an evaluation of the storage management system and SENFIS in terms of code size, memory footprint, execution time, and flash energy consumption.
Three-dimensional (3D) geophysical imaging is now receiving considerable attention for electrical conductivity mapping of potential offshore oil and gas reservoirs. The imaging technology employs controlled source electromagnetic (CSEM) and magnetotelluric (MT) fields and treats geological media exhibiting transverse anisotropy. Moreover, when combined with established seismic methods, direct imaging of reservoir fluids is possible. Because of the size of the 3D conductivity imaging problem, strategies exploiting computational parallelism and optimal meshing are required. The algorithm thus developed has been shown to scale to tens of thousands of processors. In one imaging experiment, 32,768 tasks/processors on the IBM Watson Research Blue Gene/L supercomputer were successfully utilized. Over a 24-hour period we were able to image a large-scale field data set that had previously required over four months of processing time on distributed clusters based on Intel or AMD processors utilizing 1024 tasks on an InfiniBand fabric. Electrical conductivity imaging using massively parallel computational resources produces results that cannot be obtained otherwise, within the timeframes required for practical exploration problems.
ISBN:
(print) 9781424429417
The proceedings contain 424 papers. The topics discussed include: optimal waveform design for cognitive radar; frames and a vector-valued ambiguity function; novel waveform and processing techniques for monostatic and bistatic radar; analysis of random radar networks; threshold based distributed detection that achieves full diversity in wireless sensor networks; on the sum capacity of MIMO interference channel in the low interference regime; a parallel decoding algorithm of LDPC codes using CUDA; hybrid Cramer-Rao bound for moving array; nonparametric and sparse signal representations in array processing via iterative adaptive approaches; passive beamforming enhancements in relation to active-passive data fusion; optimal detection in MIMO Rayleigh fast fading channels with imperfect channel estimation; shortest paths in spaces of IIR-filters; new AM-FM analysis methods for retinal image characterization; and optimizing addition for sub-threshold logic.
A comparison study is presented of two methods for MRI reconstruction using super-resolution techniques that combine multiple multi-slice stacks into a single high-resolution 3-D image volume. Sampling configurations are compared involving stacks with parallel orientations at different sub-pixel offsets, and stacks with regularly distributed slice orientations. Results from experiments with simulated and real MRI data suggest that regularly distributed stack orientations perform better than parallel stacks.
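As a hedged sketch of the fusion principle (not either of the compared methods), the code below models each acquired stack as a thick-slice average of the unknown high-resolution volume along its slice direction and fuses the stacks by least-squares gradient descent; slice profiles, registration, and regularization are all omitted.

```python
import numpy as np

def make_ops(axis, factor):
    """Forward operator A (average `factor` thin slices into one thick slice
    along `axis`) and its adjoint At (replicate and rescale)."""
    def A(x):
        s = np.moveaxis(x, axis, 0)
        lo = s.reshape(s.shape[0] // factor, factor, *s.shape[1:]).mean(axis=1)
        return np.moveaxis(lo, 0, axis)
    def At(r):
        s = np.moveaxis(r, axis, 0)
        return np.moveaxis(np.repeat(s, factor, axis=0) / factor, 0, axis)
    return A, At

def reconstruct_sr(stacks, ops, shape, n_iter=300, lr=0.5):
    """Gradient descent on sum_k ||A_k(x) - y_k||^2 over the HR volume x."""
    x = np.zeros(shape)
    for _ in range(n_iter):
        grad = sum(At(A(x) - y) for y, (A, At) in zip(stacks, ops))
        x -= lr * grad / len(stacks)
    return x

# Toy example: three orthogonal stack orientations, 4x slice thickness.
truth = np.random.rand(16, 16, 16)
ops = [make_ops(axis, 4) for axis in range(3)]
stacks = [A(truth) for A, _ in ops]
recon = reconstruct_sr(stacks, ops, truth.shape)
```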