Many computer vision methods rely on frame registration information obtained with algorithms such as the Kanade-Lucas-Tomasi (KLT) feature tracker, which is known for its excellent performance in that area. Various research groups have proposed methods to extend its performance, both in terms of execution time and stability. Recent research has shown that current graphics processing units (GPUs) are very efficient SIMD parallel processing architectures that can speed up the execution of the KLT algorithm by an order of magnitude compared with an ordinary CPU implementation, making it suitable for real-time applications on commodity hardware. Previous publications demonstrated the use of the GPU for the tracking and image processing parts of the KLT algorithm. One essential, but computationally demanding, step of the KLT algorithm is feature selection. It injects fresh feature points into the existing point set and thus enables the KLT to continuously track points throughout an image sequence. Such sequences may contain rapid movements or be of low quality; in these situations features diminish rapidly and the step must potentially be performed for every frame. The performance of the otherwise efficient GPU implementation then declines substantially, as this step includes a sorting operation on the sparse feature map, which is difficult to implement efficiently on the GPU. We use the KLT algorithm to calculate a stable, highly accurate frame registration from the feature point set, and we found that maintaining a well-distributed feature set through frequent injections of points is an essential requirement. In this paper we demonstrate an alternative feature reselection method that can be implemented efficiently on the GPU. It is a Monte-Carlo-based approximation of the original method and leads to very good tracking results at just a fraction of the computational cost.
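The abstract does not give the authors' implementation, but the idea of replacing the sorted cornerness selection with random candidate sampling can be sketched as follows; the function name, thresholds and the NumPy-based CPU formulation are illustrative assumptions rather than the paper's GPU code.

```python
# Hedged sketch (not the authors' code): Monte-Carlo feature reselection.
# Instead of sorting the full cornerness map, draw random candidate pixels and
# keep those whose min-eigenvalue response exceeds a threshold and that are not
# too close to already-tracked features. All parameter values are assumptions.
import numpy as np

def monte_carlo_reselect(cornerness, existing_pts, n_samples=2000,
                         threshold=0.01, min_dist=8, n_needed=100):
    h, w = cornerness.shape
    occupied = np.zeros((h, w), dtype=bool)
    for (y, x) in existing_pts:                      # mask neighbourhoods of tracked points
        occupied[max(0, y - min_dist):y + min_dist + 1,
                 max(0, x - min_dist):x + min_dist + 1] = True

    rng = np.random.default_rng()
    ys = rng.integers(0, h, n_samples)               # uniform random candidate locations
    xs = rng.integers(0, w, n_samples)
    new_pts = []
    for y, x in zip(ys, xs):
        if len(new_pts) >= n_needed:
            break
        if cornerness[y, x] >= threshold and not occupied[y, x]:
            new_pts.append((int(y), int(x)))
            occupied[max(0, y - min_dist):y + min_dist + 1,
                     max(0, x - min_dist):x + min_dist + 1] = True   # keep the set well distributed
    return new_pts
```

Because each candidate is tested independently against a threshold, such a scheme maps naturally onto data-parallel GPU execution, which is exactly the property the global sort in the original selection step lacks.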
This paper addresses advanced Monte Carlo methods for realistic image creation. It offers a new stratified approach to solving the rendering equation. We consider the numerical solution of the rendering equation by separation of the integration domain. The hemispherical integration domain is symmetrically separated into 16 parts. The first 8 sub-domains are equal-sized orthogonal spherical triangles; they are symmetric to each other and grouped with a common vertex around the normal vector to the surface. The hemispherical integration domain is completed with 8 more sub-domains of equal-sized spherical quadrangles, also symmetric to each other. All sub-domains have fixed vertices and computable parameters. Bijections of the unit square onto an orthogonal spherical triangle and onto a spherical quadrangle are derived and used to generate sampling points. The symmetric sampling scheme is then applied to generate sampling points distributed over the hemispherical integration domain. The necessary transformations are made and the stratified Monte Carlo estimator is presented. The rate of convergence is obtained, showing that the algorithm is of super-convergent type.
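The paper's bijections onto spherical triangles and quadrangles are not reproduced in the abstract; the sketch below only illustrates the general stratification idea, using a 4x4 partition of the unit square and the standard cosine-weighted square-to-hemisphere mapping as stand-ins (both are assumptions, not the authors' construction).

```python
# Illustrative stratified hemisphere sampler: 16 strata of the unit square,
# jittered samples per stratum, mapped to directions about the surface normal (z axis).
import numpy as np

def stratified_hemisphere_samples(n_per_stratum, rng=None):
    rng = rng or np.random.default_rng()
    samples = []
    for k in range(16):
        i, j = divmod(k, 4)
        # jittered points inside the k-th cell of the 4x4 partition of [0,1]^2
        u = (i + rng.random(n_per_stratum)) / 4.0
        v = (j + rng.random(n_per_stratum)) / 4.0
        # standard cosine-weighted square-to-hemisphere mapping (assumption)
        phi = 2.0 * np.pi * u
        cos_theta = np.sqrt(1.0 - v)
        sin_theta = np.sqrt(v)
        dirs = np.stack([sin_theta * np.cos(phi),
                         sin_theta * np.sin(phi),
                         cos_theta], axis=1)
        samples.append(dirs)
    return np.concatenate(samples, axis=0)
```

Averaging the integrand over per-stratum samples and then over the 16 strata yields a stratified estimator of the hemispherical integral; the paper replaces the naive 4x4 square partition used here with its fixed spherical triangles and quadrangles.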
With advances in remote-sensing technology, large volumes of data cannot be analyzed efficiently and rapidly, especially with the arrival of high-resolution images. The development of image-processing technology is an urgent and complex problem for computer and geo-science experts. It involves not only knowledge of remote sensing, but also of computing and networking. Remotely sensed images need to be processed rapidly and effectively in a distributed and parallel processing environment. Grid computing is a new form of distributed computing, providing an advanced computing and sharing model to solve large and computationally intensive problems. Following the basic principle of grid computing, we construct a distributed processing system for remotely sensed images. This paper focuses on the implementation of such a distributed computing and processing model based on the theory of grid computing. First, problems in the field of remotely sensed image processing are analyzed. Then, the distributed (and parallel) computing model design, based on grid computing, is applied. Finally, implementation methods with middleware technology are discussed in detail. From a test analysis of our system, ***, the whole image-processing system is evaluated, and the results show the feasibility of the model design and the efficiency of the remotely sensed image distributed and parallel processing system. (C) 2006 Elsevier Inc. All rights reserved.
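As a rough, purely local analogue of the tile-and-distribute design (an assumption; the paper's system dispatches jobs through grid middleware rather than local worker processes), the following sketch splits an image into tiles, processes them in parallel and reassembles the result.

```python
# Simplified local stand-in for the distributed processing model: tile, dispatch, reassemble.
import numpy as np
from multiprocessing import Pool

def process_tile(tile):
    # placeholder per-tile operation, e.g. a simple contrast stretch
    lo, hi = float(tile.min()), float(tile.max())
    return ((tile - lo) / max(hi - lo, 1e-9) * 255).astype(np.uint8)

def distributed_process(image, tiles_per_side=4, workers=4):
    h, w = image.shape
    th, tw = h // tiles_per_side, w // tiles_per_side
    tiles = [image[i*th:(i+1)*th, j*tw:(j+1)*tw]
             for i in range(tiles_per_side) for j in range(tiles_per_side)]
    with Pool(workers) as pool:                      # a grid deployment would submit these as jobs
        done = pool.map(process_tile, tiles)
    out = np.zeros((th*tiles_per_side, tw*tiles_per_side), dtype=np.uint8)
    for k, tile in enumerate(done):
        i, j = divmod(k, tiles_per_side)
        out[i*th:(i+1)*th, j*tw:(j+1)*tw] = tile
    return out
```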
This paper describes the first use of a Network Processing Unit (NPU) to perform hardware-based image composition in a distributed rendering system. The image composition step is a notorious bottleneck in a clustered rendering system. Furthermore, image compositing algorithms do not necessarily scale as data size and the number of nodes increase. Previous researchers have addressed the composition problem via software and/or custom-built hardware. We used the heterogeneous multicore computation architecture of the Intel IXP28XX NPU, a fully programmable commercial off-the-shelf (COTS) technology, to perform the image composition step. With this design, we have attained a nearly fourfold performance increase over traditional software-based compositing methods, achieving sustained compositing rates of 22-28 fps on a 1,024 x 1,024 image. This system is fully scalable with a negligible penalty in frame rate, is entirely COTS, and is flexible with regard to operating system, rendering software, graphics cards, and node architecture. The NPU-based compositor has the additional advantage of being a modular compositing component that is eminently suitable for integration into existing distributed software visualization packages.
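The abstract does not state which compositing operator the NPU executes; per-pixel depth comparison is the usual operator in sort-last rendering clusters, so the sketch below uses it as an assumption simply to show what the composition step computes.

```python
# Assumed depth-compositing step: merge the colour and depth buffers produced by
# each render node into one final image by keeping, per pixel, the closest fragment.
import numpy as np

def depth_composite(colors, depths):
    """colors: list of HxWx3 arrays; depths: list of HxW arrays (smaller = closer)."""
    final_color = colors[0].copy()
    final_depth = depths[0].copy()
    for c, d in zip(colors[1:], depths[1:]):
        closer = d < final_depth                 # pixels where this node's fragment wins
        final_color[closer] = c[closer]
        final_depth[closer] = d[closer]
    return final_color
```

In a cluster this pairwise merge is what gets pipelined or tree-reduced across nodes, which is why its throughput, rather than rendering itself, often limits frame rate.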
ISBN:
(print) 1424407281
We present a survey of modern parallel MATLAB techniques. We concentrate on the most promising and well-supported techniques, with an emphasis on SIP applications. Some of these methods require writing explicit code to perform inter-processor communication, while others hide the complexities of communication and computation behind higher-level programming interfaces. We cover each approach with special emphasis given to performance and productivity issues.
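Since the survey concerns MATLAB, the following Python fragment is only an analogy (an assumption for illustration) of the distinction it draws: explicit message passing versus a higher-level interface that hides communication behind a parallel map.

```python
# Analogy only: "high-level" parallelism where communication is hidden, versus an
# explicit scatter/compute/gather style that the programmer writes out by hand.
from concurrent.futures import ProcessPoolExecutor

def filter_block(block):
    return [x * 2 for x in block]            # stand-in for a per-block SIP kernel

def run_high_level(blocks):
    # communication and scheduling are hidden behind map()
    with ProcessPoolExecutor() as pool:
        return list(pool.map(filter_block, blocks))

# The explicit message-passing style (e.g. with mpi4py) would instead scatter the
# blocks across ranks, compute locally, and gather the results back on rank 0.
```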
ISBN:
(print) 9783540709510
In this paper, we show a simulation method for performance prediction of medical image processing chains, based on the real-time calculus of [9]. In particular, we focus on estimating the latency and throughput of a given image processing chain and its distribution over hardware resources. The architectural level of abstraction of the real-time calculus approach makes our method flexible, so that different design decisions can be studied using similar models. The choice of simulation rather than the usual algebraic analysis of real-time calculus equations gives us the possibility to include in our models a number of performance-relevant implementation details for which analytical estimates are not available.
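The paper's real-time-calculus model is not given in the abstract; as a minimal stand-in, the sketch below simulates frames flowing through a linear chain of stages with fixed service times and reports worst-case latency and sustained throughput (the stage times and frame period are assumptions).

```python
# Minimal simulation of a linear processing chain: each frame passes through the
# stages in order, waiting whenever a stage is still busy with the previous frame.
def simulate_chain(stage_times, n_frames=100, frame_period=0.04):
    stage_free = [0.0] * len(stage_times)     # time at which each stage becomes free
    latencies = []
    t_done = 0.0
    for k in range(n_frames):
        t = k * frame_period                  # frame arrival time
        for s, service in enumerate(stage_times):
            start = max(t, stage_free[s])     # wait for the stage to be available
            t = start + service
            stage_free[s] = t
        latencies.append(t - k * frame_period)
        t_done = t
    return max(latencies), n_frames / t_done  # worst-case latency, sustained fps

# Example: three stages of 10 ms, 25 ms and 15 ms driven at 25 fps.
print(simulate_chain([0.010, 0.025, 0.015]))
```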
Array operations are useful in a large number of important scientific codes, such as molecular dynamics, finite-element methods, climate modeling, etc. Providing an efficient data distribution for irregular problems is challenging. Multi-dimensional (MD) sparse array operations can be used in atmosphere and ocean sciences, image processing, etc., and have been extensively investigated. In our previous work, a data distribution scheme, Encoding-Decoding (ED), was proposed for two-dimensional (2D) sparse arrays. In this paper, ED is first extended to MD sparse arrays. Then, the performance of ED is compared with that of Send Followed Compress (SFC) and Compress Followed Send (CFS). Both theoretical analysis and experimental tests were conducted, showing that ED is superior to SFC and CFS for all of the evaluated criteria.
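The ED scheme itself is defined in the authors' earlier work and is not reproduced in the abstract; the fragment below merely illustrates the kind of compress-and-partition step that distribution schemes such as SFC, CFS and ED organize differently (the triple format and the block partition are assumptions, not the ED encoding).

```python
# Illustration only: compress a 2D sparse array into (row, col, value) triples and
# partition the nonzeros across P processors in a simple round-robin fashion.
import numpy as np

def compress_and_partition(dense, n_procs):
    rows, cols = np.nonzero(dense)
    triples = list(zip(rows.tolist(), cols.tolist(), dense[rows, cols].tolist()))
    return [triples[p::n_procs] for p in range(n_procs)]
```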
A tristate approach (TA) for image denoising is presented; it is aimed at the presence of pepper-and-salt noise. The novelty of this method is that it develops a new route in the field of image restor...
Parallel 3D reconstruction and partial retrieval from 2D image slices based on object contour structure are introduced. In this paper, 2D images are segmented into object contours. Related object contours on adjacent ...
ISBN:
(print) 9781586037383
The Tissue MicroArray (TMA) technique is assuming ever more importance. Digital image acquisition becomes fundamental to provide an automatic system for subsequent analysis. The accuracy of the results depends on the image resolution, which has to be very high in order to provide as many details as possible. Lossless formats are more suitable for carrying this information, but data file size becomes a critical factor researchers have to deal with. This affects not only storage methods but also computing times and performance. Pathologists and researchers who work with biological tissues, in particular with the TMA technique, need to consider a large number of case studies to formulate and validate their hypotheses. The importance of image sharing between different institutes worldwide, to increase the amount of interesting data to work with, is therefore clear. In this context, preserving the security of sensitive data is a fundamental issue. In most cases, copying patient data to places other than the original database is forbidden by the owner institutes. Storage, computing and security are key problems of the TMA methodology. In our system we tackle all these aspects using the EGEE (Enabling Grids for E-sciencE) Grid infrastructure. The Grid platform provides good storage, performance in image processing and safety of sensitive patient information: this architecture offers hundreds of Storage and Computing Elements and enables users to handle images without copying them to physical disks other than where they have been archived by the owner, giving back to end-users only the processed anonymous images. The efficiency of the TMA analysis process is obtained by implementing algorithms based on functions provided by the parallel image processing Genoa Library (PIMA(GE)(2) Lib). The acquisition of remotely distributed TMA images is performed using specialized I/O functions based on the Grid File Access Library (GFAL) API. In our opinion this approach may represent an important contribution