The main aim of this work is to show how GPGPUs can facilitate certain types of image processing methods. The software used in this paper detects a specific tissue component, the nuclei, in HE (hematoxylin-eosin) stained colon tissue sample images. Since pathologists work with large numbers of high-resolution images, which require significant storage space, one feasible way to achieve reasonable processing time is the use of GPGPUs. The CUDA software development kit was used to develop processing algorithms for NVIDIA GPUs. Our work focuses on how to achieve better performance with coalesced global memory access when working with three-channel RGB tissue images, and how to use the on-die shared memory efficiently.
The motion of living beings can be recorded with motion capture systems, and the recorded data can be used to animate virtual characters. Many methods are used by motion capture systems. This paper presents the development of a motion capture system for more than one performer using active optical markers. The performers wear homogeneous, illuminated markers attached to each relevant joint. Movements are captured by six high-speed cameras, operating at 60 fps, in order to determine the markers' positions in the cameras' images. The image processing is distributed among multiple computers. Fortunately, most of the image processing tasks are highly parallel, so the large parallel computational capacity of modern programmable GPUs can be exploited. The main computer calculates the markers' 3D positions after it receives the 2D data from image processing. Each marker needs to be identified, so the markers' 2D and 3D positions are tracked. A hierarchy can be defined between the points in the point cloud, which can be used to determine the bones' hierarchical transformation chain in the characters' skeleton. To achieve satisfying results, precise calibration is required.
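The 3D reconstruction step described above can be illustrated with a minimal two-camera midpoint triangulation. This is a sketch only: the camera centres and ray directions are hypothetical stand-ins, not the paper's six-camera calibration, and a real system would triangulate from all cameras that see a marker.

```python
def triangulate_midpoint(p1, d1, p2, d2):
    """Midpoint of the closest points between two viewing rays.

    p1, p2: camera centres; d1, d2: ray directions toward one marker
    (recovered from the marker's 2D image position and the camera
    calibration). All inputs are 3-element sequences.
    """
    def dot(u, v): return sum(a * b for a, b in zip(u, v))
    def sub(u, v): return [a - b for a, b in zip(u, v)]
    def move(p, t, d): return [a + t * b for a, b in zip(p, d)]

    w0 = sub(p1, p2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b          # zero only for parallel rays
    t = (b * e - c * d) / denom    # parameter of closest point on ray 1
    s = (a * e - b * d) / denom    # parameter of closest point on ray 2
    q1, q2 = move(p1, t, d1), move(p2, s, d2)
    return [(u + v) / 2 for u, v in zip(q1, q2)]
```

For a marker at (1, 1, 5) seen from cameras at the origin and at (2, 0, 0), `triangulate_midpoint([0,0,0], [1,1,5], [2,0,0], [-1,1,5])` recovers the marker position; with noisy rays the midpoint is a least-disagreement compromise, which is why precise calibration matters.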
The segmentation of tissue regions in high-resolution microscopy is a challenging problem due to both the size and the appearance of digitized pathology sections. The two-point correlation function (TPCF) has proved to be an effective feature for addressing the textural appearance of tissues. However, the calculation of TPCF functions is computationally burdensome and often intractable for the gigapixel images produced by slide-scanning devices in pathology applications. In this paper we present several approaches for accelerating the deterministic calculation of point correlation functions: using theory to reduce computation, parallelization on distributed systems, and parallelization on graphics processors. Previously we showed that the correlation-updating method of calculation offers an 8-35x speedup over frequency-domain methods and decouples efficient computation from the select scales of Fourier methods. In this paper, distributed computation on 64 compute nodes provides a further 42x speedup. Finally, parallelization on graphics processors (GPUs) results in an additional 11-16x speedup using an implementation capable of running on a single desktop machine.
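For reference, the quantity being accelerated can be sketched directly: the two-point correlation at displacement (dx, dy) is the fraction of pixel pairs that distance apart in which both pixels belong to the phase of interest. This naive version is only a definition-level sketch; the paper's correlation-updating method reuses counts between neighbouring displacements instead of rescanning the image.

```python
def tpcf(image, dx, dy):
    """Deterministic two-point correlation at displacement (dx, dy).

    image: 2D list of 0/1 phase labels; dx, dy assumed non-negative.
    Returns the fraction of valid pairs (p, p + (dx, dy)) where both
    pixels are foreground.
    """
    h, w = len(image), len(image[0])
    hits = total = 0
    for y in range(h - dy):
        for x in range(w - dx):
            total += 1
            if image[y][x] and image[y + dy][x + dx]:
                hits += 1
    return hits / total
```

At zero displacement `tpcf(image, 0, 0)` reduces to the foreground area fraction, which is a quick sanity check for any faster implementation.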
The main aim of this work is to show how GPGPUs can be used to speed up certain image processing methods. The algorithm explained in this paper is used to detect nuclei in HE (hematoxylin-eosin) stained colon tissue sample images, and includes Gaussian blurring, an RGB-HSV color space conversion, a fixed binarization, an ultimate erosion procedure, and a local maximum search. Since the images retrieved from the digital slides require significant storage space (up to a few hundred megapixels), the use of GPGPUs to speed up image processing operations is necessary in order to achieve reasonable processing time. The CUDA software development kit was used to develop algorithms for GPUs made by NVIDIA. This work focuses on how to achieve coalesced global memory access when working with three-channel RGB images, and how to use the on-die shared memory efficiently. The full test algorithm also included a linear connected-component labeling step, which ran on the CPU; with iterative optimization of the GPU code, we achieved a significant speedup in a well-defined test environment.
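Two of the pipeline stages, the RGB-HSV conversion and the fixed binarization, are per-pixel maps and can be sketched compactly. The saturation-based threshold below is a hypothetical illustration, not the paper's actual binarization rule.

```python
import colorsys

def binarize_hsv(rgb_pixels, sat_threshold=0.5):
    """RGB -> HSV conversion followed by a fixed binarization.

    rgb_pixels: list of (r, g, b) tuples with channels in 0..255.
    Returns 1 where saturation exceeds the (hypothetical) threshold.
    """
    out = []
    for r, g, b in rgb_pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        out.append(1 if s > sat_threshold else 0)
    return out
```

On the GPU the same map runs one thread per pixel; coalescing requires neighbouring threads to read neighbouring addresses, which for interleaved three-byte RGB data is typically arranged by staging loads through shared memory or by storing the channels in separate planes.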
Compared with traditional lumped hydrological models, a distributed hydrological model, which considers the effects of the uneven spatial distribution of the watershed land surface on the hydrological cycle, has a physical basis. Judging from the overall structure, there are two components of a distributed hydrological model: runoff generation and convergence. The establishment of the convergence network is the basis of calculating reservoir routing convergence. At present, convergence networks are constructed from DEMs, and the resolution of the DEM directly affects the result of convergence network construction; for now, due to confidentiality rules, it is very difficult to obtain high-resolution DEMs. With the development of GIS and RS, it has become more convenient to acquire data for distributed hydrological models, which have been developing rapidly. SRTM was completed by the National Aeronautics and Space Administration (NASA), the National Imagery and Mapping Agency (NIMA), and the German and Italian space agencies. The currently publicly available data resolution is 3 arc seconds (1/1200 of a degree of longitude and latitude), whose length is equivalent to 90 m. The publication of this data set is an important breakthrough in geographical science and its applications, and has important application value. However, because of the limitations of using radar technology to obtain surface elevation data, there are many problems in the original SRTM DEM data, such as missing data in many regions, many abnormal points, and so on. This article, which takes the Xue Ye reservoir area as an example, studies methods of processing SRTM data and obtains high-resolution DEM data for the region.
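One common first-pass repair for the SRTM problems mentioned above (voids and isolated abnormal points) is to fill a bad cell with the mean of its valid neighbours. The sketch below marks voids as `None` purely for illustration; real SRTM tiles use a numeric no-data value (and the article's actual processing method may differ).

```python
def fill_voids(dem, nodata=None):
    """Fill void DEM cells with the mean of their valid 8-neighbours.

    dem: 2D list of elevations; cells equal to `nodata` are voids.
    Cells whose neighbours are all voids are left untouched, so the
    pass can be repeated until large voids close from the rim inward.
    """
    h, w = len(dem), len(dem[0])
    out = [row[:] for row in dem]
    for y in range(h):
        for x in range(w):
            if dem[y][x] != nodata:
                continue
            vals = [dem[j][i]
                    for j in range(max(0, y - 1), min(h, y + 2))
                    for i in range(max(0, x - 1), min(w, x + 2))
                    if (j, i) != (y, x) and dem[j][i] != nodata]
            if vals:
                out[y][x] = sum(vals) / len(vals)
    return out
```

Neighbour averaging is only adequate for small voids; larger gaps are usually filled from an auxiliary DEM or by surface interpolation.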
We developed an automated on-site quick-analysis system for mosaic CCD data from Suprime-Cam, a wide-field camera mounted at the prime focus of the Subaru Telescope, Mauna Kea, Hawaii. The first version of the data-analysis system was constructed and started to operate in general observations. This system is a new observing-support function at the Subaru Telescope that provides the Subaru user community with automated on-site data evaluation, aiming to improve observers' productivity, especially in large imaging surveys. The new system assists data evaluation during observations by continuously monitoring the characteristics of every data frame. The evaluation results and the data frames processed by this system are also useful for reducing the data-processing time in a full analysis after an observation. The primary analysis functions implemented in the data-analysis system comprise automated real-time analysis for data evaluation and on-demand analysis, which is executed upon request and includes mosaicing analysis and flat-making analysis. In data evaluation, which is controlled by the organizing software, the database keeps track of the analysis histories as well as the evaluated values of data frames, including seeing and sky background levels; it also helps in the selection of frames for mosaicing and flat-making analysis. We examined the system performance and confirmed an improvement in the data-processing time by a factor of 9 with the aid of distributed parallel data processing and in-memory data processing, which makes the automated data evaluation effective.
In this paper an efficient method for estimating the velocity vector field is investigated. The method is based on a quasi-interpolant operator and involves a large amount of computation. The operations characterizing the computational scheme are ideal for parallel processing because they are local, regular, and repetitive. Therefore, the spatial parallelism of the process is studied in order to carry out the computation rapidly on distributed multiprocessor systems. The process has been shown to be synchronous, with good task balancing, and requires only a small amount of data transfer. (C) 2009 Elsevier Ltd. All rights reserved.
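The "local, regular, repetitive" pattern that makes such schemes easy to distribute can be illustrated with a simple stencil pass. The 3-point averaging kernel here is an illustrative stand-in, not the paper's quasi-interpolant operator.

```python
def stencil_pass(field, weights=(0.25, 0.5, 0.25)):
    """Apply a local, regular, repetitive 3-point stencil to a 1D field.

    Each output value depends only on a small fixed neighbourhood, so
    the field can be partitioned into segments processed on separate
    nodes, exchanging only a one-cell halo at each partition boundary.
    """
    wl, wc, wr = weights
    out = field[:]                       # boundary values kept as-is
    for i in range(1, len(field) - 1):
        out[i] = wl * field[i - 1] + wc * field[i] + wr * field[i + 1]
    return out
```

Because every interior point costs the same fixed number of operations, an even spatial split gives the good task balancing the paper reports, and only the halo cells need to be transferred.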
Purpose - Content-based image retrieval (CBIR) technologies offer many advantages over purely text-based image search. However, one of the drawbacks associated with CBIR is the increased computational cost arising from tasks such as image processing, feature extraction, image classification, and object detection and recognition. Consequently, CBIR systems have suffered from a lack of scalability, which has greatly hampered their adoption for real-world public and commercial image search. At the same time, paradigms for large-scale heterogeneous distributed computing such as grid computing, cloud computing, and utility-based computing are gaining traction as a way of providing more scalable and efficient solutions to large-scale computing tasks. Design/methodology/approach - This paper presents an approach in which a large distributed processing grid has been used to apply a range of CBIR methods to a substantial number of images. By massively distributing the required computational task across thousands of grid nodes, very high throughput has been achieved at relatively low overheads. Findings - This has allowed the analysis and indexing of about 25 million high-resolution images thus far, while using just two servers for storage and job submission. The CBIR system was developed by Imense Ltd and is based on automated analysis and recognition of image content using a semantic ontology. It features a range of image-processing and analysis modules, including image segmentation, region classification, scene analysis, object detection, and face recognition methods. Originality/value - In the case of content-based image analysis, the primary performance criterion is the overall throughput achieved by the system in terms of the number of images that can be processed over a given time frame, irrespective of the time taken to process any given image. As such, grid processing has great potential for massively parallel content-based image retrieval and other tasks with similar p
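The throughput argument rests on the fact that per-image analysis jobs are independent, so they can be mapped over any pool of workers. The sketch below shows only that map pattern with a trivial stand-in feature (mean pixel value), not Imense's analysis modules; a grid deployment runs the same map across thousands of nodes rather than local threads.

```python
from concurrent.futures import ThreadPoolExecutor

def extract_features(image):
    """Stand-in per-image analysis: mean pixel value. In the real
    system this slot would hold segmentation, region classification,
    object detection, face recognition, and so on."""
    return sum(image) / len(image)

def index_collection(images, workers=4):
    """Map independent per-image jobs over a worker pool.

    Because jobs share no state, throughput scales with the number of
    workers even though the latency of any single image is unchanged,
    which is exactly the performance criterion the paper emphasizes.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(extract_features, images))
```

For CPU-bound analysis a process pool or a batch scheduler replaces the thread pool, but the map-over-independent-jobs shape is the same.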
In this article, we look at early, recent, and state-of-the-art methods for registration of medical images using a range of high-performance computing (HPC) architectures, including symmetric multiprocessing (SMP), massively multiprocessing (MMP), and architectures with distributed memory (DM) and nonuniform memory access (NUMA). The article is designed to be self-sufficient: we take the time to define and describe concepts of interest, albeit briefly, in the context of image registration and HPC. We provide an overview of the registration problem and its main components in the section "Registration." Our main focus is on HPC-related aspects, and we highlight relevant issues as we explore the problem domain. This approach presents a fresher angle on the subject than the more general and classic reviews in the literature [1]-[3]. The sections "Multi-CPU Implementations" and "Accelerator Implementations" are organized from the perspective of high-performance and parallel computing, with the registration problem embodied. This is meant to equip the reader with the knowledge to map a registration problem to a given computing architecture.
We combine the optimization problem of two-dimensional infinite impulse response (IIR) recursive filters with the optimization methodology of hybrid multiagent particle swarm optimization (HMAPSO), and then apply the resulting optimized IIR filter in image processing to demonstrate the robustness of HMAPSO over other algorithms and its role in optimizing real-time situations. The design of the 2-D IIR filter is reduced to a constrained minimization problem whose robust solution is achieved by a novel optimal algorithm, HMAPSO. This algorithm integrates the deterministic solution of a multiagent system, the particle swarm optimization (PSO) algorithm, and the bee decision-making process. All agents search in parallel in an equally distributed lattice-like structure to save energy and computational time, as bees do in their hive-selection process. Thus, making use of deterministic search, multiagent PSO, and the bee process, HMAPSO realizes the purpose of optimization. Experimental results and the application of the designed filters to refocusing a defocused image show that the HMAPSO approach provides better results than previous design methods.
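The PSO component at the core of HMAPSO can be sketched as follows. This is plain textbook PSO on a generic objective; the inertia and acceleration constants are common default values, and the multi-agent lattice and bee decision step that distinguish HMAPSO are not modelled here.

```python
import random

def pso(objective, dim, bounds, n_particles=20, iters=200, seed=1):
    """Minimal particle swarm optimisation sketch (minimisation).

    Each particle keeps a velocity, its personal best, and is pulled
    toward both that and the swarm's global best each iteration.
    """
    rng = random.Random(seed)
    lo, hi = bounds
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia, cognitive, social
    pos = [[rng.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val < pbest_val[i]:                 # personal best update
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:                # global best update
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In the filter-design setting, `objective` would be the constrained error measure of the 2-D IIR frequency response; here `pso(lambda x: sum(t * t for t in x), 2, (-5.0, 5.0))` simply minimises a sphere function as a smoke test.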