Segmentation of anatomical structures, from modalities like computed tomography (CT), magnetic resonance imaging (MRI) and ultrasound, is a key enabling technology for medical applications such as diagnostics, planning and guidance. More efficient implementations are necessary, as most segmentation methods are computationally expensive, and the amount of medical imaging data is growing. The increased programmability of graphics processing units (GPUs) in recent years has enabled their use in several areas. GPUs can solve large data-parallel problems at a higher speed than the traditional CPU, while being more affordable and energy efficient than distributed systems. Furthermore, using a GPU enables concurrent visualization and interactive segmentation, where the user can help the algorithm achieve a satisfactory result. This review investigates the use of GPUs to accelerate medical image segmentation methods. A set of criteria for efficient use of GPUs is defined, and each segmentation method is rated accordingly. In addition, references to relevant GPU implementations and insight into GPU optimization are provided and discussed. The review concludes that most segmentation methods may benefit from GPU processing due to the methods' data-parallel structure and high thread count. However, factors such as synchronization, branch divergence and memory usage can limit the speedup. (C) 2014 The Authors. Published by Elsevier B.V.
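To make the data-parallel structure the review refers to concrete, here is a minimal sketch (not taken from the paper) of intensity thresholding over a synthetic volume: every voxel is labelled independently, which is exactly the pattern that maps one voxel to one GPU thread. The volume, intensity window and sizes are hypothetical.

```python
import numpy as np

def threshold_segment(volume, lower, upper):
    """Label each voxel independently; on a GPU every voxel could be one thread."""
    return (volume >= lower) & (volume <= upper)

# Hypothetical synthetic "CT" volume and intensity window
rng = np.random.default_rng(0)
volume = rng.normal(loc=100.0, scale=50.0, size=(64, 64, 64))
mask = threshold_segment(volume, lower=120.0, upper=300.0)
print(mask.sum(), "voxels labelled as foreground")
```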
ISBN:
(Print) 9781509021413
Graph algorithms are challenging to parallelize on manycore architectures due to complex data dependencies and irregular memory access. We consider the well-studied problem of coloring the vertices of a graph. In many applications it is important to compute a coloring with few colors in near-linear time. In parallel, the optimistic (speculative) coloring method by Gebremedhin and Manne is the preferred approach, but it needs to be modified for manycore architectures. We discuss a range of implementation issues for this vertex-based optimistic approach. We also propose a novel edge-based optimistic approach that has more parallelism and is better suited to GPUs. We study the performance empirically on two architectures (Xeon Phi and GPU) and across many data sets (from finite element problems to social networks). Our implementation uses the Kokkos library, so it is portable across platforms. We show that on GPUs, we significantly reduce the number of colors (geometric mean 4X, but up to 48X) compared to the widely used cuSPARSE library. In addition, our edge-based algorithm is 1.5 times faster on average than cuSPARSE, with speedups of up to 139X on a circuit problem. We also show the effect of the coloring on a conjugate gradient solver that uses multi-colored symmetric Gauss-Seidel as a preconditioner: the higher coloring quality found by the proposed methods reduces the overall solve time by up to 33% compared to cuSPARSE.
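For readers unfamiliar with the optimistic (speculative) scheme of Gebremedhin and Manne, the sketch below is a sequential simulation of its two phases: color all pending vertices speculatively, then detect conflicting edges and send one endpoint back for recoloring. It only illustrates the idea; it is not the paper's Kokkos implementation, and the tie-breaking rule is an assumption.

```python
import numpy as np

def optimistic_coloring(adj):
    """Sequential sketch of optimistic (speculative) vertex coloring.
    In a truly parallel run, threads read stale neighbor colors in phase 1,
    which creates the conflicts that phase 2 detects; a sequential run sees
    no conflicts, but the two-phase structure is the same."""
    n = len(adj)
    color = np.full(n, -1, dtype=int)          # -1 means uncolored
    worklist = list(range(n))
    while worklist:
        # Phase 1: speculative assignment of the smallest feasible color
        for v in worklist:
            forbidden = {color[u] for u in adj[v] if color[u] >= 0}
            c = 0
            while c in forbidden:
                c += 1
            color[v] = c
        # Phase 2: conflict detection; uncolor the higher-numbered endpoint
        conflicts = set()
        for v in worklist:
            for u in adj[v]:
                if u != v and color[u] == color[v]:
                    conflicts.add(max(u, v))
        for v in conflicts:
            color[v] = -1
        worklist = sorted(conflicts)
    return color

# Tiny example: a 4-cycle needs only 2 colors
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(optimistic_coloring(adj))
```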
Functional neuroimaging and parallel distributed processing (PDP) theory, both introduced to cognitive science in the 1980s, led to influential research programmes that have proceeded in parallel with little mutual influence. The PDP approach advanced specific claims about the nature of neural representations that, perhaps surprisingly, have gone largely untested in functional brain imaging. One reason may be the widespread use of univariate statistical methods for analysing brain imaging data, which typically rely on assumptions that render them unable to detect distributed representations of the kind that PDP predicts. More recent multivariate methods for image analysis may be better suited to detecting such representations. In the current article, we consider why univariate methods have been insufficient to test PDP's representational claims, articulate some of the properties that neural representations ought to have if the PDP view is valid and then survey the recent neuroimaging literature for evidence that neural representations do or do not have these properties. The survey establishes that the PDP view of distributed representations has considerable evidential support. This analysis underscores the importance of understanding how the assumptions underlying methods for analysing functional imaging data constrain the kinds of questions that can be addressed. We then consider the implications for our developing understanding of the neural bases of cognition and for the design of future brain imaging studies.
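The claim that univariate tests can miss a distributed code that multivariate analysis detects can be made concrete with a small, entirely hypothetical simulation (not from the article): each voxel carries only a weak, sign-mixed condition effect, so per-voxel t-statistics stay unremarkable, yet a simple pattern classifier pooling all voxels typically decodes the condition well above chance.

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials, n_voxels = 40, 200

# Distributed code: each voxel carries a tiny, sign-mixed condition effect
pattern = rng.choice([-0.1, 0.1], size=n_voxels)
cond_a = rng.normal(0.0, 1.0, (n_trials, n_voxels)) + pattern
cond_b = rng.normal(0.0, 1.0, (n_trials, n_voxels)) - pattern

# Univariate view: per-voxel t-statistics are individually weak
diff = cond_a.mean(0) - cond_b.mean(0)
se = np.sqrt(cond_a.var(0, ddof=1) / n_trials + cond_b.var(0, ddof=1) / n_trials)
print("max |t| over voxels:", np.abs(diff / se).max())

# Multivariate view: a split-half correlation classifier pools all voxels
train_a, train_b = cond_a[:20].mean(0), cond_b[:20].mean(0)
test_a, test_b = cond_a[20:], cond_b[20:]

def classify_as_a(trials, proto_a, proto_b):
    ra = np.array([np.corrcoef(x, proto_a)[0, 1] for x in trials])
    rb = np.array([np.corrcoef(x, proto_b)[0, 1] for x in trials])
    return ra > rb

acc = np.mean(np.concatenate([classify_as_a(test_a, train_a, train_b),
                              ~classify_as_a(test_b, train_a, train_b)]))
print("cross-validated decoding accuracy:", acc)   # typically well above 0.5
```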
ISBN:
(Print) 9781509036707
Our country has vast potential to emerge as a cardinal exporter of agricultural produce, but the lack of quick quality-evaluation techniques, heavy losses in post-harvest processing and handling, diseased crops and similar factors result in a lower contribution to the global market. The tomato crop is often infected by a disease in which the plant's leaves become covered with spots that are dark brown with a purple border and a light grey center, termed Septoria leaf spot. It causes the leaves to turn yellow, but most damage occurs through the loss of infected leaves. In this paper, tomato maturity based on color and fungal infection in the tomato leaves is determined. Initially, a thresholding algorithm was applied to determine the maturity of the tomato. To make the system more generalized and self-adapting, a shift to the k-means clustering algorithm was made. Finally, a comparative analysis of both methods was carried out to establish which method is more suitable under different conditions. An unconventional machine vision system is also suggested that scrutinizes the leaves emerging from the soil and, based on the leaf spots, analyzes the nature of the fungus and its depth into the stem of the tomato. The k-means algorithm, along with thresholding, is used to segment the image and eventually identify the fungus; the segmented fungus region is then studied to derive the percentage of its presence.
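As a rough sketch of the two segmentation strategies compared in the paper, the code below (hypothetical thresholds and a hand-rolled k-means, not the authors' implementation) shows a color-ratio threshold for ripeness and a plain k-means clustering of RGB pixels that could separate spot, leaf and background regions.

```python
import numpy as np

def threshold_red_ratio(rgb, ratio=1.4):
    """Naive maturity rule (assumed): a pixel is 'ripe' when red dominates green."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float) + 1e-6
    return (r / g) > ratio

def kmeans_pixels(rgb, k=3, iters=20, seed=0):
    """Plain k-means on RGB values; returns a label image (numpy only)."""
    pixels = rgb.reshape(-1, 3).astype(float)
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = pixels[labels == c].mean(axis=0)
    return labels.reshape(rgb.shape[:2])

# Hypothetical usage on a synthetic image standing in for a tomato photograph
rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
print(threshold_red_ratio(img).mean(), np.unique(kmeans_pixels(img)))
```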
ISBN:
(Print) 9781467367226
More efficient implementations are necessary, as most medical imaging methods are computationally expensive, and the amount of medical imaging data is growing. Graphics processing units (GPUs) can solve large data-parallel problems at a higher speed than the traditional CPU, while being more affordable and energy efficient than distributed systems. This review investigates the use of GPUs to accelerate medical imaging methods. A set of criteria for efficient use of GPUs is defined. The review concludes that most medical image processing methods may benefit from GPU processing due to the methods' data-parallel structure and high thread count. However, factors such as synchronization, branch divergence and memory usage can limit the speedup.
Quantum image processing has been a hot topic in the last decade. However, the lack of a quantum feature extraction method limits quantum image understanding. In this paper, a quantum feature extraction framework is proposed based on the novel enhanced quantum representation of digital images. Building on the design of quantum image addition and subtraction operations and some quantum image transformations, feature points can be extracted by comparing and thresholding the gradients of the pixels. Different methods of computing the pixel gradient and different thresholds can be realized under this quantum framework. The feature points extracted from a quantum image can be used to construct a quantum graph. Our work bridges the gap between quantum image processing and graph analysis based on quantum mechanics.
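The classical operation that the proposed quantum framework realizes, computing pixel gradients and thresholding their magnitude to select feature points, can be sketched as follows; the finite-difference stencil and threshold value are illustrative assumptions, and the quantum addition/subtraction circuits themselves are not reproduced here.

```python
import numpy as np

def gradient_feature_points(image, threshold=60.0):
    """Classical analogue of the paper's pipeline: per-pixel gradients by
    central differences, then keep pixels whose gradient magnitude exceeds
    a threshold as feature points."""
    img = image.astype(float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]   # horizontal difference
    gy[1:-1, :] = img[2:, :] - img[:-2, :]   # vertical difference
    magnitude = np.abs(gx) + np.abs(gy)      # cheap L1 gradient magnitude
    return np.argwhere(magnitude > threshold)

# Hypothetical usage on a synthetic image containing a bright square
img = np.zeros((32, 32))
img[8:24, 8:24] = 255
print(len(gradient_feature_points(img)), "feature points found")
```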
Three-dimensional imaging and identification with radar has recently gained a lot of attention. Owing to overly simple imaging models and excessive computational complexity, existing three-dimensional radar imaging methods are limited to a few scenarios. To break these bottlenecks, we introduce a novel physics-driven fast three-dimensional radar imaging method based on a far-field-approximation assumption, in which the whole imaging region is decomposed into a series of overlapping patches to accelerate imaging in parallel. The proposed method has four key steps. First, the whole imaging region is divided into a series of overlapping patches, so the large-scale imaging problem is efficiently decomposed into a set of small-scale imaging sub-problems that can be solved in a parallel or distributed manner. Second, the dyadic Green's function under the far-field approximation is applied to construct the system response function of the radar imaging problem. Third, a dual transform is introduced to turn the imaging problem into a physics-driven image processing problem. In this way, the image processing system response functions for all overlapping patches are identical for fixed transmitters and receivers; in other words, the system response functions of the overlapping patches are independent of the specific imaging regions, so only a few system response matrices need to be stored. Finally, a first-order method is used to carry out the imaging of all overlapping patches in parallel. The far-field-approximation imaging method drastically decreases the memory requirement while maintaining fast imaging speed and high imaging quality. Therefore, the proposed fast imaging method based on the far-field approximation is applicable to larger-scale imaging scenarios. Selected simulation results are presented to demonstrate the state-of-the-art performance of the far-field-approximation method.
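A much-simplified sketch of the decomposition idea is given below: the domain is split into overlapping patches, every patch reuses one shared response matrix (mirroring the observation that the patch operators coincide for fixed transmitters and receivers), and each patch is reconstructed independently with a first-order (gradient-descent) iteration. The operator, sizes and overlap-averaging rule are hypothetical stand-ins, not the paper's Green's-function model or dual transform.

```python
import numpy as np

def make_patches(n, patch, overlap):
    """Split indices 0..n-1 into overlapping patches of length `patch`."""
    step = patch - overlap
    starts = range(0, max(n - overlap, 1), step)
    return [np.arange(s, min(s + patch, n)) for s in starts]

def solve_patch(A, y, steps=300):
    """First-order (gradient-descent) solve of min ||A x - y||^2 for one patch."""
    lr = 1.0 / (np.linalg.norm(A, 2) ** 2)
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x -= lr * (A.T @ (A @ x - y))
    return x

# Hypothetical setup: one shared response matrix A reused by every patch
rng = np.random.default_rng(0)
n, m, patch, overlap = 120, 80, 40, 10
A = rng.normal(size=(m, patch))
truth = rng.normal(size=n)

recon = np.zeros(n)
counts = np.zeros(n)
for idx in make_patches(n, patch, overlap):   # patches could run in parallel
    Ap = A[:, :len(idx)]
    y = Ap @ truth[idx]
    recon[idx] += solve_patch(Ap, y)
    counts[idx] += 1
recon /= np.maximum(counts, 1)                # average results in the overlaps
print("relative reconstruction error:",
      np.linalg.norm(recon - truth) / np.linalg.norm(truth))
```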
ISBN:
(Print) 9781479986484
We present a parallel treecode for fast kernel summation in high dimensions, a common problem in data analysis and computational statistics. Fast kernel summations can be viewed as approximation schemes for dense kernel matrices. Treecode algorithms (or simply treecodes) construct low-rank approximations of certain off-diagonal blocks of the kernel matrix. These blocks are identified with the help of spatial data structures, typically trees. There is extensive work on treecodes and their parallelization for kernel summations in three dimensions, but there is little work on high-dimensional problems. Recently, we introduced a novel treecode, ASKIT, which resolves most of the shortcomings of existing methods. We introduce novel parallel algorithms for ASKIT, derive complexity estimates, and demonstrate scalability on synthetic, scientific, and image datasets. In particular, we introduce a local essential tree construction that extends to arbitrary dimensions in a scalable manner. We introduce data transformations for memory locality and use GPU acceleration. We report results on the "Maverick" and "Stampede" systems at the Texas Advanced Computing Center. Our largest computations involve two billion points in 64 dimensions on 32,768 x86 cores and 8 million points in 784 dimensions on 16,384 x86 cores.
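The core observation behind treecodes, that off-diagonal blocks of the kernel matrix between well-separated point sets are numerically low rank, can be illustrated with the toy example below (a plain truncated SVD on a Gaussian kernel block; ASKIT's actual skeletonization and tree machinery are not reproduced, and all sizes and bandwidths are made up).

```python
import numpy as np

def gaussian_kernel(X, Y, h=1.0):
    """Dense Gaussian kernel block K[i, j] = exp(-||x_i - y_j||^2 / (2 h^2))."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * h * h))

# Hypothetical illustration: two well-separated point clusters in 8 dimensions
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))            # "target" points
Y = rng.normal(size=(500, 8)) + 5.0      # "source" points, far from X
w = rng.normal(size=500)                 # source weights

K = gaussian_kernel(X, Y, h=4.0)         # off-diagonal block of the kernel matrix
exact = K @ w

# Treecode idea in miniature: replace the block by a truncated SVD whose rank
# is chosen adaptively; the numerical rank is far below the block size.
U, s, Vt = np.linalg.svd(K, full_matrices=False)
r = int((s > 1e-6 * s[0]).sum())
approx = U[:, :r] @ (s[:r] * (Vt[:r] @ w))
print("numerical rank:", r, "of", K.shape[0])
print("relative error:", np.linalg.norm(approx - exact) / np.linalg.norm(exact))
```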