An edge detection process in computer vision and image processing detects various types of significant features that appear as discontinuities in intensity. This paper presents our experience with parallelizing an edge detection algorithm that reduces noise and unnecessary detail in a gray-scale image from a coarse level to a fine level of resolution by using an edge focusing technique. Numerical methods and parallel implementations of edge focusing are presented. The edge detection algorithms are implemented on three representative message-passing architectures: a low-cost heterogeneous PVM network, an Intel iPSC/860 hypercube, and a CM-5 massively parallel multicomputer. Our objectives are to provide insight into implementation and performance issues for image-processing applications on general-purpose message-passing architectures, to investigate the implications of network variations, and to evaluate the computational scalability of the three network systems by examining the execution and communication patterns of the edge detection application.
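The abstract describes the coarse-to-fine edge-focusing idea but gives no code. The sketch below is a minimal serial illustration, assuming a NumPy/SciPy gradient-magnitude detector: edges found at a coarse Gaussian scale restrict where edges are searched for at finer scales. The function names, scale schedule, and threshold are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of coarse-to-fine edge focusing (not the authors' code).
import numpy as np
from scipy import ndimage

def gradient_magnitude(image, sigma):
    """Gradient magnitude of a Gaussian-smoothed gray-scale image."""
    gx = ndimage.gaussian_filter(image, sigma, order=(0, 1))
    gy = ndimage.gaussian_filter(image, sigma, order=(1, 0))
    return np.hypot(gx, gy)

def edge_focus(image, sigmas=(8.0, 4.0, 2.0, 1.0), threshold=20.0, radius=2):
    """Track edges from a coarse to a fine level of resolution.

    At the coarsest sigma every pixel is a candidate; at each finer sigma only
    pixels within `radius` of the previous edge map are re-examined, which
    suppresses noise and detail that appear only at fine scales.
    """
    img = image.astype(float)
    candidates = np.ones(img.shape, dtype=bool)
    edges = np.zeros(img.shape, dtype=bool)
    for sigma in sigmas:
        mag = gradient_magnitude(img, sigma)
        edges = candidates & (mag > threshold)
        # Dilate the current edge map to form the search region for the next level.
        candidates = ndimage.binary_dilation(edges, iterations=radius)
    return edges
```

In a message-passing implementation of the kind the paper studies, each node would run this loop on a block of image rows and exchange only the boundary rows of the candidate mask between levels.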
The problem considered in this paper is the definition of an efficient parallel algorithm for texture analysis of an image. The target architectures are distributed-memory general-purpose MIMD parallel machines. The solutions proposed here are based on two different methods: the Statistical Feature Matrix and the Wavelet Decomposition.
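Neither method is spelled out in the abstract. As a rough illustration of the wavelet-decomposition route only (not the paper's Statistical Feature Matrix or its MIMD code), the sketch below computes one-level 2D Haar subband energies per image tile, the kind of compact per-tile descriptor that is cheap to gather from distributed nodes. The tile-wise partitioning and all names are assumptions.

```python
# Hypothetical illustration: Haar subband energies as simple texture features.
# Image tiles could be block-distributed across MIMD nodes; each node runs this
# routine locally and only the short feature vectors need to be communicated.
import numpy as np

def haar_2d(tile):
    """One-level 2D Haar transform of an even-sized gray-scale tile."""
    a = tile.astype(float)
    # Rows: average and difference of adjacent columns.
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
    # Columns: the same split applied to each half.
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh

def texture_features(tile):
    """Energy of each detail subband: a compact per-tile texture descriptor."""
    _, lh, hl, hh = haar_2d(tile)
    return np.array([np.mean(lh**2), np.mean(hl**2), np.mean(hh**2)])
```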
ISBN (print): 0819416258
Scalable parallel algorithms for seismic imaging remain a significant challenge for the oil and gas industry. Scalability must address both the computational and the input/output portions of the algorithm in question. These issues are addressed by the ARCO Seismic Benchmark Suite, a public-domain software system that provides an environment for the development and performance analysis of parallel seismic processing algorithms. We illustrate some of the issues in the design of scalable parallel imaging algorithms with an example process, 3D post-stack depth migration. The algorithm used is based on an implicit finite-difference formulation described by Zhiming Li. Scalability is obtained by designing the computation, the communication between processors, and the input/output as parallel operations. The resulting application runs efficiently on both distributed-memory and shared-memory hardware platforms with processor counts from 1 to 128 nodes.
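The abstract names the three design points (computation, interprocessor communication, I/O) without giving code. The mpi4py skeleton below shows one plausible structure, assuming that post-stack migration work is distributed over frequency planes, that each rank reads only its own planes so input I/O scales with the processor count, and that partial images are reduced at the end. `migrate_plane` and `read_plane` are placeholders, not Li's implicit finite-difference operator or the ARCO benchmark code.

```python
# Hypothetical parallel skeleton for post-stack depth migration (structure only).
import numpy as np
from mpi4py import MPI

def migrate_plane(plane, velocity, dz):
    """Placeholder for one implicit finite-difference downward-continuation step."""
    return plane  # the real operator solves tridiagonal systems along x and y

def migrate(n_freq, nx, ny, n_depth, velocity, dz, read_plane):
    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    image = np.zeros((n_depth, nx, ny))
    # Cyclic assignment of frequency planes balances the load across ranks.
    for f in range(rank, n_freq, size):
        plane = read_plane(f)        # complex (nx, ny) monochromatic wavefield
        for iz in range(n_depth):
            plane = migrate_plane(plane, velocity, dz)
            image[iz] += plane.real  # imaging condition: sum over frequencies
    total = np.empty_like(image)
    comm.Allreduce(image, total, op=MPI.SUM)  # combine partial images from all ranks
    return total
```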
The Discrete Wavelet Transform (DWT) is becoming a widely used tool in image processing and other data analysis areas. A non-conventional variation of a spatio-temporal 3D DWT has been developed in order to analyze motion in time-sequential imagery. The computational complexity of this algorithm is Θ(n³), where n is the number of samples in each dimension of the input image sequence. Methods are needed to increase the speed of these computations for large data sets. Fortunately, wavelet decomposition is very amenable to parallelization. Coarse-grained parallel versions of this process have been designed and implemented on three different architectures: a distributed network of Sun SPARCstation 2 workstations; two Intel hypercubes (an iPSC/2 and an iPSC/860); and a Thinking Machines Corporation CM-5, a massively parallel SPMD machine. This non-conventional 3D wavelet decomposition is very suitable for coarse-grained implementation on parallel computers with proper load balancing. Close to linear speedup over serial implementations has been achieved on the distributed network, and near-linear speedup was obtained on the hypercubes and the CM-5 for a variety of image-processing applications.
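To make the structure concrete, here is a minimal serial sketch of one level of a separable 3D DWT over a (time, row, column) image sequence, using the Haar filter pair for brevity; the authors' non-conventional spatio-temporal variation and their parallel codes are not reproduced. Because the transform is separable and local, spatial slabs can be distributed across workers with only small halo exchanges along the filtered axes.

```python
# Hypothetical one-level separable 3D Haar DWT (assumes even-sized dimensions).
import numpy as np

def haar_pairs(a, axis):
    """Average and difference of adjacent samples along one axis."""
    a = np.moveaxis(a, axis, 0)
    lo = (a[0::2] + a[1::2]) / 2.0
    hi = (a[0::2] - a[1::2]) / 2.0
    return np.moveaxis(lo, 0, axis), np.moveaxis(hi, 0, axis)

def dwt3_level(seq):
    """One 3D decomposition level; returns the 8 subbands keyed by L/H per axis."""
    bands = {"": seq.astype(float)}
    for axis in range(3):  # time, row, column in turn
        new = {}
        for key, data in bands.items():
            lo, hi = haar_pairs(data, axis)
            new[key + "L"] = lo
            new[key + "H"] = hi
        bands = new
    return bands  # e.g. "HLL" holds temporal detail, spatially smoothed
```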
Iterative methods, such as the conjugate gradient method, have long been used for the computation of optical flow along contours. For a contour with N points, the conjugate gradient method requires O(N) iterations (i....
The scheduling problem is viewed as being central to the design of distributed real-time object-oriented systems. This issue is directly related to the achievement of timeliness properties. The scheduling problems are NP-com...
Degree level: Ph.D., Doctor of Philosophy, Ph.D./Interdisciplinary
Object-driven dataflow (ODF) is a methodology that provides a system for building applications on heterogeneous multiprocessing systems or networks that support multiple parallel-processing topologies in a hierarchical structure. Its distinguishing features are driven by the generalization of macro-dataflow computing to heterogeneous systems and by a focus on large-system development through object-oriented computing. Natural heterogeneous processing is examined to gain insight into the principles and conceptual models of heterogeneous processing that are integrated into ODF. ODF was developed to be applicable to both research and product development in the area of image processing. To enable this, it must provide load-time (not compile-time) adaptability without sacrificing performance. The intent is to take advantage of the inherent parallelism of dataflow computing at a high level, while allowing individual tasks to be developed with traditional imperative methods if desired. ODF determines the execution flow at the task level, in addition to providing the communication framework for intra-task communication. Dynamic binding is used to initiate the appropriate code on each of the nodes, given the availability of the different types of nodes, the object types, and the specific application. The objects are commonly decomposed to the grain size appropriate to a given task on the available processing node.
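As a toy illustration of the macro-dataflow execution model described above (not the dissertation's ODF system), the sketch below fires a task once all of its inputs have arrived; the mapping from task name to callable stands in loosely for the dynamic binding of node-specific implementations. All names are hypothetical.

```python
# Hypothetical macro-dataflow scheduler: tasks execute when their inputs are ready.
from collections import deque

def run_dataflow(tasks, edges, sources):
    """tasks: name -> callable(*inputs); edges: name -> downstream task names;
    sources: name -> initial value. Inputs are delivered in arrival order."""
    n_inputs = {t: 0 for t in tasks}
    for outs in edges.values():
        for t in outs:
            n_inputs[t] += 1
    inbox = {t: [] for t in tasks}
    ready, results = deque(), {}

    def deliver(name, value):
        results[name] = value
        for t in edges.get(name, []):
            inbox[t].append(value)
            if len(inbox[t]) == n_inputs[t]:
                ready.append(t)

    for name, value in sources.items():
        deliver(name, value)
    while ready:
        t = ready.popleft()
        deliver(t, tasks[t](*inbox[t]))
    return results
```

For instance, with tasks = {"smooth": blur_fn, "edges": detect_fn}, edges = {"frame": ["smooth"], "smooth": ["edges"]}, and sources = {"frame": image}, the two stages run in dependency order.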
ISBN (print): 0819414786
This paper describes an image-based inspection system for surface defects, implemented on a transputer system with a high-performance transfer bus that provides fast access to large blocks of data. Data-level parallelism (the image pixel data is partitioned horizontally into slices) and task-level parallelism (the algorithms themselves can be parallelized) are utilized. Surface defects and anomalies are detected by a texture-based segmentation procedure. In the training phase the objects of interest are marked and all feature vectors implemented in the system are computed. The system uses simple statistical features, features calculated from the co-occurrence matrix, features based on the texture spectrum, and the fractal dimension. The time-critical run phase is realized on a parallel computer. The implementation is done fully in software, allowing flexibility in the use of features and window sizes for pixel descriptors and freedom in the use of various classifier methods. The processing speed of the segmentation system scales easily with the number of processing units. The performance of the system is demonstrated on an industrial visual inspection task, the recognition of surface defects on aluminium cast workpieces, where a connectionist classifier is used for pixel classification.
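The data-level parallelism described above (horizontal image slices, window-based pixel descriptors) can be sketched as follows, assuming Python's multiprocessing in place of transputers and plain local variance in place of the paper's co-occurrence, texture-spectrum, and fractal features. Each worker receives its slice with a small halo of extra rows so that window features are exact at the seams.

```python
# Hypothetical data-parallel slicing for window-based texture features.
import numpy as np
from multiprocessing import Pool

WINDOW = 7  # side length of the square pixel-descriptor window

def local_variance(slice_with_halo):
    """Per-pixel variance over a WINDOW x WINDOW neighbourhood (interior rows only)."""
    h = WINDOW // 2
    img = slice_with_halo.astype(float)
    out = np.empty((img.shape[0] - 2 * h, img.shape[1]))
    for r in range(h, img.shape[0] - h):
        for c in range(img.shape[1]):
            c0, c1 = max(0, c - h), min(img.shape[1], c + h + 1)
            out[r - h, c] = img[r - h:r + h + 1, c0:c1].var()
    return out

def segment(image, n_workers=4):
    h = WINDOW // 2
    padded = np.pad(image, ((h, h), (0, 0)), mode="reflect")
    chunks = np.array_split(np.arange(image.shape[0]), n_workers)
    # Each slice carries h extra rows above and below (the halo).
    slices = [padded[r[0]:r[-1] + 1 + 2 * h] for r in chunks]
    with Pool(n_workers) as pool:
        parts = pool.map(local_variance, slices)
    return np.vstack(parts)  # same height as the input image
```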
ISBN (print): 0818664274
An edge detection process in computer vision and image processing detects various types of significant features that appear as discontinuities in intensity. This paper presents our experience with parallelizing an edge detection algorithm that reduces noise and unnecessary detail in a gray-scale image from a coarse level to a fine level of resolution by using an edge focusing technique. Numerical methods and parallel implementations of edge focusing are presented. The edge detection algorithms are implemented on three representative message-passing architectures: a low-cost heterogeneous PVM network, an Intel iPSC/860 hypercube, and a CM-5 massively parallel multicomputer. Our objectives are to provide insight into implementation and performance issues for image-processing applications on general-purpose message-passing architectures, to investigate the implications of network variations, and to evaluate the computational scalability of the three network systems by examining the execution and communication patterns of the edge detection application.
Given a polygonal curve P = [p_1, p_2, ..., p_n], the polygonal approximation problem considered calls for determining a new curve P' = [p'_1, p'_2, ..., p'_m] such that (i) m is significantly smaller than n, (ii) the vertices of P' are an ordered subset of the vertices of P, and (iii) any line segment [p'_A, p'_{A+1}] of P' that substitutes a chain [p_B, ..., p_C] in P is such that for all i with B ≤ i ≤ C, the approximation error of p_i with respect to [p'_A, p'_{A+1}], according to some specified criterion and metric, is less than a predetermined error tolerance. Using the parallel-strip error criterion, we study the following problems for a curve P in R^d, where d = 2, 3: (i) minimize m for a given error tolerance, and (ii) given m, find the curve P' that has the minimum approximation error over all curves that have at most m vertices. These problems are called the min-# and min-ε problems, respectively. For R^2 and with any one of the L1, L2, or L∞ distance metrics, we give algorithms that solve the min-# problem in O(n²) time and the min-ε problem in O(n² log n) time, improving the best known algorithms to date by a factor of log n. When P is a polygonal curve in R^3 that is strictly monotone with respect to one of the three axes, we show that if the L1 or L∞ metric is used then the min-# problem can be solved in O(n²) time and the min-ε problem can be solved in O(n³) time. If distances are computed using the L2 metric, then the min-# and min-ε problems can be solved in O(n³) and O(n³ log n) time, respectively. All of our algorithms exhibit O(n²) space complexity. Finally, we show that if it is not essential to minimize m, simple modifications of our algorithms afford a reduction by a factor of n in both time and space. (C) 1994 Academic Press, Inc.
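For orientation, the min-# problem admits a classic graph formulation (Imai-Iri style), sketched below; it runs in O(n³) time and is not the paper's faster O(n²) algorithm. An edge (i, j) is allowed when every intermediate vertex lies within the error tolerance of segment p_i p_j under the parallel-strip (perpendicular distance) criterion with the L2 metric, and a shortest path over allowed edges minimizes the vertex count m.

```python
# Hypothetical min-# sketch via shortest path over allowed shortcut segments.
import math
from collections import deque

def strip_dist(p, a, b):
    """Perpendicular (parallel-strip) L2 distance from p to the line through a, b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    length = math.hypot(dx, dy)
    if length == 0.0:
        return math.hypot(px - ax, py - ay)
    return abs(dy * (px - ax) - dx * (py - ay)) / length

def min_vertices(points, eps):
    """Return an approximating subsequence of the curve with the fewest vertices."""
    n = len(points)
    ok = [[all(strip_dist(points[k], points[i], points[j]) <= eps
               for k in range(i + 1, j))
           for j in range(n)] for i in range(n)]
    # Breadth-first search from vertex 0 to vertex n-1 over allowed shortcuts;
    # the fewest edges in a path equals the fewest vertices in the approximation.
    prev, seen = [None] * n, [False] * n
    seen[0] = True
    queue = deque([0])
    while queue:
        i = queue.popleft()
        for j in range(i + 1, n):
            if ok[i][j] and not seen[j]:
                seen[j], prev[j] = True, i
                queue.append(j)
    path, v = [], n - 1
    while v is not None:
        path.append(points[v])
        v = prev[v]
    return path[::-1]
```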