ISBN: 0819448125 (print)
Although the hardware platform is often seen as the most important element of real-time imaging systems, software optimization can also provide a remarkable reduction of overall computational costs. The recommended code development flow for digital signal processors based on the TMS320C6000(TM) architecture usually involves three phases: development of C code, refinement of C code, and programming linear assembly code. Each step requires a different level of knowledge of processor internals; the developer is not directly involved in the automatic scheduling process. In some cases, however, this may result in unacceptable code performance. A better solution can be achieved by scheduling the assembly code by hand. Unfortunately, hand-scheduling of software pipelines not only requires expert skills but is also time consuming and prone to errors. To overcome these drawbacks we have designed an innovative development tool, the Software Pipeline Optimization Tool (SPOT(TM)). SPOT is based on visualization of the scheduled assembly code in a two-dimensional interactive schedule editor, which is equipped with feedback mechanisms derived from analysis of data dependencies and resource allocation conflicts. The paper addresses the optimization techniques made available by SPOT; its benefit is documented by more than 20 optimized image processing algorithms.
ISBN: 0819444677 (print)
This paper describes a simulation and analysis of a sensor viewing a "pixelized" scene projector such as the KHILS' Wideband Infrared Scene Projector (WISP). The main objective of this effort is to understand and quantify the effects of different scene projector configurations on the performance of several sensor signal processing algorithms. We present simulation results that quantify the performance of two signal processing algorithms used to estimate the sub-pixel position and irradiance of a point source. The algorithms are characterized for different signal-to-noise ratios, different projector configurations, and two different methods for preparing the images that drive the projector. We describe the simulation in detail, present numerous results obtained by processing simulated images, discuss the algorithms and projector properties, and draw conclusions.
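The abstract does not specify the authors' estimators; as a hedged illustration of the task they describe, the following sketch estimates the sub-pixel position and irradiance of a point source with a simple intensity-weighted centroid over a small, background-subtracted window (one common baseline, not the paper's method).

```python
def centroid_estimate(window):
    """Return (row, col, irradiance) estimates for a point source.

    `window` is a 2-D list of background-subtracted pixel intensities.
    The intensity-weighted centroid gives a sub-pixel position; the
    summed intensity approximates the total irradiance in the window.
    """
    total = sum(sum(row) for row in window)
    if total <= 0:
        raise ValueError("no signal in window")
    r = sum(i * v for i, row in enumerate(window) for v in row) / total
    c = sum(j * v for row in window for j, v in enumerate(row)) / total
    return r, c, total
```

A symmetric blob yields an integer-pixel position, while any asymmetry shifts the estimate by a fraction of a pixel, which is exactly the quantity such projector studies perturb.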
ISBN: 0819444758 (print)
This paper describes work in progress to develop an updated road mapping system. The system is designed to generate map products working directly from multispectral imagery. The updated system uses a resolution hierarchy to match the size of the roads, measured in image pixels, to the optimal processing configuration. The original system was designed to work with low-resolution Landsat TM imagery, while the updated system is designed to be more versatile, with the ability to generate products from newly available systems such as Landsat 7, IKONOS, and Digital Globe. The majority of the map production is performed in an automated mode requiring no user interaction. A Java interface supporting final editing of the automated results has been built to allow application on multiple platforms. This paper describes the mapping algorithms, the special editing interface designed for road vector maps, and the results of some processing experiments.
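The paper's resolution hierarchy is not detailed in the abstract; as a hedged sketch of the general idea, the following builds pyramid levels by 2x2 block averaging and picks the level at which a road of known ground width shrinks to roughly a target width in pixels (the `target_px` heuristic is an assumption for illustration, not the authors' rule).

```python
def halve(image):
    """2x2 block-average downsampling of a 2-D intensity list."""
    return [
        [
            (image[2 * r][2 * c] + image[2 * r][2 * c + 1]
             + image[2 * r + 1][2 * c] + image[2 * r + 1][2 * c + 1]) / 4
            for c in range(len(image[0]) // 2)
        ]
        for r in range(len(image) // 2)
    ]

def level_for_road(road_width_m, gsd_m, target_px=2.0):
    """Pyramid level at which a road of the given ground width is
    on the order of `target_px` pixels wide (0 = full resolution)."""
    level = 0
    width_px = road_width_m / gsd_m
    while width_px > target_px * 2:
        width_px /= 2
        level += 1
    return level
```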
ISBN: 0819440760 (print)
A multi-spectral imaging system can be defined as a combination of electro-optic imagers that are mechanically constrained to view the same scene. Subsequent processing of the output imagery invariably requires a spatial registration of one spectral band image to geometrically conform to the imagery from a different sensor. This paper outlines a procedure that leverages motion estimation of a pair of video sequences to determine a transformation minimizing the disparity in optical flow between the sequences.
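The paper works on optical flow of video sequences; as a hedged, much-simplified illustration of disparity minimization on a static pair, the sketch below searches integer translations for the one that minimizes normalized squared difference over the overlap (a stand-in for the transformation the authors estimate).

```python
def best_shift(ref, img, max_shift=2):
    """Return (dy, dx) minimizing mean squared difference between `ref`
    and `img` shifted by (dy, dx), searched over a small integer range."""
    h, w = len(ref), len(ref[0])
    best = None
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            ssd, n = 0.0, 0
            for y in range(h):
                for x in range(w):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        ssd += (ref[y][x] - img[yy][xx]) ** 2
                        n += 1
            score = ssd / n  # normalize so larger overlaps are not penalized
            if best is None or score < best[0]:
                best = (score, dy, dx)
    return best[1], best[2]
```

A real multi-band registration would minimize over a richer transformation (rotation, scale, sub-pixel translation) and on flow fields rather than raw intensities.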
Within the framework of the Bayesian statistical optimization criterion, processing algorithms and structures of radiometric systems are developed for the spatial-temporal processing of wide- and superwide-band electromagnetic fields. Imaging methods are presented for multi-beam systems and for systems that form coherence functions of decorrelated processes. The algorithms are built on the VF transforms proposed by the authors and on a theorem proved by them, which generalize the Fourier transform and the Van Cittert-Zernike theorem, respectively, to the spectral analysis of wide- and superwide-band wave fields.
This paper presents a novel scheme to detect and discriminate landmines from other clutter objects during the image formation process for ultra-wideband (UWB) synthetic aperture radar (SAR) systems. By identifying regions likely to contain the targets of interest, i.e., landmines, it is possible to reduce the overall formation time by pruning the processing used to resolve regions that do not contain targets. The image formation algorithm is a multiscale approximation to standard backprojection known as the quadtree, which uses a 'divide-and-conquer' strategy. The intermediate quadtree data admits multiresolution representations of the scene, and we develop a contrast statistic to discriminate structured from diffuse regions and an aperture diversity statistic to discriminate between regions containing mines and desert scrub. The potential advantages of this technique are illustrated using data collected at Yuma, AZ by the ARL BoomSAR system.
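The quadtree image former and the paper's actual contrast and aperture-diversity statistics are not reproduced here; as a hedged sketch of the pruning idea alone, the following recursively subdivides the scene and refines only quadrants whose coarse statistic (a stand-in `stat` callback) exceeds a threshold, discarding the rest whole, which is what saves backprojection work.

```python
def prune_quadtree(cell, stat, threshold, min_size=1):
    """Return the leaf cells worth full-resolution processing.

    `cell` is (x, y, size); `stat(cell)` is any coarse interest measure
    computed from intermediate (low-resolution) data. Cells scoring
    below `threshold` are pruned without further subdivision.
    """
    x, y, size = cell
    if stat(cell) < threshold:
        return []                      # prune: no target evidence here
    if size <= min_size:
        return [cell]                  # leaf: resolve at full resolution
    half = size // 2
    leaves = []
    for cx, cy in ((x, y), (x + half, y), (x, y + half), (x + half, y + half)):
        leaves += prune_quadtree((cx, cy, half), stat, threshold, min_size)
    return leaves
```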
We design a new compactly supported interpolating wavelet, the distributed approximating functional (DAF) wavelet, for biomedical signal/image processing. The DAF class is a smooth, continuous interpolating function system that is symmetric and fast-decaying. DAF neural networks are designed for filtering time-varying electrocardiogram (EKG) signals; the nets use the Hermite-DAF as the basis function and implement a three-layer structure. DAF wavelets and the corresponding subband filters are constructed for image processing. Edge-enhancement normalization and device-adapted visual group normalization algorithms are presented which sharpen the desired image features (especially for digital mammography) without prior knowledge of the spatial characteristics of the images. We design a nonlinear multiscale gradient-stretch method for feature extraction in mammograms, such as the detection of ill-defined borders and spiculated lesions. A fractal technique is introduced to characterize microcalcifications in localized regions of breast tissue. We employ DAF wavelet-based multiscale edge detection and a Dijkstra fractal technique to identify microcalcification regions, and use a stochastic thresholding method to detect the calcified spots. The combined perceptual techniques (regularization, visual group normalization, and nonlinear contrast enhancement) produce natural, high-quality images based on the human visual system. The underlying technologies significantly facilitate the creation of generic signal processing and computer-aided diagnostic (CAD) systems. The system is implemented in the Java language, which is cross-platform and well suited to telemedicine applications.
In this paper we consider a new form of successive coefficient refinement which can be used in conjunction with embedded compression algorithms such as Shapiro's EZW (Embedded Zerotree Wavelet) and Said & Pearlman's SPIHT (Set Partitioning in Hierarchical Trees). In the conventional refinement process, the approximation of a coefficient that was earlier determined to be significant is refined by transmitting one of two symbols: an 'up' symbol if the actual coefficient value is in the top half of the current uncertainty interval, or a 'down' symbol if it is in the bottom half. In the modified scheme developed here, we transmit one of three symbols instead: 'up', 'down', or 'exact'. The new 'exact' symbol tells the decoder that its current approximation of a wavelet coefficient is exact to the level of precision desired. In earlier work, by applying this scheme to lossless embedded compression (also called lossy/lossless compression), we achieved significant reductions in encoder and decoder execution times with no adverse impact on compression efficiency. These excellent results for lossless systems inspired us to adapt this refinement approach to lossy embedded compression. Unfortunately, the results we have achieved thus far for lossy compression are not as good.
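The three-symbol refinement described above can be sketched as follows; this is a minimal illustration of interval halving with an 'exact' early stop, assuming the decoder's estimate is the interval midpoint, and it omits the entropy coding and EZW/SPIHT significance passes that surround it in practice.

```python
def encode_refinement(value, low, high, tol):
    """One refinement step for a coefficient known to lie in [low, high).
    Returns (symbol, new_low, new_high)."""
    mid = (low + high) / 2
    if abs(value - mid) <= tol:
        return "exact", low, high   # decoder's estimate (mid) is good enough
    if value >= mid:
        return "up", mid, high      # value in top half of the interval
    return "down", low, mid         # value in bottom half

def decode_refinement(symbols, low, high):
    """Replay the symbol stream; return the final coefficient estimate."""
    for s in symbols:
        mid = (low + high) / 2
        if s == "exact":
            return mid              # stop refining this coefficient early
        low, high = (mid, high) if s == "up" else (low, mid)
    return (low + high) / 2
```

The time savings reported for the lossless case come from 'exact' letting both sides skip a coefficient in all later refinement passes.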
ISBN: 0819429821 (print)
This paper describes the benchmarking of image processing algorithms on high-performance workstations and personal desktop computers. For the various platforms evaluated, which included machines from Sun, SGI, Apple, and Gateway, compiler options were varied to obtain the fastest execution times. The algorithms evaluated included typical image processing operations such as derivatives, logical operations, morphology, subtraction, median filtering, and the new SKIPSM approach. Data were collected on the different platforms and are presented here in tabular form. The results indicate that the latest generation of personal computers has processing capabilities similar to those of UNIX-based workstations.
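The paper's harness is not described in the abstract; as a hedged sketch of the kind of measurement such a comparison implies, the following times one image operation over repeated runs and reports the best wall-clock time (best-of-N is one common convention for suppressing scheduler noise; the `threshold` operation is a stand-in for the benchmarked kernels).

```python
import time

def benchmark(op, image, runs=5):
    """Fastest elapsed time in seconds of `op(image)` over `runs` runs."""
    best = float("inf")
    for _ in range(runs):
        start = time.perf_counter()
        op(image)
        best = min(best, time.perf_counter() - start)
    return best

def threshold(image, level=128):
    """A typical benchmarked operation: binary threshold of a 2-D image."""
    return [[255 if p >= level else 0 for p in row] for row in image]
```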
ISBN: 0819428361 (print)
An important challenge in mapping image-processing techniques onto applications is the lack of quantitative performance measures. From a systems engineering perspective these are essential if system level requirements are to be decomposed into sub-system requirements that can be understood in terms of algorithm selection and performance optimisation. Nowhere in computer vision is this more evident than in the area of image segmentation. This is a vigorous and innovative research activity, but even after nearly two decades of progress, it remains almost impossible to answer the question "what would the performance of this segmentation algorithm be under these new conditions?" To begin to address this shortcoming, we have devised a well-principled metric for assessing the relative performance of two segmentation algorithms. This allows meaningful objective comparisons to be made between their outputs. It also estimates the absolute performance of an algorithm given ground truth. Our approach is an information theoretic one. In this paper, we describe the theory and motivation of our method, and present practical results obtained from a range of state-of-the-art segmentation methods. We demonstrate that it is possible to measure the objective performance of these algorithms, and to use the information so gained to provide clues about how their performance might be improved.
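The abstract does not give the authors' metric; as a hedged example of one simple information-theoretic comparison of two segmentations, the sketch below computes the mutual information between two label maps, which is high when the segmentations carve up the image consistently (label names need not match) and zero when they are unrelated.

```python
from collections import Counter
from math import log2

def mutual_information(seg_a, seg_b):
    """Mutual information in bits between two equal-length flattened
    label sequences (one label per pixel)."""
    n = len(seg_a)
    pa = Counter(seg_a)                 # marginal label counts, map A
    pb = Counter(seg_b)                 # marginal label counts, map B
    pab = Counter(zip(seg_a, seg_b))    # joint label-pair counts
    return sum(
        (c / n) * log2((c / n) / ((pa[a] / n) * (pb[b] / n)))
        for (a, b), c in pab.items()
    )
```

Comparing against ground-truth labels turns the same quantity into an absolute score, in the spirit of the relative/absolute distinction the abstract draws; the paper's actual metric is likely richer than this.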