ISBN (print): 0819432938
The aim of our work is to implement a system for automatic face image processing on DSPs: face detection in an image, face recognition, and face identification. The first step is to localize the face in an image. Our approach is to approximate the oval shape of the face with an ellipse and to compute the coordinates of the ellipse's center. For this purpose, we explore a new version of the Hough transformation: the Fuzzy Generalized Hough transformation. To reduce the computation time, we also present several parallel implementations of the algorithm on a multi-DSP architecture using the SynDEx tool, a programming environment that generates optimized distributed real-time executives. We show that a speedup factor of 1.7 has been obtained.
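The abstract does not describe the transform itself, so as a rough illustration only, here is a minimal NumPy sketch of conventional (crisp, non-fuzzy) Hough voting for the center of an axis-aligned ellipse of known semi-axes; the binary edge map, the fixed semi-axes and the absence of fuzzy vote spreading are all assumptions of the sketch, not details taken from the paper.

import numpy as np

def ellipse_center_hough(edges, a, b, n_angles=90):
    # Every edge pixel votes for all centers (cx, cy) such that the pixel
    # would lie on an ellipse of semi-axes (a, b) centered there; the true
    # center accumulates votes from many edge pixels.
    h, w = edges.shape
    acc = np.zeros((h, w), dtype=np.int32)
    theta = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    dx = np.round(a * np.cos(theta)).astype(int)
    dy = np.round(b * np.sin(theta)).astype(int)
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        cx, cy = x - dx, y - dy
        ok = (cx >= 0) & (cx < w) & (cy >= 0) & (cy < h)
        acc[cy[ok], cx[ok]] += 1
    cy0, cx0 = np.unravel_index(np.argmax(acc), acc.shape)
    return (cx0, cy0), acc

# Recover the center of a synthetic elliptical edge map.
h, w, a, b = 200, 200, 40, 25
yy, xx = np.mgrid[0:h, 0:w]
r = ((xx - 120) / a) ** 2 + ((yy - 90) / b) ** 2
edges = (np.abs(r - 1.0) < 0.05).astype(np.uint8)
print(ellipse_center_hough(edges, a, b)[0])    # close to (120, 90)

A fuzzy variant would typically spread each vote over a small neighborhood of the accumulator instead of incrementing a single cell.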
ISBN (print): 0819432938
It is well known that high-dimensional integrals can be solved with Monte Carlo algorithms. Recently, it was discovered that there is a relationship between low discrepancy sets and the efficient evaluation of higher-dimensional integrals. Theory suggests that for problems of moderate dimension, algorithms based on low discrepancy sets should outperform all other existing methods by an order of magnitude in terms of the number of sample points used to evaluate the integrals. We show that the field of image processing can potentially take advantage of specific properties of low discrepancy sets. To illustrate this, we applied the theory of low discrepancy sequences to some relatively simple image processing and computer vision operations, such as the estimation of gray level image statistics, fast location of objects in a binary image, and the reconstruction of images from a sparse set of points. Our experiments show that, compared to standard methods, the proposed new algorithms are faster and statistically more robust. Classical low discrepancy sets based on the Halton and Sobol' sequences were investigated thoroughly and showed promising results. The use of low discrepancy sequences in image processing for image characterization, understanding and object recognition is a novel and promising area for further investigation.
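As a hedged illustration of the kind of use the abstract describes (not the authors' code), the sketch below estimates the mean gray level of an image from pixels placed on a 2-D Halton sequence; the bases 2 and 3, the synthetic ramp image and the sample count are arbitrary choices of the sketch.

import numpy as np

def van_der_corput(n, base):
    # First n terms of the van der Corput sequence in the given base.
    seq = np.zeros(n)
    for i in range(n):
        f, k, x = 1.0, i + 1, 0.0
        while k > 0:
            f /= base
            x += f * (k % base)
            k //= base
        seq[i] = x
    return seq

def halton_2d(n):
    # 2-D Halton points in [0, 1)^2 built from bases 2 and 3.
    return np.column_stack((van_der_corput(n, 2), van_der_corput(n, 3)))

def mean_gray_halton(image, n_samples):
    # Estimate the mean gray level from n_samples low-discrepancy pixels.
    pts = halton_2d(n_samples)
    rows = (pts[:, 0] * image.shape[0]).astype(int)
    cols = (pts[:, 1] * image.shape[1]).astype(int)
    return image[rows, cols].mean()

# Compare against plain pseudo-random sampling on a smooth ramp image;
# the Halton estimate is typically much closer to the true mean.
img = np.add.outer(np.linspace(0, 255, 256), np.linspace(0, 255, 256)) / 2
rng = np.random.default_rng(0)
print("true mean      :", img.mean())
print("Halton estimate:", mean_gray_halton(img, 500))
print("random estimate:", img[rng.integers(0, 256, 500), rng.integers(0, 256, 500)].mean())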
We suggest a new table-based method for evaluating the exponential function in double precision arithmetic. This method can easily be extended to the base 2 exponential function.
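The new method itself is not described in the abstract; for orientation only, the sketch below shows the classical table-driven argument reduction in the style of Tang (exp(x) = 2**k * 2**(j/32) * exp(r), with a small table and a short polynomial), which is emphatically not the authors' scheme and, as written, reaches roughly 1e-12 relative accuracy rather than full double precision.

import math

N = 32
TABLE = [2.0 ** (j / N) for j in range(N)]    # precomputed 2**(j/32)
INV_L = N / math.log(2.0)                     # 32 / ln 2
L = math.log(2.0) / N                         # ln 2 / 32

def table_exp(x):
    # Reduce: x = m*(ln2/32) + r with m = 32*k + j, so
    # exp(x) = 2**k * TABLE[j] * exp(r), where |r| <= ln2/64.
    m = int(round(x * INV_L))
    k, j = divmod(m, N)
    r = x - m * L
    # Degree-4 polynomial for exp(r); a production routine would use a
    # slightly longer, specially fitted polynomial and careful rounding.
    p = 1.0 + r * (1.0 + r * (0.5 + r * (1.0 / 6.0 + r / 24.0)))
    return math.ldexp(TABLE[j] * p, k)

print(table_exp(1.2345), math.exp(1.2345))

The same table and reduction adapt readily to 2**x: reduce with m = round(32*x), r = x - m/32, and approximate 2**r instead of exp(r).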
Cauchy-Vandermonde matrices are related to rational interpolation problems. In this paper we consider the general case in which multiple poles can appear. Fast algorithms for solving the corresponding linear systems are presented. Some results on the total positivity of these matrices and of other related matrices are included.
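For concreteness, the sketch below builds a small Cauchy-Vandermonde matrix with one prescribed simple pole and a polynomial part, and solves the associated interpolation system with a generic dense solver; the fast algorithms of the paper, and the handling of multiple poles, are not reproduced here.

import numpy as np

def cauchy_vandermonde(x, poles, poly_deg):
    # Columns 1/(x - d_j) for each simple pole d_j, followed by the
    # polynomial part 1, x, ..., x**poly_deg.
    x = np.asarray(x, dtype=float)
    cauchy_part = 1.0 / (x[:, None] - np.asarray(poles, dtype=float)[None, :])
    vander_part = np.vander(x, poly_deg + 1, increasing=True)
    return np.hstack((cauchy_part, vander_part))

# Interpolate f(x) = 1/(x+1) + x at four nodes by a rational function with
# a prescribed simple pole at -1 plus a quadratic polynomial part.
x = np.array([0.0, 0.5, 1.0, 2.0])
A = cauchy_vandermonde(x, poles=[-1.0], poly_deg=2)
f = 1.0 / (x + 1.0) + x
coeffs = np.linalg.solve(A, f)      # generic O(n^3) solve, not a fast algorithm
print(np.round(coeffs, 6))          # approximately [1, 0, 1, 0]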
Subband-domain algorithms provide an attractive technique for wideband radar array processing. The subband-domain approach decomposes a received wideband signal into a set of narrowband signals. Although the number of processing threads in the system increases, the narrowband signals within each subband can be sampled at a correspondingly slower rate; therefore, the data rate at the input is similar to that at the output of the subband processor. There are several advantages to the subbanding method. It can simplify typical radar algorithms such as adaptive beamforming and equalization by virtue of reducing the subband signal bandwidth, thereby potentially reducing the computational complexity relative to an equivalent tapped-delay-line approach. It also allows for greater parallelization of the processing task, hence enabling the use of slower, lower-power hardware. In order to evaluate the validity of the subbanding approach, it is compared with conventional processing methods. This paper focuses on adaptive beamforming and pulse compression performance for a wideband radar system. The performance of an adaptive beamformer is given for a polyphase-filter-based subband approach and is measured against narrowband processing. SINR loss curves and beampatterns for a subband system are presented. Design criteria for subband polyphase filter processing that minimize signal distortion are provided, and the distortion is characterized. Finally, subband-domain pulse compression is demonstrated and compared with the conventional approach.
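Since the abstract refers to a polyphase-filter-based subband decomposition without giving its design, the sketch below shows only a generic critically sampled uniform DFT filter bank (weighted overlap-add form) as an orientation aid; the prototype filter, the number of bands and the taps per band are placeholder choices, not the paper's design criteria.

import numpy as np
from scipy.signal import firwin

def polyphase_analysis(x, n_bands, taps_per_band=8):
    # Uniform DFT analysis filter bank: each of the n_bands outputs is a
    # narrowband signal decimated by n_bands, so the aggregate data rate
    # at the output matches the input rate.
    M = n_bands
    proto = firwin(M * taps_per_band, 1.0 / M)    # lowpass prototype filter
    n_frames = len(x) // M - taps_per_band + 1    # process full frames only
    sub = np.empty((M, n_frames), dtype=complex)
    for n in range(n_frames):
        seg = x[n * M : n * M + M * taps_per_band] * proto
        folded = seg.reshape(taps_per_band, M).sum(axis=0)   # polyphase fold
        sub[:, n] = np.fft.fft(folded)                       # M-point DFT
    return sub

# A wideband chirp split into 16 subbands, each at 1/16 of the input rate.
t = np.arange(4096)
x = np.cos(2 * np.pi * (0.01 + 0.00005 * t) * t)
subbands = polyphase_analysis(x, n_bands=16)
print(subbands.shape)    # (16, 249)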
It is not uncommon for remote sensing systems to produce in excess of 100 Mbytes/sec. Los Alamos National Laboratory designed a reconfigurable computer to tackle the signal and image processing challenges of high-bandwidth sensors. Reconfigurable computing, based on field programmable gate arrays, offers ten to one hundred times the performance of traditional microprocessors for certain algorithms. This paper discusses the architecture of the computer and the source of its performance gains, as well as an example application. The calculation of multiple matched filters applied to multispectral imagery, showing a performance advantage of forty-five times over a 450 MHz Pentium II, is presented as an exemplar of algorithms appropriate for this technology.
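The matched-filter computation mentioned above is, in essence, a per-pixel inner product with a clutter-whitened target spectrum; the NumPy sketch below shows that computation on synthetic data purely for orientation (it says nothing about the FPGA implementation or the reported speedup, and the cube shape, target spectrum and normalization are assumptions of the sketch).

import numpy as np

def spectral_matched_filter(cube, target):
    # Per-pixel matched filter on a (rows, cols, bands) cube:
    # score = w^T (x - mu) with w proportional to R^{-1} (s - mu),
    # normalized so that a pixel equal to the target spectrum s scores 1.
    rows, cols, bands = cube.shape
    pixels = cube.reshape(-1, bands).astype(float)
    mu = pixels.mean(axis=0)
    R = np.cov(pixels - mu, rowvar=False)
    w = np.linalg.solve(R, target - mu)
    w /= (target - mu) @ w
    return ((pixels - mu) @ w).reshape(rows, cols)

# Plant a target spectrum in random background clutter and locate it.
rng = np.random.default_rng(1)
cube = rng.normal(size=(64, 64, 6))
target = np.array([5.0, 2.0, 0.0, 4.0, 2.0, 1.0])
cube[10, 20] = target
scores = spectral_matched_filter(cube, target)
print(np.unravel_index(scores.argmax(), scores.shape))    # expected (10, 20)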
Investigating a number of different integral transforms uncovers distinct patterns in the type of scale-based convolution theorems afforded by each. It is shown that scaling convolutions behave much like translational convolutions in the transform domain, so that the many diverse transforms exhibit only a few different forms of convolution theorem. The hypothesis is put forth that the space of integral transforms is partitionable based on these forms.
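For reference, the two standard identities being contrasted (classical results, not a reproduction of the paper's taxonomy) are the translational convolution theorem of the Fourier transform and the multiplicative, i.e. scale, convolution theorem of the Mellin transform:

(f * g)(t) = \int_{-\infty}^{\infty} f(\tau)\, g(t - \tau)\, d\tau, \qquad \mathcal{F}\{f * g\}(\omega) = \mathcal{F}\{f\}(\omega)\, \mathcal{F}\{g\}(\omega),

(f \circ g)(t) = \int_{0}^{\infty} f(\tau)\, g(t/\tau)\, \frac{d\tau}{\tau}, \qquad \mathcal{M}\{f \circ g\}(s) = \mathcal{M}\{f\}(s)\, \mathcal{M}\{g\}(s),

where \mathcal{M}\{f\}(s) = \int_{0}^{\infty} f(t)\, t^{s-1}\, dt. In both cases convolution in the signal domain maps to a plain product in the transform domain; only the underlying group operation (translation versus scaling) differs.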
Fault tolerance is increasingly important as society has come to depend on computers for more and more aspects of daily life. The current concern about the Y2K problem indicates just how much we depend on accurate computers. This paper describes work on time-shared TMR, a technique used to provide arithmetic operations that produce correct results in spite of circuit faults.
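As a toy software illustration of the voting principle behind TMR (the paper concerns hardware circuits and a time-shared variant, neither of which is modeled here), the sketch below runs the same operation three times and takes a bitwise 2-of-3 majority, so a transient fault in any single pass is outvoted; the flaky adder is a contrived stand-in for a circuit fault.

def majority_vote(a, b, c):
    # Bitwise 2-of-3 majority voter.
    return (a & b) | (a & c) | (b & c)

def tmr_add(x, y, adder):
    # Time-shared style TMR in software: reuse the same adder three times
    # in sequence and vote on the three results.
    results = [adder(x, y) for _ in range(3)]
    return majority_vote(*results)

# A deliberately flaky adder that flips one bit on its second invocation.
calls = {"n": 0}
def flaky_adder(x, y):
    calls["n"] += 1
    s = (x + y) & 0xFFFF
    return s ^ 0x0008 if calls["n"] == 2 else s

print(tmr_add(1234, 4321, flaky_adder))    # 5555, despite the injected fault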
Scale as a physical quantity is a recently developed concept. The scale transform can be viewed as a special case of the more general Mellin transform, and its mathematical properties are very applicable in the analysis and interpretation of signals subject to scale changes. A number of one-dimensional applications of the scale concept have been made in speech analysis, processing of biological signals, machine vibration analysis and other areas. Recently, the scale transform was also applied in multi-dimensional signal processing and used for image filtering and denoising. Discrete implementation of the scale transform can be carried out using logarithmic sampling and the well-known fast Fourier transform. Nevertheless, in the case of uniformly sampled signals, this implementation involves resampling. An algorithm not involving resampling of the uniformly sampled signals has been derived as well. In this paper, a modification of the latter algorithm for discrete implementation of the direct scale transform is presented. In addition, a similar concept is used to improve a recently introduced discrete implementation of the inverse scale transform. Estimation of the absolute discretisation errors showed that the modified algorithms have the desirable property of yielding a smaller region of possible error magnitudes. Experimental results are obtained using artificial signals as well as signals obtained from the temporomandibular joint. In addition, discrete implementations of the separable two-dimensional direct and inverse scale transforms are derived. Experiments with image restoration and scaling through the two-dimensional scale domain, using the novel implementation of the separable two-dimensional scale transform pair, are presented.
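The resampling-based baseline mentioned above (logarithmic sampling followed by an FFT) is easy to sketch; the code below is that classical baseline only, not the modified resampling-free algorithms of the paper, and the test signal and grid sizes are arbitrary.

import numpy as np

def scale_transform_logfft(f, t, n_log=1024):
    # D(c) = (2*pi)**-0.5 * integral_0^inf f(t) exp(-j*c*ln t) / sqrt(t) dt.
    # Substitute t = exp(u): resample f onto a uniform grid in u = ln t,
    # weight by exp(u/2) and apply an FFT.
    u = np.linspace(np.log(t[0]), np.log(t[-1]), n_log)
    du = u[1] - u[0]
    g = np.interp(np.exp(u), t, f) * np.exp(u / 2.0)
    D = np.fft.fft(g) * du / np.sqrt(2.0 * np.pi)
    c = 2.0 * np.pi * np.fft.fftfreq(n_log, d=du)    # scale variable c
    return c, D

# |D(c)| is invariant under t -> a*t with sqrt(a) amplitude compensation:
# compare the magnitudes obtained for f(t) and sqrt(2)*f(2t).
t = np.linspace(0.01, 10.0, 5000)
f = np.exp(-(np.log(t) / 0.5) ** 2)
_, D1 = scale_transform_logfft(f, t)
_, D2 = scale_transform_logfft(np.sqrt(2.0) * np.interp(2.0 * t, t, f, right=0.0), t)
print(np.max(np.abs(np.abs(D1) - np.abs(D2))))    # small relative to np.abs(D1).max()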
The ULV decomposition (ULVD) is an important member of a class of rank-revealing two-sided orthogonal decompositions used to approximate the singular value decomposition (SVD). The ULVD can be updated and downdated much faster than the SVD, hence its utility in the solution of recursive total least squares (TLS) problems. However, the robust implementation of the ULVD after the addition and deletion of rows (called updating and downdating, respectively) is not altogether straightforward. When updating or downdating the ULVD, the accurate computation of the subspaces necessary to solve the TLS problem is of great importance. In this paper, algorithms are given to compute simple parameters that can often show when good subspaces have been computed.
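For context, the reference computation that the ULVD approximates is the SVD-based total least squares solution sketched below; the updating and downdating machinery and the subspace-quality parameters of the paper are not shown, and the example data are synthetic.

import numpy as np

def tls_solve(A, b):
    # Total least squares via the SVD of the augmented matrix [A b]:
    # the solution is read off the right singular vector associated with
    # the smallest singular value.
    m, n = A.shape
    _, _, Vt = np.linalg.svd(np.hstack((A, b.reshape(-1, 1))))
    v = Vt[-1]
    return -v[:n] / v[n]

# Noise in both A and b, the setting where TLS is preferred over ordinary
# least squares.
rng = np.random.default_rng(0)
x_true = np.array([1.0, -2.0, 0.5])
A = rng.normal(size=(200, 3))
b = A @ x_true
A_noisy = A + 0.01 * rng.normal(size=A.shape)
b_noisy = b + 0.01 * rng.normal(size=b.shape)
print(tls_solve(A_noisy, b_noisy))    # close to [1, -2, 0.5]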