ISBN:
(Print) 0819422118; 9780819422118
In this paper, we develop a new intensity-based stereo matching algorithm using maximum a posteriori estimation within the framework of Markov random fields. The intensity-based stereo matching process is formulated as the search for the minimum of a cost energy function that maximizes the a posteriori probability. We introduce an objective cost function, the energy function of a piecewise smooth disparity field, in which discontinuities and occlusions are explicitly taken into account. To minimize this non-convex energy function for disparity estimation, we propose a relaxation algorithm, mean field annealing, which provides results nearly as good as simulated annealing but with much faster convergence. Unlike conventional correlation matching or feature matching, the proposed method provides a dense array of disparities, eliminating the need for interpolation in 3D structure reconstruction. Several experimental results on synthetic and real stereo images are presented to evaluate the performance of the proposed algorithm.
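As a rough illustration of the mean field annealing step, here is a minimal sketch on a single scanline, assuming a quadratic data term and a quadratic smoothness coupling; the weight `lam`, the annealing schedule, and the omission of the explicit discontinuity/occlusion terms are simplifications, not the paper's formulation:

```python
import numpy as np

def mean_field_annealing_stereo(left, right, max_disp, lam=1.0,
                                t0=10.0, t_min=0.1, cool=0.9):
    """Toy 1D mean-field annealing for MAP disparity estimation.

    left, right : 1D intensity arrays (one scanline each).
    The data term penalizes squared intensity mismatch; the prior
    favors piecewise smooth disparities. Weights and the cooling
    schedule are illustrative only.
    """
    n = len(left)
    disps = np.arange(max_disp + 1)
    # Data cost: squared intensity difference for each candidate disparity.
    cost = np.empty((n, len(disps)))
    for d in disps:
        cost[:, d] = (left - np.roll(right, d)) ** 2
    # Mean-field marginals q[i, d], initialized uniformly.
    q = np.full((n, len(disps)), 1.0 / len(disps))
    t = t0
    while t > t_min:
        mean_d = q @ disps                      # current mean disparity field
        for i in range(n):
            nbrs = [j for j in (i - 1, i + 1) if 0 <= j < n]
            # Smoothness: quadratic coupling to the neighbors' mean field.
            smooth = lam * sum((disps - mean_d[j]) ** 2 for j in nbrs)
            e = cost[i] + smooth
            q[i] = np.exp(-(e - e.min()) / t)   # mean-field update
            q[i] /= q[i].sum()
        t *= cool                               # lower the temperature
    return q.argmax(axis=1)                     # MAP-style disparity estimate
```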
This paper presents a new technique for the detection and description of moving objects in natural scenes which is based on a statistical multi-feature analysis of video sequences. In most conventional schemes for the detection of moving objects, temporal differences of subsequent images from a video sequence are evaluated by so-called change detection algorithms. These methods are based on the assumption that significant temporal changes of an image signal are caused by moving objects in the scene. However, as temporal changes of an image signal can as well be caused by many other sources (camera noise, varying illumination, small camera motion), such systems are afflicted with the dilemma of either causing many false alarms or failing to detect relevant events. To cope with this problem, the additional features of texture and motion beyond temporal signal differences are extracted and evaluated in the new algorithm. The adaptation of this method to normal fluctuations of the observed scene is performed by a time-recursive space-variant estimation of the temporal probability distributions of the different features (signal difference, texture and motion). Feature data which differ significantly from the estimated distributions are interpreted to be caused by moving objects.
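A minimal sketch of the time-recursive, space-variant estimation for a single feature map, assuming a Gaussian model per pixel (the paper estimates the actual probability distributions of signal difference, texture, and motion; the adaptation rate `alpha` and the significance threshold `k` are illustrative assumptions):

```python
import numpy as np

class RecursiveFeatureModel:
    """Time-recursive, space-variant Gaussian model of one feature map
    (e.g., per-pixel temporal signal differences)."""

    def __init__(self, shape, alpha=0.05, k=3.0):
        self.mean = np.zeros(shape)   # per-pixel running mean of the feature
        self.var = np.ones(shape)     # per-pixel running variance
        self.alpha, self.k = alpha, k

    def update_and_detect(self, feature):
        # Feature values deviating by more than k standard deviations from
        # the learned distribution are attributed to moving objects.
        moving = np.abs(feature - self.mean) > self.k * np.sqrt(self.var)
        # Recursively adapt the model to normal scene fluctuations, but only
        # where no object was detected, so that objects do not contaminate
        # the background statistics.
        a = np.where(moving, 0.0, self.alpha)
        self.mean += a * (feature - self.mean)
        self.var += a * ((feature - self.mean) ** 2 - self.var)
        return moving
```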
ISBN:
(Print) 0819422118; 9780819422118
Gray-scale textures can be viewed as random surfaces in gray-scale space. One method of constructing such surfaces is the Boolean random function model wherein a surface is formed by taking the maximum of shifted random functions. This model is a generalization of the Boolean random set model in which a binary image is formed by the union of randomly positioned shapes. The Boolean random set model is composed of two independent random processes: a random shape process and a point process governing the placement of grains. The union of the randomly shifted grains forms a binary texture of overlapping objects. For the Boolean random function model, the random set or grain is replaced by a random function taking values among the admissible gray values. The maximum over all the randomly shifted functions produces a model of a rough surface that is appropriate for some classes of textures. The Boolean random function model is analyzed by viewing its behavior on intersecting lines. Under mild conditions in the discrete setting, 1D Boolean random set models are induced on intersecting lines. The discrete 1D model has been completely characterized in previous work. This analysis is used to derive a maximum-likelihood estimator for the Boolean random function.
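For intuition, here is a short sketch of how a discrete Boolean random function texture can be synthesized by taking the pointwise maximum of randomly shifted primitive functions; the paraboloid grain, the grain count, and the amplitude range are illustrative choices, not part of the model's analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

def boolean_random_function_texture(shape=(128, 128), n_grains=400,
                                    radius=6, height=255):
    """Texture as the pointwise maximum of randomly placed 'bumps'."""
    img = np.zeros(shape)
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    # Primitive (grain) function: a paraboloid bump of unit height.
    bump = np.clip(1.0 - (xx**2 + yy**2) / radius**2, 0.0, None)
    for _ in range(n_grains):
        cy = rng.integers(0, shape[0])          # point process: grain center
        cx = rng.integers(0, shape[1])
        h = rng.uniform(0.3, 1.0) * height      # random gray-level amplitude
        y0, y1 = max(cy - radius, 0), min(cy + radius + 1, shape[0])
        x0, x1 = max(cx - radius, 0), min(cx + radius + 1, shape[1])
        by0, bx0 = y0 - (cy - radius), x0 - (cx - radius)
        patch = h * bump[by0:by0 + (y1 - y0), bx0:bx0 + (x1 - x0)]
        # Pointwise maximum realizes the Boolean random function model.
        img[y0:y1, x0:x1] = np.maximum(img[y0:y1, x0:x1], patch)
    return img
```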
ISBN:
(Print) 0819422118
In order to address simultaneously two functionalities required by MPEG-4, including content-based scalability, we introduce a segmentation-based wavelet transform (SBWT). SBWT takes into account both the mathematical properties of multiresolution analysis and the flexibility of region-based approaches to image compression. The associated methodology has two stages: 1) segmentation of the image into convex, polygonal regions; 2) a 2D wavelet transform of the signal corresponding to each region. In this paper, we mathematically study a method for constructing a multiresolution analysis $(V_j^\Omega)_{j \in \mathbb{N}}$ adapted to a polygonal region, which provides adaptive region-based filtering. The explicit construction of scaling function, pre-wavelet, and orthonormal wavelet bases defined on a polygon is carried out using the theory of Toeplitz operators. The corresponding expression can be interpreted as a location property which allows defining interior and boundary scaling functions. Concerning orthonormal wavelets and pre-wavelets, a similar expansion is obtained by taking advantage of the properties of the orthogonal projector $P_{(V_j^\Omega)^\perp}$ from the space $V_{j+1}^\Omega$ onto the space $(V_j^\Omega)^\perp$. Finally, the mathematical results provide a simple and fast algorithm adapted to polygonal regions.
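As a very loose, hypothetical stand-in for the region-based transform (not the boundary-adapted bases constructed in the paper), one can transform each segmented region independently with a standard separable wavelet; the zero padding used here is exactly the shortcut that the adapted interior and boundary scaling functions avoid:

```python
import numpy as np

def haar2d_level(x):
    """One decomposition level of a separable, orthonormal 2D Haar transform
    (x must have even dimensions)."""
    a = (x[:, ::2] + x[:, 1::2]) / np.sqrt(2.0)   # row averages
    d = (x[:, ::2] - x[:, 1::2]) / np.sqrt(2.0)   # row details
    x = np.hstack([a, d])
    a = (x[::2, :] + x[1::2, :]) / np.sqrt(2.0)   # column averages
    d = (x[::2, :] - x[1::2, :]) / np.sqrt(2.0)   # column details
    return np.vstack([a, d])

def segmentation_based_wavelet(image, labels):
    """Naive per-region transform: crop each region to its bounding box,
    zero-pad outside the region (and to even size), and transform it
    independently. The paper instead builds wavelets adapted to the
    polygon boundary, avoiding the padding artifacts incurred here."""
    coeffs = {}
    for r in np.unique(labels):
        ys, xs = np.nonzero(labels == r)
        h = ys.max() - ys.min() + 1
        w = xs.max() - xs.min() + 1
        patch = np.zeros((h + h % 2, w + w % 2))
        crop = image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
        mask = labels[ys.min():ys.max() + 1, xs.min():xs.max() + 1] == r
        patch[:h, :w] = np.where(mask, crop, 0.0)
        coeffs[r] = haar2d_level(patch)
    return coeffs
```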
ISBN:
(Print) 0819422118; 9780819422118
The analysis of SAR images first requires reducing the speckle noise that is due to the coherent character of the RADAR signal. Application of the minimum variance bound estimator leads to processing the energy image instead of the amplitude image for the reduction of this multiplicative noise. The proposed analysis methods are based on a multiscale vision model in which the image is described only by its significant structural features at a set of dyadic scales. The multiscale analysis is performed by a redundant discrete wavelet transform, the a trous algorithm. The filtering algorithm is iterative. At each step we compute the ratio between the observed energy image and the restored one. We detect the significant structures at each scale, taking into account the exponential probability distribution function of the energy when determining the significant wavelet coefficients. The ratio is restored from its significant coefficients, and the restored image is updated. The iterations are stopped when no significant structure is detected in the ratio. We are then interested in extracting and analyzing the objects contained in the image. The multiscale analysis affords an approach well adapted to diffuse objects without contrasted edges. An object is defined as a local maximum in the wavelet transform space (WTS). All its structures form a 3D connected set which is hierarchically organized; this set gives the description of an object in the WTS. The image of each object is restored by an inverse algorithm. The comparison between images taken at different epochs is done using the multiscale vision model, which allows us to enhance the features at a given scale that have significantly varied. The correlation coefficients between the structures detected at each scale are far from the ones obtained between the pixel energies. For example, this method is very suitable for detecting and describing faint large-scale variations.
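A compact sketch of the a trous decomposition with a B3-spline kernel, together with a per-scale significance test; note that the k-sigma Gaussian rule below is a common stand-in, whereas the paper derives the thresholds from the exponential distribution of the speckle energy:

```python
import numpy as np
from scipy.ndimage import convolve1d

B3 = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0

def a_trous(image, n_scales=4):
    """Redundant 'a trous' wavelet transform with a B3-spline kernel.
    Returns the wavelet planes w_1..w_J and the smoothed residual c_J."""
    c = image.astype(float)
    planes = []
    for j in range(n_scales):
        # Insert 2**j - 1 zeros ("holes") between the kernel taps.
        k = np.zeros(4 * 2**j + 1)
        k[::2**j] = B3
        s = convolve1d(convolve1d(c, k, axis=0, mode='reflect'),
                       k, axis=1, mode='reflect')
        planes.append(c - s)        # wavelet plane at scale j
        c = s
    return planes, c

def significant(planes, k=3.0):
    """Keep only coefficients above k times a per-scale noise estimate."""
    out = []
    for w in planes:
        sigma = np.median(np.abs(w)) / 0.6745   # robust noise estimate
        out.append(np.where(np.abs(w) > k * sigma, w, 0.0))
    return out
```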
ISBN:
(Print) 0819422118
Stochastic clutter can often be modeled as a piecewise stationary random field. The individual stationary subregions of homogeneity in the field can then be characterized by marginal density functions. This level of characterization is often sufficient for determining the clutter type on a local basis. We present a technique for the simultaneous characterization of the subregions of a random field based on semiparametric density estimation over the entire random field. This technique is based on a borrowed-strength methodology that allows the use of observations from potentially dissimilar subregions to improve local density estimation and hence random-process characterization. The approach is illustrated through an application to a set of digitized mammogram images, which requires processing five million observations. The results indicate that there is sufficient similarity between images, in addition to the more intuitively obvious within-image similarities, to justify such a procedure. The results are analyzed for the utility of such a procedure to produce superior models, in terms of 'stochastic clutter characterization', for target detection applications in which there are variable background processes.
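As a loose illustration of the borrowing idea (not the authors' semiparametric procedure), a density estimate fitted to one subregion's observations can be mixed with an estimate pooled across all subregions; the fixed mixing weight below is a placeholder, whereas the paper selects what to borrow from potentially dissimilar subregions in a principled way:

```python
import numpy as np
from scipy.stats import gaussian_kde

def borrowed_strength_density(region_obs, all_obs, weight=0.5):
    """Mix a kernel density estimate from one subregion with one fitted
    to the pooled observations, stabilizing the local estimate when the
    subregion has few samples."""
    local = gaussian_kde(region_obs)    # density from this subregion only
    pooled = gaussian_kde(all_obs)      # density from all subregions
    return lambda x: weight * local(x) + (1.0 - weight) * pooled(x)
```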
ISBN:
(Print) 0819422118
We present a weighting scheme for local weighted regression designed to achieve two goals: (1) to reduce noise within image regions of smoothly varying intensities; and (2) to maintain sharp boundaries between image regions. Such a procedure can function as a preprocessing step in an image segmentation problem or simply as an image enhancement technique.
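One plausible weighting of this kind, not necessarily the authors' scheme, multiplies a spatial falloff by an intensity falloff so that the local fit does not average across region boundaries; in this zeroth-order (weighted local mean) form it coincides with the familiar bilateral filter:

```python
import numpy as np

def edge_preserving_lwr(image, radius=3, sigma_s=2.0, sigma_r=20.0):
    """Local weighted regression of order 0 (a weighted local mean) with
    weights that decay with both spatial distance and intensity
    difference. Parameter values are illustrative."""
    pad = np.pad(image.astype(float), radius, mode='reflect')
    out = np.zeros(image.shape, dtype=float)
    h, w = image.shape
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    w_spatial = np.exp(-(yy**2 + xx**2) / (2 * sigma_s**2))
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Down-weight pixels whose intensity differs from the center,
            # so smoothing does not cross sharp region boundaries.
            w_range = np.exp(-(win - float(image[i, j]))**2 / (2 * sigma_r**2))
            weights = w_spatial * w_range
            out[i, j] = (weights * win).sum() / weights.sum()
    return out
```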
ISBN:
(Print) 0819422118
A normalization algorithm is proposed that improves the reconstruction of signals. After decomposing a signal into even linear bandpass-filtered signals and a low-pass residual, it can be reconstructed reasonably well for a good choice of filters. However, to obtain good results the filter parameters must be chosen such that they cover the frequency domain sufficiently well, which is often difficult for a small set of filters. We derive and demonstrate that in many situations it can be profitable to normalize the reconstructed image with respect to two global statistical parameters of the original image.
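A minimal sketch of such a normalization, assuming the two global statistics are the mean and the standard deviation (our reading of the abstract):

```python
import numpy as np

def normalize_reconstruction(recon, original):
    """Rescale a reconstructed image so that its global mean and standard
    deviation match those of the original, compensating for frequency
    bands that the filter set covered poorly."""
    r_mu, r_sd = recon.mean(), recon.std()
    o_mu, o_sd = original.mean(), original.std()
    return (recon - r_mu) * (o_sd / (r_sd + 1e-12)) + o_mu
```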