ISBN: (Print) 0819412813
A new method for classification of multi-spectral data is proposed. This method is based on fitting mixtures of multivariate Gaussian components to training and unlabeled samples by using the EM algorithm. Through a backtracking search strategy with appropriate depth bounds, a series of mixture models are compared. The validity of the candidate models is evaluated by considering their description lengths and allocation rates. The most suitable model is selected and the multi-spectral data are classified accordingly. The EM algorithm is mapped onto a massively parallel computer system to reduce the computational cost. Experimental results show that the proposed algorithm is more robust against variations in training samples than the conventional supervised Gaussian maximum likelihood classifier.
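The mixture-fitting step described above can be sketched as follows. This is a minimal stand-in, not the paper's method: it runs EM on a two-component univariate Gaussian mixture (the paper fits multivariate mixtures and adds model search), and all names and data are illustrative.

```python
# Minimal EM sketch for a two-component univariate Gaussian mixture.
import math

def normal_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def em_step(data, weights, means, variances):
    # E-step: posterior responsibility of each component for each sample
    resp = []
    for x in data:
        p = [w * normal_pdf(x, m, v)
             for w, m, v in zip(weights, means, variances)]
        s = sum(p)
        resp.append([pi / s for pi in p])
    # M-step: re-estimate weights, means, and variances
    n = len(data)
    new_w, new_m, new_v = [], [], []
    for k in range(len(weights)):
        rk = sum(r[k] for r in resp)
        new_w.append(rk / n)
        mu = sum(r[k] * x for r, x in zip(resp, data)) / rk
        new_m.append(mu)
        var = sum(r[k] * (x - mu) ** 2 for r, x in zip(resp, data)) / rk
        new_v.append(max(var, 1e-6))  # floor to avoid variance collapse
    return new_w, new_m, new_v

data = [0.0, 0.2, -0.1, 5.0, 5.3, 4.9]
w, m, v = [0.5, 0.5], [0.0, 4.0], [1.0, 1.0]
for _ in range(20):
    w, m, v = em_step(data, w, m, v)
```

Each EM iteration is a full pass over all samples, which is why the paper maps the algorithm onto a massively parallel machine.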
A new vector quantization method is proposed which incrementally generates a suitable codebook. During the generation process, new vectors are inserted in areas of the input vector space where the quantization error is especially high. A one-dimensional topological neighborhood makes it possible to interpolate new vectors from existing ones. Vectors not contributing to error minimization are removed. After the desired number of vectors is reached, a stochastic approximation phase fine tunes the codebook. The final quality of the codebooks is exceptional. A comparison with two methods for vector quantization is performed by solving an image compression problem. The results indicate that the new method is clearly superior to both other approaches.
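The growth idea above can be illustrated in a simplified form. This sketch omits the 1-D topological neighborhood and the vector-removal step, works in one dimension, and replaces the stochastic approximation phase with a few Lloyd iterations; all names and data are illustrative, not the paper's algorithm.

```python
# Simplified incremental codebook growth: a new code vector is
# interpolated inside the cell with the highest accumulated error.
def nearest(codebook, x):
    return min(range(len(codebook)), key=lambda i: (codebook[i] - x) ** 2)

def grow_codebook(data, target_size):
    codebook = [sum(data) / len(data)]            # start from the data mean
    while len(codebook) < target_size:
        err = [0.0] * len(codebook)               # squared error per cell
        members = [[] for _ in codebook]
        for x in data:
            i = nearest(codebook, x)
            err[i] += (codebook[i] - x) ** 2
            members[i].append(x)
        worst = max(range(len(codebook)), key=lambda i: err[i])
        cell = members[worst] or [codebook[worst]]
        # interpolate a new vector inside the worst cell
        codebook.append((codebook[worst] + max(cell)) / 2)
    # fine-tuning: Lloyd iterations stand in for stochastic approximation
    for _ in range(10):
        sums = [0.0] * len(codebook)
        counts = [0] * len(codebook)
        for x in data:
            i = nearest(codebook, x)
            sums[i] += x
            counts[i] += 1
        codebook = [s / c if c else v
                    for s, c, v in zip(sums, counts, codebook)]
    return sorted(codebook)

data = [0.0, 0.1, 1.0, 1.1, 5.0, 5.1]
cb = grow_codebook(data, 3)
```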
ISBN: (Print) 0819412813
We define a methodology for aligning multiple, three-dimensional, magnetic-resonance observations of the human brain over six degrees of freedom. The observations may be taken with disparate resolutions, pulse sequences, and orientations. The alignment method is a practical combination of off-line and interactive computation. First, an off-line computation automatically performs a robust surface extraction from each observation. Second, an operator interactively produces the alignment on a graphics workstation. For our experiments, we were able to complete both alignment tasks interactively, owing to the quick execution of our implementation of the off-line computation on a highly parallel supercomputer. To assess the accuracy of an alignment, we also propose a consistency measure.
ISBN: (Print) 0819409391
The paper presented here explores the possibility of applying neural networks to identify authorized users of a computer system. Computer security can be ensured only by restricting access to a computer system, which in turn requires a reliable means of identifying authorized users. The related research is based on the fact that every human being is distinguished by many unique physical characteristics. It has been known since before the age of computers that no two individuals sign their names identically. Signature samples collected from a group of individuals are analyzed, and a neural network-based system that can recognize these signatures is designed.
ISBN: (Print) 0819409391
A technique for recoding multidimensional data in a representation of reduced dimensionality is presented. A non-linear encoder-decoder for multidimensional data with compact representations is developed. The technique of training a neural network to learn the identity map through a 'bottleneck' is extended to networks with non-linear representations, and an objective function which penalizes entropy of the hidden unit activations is shown to result in low dimensional encodings. For scalar time series data, a common technique is phase-space reconstruction by embedding the time-lagged scalar signal in a higher dimensional space. Choosing the proper embedding dimension is difficult. By using non-linear dimensionality reduction, the intrinsic dimensionality of the underlying system may be estimated.
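The delay embedding mentioned above is straightforward to sketch: a scalar time series is lifted into a higher-dimensional phase space by stacking time-lagged copies of the signal. The function name and parameters below are illustrative, not from the paper.

```python
# Phase-space reconstruction by time-delay embedding of a scalar series.
def delay_embed(series, dim, lag=1):
    """Return the list of `dim`-dimensional delay vectors
    (series[i], series[i+lag], ..., series[i+(dim-1)*lag])."""
    n = len(series) - (dim - 1) * lag
    return [tuple(series[i + j * lag] for j in range(dim)) for i in range(n)]

series = [0, 1, 2, 3, 4, 5]
vectors = delay_embed(series, dim=3, lag=1)
# vectors: [(0, 1, 2), (1, 2, 3), (2, 3, 4), (3, 4, 5)]
```

Choosing `dim` is the difficulty the abstract refers to; the proposed entropy-penalized bottleneck network estimates the intrinsic dimensionality from such embedded vectors.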
ISBN: (Print) 0819409391
In this paper we use neural network algorithms for office layout. A pixel matrix of coarse pixels is used to represent the objects of the room and their spatial relation. For each pixel the probabilities of the different objects are predicted from the neighboring pixels, assuming that the geometrical structure is mainly determined by local characteristics. Local receptive fields are employed to capture these local interactions using backpropagation networks. The reconstruction of the complete scene is achieved by an iterative process. Starting with given marginal constraints (or missing information for specific locations), each feature map performs an association with respect to its central pixel. This corresponds to the simulation of a Markov random field. External constraints on the sum of probabilities are taken into account using the iterative proportional fitting algorithm. The viability of the approach is demonstrated by an example.
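The iterative proportional fitting step can be sketched in isolation: alternately rescale the rows and columns of a probability table until both sets of marginals match their targets. The 2x2 table and target values below are illustrative, not from the paper.

```python
# Iterative proportional fitting (IPF) on a small contingency table.
def ipf(table, row_targets, col_targets, iters=50):
    rows, cols = len(table), len(table[0])
    for _ in range(iters):
        # scale each row so its sum matches the row target
        for i in range(rows):
            s = sum(table[i])
            if s > 0:
                table[i] = [v * row_targets[i] / s for v in table[i]]
        # scale each column so its sum matches the column target
        for j in range(cols):
            s = sum(table[i][j] for i in range(rows))
            if s > 0:
                for i in range(rows):
                    table[i][j] *= col_targets[j] / s
    return table

t = ipf([[1.0, 1.0], [1.0, 1.0]],
        row_targets=[0.3, 0.7], col_targets=[0.6, 0.4])
```

After convergence the table satisfies both marginal constraints while staying as close as possible (in the cross-product-ratio sense) to the initial table.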
ISBN: (Print) 0819409391
We propose the use of self-organizing maps (SOMs) and learning vector quantization (LVQ) as an initialization method for the training of continuous observation density hidden Markov models (CDHMMs). We apply CDHMMs to model phonemes in the transcription of speech into phoneme sequences. The Baum-Welch maximum likelihood estimation method is very sensitive to the initial parameter values if the observation densities are represented by mixtures of many Gaussian density functions. We suggest training CDHMMs in two phases. First, the vector quantization methods are applied to find suitable placements for the means of the Gaussian density functions to represent the observed training data. Maximum likelihood estimation is then used to find the mixture weights and state transition probabilities and to re-estimate the Gaussians to obtain the best possible models. Initializing the means of the distributions by SOMs or LVQ allows good recognition results to be achieved with substantially fewer Baum-Welch iterations than are needed with random initial values. Also, in the segmental K-means algorithm the number of iterations can be markedly reduced with a suitable initialization. Furthermore, we experiment with enhancing the discriminatory power of the phoneme models by adaptively training the state output distributions using the LVQ algorithm.
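The first phase, placing the Gaussian means on the data before maximum likelihood training, can be illustrated with a simple stand-in. The paper uses SOMs and LVQ; plain k-means is substituted here only to show the initialization idea, and all names and data are illustrative.

```python
# Placing Gaussian means on the data via k-means (a stand-in for
# the SOM/LVQ placement used in the paper).
def kmeans_means(data, k, iters=20):
    means = data[:k]                      # deterministic seed for the sketch
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in data:
            i = min(range(k), key=lambda j: (x - means[j]) ** 2)
            clusters[i].append(x)
        means = [sum(c) / len(c) if c else means[i]
                 for i, c in enumerate(clusters)]
    return sorted(means)

data = [0.0, 0.1, 0.2, 4.0, 4.1, 4.2]
means = kmeans_means(data, 2)
```

Baum-Welch re-estimation would then start from these means instead of random values, which is what reduces the number of iterations required.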
ISBN: (Print) 0819409391
The study of connectionist models for pattern recognition is mainly motivated by their presumed simultaneous feature selection and classification. Character recognition is a common test case for illustrating the feature extraction and classification characteristics of neural networks. Most of the variability concerning size and rotation can be handled easily, while acquisition conditions are usually controlled. Many examples of neural character recognition applications have been presented; the most successful results for optical character recognition (OCR) with image inputs were reported on a layered network (LeCun et al., 1990) integrating the feature selection and invariance notions introduced earlier in neocognitron networks. Previously, we have presented a supervised learning algorithm based on Kohonen's self-organizing feature maps and its applications to image and speech processing (Midenet et al., 1991). From a pattern recognition point of view, the first network performs local feature extraction while the second does global statistical template matching. We describe these models and their comparative results when applied to a common French handwritten zip-code database. We discuss possible cooperation schemes and show that the performance obtained by these networks working in parallel exceeds that of the networks working separately. We conclude with possible extensions of this work for automatic document processing systems.
ISBN: (Print) 0819409391
Increasingly huge amounts of digital data from a wide range of sources, such as B-ISDN services, satellite transmission of photographs, and police databases of human face images, are being transmitted and stored, while both transmission channel capacity and disk space are limited. For advanced applications such as multimedia terminals and HDTV, the problems are even more apparent. It is therefore important that efficient image compression algorithms be used to reduce the required transmission capacity and storage space. In this paper, a scheme for image data compression with an adaptive BP neural network is presented. The data compression property of mapping an original image to a feature space of reduced dimensionality is utilized. Images are divided into a set of 8 × 8 sub-image blocks, which are applied as inputs to a three-layer BP neural network. Computer simulations show that the results are better than those of Sonehara et al.
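The preprocessing step above, dividing an image into 8 × 8 blocks that become 64-dimensional input vectors, can be sketched as follows. The BP network itself is omitted, and the function name and test image are illustrative, not from the paper.

```python
# Split an image (list of pixel rows) into flattened 8x8 blocks,
# each of which would be one 64-dimensional input to the BP network.
def split_blocks(image, bs=8):
    h, w = len(image), len(image[0])
    blocks = []
    for r in range(0, h, bs):
        for c in range(0, w, bs):
            block = [image[r + i][c + j]
                     for i in range(bs) for j in range(bs)]
            blocks.append(block)          # one flattened 64-dim vector
    return blocks

# a 16x16 test image yields four 8x8 blocks
image = [[(r * 16 + c) % 256 for c in range(16)] for r in range(16)]
blocks = split_blocks(image)
```

Compression comes from the network's hidden layer being much narrower than 64 units: only the hidden activations per block need to be transmitted or stored.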