Parallel, distributed, connectionist neural networks present powerful computational metaphors for diverse applications ranging from machine perception to artificial intelligence [1-3,6]. Historically, such systems hav...
Most machine vision systems are based on a three-parameter representation of color. It is argued that, in contrast to three-parameter methods, the whole color spectrum should be used for recognition, resulting in improved accuracy. A method based on the spectrum as the color representation is the subspace method, which is capable of accurate color recognition after a learning phase. The subspace method stems from well-known neural-network models for associative memory. A practical parallel realization of the subspace model is an optical one. Different possibilities for optical implementation are discussed, and concrete color classification results are given.
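The subspace method described above can be sketched numerically: each color class is represented by the leading principal directions of its training spectra, and a new spectrum is assigned to the class whose subspace captures the most of its energy. A minimal Python sketch follows; the PCA-style subspace estimation, subspace dimension, and wavelength sampling are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def learn_subspace(spectra, dim):
    """Learn a class subspace: the top principal directions of the
    training spectra (columns of the returned matrix are orthonormal)."""
    X = np.asarray(spectra, dtype=float)        # (n_samples, n_wavelengths)
    # SVD of the data matrix gives the principal directions as rows of vt.
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return vt[:dim].T                           # (n_wavelengths, dim)

def classify(spectrum, subspaces):
    """Assign the spectrum to the class whose subspace captures the
    largest fraction of its energy (squared projection norm)."""
    s = np.asarray(spectrum, dtype=float)
    scores = [np.linalg.norm(U.T @ s) for U in subspaces]
    return int(np.argmax(scores))
```

Classification is a handful of inner products per class, which is what makes a parallel optical realization attractive.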
Models of objects stored in memory have been shown to be useful for guiding the processing of computer vision systems [1,3,7,8,19,23]. A major consideration in such systems, however, is how stored models are initially accessed and indexed by the system [17,19,26]. As the number of stored models increases, the time required to search memory for the correct model grows. Parallel, distributed, connectionist neural networks have been shown to have appealing content-addressable memory properties [2,4,5,9,15]. This paper discusses an architecture for efficient storage and retrieval of model memories stored as stable patterns of activity in a parallel, distributed, connectionist neural network. The emergent properties of content addressability and resistance to noise are exploited to index the appropriate object-centered model from image-centered primitives. The system consists of three network modules, each of which represents information relative to a different frame of reference. The model memory network is a large state-space vector in which fields correspond to ordered component objects and to relative, object-based spatial relationships between those components. The component assertion network represents evidence about the existence of object primitives in the input image; it establishes local frames of reference for object primitives relative to the image-based frame of reference. The spatial relationship constraint network is an intermediate representation that enables the association between the object-based and image-based frames of reference. This intermediate level represents information about possible object orderings and establishes relative spatial relationships from the image-based information in the component assertion network below; it is also constrained by the lawful object orderings in the model memory network above. The system design is consistent with current psychological theories of recognition by components [6].
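The content-addressable recall this architecture relies on can be illustrated with a classic Hopfield-style associative memory: patterns are stored as stable states of the network via a Hebbian outer-product rule, and a noisy probe relaxes to the nearest stored pattern. This is a generic sketch of the emergent property the paper exploits, not the paper's three-module architecture; the network size and update schedule are illustrative.

```python
import numpy as np

def hopfield_train(patterns):
    """Hebbian outer-product storage of bipolar (+1/-1) patterns."""
    P = np.asarray(patterns, dtype=float)       # (n_patterns, n_units)
    n = P.shape[1]
    W = (P.T @ P) / n
    np.fill_diagonal(W, 0.0)                    # no self-connections
    return W

def hopfield_recall(W, probe, steps=20):
    """Synchronous sign updates until the state stops changing."""
    s = np.asarray(probe, dtype=float).copy()
    for _ in range(steps):
        nxt = np.sign(W @ s)
        nxt[nxt == 0] = 1                       # break ties toward +1
        if np.array_equal(nxt, s):
            break
        s = nxt
    return s
```

A probe corrupted by a flipped bit still settles into the stored pattern, which is the "resistance to noise" the abstract describes.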
An optical computer that performs the classification of an input object pattern into one of two learned classes is designed and demonstrated. The classifier is an optical implementation of a neural-network model of computation featuring learning, self-organization, and decision-making competition. Neural computation is discussed, including models of learning networks and the motivation for an optical implementation. A discussion of photorefractive-crystal holographic storage and adaptation is presented, followed by experimental results on writing and erasing gratings in several different crystals. The optical network features a photorefractive crystal to store holographic interconnection weights and an opto-electronic circuit to provide competitive decision making and feedback. Results for the optical learning network and its operation as an associative memory are followed by extensions of the architecture that allow improved performance and greater flexibility.
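The competitive decision-making stage can be illustrated in simulation with a simple winner-take-all circuit based on lateral inhibition: each unit is suppressed in proportion to the total activity of its rivals until only one remains active. This is a generic textbook mechanism, not the paper's opto-electronic circuit; the inhibition strength and iteration limit are arbitrary choices.

```python
import numpy as np

def winner_take_all(activations, inhibition=0.2, steps=50):
    """Iterative lateral inhibition: each unit is reduced by a fraction
    of the summed activity of the other units, clipped at zero, until a
    single winner survives."""
    a = np.asarray(activations, dtype=float).copy()
    for _ in range(steps):
        total = a.sum()
        a = np.maximum(a - inhibition * (total - a), 0.0)
        if np.count_nonzero(a) <= 1:
            break
    return a
```

With two learned classes, as in the abstract, the circuit reduces to picking whichever class unit receives the stronger holographic correlation signal.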
We present a method for computing optical flow using a neural network. In order to detect rotating and translating objects in the scene, we use rotation-invariant measurement primitives (intensity values and their principal curvatures) to compute the optical flow, under the assumption that changes in intensity are strictly due to the motion of the object. We first fit a 2-D polynomial to obtain a smooth, continuous image-intensity function within a window and estimate subpixel intensity values and their principal curvatures. Under a local rigidity assumption and smoothness constraints, a neural network is then employed to carry out the computation based on the estimated intensity values and their principal curvatures. A deterministic decision rule is used for the update scheme. Owing to the dense measurement primitives, a dense optical flow with subpixel accuracy is obtained in only a few iterations. Since all neurons are updated in parallel, the algorithm can be implemented in real time.
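The first step, fitting a 2-D polynomial to obtain subpixel intensities and principal curvatures, can be sketched as a least-squares quadratic fit over a square window, with the principal curvatures taken as the eigenvalues of the fitted surface's Hessian. The window size and the quadratic model are assumptions consistent with, but not taken verbatim from, the abstract.

```python
import numpy as np

def fit_quadratic_patch(window):
    """Least-squares fit of I(x, y) ~ a + b*x + c*y + d*x^2 + e*x*y + f*y^2
    over a square window centered at (0, 0); returns the principal
    curvatures at the centre (eigenvalues of the fitted Hessian) and the
    coefficient vector, from which subpixel intensities can be read off."""
    w = np.asarray(window, dtype=float)
    r = (w.shape[0] - 1) / 2.0
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    A = np.stack([np.ones_like(x), x, y, x * x, x * y, y * y],
                 axis=-1).reshape(-1, 6)
    coef, *_ = np.linalg.lstsq(A, w.ravel(), rcond=None)
    a, b, c, d, e, f = coef
    H = np.array([[2 * d, e], [e, 2 * f]])      # Hessian of fitted surface
    k1, k2 = np.linalg.eigvalsh(H)              # principal curvatures
    return (k1, k2), coef
```

Because the fitted surface is continuous, intensities and curvatures can be evaluated between pixel centres, which is what yields the dense, subpixel measurement primitives.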
A typical word recognition system requires that several major tasks be performed; necessary components include (1) a preprocessor to extract the significant information from the speech time waveform, (2) a section which stores the training set of word models or templates and then compares an unknown input pattern with the training set, and (3) decision logic to determine the best-matching word. This thesis reports on experiments that explore isolated word recognition with an artificial neural network based on the Huberman-Hogg (H-H) model. The results presented in this manuscript were developed from computer simulations of the speech recognition system, but an electro-optical H-H system is also proposed and described. The principal goal of the experimental work is to test the suitability of the ambiguity-function representation for preprocessing speech data. Employing the ambiguity function for the speech-signal representation was expected to provide two advantages: the input patterns to the H-H network should become less sensitive to time shifts of the total speech waveform, perhaps even making time alignment of the words unnecessary; and the ambiguity function of a signal can be obtained in real time with a coherent optical processor, as shown by Marks, Walkup, and Krile (1977), to provide two-dimensional input to an electro-optical H-H network. Since studies indicate that the H-H neural network effectively processes a variety of input functions, this network was chosen as the classifier/recognizer for the ambiguity-function patterns representing speech data. Ambiguity functions for isolated words are generated from digitized voice recordings and then submitted to the H-H network for training and recognition testing. Which pattern of the training set best matches the unknown pattern is a decision that clearly depends on the distance metric employed, and these experiments explore several similarity measures. Following an introductory discussion including an overview of spe...
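The ambiguity-function preprocessing can be sketched in discrete form: for each time lag, the lag product of the signal with itself is Fourier-transformed over time. The asymmetric, circular variant below is one common discretization, chosen here for simplicity; the exact form used in the thesis is not specified. Its magnitude is invariant to circular time shifts of the input, which is the insensitivity to time alignment the thesis hopes to exploit.

```python
import numpy as np

def ambiguity(s):
    """Discrete (asymmetric, circular) ambiguity function:
    A[tau, nu] = sum_t s[t+tau] * conj(s[t]) * exp(-2j*pi*nu*t/N),
    computed one lag at a time with an FFT over t."""
    s = np.asarray(s, dtype=complex)
    N = len(s)
    A = np.empty((N, N), dtype=complex)
    for tau in range(N):
        prod = np.roll(s, -tau) * np.conj(s)    # circular lag product
        A[tau] = np.fft.fft(prod)               # Doppler/frequency axis
    return A
```

Since a circular shift of the signal only adds a linear phase to each row, the magnitude surface |A| fed to the classifier is unchanged by word-onset jitter.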
Computational machines perform deterministic mappings of inputs to outputs. Implemented and proposed machines include the von Neumann computer, calculators, special-purpose logic devices, neural networks, etc. Each differs in its mechanism, efficiency, and utility in mapping input states to output states across the myriad potential computing applications. Neural-network features include parallel execution, adaptive learning, generalization, etc. Neural networks are potentially powerful devices for many classes of applications, but not all. Often, considerable time and emotional backing is given to a model whose task can already be accomplished far more efficiently with an alternate technology. For example, arbitrary mappings of boolean input and output vectors, where the majority of input states are defined and important, coupled with a guaranteed high-speed learning mechanism, can be efficiently accomplished with an ordinary RAM chip. This talk seeks to delineate the classes and features of applications which may truly require the special features of neural-network models and which are not efficiently fulfilled by current alternate technologies. Applications are dichotomized by specific features into classes which lend themselves to efficient implementation with various computing machines. The application classes which fall into the expedient domain for neural-network models are detailed. Features of this domain include sparse input/output domains, types of generalization, adaptation, parallelism, spontaneous versus supervised learning, and symbolic aspects. Different proposed models which efficiently implement the subclasses of the neural-network application domain are summarized.
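The RAM example is concrete enough to sketch: "learning" an arbitrary boolean mapping is a single table write per defined input word, and evaluation is a single read. A minimal sketch in Python, with a hardware RAM modeled as a flat list; the 3-input parity task in the test is an illustrative choice.

```python
def make_ram_mapper(truth_table, n_inputs):
    """'Train' a RAM by writing each defined input word's output into a
    table indexed by the input bits; lookup is then a single read.
    Undefined addresses stay None, mirroring uninitialized RAM."""
    ram = [None] * (1 << n_inputs)
    for bits_in, bits_out in truth_table.items():
        ram[bits_in] = bits_out          # one write per training example
    return lambda addr: ram[addr]
```

This makes the talk's point precise: when most input states are defined and important, the RAM gives guaranteed one-cycle "learning" and recall, whereas a neural network earns its keep only on the sparse, generalization-demanding applications listed afterward.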
Much recent work in the field of optical computing has concentrated on parallel associative-memory/neural-network algorithms. The authors present results from an implementation of a simple neural network. An optically addressed spatial light modulator, the Hughes liquid crystal light valve (LCLV), is used to perform thresholding, while thin amplitude computer-generated holograms perform the weighted interconnections. Two 6-bit vectors and their complements were stored as the memories, and the outputs for all possible inputs were observed. These outputs were then compared with those obtained from a computer simulation of the system.
An optical neural-network architecture is proposed that captures engineering design expertise and makes it available to designers. The network and its use as an associative memory are described. A novel fast learning algorithm is shown to be orders of magnitude faster than backpropagation. Reasons are presented for the feasibility of an optical implementation that uses novel optical devices. A structural example illustrates how engineering design expertise is captured in and recovered from the network.
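The abstract does not specify the fast learning algorithm, but one well-known family of one-shot rules for associative memories avoids iterative backpropagation entirely by computing a linear layer's weights directly from the pseudo-inverse of the training inputs. The sketch below illustrates that idea only; it is not claimed to be this paper's algorithm.

```python
import numpy as np

def pseudoinverse_weights(X, Y):
    """One-shot weight computation W = Y X+ for a single linear layer:
    solves W X ~ Y in the least-squares sense, with no iterative
    gradient-descent training at all."""
    return Y @ np.linalg.pinv(X)
```

One matrix factorization replaces many epochs of gradient descent, which is the kind of gap that produces "orders of magnitude" speedups over backpropagation on small associative-memory tasks.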
Neural-network computing uses the structure of neurons in the brain as a model for parallel-processing computers. There are numerous models which differ in detail but share common features: networks are trained rather than programmed; computational elements are simple but have large numbers of interconnections; interconnection strengths are real-valued analogue quantities; and the network forms internal representations capturing subtle correlations. The authors describe recent advances, particularly in the UK.