It is pointed out that recurrent lateral connectivity in a layer of processing units gives rise to a rich variety of nonlinear response properties, such as overall gain control, emergent periodic response on a preferred spatial scale (collective excitations), and distributed winner-take-all response. This diversity of response properties is observed in several different classes of simple network architectures, including the additive linear network, the additive sigmoidal network, and the nonlinear shunting network. When Hebbian learning is coupled with network dynamics, these models have been shown to support the development of modular connectivity structures analogous to cortical columns.
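As an illustration of the lateral-connectivity dynamics described above, the following minimal sketch relaxes a 1-D additive sigmoidal layer with short-range excitation and longer-range inhibition toward a winner-take-all state. The kernel shape, gains, and step sizes are illustrative assumptions, not the paper's model.

```python
import numpy as np

# Illustrative sketch (not the paper's model): a 1-D layer with recurrent
# lateral connectivity -- local excitation, broader inhibition -- relaxed to a
# fixed point. With a sigmoidal nonlinearity the layer tends toward a
# distributed winner-take-all response.

def lateral_kernel(n, sigma_e=1.0, sigma_i=4.0, a_e=1.2, a_i=1.0):
    """Difference-of-Gaussians lateral weights between units i and j."""
    d = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    return a_e * np.exp(-(d / sigma_e) ** 2) - a_i * np.exp(-(d / sigma_i) ** 2)

def relax(inp, W, steps=200, dt=0.1, gain=4.0):
    """Additive sigmoidal network: dx/dt = -x + W.sigmoid(x) + input."""
    x = np.zeros_like(inp)
    for _ in range(steps):
        y = 1.0 / (1.0 + np.exp(-gain * x))      # sigmoidal output
        x += dt * (-x + W @ y + inp)             # leaky recurrent update
    return 1.0 / (1.0 + np.exp(-gain * x))

n = 64
W = lateral_kernel(n)
inp = 0.5 + 0.1 * np.random.rand(n)
inp[20] += 0.3                                   # slightly favoured unit
out = relax(inp, W)
print("winning unit:", int(out.argmax()))        # typically near index 20
```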
A three-layer neural network is trained to perform spectral estimation. Excellent performance is found through computer simulations. When compared with an equivalent-length radix-2 fast Fourier transform, the neural network has the following advantages: (1) the neural network can suppress the signal magnitude in sidelobes without widening the mainlobe; (2) the neural network completes its computation in two stages regardless of the number of input signal samples; (3) the number of input signal samples does not need to be an integer power of two; and (4) the neural network still generates valid results even if there are 25% broken interconnects uniformly distributed between the output layer and the hidden layer or between the hidden layer and the input layer. An application example of using a three-layer neural network for moving target detection is also included.
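A toy sketch of the general approach follows: a small three-layer network trained by plain backpropagation to map time-domain samples to magnitude-spectrum bins, so that run-time computation is the fixed two-stage hidden-to-output pass the abstract mentions. The layer sizes, training data, and learning rate are assumptions, not the paper's setup.

```python
import numpy as np

# Toy sketch: three-layer network trained to map N time samples to N//2
# magnitude-spectrum bins; targets are |FFT| values of random sinusoids.

rng = np.random.default_rng(0)
N, H, epochs, lr = 32, 64, 2000, 0.05

def batch(size=64):
    t = np.arange(N)
    f = rng.uniform(1, N // 2 - 1, size)                 # random frequencies
    x = np.sin(2 * np.pi * f[:, None] * t / N)           # time-domain inputs
    y = np.abs(np.fft.rfft(x, axis=1))[:, :N // 2]       # spectrum targets
    return x, y / (N / 2)

W1 = rng.normal(0, 0.1, (N, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.1, (H, N // 2)); b2 = np.zeros(N // 2)

for _ in range(epochs):
    x, y = batch()
    h = np.tanh(x @ W1 + b1)                              # hidden layer
    out = h @ W2 + b2                                     # linear output layer
    err = out - y
    # backpropagation of the squared error
    gW2 = h.T @ err / len(x); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

x, y = batch(4)
pred = np.tanh(x @ W1 + b1) @ W2 + b2
print("test MAE:", float(np.abs(pred - y).mean()))
```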
It is shown that firing intervals in a temporally coded spike train can be decoded by a multilayered time-delayed neural network. The serially coded firing intervals of a spike train can be converted into a spatially distributed topographical map from which the interspike-interval and bandwidth information can be extracted. This network can be used to decode multiplexed pulse-coded signals embedded serially in the incoming spike train into parallel, distributed, topographically mapped channels. The two-dimensionally distributed output neuron array can also be used to extract the variance (or inaccuracy) tolerance of the incoming firing interspike intervals. This mapping can also be used to characterize the underlying stochastic processes in the firing of the incoming spike train. Thus, the proposed network represents an implementation of a signal-processing scheme for code conversion using time for computing and coding that does not require learning.
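A sketch of the general idea follows: a tapped delay line turns the serial spike train into a spatial vector, and coincidence-detecting units tuned to different lags form a topographic map of interspike intervals. The lag range and the pure-coincidence detectors are assumptions for illustration, not the paper's network.

```python
import numpy as np

# Sketch: delay line + coincidence detectors mapping interspike intervals
# onto a spatial axis (a 1-D slice of the topographic map in the abstract).

def delay_line(spikes, taps):
    """Stack delayed copies of the spike train: shape (time, taps)."""
    T = len(spikes)
    out = np.zeros((T, taps))
    for d in range(taps):
        out[d:, d] = spikes[:T - d]
    return out

def interval_map(spikes, max_lag=20):
    """Unit for lag L fires when a spike now coincides with one L steps ago."""
    D = delay_line(spikes, max_lag + 1)
    # coincidence of the undelayed channel with each delayed channel
    return (D[:, :1] * D[:, 1:]).sum(axis=0)

# spike train with a constant interspike interval of 7 time steps
spikes = np.zeros(100)
spikes[::7] = 1.0
m = interval_map(spikes)
print("preferred interval:", int(m.argmax()) + 1)   # -> 7
```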
The dynamics of simple chemical processes is described by multi-layer neural network models. The neural network is able to 'learn' non-linear relationships from the input-output pairs for the process and store the 'knowledge' in parallel distributed processing elements. Neural network models employing the back-propagation algorithm may be used to manipulate input variables for control purposes. The advantage of using such a model for a chemical process is discussed.
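The following hedged sketch shows the kind of input-output modelling the abstract describes: collect (input, output) pairs from a simple nonlinear first-order process and fit a small back-propagation network as a one-step-ahead process model. The plant equation, network size, and training settings are all assumptions.

```python
import numpy as np

# Sketch: identify a toy nonlinear plant  y[k+1] = 0.8*y[k] + 0.5*tanh(u[k])
# from input-output pairs with a one-hidden-layer back-propagation network.

rng = np.random.default_rng(1)

# simulate the "plant" under random excitation to collect (u, y) -> y_next pairs
u = rng.uniform(-2, 2, 500)
y = np.zeros(501)
for k in range(500):
    y[k + 1] = 0.8 * y[k] + 0.5 * np.tanh(u[k])
X = np.column_stack([u, y[:-1]])          # inputs: current u and current y
T = y[1:]                                 # target: next y

# tiny two-layer network trained by backpropagation
W1 = rng.normal(0, 0.3, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.3, (16, 1)); b2 = np.zeros(1)
for _ in range(3000):
    h = np.tanh(X @ W1 + b1)
    p = (h @ W2 + b2).ravel()
    e = (p - T)[:, None]
    W2 -= 0.01 * h.T @ e / len(X); b2 -= 0.01 * e.mean(0)
    dh = (e @ W2.T) * (1 - h ** 2)
    W1 -= 0.01 * X.T @ dh / len(X); b1 -= 0.01 * dh.mean(0)

print("one-step model RMSE:", float(np.sqrt(((p - T) ** 2).mean())))
```

Such a model can then be inverted or searched over candidate inputs to choose a control action, which is the use the abstract points to.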
Highly interconnected networks of relatively simple processing elements are shown to be very effective in solving difficult optimization problems. Problems that fall into the broad category of finding a least-cost path between two points, given a distributed and sometimes complex cost map, are studied. A neural-like architecture and associated computational rules are proposed for the solution of this class of optimal path-finding problems in two- and higher-dimensional spaces. The proposed algorithm is local in nature and is very well suited for highly parallel, fine-grained, and distributed architectures. Also described is a collective multilayer neural-like architecture characterized by speed of convergence, scalability, and guaranteed convergence to optimal solutions.
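The local, parallel update the abstract describes resembles a Bellman-type relaxation on the cost map: every cell repeatedly takes the cheapest neighbour's cost-to-goal plus its own cost. The sketch below is an illustration of that idea under assumed 4-neighbour connectivity, not the paper's exact computational rules.

```python
import numpy as np

# Sketch: each grid cell acts as a simple processing element that relaxes its
# cost-to-goal from its neighbours; the fully parallel update converges to the
# optimal cost field, from which a least-cost path follows by steepest descent.

def cost_to_goal(cost_map, goal, iters=500):
    V = np.full(cost_map.shape, np.inf)
    V[goal] = 0.0
    for _ in range(iters):
        best = np.full(cost_map.shape, np.inf)
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            shifted = np.roll(V, (dy, dx), axis=(0, 1))
            # invalidate wrap-around entries introduced by np.roll
            if dy == 1:  shifted[0, :] = np.inf
            if dy == -1: shifted[-1, :] = np.inf
            if dx == 1:  shifted[:, 0] = np.inf
            if dx == -1: shifted[:, -1] = np.inf
            best = np.minimum(best, shifted)
        V = np.minimum(V, cost_map + best)
        V[goal] = 0.0
    return V

cost = np.ones((10, 10)); cost[3:8, 5] = 50.0        # a costly wall
V = cost_to_goal(cost, goal=(9, 9))
print("least cost from (0, 0):", V[0, 0])
```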
ISBN (print): 0879425970
SFINX, a simulation environment for modeling a wide spectrum of neural architectures with both regular and irregular connectivity patterns, is discussed. SFINX is not based on a specific neural paradigm; it supports a variety of computational models, ranging from simple convolution filters, such as difference-of-Gaussians receptive fields used in image processing, to learning paradigms such as backward error propagation. SFINX's main components and data are described, and the representation of explicit and implicit networks is examined. An explicit network is a collection of data structures that contain information about each node in the network. The name 'implicit network' reflects the fact that a node's activation value, output value, etc., are all distributed across a set of buffer array data structures and that the connectivity information is stored inside the node function. A simulation example is given to illustrate the use of SFINX.
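To make the explicit/implicit distinction concrete, here is a small illustrative contrast: per-node records that carry their own connection lists versus shared buffer arrays whose connectivity is implied by the node function (here a fixed convolution). The data structures and names are hypothetical, not SFINX's actual API.

```python
import numpy as np
from dataclasses import dataclass, field

# Explicit network: one record per node, with connectivity stored per node.
@dataclass
class Node:
    activation: float = 0.0
    output: float = 0.0
    inputs: list = field(default_factory=list)    # (source index, weight) pairs

def step_explicit(nodes):
    for n in nodes:
        n.activation = sum(w * nodes[i].output for i, w in n.inputs)
        n.output = np.tanh(n.activation)

# Implicit network: activations/outputs live in shared buffer arrays and the
# connectivity is implied by the node function, e.g. a fixed convolution.
def step_implicit(output_buffer, kernel):
    padded = np.pad(output_buffer, len(kernel) // 2, mode="edge")
    activation_buffer = np.convolve(padded, kernel, mode="valid")
    return np.tanh(activation_buffer)

nodes = [Node(inputs=[(0, 0.5)]), Node(inputs=[(0, 1.0), (1, -0.5)])]
nodes[0].output = 1.0
step_explicit(nodes)

buf = np.random.rand(32)
buf = step_implicit(buf, kernel=np.array([0.25, 0.5, 0.25]))
print(nodes[1].output, buf.shape)
```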
A neural-network-based routing algorithm is presented which demonstrates the ability to take into account simultaneously the shortest path and the channel capacity in computer communication networks. A Hopfield-type neural-network architecture is proposed to provide the necessary connections and weights, and it is considered as a massively parallel distributed processing system with the ability to reconfigure a route through dynamic learning. This provides an optimum transmission path from the source node to the destination node. The traffic conditions measured throughout the system have been investigated. No congestion occurs in this network because it adjusts to changes in the status of the weights and provides a dynamic response according to the input traffic load. Simulation of a ten-node communication network shows not only the efficiency but also the capability of generating a route if broken links occur or the channels are saturated.
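A compact sketch of the general Hopfield-style shortest-path encoding follows: one relaxed binary neuron per directed link, an energy that mixes link cost with flow-conservation penalties, and gradient descent on that energy. The specific energy terms, coefficients, and network size below are assumptions, not the paper's formulation.

```python
import numpy as np

# Sketch: Hopfield-style routing. x[i, j] in [0, 1] is the relaxed decision to
# use link (i, j); the energy penalises link cost and flow imbalance at nodes.

n, src, dst = 5, 0, 4
rng = np.random.default_rng(2)
C = rng.uniform(1, 10, (n, n)); np.fill_diagonal(C, 0)    # link costs
mu_cost, mu_flow, steps, lr = 1.0, 20.0, 2000, 0.02

x = rng.uniform(0.4, 0.6, (n, n))                          # relaxed link usage
np.fill_diagonal(x, 0)

target = np.zeros(n); target[src], target[dst] = 1.0, -1.0  # required net flow
for _ in range(steps):
    # net out-flow minus in-flow at each node, relative to the requirement
    imbalance = x.sum(axis=1) - x.sum(axis=0) - target
    # gradient of E = mu_cost*sum(C*x) + (mu_flow/2)*sum(imbalance**2)
    grad = mu_cost * C + mu_flow * (imbalance[:, None] - imbalance[None, :])
    x = np.clip(x - lr * grad, 0.0, 1.0)
    np.fill_diagonal(x, 0)

print("links selected:", np.argwhere(x > 0.5))
```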
The application of artificial neural network technology to data fusion for target recognition is discussed. The specific application includes airborne target recognition primarily using information available from radar and EO-IR thermal imaging sensors. Artificial-neural-network-based fusion architectures and alternative approaches are discussed, including alternative learning algorithms, training-set construction, and massively parallel distributed processing for making real-time automatic target recognition decisions. The experimental results indicate that target recognition performance can be greatly enhanced by using an array of complementary sensors whose outputs are fused to extract information not otherwise available from a single sensor.
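A minimal sketch of feature-level fusion in this spirit: per-sensor feature vectors are concatenated and passed to one shared classifier, so the decision can use information neither sensor carries alone. The feature dimensions, class count, and the simple softmax classifier are assumptions, not the paper's architecture.

```python
import numpy as np

# Sketch: concatenate radar and thermal-IR feature vectors, classify jointly.

rng = np.random.default_rng(3)
n_classes, d_radar, d_ir = 3, 8, 6

def fused_features(radar_vec, ir_vec):
    """Concatenate per-sensor features into one fused input vector."""
    return np.concatenate([radar_vec, ir_vec])

# a linear softmax classifier on the fused vector stands in for the fusion net
W = rng.normal(0, 0.1, (d_radar + d_ir, n_classes))

def classify(radar_vec, ir_vec):
    z = fused_features(radar_vec, ir_vec) @ W
    p = np.exp(z - z.max()); p /= p.sum()
    return int(p.argmax()), p

label, probs = classify(rng.random(d_radar), rng.random(d_ir))
print("predicted class:", label)
```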
ISBN (print): 9783540522553
The proceedings contain 23 papers. The special focus in this conference is on neural networks. The topics include: Preface; when are k-nearest neighbor and back propagation accurate for feasible sized sets of examples?; rule-injection hints as a means of improving network performance and learning time; inversion in time; cellular neural networks: dynamic properties and adaptive learning algorithm; improved simulated annealing, Boltzmann machine, and attributed graph matching; artificial dendritic learning; a neural net model of human short-term memory development; large vocabulary speech recognition using neural-fuzzy and concept networks; speech feature extraction using neural networks; neural network based continuous speech recognition by combining self-organizing feature maps and hidden Markov modeling; ultra-small implementation of a neural halftoning technique; complexity theory of neural networks and classification problems; application of self-organising networks to signal processing; a study of neural network applications to signal processing; simulation machine and integrated implementation of neural networks: a review of methods, problems and realizations; VLSI implementation of an associative memory based on distributed storage of information; generalization performance of overtrained back-propagation networks; stability of the random neural network model; temporal pattern recognition using EBPS; Markovian spatial properties of a random field describing a stochastic neural network: sequential or parallel implementation?; chaos in neural networks; the "moving targets" training algorithm; acceleration techniques for the backpropagation algorithm.
A Gaussian-model classifier trained by maximum mutual information estimation (MMIE) is compared to one trained by maximum-likelihood estimation (MLE) and to an artificial neural network (ANN) on several classification tasks. The similarity of MMIE and ANN results for uniformly distributed data confirms that the ANN is better than the MLE in some cases due to the ANN's use of an error-correcting training algorithm. When the probability model fits the data well, MLE is better than MMIE if the training data are limited, but they are equal if there are enough data. When the model is a poor fit, MMIE is better than MLE. The training dynamics of MMIE and ANN are shown to be similar under certain assumptions. MMIE seems more susceptible to overtraining and computational difficulties than the ANN. Overall, the ANN is the most robust of the classifiers.
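For reference, the two criteria compared above can be stated compactly: MLE maximizes the per-class likelihood of each sample, while MMIE maximizes the posterior of the correct class, i.e. it scores each sample's likelihood against the competing classes. The sketch below evaluates both objectives for a toy two-class 1-D Gaussian model; the synthetic data and parameters are illustrative assumptions.

```python
import numpy as np

#   MLE  objective:  sum_n log p(x_n | class_n)
#   MMIE objective:  sum_n log [ p(x_n | class_n) P(class_n) / sum_c p(x_n | c) P(c) ]

rng = np.random.default_rng(4)
x0 = rng.normal(-1.0, 1.0, 200)                         # class 0 samples
x1 = rng.normal(+1.0, 1.0, 200)                         # class 1 samples
x = np.concatenate([x0, x1])
labels = np.r_[np.zeros(200, int), np.ones(200, int)]

def log_gauss(x, mu, sigma):
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)

mus, sigmas, prior = np.array([-1.0, 1.0]), np.array([1.0, 1.0]), np.array([0.5, 0.5])
ll = np.stack([log_gauss(x, m, s) for m, s in zip(mus, sigmas)], axis=1)  # (N, 2)

mle_objective = ll[np.arange(len(x)), labels].sum()
joint = ll + np.log(prior)
mmie_objective = (joint[np.arange(len(x)), labels]
                  - np.logaddexp(joint[:, 0], joint[:, 1])).sum()
print(f"MLE objective: {mle_objective:.1f}   MMIE objective: {mmie_objective:.1f}")
```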