ISBN: (Print) 9781558601840
In response to a puff of wind, the American cockroach turns away and runs. The circuit underlying the initial turn of this escape response consists of three populations of individually identifiable nerve cells and appears to employ distributed representations in its operation. We have reconstructed several neuronal and behavioral properties of this system using simplified neural-network models and the backpropagation learning algorithm, constrained by known structural characteristics of the circuitry. In order to test and refine the model, we have also compared the model's responses to various lesions with the insect's responses to similar lesions.
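The backpropagation training loop this abstract refers to can be sketched as follows. This is an illustrative toy (a small feed-forward net learning XOR by stochastic gradient descent), not the authors' cockroach-circuit model; the layer sizes, learning rate, and task are all assumptions made for the example.

```python
import math, random

DATA = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0), ([1.0, 0.0], 1.0), ([1.0, 1.0], 0.0)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_xor(epochs=5000, lr=1.0, hidden=3, seed=1):
    """Train a 2-input, one-hidden-layer net on XOR by backpropagation."""
    rnd = random.Random(seed)
    W1 = [[rnd.uniform(-0.5, 0.5) for _ in range(3)] for _ in range(hidden)]  # +bias column
    W2 = [rnd.uniform(-0.5, 0.5) for _ in range(hidden + 1)]                  # +bias weight
    for _ in range(epochs):
        for x, t in DATA:
            xb = x + [1.0]                                                    # forward pass
            h = [sigmoid(sum(w * v for w, v in zip(row, xb))) for row in W1]
            hb = h + [1.0]
            o = sigmoid(sum(w * v for w, v in zip(W2, hb)))
            d_o = (o - t) * o * (1.0 - o)                                     # output delta
            d_h = [d_o * W2[j] * h[j] * (1.0 - h[j]) for j in range(hidden)]  # hidden deltas
            for j in range(hidden + 1):                                       # gradient steps
                W2[j] -= lr * d_o * hb[j]
            for j in range(hidden):
                for i in range(3):
                    W1[j][i] -= lr * d_h[j] * xb[i]
    def predict(x):
        xb = x + [1.0]
        hb = [sigmoid(sum(w * v for w, v in zip(row, xb))) for row in W1] + [1.0]
        return sigmoid(sum(w * v for w, v in zip(W2, hb)))
    return predict

mse = lambda p: sum((p(x) - t) ** 2 for x, t in DATA) / len(DATA)
untrained, trained = train_xor(epochs=0), train_xor()
```

Training reduces the squared error relative to the randomly initialized network; the same loop, with the weight matrices constrained to the known circuit structure, is the kind of fit the abstract describes.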
ISBN: (Print) 9780897913720
Neural networks have two interesting features: robustness and fault tolerance on the one hand, and massive parallelism on the other. The best way to retain these features and exploit the underlying massive parallelism is to map the neural network onto a massively parallel architecture. However, a communication problem remains, since the neurons are highly interconnected. A communication system based on message transfers, with no need to allocate a physical link for each connection, is a solution for any parallel machine but is very hard to implement efficiently on hypercubes. We propose a dedicated VLSI architecture based on a two-dimensional array of asynchronous cells, each of which processes the simple algorithms of a neuron (both the back-propagation recall and learning schemes) and is connected to its four immediate neighbors through eight unidirectional buffers, one for each way of the four directions. The main feature of this architecture is its hardware-based, array-wide message-transmission mechanism, which allows a particular cell to communicate with potentially any other. A message is passed from a cell to one of its neighbors until it reaches its destination, its path being set dynamically in each cell it passes through. This non-local communication system makes it possible to process efficiently a class of distributed algorithms that exhibit disorganized parallelism. This paper presents the VLSI implementation of this architecture, shows how it can process feed-forward neural networks, and discusses its performance. Our implementation of the back-propagation algorithm is more than 4 times faster at evaluating the NETtalk text-to-speech network than the Warp, a 20-processor systolic array, which had been the fastest back-propagation simulator reported in the literature. Our results indicate that two-dimensional arrays can be good candidates for neural processing, provided they can handle high communication rates.
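The hop-by-hop forwarding described above can be sketched with dimension-ordered ("XY") routing, in which each cell resolves the X coordinate before the Y coordinate. The specific routing rule is an assumption for illustration; the paper only states that the path is set dynamically in each cell a message passes through.

```python
def route(src, dst):
    """Return the sequence of cells a message visits from src to dst
    under dimension-ordered routing on a 2-D array (assumed rule)."""
    x, y = src
    path = [src]
    while (x, y) != dst:
        if x != dst[0]:                  # first resolve the X coordinate
            x += 1 if dst[0] > x else -1
        else:                            # then resolve the Y coordinate
            y += 1 if dst[1] > y else -1
        path.append((x, y))
    return path

path = route((0, 0), (2, 3))
```

Each step moves the message to one of the four immediate neighbors, so any cell can reach any other without a dedicated physical link per connection.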
ISBN: (Print) 9780897913690
This paper presents the architectural design and RISC-based implementation of a prototype supercomputer, namely the Orthogonal MultiProcessor (OMP). The OMP system is constructed with 16 Intel i860 RISC microprocessors and 256 parallel memory modules, which are 2-D interleaved and orthogonally accessed using custom-designed spanning buses. The architectural design has been validated by a CSIM-based multiprocessor simulator. The design choices are based on worst-case delay analysis and simulation validation. The current OMP prototype adopts a 2-dimensional memory architecture, mainly for image processing, computer vision, and neural network simulation applications. The 16-processor OMP prototype is targeted to achieve a peak performance of 400 RISC integer MIPS or a maximum of 640 Mflops. This paper presents the architectural design of the OMP prototype at the system and PC-board levels. We are presently entering the fabrication stage for all the PC boards. The system is expected to become operational in late 1991, and benchmarking results will be available in 1992. Only hardware design features are reported here; software and simulation results are reported elsewhere.
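The idea behind 2-D interleaved, orthogonally accessed memory can be sketched with a classic skewed-storage mapping: element (i, j) of an n x n array is placed in module (i + j) mod n, so every row and every column touches each module exactly once and can be fetched conflict-free in one parallel cycle. This particular mapping is an illustration of the general technique; the OMP's actual module-assignment scheme is not specified in the abstract.

```python
def module_of(i, j, n):
    """Module holding element (i, j) of an n x n array under skewed storage."""
    return (i + j) % n

n = 4
row_modules = [module_of(2, j, n) for j in range(n)]   # modules touched by row 2
col_modules = [module_of(i, 3, n) for i in range(n)]   # modules touched by column 3
```

Both access patterns hit all n distinct modules, which is what lets row-wise and column-wise (orthogonal) accesses proceed without bank conflicts.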
ISBN: (Print) 0852963882
This paper proposes the use of massively parallel learning networks for classifying signals from electromagnetic transducers used in nondestructive evaluation. A description of the application problem is given. One of the major contributions of the paper lies in the development of a distributed processing network for preprocessing the transducer signal. Preprocessing is required to achieve invariance under rotation, translation, and scaling of the signal. Issues relating to the implementation of the preprocessing network are presented. The complete architecture and learning algorithm are described. Results of implementing the network, as well as a discussion of its performance, are presented.
Summary form only given, as follows. A new parallel distributed processing architecture called an entropy machine (EM) is proposed. This machine, which is based on an artificial neural network composed of a massive number of neurons and interconnections, is used for solving a variety of NP-complete optimization problems. The EM performs either the parallel distributed gradient descent method or the gradient ascent method to search for minima or maxima.
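The search the EM performs can be sketched as plain gradient descent on an energy function; negating the step turns it into gradient ascent. The toy quadratic energy below is purely illustrative, since the EM's energies are problem-specific and not given in the abstract.

```python
def grad_descent(grad, x0, lr=0.1, steps=200):
    """Follow the negative gradient; flip the sign of lr for ascent."""
    x = list(x0)
    for _ in range(steps):
        g = grad(x)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

# Toy energy E(x, y) = (x - 1)^2 + (y + 2)^2, minimized at (1, -2)
grad_E = lambda p: [2.0 * (p[0] - 1.0), 2.0 * (p[1] + 2.0)]
x_min = grad_descent(grad_E, [0.0, 0.0])
```

In the EM, each coordinate update would be carried out in parallel by a neuron; here the per-coordinate updates inside the list comprehension play that role sequentially.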
Summary form only given, as follows. The authors argue that distributed representations must satisfy five criteria in order to serve as an adequate foundation for constructing and manipulating conceptual knowledge. These criteria are: automaticity, portability, structure encoding, semantic microcontent, and convergence. In their approach, distributed representations of semantic relations (i.e., propositions) are formed by recirculating the hidden layer in recurrent parallel distributed processing networks. The authors' experiments show that the resulting distributed semantic representations (DSRs) satisfy all of the above criteria. They believe that DSRs can supply an important building block in developing more complex connectionist architectures for higher-level inferencing, as is required in natural language processing.
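"Recirculating the hidden layer" can be sketched as an Elman-style recurrent step: the previous hidden state is fed back alongside each new input, so a whole proposition is folded into one fixed-size distributed vector. The weight shapes, tanh nonlinearity, and toy values below are assumptions for illustration, not the authors' trained network.

```python
import math

def recurrent_encode(tokens, W_in, W_rec, h0):
    """Fold a token sequence into a fixed-size distributed representation
    by recirculating the hidden state h at every step."""
    h = h0
    for x in tokens:
        h = [math.tanh(sum(wi * xi for wi, xi in zip(W_in[k], x)) +
                       sum(wr * hj for wr, hj in zip(W_rec[k], h)))
             for k in range(len(h))]
    return h

# 2-unit hidden state, 2-dimensional tokens (toy weights, chosen by hand)
W_in = [[0.5, -0.5], [0.3, 0.3]]
W_rec = [[0.1, 0.0], [0.0, 0.1]]
h = recurrent_encode([[1.0, 0.0], [0.0, 1.0]], W_in, W_rec, [0.0, 0.0])
```

Because the final hidden vector depends on every token and on their order, it can serve as a distributed semantic representation of the sequence.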
Summary form only given, as follows. The great majority of computational cognitive models have adhered to the physical symbol system hypothesis (PSSH) of Newell. An approach that seems to be incompatible with the PSSH is that of parallel distributed processing (PDP), or connectionism. Whether the PSSH or PDP is a better characterization of the mind is a controversial issue. At the root of this controversy are two questions: what sort of computer is the brain, and what sort of programs run on that computer? A theory is presented which bridges the apparent gap between the PSSH and PDP approaches.
Summary form only given, as follows. Many distributed models of computation have been proposed over the years. On the surface they seem to be quite different from more traditional models, and often from each other. But do they offer a fundamentally different type of computation? Some of the more prominent models are investigated from this computational perspective. A brief description is given of each model, followed by a discussion of their capabilities, both with respect to each other and in relation to classical models of computation.
Summary form only given. A neural autoassociator is proposed for transitive closure derivation. A number of neural-network-based heuristics are identified which lend themselves to easy solutions in transitive closure derivations. Several parallel distributed processing algorithms are developed in a stepwise manner based on the heuristics, and a few theorems are proved which support the correctness and time efficiency of the different algorithms in different cases. It is shown that, in performing a closure derivation, the neural network model self-organizes its connectivity matrix into a transitive closure, using the heuristics opportunistically at run time with little overhead. In addition to practical applications, the proposed approach identifies a new category of heuristics, called neural-network-based heuristics. It is suggested and demonstrated that this category of heuristics may attain an efficiency that has not been possible with sequential or traditional parallel processing.
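The end result of such a closure derivation can be sketched directly on a boolean connectivity matrix: edges are added wherever a two-step path exists, until the matrix stops changing. This sequential sketch shows only the fixed point the network converges to, not the paper's neural heuristics or their parallel execution.

```python
def transitive_closure(adj):
    """Saturate a boolean adjacency matrix: add edge (i, j) whenever some
    k gives i -> k -> j, and repeat until nothing changes."""
    n = len(adj)
    reach = [row[:] for row in adj]
    changed = True
    while changed:
        changed = False
        for i in range(n):
            for j in range(n):
                if not reach[i][j] and any(reach[i][k] and reach[k][j] for k in range(n)):
                    reach[i][j] = True
                    changed = True
    return reach

# chain 0 -> 1 -> 2 -> 3
adj = [[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], [0, 0, 0, 0]]
closure = transitive_closure(adj)
```

In the autoassociator, the same saturation would emerge from the connectivity matrix self-organizing at run time rather than from an explicit loop.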
A computing architecture for adaptive control and system modeling, based on the computational features of nonlinear discrete neural networks, is proposed. These features are massively parallel and distributed structures for signal processing, with the potential for ever-improving performance through dynamical learning. The proposed delayed-input delayed-state network architecture and the general training scheme are described. A solution for the problem of analog signal representation by a binary neural network is suggested. Illustrative simulation results are promising and show good and robust performance in various cases.
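A delayed-input delayed-state model can be sketched in the generic NARX-style form y[t] = f(past outputs, past inputs), where tapped delay lines hold the recent history on both sides. The delay depths and the linear example map below are assumptions for illustration; the paper realizes f with a neural network.

```python
from collections import deque

def simulate(f, inputs, nu=2, ny=2):
    """Run y[t] = f(last ny outputs, last nu inputs) over an input sequence,
    using fixed-length delay lines for the input and state histories."""
    u_hist = deque([0.0] * nu, maxlen=nu)   # delayed inputs
    y_hist = deque([0.0] * ny, maxlen=ny)   # delayed states
    outputs = []
    for u in inputs:
        u_hist.append(u)
        y = f(list(y_hist), list(u_hist))
        y_hist.append(y)
        outputs.append(y)
    return outputs

# linear example map: y[t] = 0.5 * y[t-1] + u[t]
f = lambda ys, us: 0.5 * ys[-1] + us[-1]
ys = simulate(f, [1.0, 0.0, 0.0])
```

An impulse input decays geometrically through the state feedback, which is the kind of dynamical behavior such a network is trained to capture.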