VLSI hardware architectures for Kanerva's Sparse Distributed Memory are described. These architectures are designed for high-speed parallel processing with modular expandability. Architectures for an analog address comparator and a systolic array have been developed with advanced structures. A parallel shift-register architecture and a parallel comparator architecture with binary tree adders have been studied from the standpoint of effective VLSI implementation. Realization and performance estimates for each architecture are also presented.
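The operation these architectures accelerate is the address-comparison step of Kanerva's Sparse Distributed Memory: on every access, the Hamming distance from the input address to every hard location must be computed and thresholded. A minimal software sketch of SDM read/write (the dimensions, location count, and activation radius below are illustrative choices, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

N_BITS = 256        # address/word length (illustrative)
N_LOCATIONS = 2000  # number of hard locations (illustrative)
RADIUS = 111        # Hamming activation radius (illustrative)

# Random hard-location addresses, each with an array of bit counters.
hard_addresses = rng.integers(0, 2, size=(N_LOCATIONS, N_BITS), dtype=np.int8)
counters = np.zeros((N_LOCATIONS, N_BITS), dtype=np.int32)

def activated(address):
    """Select hard locations within the Hamming radius of the address.

    This is the step the paper's parallel comparators and systolic
    arrays speed up: N_LOCATIONS distance computations per access.
    """
    dist = np.count_nonzero(hard_addresses != address, axis=1)
    return dist <= RADIUS

def write(address, word):
    sel = activated(address)
    # Increment counters for 1-bits of the word, decrement for 0-bits.
    counters[sel] += np.where(word == 1, 1, -1).astype(np.int32)

def read(address):
    sel = activated(address)
    sums = counters[sel].sum(axis=0)
    return (sums > 0).astype(np.int8)  # per-bit majority vote
```

Writing a word at some address and reading the same address back recovers the word by majority vote over the activated locations' counters.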
ISBN (print): 0818621389
The following topics are dealt with: object-oriented database systems; distributed database systems; design and human interfaces; data engineering techniques; artificial intelligence and knowledge-based systems; access methods and file structures; parallel query processing; deductive and extensive databases; distributed database control; heterogeneous, federated or multidatabase systems; query languages and processing; performance evaluation; applications and application systems; query processing; database design and modeling; benchmarks and performance evaluation; database management; multimedia database systems; object-oriented environments; and artificial intelligence and databases. Abstracts of individual papers can be found under the relevant classification codes in this or other issues.
A novel architecture of neural networks with distributed structure, in which each class in the application has its own one-output backpropagation subnetwork, is presented. This architecture (One-Net-One-Class) overcomes a drawback of conventional backpropagation architectures, which must be completely retrained whenever a class is added. It features fully parallel distributed processing: the network is composed of subnetworks, each a single-output two-layer backpropagation network that can be trained and queried in parallel and independently. The proposed architecture also converges rapidly in both the training phase and the retrieval phase.
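The key property described above is that each class owns an independent single-output subnetwork, so adding a class trains only the new subnetwork and classification takes the maximum over subnetwork outputs. A minimal sketch of that scheme (the class names, hidden-layer size, and training hyperparameters below are illustrative assumptions, not values from the paper):

```python
import numpy as np

class SubNet:
    """A single-output, two-layer backpropagation subnetwork
    (sigmoid hidden layer + sigmoid output), one per class."""
    def __init__(self, n_in, n_hidden=8, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.5, n_hidden)
        self.b2 = 0.0

    @staticmethod
    def _sig(z):
        return 1.0 / (1.0 + np.exp(-z))

    def forward(self, x):
        h = self._sig(x @ self.W1 + self.b1)
        return h, self._sig(h @ self.W2 + self.b2)

    def train(self, X, t, lr=0.5, epochs=2000):
        """Plain gradient-descent backprop on squared error;
        target t is 1 for 'this class', 0 otherwise."""
        for _ in range(epochs):
            for x, target in zip(X, t):
                h, y = self.forward(x)
                d_out = (y - target) * y * (1 - y)
                d_hid = d_out * self.W2 * h * (1 - h)
                self.W2 -= lr * d_out * h
                self.b2 -= lr * d_out
                self.W1 -= lr * np.outer(x, d_hid)
                self.b1 -= lr * d_hid

class OneNetOneClass:
    """Independent subnetworks: adding a class trains only its own net,
    leaving all previously trained subnetworks untouched."""
    def __init__(self, n_in):
        self.n_in = n_in
        self.nets = {}

    def add_class(self, label, X_pos, X_neg):
        X = np.vstack([X_pos, X_neg])
        t = np.r_[np.ones(len(X_pos)), np.zeros(len(X_neg))]
        net = SubNet(self.n_in, seed=len(self.nets))
        net.train(X, t)
        self.nets[label] = net  # existing nets are not retrained

    def classify(self, x):
        return max(self.nets, key=lambda lb: self.nets[lb].forward(x)[1])
```

Because the subnetworks share no weights, `add_class` and `classify` over different subnetworks could run on separate processors, which is the parallel-distributed property the abstract highlights.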
This paper presents efficient hypercube algorithms for solving triangular systems of linear equations by using various matrix partitioning and mapping schemes. Recently, several parallel algorithms have been developed...
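The sequential baseline that such hypercube algorithms parallelize is substitution over a triangular matrix. A sketch of row-oriented forward substitution for a lower-triangular system (this is the textbook algorithm, not the paper's hypercube scheme; the partitioning and mapping schemes distribute the rows or columns of `L` across processors while respecting the dependence of `x[i]` on `x[0..i-1]`):

```python
import numpy as np

def forward_substitution(L, b):
    """Solve L x = b for lower-triangular L, row by row.

    Each x[i] depends on all earlier x[0..i-1]; this data
    dependence is what a parallel partitioning must respect.
    """
    n = len(b)
    x = np.zeros(n)
    for i in range(n):
        x[i] = (b[i] - L[i, :i] @ x[:i]) / L[i, i]
    return x
```

An upper-triangular system is solved analogously by backward substitution, iterating from the last row upward.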