ISBN (print): 9781479945498
Hierarchical Temporal Memory (HTM) is a model with hierarchically connected modules performing spatial and temporal pattern recognition, as described by Jeff Hawkins in his book On Intelligence. Cortical learning algorithms (CLAs) comprise the second implementation of HTM. CLAs are an attempt by Numenta Inc. to create a computational model of perceptual analysis and learning inspired by the neocortex in the brain. In its current state, only an implementation of one isolated region has been completed. The goal of this paper is to test whether adding a second, higher-level CLA region to a system with just one CLA region improves the prediction accuracy of the system. The LIDA model (Learning Intelligent Distribution Agent, a cognitive architecture) can use such a hierarchical implementation of CLAs for its Perceptual Associative Memory.
The recent development in the theory of Hierarchical Temporal Memory (HTM) cortical learning algorithms (CLA), which model the structural and algorithmic properties of the neocortex, has brought a new paradigm to the field of machine intelligence. As the theory of HTM-CLA continues to evolve, inferring the patterns and structures recognized by the HTM-CLA algorithm remains a major challenge. Moreover, the existing methods used to infer a classification output from HTM-CLA are far from satisfactory. In this paper, we propose two new classifiers using similarity evaluation methods based on dot similarity (H-DS) and mean-shift clustering (H-MSC) to obtain classifications from the sparse distributed representation (SDR) output of HTM-CLA. We validate and benchmark the performance of the proposed classifiers on three datasets from the UCI machine learning repository. The results show that the proposed classifiers enhance the classification performance of HTM-CLA, and their performance is comparable to that of traditional machine learning techniques such as decision trees. (C) 2015 The Authors. Published by Elsevier B.V.
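The core idea of a dot-similarity classifier over SDRs can be illustrated with a minimal sketch: since SDRs are sparse binary vectors, the dot product between two SDRs counts their overlapping active bits, so a class can be represented by an activation-count prototype and a new SDR assigned to the class with the highest overlap. This is a hypothetical illustration of the general principle, not the paper's exact H-DS procedure; all function names are invented for the example.

```python
import numpy as np

def train_prototypes(sdrs, labels, n_classes):
    """Accumulate an activation-count prototype per class from training SDRs
    (binary row vectors). Hypothetical sketch, not the paper's exact H-DS method."""
    protos = np.zeros((n_classes, sdrs.shape[1]))
    for sdr, y in zip(sdrs, labels):
        protos[y] += sdr
    return protos

def classify_ds(sdr, protos):
    """Assign the class whose prototype has the highest dot product
    (bit overlap) with the input SDR."""
    return int(np.argmax(protos @ sdr))

# Toy example: 6-bit SDRs, two classes with distinct active bits.
train = np.array([[1, 1, 0, 0, 0, 0],
                  [1, 0, 1, 0, 0, 0],
                  [0, 0, 0, 1, 1, 0],
                  [0, 0, 0, 1, 0, 1]])
y = np.array([0, 0, 1, 1])
protos = train_prototypes(train, y, n_classes=2)
print(classify_ds(np.array([1, 1, 1, 0, 0, 0]), protos))  # → 0
print(classify_ds(np.array([0, 0, 0, 1, 1, 1]), protos))  # → 1
```

Real SDRs are much wider (thousands of bits) with only a few percent active, which is what makes overlap a discriminative similarity measure.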
Recent advances in neuroscientific understanding have highlighted the highly parallel computational power of the mammalian neocortex. In this paper we describe a GPGPU-accelerated implementation of an intelligent learning model inspired by the structural and functional properties of the neocortex. Furthermore, we consider two inefficiencies inherent to our initial implementation and propose software optimizations to mitigate them. Analysis of our application's behavior and performance provides important insights into the GPGPU architecture, including the number of cores, the memory system, atomic operations, and the global thread scheduler. Additionally, we create a runtime profiling tool for the cortical network that proportionally distributes work across the host CPU as well as multiple GPGPUs available to the system. Using the profiling tool with these optimizations on Nvidia's CUDA framework, we achieve up to 60x speedup over a single-threaded CPU implementation of the model. (c) 2012 Elsevier Inc. All rights reserved.
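The proportional work-distribution idea described above can be sketched simply: after a profiling pass measures each device's throughput, the cortical columns are partitioned so that each device's share matches its relative speed. This is a minimal illustration of the load-balancing principle under assumed throughput numbers, not the paper's actual profiling tool.

```python
def partition_columns(n_columns, throughputs):
    """Split cortical columns across devices in proportion to measured
    throughput (e.g. columns/sec from a profiling pass). Hypothetical
    sketch of proportional work distribution."""
    total = sum(throughputs)
    shares = [n_columns * t // total for t in throughputs]
    shares[-1] += n_columns - sum(shares)  # hand the rounding remainder to the last device
    return shares

# Example: one host CPU and two GPUs measured at 100, 450, and 450 columns/sec.
print(partition_columns(2048, [100, 450, 450]))  # → [204, 921, 923]
```

In practice the partition would be recomputed whenever the profiler detects that a device's measured throughput has drifted, keeping the CPU and GPUs finishing each timestep at roughly the same time.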