In developing mammalian visual cortex, an initially uniform innervation by neurons representing the two eyes breaks into a series of stripes or patches, termed ocular dominance columns, in which one eye or the other alternately dominates. Each column, many cells wide, consists of cortical cells largely or exclusively driven by input from a single eye. Individual afferents (neurons carrying sensory information into visual cortex) have their arbors of synaptic endings focused into one or a few patches within the appropriate eye's columns. The signal for this separation of the eyes appears to be local correlations in activity within each eye. We have developed a simple mathematical model for synaptic plasticity that incorporates, as special cases, several biological models that have been proposed: either 'Hebb' synapses, or synapses strengthened by activity-dependent uptake of a trophic factor, that factor being released by either cortical cells or afferents in an activity-dependent way.
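The competitive dynamic the abstract describes can be illustrated with a minimal sketch (this is an illustration in the spirit of correlation-based plasticity, not the paper's actual model): two inputs, one per eye, project onto a single cortical cell; within-eye activity is more correlated than between-eye activity, weight growth follows the averaged Hebb rule, and a subtractive constraint keeps total synaptic strength fixed, so any small initial asymmetry grows until one eye dominates.

```python
import numpy as np

# Two inputs (left eye, right eye) onto one cortical cell.
# The assumed correlation values are illustrative.
C_same, C_diff = 1.0, 0.2   # within-eye vs. between-eye activity correlation
C = np.array([[C_same, C_diff],
              [C_diff, C_same]])

eta = 0.01                   # learning rate
w = np.array([0.50, 0.49])   # tiny initial asymmetry (stands in for noise)

for _ in range(1000):
    w = w + eta * (C @ w)          # averaged Hebb rule: growth ∝ correlations
    w = w - (w.sum() - 1.0) / 2    # subtractive constraint: total strength fixed
    w = np.clip(w, 0.0, 1.0)       # saturation bounds

# The eye-difference mode grows at a rate ∝ (C_same - C_diff) > 0,
# so the initially stronger eye takes over: w -> [1, 0].
```

Under the constraint, the sum of the weights is conserved while the difference grows exponentially, which is why within-eye correlations exceeding between-eye correlations suffice to segregate the inputs.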
The optical implementation of computing systems whose structure and function are motivated by natural intelligence systems is a subject that involves optical computing and neural network models for computation. These are two areas that have individually received attention in recent years, and they share the common property that they promise to provide solutions for fundamental problems in computation. In the case of optical computers, the limitation being addressed is communication. With optics it is possible to have large arrays of processing elements communicating with each other without connecting a wire between each pair. The need to provide wires for communication in an electronic circuit is perceived as a major technological limitation of VLSI [1]. The primary justification for optical computing is therefore to extend the communication capability in computers [2,3]. It is not clear, however, precisely how this global communication capability can be put to good use. Through free-space interconnects and volume holograms we can have thousands of computing elements all talking to each other at the same time. Is it possible to do useful computation in such a system? We look at neural network models for an answer to this question.
ISBN:
(print) 0818607432
The author discusses the optical implementation of computing systems whose structure and function are motivated by natural intelligence systems, with emphasis on communication. The need to provide wires for communication in an electronic circuit is perceived as a major technological limitation of VLSI. The primary justification for optical computing is therefore to extend the communication capability in computers. With optics it is possible to have large arrays of processing elements communicating with each other without a connecting wire between each pair. It is not clear, however, precisely how this global communication capability can be put to good use. The author examines neural networks for an answer.
ISBN:
(digital) 9783663122722
ISBN:
(print) 9783663122746
In recent years, there has been a rapid expansion in the field of nonlinear optics as well as in the field of neural computing. To date, no one would doubt that nonlinear optics is one of the most promising fields for realizing large neural network models, due to its inherent parallelism, the use of the speed of light, and its ability to process two-dimensional data arrays without carriers or transformation bottlenecks. This is the reason why so many of the interesting applications of nonlinear optics, such as associative memories, Hopfield networks and self-organized nets, are realized in an all-optical way using nonlinear optical processing elements. Both areas attract people from a wide variety of disciplines and, judging by the proliferation of published papers, conferences, international collaborations and enterprises, more people than ever before are now involved in research and applications in these two fields. These people all bring a different background to the area, and one of the aims of this book is to provide a common ground from which new development can grow. Another aim is to explain the basic concepts of neural computation as well as its nonlinear optical realizations to an interested audience. Therefore, the book is about the whole field of optical neural network applications, covering all the major approaches and their important results. In particular, it is an introduction that develops the concepts and ideas from their simple basics through their formulation into powerful experimental neural net systems.
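The associative memories and Hopfield networks the abstract mentions can be made concrete with a minimal software sketch (illustrative only, with made-up six-bit patterns; the book's systems realize these dynamics optically, not in code): patterns are stored with the outer-product Hebb rule, and a corrupted cue relaxes back to the nearest stored pattern.

```python
import numpy as np

# Two stored ±1 patterns of length 6 (arbitrary example data).
patterns = np.array([
    [ 1,  1,  1, -1, -1, -1],
    [-1,  1, -1,  1, -1,  1],
])
n = patterns.shape[1]

# Hebbian storage: W = (1/n) Σ p pᵀ, with no self-connections.
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0)

def recall(state, steps=10):
    """Synchronous threshold updates until a fixed point (or step limit)."""
    s = state.copy()
    for _ in range(steps):
        new = np.where(W @ s >= 0, 1, -1)
        if np.array_equal(new, s):
            break
        s = new
    return s

# A cue with one flipped bit relaxes back to the first stored pattern.
cue = np.array([1, 1, 1, -1, -1, 1])   # pattern 0 with the last bit flipped
out = recall(cue)
```

In the optical realizations the book covers, the matrix-vector product `W @ s` is the step that optics performs in parallel, e.g. via a volume hologram encoding `W`.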
With the growth of areas such as edge computing, there is an increasing need to bring neural networks to a wide diversity of different hardware, requiring software that performs the inference of already-trained neural networks on these devices. Today, existing solutions are engines that interpret models and execute them operation by operation. However, these engines are designed for specific hardware, and their architectures make it difficult to add new backends and frontends. To provide greater flexibility for supporting different hardware, this work proposes NNC, a flexible neural network compiler whose architecture is loosely coupled, allowing the programmer to add support for different hardware more flexibly. With this approach, NNC can generate code both for GPUs with higher processing power, such as ARM Mali, and for microcontrollers with lower processing power. NNC is divided into three main layers: the frontend, the optimization and analysis layer, and the backend. The frontend allows writing parsers for different model formats. The optimization and analysis layer runs passes that improve the execution performance of the neural network and carries out analyses such as computing the memory required to run the model. The backend generates code for several platforms from the computational graph. With an architecture based on low coupling, it is possible to write new backends and connect them easily. NNC has backends for NNAPI and ARMNN, and also has a library of its own kernels using Vulkan, a technology that allows cross-platform access to modern GPUs used in a wide variety of devices, such as mobile phones and embedded platforms. Even though NNC is still in its initial version, this approach achieved performance results similar to several other neural network inference engines, with the advantage of already supporting more hardware platforms than other engines.
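The three-layer, loosely coupled architecture described above can be sketched as a plugin-style pipeline. All class and method names here are hypothetical illustrations of the design idea, not NNC's real API: frontends, passes, and backends are registered against a shared graph IR rather than hard-wired to each other, which is what makes adding a new target cheap.

```python
class Graph:
    """Common IR: a computational graph as a list of (op, inputs, outputs) nodes."""
    def __init__(self, nodes):
        self.nodes = nodes

class Frontend:
    """Parses one serialized model format into the common graph IR."""
    def parse(self, model_bytes) -> Graph:
        raise NotImplementedError

class Pass:
    """An optimization or analysis pass over the graph IR (e.g. fusion, memory sizing)."""
    def run(self, graph: Graph) -> Graph:
        raise NotImplementedError

class Backend:
    """Generates code for one target (e.g. NNAPI, ARMNN, Vulkan kernels)."""
    def codegen(self, graph: Graph) -> str:
        raise NotImplementedError

class Compiler:
    """Low coupling: components are registered, so new ones plug in without edits here."""
    def __init__(self):
        self.frontends, self.passes, self.backends = {}, [], {}

    def compile(self, fmt, model_bytes, target):
        graph = self.frontends[fmt].parse(model_bytes)
        for p in self.passes:
            graph = p.run(graph)
        return self.backends[target].codegen(graph)

# Toy usage: one registered frontend and backend, no passes.
class ToyFrontend(Frontend):
    def parse(self, model_bytes):
        return Graph([("relu", ["x"], ["y"])])

class ToyBackend(Backend):
    def codegen(self, graph):
        return "\n".join(f"emit {op}" for op, _, _ in graph.nodes)

c = Compiler()
c.frontends["toy"] = ToyFrontend()
c.backends["cpu"] = ToyBackend()
code = c.compile("toy", b"", "cpu")
```

The design choice worth noting is that the frontend and backend never see each other, only the `Graph` IR, so supporting a new model format times a new device is additive work, not multiplicative.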
Billions of devices will compose the Internet of Things (IoT) in the next few years, generating a vast amount of data that will have to be processed and interpreted efficiently. Most data are currently processed on the cloud; however, this paradigm cannot be adopted to process the vast amount of data generated by the IoT, mainly due to bandwidth limits and the latency requirements of many applications. Thus, we can use edge computing to process these data on the devices themselves. In this context, deep learning techniques are generally suitable for extracting information from these data, but the memory requirements of deep neural networks may prevent even the inference from being executed on a single resource-constrained device. Furthermore, the computational requirements of deep neural networks may yield an unfeasible execution time. To enable the execution of neural network models on resource-constrained IoT devices, the code may be partitioned and distributed among multiple devices. Different partitioning approaches are possible; nonetheless, some of them reduce the inference rate at which the system can execute or increase the amount of communication that needs to be performed between the IoT devices. In this thesis, the objective is to distribute the inference execution of convolutional neural networks across several constrained IoT devices. We propose three automatic partitioning algorithms, which model the deep neural network as a dataflow graph and focus on IoT features, using objective functions such as inference-rate maximization or communication minimization and treating memory limitations as restrictions. The first algorithm is the Kernighan-and-Lin-based Partitioning, whose objective function is to minimize communication while respecting the memory restrictions of each device. The second algorithm is the Deep Neural Networks Partitioning for Constrained IoT Devices, which, in addition to the first algorithm, can maximize the neural network inference rate an
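The first algorithm's setting can be sketched in simplified form (a greedy refinement in the Kernighan-Lin spirit, not the thesis's exact algorithm; the layer graph, edge weights, and memory figures are made up): layers of the network are nodes of a dataflow graph, edge weights are the data transferred between layers, and the goal is a two-device split that minimizes the cut (communication) while each device's memory cap is respected.

```python
import itertools

# Toy layer graph: edge weight = data transferred between layers (MB).
edges = {("conv1", "conv2"): 4, ("conv2", "fc1"): 2, ("conv1", "fc1"): 1,
         ("fc1", "fc2"): 1}
mem = {"conv1": 3, "conv2": 3, "fc1": 2, "fc2": 2}   # per-layer memory (MB)
CAP = 6                                              # memory per device (MB)

def cut(a, b):
    """Total weight of edges crossing the partition = communication cost."""
    return sum(w for (u, v), w in edges.items() if (u in a) != (v in a))

A, B = {"conv1", "fc1"}, {"conv2", "fc2"}   # arbitrary initial split
improved = True
while improved:
    improved = False
    # Try swapping one node from each side; keep any swap that shrinks the
    # cut without violating either device's memory cap.
    for u, v in itertools.product(sorted(A), sorted(B)):
        A2, B2 = (A - {u}) | {v}, (B - {v}) | {u}
        if (sum(mem[n] for n in A2) <= CAP and
                sum(mem[n] for n in B2) <= CAP and
                cut(A2, B2) < cut(A, B)):
            A, B, improved = A2, B2, True
            break

# The refinement ends with the convolutional layers co-located on one
# device, cutting inter-device traffic from 7 MB to 3 MB per inference.
```

True Kernighan-Lin additionally ranks swaps by gain and tentatively applies whole sequences before committing; this sketch keeps only the core idea of pairwise exchanges under a memory restriction.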