Graphs are ubiquitous in the real world, and graph neural networks (GNNs) have exhibited exceptional efficacy in graph learning across diverse fields. With the strengthening of data privacy protection worldwide in recent years,...
In the face of constant fluctuations and sudden bursts in data streams, the elasticity of distributed stream processing systems has become increasingly important. The proactive policy offers a powerful means to realize the effe...
Neural networks offer an intriguing set of techniques for learning based on the adjustment of weights of connections between processing units. However, the power and limitations of connectionist methods for learning, such as the method of back propagation in parallel distributed processing networks, are not yet entirely clear. A set of experiments that more precisely identify the power and limitations of the method of back propagation is reported. The experiment on learning to compute the exclusive-or function suggests that the computational efficiency of learning by back propagation depends on the initial weights in the network. The experiment on learning to play tic-tac-toe suggests that the information content of what is learned by back propagation depends on the initial abstractions in the network, and that these abstractions are a major source of power for learning in parallel distributed processing networks. In addition, it is shown that the learning task addressed by connectionist methods, including back propagation, is computationally intractable. These experimental and theoretical results strongly indicate that current connectionist methods may be too limited for the complex learning tasks they seek to solve. It is proposed that the power of neural networks may be enhanced by developing task-specific connectionist methods.
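The exclusive-or experiment is easy to reproduce. The sketch below is a generic NumPy reconstruction, not the authors' original code: it trains a 2-2-1 network with back propagation from several random initializations, and varying the initial weight scale illustrates how the efficiency of learning depends on the starting weights.

import numpy as np

def train_xor(seed, scale, epochs=5000, lr=0.5):
    """Train a 2-2-1 network on XOR by back propagation; 'scale' sets the
    range of the random initial weights. Returns the final mean squared error."""
    rng = np.random.default_rng(seed)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)
    W1 = rng.uniform(-scale, scale, (2, 2)); b1 = np.zeros(2)
    W2 = rng.uniform(-scale, scale, (2, 1)); b2 = np.zeros(1)
    sig = lambda z: 1 / (1 + np.exp(-z))
    for _ in range(epochs):
        h = sig(X @ W1 + b1)                   # hidden activations
        out = sig(h @ W2 + b2)                 # network output
        d_out = (out - y) * out * (1 - out)    # output-layer delta
        d_h = (d_out @ W2.T) * h * (1 - h)     # hidden-layer delta
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
        W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)
    return float(((out - y) ** 2).mean())

# Same architecture and data; only the initial weights differ per run.
for scale in (0.01, 1.0):
    errs = [train_xor(seed, scale) for seed in range(5)]
    print(f"init scale {scale}: final MSE per seed = {np.round(errs, 3)}")

Depending on the draw, some initializations converge quickly while others stall near a poor solution, which is the dependence on initial weights the abstract describes.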
This paper discusses a class of cellular neural networks with continuously distributed delays in the leakage terms. By applying the Lyapunov functional method and differential inequality techniques, and without assuming boundedness conditions on the activation functions, a new delay-dependent sufficient condition is derived to ensure that all solutions of the networks converge exponentially to the zero point, which corrects some recent results of Xiong and Meng (Electron J Qual Theory Differ Equ, (10):1-12, ***://***/ejqtde/).
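For context, systems of this kind are commonly written in the following standard form (a typical model from this literature, not necessarily the paper's exact system), where the first term is the leakage term with a continuously distributed delay kernel $h_i$:

\[
\dot{x}_i(t) = -c_i \int_0^{\infty} h_i(s)\, x_i(t-s)\, ds
+ \sum_{j=1}^{n} a_{ij} f_j\bigl(x_j(t)\bigr)
+ \sum_{j=1}^{n} b_{ij} f_j\bigl(x_j(t-\tau_{ij}(t))\bigr),
\qquad i = 1, \dots, n,
\]

and exponential convergence to the zero point means every solution satisfies $|x_i(t)| \le M e^{-\lambda t}$ for some constants $M, \lambda > 0$.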
ISBN: 9781665414555 (Print)
This research concerns the selection of deep neural network models for anomaly detection in Internet of Things network traffic. We experimentally evaluate deep neural network models using the same software, the same hardware, and the same subsets of the UNSW-NB 15 dataset for training and testing. The assessment results are the quality metrics of anomaly detection and the time spent training the models.
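A minimal evaluation harness in this spirit might look as follows; the random arrays stand in for preprocessed UNSW-NB 15 feature subsets, the two MLP topologies are placeholders for the models under comparison, and scikit-learn is assumed to be available.

import time
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import precision_score, recall_score, f1_score

# Placeholder data: X_* are numeric flow features, y_* are binary labels
# (0 = normal, 1 = anomaly). In the study these come from UNSW-NB 15.
rng = np.random.default_rng(0)
X_train, y_train = rng.random((1000, 40)), rng.integers(0, 2, 1000)
X_test, y_test = rng.random((200, 40)), rng.integers(0, 2, 200)

models = {                                    # candidate topologies to compare
    "mlp_small": MLPClassifier(hidden_layer_sizes=(64,), max_iter=300),
    "mlp_deep": MLPClassifier(hidden_layer_sizes=(128, 64, 32), max_iter=300),
}
for name, model in models.items():
    t0 = time.perf_counter()
    model.fit(X_train, y_train)               # identical data for every model
    train_time = time.perf_counter() - t0
    pred = model.predict(X_test)
    print(name, f"train={train_time:.1f}s",
          f"P={precision_score(y_test, pred):.3f}",
          f"R={recall_score(y_test, pred):.3f}",
          f"F1={f1_score(y_test, pred):.3f}")

Holding software, hardware, and data fixed, as above, is what makes the reported quality metrics and training times directly comparable across models.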
ISBN: 9781538649756 (Print)
Numerous neuroscience experiments have suggested that cognitive processing in the human brain is realized as probabilistic reasoning and can be modeled as Bayesian inference. It remains unclear how Bayesian inference could be implemented by neural underpinnings in the brain. Here we present a novel Bayesian inference algorithm based on importance sampling. By distributing sampling through a deep tree structure built from simple, stackable basic motifs for any given neural circuit, one can perform local inference while guaranteeing the accuracy of global inference. We show that these task-independent motifs can be used in parallel for fast inference without iteration or scale limitations. Furthermore, experimental simulations with a small-scale neural network demonstrate that our distributed sampling-based algorithm, consistent with our theoretical analysis, can approximate Bayesian inference. Taken together, we provide a proof of principle for using distributed neural networks to implement Bayesian inference, which gives a road map for large-scale Bayesian network implementations based on spiking neural networks with computer hardware, including neuromorphic chips.
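As a minimal illustration of the underlying primitive (plain self-normalized importance sampling, not the paper's tree-structured, motif-based algorithm), the sketch below estimates a posterior mean by weighting prior samples with their likelihoods; the Gaussian model and numbers are illustrative.

import numpy as np

# Estimate E[theta | data] for a Gaussian likelihood with a Gaussian prior,
# drawing proposal samples from the prior so the weights reduce to likelihoods.
rng = np.random.default_rng(1)
data = rng.normal(2.0, 1.0, size=20)           # observations, true mean = 2

n = 10_000
theta = rng.normal(0.0, 3.0, size=n)           # proposal = prior N(0, 3^2)
# log-likelihood of the whole dataset under each sampled theta
loglik = -0.5 * ((data[None, :] - theta[:, None]) ** 2).sum(axis=1)
w = np.exp(loglik - loglik.max())              # unnormalized importance weights
w /= w.sum()                                   # self-normalized weights

posterior_mean = (w * theta).sum()
print(f"importance-sampling posterior mean ~= {posterior_mean:.3f}")

The paper's contribution is to decompose such sampling across a tree of neural motifs so that local inference composes into accurate global inference without iteration.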
ISBN: 9781538646588 (Print)
In this article, we develop a distributed algorithm for learning a large neural network that is deep and wide. We consider a scenario where the training dataset is not available at a single processing node but is distributed among several nodes. We show that a recently proposed large neural network architecture called the progressive learning network (PLN) can be trained in a distributed setup with centralized equivalence; that is, we obtain the same result as if the data were available at a single node. Using a distributed convex optimization method called the alternating direction method of multipliers (ADMM), we train the PLN in the distributed setup.
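A minimal sketch of the idea, using consensus ADMM on a distributed regularized least-squares problem (the kind of convex subproblem that arises in PLN-style training; the model, data, and parameter values are illustrative, not the paper's exact algorithm):

import numpy as np

# Node k holds a data shard (X_k, y_k); all nodes must agree on one weight
# vector z:  minimize sum_k ||X_k w_k - y_k||^2 + lam ||z||^2  s.t.  w_k = z.
rng = np.random.default_rng(0)
d, lam, rho, N = 10, 0.1, 1.0, 4
shards = [(rng.random((50, d)), rng.random(50)) for _ in range(N)]

z = np.zeros(d)
u = [np.zeros(d) for _ in range(N)]            # scaled dual variables
for _ in range(200):
    # local step: each node solves its own ridge-like subproblem
    w = [np.linalg.solve(2 * X.T @ X + rho * np.eye(d),
                         2 * X.T @ y + rho * (z - u_k))
         for (X, y), u_k in zip(shards, u)]
    # global step: only w_k + u_k needs to be communicated between nodes
    z = rho * sum(w_k + u_k for w_k, u_k in zip(w, u)) / (2 * lam + N * rho)
    u = [u_k + w_k - z for u_k, w_k in zip(u, w)]

# centralized equivalence: compare with pooling all shards at one node
X_all = np.vstack([X for X, _ in shards]); y_all = np.hstack([y for _, y in shards])
w_star = np.linalg.solve(X_all.T @ X_all + lam * np.eye(d), X_all.T @ y_all)
print(f"gap to centralized solution: {np.linalg.norm(z - w_star):.2e}")

The final comparison is the point of centralized equivalence: the consensus iterate converges to the same solution a single node with all the data would compute.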
ISBN: 9781728176499 (Print)
This paper investigates and analyses an effective big-data processing strategy for e-commerce coordination based on an infinite-depth neural network topology. It first proposes a neural network training model, Neural Network-Storm (NN-S), based on the Storm streaming distributed architecture, which decomposes the neural network training task into multiple computing units using data parallelism; the parameters are updated synchronously after the training of each batch of data is completed. In the Storm architecture, a ZooKeeper network is used for multi-server distributed deployment. The training results show that the NN-S model can significantly improve the training speed of neural networks. At the same time, the NN-S architecture can quickly recover from node failures and network resource scheduling anomalies, exhibiting strong robustness. In this paper, we investigate streaming-based distributed neural network training and design a Storm-based distributed neural network training model with optimized training algorithms, which are of reference significance for distributed neural network training.
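The synchronous update scheme can be sketched independently of Storm. Below, each "computing unit" stands in for a bolt running on its own node, a linear model keeps the example short, and the parameters are updated once per batch from the averaged local gradients; this is an illustration of the scheme, not the NN-S implementation.

import numpy as np

rng = np.random.default_rng(0)
X, y = rng.random((512, 20)), rng.random(512)
w, lr, n_units = np.zeros(20), 0.1, 4

for _ in range(100):                           # one batch per iteration
    grads = []
    # each computing unit processes its slice of the batch in parallel
    for Xs, ys in zip(np.array_split(X, n_units), np.array_split(y, n_units)):
        grads.append(2 * Xs.T @ (Xs @ w - ys) / len(ys))   # local gradient
    w -= lr * np.mean(grads, axis=0)           # synchronous parameter update

print(f"final MSE: {((X @ w - y) ** 2).mean():.4f}")

Because every unit contributes a gradient before each update, the result matches single-node batch training; the distributed deployment buys throughput and, with ZooKeeper-coordinated Storm workers, recovery from node failures.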
ISBN: 9780769539393 (Print)
In this paper we present the implementation of a framework for accelerating training and classification of arbitrary convolutional neural networks (CNNs) on the GPU. CNNs are a derivative of standard multilayer perceptron (MLP) neural networks optimized for two-dimensional pattern recognition problems such as optical character recognition (OCR) or face detection. We describe the basic parts of a CNN and demonstrate the performance and scalability improvement that can be achieved by shifting the computation-intensive tasks of a CNN to the GPU. Depending on the network topology, training and classification on the GPU perform 2 to 24 times faster than on the CPU. Furthermore, the GPU version scales much better than the CPU implementation with respect to network size.
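One reason convolution maps well to the GPU is that it can be cast as a single large matrix multiplication. The im2col sketch below illustrates that mapping in NumPy (a common technique in GPU CNN implementations; the paper does not state its exact kernel mapping): all feature maps of a layer come out of one GEMM.

import numpy as np

def im2col(x, kh, kw):
    """Unfold every kh-by-kw patch of a 2-D image into one row of a matrix."""
    H, W = x.shape
    out_h, out_w = H - kh + 1, W - kw + 1
    cols = np.empty((out_h * out_w, kh * kw))
    for i in range(out_h):
        for j in range(out_w):
            cols[i * out_w + j] = x[i:i + kh, j:j + kw].ravel()
    return cols

x = np.random.rand(28, 28)                # one input image
filters = np.random.rand(8, 5, 5)         # 8 feature maps, 5x5 kernels
cols = im2col(x, 5, 5)                    # (24*24, 25) patch matrix
W = filters.reshape(8, -1).T              # (25, 8) filter matrix
fmaps = (cols @ W).T.reshape(8, 24, 24)   # one GEMM yields all feature maps

On a GPU the single large matrix product saturates the hardware far better than many small independent convolutions, which is the kind of restructuring behind the reported 2 to 24 times speedups.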
This paper presents work in progress that aims to reduce the overall training and processing time of feed-forward multi-layer neural networks. If the network is large, processing is expensive in terms of both time and space. In this paper, we suggest a cost-effective and presumably faster processing technique that utilizes a heterogeneous distributed system composed of a set of commodity computers connected by a local area network. Neural network computations can be viewed as a set of matrix multiplication processes, which can be adapted to utilize existing matrix multiplication algorithms tailored for such systems. With Java technology as the implementation means, we discuss the different factors that should be considered to achieve this goal, highlighting some issues that might affect such a proposed implementation.
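The matrix-multiplication view is easy to make concrete. The following sketch (NumPy rather than the paper's Java, with a sequential loop standing in for LAN nodes) splits one dense layer's weight matrix column-wise so each node computes one block of the product:

import numpy as np

def forward_layer_distributed(X, W, b, n_workers=4):
    """Forward pass of one dense layer, sigma(X @ W + b), with W split
    column-wise into per-node blocks. In the paper's setting each block
    would be multiplied on a separate commodity computer; here the blocks
    are computed sequentially to illustrate the partitioning."""
    blocks = np.array_split(np.arange(W.shape[1]), n_workers)
    partial = [X @ W[:, cols] + b[cols] for cols in blocks]  # per-node GEMMs
    z = np.hstack(partial)                  # gather the partial results
    return 1.0 / (1.0 + np.exp(-z))         # logistic activation

X = np.random.rand(32, 100)   # a batch of 32 input vectors
W = np.random.rand(100, 50)
b = np.random.rand(50)
h = forward_layer_distributed(X, W, b)
# block-partitioned result matches the single-machine computation
assert np.allclose(h, 1 / (1 + np.exp(-(X @ W + b))))

The factors the paper discusses (block sizes, node heterogeneity, LAN communication cost) determine whether such a partitioning actually beats a single machine.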