Deep learning is a branch of machine learning that has been used to solve many problems in fields such as computer vision, speech recognition, and natural language processing. Deep neural networks are trained b...
ISBN (print): 0780312333
Training a neural network is an NP-complete problem [1][3]. A learning algorithm is proposed in this paper. Analysis and simulation results show that this algorithm, used to train large BP neural networks, can speed up the learning rate and yield accurate results.
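For context, the conventional gradient-descent BP update that such algorithms aim to accelerate can be sketched as follows. This is a generic single-hidden-layer illustration, not the paper's proposed algorithm; all names are assumptions.

```python
# Minimal sketch of a standard BP (backpropagation) step: gradient descent on
# squared error for one tanh hidden layer and a linear output layer.
import numpy as np

def bp_step(x, t, W1, W2, lr=0.1):
    """One gradient step. x: input vector, t: target vector."""
    h = np.tanh(W1 @ x)                          # hidden activations
    y = W2 @ h                                   # linear output
    e = y - t                                    # output error
    dW2 = np.outer(e, h)                         # gradient w.r.t. output weights
    dW1 = np.outer((W2.T @ e) * (1 - h**2), x)   # chain rule through tanh
    return W1 - lr * dW1, W2 - lr * dW2
```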
In this paper, we propose a novel algorithm to learn a Buchi automaton from a teacher who knows an omega-regular language. The learned Buchi automaton can be a nondeterministic Buchi automaton or a limit deterministic Buchi automaton. The learning algorithm is based on learning a formalism called family of DFAs (FDFAs), recently proposed by Angluin and Fisman. The main catch is that we use a classification tree structure instead of the standard observation table structure. The worst-case storage space required by our algorithm is quadratically better than that required by the table-based algorithm proposed by Angluin and Fisman. We implement the proposed learning algorithms in the learning library ROLL (Regular Omega Language Learning), which also includes the other complete omega-regular learning algorithms available in the literature. Experimental results show that our tree-based learning algorithms have the best performance regarding the number of solved learning tasks. (C) 2020 Elsevier Inc. All rights reserved.
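The classification tree mentioned above can be pictured with a minimal sketch of the generic data structure used in tree-based active automata learning: internal nodes hold distinguishing experiments, leaves hold access strings, and "sifting" a word routes it to its current equivalence class. The names and the `membership` callback are assumptions for illustration, not the ROLL API.

```python
# Classification tree for active automata learning (illustrative sketch).
class Node:
    def __init__(self, experiment=None, access=None):
        self.experiment = experiment  # distinguishing suffix (internal node)
        self.access = access          # access string (leaf)
        self.children = {}            # membership outcome -> subtree

    def is_leaf(self):
        return self.access is not None

def sift(root, word, membership):
    """Route `word` down the tree with membership queries until the leaf
    for its current equivalence class is reached."""
    node = root
    while not node.is_leaf():
        outcome = membership(word + node.experiment)
        node = node.children[outcome]
    return node
```

The quadratic space saving reflects that the tree stores one experiment per internal node rather than a full row-by-column observation table.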
Edge computing and the IIoT (Industrial Internet of Things) are two representative application scenarios in 5G (5th-generation) mobile communication networks. This paper therefore studies resource allocation methods for these two typical scenarios and proposes an energy-efficient resource allocation method based on a DRL algorithm, which enhances network performance and reduces network operation cost. Firstly, according to the characteristics of edge computing, a resource allocation model with user quality-of-service (QoS) guarantees is designed, taking the association between base stations and users and the transmission power allocated by base stations to users as decision variables, overall energy efficiency as the optimization objective, and the needs of mobile users as constraints. Secondly, simulation experiments verify that the proposed method trains and converges faster, ensures the quality-of-service requirements of mobile users, and realizes intelligent, energy-saving resource allocation: the approximate numbers of steps to convergence under four cases are 500, 350, 250, and 220, respectively, with rewards of 21.76, 21.09, 20.38, and 20.25. Thirdly, for the IIoT scenario, this paper proposes a resource allocation method for 5G wide-connection, low-delay services based on the asynchronous advantage actor-critic (A3C) algorithm. According to the characteristics of the IIoT environment, a resource allocation model for 5G wide-connection, low-delay services is designed, taking the base-station/user association and the transmission power allocated by base stations to users as decision variables and maximizing energy efficiency while satisfying each user's needs as the optimization goal. On this basis, an energy-efficient resource allocation method based on A3C is proposed. The hierarchical aggregation clustering algorithm ...
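To make the stated decision variables and objective concrete, here is an illustrative sketch of a Shannon-rate-based energy-efficiency objective and QoS check. The matrix shapes, noise and bandwidth values, and function names are assumptions, not the paper's model.

```python
# Energy-efficiency objective over association and power decision variables.
import numpy as np

def energy_efficiency(assoc, power, gain, noise=1e-9, bandwidth=1e6):
    """assoc[b, u] in {0, 1}: base station b serves user u.
    power[b, u] >= 0: transmit power allocated by b to u (W).
    gain[b, u]: channel gain. Returns bits per Joule."""
    signal = assoc * power * gain
    rates = bandwidth * np.log2(1.0 + signal / noise)   # per-link Shannon rate
    total_power = (assoc * power).sum() + 1e-12          # avoid divide-by-zero
    return rates.sum() / total_power

def qos_satisfied(assoc, power, gain, min_rate, noise=1e-9, bandwidth=1e6):
    """QoS constraint: each user's achieved rate meets the minimum rate."""
    rates = bandwidth * np.log2(1.0 + assoc * power * gain / noise)
    return (rates.sum(axis=0) >= min_rate).all()
```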
Neural networks are usually trained using local gradient-based procedures. Such methods frequently find sub-optimal solutions, becoming trapped in local minima. Moreover, in some applications the input/output data sets used to train a neural network may not be one hundred percent precise but only known to lie within certain ranges. It is difficult for a traditional neural network to solve such problems. A learning algorithm based on interval optimization is presented in this paper, which addresses the above disadvantages of traditional learning algorithms.
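As an illustration of the interval idea, the following sketch propagates range-valued inputs through a single sigmoid neuron using interval arithmetic. This is an assumed textbook construction, not the paper's algorithm.

```python
# Interval arithmetic through one neuron: range-valued inputs yield a
# range-valued activation.
import math

def interval_neuron(inputs, weights, bias):
    """inputs: list of (lo, hi) ranges; weights, bias: floats.
    Returns the (lo, hi) range of sigmoid(w . x + b) over the input box."""
    lo = hi = bias
    for (x_lo, x_hi), w in zip(inputs, weights):
        # a positive weight maps the input's lower bound to the output's lower bound
        lo += w * (x_lo if w >= 0 else x_hi)
        hi += w * (x_hi if w >= 0 else x_lo)
    sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
    return sigmoid(lo), sigmoid(hi)  # sigmoid is monotone, so bounds carry over
```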
The Cerebellar Model Arithmetic Computer (CMAC) proposed by Albus is a neural network that imitates the human cerebellum; it is basically a table lookup technique for representing nonlinear functions. The CMAC can approximate a wide variety of nonlinear functions by learning. However, the CMAC requires a large number of learning iterations to achieve adequate approximation accuracy, because the CMAC learning algorithm is based on the Least Mean Square (LMS) method. In this paper, a CMAC learning algorithm based on the Kalman filter is proposed. Using this algorithm, the number of learning iterations is reduced while approximation accuracy comparable to the conventional learning algorithm is achieved. In general, learning algorithms based on the Kalman filter require a much larger computational load than those based on the Least Mean Square method. Because the CMAC system equations contain a sparse matrix, however, the computational load of the proposed learning algorithm can be reduced. Furthermore, since two CMAC weights far from each other can be considered to have little correlation, the number of weights that must be updated in a learning iteration can be reduced, which further lowers the computational load. Computer simulation results for modeling problems are presented.
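For reference, the conventional LMS-based CMAC update that the Kalman-filter algorithm is compared against can be sketched as follows. The addressing scheme is a toy simplification and the class names are assumptions.

```python
# CMAC as table lookup: c overlapping cells are activated per input, the output
# is their sum, and LMS spreads the error equally over the activated cells.
import numpy as np

class CMAC:
    def __init__(self, n_weights, n_active, lr=0.5):
        self.w = np.zeros(n_weights)  # weight table
        self.c = n_active             # cells activated per input
        self.lr = lr

    def _active_cells(self, x, resolution=100):
        # toy addressing: c overlapping tilings over a quantized 1-D input
        q = int(x * resolution)
        return [(q + i) % len(self.w) for i in range(self.c)]

    def predict(self, x):
        return self.w[self._active_cells(x)].sum()  # table lookup + sum

    def train(self, x, target):
        idx = self._active_cells(x)
        error = target - self.w[idx].sum()
        self.w[idx] += self.lr * error / self.c     # LMS update
```

The sparsity remark in the abstract corresponds to the fact that only the `c` activated cells appear in each update, which is what makes a Kalman-style update affordable.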
In this paper, hysteresis characterization in fuzzy spaces is presented, utilizing a fuzzy learning algorithm to generate fuzzy rules automatically from numerical data. The hysteresis phenomenon is first described to analyze its underlying mechanism. A fuzzy learning algorithm is then presented, and it is used to learn and predict a simple hysteresis phenomenon. The learning results are illustrated by mesh plots and input-output relation plots. Furthermore, the dependency of prediction accuracy on the number of fuzzy sets is studied. The method provides a useful tool for modeling hysteresis in fuzzy spaces.
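A common way to generate fuzzy rules automatically from numerical data is a Wang-Mendel-style scheme, sketched below; the abstract does not name its algorithm, so this is an assumed illustration (triangular membership functions, one candidate rule per data pair, conflicts resolved by rule degree). The `n_sets` parameter plays the role of the number of fuzzy sets whose effect on prediction accuracy the paper studies.

```python
# Generate fuzzy rules from numerical (x, y) pairs by picking, for each
# variable, the fuzzy set with the highest membership.
import numpy as np

def tri_memberships(v, centers, width):
    """Triangular membership of value v in each fuzzy set."""
    return np.maximum(0.0, 1.0 - np.abs(v - centers) / width)

def learn_rules(xs, ys, n_sets=7, lo=0.0, hi=1.0):
    centers = np.linspace(lo, hi, n_sets)
    width = centers[1] - centers[0]
    rules = {}  # antecedent set index -> (consequent set index, rule degree)
    for x, y in zip(xs, ys):
        mx = tri_memberships(x, centers, width)
        my = tri_memberships(y, centers, width)
        a, c = int(mx.argmax()), int(my.argmax())
        degree = mx.max() * my.max()
        # keep the highest-degree rule per antecedent to resolve conflicts
        if a not in rules or degree > rules[a][1]:
            rules[a] = (c, degree)
    return rules, centers
```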
ISBN (print): 9784907764579
The development of high-performance optimization algorithms for the learning problem of neural networks is strongly demanded with the advance of deep learning. Learning algorithms with gradient normalization mechanisms have been investigated, and their effectiveness has been shown. In such algorithms, the adaptation of the learning rate is a very important issue. Learning algorithms for neural networks are classified into batch learning and mini-batch learning. When learning from vast training data, mini-batch learning is often used because of memory-size limitations and computational cost. Mini-batch learning algorithms with gradient normalization mechanisms have been investigated; however, learning rate adaptation in mini-batch algorithms with gradient normalization has not been investigated well. This study proposes to introduce a new learning rate adaptation mechanism, based on the sign variation of the gradient, into a mini-batch learning algorithm with gradient normalization. The effectiveness of the proposed algorithm is verified through application to a learning problem of multi-layered neural networks and a learning problem of convolutional neural networks.
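A sign-variation rule of the kind described can be sketched as follows, here in an assumed Rprop-style form applied to a normalized mini-batch gradient; the update constants and function names are illustrative, not the paper's.

```python
# Per-weight learning rate adaptation from the sign variation of successive
# mini-batch gradients, on top of gradient normalization.
import numpy as np

def step(w, grad, state, up=1.2, down=0.5, eta_min=1e-6, eta_max=1.0):
    """One mini-batch update. `state` holds per-weight learning rates `eta`
    and the previous gradient `g_prev`, both arrays shaped like `w`."""
    g = grad / (np.linalg.norm(grad) + 1e-12)       # gradient normalization
    sign_change = np.sign(g) * np.sign(state["g_prev"])
    # same sign -> grow the rate; sign flipped (overshoot) -> shrink it
    state["eta"] = np.where(sign_change > 0, state["eta"] * up, state["eta"])
    state["eta"] = np.where(sign_change < 0, state["eta"] * down, state["eta"])
    state["eta"] = np.clip(state["eta"], eta_min, eta_max)
    state["g_prev"] = g
    return w - state["eta"] * g
```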
Multi-target tracking is an important field in computer vision that aims to match and correlate targets between adjacent frames. In this paper, we propose a fine-grained track recoverable (FGTR) matching strategy and a heuristic empirical learning (HEL) algorithm. The FGTR matching strategy divides the detected targets into two different sets according to the distance between them and applies a different matching strategy to each. To reduce false matches, we evaluate the trust degree of each target's appearance feature information and location feature information and adjust the proportion between the two, improving the accuracy of target matching. To address the trajectory drift caused by the accumulating Kalman filter error during occlusion, the HEL algorithm predicts the target's position in the next few frames based on the effective information of other previous target trajectories and the motion characteristics of related targets, bringing the predicted trajectory closer to the real trajectory. Our proposed method is tested on MOT16 and MOT17, and the experimental results verify the effectiveness of each module, which can effectively handle occlusion and make tracking more accurate and stable.
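The fusion of appearance and location cues can be illustrated with a minimal sketch: a weighted cost matrix followed by Hungarian matching. The fixed weight `alpha` stands in for the paper's adaptive trust adjustment, and the function names are assumptions.

```python
# Fuse appearance (cosine) and location (euclidean) costs, then match tracks
# to detections with the Hungarian algorithm.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def associate(track_feats, det_feats, track_boxes, det_boxes, alpha=0.6):
    """alpha weights appearance vs. location (fixed here; adaptive in the paper).
    Feature and box arguments are 2-D arrays, one row per track/detection."""
    app_cost = cdist(track_feats, det_feats, metric="cosine")
    loc_cost = cdist(track_boxes, det_boxes, metric="euclidean")
    loc_cost /= loc_cost.max() + 1e-12           # normalize to [0, 1]
    cost = alpha * app_cost + (1.0 - alpha) * loc_cost
    rows, cols = linear_sum_assignment(cost)     # minimum-cost matching
    return list(zip(rows, cols))
```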
The distribution system is the most essential part of a power system, as it links the utility and its customers. Under abnormal system conditions, a definitive goal of the utility is to provide a continuous power supply to the customers. This demands a fast restoration process that provides optimal solutions without violating power system operational constraints. The main objective of the proposed work is to reduce the service restoration cost (SRC) and eliminate out-of-service loads. In addition, this work concentrates on minimal usage and optimal placement of additional equipment, such as capacitor placement (CP) and distributed generators (DGs). This paper proposes a two-stage strategy comprising a service restoration phase and an optimization phase. The first phase restores the system from the fault condition, and the second phase identifies the optimal solution via reconfiguration, CP, and DG placement. The optimization phase uses the teaching-learning algorithm (TLA) for optimal restructuring and optimal capacitor and DG placement. The robustness of the algorithm is validated on test cases under different fault conditions: single, multiple, and critical faults. The effectiveness of the proposed strategy is demonstrated on the IEEE 33-bus radial distribution system (RDS) and the 83-bus Taiwan Power Distribution Company (TPDC) system.
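The teacher and learner phases of a teaching-learning-based optimizer can be sketched as follows on a generic cost function standing in for the SRC objective; the real-vector encoding of switch states, capacitor locations, and DG placements is abstracted away, and the names are illustrative.

```python
# One generation of a teaching-learning-based optimizer: the teacher phase
# pulls learners toward the best solution, the learner phase lets random
# pairs learn from each other; both use greedy acceptance.
import numpy as np

def tlbo_step(pop, cost, rng):
    """pop: (n, d) array of candidate solutions; cost: callable on a row;
    rng: np.random.default_rng() instance. Mutates and returns pop."""
    fitness = np.array([cost(x) for x in pop])
    teacher = pop[fitness.argmin()]
    mean = pop.mean(axis=0)
    n, d = pop.shape
    for i in range(n):
        tf = rng.integers(1, 3)                   # teaching factor in {1, 2}
        cand = pop[i] + rng.random(d) * (teacher - tf * mean)
        c = cost(cand)
        if c < fitness[i]:                        # greedy acceptance
            pop[i], fitness[i] = cand, c
        j = rng.integers(n)                       # learner-phase partner
        direction = pop[j] - pop[i] if fitness[j] < fitness[i] else pop[i] - pop[j]
        cand = pop[i] + rng.random(d) * direction
        c = cost(cand)
        if c < fitness[i]:
            pop[i], fitness[i] = cand, c
    return pop
```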