ISBN: (Print) 9781509041176
This paper proposes a novel training algorithm for high-quality Deep Neural Network (DNN)-based speech synthesis. The parameters of synthetic speech tend to be over-smoothed, and this causes significant quality degradation in synthetic speech. The proposed algorithm takes an Anti-Spoofing Verification (ASV) into account as an additional constraint in acoustic model training. The ASV is a discriminator trained to distinguish natural from synthetic speech. Since the acoustic models for speech synthesis are trained so that the ASV recognizes the synthetic speech parameters as natural speech, the synthetic speech parameters come to be distributed in the same manner as natural speech parameters. Additionally, we find that the algorithm compensates not only for the parameter distributions but also for the global variance and the correlations of synthetic speech parameters. The experimental results demonstrate that 1) the algorithm outperforms the conventional training algorithm in terms of speech quality, and 2) it is robust to the hyper-parameter settings.
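The constraint is adversarial in spirit: the ASV is trained to separate natural from synthetic parameters, while the acoustic model is trained to fool it on top of the usual regression loss. Below is a minimal PyTorch sketch of that two-player update; the network shapes (50 linguistic features, 60 acoustic parameters), the feed-forward stand-ins for both models, and the weight `omega` are illustrative assumptions, not the paper's actual configuration.

```python
import torch
import torch.nn as nn

# Stand-ins for the acoustic model (generator) and the ASV (discriminator).
acoustic_model = nn.Sequential(nn.Linear(50, 128), nn.ReLU(), nn.Linear(128, 60))
asv = nn.Sequential(nn.Linear(60, 64), nn.ReLU(), nn.Linear(64, 1))

mse = nn.MSELoss()
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(acoustic_model.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(asv.parameters(), lr=1e-4)
omega = 1.0  # weight of the ASV constraint (a tunable hyper-parameter)

def train_step(linguistic, natural):
    # 1) Train the ASV to label natural parameters 1 and synthetic ones 0.
    synthetic = acoustic_model(linguistic).detach()
    d_loss = bce(asv(natural), torch.ones(natural.size(0), 1)) + \
             bce(asv(synthetic), torch.zeros(synthetic.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the acoustic model so the ASV calls its output "natural",
    #    on top of the conventional MSE against the natural parameters.
    synthetic = acoustic_model(linguistic)
    g_loss = mse(synthetic, natural) + \
             omega * bce(asv(synthetic), torch.ones(synthetic.size(0), 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```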
A visual pattern recognition network and its training algorithm are proposed. The network is constructed of a one-layer morphology network and a two-layer modified Hamming net. This visual network can implement pattern recognition invariant to image translation and size projection. After supervised learning takes place, the visual network extracts image features and classifies patterns much the same as living beings do. Moreover, we set up its optoelectronic architecture for real-time pattern recognition. (C) 1996 Optical Society of America
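As a rough illustration of the back end only, here is a minimal two-layer Hamming net in NumPy: the first layer scores the similarity of a bipolar input to stored exemplars, and the second layer (a MAXNET) suppresses all but the winner. The morphology front end, the paper's modifications, and the exemplars below are not from the paper.

```python
import numpy as np

# Minimal two-layer Hamming net: layer 1 scores similarity to stored bipolar
# exemplars; layer 2 (MAXNET) iteratively suppresses the losing classes.
def hamming_net(x, exemplars, eps=None, iters=100):
    W = np.asarray(exemplars, dtype=float)   # one row per class exemplar
    n = W.shape[1]
    y = (W @ x + n) / 2.0                    # matches = n - Hamming distance
    eps = eps if eps is not None else 1.0 / W.shape[0]
    for _ in range(iters):                   # MAXNET lateral competition
        y = np.maximum(0.0, y - eps * (y.sum() - y))
        if np.count_nonzero(y) <= 1:
            break
    return int(np.argmax(y))                 # index of the winning class

# Example: two 4-bit bipolar exemplars; the noisy probe is closest to the first.
protos = [[1, -1, 1, -1], [-1, 1, -1, 1]]
print(hamming_net(np.array([1, -1, 1, 1]), protos))  # -> 0
```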
In this study, the effect of the algorithms used to train artificial neural networks on their prediction performance has been extensively investigated. For this analysis, three different artificial neural network models have been developed using the Levenberg-Marquardt, Bayesian regularization, and scaled conjugate gradient training algorithms, which are frequently used in the literature. For training, the specific heat values of ZrO2/water nanofluid prepared in five different volumetric concentrations, measured experimentally by the differential thermal analysis method, have been used. Temperature (T) and volumetric concentration (phi) are defined as the input parameters of a multilayer perceptron feed-forward back-propagation artificial neural network model with 15 neurons in the hidden layer, and specific heat values are predicted at the output layer. The results showed that artificial neural networks are an ideal tool for predicting the thermophysical properties of nanofluids. The artificial neural network designed with the Bayesian regularization training algorithm has the highest prediction performance, with an average margin of deviation of 0.00009%, while the artificial neural network developed with the scaled conjugate gradient training algorithm has the lowest prediction performance, with an average error of -0.0032%.
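The described architecture (two inputs T and phi, 15 hidden neurons, one output) is easy to reproduce; a hedged scikit-learn sketch follows. Note that scikit-learn does not implement Levenberg-Marquardt or Bayesian regularization (MATLAB's trainlm/trainbr), so 'lbfgs' stands in, and the data below is synthetic placeholder data rather than the measured specific heat values.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Placeholder data with the paper's input/output layout: T, phi -> c_p.
rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(20, 60, 200),       # temperature T (deg C)
                     rng.uniform(0.0, 0.04, 200)])   # volumetric concentration phi
y = 4.18 - 5.0 * X[:, 1] + 0.001 * X[:, 0]           # placeholder specific heat

# The 2-15-1 multilayer perceptron described in the abstract.
model = MLPRegressor(hidden_layer_sizes=(15,), solver="lbfgs",
                     max_iter=2000, random_state=0).fit(X, y)
print(model.predict([[40.0, 0.02]]))                 # c_p at T=40, phi=2%
```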
This paper presents a new method for the recognition of nine control chart patterns (CCPs) based on the intelligent use of shape and statistical features and an optimized fuzzy system. The proposed technique contains three levels of separation. At each level of separation, an effective set of shape and statistical features is utilized as the input of a classifier for recognizing a subset of the patterns. Owing to the good performance of the adaptive neuro-fuzzy inference system (ANFIS) in pattern recognition problems, the proposed method uses an ANFIS as the classifier at each level of separation, trained by the chaotic whale optimization algorithm (CWOA). Intelligent utilization of newly extracted features, improved robustness of the ANFIS, and consideration of nine patterns in the CCP recognition problem are the main contributions of the proposed method. The simulation results showed that the proposed method performs better than other similar methods and can recognize the type of pattern with 99.77% accuracy. (C) 2019 ISA. Published by Elsevier Ltd. All rights reserved.
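Structurally, the method is a cascade: each level extracts its own feature set and either commits to a label or defers to the next level. A minimal Python sketch of that control flow follows; the feature extractors and classifiers are placeholders, not the paper's shape/statistical features or CWOA-trained ANFIS.

```python
# Control-flow sketch of the three-level cascade; `levels` pairs a feature
# extractor with a classifier per level, each classifier returning a final
# pattern label or the sentinel "defer".
def classify_ccp(window, levels):
    for extract_features, classify in levels:
        label = classify(extract_features(window))
        if label != "defer":
            return label                 # this level recognized the pattern
    return "unrecognized"                # fell through every level

# Example with trivial placeholders: level 1 catches upward trends only.
levels = [
    (lambda w: w[-1] - w[0],
     lambda slope: "upward trend" if slope > 0 else "defer"),
    (lambda w: w, lambda feats: "normal"),   # terminal catch-all level
]
print(classify_ccp([1.0, 1.2, 1.5], levels))  # -> "upward trend"
```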
The purpose of this study was to determine whether artificial neural network (ANN) programs implementing different backpropagation algorithms and default settings are capable of generating equivalent highly predictive models. Three ANN packages were used: INForm, CAD/Chem and MATLAB. Twenty variants of gradient descent, conjugate gradient, quasi-Newton and Bayesian regularisation algorithms were used to train networks containing a single hidden layer of 3-12 nodes. All INForm and CAD/Chem models trained satisfactorily for tensile strength, disintegration time and percentage dissolution at 15, 30, 45 and 60 min. Similarly, acceptable training was obtained for MATLAB models using Bayesian regularisation. Training of MATLAB models with other algorithms was erratic. This effect was attributed to a tendency for the MATLAB implementation of the algorithms to attenuate training in local minima of the error surface. Predictive models for tablet capping and friability could not be generated. The most predictive models from each ANN package varied with respect to the optimum network architecture and training algorithm. No significant differences were found in the predictive ability of these models. It is concluded that comparable models are obtainable from different ANN programs provided that both the network architecture and training algorithm are optimised. A broad strategy for optimisation of the predictive ability of an ANN model is proposed. (c) 2005 Elsevier B.V. All rights reserved.
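The concluding strategy amounts to a joint search over network architecture and training algorithm, scored by predictive ability. A minimal scikit-learn sketch of such a search follows; its three solvers stand in for the twenty algorithm variants in the study, and cross-validated R^2 stands in for the study's measure of predictivity.

```python
from itertools import product

from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor

# Joint search over hidden-layer size (3-12 nodes, as in the study) and
# training algorithm, keeping the combination with the best CV score.
def best_ann(X, y):
    best = None
    for n_hidden, solver in product(range(3, 13), ["lbfgs", "adam", "sgd"]):
        model = MLPRegressor(hidden_layer_sizes=(n_hidden,), solver=solver,
                             max_iter=2000, random_state=0)
        score = cross_val_score(model, X, y, cv=5).mean()  # mean R^2
        if best is None or score > best[0]:
            best = (score, n_hidden, solver)
    return best  # (score, optimum architecture, optimum algorithm)
```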
This paper is concerned with a proposal for a fuzzy artificial neuron with binary input. The fuzzy neuron is based on fuzzy logic in that each component of the input vector is compared to a number which represents the membership value for a 0 in that position. The results of the comparisons are then combined using a generalized mean function to produce a single number, which is compared to a threshold, as in the case of a perceptron consisting of a linear combiner with a hard-limiting function. A training algorithm is developed based on an algorithm for linear inequalities described by Ho and Kashyap in a paper titled 'An Algorithm for Linear Inequalities and its Applications'. The results obtained by simulation look promising.
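A minimal NumPy sketch of the neuron's forward pass follows; the membership values, the exponent p of the generalized mean M_p(a) = ((1/n) sum a_i^p)^(1/p), and the threshold are illustrative, and the Ho-Kashyap-based training step is omitted.

```python
import numpy as np

# Fuzzy neuron forward pass: each binary input bit is compared against a
# stored membership value m_i (degree to which a 0 is expected at position i),
# the match scores are fused with a generalized mean, and the result is
# hard-limited against a threshold, as in a perceptron.
def fuzzy_neuron(x, m, p=2.0, theta=0.5):
    x = np.asarray(x, dtype=float)        # binary input vector (0/1)
    m = np.asarray(m, dtype=float)        # membership values for a 0 per position
    match = np.where(x == 0, m, 1.0 - m)  # agreement of each bit with m_i
    score = np.mean(match ** p) ** (1.0 / p)  # generalized mean M_p
    return 1 if score >= theta else 0     # hard-limiting output

print(fuzzy_neuron([0, 1, 0, 1], m=[0.9, 0.1, 0.8, 0.2]))  # strong match -> 1
```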
The long short-term memory deep artificial neural network is the artificial neural network most commonly used in the literature to solve the forecasting problem, and it is usually trained with the Adam algorithm, which is a derivative-based method. It is known that derivative-based methods are adversely affected by local optima, and their training results can have large variance due to random initial weights. In this study, a new training algorithm is proposed that is less affected by the local optimum problem and has lower variance with respect to the random selection of initial weights. The proposed training algorithm is based on particle swarm optimization, an artificial intelligence optimization method used to solve numerical optimization problems. Since particle swarm optimization does not need the derivative of the objective function and searches the space with more than one solution point, its probability of getting stuck in a local optimum is lower than that of derivative-based algorithms. The training algorithm also includes a restart strategy and an early-stopping condition to mitigate the overfitting problem. To test the proposed training algorithm, 10 time series obtained from FTSE stock exchange data sets are used. The proposed training algorithm is compared with the Adam algorithm and other ANNs using various statistics and statistical hypothesis tests. The application results show that the proposed training algorithm improves the results of long short-term memory, that it is more successful than the Adam algorithm, and that the long short-term memory trained with the proposed training algorithm gives superior forecasting performance compared to other ANN types.
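A minimal derivative-free PSO loop of the kind described is sketched below; in the paper's setting, `loss` would evaluate the LSTM's forecasting error for a candidate flat weight vector. Swarm size, inertia and acceleration coefficients are conventional defaults, and the restart/early-stop logic is omitted for brevity.

```python
import numpy as np

# Particle swarm optimization over a flat weight vector. No gradients are
# used: particles move toward their personal best and the global best.
def pso_train(loss, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(0)
    pos = rng.uniform(-1, 1, (n_particles, dim))   # candidate weight vectors
    vel = np.zeros((n_particles, dim))
    pbest, pbest_val = pos.copy(), np.array([loss(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([loss(p) for p in pos])
        improved = vals < pbest_val                # update personal bests
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy() # update global best
    return gbest

# Toy quadratic "loss" standing in for the LSTM forecast error:
print(pso_train(lambda p: np.sum((p - 0.3) ** 2), dim=5)[:3])
```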
ISBN: (Print) 9781424427932
How to design proper architectures of neural networks for solving given problems is an important issue in neural network research. Existing training algorithms for neural networks focus only on adjusting the networks' weights to improve training accuracy, and few of them adaptively adjust the networks' architecture. However, the architecture is indeed critical for training neural networks to high performance and needs to be handled during the training process. In this paper, we present a new training algorithm for Madalines which takes not only weight adjustment but also architecture adjustment into consideration. The algorithm can thus train Madalines with smaller architectures and higher generalization ability. Experimental results have demonstrated that our algorithm is effective.
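The abstract does not give the adjustment rule, but a generic grow-when-stuck loop conveys the idea: run weight-only training passes and enlarge the hidden layer only when the error plateaus. In the sketch below, `train_weights` and `add_adaline` are hypothetical callables supplied by the caller, not the paper's procedures.

```python
# Grow-when-stuck loop: adjust weights, and enlarge the Madaline only when
# weight adjustment stalls. `train_weights(data) -> error` runs one weight-only
# pass; `add_adaline()` adds one hidden Adaline. Both are hypothetical.
def train_madaline(train_weights, add_adaline, data,
                   max_units=20, patience=5, tol=1e-3):
    best_err, stall, n_units = float("inf"), 0, 1
    while True:
        err = train_weights(data)            # weight-only training pass
        if err < best_err - tol:
            best_err, stall = err, 0         # still improving: keep going
        else:
            stall += 1
        if err == 0 or (stall >= patience and n_units >= max_units):
            return best_err                  # converged or out of budget
        if stall >= patience:                # weights alone have stalled:
            add_adaline()                    # adjust the architecture instead
            n_units += 1
            stall = 0
```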
ISBN: (Print) 9781424496365
In this paper, we aim to improve the stability of learning processes under the SpikeProp algorithm. We propose a method that reduces increases of the error during learning. It repeats two steps: (1) the original SpikeProp algorithm, and (2) a line search in the steepest-descent direction, used only if the first step fails. Experimental results show the improvement of the learning processes.
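In generic gradient terms, the procedure is: take the ordinary SpikeProp step, and fall back to a line search along the steepest-descent direction only when that step raises the error. A hedged NumPy sketch follows; `error` and `grad` are assumed callables over a flat weight vector, and the candidate step sizes are arbitrary.

```python
import numpy as np

# Two-step stabilized update: take the plain SpikeProp step; if it increases
# the error, revert and line-search along the steepest-descent direction.
def stabilized_step(w, error, grad, lr=0.01,
                    candidates=(1.0, 0.5, 0.25, 0.1, 0.01)):
    e0, g = error(w), grad(w)
    w_new = w - lr * g                    # step 1: ordinary SpikeProp update
    if error(w_new) <= e0:
        return w_new
    # Step 2: the plain step failed; try several step sizes along -g.
    trials = [w - a * g for a in candidates]
    errs = [error(t) for t in trials]
    best = int(np.argmin(errs))
    return trials[best] if errs[best] <= e0 else w  # never accept a worse point
```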
ISBN: (Print) 9781424423750
Support Vector Machines have obtained much success in machine learning, but their training requires solving a quadratic optimization problem, so training time increases dramatically with the size of the training set. Hence, standard SVMs have difficulty handling large-scale problems. In this paper, we present a new fast training algorithm for soft-margin Support Vector Classification. This algorithm searches along successive efficient feasible directions. A heuristic for finding the direction maximally correlated with the gradient is applied, and the optimal step size of the optimization algorithm is determined analytically. Furthermore, the solution, gradient and objective function are obtained recursively. To deal with large-scale problems, the Gram matrix need not be stored. Our iterative algorithm fully exploits the properties of quadratic functions. F-SVC is very simple, easy to implement and able to perform on large data sets.
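For a quadratic dual objective f(a) = (1/2) a^T Q a - 1^T a, the optimal step along a direction d is available in closed form, t* = -(g^T d)/(d^T Q d), and the gradient updates recursively as g <- g + t Q d. The sketch below shows one such step; the direction heuristic and F-SVC's box/equality constraint handling are omitted, and `Q_row` computes kernel rows on demand so the Gram matrix is never stored.

```python
import numpy as np

# One analytic step of a feasible-direction method on the dual SVM objective
# f(a) = (1/2) a^T Q a - 1^T a, with Q_ij = y_i y_j K(x_i, x_j).
def fd_step(a, grad, d, Q_row):
    """a: dual variables; grad: current gradient Q a - 1; d: feasible
    direction; Q_row(i): row i of Q computed from the kernel on demand."""
    nz = np.flatnonzero(d)                            # touch only nonzero coords
    Qd = sum((d[i] * Q_row(i) for i in nz), np.zeros_like(a, dtype=float))
    curv = float(d @ Qd)
    t = -float(grad @ d) / curv if curv > 0 else 0.0  # analytic optimal step
    return a + t * d, grad + t * Qd                   # recursive updates
```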