An Artificial Neural Network (ANN) is a mathematical model inspired by the structural and/or functional aspects of biological neural networks. A neural network consists of an interconnected group of artificial neurons, and it processes information using a connectionist approach to computation. Recent advances in the software and hardware technologies of neural networks have motivated new studies in the architecture and applications of these networks. Neural networks have potentially powerful characteristics that can be utilized in the development of our research goal, namely, a true autonomous machine. Machine learning is a major step in this development. The immense computational power of modern digital machines has increased the feasibility of implementing closed-loop control systems in many applications. The main focus of this thesis work is the performance evaluation of the ADALINE neural configuration for system identification of dynamical systems, noise cancellation, and adaptive prediction. It is well known that ADALINE is slow to converge, which is inappropriate for online applications and for the identification of time-varying systems. Two techniques are proposed to speed up the convergence of learning and thus increase the capability to track time-varying system parameters. One is to introduce a momentum term into the weight adjustment during the convergence period. The other is to train the generalized ADALINE network for multiple epochs on data from a sliding window of the system's input-output data. We present an online identification method based on a generalized Adaptive Linear Element (ADALINE) neural network, called GADALINE, for linear time-varying systems that can be described by a discrete-time model. The fine-tuned GADALINE is well suited to online system identification and real-time adaptive control applications due to its low computational demand. A GADALINE neural configuration with a variable learning-rate parameter...
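The two acceleration ideas in this abstract can be sketched together: LMS weight adaptation with a momentum term, re-trained for several epochs over a sliding window of input-output data. This is a minimal illustrative sketch, not the paper's GADALINE implementation; all function names and hyperparameters here are assumptions.

```python
import numpy as np

def gadaline_identify(u, y, n_taps=3, lr=0.05, momentum=0.5,
                      window=20, epochs=5):
    """LMS-style weight adaptation with a momentum term, trained for
    multiple epochs on a sliding window of input-output data.
    Illustrative sketch only; names and defaults are assumptions."""
    w = np.zeros(n_taps)
    dw_prev = np.zeros(n_taps)
    for t in range(n_taps, len(u)):
        lo = max(n_taps, t - window)
        for _ in range(epochs):                # re-train on the sliding window
            for k in range(lo, t + 1):
                x = u[k - n_taps:k][::-1]      # regressor of past inputs
                e = y[k] - w @ x               # prediction error
                dw = lr * e * x + momentum * dw_prev
                w = w + dw
                dw_prev = dw
    return w

# Identify a known FIR system y[k] = 0.5 u[k-1] + 0.25 u[k-2]
rng = np.random.default_rng(0)
u = rng.standard_normal(200)
true_w = np.array([0.5, 0.25, 0.0])
y = np.concatenate([np.zeros(3),
                    np.array([true_w @ u[k - 3:k][::-1] for k in range(3, 200)])])
w_hat = gadaline_identify(u, y)
```

On noiseless data such as this toy FIR example, the estimated weights settle near the true coefficients; the momentum term and the repeated window passes mainly pay off when the system parameters drift over time.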
Speech recognition is a rapidly emerging research area, as the speech signal contains linguistic and speaker information that can be used in applications including surveillance, authentication, and forensics. The performance of speech recognition systems degrades rapidly under channel degradations, mismatches, and noise. To improve recognition performance, the Taylor-Deep Belief Network (Taylor-DBN) classifier is proposed, which modifies the gradient descent (GD) algorithm with a Taylor series in the existing DBN classifier. Initially, the noise present in the speech signal is removed through speech signal enhancement. Features such as Holoentropy with the eXtended Linear Prediction using autocorrelation Snapshot (HXLPS), spectral kurtosis, and spectral skewness are extracted from the enhanced speech signal and fed to the Taylor-DBN classifier, which identifies the speech of impaired persons. Experiments are conducted on the TensorFlow speech recognition database, a real database, and the ESC-50 dataset. The accuracy, False Acceptance Rate (FAR), False Rejection Rate (FRR), and Mean Square Error (MSE) of the Taylor-DBN for the TensorFlow speech recognition database are 96.95%, 3.04%, 3.04%, and 0.045, respectively; for the real database, they are 96.67%, 3.32%, 3.32%, and 0.0499, respectively. Similarly, for the ESC-50 dataset, the accuracy, FAR, FRR, and MSE are 96.81%, 3.18%, 3.18%, and 0.047, respectively. The results imply that the Taylor-DBN performs better than existing conventional methods.
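For readers unfamiliar with the reported metrics, the following sketch computes accuracy, FAR, and FRR from a recognizer's decisions in a one-vs-rest view. This is the standard textbook definition of the rates, an assumption here; the paper may aggregate its multi-class results differently.

```python
def recognition_metrics(y_true, y_pred, positive):
    """Accuracy, False Acceptance Rate (FAR) and False Rejection Rate (FRR)
    for a one-vs-rest view of a recognizer's decisions. Illustrative only."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))  # false acceptances
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))  # false rejections
    acc = (tp + tn) / len(y_true)
    far = fp / (fp + tn) if fp + tn else 0.0
    frr = fn / (fn + tp) if fn + tp else 0.0
    return acc, far, frr
```

Note that in the reported tables FAR and FRR are equal, which typically indicates operation at the equal error rate point.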
Deep multi-layer neural networks represent hypotheses of very high-degree polynomials to solve very complex problems. Gradient descent optimization algorithms are used to train such deep networks through backpropagation, which suffers from persistent problems such as the vanishing gradient problem. To overcome the vanishing gradient problem, we introduce a new anti-vanishing back-propagated learning algorithm called Oriented Stochastic Loss Descent (OSLD). OSLD updates a randomly initialized parameter iteratively in the opposite direction of its partial derivative's sign, by a small positive random number scaled by a tuned ratio of the model loss. This paper compares OSLD with the stochastic gradient descent (SGD) algorithm, as the basic backpropagation algorithm, and Adam, as one of the best backpropagation optimizers, on five benchmark models. Experimental results show that OSLD is highly competitive with Adam in small and moderate-depth models, and that OSLD outperforms Adam in very deep models. Moreover, OSLD is compatible with current backpropagation architectures, except for learning rates. Finally, OSLD is stable and opens up more options for very deep multi-layer neural networks. (C) 2021 Elsevier B.V. All rights reserved.
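The update rule as described in the abstract can be sketched directly: each parameter moves opposite to the sign of its partial derivative by a small positive random number scaled by a ratio of the loss. The `ratio` value and the uniform sampling range below are illustrative assumptions, not the paper's tuned settings.

```python
import random

def osld_step(params, grads, loss, ratio=0.01, rng=random.Random(0)):
    """One OSLD-style update: move each parameter opposite to the SIGN of
    its partial derivative, by a small positive random number scaled by a
    ratio of the model loss. Sketch of the rule as the abstract states it."""
    step_scale = ratio * loss
    new_params = []
    for p, g in zip(params, grads):
        sign = (g > 0) - (g < 0)               # sign of the partial derivative
        step = rng.uniform(0.0, 1.0) * step_scale
        new_params.append(p - sign * step)     # opposite direction of the sign
    return new_params
```

Because only the gradient's sign is used, the step size never shrinks with the gradient magnitude, which is the intuition behind the anti-vanishing claim.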
To control a robot and track a designed trajectory within a specified precision range under uncertain disturbances, an adaptive fuzzy control scheme for a robot arm manipulator is discussed. The controller output error method (COEM) is used to design the adaptive fuzzy controller. Some or all of the parameters of the controller are adjusted using the gradient descent algorithm to minimize the output error. COEM is adopted in the adaptive control system for a 5-DOF robot arm manipulator. Simulation results show the effectiveness of the method and the real-time adjustment of the parameters.
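The parameter-adjustment step can be illustrated in isolation: gradient descent on the squared output error, with the gradient estimated by finite differences. `simulate_error` below is a hypothetical stand-in for evaluating the closed loop at a given parameter value; the real COEM design derives the gradient analytically from the fuzzy controller structure.

```python
def tune_gain(simulate_error, theta, lr=0.1, iters=50, h=1e-4):
    """Gradient descent on a scalar controller parameter to minimize the
    squared output error; the gradient is estimated by finite differences.
    Illustrative sketch, not the paper's analytic COEM update."""
    for _ in range(iters):
        e = simulate_error(theta)
        e_h = simulate_error(theta + h)
        grad = (e_h ** 2 - e ** 2) / h   # numerical estimate of d(e^2)/d(theta)
        theta -= lr * grad
    return theta

# Toy closed loop whose output error vanishes at theta = 2.0
theta_star = tune_gain(lambda th: th - 2.0, theta=0.0)
```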
This paper proposes a neural network method for solving the time-fractional Fokker-Planck equation. An energy function is constructed from the initial-boundary value conditions. The Caputo derivative is approximated by the L1 numerical scheme, and an unconstrained discretized minimization problem is presented. The gradient descent algorithm is adopted for neural network training. Semi-analytical solutions are obtained, and two numerical examples demonstrate the method's efficiency.
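For reference, the standard L1 approximation of the Caputo derivative of order $\alpha \in (0,1)$ on a uniform grid $t_n = n\tau$ takes the form below; the paper's exact discretization may differ in details.

\[
{}^{C}D_t^{\alpha} u(t_n) \;\approx\; \frac{\tau^{-\alpha}}{\Gamma(2-\alpha)} \sum_{k=0}^{n-1} b_k \left[ u(t_{n-k}) - u(t_{n-k-1}) \right],
\qquad b_k = (k+1)^{1-\alpha} - k^{1-\alpha}.
\]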
Graphics Processing Units (GPUs) have been used to accelerate graphics calculations as well as to develop more general-purpose devices. One of the most widely used parallel platforms is the Compute Unified Device Architecture (CUDA), which allows implementations to run in parallel across multiple GPUs, achieving high computational performance. Over the last years, CUDA has been used for the implementation of several parallel distributed systems. At the end of the 1980s, a type of Neural Network (NN) inspired by the behavior of queueing networks, named the Random Neural Network (RNN), was introduced. The method has been used successfully in the Machine Learning community to solve many learning benchmark problems. In this paper, we implement the gradient descent algorithm in CUDA to optimize an RNN model. We evaluate the performance of the algorithm on two real benchmark problems concerning energy sources. In addition, we present a comparison between the parallel implementation in CUDA and a traditional implementation in the C programming language.
This paper focuses on the concept of a subsea satellite well system in a deep-water oil field, and a mixed-integer nonlinear programming (MINLP) model is proposed to help design the layout that minimizes the payback period. The proposed model has two key aspects. First, the objective function, the payback period, reflects the link between total cost and production. Second, the Floating Production Storage and Offloading units (FPSOs), subsea wells, flowlines, and subsea obstacles are integrated in the model, capturing the interaction of components from underground to sea level. A decomposition strategy based on the gradient descent algorithm is proposed to solve the model. The case studies indicate the model's feasibility. In addition, two different applications of the model are presented: one optimizes the layout under given horizontal displacements, and the other compares the costs of drilling horizontal versus vertical wells, indicating the model's flexibility.
In this paper, a new method is proposed that integrates 3D terrain information, ditch geometry, and biped dynamics for motion planning of a 14-DOF biped robot on 3D terrain containing a 3D ditch. Path planning is modeled as wavefront propagation in a non-uniform medium represented by the Eikonal equation, with a speed function expressing the inhomogeneity of the uneven terrain. The Eikonal equation is solved by the Fast Marching Method (FMM) to obtain the global path. Ten footstep variables (FSVs) characterize one step of walking on a 3D uneven terrain surface. Hip and foot trajectory parameters (HFTPs) are used to construct cubic-spline-based hip and foot trajectories of the biped robot. The optimal values of the HFTPs are obtained by a genetic algorithm that minimizes energy while satisfying the constraint of dynamic balance. A walk dataset is created that contains the optimal trajectories for different FSVs, and it is used to train a feed-forward neural network that makes real-time estimates of the optimal HFTPs, enabling the biped to walk on rough terrain. Simulation results validate the effectiveness of the proposed ditch-crossing method on different uneven terrains.
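The global-path step solves the Eikonal equation $|\nabla T| = 1/F$ for the arrival time $T$ given a terrain speed function $F$. A minimal way to convey the idea is a Dijkstra search over a grid, shown below; note this is a simplification I am substituting for brevity, since true FMM uses an upwind quadratic update rather than edge weights.

```python
import heapq

def travel_time(speed, src):
    """Dijkstra approximation of the Eikonal arrival time T on a
    4-connected grid; `speed` is the terrain speed function F > 0.
    True FMM uses an upwind quadratic update; this sketch substitutes
    edge-weight Dijkstra for brevity."""
    rows, cols = len(speed), len(speed[0])
    T = [[float("inf")] * cols for _ in range(rows)]
    T[src[0]][src[1]] = 0.0
    heap = [(0.0, src)]
    while heap:
        t, (i, j) = heapq.heappop(heap)
        if t > T[i][j]:
            continue                           # stale heap entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < rows and 0 <= nj < cols:
                # unit step traversed at the average local speed
                cost = 2.0 / (speed[i][j] + speed[ni][nj])
                if t + cost < T[ni][nj]:
                    T[ni][nj] = t + cost
                    heapq.heappush(heap, (T[ni][nj], (ni, nj)))
    return T

# Uniform speed 1: arrival time equals Manhattan distance on this grid
T = travel_time([[1.0] * 4 for _ in range(3)], (0, 0))
```

Slower cells (rougher terrain) get higher traversal cost, so the extracted path bends around them, which is exactly the role of the speed function in the paper's formulation.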
We introduce a multi-period location-inventory-routing problem with time windows and fuel consumption, which simultaneously optimizes the location, routing, and inventory decisions for both the distribution center and customers in a multi-echelon supply chain. To better reflect reality, fuel consumption is incorporated into the variable transportation cost. The problem is formulated as a mixed-integer nonlinear programming model, and a two-stage hybrid metaheuristic algorithm is proposed to address it. In the first stage, a customized genetic algorithm is applied. In the second stage, a gradient descent algorithm is used to improve the inventory decisions and further reduce the total cost. Results of numerical experiments on generated instances confirm the effectiveness of the algorithm and show that inventory management activities contribute considerably to total cost savings. Given the cost trade-off between transportation and inventory, the retailers' inventory levels show more shortages after post-optimization, while the inventory level in the distribution centers can be either reduced or increased, depending on the spatial distribution of retailers in the vicinity of each distribution center. A sensitivity analysis of the model parameters is also conducted to provide managerial insights.
Recommender technologies have been developed to provide helpful predictions for decision making under uncertainty. An extensive amount of research has been done to increase the quality of such predictions; currently, methods based on matrix factorization are recognized as among the most efficient. The focus of this paper is to extend a matrix factorization algorithm with content awareness to increase prediction accuracy. A recommender system prototype based on the resulting Extended Content-Boosted Matrix Factorization algorithm is designed, developed, and evaluated. The algorithm is evaluated empirically, beginning with the creation of an experimental design and followed by off-line empirical tests with accuracy measurement. The results reveal further potential of content awareness in matrix factorization methods, potential that was not fully realized in the generalized alignment-biased algorithm of Nguyen and Zhu, and uncover opportunities for future research.
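As background for the matrix factorization family this paper extends, here is a minimal baseline: stochastic gradient descent on the observed (user, item, rating) triples with a regularized squared-error loss. This is a generic sketch, not the Extended Content-Boosted algorithm; all hyperparameters are illustrative.

```python
import random

def factorize(ratings, n_users, n_items, k=2, lr=0.02, reg=0.001,
              epochs=1000, seed=0):
    """Plain matrix factorization by SGD: learn user factors P and item
    factors Q so that P[u] . Q[i] approximates each observed rating.
    Generic baseline sketch; not the paper's content-boosted variant."""
    rng = random.Random(seed)
    P = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_users)]
    Q = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in ratings:
            pred = sum(P[u][f] * Q[i][f] for f in range(k))
            e = r - pred                        # prediction error
            for f in range(k):
                pu, qi = P[u][f], Q[i][f]
                P[u][f] += lr * (e * qi - reg * pu)
                Q[i][f] += lr * (e * pu - reg * qi)
    return P, Q

ratings = [(0, 0, 5.0), (0, 1, 1.0), (1, 0, 4.0), (1, 1, 1.0)]
P, Q = factorize(ratings, n_users=2, n_items=2)
```

Content-boosted variants such as the one in this paper additionally constrain or bias the factors using item or user content features, so that predictions remain sensible for sparsely rated items.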