ISBN (print): 9781509052561
The back propagation algorithm has a wide range of applications for training feed-forward neural networks. Over the years, many researchers have used the back propagation algorithm to train their neural network based systems without emphasizing how to fine-tune the parameters of the algorithm. This paper sheds light on how researchers can manipulate and experiment with the parameters of the back propagation algorithm to achieve the optimum learning performance. The paper presents the results of laboratory experiments on fine-tuning the parameters of the back propagation algorithm. The process of fine-tuning the parameters was applied to a neural network based expert system prototype. The prototype aims to analyze and design customized motivational strategies based on employees' perspectives. The laboratory experiments were conducted on the following parameters of the back propagation algorithm: learning rate, momentum rate, and activation functions. Learning performance is measured and recorded. At the same time, the impact of the activation function on the final output is also measured. Based on the results, the values of the above parameters that provide the optimum learning performance are chosen for the full-scale system implementation.
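As a rough illustration (not the paper's prototype or data), the sketch below sweeps the three parameters discussed above -- learning rate, momentum rate, and activation function -- for a small back propagation network on a toy XOR task; the network size and the value grids are purely illustrative assumptions.

```python
# Hedged sketch: sweep back propagation hyper-parameters on a toy XOR task.
import numpy as np

def make_activation(name):
    if name == "sigmoid":
        f = lambda z: 1.0 / (1.0 + np.exp(-z))
        df = lambda a: a * (1.0 - a)           # derivative expressed via the output
    else:                                      # "tanh"
        f = np.tanh
        df = lambda a: 1.0 - a ** 2
    return f, df

def train_bp(X, y, lr, momentum, act, hidden=4, epochs=2000, seed=0):
    rng = np.random.default_rng(seed)
    f, df = make_activation(act)
    W1 = rng.normal(0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1));          b2 = np.zeros(1)
    vW1 = np.zeros_like(W1); vW2 = np.zeros_like(W2)
    for _ in range(epochs):
        h = f(X @ W1 + b1)                     # forward pass
        out = f(h @ W2 + b2)
        err = out - y
        d_out = err * df(out)                  # back-propagated error signals
        d_h = (d_out @ W2.T) * df(h)
        vW2 = momentum * vW2 - lr * h.T @ d_out   # momentum-smoothed weight updates
        vW1 = momentum * vW1 - lr * X.T @ d_h
        W2 += vW2; b2 -= lr * d_out.sum(0)
        W1 += vW1; b1 -= lr * d_h.sum(0)
    return float(np.mean(err ** 2))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([[0], [1], [1], [0]], float)
for act in ("sigmoid", "tanh"):
    for lr in (0.1, 0.5):
        for mom in (0.0, 0.9):
            mse = train_bp(X, y, lr, mom, act)
            print(f"act={act:7s} lr={lr:.1f} momentum={mom:.1f} MSE={mse:.4f}")
```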
ISBN (print): 9781509012862
In the past years, modern mathematical methods for image analysis have led to a revolution in many fields, from computer vision to scientific imaging. However, some recently developed image processing techniques successfully exploited in other fields have been infrequently, if ever, tested on celestial observations. Here we present a new approach to super-resolution of astronomical objects using the back propagation algorithm. Super-resolution is effective in improving the quality of analysis of diffuse sources previously hidden by background noise, effectively increasing the depth of available observations. A higher-resolution image can be obtained from a set of low-resolution frames through super-resolution. Super-resolution is viable only for point sources, which have negligible dimensions; for extended objects, the information about intensity variation at high angular frequencies is irreversibly lost. Obtaining a super-resolved image of extended sources (e.g., comets, meteoroids, etc.) is a further challenge when the speed of the object is very high. Acquiring high-resolution images of celestial objects from ground-based telescopes is intricate and often requires computational post-processing techniques to remove blur caused by atmospheric turbulence. Even images obtained through satellite imaging are compressed before being sent to Earth, so there is a need for super-resolution of those compressed or noisy images. Here we implement super-resolution for astronomical objects using the back propagation algorithm to recover lost information and address the challenges posed by fast-moving celestial objects. The purpose is to super-resolve fast-moving celestial objects whose analysis may in the future help prevent collisions of such objects with Earth and avoid damage within the solar system.
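A minimal sketch of the learning-based super-resolution idea, assuming a synthetic image, 2x down-sampling, and a patch-to-patch mapping learned by a back-propagation-trained MLP (scikit-learn's MLPRegressor here); real astronomical frames and the authors' network would replace these stand-ins.

```python
# Hedged sketch: learn to map up-sampled low-resolution patches to high-resolution patches.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
hi = rng.random((64, 64))                          # stand-in for a high-resolution frame
lo = hi.reshape(32, 2, 32, 2).mean(axis=(1, 3))    # 2x down-sampling (block averaging)
up = np.kron(lo, np.ones((2, 2)))                  # naive up-sampling back to 64x64

def patches(img, k=4):
    ps = []
    for i in range(0, img.shape[0] - k + 1, k):
        for j in range(0, img.shape[1] - k + 1, k):
            ps.append(img[i:i + k, j:j + k].ravel())
    return np.array(ps)

X, Y = patches(up), patches(hi)                    # input: up-sampled, target: true high-res
model = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
model.fit(X, Y)                                    # weights tuned by back propagation
sr = model.predict(X)
print("reconstruction MSE:", float(np.mean((sr - Y) ** 2)))
```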
Building an effective methodology to detect characters in images with a low error rate is a challenging task. Our aim is to furnish an algorithm that generates error-free recognition of text from a given input image, which will help in document digitizing and hand-written text recognition. OCR has been an intensive research topic for more than four decades, since inputting data through the keyboard is probably the most time-consuming and labor-intensive part of the work. This paper discusses the mechanical or electronic conversion of scanned images, text containing graphics, images captured by camera, and the recognition of images where characters may be broken or smeared. The optical character recognition system is a desktop application developed using the Java IDE with MySQL as the database. We gained 91.82% accuracy when it was applied to different data sets. In pre-processing we used different techniques to remove noise from the image; in post-processing we used a dictionary for the characters that are not recognized during classification; in classification we used the back propagation algorithm for training the neural network; and feature extraction was performed by template matching and Hamming distance. All the algorithms were developed in Java.
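A toy sketch of the template matching and Hamming distance idea mentioned above, using hypothetical 5x5 binary glyph templates rather than the paper's Java implementation; the distances to the templates can serve both as a feature vector and as a nearest-template prediction.

```python
# Hedged sketch: Hamming-distance template matching on tiny binary glyphs.
import numpy as np

templates = {
    "I": np.array([[0, 0, 1, 0, 0]] * 5),
    "T": np.array([[1, 1, 1, 1, 1]] + [[0, 0, 1, 0, 0]] * 4),
    "L": np.array([[1, 0, 0, 0, 0]] * 4 + [[1, 1, 1, 1, 1]]),
}

def hamming(a, b):
    return int(np.sum(a != b))                    # number of differing pixels

def recognize(glyph):
    # feature vector = Hamming distance to every template; prediction = arg min
    feats = {ch: hamming(glyph, t) for ch, t in templates.items()}
    return min(feats, key=feats.get), feats

noisy_T = templates["T"].copy()
noisy_T[2, 0] = 1                                 # flip one pixel to simulate smearing
label, feats = recognize(noisy_T)
print("predicted:", label, "distances:", feats)
```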
This study established an adaptive memetic differential evolution-backpropagation-fuzzy neural network (AMDE-BP-FNN) control method to achieve high-efficiency and precise control of robots with complex dynamic characteristics while reducing control costs. The adaptive differential evolution (ADE) method was applied to search for the optimal parameters in the global scope and to delimit the pseudo-global search scope. The memetic differential evolution (MDE) method was used to search for optimal parameters within the pseudo-global scope, and a probability factor was set to decide whether to use the backpropagation (BP) algorithm for online optimization. Finally, simulations, experiments, and real-world applications were conducted. The results indicated the high efficiency, high precision, and viability of the proposed AMDE-BP-FNN method.
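A compact, hedged sketch of the hybrid global-search-plus-gradient-refinement idea, using SciPy's differential evolution and a BFGS step as a stand-in for the BP refinement; the target function, network size, and probability factor are assumptions for illustration only, not the authors' AMDE-BP-FNN.

```python
# Hedged sketch: differential evolution searches network parameters globally,
# and with a set probability a gradient-based local refinement is applied.
import numpy as np
from scipy.optimize import differential_evolution, minimize

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 50)
y = np.sin(np.pi * x)                              # toy "plant response" to approximate

def net(params, x):
    w1, b1, w2, b2 = params[:5], params[5:10], params[10:15], params[15]
    h = np.tanh(np.outer(x, w1) + b1)              # single hidden layer, 5 tanh units
    return h @ w2 + b2

def loss(params):
    return float(np.mean((net(params, x) - y) ** 2))

bounds = [(-3, 3)] * 16
result = differential_evolution(loss, bounds, maxiter=100, seed=0, polish=False)
params = result.x
if rng.random() < 0.5:                             # probability factor for local refinement
    params = minimize(loss, params, method="BFGS").x
print("final MSE:", loss(params))
```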
A novel Extreme Learning Machine algorithm is used to train the neural network of a Power System Stabilizer (PSS) to minimize low-frequency oscillations. The use of rapid-acting exciters, the interconnection of various power systems, and disturbances such as faults and load changes all contribute to the generation of low-frequency oscillations. If sufficient damping is not provided, these oscillations grow and are sustained, and eventually cause the power system to shut down entirely. The lead-lag power system stabilizer is the conventional device used, but it is slow in operation and applies to linear systems only. Artificial intelligence techniques such as fuzzy logic and neural networks are used to overcome these bottlenecks. The neural networks are trained using backpropagation and extreme learning algorithms. The operation of the designed power system stabilizers is verified on a 7-machine 29-bus system, a 4-machine 11-bus system, and an SMIB system. The proposed controller provides better damping performance compared to other controllers in terms of Integral Time Squared Error (ITSE) and Integral Squared Error (ISE). The proposed system is designed and validated in the MATLAB/Simulink R2023b environment.
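A minimal sketch of the Extreme Learning Machine training step on synthetic data (not the stabilizer signals): the hidden-layer weights are random and fixed, and only the output weights are solved in closed form by least squares, which is what makes ELM training fast compared with iterative backpropagation.

```python
# Hedged sketch: Extreme Learning Machine on synthetic regression data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 2))                   # stand-in for stabilizer input signals
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]                # stand-in for the desired damping signal

n_hidden = 50
W = rng.normal(size=(2, n_hidden))                 # random, untrained input weights
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)                             # hidden-layer output matrix

beta, *_ = np.linalg.lstsq(H, y, rcond=None)       # output weights via pseudo-inverse
pred = H @ beta
print("training MSE:", float(np.mean((pred - y) ** 2)))
```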
The development of automation and intelligence in geological core drilling is not yet mature. The selection and improvement of drilling parameters rely mainly on experience, and adjustments are often made after drilling by evaluating the core, which introduces a lag and reduces the drilling efficiency. Therefore, this study first establishes a geological core drilling experiment platform to collect drilling data. Through this platform, the actual data at the drill bit can be obtained directly, solving the problem that data measured by surface equipment during practical drilling differ from the conditions at the bit. Second, the backpropagation (BP) algorithm is used to perform ROP prediction, with weight on bit (WOB), torque (TOR), flow rate (Q), and rotation speed (RPM) as input parameters and rate of penetration (ROP) as the output. Subsequently, correlation analysis is used to perform feature parameter optimization, and the effects of bit wear and bit cutting depth on the experiment are considered. Finally, comparison with algorithms such as ridge regression, SVM, and KNN shows that the ROP prediction model using the BP neural network has the highest prediction accuracy of 94.1%. The results provide a reference for ROP prediction and the automation of geological core drilling rigs.
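A minimal sketch of the ROP prediction step on synthetic drilling data (the platform measurements are not reproduced here): a back-propagation-trained network maps WOB, TOR, Q, and RPM to ROP, with the features standardized before training; units and the synthetic relationship are illustrative assumptions.

```python
# Hedged sketch: BP neural network regression of ROP from drilling parameters.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
WOB = rng.uniform(2, 10, n)                        # weight on bit (kN), assumed range
TOR = rng.uniform(20, 80, n)                       # torque (N*m), assumed range
Q   = rng.uniform(10, 40, n)                       # flow rate (L/min), assumed range
RPM = rng.uniform(200, 800, n)                     # rotation speed (r/min), assumed range
ROP = 0.02 * WOB * np.sqrt(RPM) + 0.01 * TOR + rng.normal(0, 0.2, n)  # synthetic target

X = np.column_stack([WOB, TOR, Q, RPM])
X_train, X_test, y_train, y_test = train_test_split(X, ROP, random_state=0)
scaler = StandardScaler().fit(X_train)

model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=3000, random_state=0)
model.fit(scaler.transform(X_train), y_train)
print("R^2 on held-out data:", model.score(scaler.transform(X_test), y_test))
```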
The privacy and security of big data have become a major concern in recent years, necessitating privacy-preserving data mining strategies that preserve the balance between data value and privacy. The application of data mining techniques to the web is known as web mining. The majority of consumers seek complete anonymity when using web apps and engaging in online activities, which raises privacy problems. Condensation, randomization, tree structures, and other traditional approaches are employed to maintain privacy. Existing techniques are limited in that they are unable to balance data usefulness with privacy and may have scalability issues. Privacy-preserving tools such as encryption and machine learning techniques, among others, can be used to protect and classify the data stream. To overcome these limitations, the data are transformed into an image, and then a random transformation is performed using enhanced particle swarm optimization. The optimization is performed to identify the optimal random rotation of the data for both protection and better classification. The classification is performed through the back propagation algorithm, and the perturbed data are tested against an independent component analysis attack. The classification accuracy, computation time, and error rate of the classifier are measured and compared with the existing method. The comparison shows the improvement achieved by the proposed method. The proposed system is implemented in MATLAB R2021a.
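A toy sketch of the perturbation-then-classify idea, using a plain random orthogonal rotation in place of the PSO-optimized one and a back-propagation-trained classifier from scikit-learn; the dataset and dimensions are synthetic assumptions.

```python
# Hedged sketch: rotate features with a random orthogonal matrix, then classify.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=400, n_features=8, random_state=0)

Q, _ = np.linalg.qr(rng.normal(size=(8, 8)))       # random orthogonal (rotation) matrix
X_perturbed = X @ Q                                # geometry-preserving perturbation

X_tr, X_te, y_tr, y_te = train_test_split(X_perturbed, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)                                # classifier trained by back propagation
print("accuracy on perturbed data:", clf.score(X_te, y_te))
```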
In this study, a Nucleolus Theory (NT) based iterative method is presented to compute the Distribution Locational Marginal Price (DLMP) at buses where embedded generator (EG) units are installed in the distribution network. The NT-based iterative method provides financial incentives to EG owners according to their contribution to loss reduction and emission reduction for specified loading conditions. In this study, the DLMP values depend on the decision-maker's preference among loss reduction, emission reduction, and the distribution company's additional benefit. The main objective of the proposed method is to optimize the active power loss and emissions based on the DISCO decision-maker's priority, subject to EG capacity. The DLMP value for each EG unit is a variable that has to be computed based on the DISCO decision-maker's priority. The proposed NT-based iterative method has been implemented on the Taiwan Power Company Distribution System (TPCDS), consisting of 84 buses with 15 EG units, and on a 201-bus radial distribution network with 15 EG units in the MATLAB environment, and the DLMP values were computed based on the contribution of the EG units to loss and emission reduction. It is inferred from the results that the proposed method enables EG owners to receive more financial benefit and also enables DISCO decision-makers to operate the network more optimally in terms of losses and emissions.
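The nucleolus itself is computed iteratively; as a rough, hedged illustration, the sketch below solves only the first linear program of that procedure (the least-core step) for a hypothetical 3-player benefit-sharing game of the kind the NT-based method addresses. The characteristic-function values are invented and stand in for coalition contributions to loss and emission reduction.

```python
# Hedged sketch: least-core LP (first step toward the nucleolus) for a toy 3-player game.
import numpy as np
from scipy.optimize import linprog

# Characteristic function v(S): stand-in "benefit" of each coalition of EG owners.
v = {(): 0, (1,): 20, (2,): 30, (3,): 25,
     (1, 2): 60, (1, 3): 55, (2, 3): 65, (1, 2, 3): 100}

players = (1, 2, 3)
coalitions = [S for S in v if 0 < len(S) < len(players)]

# Variables: x1, x2, x3, eps.  Minimize the largest excess eps.
c = [0, 0, 0, 1]
A_ub, b_ub = [], []
for S in coalitions:                               # v(S) - x(S) <= eps for every proper coalition
    A_ub.append([-1 if p in S else 0 for p in players] + [-1])
    b_ub.append(-v[S])
A_eq = [[1, 1, 1, 0]]                              # efficiency: x1 + x2 + x3 = v(N)
b_eq = [v[players]]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(None, None)] * 4)
print("allocation:", res.x[:3], "max excess:", res.x[3])
```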
Objective and accurate evaluation of rock mass quality classification is the prerequisite for reliable stability analysis. To develop a tool that can deliver quick and accurate evaluation of rock mass quality, a deep learning approach is developed, which uses stacked autoencoders (SAEs) with several autoencoders and a softmax net. The rock parameters of the rock mass rating (RMR) system are calibrated in this study. The model is trained using 75% of the total database as training samples. The trained SAEs model achieves a nearly 100% prediction accuracy. For comparison, other models are also trained with the same dataset, using an artificial neural network (ANN) and a radial basis function (RBF) network. The results show that the SAEs classify all test samples correctly, while the rating accuracies of the ANN and RBF are 97.5% and 98.7%, respectively, as calculated from the confusion matrices. This model is then further employed to predict the slope risk level of an abandoned site. The proposed approach using SAEs, or deep learning in general, is more objective, more accurate, and requires less human intervention. The findings presented here shall shed light for engineers and researchers interested in analyzing rock mass classification criteria or performing field investigation.
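A compact sketch of the pipeline described above, with synthetic stand-ins for the calibrated RMR parameters and a single autoencoder layer instead of the full stack, followed by a softmax-style classifier on the learned representation; the data, labels, and layer sizes are assumptions.

```python
# Hedged sketch: one autoencoder layer + softmax-style classifier on its features.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.random((300, 5))                           # stand-in for calibrated RMR parameters
y = (X.sum(axis=1) > 2.5).astype(int)              # stand-in rock mass class labels

ae = MLPRegressor(hidden_layer_sizes=(3,), activation="logistic",
                  max_iter=3000, random_state=0)
ae.fit(X, X)                                       # autoencoder: reconstruct the input

def encode(X):
    # hidden representation learned by the autoencoder
    return 1.0 / (1.0 + np.exp(-(X @ ae.coefs_[0] + ae.intercepts_[0])))

clf = LogisticRegression(max_iter=1000).fit(encode(X), y)   # softmax-style output layer
print("training accuracy:", clf.score(encode(X), y))
```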
This paper introduces a nonlinear adaptive controller for unknown nonlinear dynamical systems based on approximate models using a multi-layer perceptron neural network. The proposal of this study is to embed the structure of the Multi-Layer Perceptron (MLP) model into the NARMA-L2 structure in order to construct a hybrid neural structure that can be used as an identifier model and a nonlinear controller for MIMO nonlinear systems. A major advantage of the proposed control system is that it does not require prior knowledge of the model. Our ultimate goal is to determine the control input using only the values of the input and output. The weights of the developed NARMA-L2 neural network model are tuned using the backpropagation algorithm. The nonlinear autoregressive-moving average-L2 (NARMA-L2) neural network controller, based on the inputs and outputs of the nonlinear model, is designed to perform the control action on the nonlinear model for the attitude control of unmanned aerial vehicles (UAVs). Once the system has been modeled efficiently and accurately, the proposed controller is designed by rearranging the generalized submodels. The controller performance is evaluated by simulation conducted on a quadcopter MIMO system, which is characterized by nonlinear and dynamic behavior. The obtained results show that the NARMA-L2-based neural network achieves good performance in modeling and control.
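A minimal sketch of the NARMA-L2 control law itself, with hard-coded toy functions standing in for the sub-models f and g that the paper identifies with the trained MLP; the plant and reference are invented, and in practice the sub-models only approximate the plant.

```python
# Hedged sketch: NARMA-L2 control law u = (y_ref - f_hat(y)) / g_hat(y) on a toy plant.
import numpy as np

def plant(y, u):
    # Toy plant (unknown to the controller): y(k+1) = 0.6*y + 0.1*y^3 + 0.4*u
    return 0.6 * y + 0.1 * y ** 3 + 0.4 * u

# Assumed identified NARMA-L2 sub-models: y(k+1) ~ f_hat(y) + g_hat(y) * u
f_hat = lambda y: 0.6 * y + 0.1 * y ** 3           # toy stand-in for the f sub-model
g_hat = lambda y: 0.4                              # toy stand-in for the g sub-model

y, y_ref = 0.0, 1.0
for k in range(10):
    u = (y_ref - f_hat(y)) / g_hat(y)              # NARMA-L2 control law
    y = plant(y, u)
    print(f"step {k}: u = {u:+.3f}, y = {y:+.3f}")
```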