Deep learning optimizes a model by adjusting its parameters during training, so the training algorithm is key to model optimization, and improving it is of great significance to deep learning. Building on the original gradient algorithm, this paper proposes a new gradient descent algorithm, AdaGrad Restricted by Windows (AdaRW), for optimizing deep learning model training. To address the defects of the AdaGrad algorithm, the new algorithm restricts the historical accumulation to a sliding window, which slows the decay of the learning rate and speeds up model training. The paper constructs OceanTDA9, a deep learning model for marine target detection in Synthetic Aperture Radar (SAR) data, and trains it with the proposed AdaRW algorithm on SAR data of the Bohai Sea at a resolution of 10 m. Experiments show that the accuracy and loss of the algorithm are better than those of the AdaGrad and Stochastic Gradient Descent (SGD) algorithms, and its standard deviation is better than that of the Adam algorithm.
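The abstract gives only the idea of windowed accumulation, not the update rule. Below is a minimal NumPy sketch of an AdaGrad variant that restricts the squared-gradient history to a sliding window; the window size, epsilon, and the exact accumulation rule are illustrative assumptions, not the paper's definitions.

# Sketch of a windowed AdaGrad ("AdaRW"-style) update. Assumed, not taken
# from the paper: window size W, eps, and summing the last W squared grads.
from collections import deque
import numpy as np

class WindowedAdaGrad:
    def __init__(self, lr=0.01, window=10, eps=1e-8):
        self.lr = lr
        self.eps = eps
        self.history = deque(maxlen=window)  # keeps only the last W squared gradients

    def step(self, params, grad):
        # Standard AdaGrad accumulates ALL past g**2, so the effective step
        # lr / sqrt(sum g**2) decays monotonically; summing over a sliding
        # window keeps the denominator bounded and slows that decay.
        self.history.append(grad ** 2)
        accum = np.sum(list(self.history), axis=0)
        return params - self.lr * grad / (np.sqrt(accum) + self.eps)

# Toy usage: minimize f(w) = ||w||^2, whose gradient is 2w.
opt = WindowedAdaGrad(lr=0.5, window=10)
w = np.ones(3)
for _ in range(100):
    w = opt.step(w, 2 * w)

Because the denominator sums only the last W squared gradients instead of the full history, the effective step size stays bounded away from zero, which is the decay-slowing effect the abstract describes.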
ISBN: (Print) 9783031477201; 9783031477218
This paper investigates different artificial neural network algorithms and their application to data sets with different characteristics. The first part of the paper describes six neural network algorithms on the one hand and, on the other, the characteristics of the data sets as measured by meta-features. The empirical part describes the development of the predictive models: data preparation for modeling, hyperparameter optimization, and the analysis and empirical comparison of the algorithms' performance on the different data sets. The results show differences in performance: the Adam algorithm and its modifications perform better than the AdaGrad algorithm and the basic gradient descent algorithm.
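The abstract describes an empirical comparison rather than a specific method. The following PyTorch sketch shows that kind of benchmark: the same small network trained with SGD, AdaGrad, and Adam under an identical budget. The architecture, synthetic data, and learning rates are illustrative assumptions, not the paper's experimental setup.

# Hedged sketch of an optimizer comparison; everything here (data, model,
# hyperparameters) is a stand-in for the paper's actual setup.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(512, 20)            # synthetic stand-in dataset
y = (X[:, 0] + X[:, 1] > 0).long()  # simple binary labels

def make_model():
    return nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))

optimizers = {
    "SGD":     lambda p: torch.optim.SGD(p, lr=0.1),
    "AdaGrad": lambda p: torch.optim.Adagrad(p, lr=0.1),
    "Adam":    lambda p: torch.optim.Adam(p, lr=0.01),
}

for name, make_opt in optimizers.items():
    model, loss_fn = make_model(), nn.CrossEntropyLoss()
    opt = make_opt(model.parameters())
    for _ in range(200):            # identical training budget for each optimizer
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()
    print(f"{name}: final loss {loss.item():.4f}")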
ISBN: (Print) 9781538685273
To address the slow convergence and long training time of the traditional Denoising Autoencoder (DAE) deep learning model during feature learning, this paper proposes a DAE model augmented with a momentum term and an adaptive learning rate, which is used for text feature learning; at the last layer of the model, softmax is used for classification. Finally, text categorization experiments are carried out with a KNN classifier, the traditional denoising autoencoder, and the improved denoising autoencoder. The results show that after 30 iterations the reconstruction error curve of the improved DAE model lies clearly below that of the traditional DAE model, greatly improving the model's convergence speed, and the final reconstruction error is also lower. Moreover, the overall accuracy of the improved DAE model is 95%, higher than that of the KNN algorithm (88%) and the traditional DAE model (92%). The experiments show the method to be feasible and practical.
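As a rough illustration of the design the abstract describes, the sketch below trains a denoising autoencoder with momentum SGD and a decaying learning-rate schedule (standing in for the paper's adaptive rule, which the abstract does not specify) and attaches a softmax classification head. All layer sizes, the noise level, and hyperparameters are assumptions.

# Minimal sketch of the improved-DAE idea: momentum term + decaying lr,
# softmax head on the learned features. Sizes and values are illustrative.
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    def __init__(self, n_in=784, n_hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(n_hidden, n_in), nn.Sigmoid())

    def forward(self, x, noise_std=0.3):
        corrupted = x + noise_std * torch.randn_like(x)  # denoising corruption
        code = self.encoder(corrupted)
        return self.decoder(code), code

dae = DenoisingAutoencoder()
# Momentum term as in the abstract; ExponentialLR is an assumed stand-in
# for the paper's adaptive learning rate, whose exact rule is not given.
opt = torch.optim.SGD(dae.parameters(), lr=0.1, momentum=0.9)
sched = torch.optim.lr_scheduler.ExponentialLR(opt, gamma=0.95)

X = torch.rand(256, 784)                     # synthetic stand-in for text features
for epoch in range(30):
    opt.zero_grad()
    recon, _ = dae(X)
    loss = nn.functional.mse_loss(recon, X)  # reconstruction error
    loss.backward()
    opt.step()
    sched.step()

# Softmax classification head on the learned features, as at the model's
# final layer in the abstract (10 classes assumed for illustration).
with torch.no_grad():
    feats = dae.encoder(X)
clf = nn.Linear(128, 10)  # produces logits; CrossEntropyLoss applies softmax internally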