As deep learning technologies continue to permeate various sectors, optimization algorithms have become increasingly crucial in neural network training. This paper introduces two adaptive momentum algorithms based on Grünwald–Letnikov and Caputo fractional-order differences, Fractional Order Adagrad (FAdagrad) and Fractional Order Adam (FAdam), which update parameters more flexibly by adjusting momentum information. Starting from the definitions of fractional derivatives, we integrate fractional-order differences with gradient-based algorithms in convolutional neural networks (CNNs). By leveraging Grünwald–Letnikov and Caputo fractional-order differences, these adaptive momentum algorithms offer greater flexibility and thereby accelerate convergence. The resulting nonlinear parameter tuning method for CNNs outperforms traditional integer-order momentum algorithms and the standard Adam algorithm. Experimental results on the BraTS2021 and CIFAR-100 datasets show that the proposed fractional-order optimization algorithms significantly outperform their integer-order counterparts in model optimization: they not only accelerate convergence but also improve the accuracy of image recognition and segmentation.
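For intuition, the sketch below shows one plausible way a Grünwald–Letnikov-weighted window of past gradients can stand in for Adam's exponentially decayed first moment, which is the general idea behind a fractional-order momentum term. It is a minimal illustration only: the class `FAdamSketch`, the helper `gl_coefficients`, the truncation window, and all hyperparameter values are assumptions made for this note and are not the exact FAdam (or FAdagrad) update rules defined in the paper.

```python
import numpy as np

def gl_coefficients(alpha, window):
    """Grünwald–Letnikov weights w_k = (-1)^k * C(alpha, k), via the recursion
    w_0 = 1, w_k = w_{k-1} * (1 - (alpha + 1) / k)."""
    w = np.empty(window)
    w[0] = 1.0
    for k in range(1, window):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    return w

class FAdamSketch:
    """Adam-style update whose first moment is a Grünwald–Letnikov-weighted
    combination of recent gradients (illustrative, not the paper's exact rule)."""

    def __init__(self, alpha=0.5, lr=0.1, beta2=0.999, eps=1e-8, window=10):
        self.w = gl_coefficients(alpha, window)  # fractional-difference weights
        self.grads = []                          # recent gradients, newest first
        self.v = 0.0                             # Adam-style second moment
        self.lr, self.beta2, self.eps = lr, beta2, eps
        self.t = 0

    def step(self, params, grad):
        self.t += 1
        self.grads.insert(0, grad)
        self.grads = self.grads[: len(self.w)]
        # Fractional-order "momentum": GL-weighted sum over the gradient window.
        m = sum(wk * gk for wk, gk in zip(self.w, self.grads))
        # Second-moment estimate and bias correction as in standard Adam.
        self.v = self.beta2 * self.v + (1.0 - self.beta2) * grad ** 2
        v_hat = self.v / (1.0 - self.beta2 ** self.t)
        return params - self.lr * m / (np.sqrt(v_hat) + self.eps)

# Toy usage: drive f(x) = ||x||^2 toward its minimum with the sketched update.
opt = FAdamSketch(alpha=0.5)
x = np.array([3.0, -2.0])
for _ in range(1000):
    x = opt.step(x, 2.0 * x)  # gradient of ||x||^2 is 2x
print(x)                      # x should end up much closer to the origin
```

The fractional order alpha controls how quickly the weights on older gradients decay; alpha = 1 with a window of two recovers an ordinary first-order difference, while fractional values in between give the heavier-tailed memory that the paper exploits for more flexible momentum.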