Convolutional neural networks (CNNs) play an important role in image recognition applications. Fast training of image recognition systems is crucial, because the system must be retrained for each new image class, and training these networks requires lengthy calculations. Engineering efforts therefore focus on obtaining a fast but stable optimisation method. The momentum technique used in backpropagation algorithms acts like the proportional-integral (PI) controller that is widely employed in automatic control systems: it accumulates an integral of past errors and helps the training reach its targets. The proportional + momentum + derivative (promod) method adds the gradient of the update matrices to the training process, building an optimiser analogous to the widely used PI-derivative (PID) controller. The method accelerates movement toward the target accuracy levels by making larger corrections early in training, based on the differences between successive update matrices. In this research, the promod method is tested on image recognition applications and CNNs. The Modified National Institute of Standards and Technology (MNIST) and Fashion-MNIST datasets are used to evaluate performance. Experimental results showed that promod can train CNNs much faster, and consume proportionally less power, than the momentum and stochastic gradient descent (SGD) techniques.
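The abstract does not give the exact update rule, but the PID analogy it describes can be sketched as follows. This is a minimal, hypothetical NumPy illustration, assuming the proportional term is the current gradient, the integral term is a standard momentum accumulator, and the derivative term is the difference between successive gradients; the function name `promod_step` and the gain `kd` are illustrative, not taken from the paper.

```python
import numpy as np

def promod_step(w, grad, state, lr=0.05, momentum=0.5, kd=0.1):
    """One PID-style parameter update (illustrative sketch).

    proportional: the current gradient
    integral:     momentum accumulator over past gradients
    derivative:   difference between the current and previous gradient,
                  which makes larger corrections while gradients are
                  still changing quickly (early in training)
    """
    prev_grad = state.get("prev_grad", np.zeros_like(grad))
    velocity = state.get("velocity", np.zeros_like(grad))

    # Integral term: exponentially weighted sum of past gradients.
    velocity = momentum * velocity + grad
    # Derivative term: change in the gradient since the last step.
    d_term = grad - prev_grad

    w_new = w - lr * (velocity + kd * d_term)

    state["prev_grad"] = grad
    state["velocity"] = velocity
    return w_new, state
```

On a simple quadratic objective such as f(w) = w², this update drives w toward zero; the derivative term contributes most in the first few iterations, when the gradient changes fastest, which matches the behaviour the abstract attributes to promod.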