International Journal on Advanced Science, Engineering and Information Technology, Vol. 1 (2011) No. 2, pages: 178-184, Proceeding of the International Conference on Advanced Science, Engineering and Information Technology (ICASEIT 2011), Bangi, Malaysia, 14-15 January 2011, DOI:10.18517/ijaseit.1.2.38

The Effect of Adaptive Gain and Adaptive Momentum in Improving Training Time of Gradient Descent Back Propagation Algorithm on Classification Problems

Norhamreeza Abdul Hamid, Nazri Mohd. Nawi, Rozaida Ghazali


The back propagation algorithm has been successfully applied to a wide range of practical problems. Since this algorithm uses a gradient descent method, it suffers from some limitations, namely slow convergence and a tendency to become trapped in local minima. The convergence behaviour of the back propagation algorithm depends on the choice of initial weights and biases, network topology, learning rate, momentum, activation function and the value of the gain in the activation function. Previous researchers demonstrated that in the ‘feed forward’ algorithm, the slope of the activation function is directly influenced by a parameter referred to as ‘gain’. This research proposes an algorithm for improving the performance of the current working back propagation algorithm, namely the Gradient Descent Method with Adaptive Gain, by changing the momentum coefficient adaptively for each node. The influence of the adaptive momentum together with adaptive gain on the learning ability of a neural network is analysed. Multilayer feed forward neural networks have been assessed. A physical interpretation of the relationship between the momentum value, the learning rate and the weight values is given. The efficiency of the proposed algorithm compared with the conventional Gradient Descent Method and the current Gradient Descent Method with Adaptive Gain was verified by means of simulation on three benchmark problems. In learning the patterns, the simulation results demonstrate that the proposed algorithm converged faster, with an improvement ratio of nearly 1.8 on the Wisconsin breast cancer data set, 6.6 on the Mushroom problem, and 36% better on the Soybean data set. The results clearly show that the proposed algorithm significantly improves the learning speed of the current gradient descent back propagation algorithm.
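The gain/slope relationship mentioned in the abstract, and the general shape of a momentum-based weight update, can be illustrated with a minimal sketch. This is not the paper's algorithm or notation: the function names, learning rate, and momentum values below are illustrative assumptions, and the paper's actual adaptive update rules for gain and momentum are not reproduced here.

```python
import numpy as np

def sigmoid(x, gain=1.0):
    """Logistic activation with a gain parameter c: f(x) = 1 / (1 + exp(-c * x)).
    The gain directly scales the slope, since f'(x) = c * f(x) * (1 - f(x))."""
    return 1.0 / (1.0 + np.exp(-gain * x))

def sigmoid_slope(x, gain=1.0):
    """Derivative of the gained sigmoid with respect to its input x."""
    s = sigmoid(x, gain)
    return gain * s * (1.0 - s)

# A larger gain steepens the activation, so the error signal propagated
# back through the node is amplified by the same factor.
slope_unit_gain = sigmoid_slope(0.0, gain=1.0)  # 1 * 0.5 * 0.5 = 0.25
slope_high_gain = sigmoid_slope(0.0, gain=4.0)  # 4 * 0.5 * 0.5 = 1.0

def momentum_update(w, grad, prev_delta, lr=0.1, momentum=0.9):
    """One illustrative gradient descent step with a momentum term:
    the new weight change blends the current negative gradient with the
    previous weight change. In the proposed method the momentum
    coefficient would be adapted per node rather than held fixed."""
    delta = -lr * grad + momentum * prev_delta
    return w + delta, delta
```

The sketch only shows why the gain parameter matters for learning speed: because the gain multiplies the activation slope, it scales every gradient flowing through the node, which is what motivates adapting it alongside the momentum coefficient.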


back propagation algorithm; gain; activation function; adaptive momentum
