Second Order Learning Algorithm for Back Propagation Neural Networks

Nazri Mohd Nawi (1), Noorhamreeza Abdul Hamid (1), Noor Azah Samsudin (1), Mohd Amin Mohd Yunus (1), Mohd Firdaus Ab Aziz (1)
(1) Faculty of Computer Science and Information Technology, Universiti Tun Hussein Onn Malaysia, 86400, Johor, Malaysia
How to cite (IJASEIT):
Nawi, Nazri Mohd, et al. “Second Order Learning Algorithm for Back Propagation Neural Networks”. International Journal on Advanced Science, Engineering and Information Technology, vol. 7, no. 4, Aug. 2017, pp. 1162-71, doi:10.18517/ijaseit.7.4.1956.
Training of artificial neural networks (ANN) is normally a time-consuming task because of the iterative search imposed by the implicit nonlinearity of the network behavior. In this work, an improvement to 'batch-mode' offline training methods, whether gradient-based or gradient-free, is proposed. The new procedure computes and improves the search direction along the negative gradient by introducing a 'gain' value for the activation functions and calculating the negative gradient of the error with respect to the weights as well as the 'gain' values when minimizing the error function. The main advantage of this procedure is that it is easy to incorporate into faster optimization algorithms such as the conjugate gradient method and the quasi-Newton method. The performance of the proposed method, implemented within the conjugate gradient and quasi-Newton methods, is demonstrated by comparing simulation results against the neural network toolbox on the chosen benchmarks. The results show that the proposed method considerably improves the convergence rate and significantly speeds up the learning of the general back-propagation algorithm because of its new, more efficient search direction.
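The abstract does not reproduce the update rules, so the following is a minimal sketch of the idea, assuming a one-hidden-layer network whose sigmoid activations f(net) = 1/(1 + exp(-c*net)) carry a trainable gain c per layer, updated by plain gradient descent alongside the weights. All names, the choice of one shared gain per layer, and the learning rate are illustrative assumptions, not the paper's exact formulation; the resulting negative gradient in the joint (weights, gains) space is the kind of search direction that could then be handed to a conjugate gradient or quasi-Newton routine.

    import numpy as np

    def sigmoid(net, gain):
        # Sigmoid with an adjustable gain c: f(net) = 1 / (1 + exp(-c * net))
        return 1.0 / (1.0 + np.exp(-gain * net))

    def train_gain_bp(X, T, n_hidden=5, lr=0.1, epochs=1000, seed=0):
        # Gradient-descent sketch that trains weights and per-layer gains jointly.
        # X: (n_samples, n_inputs) inputs; T: (n_samples, n_outputs) targets.
        rng = np.random.default_rng(seed)
        W1 = rng.normal(scale=0.5, size=(X.shape[1], n_hidden))
        W2 = rng.normal(scale=0.5, size=(n_hidden, T.shape[1]))
        c1, c2 = 1.0, 1.0  # activation gains, updated like weights

        for _ in range(epochs):
            # Forward pass
            net1 = X @ W1
            h = sigmoid(net1, c1)
            net2 = h @ W2
            y = sigmoid(net2, c2)

            # Backward pass for E = 0.5 * sum((y - T)^2):
            # df/dnet = c * f * (1 - f), and df/dc = net * f * (1 - f),
            # so the error gradient is taken w.r.t. gains as well as weights.
            e = y - T
            d2 = e * c2 * y * (1 - y)              # dE/dnet2
            dW2 = h.T @ d2
            dc2 = np.sum(e * net2 * y * (1 - y))   # dE/dc2

            back = d2 @ W2.T                       # dE/dh
            d1 = back * c1 * h * (1 - h)           # dE/dnet1
            dW1 = X.T @ d1
            dc1 = np.sum(back * net1 * h * (1 - h))

            # Steepest-descent step along the negative joint gradient;
            # a CG or quasi-Newton update could replace this line-for-line.
            W1 -= lr * dW1
            W2 -= lr * dW2
            c1 -= lr * dc1
            c2 -= lr * dc2

        return W1, W2, c1, c2

For instance, train_gain_bp(X, T) on the four XOR patterns with 0/1 targets converges under these assumptions; the gains simply scale the effective slope of each layer's sigmoid, which is what reshapes the search direction relative to standard back propagation.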
