Cite Article

An Optimized Back Propagation Learning Algorithm with Adaptive Learning Rate

Choose citation format

BibTeX

@article{IJASEIT2972,
   author = {Nazri Mohd Nawi and Faridah Hamzah and Norhamreeza Abdul Hamid and Muhammad Zubair Rehman and Mohammad Aamir and Azizul Ramli Azhar},
   title = {An Optimized Back Propagation Learning Algorithm with Adaptive Learning Rate},
   journal = {International Journal on Advanced Science, Engineering and Information Technology},
   volume = {7},
   number = {5},
   year = {2017},
   pages = {1693--1700},
   keywords = {Back Propagation; classification; momentum; adaptive learning rate; local minima; gradient descent},
   abstract = {Back Propagation (BP) is a commonly used algorithm that optimizes network performance when training multilayer feed-forward artificial neural networks. However, BP is inherently slow in learning and sometimes gets trapped at local minima. These problems occur mainly because of a constant, non-optimum learning rate (a fixed step size), whose value is set to an initial starting value before training and remains fixed from the input layer to the output layer. This fixed learning rate often leads the BP network towards failure during steepest descent. Therefore, to overcome the limitations of BP, this paper introduces an improvement to back propagation gradient descent with an adaptive learning rate (BPGD-AL) that changes the value of the learning rate locally during the learning process. Simulation results on selected benchmark datasets show that the adaptive learning rate significantly improves the learning efficiency of the Back Propagation algorithm.},
   issn = {2088-5334},
   publisher = {INSIGHT - Indonesian Society for Knowledge and Human Development},
   url = {http://ijaseit.insightsociety.org/index.php?option=com_content&view=article&id=9&Itemid=1&article_id=2972},
   doi = {10.18517/ijaseit.7.5.2972}
}

EndNote

%A Nawi, Nazri Mohd
%A Hamzah, Faridah
%A Hamid, Norhamreeza Abdul
%A Rehman, Muhammad Zubair
%A Aamir, Mohammad
%A Azhar, Azizul Ramli
%D 2017
%T An Optimized Back Propagation Learning Algorithm with Adaptive Learning Rate
%! An Optimized Back Propagation Learning Algorithm with Adaptive Learning Rate
%K Back Propagation; classification; momentum; adaptive learning rate; local minima; gradient descent
%X Back Propagation (BP) is a commonly used algorithm that optimizes network performance when training multilayer feed-forward artificial neural networks. However, BP is inherently slow in learning and sometimes gets trapped at local minima. These problems occur mainly because of a constant, non-optimum learning rate (a fixed step size), whose value is set to an initial starting value before training and remains fixed from the input layer to the output layer. This fixed learning rate often leads the BP network towards failure during steepest descent. Therefore, to overcome the limitations of BP, this paper introduces an improvement to back propagation gradient descent with an adaptive learning rate (BPGD-AL) that changes the value of the learning rate locally during the learning process. Simulation results on selected benchmark datasets show that the adaptive learning rate significantly improves the learning efficiency of the Back Propagation algorithm.
%U http://ijaseit.insightsociety.org/index.php?option=com_content&view=article&id=9&Itemid=1&article_id=2972
%R 10.18517/ijaseit.7.5.2972
%J International Journal on Advanced Science, Engineering and Information Technology
%V 7
%N 5
%@ 2088-5334

IEEE

N. M. Nawi, F. Hamzah, N. A. Hamid, M. Z. Rehman, M. Aamir, and A. R. Azhar, "An Optimized Back Propagation Learning Algorithm with Adaptive Learning Rate," International Journal on Advanced Science, Engineering and Information Technology, vol. 7, no. 5, pp. 1693–1700, 2017. [Online]. Available: http://dx.doi.org/10.18517/ijaseit.7.5.2972.

RefMan/ProCite (RIS)

TY  - JOUR
AU  - Nawi, Nazri Mohd
AU  - Hamzah, Faridah
AU  - Hamid, Norhamreeza Abdul
AU  - Rehman, Muhammad Zubair
AU  - Aamir, Mohammad
AU  - Azhar, Azizul Ramli
PY  - 2017
TI  - An Optimized Back Propagation Learning Algorithm with Adaptive Learning Rate
JF  - International Journal on Advanced Science, Engineering and Information Technology
VL  - 7
IS  - 5
Y2  - 2017
SP  - 1693
EP  - 1700
SN  - 2088-5334
PB  - INSIGHT - Indonesian Society for Knowledge and Human Development
KW  - Back Propagation; classification; momentum; adaptive learning rate; local minima; gradient descent
N2  - Back Propagation (BP) is a commonly used algorithm that optimizes network performance when training multilayer feed-forward artificial neural networks. However, BP is inherently slow in learning and sometimes gets trapped at local minima. These problems occur mainly because of a constant, non-optimum learning rate (a fixed step size), whose value is set to an initial starting value before training and remains fixed from the input layer to the output layer. This fixed learning rate often leads the BP network towards failure during steepest descent. Therefore, to overcome the limitations of BP, this paper introduces an improvement to back propagation gradient descent with an adaptive learning rate (BPGD-AL) that changes the value of the learning rate locally during the learning process. Simulation results on selected benchmark datasets show that the adaptive learning rate significantly improves the learning efficiency of the Back Propagation algorithm.
UR  - http://ijaseit.insightsociety.org/index.php?option=com_content&view=article&id=9&Itemid=1&article_id=2972
DO  - 10.18517/ijaseit.7.5.2972
ER  - 

RefWorks

RT Journal Article
ID 2972
A1 Nawi, Nazri Mohd
A1 Hamzah, Faridah
A1 Hamid, Norhamreeza Abdul
A1 Rehman, Muhammad Zubair
A1 Aamir, Mohammad
A1 Azhar, Azizul Ramli
T1 An Optimized Back Propagation Learning Algorithm with Adaptive Learning Rate
JF International Journal on Advanced Science, Engineering and Information Technology
VO 7
IS 5
YR 2017
SP 1693
OP 1700
SN 2088-5334
PB INSIGHT - Indonesian Society for Knowledge and Human Development
K1 Back Propagation; classification; momentum; adaptive learning rate; local minima; gradient descent
AB Back Propagation (BP) is a commonly used algorithm that optimizes network performance when training multilayer feed-forward artificial neural networks. However, BP is inherently slow in learning and sometimes gets trapped at local minima. These problems occur mainly because of a constant, non-optimum learning rate (a fixed step size), whose value is set to an initial starting value before training and remains fixed from the input layer to the output layer. This fixed learning rate often leads the BP network towards failure during steepest descent. Therefore, to overcome the limitations of BP, this paper introduces an improvement to back propagation gradient descent with an adaptive learning rate (BPGD-AL) that changes the value of the learning rate locally during the learning process. Simulation results on selected benchmark datasets show that the adaptive learning rate significantly improves the learning efficiency of the Back Propagation algorithm.
LK http://ijaseit.insightsociety.org/index.php?option=com_content&view=article&id=9&Itemid=1&article_id=2972
DO 10.18517/ijaseit.7.5.2972