Cite Article

The Effect of Pre-Processing Techniques and Optimal Parameters selection on Back Propagation Neural Networks

BibTeX

@article{IJASEIT2074,
   author = {Nazri M Nawi and Ameer Saleh Hussein and Noor Azah Samsudin and Norhamreeza Abdul Hamid and Mohd Amin Mohd Yunus and Mohd Firdaus Ab Aziz},
   title = {The Effect of Pre-Processing Techniques and Optimal Parameters selection on Back Propagation Neural Networks},
   journal = {International Journal on Advanced Science, Engineering and Information Technology},
   volume = {7},
   number = {3},
   year = {2017},
   pages = {770--777},
   keywords = {Multi-layer perceptron; back propagation; data pre-processing; gradient descent; classification},
   abstract = {The architecture of the Artificial Neural Network (ANN) laid the foundation for a powerful technique in handling problems such as pattern recognition and data analysis. Its data-driven, self-adaptive, and non-linear capabilities suit it to high-speed processing and to learning the solution to a problem from a set of examples. Neural network training has been a dynamic area of research, with the Multi-Layer Perceptron (MLP) trained with Back Propagation (BP) the most widely studied. In this study, a performance analysis of the BP training algorithms gradient descent and gradient descent with momentum, each using the sigmoidal and hyperbolic tangent activation functions and coupled with pre-processing techniques, is executed. The Min-Max, Z-Score, and Decimal Scaling pre-processing techniques are analyzed. Results generated from the simulations reveal that pre-processing the data greatly increases ANN convergence, with Z-Score producing the overall best performance on all datasets.},
   issn = {2088-5334},
   publisher = {INSIGHT - Indonesian Society for Knowledge and Human Development},
   url = {http://ijaseit.insightsociety.org/index.php?option=com_content&view=article&id=9&Itemid=1&article_id=2074},
   doi = {10.18517/ijaseit.7.3.2074}
}

EndNote

%0 Journal Article
%A Nawi, Nazri M
%A Hussein, Ameer Saleh
%A Samsudin, Noor Azah
%A Hamid, Norhamreeza Abdul
%A Mohd Yunus, Mohd Amin
%A Ab Aziz, Mohd Firdaus
%D 2017
%T The Effect of Pre-Processing Techniques and Optimal Parameters selection on Back Propagation Neural Networks
%! The Effect of Pre-Processing Techniques and Optimal Parameters selection on Back Propagation Neural Networks
%K Multi-layer perceptron; back propagation; data pre-processing; gradient descent; classification
%X The architecture of the Artificial Neural Network (ANN) laid the foundation for a powerful technique in handling problems such as pattern recognition and data analysis. Its data-driven, self-adaptive, and non-linear capabilities suit it to high-speed processing and to learning the solution to a problem from a set of examples. Neural network training has been a dynamic area of research, with the Multi-Layer Perceptron (MLP) trained with Back Propagation (BP) the most widely studied. In this study, a performance analysis of the BP training algorithms gradient descent and gradient descent with momentum, each using the sigmoidal and hyperbolic tangent activation functions and coupled with pre-processing techniques, is executed. The Min-Max, Z-Score, and Decimal Scaling pre-processing techniques are analyzed. Results generated from the simulations reveal that pre-processing the data greatly increases ANN convergence, with Z-Score producing the overall best performance on all datasets.
%U http://ijaseit.insightsociety.org/index.php?option=com_content&view=article&id=9&Itemid=1&article_id=2074
%R 10.18517/ijaseit.7.3.2074
%J International Journal on Advanced Science, Engineering and Information Technology
%V 7
%N 3
%@ 2088-5334

IEEE

N. M. Nawi, A. S. Hussein, N. A. Samsudin, N. A. Hamid, M. A. Mohd Yunus, and M. F. Ab Aziz, "The Effect of Pre-Processing Techniques and Optimal Parameters selection on Back Propagation Neural Networks," International Journal on Advanced Science, Engineering and Information Technology, vol. 7, no. 3, pp. 770-777, 2017. [Online]. Available: http://dx.doi.org/10.18517/ijaseit.7.3.2074.

RefMan/ProCite (RIS)

TY  - JOUR
AU  - Nawi, Nazri M
AU  - Hussein, Ameer Saleh
AU  - Samsudin, Noor Azah
AU  - Hamid, Norhamreeza Abdul
AU  - Mohd Yunus, Mohd Amin
AU  - Ab Aziz, Mohd Firdaus
PY  - 2017
TI  - The Effect of Pre-Processing Techniques and Optimal Parameters selection on Back Propagation Neural Networks
JF  - International Journal on Advanced Science, Engineering and Information Technology
VL  - 7
IS  - 3
SP  - 770
EP  - 777
SN  - 2088-5334
PB  - INSIGHT - Indonesian Society for Knowledge and Human Development
KW  - Multi-layer perceptron
KW  - back propagation
KW  - data pre-processing
KW  - gradient descent
KW  - classification
N2  - The architecture of the Artificial Neural Network (ANN) laid the foundation for a powerful technique in handling problems such as pattern recognition and data analysis. Its data-driven, self-adaptive, and non-linear capabilities suit it to high-speed processing and to learning the solution to a problem from a set of examples. Neural network training has been a dynamic area of research, with the Multi-Layer Perceptron (MLP) trained with Back Propagation (BP) the most widely studied. In this study, a performance analysis of the BP training algorithms gradient descent and gradient descent with momentum, each using the sigmoidal and hyperbolic tangent activation functions and coupled with pre-processing techniques, is executed. The Min-Max, Z-Score, and Decimal Scaling pre-processing techniques are analyzed. Results generated from the simulations reveal that pre-processing the data greatly increases ANN convergence, with Z-Score producing the overall best performance on all datasets.
UR  - http://ijaseit.insightsociety.org/index.php?option=com_content&view=article&id=9&Itemid=1&article_id=2074
DO  - 10.18517/ijaseit.7.3.2074
ER  - 

RefWorks

RT Journal Article
ID 2074
A1 Nawi, Nazri M
A1 Hussein, Ameer Saleh
A1 Samsudin, Noor Azah
A1 Hamid, Norhamreeza Abdul
A1 Mohd Yunus, Mohd Amin
A1 Ab Aziz, Mohd Firdaus
T1 The Effect of Pre-Processing Techniques and Optimal Parameters selection on Back Propagation Neural Networks
JF International Journal on Advanced Science, Engineering and Information Technology
VO 7
IS 3
YR 2017
SP 770
OP 777
SN 2088-5334
PB INSIGHT - Indonesian Society for Knowledge and Human Development
K1 Multi-layer perceptron
K1 back propagation
K1 data pre-processing
K1 gradient descent
K1 classification
AB The architecture of the Artificial Neural Network (ANN) laid the foundation for a powerful technique in handling problems such as pattern recognition and data analysis. Its data-driven, self-adaptive, and non-linear capabilities suit it to high-speed processing and to learning the solution to a problem from a set of examples. Neural network training has been a dynamic area of research, with the Multi-Layer Perceptron (MLP) trained with Back Propagation (BP) the most widely studied. In this study, a performance analysis of the BP training algorithms gradient descent and gradient descent with momentum, each using the sigmoidal and hyperbolic tangent activation functions and coupled with pre-processing techniques, is executed. The Min-Max, Z-Score, and Decimal Scaling pre-processing techniques are analyzed. Results generated from the simulations reveal that pre-processing the data greatly increases ANN convergence, with Z-Score producing the overall best performance on all datasets.
LK http://ijaseit.insightsociety.org/index.php?option=com_content&view=article&id=9&Itemid=1&article_id=2074
DO 10.18517/ijaseit.7.3.2074