The Effect of Pre-Processing Techniques and Optimal Parameters selection on Back Propagation Neural Networks

Nazri M Nawi, Ameer Saleh Hussein, Noor Azah Samsudin, Norhamreeza Abdul Hamid, Mohd Amin Mohd Yunus, Mohd Firdaus Ab Aziz
Faculty of Computer Science and Information Technology, Universiti Tun Hussein Onn Malaysia, 86400, Johor, Malaysia
How to cite (IJASEIT) :
Nawi, Nazri M, et al. “The Effect of Pre-Processing Techniques and Optimal Parameters Selection on Back Propagation Neural Networks”. International Journal on Advanced Science, Engineering and Information Technology, vol. 7, no. 3, June 2017, pp. 770-7, doi:10.18517/ijaseit.7.3.2074.
The architecture of the Artificial Neural Network (ANN) laid the foundation for a powerful technique for handling problems such as pattern recognition and data analysis. Its data-driven, self-adaptive, and non-linear capabilities make it suitable for high-speed processing and for learning the solution to a problem from a set of examples. Neural network training remains a dynamic area of research, with the Multi-Layer Perceptron (MLP) trained by Back Propagation (BP) the most widely studied configuration. In this study, a performance analysis is carried out on two BP training algorithms, gradient descent and gradient descent with momentum, each using the sigmoid and hyperbolic tangent activation functions and coupled with pre-processing techniques. The Min-Max, Z-Score, and Decimal Scaling pre-processing techniques are analyzed. Results generated from the simulations reveal that pre-processing the data greatly improves ANN convergence, with Z-Score producing the best overall performance on all datasets.
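The three pre-processing techniques compared in the abstract are standard data-normalization formulas. As a rough illustration (not the authors' code, and the exact variants used in the paper may differ), they can be sketched in Python as follows; the function names and the `[0, 1]` target range for Min-Max are assumptions for the example:

```python
import numpy as np

def min_max(x, new_min=0.0, new_max=1.0):
    """Linearly rescale x into [new_min, new_max]."""
    return (x - x.min()) / (x.max() - x.min()) * (new_max - new_min) + new_min

def z_score(x):
    """Standardize x to zero mean and unit standard deviation."""
    return (x - x.mean()) / x.std()

def decimal_scaling(x):
    """Divide x by 10^j, where j is the smallest integer with max(|x|) / 10^j < 1."""
    j = int(np.floor(np.log10(np.abs(x).max()))) + 1
    return x / (10.0 ** j)

# Example: the same feature vector under each technique
x = np.array([1.0, 5.0, 9.0])
print(min_max(x))          # values rescaled into [0, 1]
print(z_score(x))          # zero-mean, unit-variance values
print(decimal_scaling(x))  # values divided by 10 so all magnitudes are below 1
```

Each transform is typically fitted on the training data and then applied with the same parameters to the test data, so that the network never sees statistics derived from unseen examples.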

This work is licensed under a Creative Commons Attribution 4.0 International License.

Authors who publish with this journal agree to the following terms:

    1. Authors retain copyright and grant the journal right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal.
    2. Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in this journal.
    3. Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See The Effect of Open Access).