International Journal on Advanced Science, Engineering and Information Technology, Vol. 10 (2020) No. 6, pages: 2410-2418, DOI:10.18517/ijaseit.10.6.7189

Calibrating Trip Distribution Neural Network Models with Different Scenarios of Transfer Functions Used in Hidden and Output Layers

Gusri Yaldi, Imelda M. Nur, Apwiddhal


The transfer function processes the summation outputs in the hidden and output nodes. It can generally be categorized as either non-linear or linear; the Sigmoid and Purelin functions are examples of non-linear and linear transfer functions, respectively. It is often noted that there is no standard guideline for transfer function selection, and the Sigmoid (Logsig) function is widely used. However, the transfer function and the training algorithm have a procedural relationship in training the Multilayer Feedforward Neural Network (MLFFNN), a widely used Artificial Neural Network model structure. In the feedforward stage, this function transforms the linear summation output into either a linear (Purelin) or non-linear (Sigmoid) form. In the backpropagation stage, its derivative is used to calculate the magnitude of the changes to the connection weights. Nine MLFFNN scenarios were developed based on different transfer functions used in the hidden and output layers. To make fair comparisons, each scenario had the same initial connection weights. The modelling is conducted at the calibration level only; however, it involves different levels of complexity. The models were calibrated using the Levenberg-Marquardt training algorithm. The results suggest that some calibrations failed and negative estimations occurred when non-linear transfer functions were used in both the hidden and output layers. Purelin was found to be superior to the other transfer functions; however, its weakness is that it can produce negative estimations.
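The contrast between the two transfer function families can be sketched as follows. This is an illustrative example, not the paper's code: it shows the Logsig and Purelin functions together with the derivatives that the backpropagation stage uses when computing weight changes.

```python
import math

def logsig(x):
    """Sigmoid (Logsig): squashes the summation output into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def logsig_deriv(x):
    """Derivative of Logsig, needed by backpropagation weight updates."""
    s = logsig(x)
    return s * (1.0 - s)

def purelin(x):
    """Purelin: passes the linear summation output through unchanged."""
    return x

def purelin_deriv(x):
    """Derivative of Purelin is the constant 1."""
    return 1.0

# Feedforward through one node with a negative summation output:
z = -2.0
print(logsig(z))    # non-linear transform, always in (0, 1)
print(purelin(z))   # linear transform; output can be negative
```

Note that Purelin simply reproduces the summation output, so a negative summation yields a negative estimation, which matches the weakness reported in the abstract; Logsig, by contrast, is bounded to (0, 1).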


Keywords: neural network model; transfer function; model calibration; estimated OD matrices

