International Journal on Advanced Science, Engineering and Information Technology, Vol. 5 (2015) No. 4, pages: 314-322, DOI:10.18517/ijaseit.5.4.542

Camera-Vision Based Oil Content Prediction for Oil Palm (Elaeis Guineensis Jacq) Fresh Fruits Bunch at Various Recording Distances

Dinah Cherie, Sam Herodian, Tineke Mandang, Usman Ahmad

Abstract

In this study, the correlation between the appearance of oil palm fresh fruit bunches (FFB) and their oil content (OC) was explored. FFB samples were recorded from various distances (2, 7, and 10 m) under different lighting spectra (ultraviolet: 280-380 nm; visible: 400-700 nm; infrared: 720-1100 nm) and intensities (600 W and 1000 W lamps). The recorded FFB images were segmented, and their color features were extracted to serve as input variables for modeling the OC of the FFB. Four developed models were selected to perform oil content prediction (OCP) for intact FFBs, chosen on the basis of their validity and accuracy. The models were built with the Multilayer-Perceptron Artificial-Neural-Network (MLP-ANN) method, employing 10 hidden layers and 15 image features as input variables; statistical engineering software was used to create them. Although the number of FFB samples in this study was limited, four models were successfully developed to predict the OC of intact FFBs from the color features of their images: three OCP models for image recording from 10 m under the UV, Vis2, and IR2 lighting configurations, and one for short-range imaging (2 m) under IR2 light. The coefficients of correlation for these models upon validation were 0.816, 0.902, 0.919, and 0.886, respectively, with root-mean-square errors (RMSE) of 1.803, 0.753, 0.607, and 1.104, respectively.
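The modeling pipeline summarized above (color features in, oil content out, trained with an MLP-ANN) can be illustrated with a minimal sketch. The data here are synthetic stand-ins, not the authors' measurements, and the architecture is an assumption: the abstract states "10 hidden layers", but this sketch interprets that as a single hidden layer of 10 neurons, which is the more common configuration for an MLP of this size; learning rate, epochs, and initialization are likewise illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 60 "FFB images", each reduced to 15 color
# features (e.g. channel statistics); targets play the role of measured
# oil content, standardized to zero mean and unit variance.
n_samples, n_features, n_hidden = 60, 15, 10
X = rng.normal(size=(n_samples, n_features))
true_w = rng.normal(size=n_features)
y = X @ true_w + rng.normal(scale=0.3, size=n_samples)
y = (y - y.mean()) / y.std()

# Assumed architecture: one hidden layer of 10 tanh units, linear output.
W1 = rng.normal(scale=0.3, size=(n_features, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=n_hidden)
b2 = 0.0

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

_, pred0 = forward(X)
rmse0 = np.sqrt(np.mean((pred0 - y) ** 2))  # error before training

# Full-batch gradient descent on squared error (backpropagation).
lr = 0.05
for epoch in range(3000):
    h, pred = forward(X)
    err = pred - y
    gW2 = h.T @ err / n_samples
    gb2 = err.mean()
    dh = np.outer(err, W2) * (1.0 - h**2)  # tanh derivative
    gW1 = X.T @ dh / n_samples
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2
    b2 -= lr * gb2
    W1 -= lr * gW1
    b1 -= lr * gb1

# Validation-style metrics reported in the paper: RMSE and the
# coefficient of correlation between predicted and measured OC.
_, pred = forward(X)
rmse = np.sqrt(np.mean((pred - y) ** 2))
r = np.corrcoef(pred, y)[0, 1]
print(f"RMSE = {rmse:.3f}, r = {r:.3f}")
```

In the paper itself, the reported RMSE and correlation values come from held-out validation of models fitted to real FFB images, so a proper reproduction would split the data into training and validation sets rather than evaluating on the training samples as this sketch does.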

Keywords:

FFB, Oil Content Prediction, Recording Distance, Machine Vision, MLP-ANN Methods
