Fruit Identification and Quality Detection by Means of DAG-CNN

Cesar G. Pachón-Suescún, Javier O. Pinzón-Arenas, Robinson Jiménez-Moreno
Department of Mechatronics Engineering, Militar Nueva Granada University, Bogotá D.C., 110111, Colombia
How to cite (IJASEIT):
Pachón-Suescún, Cesar G., et al. "Fruit Identification and Quality Detection by Means of DAG-CNN". International Journal on Advanced Science, Engineering and Information Technology, vol. 10, no. 5, Oct. 2020, pp. 2183-2188, doi:10.18517/ijaseit.10.5.8684.
Quality control systems for food have become an essential research topic, since products must be guaranteed to be in an adequate state for consumption; this calls for automatic, efficient systems that can verify their condition before distribution. This paper presents a deep-learning algorithm for identifying fruits and their condition that is robust to changes in camera focus, capture angle, lighting, and background. Eight types of fruit are chosen, and for each one the system determines both the kind of fruit observed and whether it is in good condition, establishing a total of 16 categories that the network must classify. A convolutional neural network with a DAG structure is proposed to learn the fruits and their condition, and a graphical user interface is designed to acquire the image of a fruit and classify it into one of the categories. An accuracy of 94.43% was obtained on the 1,600 test images, with processing times of approximately 45-55 milliseconds per image. It can therefore be concluded that the proposed deep-learning system can adequately detect fruit types and their condition.
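To make the DAG idea concrete: unlike a plain chain of layers, a DAG-CNN lets the input fan out into parallel branches whose features later merge before classification. The sketch below is purely illustrative (it is not the authors' architecture, weights, or code, and every kernel size, feature count, and function name is an assumption); it shows a minimal DAG forward pass in plain Python, with two parallel convolution branches pooled and concatenated ahead of a 16-way softmax, one output per fruit/condition category.

```python
import math
import random

def conv2d(x, k):
    """'Valid' 2-D cross-correlation on a single-channel image (list of lists)."""
    H, W, kh, kw = len(x), len(x[0]), len(k), len(k[0])
    return [[sum(x[i + a][j + b] * k[a][b] for a in range(kh) for b in range(kw))
             for j in range(W - kw + 1)] for i in range(H - kh + 1)]

def gap_relu(fm):
    """ReLU followed by global average pooling -> one scalar feature."""
    vals = [max(v, 0.0) for row in fm for v in row]
    return sum(vals) / len(vals)

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def dag_forward(img, k3, k5, W_fc, b_fc):
    # The DAG: the input fans out into two parallel conv branches whose
    # pooled features are concatenated (merge node) before the classifier.
    feat = [gap_relu(conv2d(img, k3)), gap_relu(conv2d(img, k5))]
    logits = [sum(w * f for w, f in zip(row, feat)) + b
              for row, b in zip(W_fc, b_fc)]
    return softmax(logits)  # 16 class probabilities

# Random weights and a random "image", just to exercise the graph.
random.seed(0)
rnd = lambda r, c: [[random.gauss(0, 1) for _ in range(c)] for _ in range(r)]
img = rnd(16, 16)
probs = dag_forward(img, rnd(3, 3), rnd(5, 5), rnd(16, 2), [0.0] * 16)
```

The merge node is what distinguishes the topology: both branches see the same input, and the classifier sees both branches, so the layer graph is acyclic but not a simple chain.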



This work is licensed under a Creative Commons Attribution 4.0 International License.
