Convolutional Neural Networks for Herb Identification: Plain Background and Natural Environment

Supawadee Chaivivatrakul (1), Jednipat Moonrinta (2), Suchada Chaiwiwatrakul (3)
(1) Agriculture Faculty, Ubon Ratchathani University, Warinchamrap, Ubon Ratchathani, 34190, Thailand
(2) Asian Institute of Technology, P.O. Box 4, Klong Luang, Pathumthani, 12120, Thailand
(3) Faculty of Humanities and Social Sciences, Ubon Ratchathani Rajabhat University, Ubon Ratchathani 34000, Thailand
How to cite (IJASEIT):
Chaivivatrakul, Supawadee, et al. “Convolutional Neural Networks for Herb Identification: Plain Background and Natural Environment”. International Journal on Advanced Science, Engineering and Information Technology, vol. 12, no. 3, June 2022, pp. 1244-52, doi:10.18517/ijaseit.12.3.15348.
Convolutional neural networks have achieved success in resolving object identification problems. This study contributes a new approach to herb identification for educational and research purposes based on a small dataset of small-sized images. Two self-collected Thai herb datasets, one with plain backgrounds and one with natural environment backgrounds, were used for experimentation. The plain background dataset includes 4,400 images of 11 leaf types, and the natural environment dataset contains 1,620 images of nine leaf types. The images were divided into a training set containing 75% of the images and a separate test set with the remaining 25%. The experiments covered five-fold cross-validation applied to the training set; the InceptionV3, MobileNetV2, ResNet50V2, VGG16, and Xception convolutional neural network models; the RMSprop and Adam optimizers; dropout rates of 0.3, 0.5, and 0.7; and five and ten epochs. Transfer learning was applied using pre-trained weights. Based on the average cross-validation accuracy on both datasets (94.55% on the plain background dataset and 90.37% on the natural environment dataset), the best model was VGG16 with the RMSprop optimizer, a dropout rate of 0.5, and ten training epochs. This model achieved 96.64% and 92.00% accuracy on the plain background training and test sets, and 99.59% and 91.36% on the natural environment training and test sets, respectively. The results show that the method has high potential for practical application in identifying herbs from visual leaf information.
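As a companion to the reported result, the following is a minimal TensorFlow/Keras sketch of the study's best configuration: VGG16 with ImageNet pre-trained weights, a dropout rate of 0.5, the RMSprop optimizer, and ten training epochs. The directory layout, batch size, and input resolution are illustrative assumptions rather than details taken from the paper.

    # Minimal sketch (TensorFlow/Keras) of the study's best configuration:
    # VGG16 with ImageNet weights, dropout 0.5, RMSprop, ten epochs.
    # Directory names, batch size, and input resolution are assumptions.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    NUM_CLASSES = 11           # 11 leaf types in the plain background dataset
    IMAGE_SIZE = (224, 224)    # assumed input resolution

    # Assumed layout: herbs/train/<class>/*.jpg and herbs/test/<class>/*.jpg
    train_ds = tf.keras.utils.image_dataset_from_directory(
        "herbs/train", image_size=IMAGE_SIZE, batch_size=32)
    test_ds = tf.keras.utils.image_dataset_from_directory(
        "herbs/test", image_size=IMAGE_SIZE, batch_size=32)

    # Apply the preprocessing VGG16 was trained with (mean subtraction, BGR).
    preprocess = tf.keras.applications.vgg16.preprocess_input
    train_ds = train_ds.map(lambda x, y: (preprocess(x), y))
    test_ds = test_ds.map(lambda x, y: (preprocess(x), y))

    # Transfer learning: freeze the pre-trained convolutional base.
    base = tf.keras.applications.VGG16(
        include_top=False, weights="imagenet", input_shape=IMAGE_SIZE + (3,))
    base.trainable = False

    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.5),   # best dropout rate found in the study
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

    model.compile(optimizer=tf.keras.optimizers.RMSprop(),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(train_ds, epochs=10)   # best epoch count found in the study
    model.evaluate(test_ds)

For the cross-validation stage, the same model-building code could be wrapped in a loop over five folds of the 75% training split (e.g., with sklearn.model_selection.KFold(n_splits=5)) before the final train/test evaluation; the fold-generation utility named here is an assumption, as the paper does not specify its tooling.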
