International Journal on Advanced Science, Engineering and Information Technology, Vol. 12 (2022) No. 2, pages: 572-579, DOI:10.18517/ijaseit.12.2.13801

Performance Analysis of Deep Learning-based Object Detectors on Raspberry Pi for Detecting Melon Leaf Abnormality

Hanif Rahmat, Sri Wahjuni, Hendra Rahmawan

Abstract

Melon cultivation requires intensive treatment and incurs high maintenance costs. Digital image processing with deep learning can help handle diseases in melon plants efficiently. Deep-learning-based object detection achieves significantly better accuracy than traditional approaches. However, the deep-learning-based approach consumes substantial computational and storage resources. Speed and accuracy therefore become a tradeoff when deploying it on devices with limited computing capabilities, such as the Raspberry Pi. This study comparatively analyzes the performance of deep-learning-based object detection algorithms implemented on such a limited computing device, namely the Raspberry Pi. The detected objects are melon leaves, classified into two categories: abnormal and normal. The experiment was conducted using Faster R-CNN, Single Shot Multibox Detector (SSD), and YOLOv3. The results showed that Faster R-CNN had the highest mAP (~49%), taking ~2.5 seconds per image, but also had the highest resource usage. Since accuracy matters more than time complexity in melon leaf detection, Faster R-CNN can be recommended as the best object detection algorithm to implement on the Raspberry Pi. However, SSD is a fast algorithm with considerable accuracy for real-time detection. In addition to its fast computation time, SSD MobileNetV2 also consumed the least resources. Although YOLOv3 had a significantly better running time (0.5 s), making it the fastest algorithm, its mAP was too low (below 20%). Therefore, YOLOv3 is not recommended for melon leaf abnormality detection, since it allows more detection errors to occur.

Keywords:

Faster R-CNN; melon; object detection; Raspberry Pi; SSD; YOLOv3.
