Mobile Skin Disease Classification using MobileNetV2 and NASNetMobile

Idhawati Hestiningsih (1), Afandi Nur Aziz Thohari (2), Kurnianingsih (3), Nur Diyana Kamarudin (4)
(1) Department of Electrical Engineering, Politeknik Negeri Semarang, Prof. Soedarto SH, Semarang, 50275, Indonesia
(2) Department of Electrical Engineering, Politeknik Negeri Semarang, Prof. Soedarto SH, Semarang, 50275, Indonesia
(3) Department of Electrical Engineering, Politeknik Negeri Semarang, Prof. Soedarto SH, Semarang, 50275, Indonesia
(4) Faculty of Science and Defence Technology, National Defence University of Malaysia, Kem Perdana Sungai Besi, 57000, Malaysia
How to cite (IJASEIT):
Hestiningsih, Idhawati, et al. “Mobile Skin Disease Classification Using MobileNetV2 and NASNetMobile”. International Journal on Advanced Science, Engineering and Information Technology, vol. 13, no. 4, July 2023, pp. 1472-9, doi:10.18517/ijaseit.13.4.18290.
Three skin diseases are common among the people of Indonesia: tinea versicolor, ringworm, and scabies. Many Indonesians cannot distinguish which disease they suffer from because some of these conditions share similar characteristics and patterns. Therefore, this study built an M-Health application that predicts skin diseases using a deep learning model deployed on a smartphone. The main challenge of this study is the limited amount of data, because skin-disease images are personal and require permission from the patient or hospital. Transfer learning is therefore used to overcome this data limitation, using two pre-trained models: MobileNetV2 and NASNetMobile. To obtain a model with high accuracy, the MobileNetV2 and NASNetMobile architectures were modified. The test results show that MobileNetV2 performs best with a learning rate of 0.0005 and the ELU activation function, while NASNetMobile performs best with a learning rate of 0.0001 and the ReLU6 activation function. Testing with images from the smartphone gallery shows that NASNetMobile reaches an accuracy of 91.6%, while MobileNetV2 reaches 88.9%. Real-time testing with the smartphone camera is less accurate than testing with gallery images: 75% with NASNetMobile and 72.2% with MobileNetV2. Accuracy increases when the flashlight is used while capturing the object; with the flashlight on, NASNetMobile accuracy rises to 80.5% and MobileNetV2 accuracy to 77.8%.
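To make the transfer-learning setup concrete, the following is a minimal TensorFlow/Keras sketch. The frozen ImageNet backbones, learning rates, and activation functions follow the settings reported above; the classifier head (a Dense-128 layer with dropout) is a hypothetical stand-in, since the abstract does not detail the authors' exact architectural modifications.

    # Minimal transfer-learning sketch (TensorFlow/Keras).
    # Backbones and hyperparameters follow the abstract; the custom head
    # (Dense-128, Dropout 0.3) is an assumed, illustrative modification.
    import tensorflow as tf

    NUM_CLASSES = 3               # tinea versicolor, ringworm, scabies
    INPUT_SHAPE = (224, 224, 3)   # assumed input size for both backbones

    def build_model(backbone="mobilenetv2"):
        if backbone == "mobilenetv2":
            base = tf.keras.applications.MobileNetV2(
                input_shape=INPUT_SHAPE, include_top=False, weights="imagenet")
            activation, lr = "elu", 5e-4          # best MobileNetV2 settings
        else:
            base = tf.keras.applications.NASNetMobile(
                input_shape=INPUT_SHAPE, include_top=False, weights="imagenet")
            activation, lr = tf.nn.relu6, 1e-4    # best NASNetMobile settings
        base.trainable = False                    # keep pre-trained weights fixed

        # Replace the original classifier with a small head for 3 classes.
        x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
        x = tf.keras.layers.Dense(128, activation=activation)(x)
        x = tf.keras.layers.Dropout(0.3)(x)       # regularization against overfitting
        out = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)

        model = tf.keras.Model(base.input, out)
        model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr),
                      loss="categorical_crossentropy",
                      metrics=["accuracy"])
        return model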

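Deployment on the smartphone presumably goes through TensorFlow Lite, the usual route for running Keras models on Android; the conversion below is a hedged sketch, as the abstract does not state the authors' export settings, and the optimization flag is an assumption rather than a reported choice.

    # Hedged sketch: export the trained Keras model to TensorFlow Lite
    # for on-device inference. The optimization flag is assumed, not
    # taken from the paper.
    model = build_model("nasnetmobile")
    # ... train the model on the skin-disease dataset here ...
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]  # shrink model for mobile
    with open("skin_classifier.tflite", "wb") as f:
        f.write(converter.convert())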
