Fine-Tuning DenseNet121 to Classify NTT Weaving Motifs on a Mobile Application

Yohanes Eudes Hugo Maur (1), Albertus Joko Santoso (2), Pranowo (3)
(1, 2, 3) Magister Informatika, Universitas Atma Jaya Yogyakarta, Indonesia
How to cite (IJASEIT):
Maur, Yohanes Eudes Hugo, et al. “Fine Tuned of DenseNET121 to Classify NTT Weaving Motifs on Mobile Application”. International Journal on Advanced Science, Engineering and Information Technology, vol. 13, no. 6, Dec. 2023, pp. 2156-63, doi:10.18517/ijaseit.13.6.18314.
The problem of classifying woven fabric motifs through pattern recognition can be addressed using Convolutional Neural Networks (CNNs). Existing CNN architectures such as VGG, ResNet, MobileNet, and DenseNet offer diverse propagation methods. These architectures, trained on datasets such as ImageNet, have demonstrated competence on large-scale classification tasks. A CNN trained on the ImageNet dataset, hereinafter referred to as a pre-trained model, can be used to classify the woven fabric motifs of Nusa Tenggara Timur (NTT). This involves retraining the model with a new output layer and a new dataset, a method known as transfer learning. In addition to transfer learning, this research employs fine-tuning, which entails retraining several of the classification layers. The pre-trained model used in this research is DenseNet121, chosen because it requires little storage space and offers good classification performance, so it can be embedded in smartphones. The results of this study indicate that, of the three pre-trained models tested (DenseNet121, MobileNetV2, and ResNet50V2), DenseNet121 achieved the highest accuracy and the smallest loss: 92.58% accuracy and a loss of 29.62%. Tests on mobile devices further show that, on 130 test images, the model achieves an accuracy of 99.23%. Overall, the NTT woven fabric motif classification model embedded in mobile devices can serve as an aid for anyone who wants to learn about NTT woven fabric motifs.
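The article does not include code; the following is a minimal sketch of the transfer-learning and fine-tuning procedure described above, written in TensorFlow/Keras (which ships DenseNet121 with ImageNet weights). The number of motif classes, the input size, the optimizer settings, and the choice to unfreeze only the last dense block are illustrative assumptions, not the authors' exact configuration.

```python
import tensorflow as tf
from tensorflow import keras

NUM_CLASSES = 10  # hypothetical number of NTT motif classes

# Transfer learning: load DenseNet121 pre-trained on ImageNet,
# drop its 1000-class head, and freeze the convolutional base.
base = keras.applications.DenseNet121(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

# Attach a new output layer for the motif classes.
model = keras.Sequential([
    base,
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=keras.optimizers.Adam(1e-3),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)

# Fine-tuning: unfreeze only the last dense block ("conv5" layers,
# an illustrative choice) and retrain with a much lower learning rate.
base.trainable = True
for layer in base.layers:
    layer.trainable = layer.name.startswith("conv5")
model.compile(optimizer=keras.optimizers.Adam(1e-5),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```

For on-device use, a Keras model of this kind would typically be converted to TensorFlow Lite before being embedded in a smartphone app; whether the authors used this exact path is an assumption, and the output file name below is hypothetical.

```python
# Convert the fine-tuned model for on-device inference.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # optional size/latency optimization
with open("ntt_motif_classifier.tflite", "wb") as f:
    f.write(converter.convert())
```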

This work is licensed under a Creative Commons Attribution 4.0 International License.
