Performance Analysis of ResNet50 and Inception-V3 Image Classification for Defect Detection in 3D Food Printing

Cholid Mawardi (1), Agus Buono (2), Karlisa Priandana (3), Herianto (4)
(1, 2, 3) Department of Computer Science, Faculty of Mathematics and Natural Sciences, Institut Pertanian Bogor, Bogor, 16680, Indonesia
(4) Department of Mechanical & Industrial Engineering, Gadjah Mada University, Yogyakarta, 55281, Indonesia
How to cite (IJASEIT):
Mawardi, Cholid, et al. “Performance Analysis of ResNet50 and Inception-V3 Image Classification for Defect Detection in 3D Food Printing”. International Journal on Advanced Science, Engineering and Information Technology, vol. 14, no. 2, Apr. 2024, pp. 798-804, doi:10.18517/ijaseit.14.2.19863.
3D food printing is a promising direction for the food industry. The technology points toward a revolution in a sector where food processing has so far remained largely conventional, and it can make food production considerably more effective and efficient. However, the 3D food printing process has a practical problem: when object defects occur during printing, material is wasted. This research proposes deep learning-based defect detection that classifies printed objects as defective or non-defective, using a convolutional neural network (CNN) as the core method. The dataset consists of images captured during the printing process; these images are used for training and validation to assess the effectiveness of the proposed model. Two pre-trained CNN architectures, Inception-V3 and ResNet50, are used to classify images of prints made with higher-viscosity material. The models, previously tested on earlier datasets, are applied to the 3D food printing dataset with an 85%/15% split into training and validation data. After testing the two proposed scenarios, scenario 1 (Inception-V3) achieves an accuracy of 84.62% and scenario 2 (ResNet50) achieves 93.83%. The results also demonstrate that combining a CNN with data augmentation improves accuracy, loss, and classification time.
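For readers who want to reproduce the general transfer-learning workflow described in the abstract, the sketch below shows one possible Keras setup: an ImageNet-pretrained ResNet50 backbone (Inception-V3 can be swapped in the same way) with a new binary classification head, trained on augmented print images with an 85%/15% training/validation split. The dataset directory layout, input size, optimizer, and epoch count are illustrative assumptions, not values taken from the paper.

# Minimal transfer-learning sketch of the two scenarios described above.
# Assumptions (not from the paper): TensorFlow 2.x, images stored as
# data/defect/*.jpg and data/non_defect/*.jpg, illustrative hyperparameters.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.preprocessing.image import ImageDataGenerator

IMG_SIZE = (224, 224)   # ResNet50 default input; use (299, 299) for Inception-V3
BATCH_SIZE = 16
DATA_DIR = "data"       # hypothetical dataset directory

# Augmented training generator and rescale-only validation generator,
# both using the 85%/15% split mentioned in the abstract.
train_datagen = ImageDataGenerator(
    rescale=1.0 / 255, rotation_range=15, width_shift_range=0.1,
    height_shift_range=0.1, horizontal_flip=True, validation_split=0.15)
val_datagen = ImageDataGenerator(rescale=1.0 / 255, validation_split=0.15)

train_gen = train_datagen.flow_from_directory(
    DATA_DIR, target_size=IMG_SIZE, batch_size=BATCH_SIZE,
    class_mode="binary", subset="training", seed=42)
val_gen = val_datagen.flow_from_directory(
    DATA_DIR, target_size=IMG_SIZE, batch_size=BATCH_SIZE,
    class_mode="binary", subset="validation", seed=42)

# ImageNet-pretrained backbone with its classification head removed;
# swap in tf.keras.applications.InceptionV3 for the other scenario.
backbone = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,))
backbone.trainable = False  # freeze convolutional features; only the new head trains

model = models.Sequential([
    backbone,
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),  # defect vs. non-defect
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_gen, validation_data=val_gen, epochs=10)

Unfreezing and fine-tuning the upper layers of the backbone after this initial stage is a common refinement; the frozen-backbone setup above is only meant to illustrate the overall pipeline of pre-trained CNN, data augmentation, and an 85/15 split.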
