A Study on a Webtoon Recommendation System With Autoencoder and Domain-Specific Fine-Tuned BERT With Synthetic Data Considering Art Style and Synopsis