Two-Stream Network for Korean Natural Language Understanding

Hwang Kim (1), Jihyeon Lee (2), Ho-Young Kwak (3)
(1) Department of Computer Engineering, Graduate School, Jeju National University, 102 Jejudaehakro, Jeju, 63243, Republic of Korea
(2) Department of Computer Engineering, Jeju National University, 102 Jejudaehakro, Jeju, 63243, Republic of Korea
(3) Department of Computer Engineering, Jeju National University, 102 Jejudaehakro, Jeju, 63243, Republic of Korea
How to cite (IJASEIT) :
Kim, Hwang, et al. “Two-Stream Network for Korean Natural Language Understanding”. International Journal on Advanced Science, Engineering and Information Technology, vol. 14, no. 1, Feb. 2024, pp. 224-30, doi:10.18517/ijaseit.14.1.19046.
This study introduces a dual-stream network architecture tailored for Korean Natural Language Understanding (NLU), which enhances comprehension by processing the syntactic and semantic aspects of a sentence in separate streams. The hypothesis is that this bifurcation leads to a more nuanced and accurate understanding of Korean, a language whose syntactic and semantic challenges are often not fully addressed by generalized models. The architecture is validated on the Korean Natural Language Inference (koNLI) and Korean Semantic Textual Similarity (koSTS) datasets; by evaluating the model's performance on these benchmarks, the study measures how accurately it parses the syntactic structure and interprets the semantic meaning of Korean text. Preliminary results are promising: the dual-stream approach significantly improves the model's ability to understand and interpret complex Korean sentences, an improvement that matters most for language-specific NLU applications. The methodology and findings could pave the way for more sophisticated Korean-language NLU applications, such as advanced sentiment analysis, nuanced text summarization, and more effective conversational AI systems. More broadly, this research underscores the importance and efficacy of developing language-specific models, moving beyond the one-size-fits-all approach of general language models, and demonstrates the potential of specialized approaches in language understanding technologies.
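The dual-stream idea described in the abstract can be sketched minimally as follows. This is an illustrative toy, not the paper's implementation: the single-layer encoders, all dimensions, and the fusion-by-concatenation choice are assumptions introduced here purely to show how two parallel streams over the same input can feed one task head (e.g., a 3-way NLI classifier for koNLI).

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, W):
    """Stand-in, single-layer encoder: mean-pool a nonlinear projection.
    In the actual architecture each stream would be a full encoder network."""
    return np.tanh(x @ W).mean(axis=0)  # shape: (hidden,)

seq_len, emb_dim, hidden, n_classes = 8, 16, 32, 3
tokens = rng.normal(size=(seq_len, emb_dim))   # embedded input sentence (pair)

W_syn = rng.normal(size=(emb_dim, hidden))     # syntactic-stream weights
W_sem = rng.normal(size=(emb_dim, hidden))     # semantic-stream weights
W_out = rng.normal(size=(2 * hidden, n_classes))

# Each stream reads the same input but learns different features;
# their outputs are fused (here: concatenated) before the task head.
fused = np.concatenate([encoder(tokens, W_syn), encoder(tokens, W_sem)])
logits = fused @ W_out
probs = np.exp(logits - logits.max())
probs /= probs.sum()                           # softmax over 3 NLI labels
```

The essential design point is that the two streams share the input but not the parameters, so gradient signal can specialize one stream toward syntax and the other toward semantics before fusion.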



This work is licensed under a Creative Commons Attribution 4.0 International License.
