Comparative Analysis of Autoregressive and Diffusion-Based Language Models for Complex Molecular Structure Generation

Yihyun Kim (1), Hyeri Yun (2), Jaechoon Jo (3)
(1) Department of Biomedical Informatics, Korea University College of Medicine, Seoul, Republic of Korea
(2) Department of Biomedical Informatics, Jeju National University, Jeju, Republic of Korea
(3) Department of Computer Education, Jeju National University, Jeju, Republic of Korea
How to cite (IJASEIT):
Y. Kim, H. Yun, and J. Jo, "Comparative Analysis of Autoregressive and Diffusion-Based Language Models for Complex Molecular Structure Generation", Int. J. Adv. Sci. Eng. Inf. Technol., vol. 15, no. 3, pp. 977–982, Jun. 2025.
Recent advances in biomedical informatics have opened new avenues for integrating chemical structure data with natural language, enabling innovative approaches to de novo molecular design. In this study, we compare two paradigms for text-guided molecule generation: an autoregressive model, MolT5, built on the T5 framework with self-supervised span-corruption pre-training followed by fine-tuning for both molecule captioning and molecule generation, and a diffusion-based model, TGM-DLM, which maps textual descriptions into latent embeddings and iteratively refines SMILES sequences through a denoising process. Evaluated on the ChEBI-20 dataset, partitioned into simple and complex molecular structures, our analysis using BLEU, exact match, Levenshtein distance, validity, and MACCS, RDK, and Morgan fingerprint similarity reveals that TGM-DLM better captures the overall architecture of complex molecules, whereas MolT5 achieves higher rates of chemical validity. By examining these complementary approaches, our work provides a nuanced assessment of the trade-off between structural fidelity and chemical correctness in molecular generation. The diffusion-based TGM-DLM model shows particular promise for intricate molecular configurations, as substantiated by quantitative improvements across multiple evaluation criteria. Conversely, the autoregressive MolT5 model's robustness in preserving chemical integrity underscores its potential for applications where molecular reliability is paramount. These comparative insights not only deepen our understanding of model architectures for multi-modal molecular design but also pave the way for future innovations in computational chemistry and drug discovery.
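
To make the autoregressive pipeline concrete, the following minimal sketch decodes a SMILES string from a textual description with a T5-style model using the Hugging Face Transformers library. The checkpoint identifier, input text, and decoding settings are illustrative assumptions, not the exact configuration used in this study.

    # Sketch: text-conditioned SMILES generation with a T5-style model (assumed setup).
    from transformers import T5ForConditionalGeneration, T5Tokenizer

    model_name = "laituan245/molt5-large-caption2smiles"  # assumed MolT5 checkpoint id
    tokenizer = T5Tokenizer.from_pretrained(model_name)
    model = T5ForConditionalGeneration.from_pretrained(model_name)

    description = "The molecule is an aromatic carboxylic acid bearing a hydroxy substituent."
    inputs = tokenizer(description, return_tensors="pt")
    output_ids = model.generate(**inputs, num_beams=5, max_length=256)  # beam-search decoding
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))  # generated SMILES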
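
Similarly, the structural and validity metrics named above can be outlined with RDKit. The sketch below checks whether a generated SMILES parses, computes its edit distance to the reference, and compares MACCS, RDK, and Morgan fingerprints by Tanimoto similarity; it is a simplified illustration rather than the authors' evaluation code, and details such as SMILES canonicalization may differ.

    # Sketch: per-pair evaluation metrics (validity, Levenshtein, fingerprint similarity).
    from rdkit import Chem, DataStructs
    from rdkit.Chem import AllChem, MACCSkeys

    def levenshtein(a, b):
        # Plain dynamic-programming edit distance between two strings.
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            curr = [i]
            for j, cb in enumerate(b, 1):
                curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
            prev = curr
        return prev[-1]

    def evaluate_pair(pred_smiles, ref_smiles):
        pred = Chem.MolFromSmiles(pred_smiles)  # returns None for invalid SMILES
        ref = Chem.MolFromSmiles(ref_smiles)
        metrics = {
            "valid": pred is not None,
            "exact_match": pred_smiles == ref_smiles,  # string-level; canonicalization omitted
            "levenshtein": levenshtein(pred_smiles, ref_smiles),
        }
        if pred is not None and ref is not None:
            metrics["maccs"] = DataStructs.TanimotoSimilarity(
                MACCSkeys.GenMACCSKeys(pred), MACCSkeys.GenMACCSKeys(ref))
            metrics["rdk"] = DataStructs.TanimotoSimilarity(
                Chem.RDKFingerprint(pred), Chem.RDKFingerprint(ref))
            metrics["morgan"] = DataStructs.TanimotoSimilarity(
                AllChem.GetMorganFingerprintAsBitVect(pred, 2, nBits=2048),
                AllChem.GetMorganFingerprintAsBitVect(ref, 2, nBits=2048))
        return metrics

    print(evaluate_pair("CCO", "CC(=O)O"))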


This work is licensed under a Creative Commons Attribution 4.0 International License.
