AI Usage in Manuscript Preparation Policy

Introduction

The emergence of generative artificial intelligence (AI) tools in research and publishing presents new opportunities—and significant ethical considerations. The International Journal on Advanced Science, Engineering and Information Technology (IJASEIT) supports innovation in manuscript preparation but prioritizes transparency, accountability, and human authorship. This policy sets clear expectations for the ethical use of AI in authorship, editing, peer review, and publication, in alignment with the COPE Core Practices, the Principles of Transparency and Best Practice in Scholarly Publishing, and evolving best practices in scholarly communication.

Description

AI tools (e.g., ChatGPT, Grammarly, DeepL, Copilot, Claude) are increasingly used to generate or refine text, summarize literature, check grammar, translate language, or write code. While these technologies can enhance productivity, their use raises concerns about:

  • Authorship accountability

  • Data integrity and source citation

  • Plagiarism and originality

  • Bias and factual accuracy

  • Undisclosed automation

This policy applies to all contributors to IJASEIT—authors, reviewers, and editors—and outlines permissible use, mandatory disclosures, and prohibited practices.

Policy

1. Permissible Uses of AI by Authors

Authors may use AI tools for assistance, not authorship. Permissible uses include:

  • Grammar and language correction (e.g., Grammarly, Microsoft Editor)

  • Spell-checking and formatting

  • Code debugging or support (when explicitly disclosed)

  • Translation or rephrasing, with post-editing by the author

  • Drafting queries or summarizing literature (as a starting point only)

Authors must review, verify, and take full responsibility for all AI-generated content before submission.

2. Prohibited Uses of AI

Authors must not:

  • List an AI tool as a co-author

  • Delegate authorship tasks (e.g., formulating arguments, writing conclusions, interpreting data) to AI without full review and rewriting

  • Use AI tools to generate entire sections (e.g., abstract, methods, results, discussion) without verification

  • Submit AI-generated images, tables, or figures without full description and justification

  • Use AI to fabricate, manipulate, or hallucinate references, data, or results

Manuscripts that show evidence of undisclosed or irresponsible AI use may be rejected or retracted, with further action taken in accordance with COPE guidance.

3. Mandatory Disclosure Requirements

Authors must include a clear AI usage disclosure in the manuscript’s acknowledgments or a dedicated “AI Usage Statement” section. This must state:

  • The name and version of the AI tool(s) used

  • A brief description of how the tool(s) were used

  • Confirmation that authors verified and are accountable for the content

Example statement:
“ChatGPT (OpenAI, Version GPT-4) was used to improve grammar and sentence structure in the Introduction and Discussion sections. The authors reviewed and are fully responsible for the final content.”

If no AI was used, authors must state:
“No generative AI tools were used in the preparation of this manuscript.”

4. Reviewers and Editors

Reviewers and editors must adhere to the following rules:

  • Do not use AI tools to generate or summarize peer review reports, even partially.

  • Do not input unpublished manuscript content into AI tools due to confidentiality and data protection concerns.

  • Use of AI for grammar correction or language suggestions is allowed only on publicly accessible material (e.g., abstracts) and must not involve uploading full manuscripts.

  • All evaluations must be the result of independent human judgment.

Violations of reviewer confidentiality or inappropriate use of AI during peer review may result in disqualification and reporting to relevant academic bodies.

5. Accountability and Integrity

  • Authors are responsible for ensuring the integrity of all manuscript content, regardless of AI involvement.

  • Editors will verify AI disclosures and may request clarification or revision before acceptance.

  • Reviewers are expected to report suspected undisclosed or inappropriate use of AI in manuscripts, including fabricated references or unnatural phrasing.

6. Detection and Verification

IJASEIT may use tools or manual checks to:

  • Detect AI-generated content or fabricated references

  • Assess consistency and accuracy in submitted work

  • Evaluate potential ethical breaches

Authors found to have misused AI or failed to disclose its use appropriately may face manuscript rejection, publication retraction, and notification to their institution.

7. Education and Support

IJASEIT encourages ethical AI use by:

  • Providing guidance and examples on acceptable AI assistance

  • Offering editorial advice for disclosure practices

  • Monitoring developments in AI ethics for updates to this policy

8. Policy Review

Given the evolving nature of AI technologies, this policy will be reviewed and updated regularly to reflect new developments, risks, and ethical standards in scholarly publishing.