ABOUT GENIUS
Statement on the Use of Generative Artificial Intelligence (GenAI)
GENIUS JOURNAL: General Nursing Science Journal (eISSN: 2723-7729)
1. Scope and Purpose
This statement establishes the ethical framework for the use of Generative Artificial Intelligence (GenAI) and Large Language Models (LLMs) in the authorship, submission, peer review, and editorial processes of GENIUS JOURNAL: General Nursing Science Journal.
This policy is guided by the publication ethics and core practices of the Committee on Publication Ethics (COPE) and Elsevier, reflecting our commitment to integrity, transparency, and accountability in scientific publishing.
2. Definition
Generative Artificial Intelligence (GenAI) refers to machine learning models capable of producing text, images, code, or other creative content in response to user prompts. Examples include text-based models such as ChatGPT, GPT-4, Claude, Gemini (Bard), LLaMA, and Mistral; image-generation tools such as DALL·E, Midjourney, and Stable Diffusion; and productivity-integrated AI systems like GrammarlyGO, Microsoft Copilot, Notion AI, and Jasper.
Large Language Models (LLMs) are a subset of GenAI trained on large-scale textual data to generate and interpret natural language; they are increasingly integrated into academic and productivity workflows.
3. Principles and Standards
3.1 Ethical Use in Accordance with COPE and Elsevier Guidelines
All parties—authors, reviewers, and editors—must ensure that AI use complies with COPE and Elsevier publication ethics, particularly regarding authorship, transparency, data integrity, and peer review.
GenAI may support minor, mechanical tasks (e.g., grammar correction or summarization), but it must not replace human intellectual contributions, nor be used to fabricate, falsify, or misrepresent data or content.
3.2 Transparency and Disclosure
Authors are required to clearly disclose any use of AI tools in their manuscripts. The disclosure must specify the name and version of the tool or model used (e.g., ChatGPT (GPT-4) by OpenAI, Claude 3 by Anthropic, Microsoft Copilot) and the specific purpose for which it was used (e.g., improving readability, summarizing, or formatting).
Failure to disclose the use of AI constitutes a breach of academic integrity.
3.3 Authorship and Accountability
GenAI tools cannot be credited as authors since they do not meet authorship criteria defined by COPE and Elsevier, which require:
- Substantial contribution to the conception or design of the work,
- Involvement in drafting or critically revising the manuscript, and
- Responsibility for the accuracy and integrity of the published work.
Human authors retain full responsibility for the content, including any portions generated or refined with AI assistance.
3.4 Peer Review Integrity
Reviewers and editors may use AI tools for non-decisive support tasks (such as summarizing manuscripts or checking language clarity), but such use must be disclosed to the editorial board.
All evaluative judgments, critical analyses, and final recommendations must be made solely by humans.
AI must not be used to generate confidential peer review reports or replace the reviewer’s own analytical commentary.
4. Acceptable Uses of GenAI
With proper disclosure, the following uses of GenAI are acceptable:
- Language and Style Enhancement: Improving grammar, fluency, and clarity (e.g., ChatGPT, GrammarlyGO).
- Data Visualization: Creating figures, graphs, or tables based on author-provided data that have been verified for accuracy (e.g., DALL·E, Excel Copilot).
- Formatting Support: Assisting with citation formatting, layout structuring, or summarizing references.
- Conceptual Brainstorming: Supporting initial idea development or outlining, provided that all analyses, interpretations, and conclusions are created independently by the authors.
5. Prohibited Uses of GenAI
The following uses of GenAI are strictly prohibited and constitute ethical violations:
- Fabrication or Manipulation: Producing false or fictitious data, references, or findings using AI tools.
- Plagiarism: Using AI-generated content without acknowledgment or presenting it as original work.
- Undisclosed Use: Failing to declare AI assistance in the manuscript.
- Misrepresentation: Claiming AI-generated output as a product of human reasoning or intellectual effort.
6. Required Disclosure Statement
All manuscripts must include a Generative AI Use Disclosure Statement, typically placed in the Acknowledgments or Methods section. This statement must include:
- The name(s) and version(s) of any GenAI or LLM tools used,
- The purpose and stage of research where they were applied, and
- A confirmation that all intellectual content, including analysis and interpretation, was created by the authors.
Example statement:
“Generative AI tools, including ChatGPT (GPT-4, OpenAI) and GrammarlyGO, were used to enhance grammar and language clarity during manuscript preparation. All ideas, analyses, and interpretations were conducted solely by the authors.”
7. Editorial Oversight and Compliance
Editors will review all AI-use disclosures as part of the manuscript evaluation process.
If undisclosed AI use or suspected AI-generated content is identified, the manuscript may undergo additional review, plagiarism screening, or a request for author clarification.
GENIUS JOURNAL: General Nursing Science Journal reserves the right to contact the authors’ affiliated institutions in cases of serious ethical violations.
8. Consequences of Misuse
In line with COPE and Elsevier guidelines, violations of this policy may result in:
- Rejection of the manuscript during peer review,
- Retraction of the article after publication,
- Notification to the authors’ institution or funding body, and
- Suspension or prohibition of future submissions in severe or repeated cases.
9. References
- Elsevier. (2023). Generative AI Policies for Journals. Retrieved from https://www.elsevier.com/about/policies-and-standards/generative-ai-policies-for-journals
- Committee on Publication Ethics (COPE). (2023). Position Statement on Authorship and AI Tools. Retrieved from https://publicationethics.org/guidance/cope-position/authorship-and-ai-tools
- Committee on Publication Ethics (COPE). (2024). Discussion Document on AI and Peer Review. Retrieved from https://publicationethics.org/news/cope-publishes-guidance-on-ai-in-peer-review
- Committee on Publication Ethics (COPE). (2023). Discussion Paper: Ethical Considerations in the Use of Generative AI in Publishing. Retrieved from https://publicationethics.org/topic-discussions/artificial-intelligence-ai-and-fake-papers