Disclosing the Use of Generative AI: Best Practices for Authors in Manuscript Preparation

The rapid proliferation of generative and other AI-based tools in research writing has created an urgent need for transparency and accountability. Leading scientific journals such as Nature and organizations such as the Committee on Publication Ethics (COPE) have emphasized the importance of documenting AI tool usage in research. It has become imperative for authors and publishers to adopt best practices for disclosing the use of these tools in manuscript preparation. Such practices not only enhance the transparency and reproducibility of research but also ensure that ethical considerations are adequately addressed.

The transparency of methods, data sources, and limitations is not just an academic exercise but a moral and scientific obligation. It ensures the integrity of research findings, facilitates reproducibility, and safeguards against unintended consequences. The responsible development and deployment of AI technologies hinge on the willingness of authors to share their insights, methodologies, and ethical considerations. In this article, we examine the importance of disclosing the use of generative and other AI tools in manuscript preparation and explore essential best practices for authors, offering guidance on how to navigate the landscape of AI disclosure.

Why Disclosing the Use of Generative and Other AI Tools Matters

Disclosing the AI tools used in manuscript preparation matters for several reasons:

1. Transparency and Reproducibility: Transparent disclosure of AI tools is crucial for scientific research, enabling replication and verification. It allows for building upon prior work, refining methodologies, and potentially uncovering errors or biases.

2. Peer Review and Evaluation: Openly disclosing AI tools assists reviewers in assessing the validity of research, including the suitability of AI models, data sources, and methodologies, thereby safeguarding research quality.

3. Ethical Considerations: Manuscript disclosure addresses AI’s ethical implications, like privacy, fairness, bias, and societal impacts, promoting responsible AI development.

4. Community Building: Research is a collaborative effort, and the sharing of knowledge and resources is crucial for the growth of any scientific discipline. Transparent disclosure fosters a sense of research community, encouraging collaboration and speeding up innovation.

5. Trust and Credibility: Transparent disclosure of generative and other AI tool usage enhances research and researcher credibility, instilling trust among peers, the public, and stakeholders.

6. Preventing Misuse: AI technologies can be powerful tools, but they can also be misused. Mandatory disclosure deters unethical AI applications, making it harder for malicious users to exploit AI technology.

Disclosing AI Tools in Research Articles

There is no doubt that disclosing the use of AI tools in manuscript preparation is crucial to ensuring transparency, replicability, and responsible research; however, how and where to disclose this information in research articles has been a subject of debate among publishers and researchers. This debate stems from the need to strike a balance between providing comprehensive information for transparency and assigning credit fairly.

Why Bots Cannot Be Authors

The ethical stance against designating LLMs and related AI tools as authors in research manuscripts is grounded in the principles of responsibility, accountability, transparency, and the understanding of AI’s role as a tool in the research process. Authorship carries with it a responsibility to stand behind the research, take accountability for its content, and address any issues or concerns raised by readers, reviewers, or the wider research community. AI tools, being non-legal entities, cannot fulfill this responsibility as they lack the capacity for moral judgment and accountability.

“An attribution of authorship carries accountability for the work, which cannot be effectively applied to LLMs”.
(Magdalena Skipper, editor-in-chief of Nature)

This view aligns with the broader ethical framework of research integrity and is supported by organizations like COPE, which emphasize the importance of upholding these principles in scholarly publishing.

“AI tools cannot meet the requirements for authorship as they cannot take responsibility for the submitted work. As non-legal entities, they cannot assert the presence or absence of conflicts of interest nor manage copyright and license agreements”
(COPE Position Statement, 2023: para. 2).

Crediting AI Tools in the Acknowledgments Section

Recognizing LLMs or other AI tools in the acknowledgments section of a research manuscript is a practical way to credit the contributions of these tools without conferring authorship status. This practice aligns with widely accepted guidelines, including those of the International Committee of Medical Journal Editors (ICMJE), which state that contributors whose roles do not meet authorship criteria may be acknowledged individually or collectively. The approach has garnered support from several reputable publishers. For example, Magdalena Skipper, the editor-in-chief of Nature, has stated that researchers using AI tools while preparing their articles “should document their use in the methods or acknowledgments sections”. Sabina Alam, the director of publishing ethics and integrity at Taylor & Francis, also supports this approach.

“Authors are responsible for the validity and integrity of their work, and should cite any use of LLMs in the acknowledgments section.”
(Sabina Alam)

However, acknowledging AI tools in the acknowledgments section of a manuscript raises concerns similar to those against crediting them as authors, primarily because AI tools lack free will and are therefore incapable of consenting to acknowledgment. While being mentioned in the acknowledgments section may not carry the same level of accountability as being listed as an author, it nonetheless carries ethical and legal implications that warrant consent. Additionally, individuals may decline acknowledgment if they disagree with a study’s conclusions and wish to disassociate themselves from it, an option that does not apply to AI tools. In short, these tools cannot be held accountable or responsible in the way human beings can be.

Disclosing the Use of Generative and Other AI Tools in the Body of the Article

Disclosing the use of LLMs and other AI tools in research articles typically means reporting this information within the body of the text, much as other research tools are acknowledged. For software applications, standard citation practices, including in-text citations and references, are followed. However, articulating the use of AI tools and elucidating their role in research requires careful consideration because of their intricate capabilities.

Nevertheless, merely mentioning the use of AI tools within the text raises certain challenges, particularly regarding the discoverability of articles that have employed these tools. These challenges include the absence of indexing for non-English content and limited access to full-text articles, especially when content is paywalled. Moreover, inconsistencies in how researchers disclose the use of AI tools can undermine the openness and transparency of research. For instance, reporting practices may vary when LLMs are engaged in tasks that defy quantification, such as the conceptualization of ideas. Significantly, even with this level of disclosure, readers may still find it difficult to discern which portions of the text were generated by AI-based tools.

Adopting the general norms of software citation, i.e., including in-text citations and references, can effectively address both challenges associated with the use of LLMs in research articles. APA Style already offers a structured format for describing the use of LLMs and other AI tools, incorporating in-text citations, and providing proper references. Under this template, disclosure practices vary by article type: in research articles, disclosure is advised within the methods section, while in literature reviews, essays, or response papers, it is suggested in the introduction. Here is the format recommended by APA for describing the use of ChatGPT, along with the in-text citation and reference:

In-text Citation:

When prompted with “Is the left brain right brain divide real or a metaphor?” the ChatGPT-generated text indicated that although the two brain hemispheres are somewhat specialized, “the notion that people can be characterized as ‘left-brained’ or ‘right-brained’ is considered to be an oversimplification and a popular myth” (OpenAI, 2023).

Reference:

OpenAI. (2023). ChatGPT (Mar 14 version) [Large language model]. https://chat.openai.com/chat

Source: Ayubi, E. (2023, April 7). How to cite ChatGPT. APA Style
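
For authors preparing their manuscript in LaTeX, the same reference can be encoded as a bibliography entry. The snippet below is a minimal sketch assuming the biblatex package with the biblatex-apa style; the entry key openai2023chatgpt and the field choices are illustrative, not mandated by APA:

    % Illustrative BibTeX entry for a .bib file (assumes biblatex with biblatex-apa)
    @software{openai2023chatgpt,
      author  = {{OpenAI}},
      title   = {ChatGPT},
      version = {Mar 14 version},
      note    = {Large language model},
      year    = {2023},
      url     = {https://chat.openai.com/chat}
    }

The entry can then be cited in the text with \parencite{openai2023chatgpt}, which biblatex-apa renders in the familiar (OpenAI, 2023) form.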

 

However, incorporating details — such as the specific version, model, date of use, and user’s name — provides a more robust picture of the conditions under which the AI tools contributed to the research. This approach allows for better tracking, accountability, and transparency, acknowledging the dynamic nature of LLMs and AI tools, and their responses to different inputs and contexts.
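
As an illustration, a disclosure statement carrying these details might read as follows in a LaTeX manuscript. This is only a sketch: the tool, version, date, and task named here are hypothetical, and the wording is one possible formulation rather than a publisher-mandated formula.

    % In the manuscript source, after the main text and before the references
    \section*{Acknowledgments}
    During the preparation of this manuscript, the first author used
    ChatGPT (Mar 14 version; OpenAI) on 20~March~2023 to improve the
    readability of the introduction. The authors reviewed and edited the
    generated text and take full responsibility for the content of this
    publication.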

For verification purposes, it is advisable to document and disclose interactions with AI-based text generation tools, including the specific prompts used and the dates of queries. This information can be provided as supplementary material or in appendices for transparency and validation. Authors can also include complex AI models, extensive code, or detailed data preprocessing steps in supplementary materials. Additionally, authors should acknowledge any limitations and potential biases of the AI technologies in the discussion section and explain how these may affect the interpretation and generalizability of the results.
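
One simple way to organize such a record of interactions is a prompt log included as an appendix or supplementary file. The LaTeX table below sketches one possible format; the column choices and the sample entry are purely illustrative.

    % Appendix logging interactions with AI-based text generation tools
    \appendix
    \section{Log of AI Tool Interactions}
    \begin{tabular}{l l l p{5cm}}
    \hline
    Date & Tool (version) & Purpose & Prompt \\
    \hline
    2023-03-20 & ChatGPT (Mar 14) & Language editing &
      ``Improve the clarity of the following paragraph: \ldots'' \\
    \hline
    \end{tabular}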

Collaborative Efforts to Enforce AI Tool Disclosure

Given the diverse applications of LLMs and AI tools across research domains, it may be beneficial to establish more comprehensive guidelines or specific criteria governing their use. Professional associations and the editorial boards of journals should take the lead in formulating more consistent and uniform guidelines. A notable example of this proactive approach comes from the organizers of the 40th International Conference on Machine Learning (ICML), who stated in their conference policies that “Papers containing text generated from a large-scale language model (LLM) like ChatGPT are not permitted, unless this generated text is integrated as a component of the paper’s experimental analysis”.

The roles of various stakeholders, including journals, funding agencies, and the scientific community, are thus pivotal in enforcing rules that mandate the disclosure of AI tool usage in research. Funding agencies can explicitly require grantees to disclose their use of generative AI tools and technologies in research proposals, and they can conduct compliance checks during the grant review process to ensure adherence to these disclosure guidelines.

By raising awareness of the significance of disclosure, the scientific community can foster a culture of transparency within the research ecosystem. Researchers can actively advocate for responsible research practices and encourage their peers to adhere to disclosure guidelines. Additionally, the scientific community can exert pressure on journals and funding agencies, urging them to rigorously enforce rules related to AI tool disclosure. By working collectively, the scientific community can play a pivotal role in maintaining the integrity and credibility of scientific research.

Frequently Asked Questions

 

How do I disclose the use of AI in my manuscript?

To disclose the use of AI, specify the AI tools, models, and versions used in your research in the methods section of your manuscript. You may also acknowledge AI tool usage in the acknowledgments section, providing details such as the model, version, date of use, and user’s name for thorough transparency. Following the guidelines provided by the publisher of your target journal is an essential step; these guidelines outline the specific requirements and preferred format for disclosing AI tool usage in your manuscript.

Where in the manuscript should the use of AI be disclosed?

Check the guidelines provided by your target journal or publisher and ensure that your declaration aligns with them, as these guidelines vary from journal to journal. Depending on the article type, consider disclosing AI tool usage in the methods section for research articles or in the introduction for literature reviews, essays, or response papers. You may follow the general norms of software citation by including in-text citations and references. Additionally, for verification, document interactions with AI-based tools, including specific prompts and query dates, and provide this information as supplementary material or in appendices to enhance transparency and validation.

Can AI be listed as an author in scientific publications?

AI cannot be listed as an author in scientific publications. While AI tools such as large language models (LLMs) can assist in research and writing, authorship implies responsibility and accountability for the content, which AI lacks. Ethical and professional standards in scientific writing reserve authorship for human individuals who can take ownership of their work, make ethical judgments, and fulfill the responsibilities associated with research.

How can transparency in AI use be ensured?

Ensuring transparency in AI systems is of paramount importance in today’s technology-driven world. To achieve this, comprehensive disclosure is essential, encompassing the AI system’s configuration, algorithms, parameters, and data sources. Additionally, favor AI models that offer explainability, enabling users to understand the rationale behind AI decisions. External audits and adherence to publishers’ guidelines and ethical practices further solidify the commitment to transparency, fostering trust and accountability in AI applications.
