J Korean Assoc Oral Maxillofac Surg 2023; 49(3): 105~106
Could ChatGPT help you to write your next scientific paper?: concerns about research ethics related to the usage of artificial intelligence tools
Joo-Young Park, DDS, PhD Associate Editor of JKAOMS
Department of Oral and Maxillofacial Surgery, Seoul National University Dental Hospital, School of Dentistry, Seoul National University, Seoul, Korea
Joo-Young Park
Department of Oral and Maxillofacial Surgery, Seoul National University Dental Hospital, School of Dentistry, Seoul National University, 101 Daehak-ro, Jongno-gu, Seoul 03080, Korea
TEL: +82-2-6256-3133
E-mail: bbyoung1@snu.ac.kr
ORCID: https://orcid.org/0000-0002-0333-6349
Published online June 30, 2023.
© Korean Association of Oral and Maxillofacial Surgeons. All rights reserved.

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

More and more researchers are trying to use artificial intelligence (AI) tools such as ChatGPT (Chat Generative Pre-trained Transformer; OpenAI) for scientific writing1. Whether or not you agree with the use of such AI tools, it is becoming undeniable that ChatGPT can help generate research and scientific articles, and you may need to use this powerful tool to increase your productivity and the quality of your work. However, there has been sharp disagreement over ChatGPT being listed as an author on research papers2-5. Since ChatGPT was released as a free-to-use tool in November 2022, at least four articles have credited the AI tool as a co-author6-9. This raised ethical questions about authorship within the scientific publishing community, and journal editors, researchers, and publishers are now debating the place of such AI tools in the published literature and whether it is appropriate to cite the chatbot as an author.

ChatGPT can write academic essays, summarize research papers, and even answer questions well enough to pass medical exams. It has produced research abstracts that scientists found difficult to recognize as written by a non-human AI. However, it can also produce spam, ransomware, and other malicious outputs4. The chatbot can cite low-quality studies containing false numbers that nonetheless sound convincing enough to deceive human readers. Most worrisome is the fact that journal publishers, peer reviewers, and readers have no screening tools to detect such errors.

Therefore, since January 2023, the major high-impact journal publishers have begun to announce policies regarding AI usage and its declaration in submitted articles.(Table 1) The journal Science warned researchers in an editorial that submitting any manuscript produced using these tools amounts to scientific misconduct2. Science’s editor-in-chief, Holden Thorp, announced an editorial policy stating that all submissions must be the original work of the authors and that content produced by AI is a form of plagiarism; authors may use such a tool only if they have fully disclosed it and Science has approved it. The journal Nature has introduced similar rules and will not accept any papers listing ChatGPT or any other AI software as an author, but it has not banned these tools completely. Nature stated that researchers using large language model (LLM) tools should document this use in the methods or acknowledgements sections3-5. If a paper does not include these sections, the introduction or another appropriate section can be used to document the use of the LLM. Elsevier, which publishes about 2,800 journals, including Cell and The Lancet, has taken a similar stance to Science and Nature. Its guidelines allow the use of AI tools “to improve the readability and language of the research article, but not to replace key tasks that should be done by the authors, such as interpreting data or drawing scientific conclusions,” said Elsevier’s Andrew Davis, adding that authors must declare whether and how they have used AI tools.

Encouragingly, screening solutions are also being developed by the major publishers. Springer Nature is currently developing its own software to detect AI-generated text, while Science has said it would consider using detection software built by other companies. Although the Journal of the Korean Association of Oral and Maxillofacial Surgeons (JKAOMS) has not yet released an official policy on AI usage in manuscript writing, the editorial office recommends that authors and readers use LLM chatbots with caution and clearly declare any use of such tools, for example in the methods or acknowledgements section. Furthermore, please do not list AI tools in the author list. JKAOMS will soon update its policy and will keep an open mind regarding the future use of LLM tools in scientific writing.

Conflict of Interest

No potential conflict of interest relevant to this article was reported.

Tables

Table 1. Major scientific journal publishers’ policies on the usage of artificial intelligence (AI) tools

Journal publisher | Journal’s policy statement | Date of statement
AAAS (American Association for the Advancement of Science; publisher of Science) | We would not allow AI to be listed as an author on a paper we published, and use of AI-generated text without proper citation could be considered plagiarism. | 26 January 2023 (Science 2023;379:313)2
Springer Nature (publisher of Nature) | ChatGPT doesn’t meet the standard for authorship. Authors using LLMs (large language models) in any way while developing a paper should document their use in the methods or acknowledgements sections. | 24 January 2023 (Nature 2023;613:612)5
Elsevier (publisher of Cell and The Lancet) | The use of AI tools can improve the readability and language of the research article but cannot replace key tasks that should be done by the authors, such as interpreting data or drawing scientific conclusions. AI and AI-assisted tools cannot be credited as an author on published work. | March 2023 (authorship policy updated; https://www.elsevier.com/about/policies/publishingethics)

References
  1. Huang J, Tan M. The role of ChatGPT in scientific communication: writing better scientific review articles. Am J Cancer Res 2023;13:1148-54.
  2. Thorp HH. ChatGPT is fun, but not an author. Science 2023;379:313. https://doi.org/10.1126/science.adg7879.
  3. Stokel-Walker C. ChatGPT listed as author on research papers: many scientists disapprove. Nature 2023;613:620-1. https://doi.org/10.1038/d41586-023-00107-z.
  4. Gaggioli A. Ethics: disclose use of AI in scientific manuscripts. Nature 2023;614:413. https://doi.org/10.1038/d41586-023-00381-x.
  5. Tools such as ChatGPT threaten transparent science; here are our ground rules for their use. Nature 2023;613:612. https://doi.org/10.1038/d41586-023-00191-1.
  6. Kung TH, Cheatham M, Medenilla A, Sillos C, De Leon L, et al; ChatGPT. Performance of ChatGPT on USMLE: potential for AI-assisted medical education using large language models. medRxiv. https://doi.org/10.1101/2022.12.19.22283643 [Epub ahead of print].
  7. O'Connor S. Open artificial intelligence platforms in nursing education: tools for academic progress or abuse? Nurse Educ Pract 2023;66:103537. https://doi.org/10.1016/j.nepr.2022.103537.
  8. ChatGPT Generative Pre-trained Transformer; Zhavoronkov A. Rapamycin in the context of Pascal's Wager: generative pre-trained transformer perspective. Oncoscience 2022;9:82-4. https://doi.org/10.18632/oncoscience.571.
  9. Osmanovic-Thunström A, Steingrimsson S; GPT Generative Pretrained Transformer. Can GPT-3 write an academic paper on itself, with minimal human input? HAL. https://hal.science/hal-03701250 [Epub ahead of print].

