ChatGPT is Now an Author of Academic Papers! Scientists Disapprove
Listing ChatGPT as an author on research papers has unsettled scientists and sparked a debate
OpenAI’s ChatGPT is now an author of academic papers. The artificial intelligence-powered chatbot, which has taken the world by storm, has made its formal debut in the scientific literature, racking up at least four authorship credits on published papers and preprints.
Journal editors, academics, and publishers are now debating the place of such AI tools in the published literature, and whether it is appropriate to credit the chatbot as an author. The chatbot, which the software firm OpenAI in San Francisco, California, released as a free tool in November, has publishers scrambling to develop policies for it.

ChatGPT is a large language model (LLM), which generates convincing sentences by mimicking the statistical patterns of language in a huge database of text collected from the Internet. The bot is already disrupting academia; in particular, it is raising questions about the future of research papers and university essays. Publishers and preprint servers agree that AI tools such as ChatGPT do not fulfill the criteria for a study author, because they cannot take responsibility for the content and integrity of scientific papers. But some publishers say that an AI's contribution to writing a paper can be acknowledged in sections other than the author list.
In a preprint published on the medical repository medRxiv in December of last year, ChatGPT is listed as one of 12 authors on a paper discussing the use of the technology for medical education. Richard Sever, co-founder of the repository and assistant director of Cold Spring Harbor Laboratory Press in New York, says the team behind medRxiv and its sibling site, bioRxiv, is debating whether it is appropriate to use and credit AI tools such as ChatGPT when writing papers, and notes that conventions may change. “We need to distinguish the formal role of an author of a scholarly manuscript from the more general notion of an author as the writer of a document,” he says. Authors take on legal responsibility for their work, so only people should be listed, he argues. “Of course, people may try to sneak it in — this already happened at medRxiv — much as people have listed pets, fictional people, etc. as authors on journal articles in the past, but that’s a checking issue rather than a policy issue.” Victor Tseng, the preprint’s corresponding author and medical director of Ansible Health in Mountain View, California, did not respond to a request for comment.
The editors-in-chief of Nature and Science both say that ChatGPT does not meet the standard for authorship. “An attribution of authorship carries with it accountability for the work, which cannot be effectively applied to LLMs,” says Magdalena Skipper, editor-in-chief of Nature in London. Authors who use LLMs in any way while developing a paper should document that use in the methods or acknowledgments sections, where appropriate, she says. “We would not allow AI to be listed as an author on a paper we published, and use of AI-generated text without proper citation could be considered plagiarism,” says Holden Thorp, editor-in-chief of the Science family of journals in Washington DC.