ChatGPT turns two: how the AI chatbot has changed scientists’ lives

In the two years since ChatGPT was released to the public, researchers have been using it to polish their academic writing, review the scientific literature and write code to analyse data. Although some think that the chatbot, which debuted widely on 30 November 2022, is making scientists more productive, others worry that it is facilitating plagiarism, introducing inaccuracies into research articles and gobbling up large amounts of energy.
The publishing company Wiley, based in Hoboken, New Jersey, surveyed 1,043 researchers in March and April about how they use generative artificial intelligence (AI) tools such as ChatGPT, and shared the preliminary results with Nature. Eighty-one per cent of respondents said that they had used ChatGPT either personally or professionally, making it by far the most popular such tool among academics. Three-quarters said they thought that, in the next 5 years, it would be important for researchers to develop AI skills to do their jobs.
“People were using some AI writing assistants before, but there was quite a substantial change with the release of these very powerful large language models,” says James Zou, an AI researcher at Stanford University in California. The model that caused an earth-shattering shift was the one underlying the chatbot ChatGPT, created by the technology firm OpenAI, based in San Francisco, California.
To mark the chatbot turning two, Nature has compiled data on its usage and talked to scientists about how ChatGPT has changed the research landscape.
• 60,000: the minimum number of scholarly papers published in 2023 that are estimated to have been written with the assistance of a large language model (LLM)1. This is slightly more than 1% of all articles in the Dimensions database of academic publications surveyed by the research team.
• 10%: the minimum percentage of research papers published by members of the biomedical science community in the first half of 2024 estimated to have had their abstracts written with the help of an LLM2. Another study estimated the percentage to be higher — 17.5% — for the computer science community in February3.
• 6.5–16.9%: the percentage of peer reviews submitted to a selection of top AI conferences in 2023 and 2024 that are estimated to have been substantially generated by LLMs4. These reviews assess research papers or presentations proposed for the meetings.
All of these figures, which were obtained by evaluating patterns and keywords in text that are characteristic of LLMs, are probably conservative estimates, says Debora Weber-Wulff, a computer scientist and plagiarism researcher at the HTW Berlin University of Applied Sciences. Her work shows that detection tools have been failing to determine whether a paper has been written with the assistance of AI5.
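In essence, these detection approaches compare how often words that LLMs tend to overuse appear in text written before and after ChatGPT's release. As a rough illustration only, a minimal sketch of the idea might count a set of marker words per thousand words of text; the word list below is hypothetical, and the published estimates rest on far more rigorous statistical modelling.

```python
import re
from collections import Counter

# Hypothetical marker words: real studies derive such lists statistically
# from corpus-wide frequency shifts, not from a hand-picked set like this.
MARKER_WORDS = {"delve", "delves", "intricate", "pivotal", "underscore", "underscores"}

def marker_rate(texts):
    """Occurrences of marker words per 1,000 words across a corpus."""
    counts = Counter()
    total = 0
    for text in texts:
        words = re.findall(r"[a-z]+", text.lower())
        total += len(words)
        counts.update(w for w in words if w in MARKER_WORDS)
    return 1000 * sum(counts.values()) / total if total else 0.0

# Comparing corpora from before and after ChatGPT's release shows whether
# marker usage has risen, a crude hint of LLM assistance.
pre = ["We measured the binding affinity of two candidate nanobodies."]
post = ["We delve into the intricate dynamics that underscore binding."]
print(marker_rate(pre), marker_rate(post))
```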
In the past two years, scientists have found that using ChatGPT to draft paper abstracts, as well as grant applications and letters of support for students, gives them more capacity to focus on complex tasks. “The things that deserve our time are the hard questions and the creative hypotheses,” says Milton Pividori, a medical informatician at the University of Colorado School of Medicine in Aurora.
LLMs can be especially useful for overcoming language barriers, researchers say. “It democratizes writing and it helps folks that have English as a second language,” says Gabe Gomes, a chemist at Carnegie Mellon University in Pittsburgh, Pennsylvania. An analysis posted in November ahead of peer review on the preprint server SSRN found that the quality of writing in papers by authors whose first language is not English improved after the release of ChatGPT, and improved more than the writing in papers by authors who are fluent in English6.
Since its release in 2022, ChatGPT has seen several upgrades. GPT-4, released in March 2023, impressed users with its capacity to generate human-like texts. The latest model, o1, which was announced in September and is available to paying customers and certain developers on a trial basis, can, OpenAI says, “reason through complex tasks and solve harder problems than previous models in science, coding, and math”. Data scientist Kyle Kabasares at the Bay Area Environmental Research Institute in Moffett Field, California, used o1 to recreate some code from his PhD project. When prompted with information from the methods section of Kabasares’s research paper, the AI system wrote code in one hour that had taken him almost a year of his graduate studies to construct.
One area in which ChatGPT and similar AI systems have so far been less successful is in doing literature reviews, Pividori says. “They don’t really help us to be more productive,” he says, because a researcher needs to fully read and understand papers that are relevant to their field. “If the paper is not central to your research, you could maybe use AI tools to summarize it,” he says. But LLMs have been shown to hallucinate7 — that is, make up information. For instance, they might talk about figures that don’t exist in a paper.
Privacy is another concern that has cropped up for researchers using LLMs. When scientists input original data that haven’t yet been published into one of these AI tools to write a paper, for example, there is a risk that the content could be used to train updated versions of these models. “These are black boxes,” says Weber-Wulff. “You have no idea what’s going to be happening with the data that you upload there.”
To avoid that risk, some researchers opt to use small, local models instead of ChatGPT. “You run it in your computer, and nothing is shared externally,” Pividori says. He adds that certain ChatGPT subscription plans ensure that your data won’t be used to train the model.
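For readers curious what running a model locally looks like in practice, here is a minimal sketch, assuming a local model server such as Ollama is installed and a model (named "llama3" here as an example) has already been downloaded. Prompts and data never leave the machine.

```python
import json
import urllib.request

def ask_local_model(prompt, model="llama3"):
    """Send a prompt to a locally hosted model via Ollama's HTTP API."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    request = urllib.request.Request(
        "http://localhost:11434/api/generate",  # Ollama's default local endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["response"]

# Unpublished results can be summarized without any data leaving the machine.
print(ask_local_model("Summarize: nanobodies are small antibody fragments."))
```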
One big question that researchers have been pursuing in the past year is whether ChatGPT can go beyond the role of a virtual assistant and become an AI scientist. Some preliminary efforts suggest that it’s possible. Zou is leading the development of a virtual laboratory in which different LLMs play the part of scientists in an interdisciplinary team, with a human scientist giving high-level feedback. “They work together to formulate new research projects,” he says. Last month, Zou and his colleagues posted the results of one of these projects to the preprint server bioRxiv, ahead of peer review8. The virtual lab designed nanobodies — a type of small antibody — capable of binding to variants of the coronavirus SARS-CoV-2, which caused the COVID-19 pandemic. Human researchers validated the work with experiments and identified two promising candidates for further investigation.
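As a purely illustrative sketch of how such a "virtual lab" can be structured (the roles, prompts and call_llm placeholder below are hypothetical, not the team's actual code), role-prompted agents can take turns on a shared transcript, with a human scientist reviewing between rounds.

```python
# Toy sketch of a "virtual lab" loop: each role-prompted agent responds in
# turn, and a human scientist reviews the transcript between rounds.
ROLES = {
    "immunologist": "Propose nanobody designs against the given target.",
    "computational biologist": "Critique the latest proposal for binding plausibility.",
}

def call_llm(system_prompt: str, message: str) -> str:
    # Placeholder: connect this to any chat-model API, local or remote.
    return f"({system_prompt.split()[0]} agent) considering: {message}"

def virtual_lab_round(task: str) -> list[tuple[str, str]]:
    transcript = [("human", task)]
    for role, system_prompt in ROLES.items():
        reply = call_llm(system_prompt, transcript[-1][1])
        transcript.append((role, reply))
    return transcript  # handed back to the human for high-level feedback

for speaker, text in virtual_lab_round("Design nanobodies for a SARS-CoV-2 variant."):
    print(f"{speaker}: {text}")
```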
Gomes and his colleagues are also excited by the prospect of using ChatGPT in the lab. Late last year, they harnessed the tool to design and carry out several chemical reactions with the help of a robotic apparatus. “The expectation is that these models will be able to discover new science,” Gomes says.
