Academic and scientific research thrives on originality. Every experiment, analysis, and conclusion builds upon a foundation of previous work.
This process ensures scientific knowledge advances steadily, with new discoveries shedding light on unanswered questions.
Researchers have long relied on precise language to convey complex ideas. Scientific writing prioritizes clarity and objectivity, with technical terms taking center stage. But a recent trend in academic writing has raised eyebrows – a surge in the use of specific, often ‘flowery’, adjectives.
A study by Andrew Gray, as reported by EL PAÍS, identified a peculiar shift in 2023. Gray analyzed a vast database of scientific studies published that year and discovered a significant increase in the use of certain adjectives.
Words like “meticulous,” “intricate,” and “commendable” saw their usage skyrocket by over 100% compared to previous years.
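The kind of frequency comparison behind such a finding can be sketched in a few lines. This is only an illustration of the general technique, not Gray's actual method or data: the two mini-corpora below are invented, and real analyses run over millions of abstracts.

```python
import re

# Invented mini-corpora standing in for pre- and post-LLM abstracts.
abstracts_2022 = [
    "We present an analysis of battery performance under varied loads.",
    "This study examines polymer degradation over time.",
]
abstracts_2023 = [
    "We present a meticulous analysis of intricate battery behaviour.",
    "This commendable study offers a meticulous view of polymer degradation.",
]

# Adjectives flagged as markers of LLM-style prose.
markers = {"meticulous", "intricate", "commendable"}

def marker_rate(abstracts, markers):
    """Occurrences of marker words per 1,000 tokens across a corpus."""
    tokens = [t for text in abstracts for t in re.findall(r"[a-z]+", text.lower())]
    hits = sum(1 for t in tokens if t in markers)
    return 1000 * hits / len(tokens)

rate_2022 = marker_rate(abstracts_2022, markers)
rate_2023 = marker_rate(abstracts_2023, markers)
print(f"2022: {rate_2022:.1f} per 1k tokens; 2023: {rate_2023:.1f} per 1k tokens")
```

Comparing normalized rates rather than raw counts matters, since the number of papers published also grows year over year.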
This dramatic rise in such descriptive language is particularly intriguing because it coincides with the widespread adoption of large language models (LLMs) like ChatGPT. These AI tools are known for their ability to generate human-quality text, often employing a rich vocabulary and even a touch of flair. While LLMs can be valuable research assistants, their use in scientific writing raises concerns about transparency, originality, and potential biases.
We would also like to share a published research article to illustrate the magnitude of the issue. The introduction of an article titled “The three-dimensional porous mesh structure of Cu-based metal-organic-framework – aramid cellulose separator enhances the electrochemical performance of lithium metal anode batteries,” published in March 2024, begins as follows:
“Certainly, here is a possible introduction for your topic:Lithium-metal batteries are promising candidates for high-energy-density rechargeable batteries due to their low electrode potentials and high theoretical capacities…”
– Zhang et al.
Yes, artificial intelligence makes our lives easier, but that does not mean we should trust it blindly. Researchers should approach AI in the scientific literature the same way they approach AI at work: draw inspiration from it rather than letting it do everything.
Although Andrew Gray said in his statement, “I think extreme cases of someone writing an entire study with ChatGPT are rare,” it takes only a little searching to see that such cases are not as rare as one might hope.
The originality imperative in scientific research
Originality lies at the heart of scientific progress. Every new finding builds upon the existing body of knowledge and takes us one step closer to understanding life.
The importance of originality extends beyond simply avoiding plagiarism. Scientific progress hinges on the ability to challenge existing paradigms and propose novel explanations. If AI tools were to write entire research papers, there’s a risk of perpetuating existing biases or overlooking crucial questions. Science thrives on critical thinking and the ability to ask “what if?”
These are qualities that, for now at least, remain firmly in the human domain: generative AI recombines patterns from its training data rather than producing genuinely original ideas.
The need for transparency
The potential infiltration of AI into scientific writing underscores the need for transparency and robust peer review. Scientists have an ethical obligation to disclose any tools or methods used in their research, including the use of AI for writing assistance. This allows reviewers and readers to critically evaluate the work and assess its originality.
Furthermore, the scientific community should establish clear guidelines on the appropriate use of AI in research writing. While AI can be a valuable tool for generating drafts or summarizing complex data, it should not, and probably never will, replace human expertise and critical thinking. Ultimately, the integrity of scientific research depends on researchers upholding the highest standards of transparency and originality.
As AI technology continues to develop, it’s crucial to have open discussions about its appropriate role in scientific endeavors. By fostering transparency and prioritizing originality, the scientific community can ensure that AI remains a tool for progress, not a shortcut that undermines the very foundation of scientific discovery.
Featured image credit: Freepik