Medical Writing vs Artificial Intelligence: Threat, Tool, or False Debate?

January 27, 2026

Artificial intelligence (AI) systems based on large language models (LLMs) have rapidly transformed scientific communication, enabling the generation of coherent text, summarisation of large datasets, and automated drafting of research documents such as manuscripts and trial protocols. Since 2022, major journals such as Nature, Science, The Lancet, and JAMA have published editorials acknowledging the disruptive impact of LLMs on research workflows and publication practices [1–3]. There have been similar developments in the field of regulatory writing [4]. These tools promise efficiencies in writing and data interpretation, but they also introduce challenges related to accuracy, transparency, and research integrity. The potential impact of these tools on scientific writing prompted us at Niche to introduce our own in-house guidelines and to publish a guide on their use in early 2023 [5].

Medical research depends on methodological rigour, reproducibility, and transparent reporting. Yet the academic environment increasingly rewards publication volume, citations, and impact-factor-driven metrics, factors that have long distorted research priorities [6–8]. In the pharmaceutical industry setting, AI tools offer the opportunity to shorten delivery timelines and reduce resource burdens (and therefore costs). Against this backdrop, AI offers a means to accelerate research outputs, raising concerns about the potential erosion of the ‘human’ contribution and the lack of transparency of the AI ‘black box’.

The Appeal of Artificial Intelligence

Large language models have the potential to assist with data interpretation, literature reviews, conceptual framing, and document drafting. Surveys of researchers show that AI is already delivering significant reductions in the time spent preparing documents and summarising findings, particularly for non-native English speakers and junior researchers [9]. Writers can also use LLMs to review their own work. Early evaluations of LLM-generated scientific abstracts found that human reviewers often rated AI-written abstracts as readable and of good quality; the outputs were often difficult to distinguish from human writing [10].

Beyond linguistic improvements, AI can help review vast amounts of information during the conceptual phase, enabling quicker identification of research gaps and potential methodological considerations. For researchers under pressure to complete projects and publish findings (or working to tight budgets), such acceleration is highly attractive.

The ‘publish or perish’ culture remains a defining characteristic of academic medicine. Incentives tied to high publication volumes and citation counts have a documented history of leading to research waste, salami-slicing, and redundancy within the literature [6,8]. AI has the potential to intensify these unethical practices by enabling rapid production of manuscripts. Researchers may be tempted to use AI to increase publication throughput, prioritising quantity over scientific rigour.

Challenges and Risks

When LLMs draw on ‘raw’ data mined from the internet, as commonly available tools such as ChatGPT do, they risk incorporating historic bias, misinformation, and error in place of contemporary opinion [11]. Among the well-documented limitations of LLMs is their tendency to generate inaccurate statements, fabricated references, and unsupported claims. Analyses published in Nature Medicine and JAMA highlight that AI-generated biomedical content may contain subtle factual inaccuracies that are difficult to identify [3,10]. When incorporated into manuscripts, such errors risk propagating misinformation into the scientific record. Furthermore, the internal mechanisms and training data of commercial LLMs are often not transparent. This concern led the International Committee of Medical Journal Editors (ICMJE) to emphasise that AI tools cannot be credited as authors and that researchers must remain fully accountable for all content [12]. The use of ‘black-box’ tools confounds accountability and reproducibility, the cornerstones of scientific integrity.

The Journal Landscape

While AI clearly lowers the effort required to produce flowing scientific text, it also creates opportunities for unscrupulous paper mills. Academic publishers and editors have expressed concern that AI will exacerbate problems related to fraudulent or low-quality submissions [2]. This places strain on peer review and editorial oversight, processes already believed to be close to breaking under the pressure of submissions.

Biomedical journals already face a rising number of submissions coupled with a shrinking reviewer pool. A 2022 Nature survey reported increasing difficulty securing reviewers owing to workload, burnout, and a growing sense of dissatisfaction with the peer review system [13,14]. AI can only be expected to exacerbate this strain: it amplifies the rate of manuscript generation while improving the surface quality of submissions, making inferior research harder to identify and potentially increasing the volume of low-quality or speculative reports. Serving as a regular reviewer for several journals, I have observed a spike in requests to review articles where the data have clearly been generated by automated database trawling and the text written with the aid of AI. I can report that journals do not always respond appropriately when these concerns are raised.

In an attempt to address the editorial onslaught, journals are considering AI tools for triage, plagiarism detection, topic prediction, and even preliminary assessment of methodological quality [15]. While these may improve processing efficiency, they also risk embedding biases from historical citation patterns and editorial preferences. Some publishers are already experimenting with AI to support peer review by generating summaries or highlighting potential weaknesses in submissions. COPE and major publishers caution, however, that AI cannot replace human judgement and should never have autonomous authority in review decisions [16]. It seems imperative that peer review remains a human-led process, essential for contextualising data and observations, evaluating the methods used, and safeguarding scientific standards.

The Future

After a rocky period [17,18], professional medical writers have established their key role in shaping and contributing to scientific endeavours: ensuring adherence to reporting guidelines such as CONSORT, PRISMA, and ARRIVE [19–21], providing methodological clarity, and facilitating communication between researchers, statisticians, and clinicians. Initial assessments suggest that the rise of AI will necessitate changes to this role.

Some research groups and industry teams are already experimenting with AI to generate draft documents, raising concerns that medical writers will eventually be displaced [4]. There is no denying that, when given well-thought-out and comprehensive prompts, LLMs can rapidly generate structured text that appears polished, potentially reducing reliance on professional writing support for basic drafting. It seems likely, then, that if medical writers are to survive they will need to shift their function from that of language-orientated scientist to that of AI engineer. New skills and responsibilities will involve:

  • Creating effective AI prompts (prompt engineering)
  • Verifying AI-generated outputs
  • Ensuring compliance with reporting guidelines
  • Safeguarding interpretive accuracy
  • Detecting hallucinations or fabricated references (see the sketch after this list)
  • Documenting and disclosing AI use
  • Coordinating rapid human team reviews on multiple projects
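
Of these, reference verification lends itself particularly well to partial automation. The snippet below is a minimal sketch, assuming access to the public Crossref REST API (api.crossref.org); the matching logic and the example citation are illustrative assumptions rather than an established workflow, and no automated match should replace human verification.

    import requests

    def crossref_lookup(citation):
        """Return the top Crossref match for a free-text citation, or None."""
        resp = requests.get(
            "https://api.crossref.org/works",
            params={"query.bibliographic": citation, "rows": 1},
            timeout=10,
        )
        resp.raise_for_status()
        items = resp.json()["message"]["items"]
        return items[0] if items else None

    def screen_references(citations):
        """Print a screening report; 'NO MATCH' entries need manual review."""
        for citation in citations:
            match = crossref_lookup(citation)
            if match is None:
                print(f"NO MATCH (check manually): {citation}")
            else:
                title = (match.get("title") or ["<untitled>"])[0]
                print(f"Possible match: {title} (DOI: {match.get('DOI')})")

    screen_references([
        "Thorp HH. ChatGPT is fun, but not an author. Science. 2023;379:313.",
    ])

A crude screen of this kind catches only the most blatant fabrications (citations with no plausible match anywhere in the indexed literature) and leaves the judgement calls where they belong: with the writer.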

It might be argued that these changes are analogous to how statisticians integrated ever more complex computational tools without losing professional relevance—by becoming stewards of quality, directing strategic approaches to analysis and maintaining methodological rigour.

Ethical and Regulatory Considerations

Leading publishers, COPE, and the ICMJE require disclosure of AI use in manuscript preparation [12,16]. Although enforcement of these requirements currently varies, transparency is essential to maintaining credibility. AI cannot meet authorship criteria, which require accountability, decision-making contributions, and responsibility for the work’s integrity. Two philosophical approaches have emerged:

  • Detection and Restriction (“Witch Hunt”): Journals use AI-detection tools to identify AI-generated text. However, detection algorithms remain imperfect, with high rates of false positives (and false negatives), introducing the potential to unfairly target authors whose writing style resembles AI (see the sketch after this list).
  • Integration and Transparency: AI is accepted as a tool akin to statistical software or grammar-checking programs, on the understanding that its use is transparent and content is verified by the authors. Many experts argue that integration is inevitable and “resistance is futile.”
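
The false-positive problem deserves emphasis, because the arithmetic is unforgiving. The sketch below uses invented, illustrative numbers (not measured detector performance) to show how, when AI-written submissions are a minority of the pool, even an apparently accurate detector flags mostly innocent authors.

    def flagged_breakdown(prevalence, sensitivity, false_positive_rate, n=10_000):
        """Break down detector flags for n submissions; all inputs are assumptions."""
        ai_written = prevalence * n
        human_written = n - ai_written
        true_flags = sensitivity * ai_written              # AI-written, correctly flagged
        false_flags = false_positive_rate * human_written  # human-written, wrongly flagged
        precision = true_flags / (true_flags + false_flags)
        return true_flags, false_flags, precision

    # Assumed scenario: 5% of submissions are AI-written, the detector catches
    # 90% of them, and it falsely flags 10% of human-written manuscripts.
    tp, fp, precision = flagged_breakdown(0.05, 0.90, 0.10)
    print(f"Correct flags: {tp:.0f}; wrongful flags: {fp:.0f}; precision: {precision:.0%}")
    # -> Correct flags: 450; wrongful flags: 950; precision: 32%

Under these assumed numbers, roughly two out of every three flagged manuscripts would in fact be human-written, which is the heart of the ‘witch hunt’ objection.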

It is clear that, however imperfect the technology, there is no way we will ever get the genie back in the bottle [22]. AI’s involvement in medical research is here to stay, and it will continue to insinuate itself into the scientific record.

Toward a Sustainable Future

To ensure AI enhances rather than undermines scientific integrity, the research community will need to introduce strategies that address the current and future limitations of AI tools for the benefit of the scientific literature. Considerations include:

  • Harmonised Standards: Building on COPE and ICMJE guidance, unified policies for AI use should define acceptable practices, responsibilities, and verification requirements.
  • Strengthening Peer Review: Reviewer incentives—such as recognition systems, academic credit, or workload redistribution—should be expanded to counteract reviewer shortages.
  • Training and Education: Researchers, clinicians, medical writers, and editors must receive training in AI literacy, responsible use, and critical evaluation of AI-generated content.
  • Rebalancing Incentive Structures: Institutions and funders should prioritise methodological rigour, reproducibility, and social value over publication volume.
  • Transparency as a Norm: AI use should be routinely disclosed, just as statistical software and editorial assistance are currently reported.

Conclusion

AI-based language models are reshaping medical writing and publishing, and many questions remain to be answered. While they provide powerful tools for enhancing clarity, efficiency, and accessibility, they also carry substantial risks: hallucinations, erosion of accountability, and amplification of existing biases and distortions in academic incentives. Journal editors face new pressures to manage growing submission volumes, and medical writers will most likely need to adapt to evolving professional roles. It remains unclear how the market for medical writers will respond to an industry that needs less deep understanding of underlying scientific disciplines and therapeutic targets. Will future medical writers be little more than highly organised project managers capable of facilitating accelerated multi-project delivery, pretty much what they already do but on steroids?

OK, I admit, speculating on the long-term value or limits of LLMs in scientific and medical writing is increasingly futile, because both the technology and its applications are advancing at a pace that outstrips our ability to predict them. When we published our guide [5], many believed LLMs could offer drafting assistance at best; today, they already support literature synthesis, methodological reasoning, data interrogation, and regulatory-aligned document structuring. The trajectory is not linear but exponential: each new model rapidly expands the scope, speed, and fidelity with which AI can contribute to scientific endeavours. In such an environment, fixed forecasts quickly become obsolete. Rather than guessing what LLMs will or will not be able to do, the more productive focus is on continuous evaluation, responsible integration, and developing frameworks that allow both humans and machines to contribute where they add the most value.

Whether the scientific community chooses to police AI use aggressively or integrate it transparently will profoundly influence the future of medical communication. With robust standards, transparency, and sustained human oversight, AI has the potential to support—not supplant—the quality and integrity of medical research.

References

  1. Nature Editorial. Tools such as ChatGPT threaten transparent science; here are our ground rules for their use. Nature. 2023;613:612.
  2. Thorp HH. ChatGPT is fun, but not an author. Science. 2023;379(6630):313.
  3. Shen Y, Heacock L, Elias J, et al. ChatGPT and other large language models are double-edged swords for biomedicine. Nat Med. 2023;29:2150–3.
  4. Vieira K. AI in Regulatory Medical Writing: 7 Powerful Ways It’s Revolutionizing Pharma Documentation. 2025. https://themedwriters.com/ai-in-regulatory-medical-writing/
  5. Niche Science & Technology Ltd. ChatGPT and Artificial Intelligence in Medical Writing: An Insider’s Insight. 2023.
  6. Ioannidis JPA. Meta-research: Why research on research matters. PLoS Biol. 2018;16(3):e2005468.
  7. Casadevall A, Fang FC. Reforming science: methodological and cultural reforms. Infect Immun. 2020;88(6):e00064-20.
  8. Chalmers I, Glasziou P. Avoidable waste in the production and reporting of research evidence. Lancet. 2009;374(9683):86–9.
  9. van Dis EA, Bollen J, Zuidema W, van Rooij R, Bockting CL. ChatGPT: five priorities for research. Nature. 2023;614:224–6.
  10. Gao CA, Howard FM, Markov NS, et al. Comparing scientific abstracts generated by ChatGPT to original abstracts using blinded human reviewers. JAMA Netw Open. 2023;6(3):e231345.
  11. Hardman T. Love: an AI’s inquiry into the essence of human connection. 2024.
  12. International Committee of Medical Journal Editors. Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals. Updated 2023.
  13. Conroy G. How ChatGPT and other AI tools could disrupt scientific publishing. Nature. 2023;622(7982):234–6.
  14. Hardman T. Are science publishers monster predators? 2024.
  15. Stokel-Walker C. AI-assisted peer review is coming. Nature. 2023;614:610–2.
  16. Committee on Publication Ethics (COPE). COPE position statement: Authorship and AI tools. 2023.
  17. Gøtzsche PC, et al. What should be done to tackle ghostwriting in the medical literature? PLoS Med. 2009;6(2):e1000023.
  18. McHenry L. Of sophists and spin-doctors: industry-sponsored ghostwriting and the crisis of academic medicine. Mens Sana Monogr. 2010;8(1):129–45.
  19. Schulz KF, Altman DG, Moher D; CONSORT Group. CONSORT 2010 Statement: updated guidelines for reporting parallel group randomized trials. BMJ. 2010;340:c332.
  20. Moher D, Liberati A, Tetzlaff J, Altman DG; PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. 2009;6(7):e1000097.
  21. Kilkenny C, Browne W, Cuthill IC, Emerson M, Altman DG. Improving bioscience research reporting: the ARRIVE guidelines. PLoS Biol. 2010;8(6):e1000412.
  22. Hardman T. Artificial intelligence: Pandora’s box? 2024.

About the author

Tim Hardman
Managing Director
View profile
The Managing Director of Niche Science & Technology Ltd., a 30+ person bespoke services CRO based in the UK, Dr Tim Hardman founded the company in 1998. With over 40 years of experience in clinical research, Dr Hardman is highly regarded for his expertise in translational science, clinical pharmacology, and the strategic design and implementation of clinical studies. Dr Hardman began his career with a solid foundation in pharmacology, earning his doctorate in the field and gaining early experience in academic and clinical research settings. His career path saw him working in the field of regulatory science, where he developed a deep understanding of clinical trial design, data interpretation, and regulatory requirements across various therapeutic areas. Dr Hardman’s expertise spans early-phase studies, first-in-human trials, and advanced regulatory submissions, helping numerous clients bring innovative therapies from concept to clinical reality.
