It is one of the small ironies of modern scientific communication that highly trained medical writers, professionals who may have spent decades refining style, precision, and the subtle art of narrative clarity, now find themselves wondering how to write like an AI (by which I really mean large language models). Who wouldn’t want to be able to construct a well-reasoned, fully referenced 5,000-word review article in 36 seconds?
If writing were merely a word optimisation problem, a contest of order, fluency, and speed, then the machines already have us beat [1]. But for humans, at least, writing is not simply the predictable arrangement of words. It involves a complex interplay of invention, judgment, perspective, memory, uncertainty, and play. Writing is hard work.
Machine Prose
Large language models (LLMs) such as ChatGPT write by replicating statistical patterns drawn from the vast landscape of human writing. They assemble sentences by predicting the most probable sequence of words given the input, sometimes described as a sophisticated form of “stochastic parroting” [2]. The result is writing that is smooth, grammatically correct, and unsettlingly neat. Analyses of AI-generated medical content consistently report high surface-level quality, with balanced sentence lengths, regular structure, and a polished ‘middle-ground’ tone [3,4].
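The “most probable next word” mechanism can be illustrated with a deliberately tiny sketch. Everything here is a toy assumption of mine (a three-sentence corpus, bigram counts, greedy decoding); real LLMs use neural networks over vastly larger contexts, but the principle of picking the statistically likeliest continuation is the same:

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the "vast landscape of human writing".
corpus = (
    "the trial met its primary endpoint . "
    "the trial met its secondary endpoint . "
    "the study missed its primary endpoint ."
).split()

# Count bigram transitions: how often each word follows another.
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most probable next word."""
    return transitions[word].most_common(1)[0][0]

# Greedy generation: always take the most probable continuation.
word, sentence = "the", ["the"]
for _ in range(5):
    word = predict_next(word)
    sentence.append(word)
print(" ".join(sentence))  # prints: the trial met its primary endpoint
```

Note what the model never does: it never prefers “missed” over “met”, because “met” is twice as common in its corpus. That is the statistical gravity pulling machine prose toward the middle ground.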
This predictability is not incidental: LLMs are programmed to optimise for linguistic averages. They gravitate toward consensus phrasing, symmetrical paragraphing, and an almost compulsive avoidance of stylistic risk. They rarely indulge in metaphor unless prompted. Seldom do they break structural conventions. And they produce prose that feels uncannily even. Its aspirations are beige. This is what gives AI text its characteristic flavour: technically impressive yet faintly detached from the excitement and pleasure of lived experience. It is the very reason why some have suggested that authors develop ‘lean’ texts by running their documents through AI tools (and perhaps the very reason they shouldn’t) [5].
Computational linguists argue that this “averaged-out” quality is precisely what differentiates AI-generated text from human writing. You have to work hard to get your own documents to match AI text in terms of its entropy. As an author, you will most likely use more variation in sentence rhythm and more irregular conceptual transitions [6]. Where we might occasionally leap, digress, or refocus for reasons shaped by our emotions, machines simply adhere to their statistical directive [7]. They also seem to love the ‘en’ dash. Although AI-generated text is statistically optimised, its sources may be questionable and its conclusions easily biased. Equally, in an educational sense, employing AI to write for you is technically cheating. There is no growth without effort, no pain no gain (thank you Jane Fonda), and using AI appears to suppress your writing and critical abilities [8].
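The entropy claim can be made concrete with a sketch. The two sample texts and both metrics below are my own illustrative assumptions, not a published detector; but word-frequency entropy and sentence-length variation (a crude proxy for rhythm) are the kinds of measures such analyses use:

```python
import math
from collections import Counter
from statistics import pstdev

def word_entropy(text):
    """Shannon entropy (bits per word) of the word-frequency distribution."""
    words = text.lower().split()
    total = len(words)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(words).values())

def sentence_burstiness(text):
    """Standard deviation of sentence lengths: a crude rhythm measure."""
    lengths = [len(s.split()) for s in text.split(".") if s.strip()]
    return pstdev(lengths)

even_text = "The drug was safe. The drug was effective. The drug was tolerated."
bursty_text = ("Safe. The drug, against every expectation we had, "
               "worked remarkably well. We paused.")

# Regular, 'averaged-out' prose scores lower on both measures.
print(word_entropy(even_text), sentence_burstiness(even_text))
print(word_entropy(bursty_text), sentence_burstiness(bursty_text))
```

The perfectly regular text has zero sentence-length variation and lower word entropy; the bursty text scores higher on both, which is exactly the irregularity a human author introduces without thinking.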
Identifying AI
Because AI-generated text has become increasingly common in academic and regulatory contexts, institutions have begun using AI-detection systems to assess authorship. Yet these systems are far from perfect. A 2023 evaluation of several leading detectors found that they frequently misclassified expert human writing, especially scientific writing, as machine-generated [9].
Ironically, the more polished, practised and structurally precise your writing becomes, the more likely your work will be mistaken for AI. Scientific prose, already inclined to meticulous clarity and uniformity, is particularly vulnerable to being flagged as AI-generated. For medical writers, this dynamic creates a challenging tension: to sound professional without sounding algorithmic. Determining the contribution that an AI may have made to a document can be particularly problematic when it comes to medical writing tests. At Niche we employ several techniques to verify originality with new job applicants, but you can never really know whether a document has been authored by a real person.
AI Impossible
For all its speed, flexibility and power, AI lacks personal experience, emotional grounding, and true contextual reasoning. It cannot capture the complex interplay of a difficult regulatory meeting, the uncertainties that a wide-ranging discussion explored when building a target product profile, or the mixture of frustration and scientific curiosity when a trial misses its primary endpoint. These experiences shape our writing in ways AI cannot reproduce. If you are anything like me your writing will also be influenced by lessons learned at the feet of past Editors (thank you Dr Kerr) [10].
Large language models also struggle with epistemic uncertainty. They often present information with unwarranted confidence or artificial evenness. This can be particularly problematic in areas where evidence is evolving or contradictory. Scholars studying AI hallucinations have highlighted this tendency to overstate or misstate facts while maintaining a veneer of authoritative polish [11]. And you never get the same result twice. In scientific communication, where responsible reporting of uncertainty is a virtue, a misplaced air of confidence is worrisome.
For now, writers can be differentiated from AI by the inclusion of the human elements: ambiguity, reflection, lived experience, and intellectual humility. These properties characterise real scientific reasoning; they signal not only an author’s authenticity but also their mastery, creativity and professionalism. Nevertheless, it might be challenging to include personal anecdotes in a clinical study report.
The Value of Irregularity
One of the key defining features of human writing is its refusal to remain perfectly symmetrical. Humans break patterns deliberately. Authors often change tone mid-paragraph when a moment of inspiration demands it. They introduce metaphors that reflect personal engagement with the developing text. Authors shift their emphasis for emotional reasons, not statistical ones. This apparent indiscipline and irregularity, far from being a flaw, is what makes your writing compelling.
AI rarely produces such moments without explicit direction. Studies comparing human and AI prose show that humans display nonlinear reasoning, asynchronous rhythms, and idiosyncratic narrative structures that AI strongly resists [12]. These irregularities are not noise; they are fingerprints of human thought.
Medical writers should perhaps resist the pull of AI-style prose by introducing some occasional asymmetry into their own writing. This is not sloppy writing, but deliberate texture. A surprising turn of phrase. A reflective digression. A sentence that carries the rhythm of a real conversation rather than a predicted one. Look at it as writing how you speak (when appropriate).
Ethics, Accountability, and Proper Use
There is no getting this genie back in its bottle [13]. Given the ubiquity of AI tools, major editorial bodies have clarified their expectations. The International Committee of Medical Journal Editors (ICMJE) clearly states that AI cannot be listed as an author because it cannot take responsibility for the integrity of its work [14]. The World Association of Medical Editors (WAME) similarly emphasises that any use of generative AI must be clearly reported, with human authors remaining fully accountable for interpretation, accuracy, and ethical soundness [15]. Both professional authors and AI have the potential to embed biases and to lie, but only humans can be held responsible [13].
For medical writers, whose work often involves confidential clinical data, regulatory reasoning, and nuanced interpretation, these principles are fundamental. AI can accelerate drafting, support literature summarisation, or assist with stylistic editing – but only you can judge, decide, weigh risks, or interpret evidence (I decided to use that en dash!). That boundary must remain clear.
Write Faster
Speed in medical writing isn’t necessarily about typing faster; it’s about optimising the entire process to reduce cognitive load, minimise rework, and maximise efficiency. You will never write faster than an AI, but, as we have commented previously (before the advent of AI), you can write faster [16,17]. In brief:
Process optimization and project management:
- Create a detailed outline or template. This should include required headings, subheadings, and placeholder text for tables and figures.
- Use templates and style guides (for abbreviations, drug names, etc.). Consistency saves time on formatting and editing later.
- Adopt reusable components (e.g., a standard methodology section) and re-use them across multiple documents, ensuring consistency.
- Dedicate specific, uninterrupted blocks of time to specific tasks to maintain high concentration levels.
Leverage technology:
The future is not about competing with LLMs; it's about collaborating with them:
- Where possible, use LLMs for brainstorming outlines, generating simple first drafts of non-critical text (e.g., emails, meeting minutes), or rephrasing clunky sentences. Crucially, you must act as the expert reviewer and fact-checker.
- Manage your source materials ruthlessly from the outset and confirm all included data and sources.
- Use tools like TextExpander or the built-in AutoCorrect/AutoText in Microsoft Word to create shortcuts for frequently used phrases.
Separate drafting from editing:
Your first goal is to get ideas down, not to make them perfect. Write a “vomit draft” without self-editing. Once the structure and content are there, go back and refine the language. You get the best outcomes when you know your data inside and out; the biggest delays often come from not being fully conversant with the source data. Spend quality time with the data before you write.
Reclaiming Writing as a Creative Act
In the end, the most compelling reason not to fully “write like an AI” stems from the joy you get from it. There is enormous satisfaction in shaping an argument, in finding the precise word to capture a complex idea, in discovering a better metaphor halfway through a sentence, or in allowing oneself to explore the occasional leap of imagination. And crafting, slowly, deliberately, with personality, humour, hesitancy, and unexpected brilliance, is a reward in itself.
As I have previously noted, words have functioned as spoken and written spells: capable of elevating or deceiving, uniting or dividing, enlightening or manipulating. Recognising and respecting this power is crucial for anyone engaged in public discourse, healthcare, education, and leadership [18]. For medical writers working in a world increasingly filled with machine-generated prose, reclaiming the idiosyncratic pleasure of writing is not merely an artistic choice but a professional one. In contrast, AI-written text feels competent and professional, yet slightly airless. Sentences flow with symmetrical grace. Paragraphs unfold like carefully segmented bricks. The argumentation progresses steadily forward, rarely pausing for a moment of personal reflection or an unexpected pivot of thought. AI does not ramble. It does not change its mind mid-sentence. It does not get lost in the emotional gravity of a difficult clinical dilemma or linger over a moment of scientific awe. Machines generate text; humans craft stories.
References
1. Hardman T. (2025). Medical Writing vs Artificial Intelligence.
2. Bender EM, Gebru T, McMillan-Major A, Mitchell S. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? FAccT ’21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. 2021;610–623.
3. Jeblick K, et al. ChatGPT Makes Medicine Easy to Swallow: An Exploratory Case Study on Simplified Radiology Reports. arXiv:2212.14882.
4. Herbold S, Hautli-Janisz A, Heuer U, et al. A large-scale comparison of human-written versus ChatGPT-generated essays. Sci Rep. 2023;13:18617.
5. Hardman T. (2025). Lean medical writing.
6. Weber-Wulff D, Anohina-Naumeca A, Bjelobaba S, et al. Testing of detection tools for AI-generated text. Int J Educ Integr. 2023;19:26. https://doi.org/10.1007/s40979-023-00146-z
7. Hardman T. (2025). Artificial intelligence: Pandora’s box?
8. Deep PD, Chen Y. The Role of AI in Academic Writing: Impacts on Writing Skills, Critical Thinking, and Integrity in Higher Education. Societies. 2025;15(9):247.
9. Achiam J, et al. GPT-4 Technical Report. arXiv:2303.08774. 2023.
10. Hardman T. (2018). Six steps to assessing your writing.
11. Huang L, et al. A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions. ACM Transactions on Information Systems. 2024.
12. Caliskan A, Bryson JJ, Narayanan A. Semantics derived automatically from language corpora contain human-like biases. Science. 2017;356(6334):183–186.
13. Niche Science & Technology Ltd (2025). Artificial intelligence in medical writing: An Insider’s Insight.
14. International Committee of Medical Journal Editors (ICMJE). Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals. Updated 2024.
15. World Association of Medical Editors (WAME). Recommendations on Chatbots and Generative Artificial Intelligence in Relation to Scholarly Publications. 2023.
16. Hardman T. (2018). How Long?
17. Hardman T. (2018). Medical Writer?
18. Hardman T. (2025). Spoken Spells: The Power of Words.