
AI-Generated Figures in Academic Publishing

April 7, 2026

For me, generating text has never been an issue. My biggest challenge has always been how best to illustrate my point (and yes, I used AI to create the image above). Visual communication is a critical skill for scientists, and it just got easier. The integration of generative artificial intelligence (AI) into scientific workflows continues at pace [1][2]. It has moved rapidly beyond text generation and has begun to reshape how we present information visually. Conceptual diagrams, graphical abstracts, and even simulated microscopy images are now easy to generate [3][4][5].

This shift represents a significant transformation in how scientific knowledge is communicated, bringing both opportunities and threats [6][7]. The adoption of AI tools to generate images seems to have outpaced the development of coherent governance frameworks for responsible use [3][6][8]. Journals, publishers, and research communities are now engaged in an ongoing effort to define acceptable use, establish transparency requirements, and mitigate the risks associated with automated image generation [6][9][10][11][12].

Emerging AI Figure Generation

Generative AI tools such as diffusion models and multimodal large language models can readily translate textual prompts into detailed scientific illustrations [1][13][14]. They are increasingly being used to create diagrams and schematics (ever more complex conceptual diagrams, illustrations of biological pathways) and data-derived images (microscopy, cells, and graphical summaries) that have, in the past, required specialist illustration skills [3][4][5][15]. As such, they democratise access to high-quality visualisation, and their adoption promises to enhance the clarity of interdisciplinary communication. Win-win!

Yet these benefits are accompanied by epistemic concerns. Unlike traditional data visualisation techniques, AI-generated figures may not always be directly traceable to empirical data or reproducible workflows [7][16]. This introduces ambiguity regarding their evidentiary status: do they faithfully represent data, or are they merely illustrative abstractions [17][18]? The distinction matters most for data-derived images that might be mistaken for empirical evidence.

Journal Policies: A Fragmented Landscape

It is a testament to how fast the field is moving that policies governing AI-generated imagery remain inconsistent [3][6][7]. Major publishers, such as Nature Portfolio, Elsevier, PLOS, and Cell Press, have taken divergent approaches, ranging from outright prohibition to conditional acceptance with disclosure. For example, Nature journals have taken a relatively conservative stance, explicitly prohibiting AI-generated artwork in certain contexts due to unresolved copyright and legal concerns [6][12][19]. In contrast, Elsevier and PLOS currently permit AI-generated content provided that authors clearly disclose its use and retain full responsibility for accuracy and integrity [9][10][11]. It also seems that AI-generated figures are not equally controversial across fields:

  • Biomedical sciences: highest scrutiny
  • Physics/astronomy: concerns about synthetic data
  • Social sciences: conceptual diagrams widely accepted
  • Computer science: often more permissive

A recent analysis noted that the current situation is fragmented and insufficiently harmonised, arguing that disclosure alone is inadequate without mechanisms for accountability and verification [3][7]. Moreover, empirical studies suggest that even where policies exist, compliance is low. One large-scale analysis found that only 0.1% of papers explicitly disclosed AI use despite widespread adoption, highlighting a significant transparency gap [8][20].

Core Ethical and Scientific Concerns

The debate around AI-generated figures is underpinned by several recurring ethical and scientific concerns [2][6][7][17].

  • Reproducibility: Reproducibility is a foundational principle of scientific integrity, yet AI-generated figures complicate this expectation [16]. Traditional figures derived from empirical data can be regenerated from raw datasets and documented methods. AI-generated images, however, may rely on opaque model architectures, proprietary training data, and nondeterministic generation processes [1][13][14]. Reproducing a given output is effectively impossible unless prompts, models, and parameters are fully documented.
  • Risk of visual misinformation: AI systems can generate visually convincing but scientifically inaccurate images. This risk is especially acute in visually complex fields such as molecular biology or materials science. AI can produce highly plausible but inaccurate or entirely fabricated imagery, which may mislead readers and reviewers, distorting scientific understanding [17][18][20].
  • Authorship and attribution: AI tools do not meet authorship criteria, yet they contribute materially to figure creation. This raises questions about how to acknowledge AI involvement without misrepresenting responsibility. Journals generally require that authors retain full accountability for accuracy, but the boundaries between human and machine contribution remain blurred [2][6][21].

Finally, there remains the unresolved issue of copyright and data provenance. Many generative models are trained on large, heterogeneous datasets that may include copyrighted or ethically sensitive material, raising unresolved legal and ethical issues [6][12][19].

Tools and Technological Ecosystem

A 2026 review of scientific visualisation tools identifies a growing suite of AI-native platforms intended to support authors of scientific papers. These include systems capable of generating biological pathway diagrams, microscopy-style images, and graphical abstracts with minimal user input [2][4][5][15]. Some tools integrate domain-specific constraints to ensure scientific plausibility, while others provide code-based outputs to maintain numerical accuracy. Platforms such as SciDraw, Scillus, Illustrae, and PaperBanana enable users to generate diagrams aligned with disciplinary conventions [15].

More general-purpose tools, such as diffusion-based image generators, are also widely available, although they often require more substantial human involvement to create images that meet scientific standards [1][13][14]. These tools excel at conceptual illustration but may struggle with precision, scale, and adherence to domain-specific visual norms [4][5].

To the ethical and responsible scientist, AI tools should be viewed as assistive technologies rather than autonomous creators. Human oversight remains essential to ensure that generated figures accurately reflect the underlying science [2][6][7][17].

Practical Guidelines for Responsible Use

Emerging guidance converges on several practical principles for the responsible use of AI-generated figures in academic publishing [6][7][17].

  • Transparency is paramount: Authors are generally expected to disclose the use of AI tools, including the name of the system, the nature of its contribution, and the extent of human modification. This disclosure should typically be provided in figure legends, methods sections, or acknowledgements [6][7][8][9][10][11][12].
  • Accountability remains with the author: These tools leave no stable forensic signature, and there is currently no reliable audit mechanism that journals can use to screen submissions at scale. Thus, irrespective of how a figure is generated, authors remain responsible for verifying its accuracy, appropriateness, and compliance with journal standards [6][7][9][10][21].
  • Reproducibility should be ensured where possible: This should involve documenting prompts, model versions, and post-processing steps, enabling reviewers and readers to understand how the figure was generated [1][7][13][14][16][17].
  • Quality control is essential: AI-generated figures should be subject to the same level of scrutiny as any other scientific output, including full peer review and editorial assessment [6][7][16][17][18].
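
The reproducibility and transparency principles above can be operationalised quite simply. As a minimal sketch (assuming a Python workflow; the function name and record fields here are illustrative, not drawn from any journal's actual standard), an author could write a JSON "sidecar" file alongside each generated figure, recording the prompt, model, version, seed, and post-processing steps:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def write_figure_provenance(figure_path, prompt, model_name, model_version,
                            seed=None, post_processing=None):
    """Write a JSON sidecar file next to a generated figure, recording
    how it was produced so reviewers can trace (and, where the tool
    allows, re-run) the generation step."""
    record = {
        "figure": Path(figure_path).name,
        "generated_utc": datetime.now(timezone.utc).isoformat(),
        "model": {"name": model_name, "version": model_version},
        "prompt": prompt,
        "seed": seed,  # None if the tool exposes no seed control
        "post_processing": post_processing or [],
    }
    sidecar = Path(figure_path).with_suffix(".provenance.json")
    sidecar.write_text(json.dumps(record, indent=2))
    return sidecar

# Example: document a conceptual diagram made with a hypothetical tool
sidecar = write_figure_provenance(
    "figure1.png",
    prompt="Schematic of a MAPK signalling pathway, flat vector style",
    model_name="example-diffusion-model",  # placeholder, not a real product
    model_version="1.0",
    seed=42,
    post_processing=["cropped in Inkscape", "labels added manually"],
)
```

A reviewer can then repeat the generation using the recorded prompt and seed where the tool supports it; where it does not, the record at least makes the figure's origin explicit and auditable.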

Appropriateness must also be considered. Not all figures are suitable for AI generation; data-driven visualisations, for example, require direct linkage to empirical data and should not be replaced by synthetic approximations [5][8][12][18]. Finally, we shouldn't focus only on fully generated images. We should also be concerned about the use of AI for image 'enhancement' (denoising, in-painting, automated segmentation). These tools blur the line between legitimate pre- and post-processing and unethical image manipulation.

Toward Standardisation and Future Directions

The current trajectory suggests that AI-generated figures will become an increasingly common feature of scientific publications. However, whether or not they become acceptable for use will depend on the development of coherent and enforceable standards [3][6][7]. There is growing recognition that journal policies must evolve from simple disclosure requirements toward more robust frameworks incorporating verification, documentation, and accountability [3][7][8].

Standardisation efforts may include harmonised reporting guidelines, integration of AI metadata into publication workflows, and the development of tools for detecting or auditing AI-generated imagery [3][7][8][20]. At the same time, the scientific community must engage in a broader epistemological reflection on the role of visual representation in science. As AI blurs the boundary between illustration and evidence, maintaining clarity about what figures represent, and how they were created, will be essential to preserving trust in the scientific record [2][4][5][6][7][17].

Conclusion

Most scientists appreciate the value of the 'killer' figure: an image that captures the attention of our audience. The techniques available to scientists have evolved from hand drawing, to photography, to dot-matrix printouts, to ever-more complex PowerPoint slides. It seems that AI is the next step [4][5].

Figures generated by AI represent both a powerful innovation and a significant challenge for academic publishing. While they offer clear benefits in terms of speed and accessibility, they also introduce risks related to reproducibility, transparency, and visual integrity [6][7][16][17]. Science requires traceable provenance for any tool used in reporting findings, yet most generative models provide no disclosure of their training data.

The landscape is scattered with fragmented policies, uneven compliance, and evolving best practices. The responsible integration of AI into scientific visualisation will depend on a combination of clear guidelines, rigorous oversight, and a continued commitment to the principles of scientific integrity [3][6][7][8][16][17].

Ultimately, AI should augment, not replace, the critical judgement and accountability of researchers. The future of scientific communication will likely be shaped not by whether AI is used, but by how thoughtfully and transparently it is integrated into the research process [2][6][7][17].

References  

  1. Team G, Anil R, Borgeaud S, Wu Y, et al. Gemini: A family of highly capable multimodal models. arXiv preprint. 2023;arXiv:2312.11805.
  2. van Dis EAM, Bollen J, Zuidema W, van Rooij R, Bockting CL. ChatGPT: Five priorities for research. Nature. 2023;614(7947):224–6.
  3. Marushchenko E, et al. Patchwork policies: Mapping the divergent AI-image rules in scientific journals. Matter. 2026;9(3).
  4. Bindslev H. Scientific Visualization: The Visual Extraction of Knowledge from Data. Springer; 2008.
  5. Rougier NP, Droettboom M, Bourne PE. Ten simple rules for better figures. PLoS Comput Biol. 2014;10(9):e1003833.
  6. Using AI responsibly in scientific publishing. Nat Methods. 2026;23:271.
  7. Lin Z. Towards an AI policy framework in scholarly publishing. Trends Cogn Sci. 2024;28(2):85–8.
  8. He Y, Bu Y. Academic journals' AI policies fail to curb the surge in AI-assisted academic writing. 2025.
  9. PLOS. PLOS policy on the use of generative AI in published research. 2024.
  10. Cell Press. Cell Press policy on AI-generated content in manuscripts. 2024.
  11. Elsevier. The use of AI and AI-assisted technologies in writing for Elsevier. 2024.
  12. Artificial intelligence (AI) policy. 2024.
  13. Ramesh A, Dhariwal P, Nichol A, Chu C, Chen M. Hierarchical text-conditional image generation with CLIP latents. arXiv preprint. 2022;arXiv:2204.06125.
  14. Rombach R, Blattmann A, Lorenz D, Esser P, Ommer B. High-resolution image synthesis with latent diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022. p. 10684–95.
  15. SciDraw: AI-powered scientific illustration platform. 2025.
  16. Ioannidis JPA. Why most published research findings are false. PLoS Med. 2005;2(8):e124.
  17. Skulmowski A, Engel-Hermann P. The ethics of erroneous AI-generated scientific figures. Ethics Inf Technol. 2025;27:31.
  18. Bik EM, Casadevall A, Fang FC. The prevalence of inappropriate image duplication in biomedical research publications. mBio. 2016;7(3):e00809–16.
  19. Samuelson P. Generative AI meets copyright. Science. 2023;381(6654):158–61.
  20. Use of AI Is Seeping Into Academic Journals—and It's Proving Difficult to Detect. Wired. 2023.
  21. Thorp HH. ChatGPT is fun, but not an author. Science. 2023;379(6630):313.

About the author

Tim Hardman
Managing Director
Dr Tim Hardman is the Founder and Managing Director of Niche Science & Technology Ltd., the UK-based CRO he established in 1998 to deliver tailored, science-driven support to pharmaceutical and biotech companies. With 25+ years’ experience in clinical research, he has grown Niche from a specialist consultancy into a trusted early-phase development partner, helping both start-ups and established firms navigate complex clinical programmes with agility and confidence.

Tim is a prominent leader in the early development community. He serves as Chairman of the Association of Human Pharmacology in the Pharmaceutical Industry (AHPPI), championing best practice and strong industry–regulator dialogue in early-phase research. He is also a Board member of the European Federation for Exploratory Medicines Development (EUFEMED) and served as its President from 2021 to 2023, promoting collaboration and harmonisation across Europe.

A scientist and entrepreneur at heart, Tim is an active commentator on regulatory innovation, AI in clinical research, and strategic outsourcing. He contributes to the Pharmaceutical Contract Management Group (PCMG) committee and holds an honorary fellowship at St George’s Medical School.

Throughout his career, Tim has combined scientific rigour with entrepreneurial drive—accelerating the journey from discovery to patient benefit.
