I wondered whether I should entitle this article ‘Hubris in the Age of AI’.
In 2015, I wrote ‘Instant Expertise: Cognitive Limits to Rapid Learning’, an article that explored a growing illusion: that the internet’s vast information resources could compress the long apprenticeship of scientific mastery into a few hours of online searching. Even then, the warning signs were clear [1]. The article’s hypothesis was that people were beginning to mistake access to information for understanding, and the ability to retrieve facts for the wisdom that comes only from years of immersion, practice, and reflection.
It now feels like a snapshot taken just before a seismic shift. The last decade has not merely intensified the trend; it has transformed it. The ‘instant expert’ of 2015, armed with search engines and online summaries, was just a prototype. The modern version, empowered by large language models (LLMs) and generative AI, is something altogether different: more confident, more fluent, and often more profoundly mistaken.
Do you remember when expertise was shaped not by what you could retrieve instantly, but by the slow, deliberate work of learning where knowledge lived and how to interpret it?
The Changing Geography of Knowledge
In the 1980s, scientific expertise had a physical geography. Knowledge lived in journals, books, and personal archives of photocopied papers. No sane person imagined they could hold the whole of a discipline in their head; instead, they learned where knowledge resided. The craft of science included the craft of archiving and navigation. You knew which shelf held the seminal volume, which colleague had the obscure monograph, and which review article contained the figure that clarified everything. Expertise was distributed across people, places, and paper.
By the 2000s, the explosion of scientific output had outpaced even the most diligent archivist. Search engines, online databases, and full‑text repositories became essential tools. The scientist’s skill set shifted from knowing where information lived to knowing how to search for it effectively. Boolean operators, citation networks, and keyword strategies became part of the tacit knowledge of research [2]. Yet even then, the process retained friction. You still had to read the papers, compare findings, and synthesise insights. The search was only the beginning; understanding required hard thinking [3].
The 2015 article argued that this shift had already created a new illusion: the ‘instant expert’, emboldened by the speed of search and the fluency of online summaries. The instant expert believed that because they could retrieve information quickly, they understood it deeply. They mistook fluency for mastery, and they discounted the value of apprenticeship: the slow accumulation of tacit knowledge, the ability to detect nuance, and the judgement that comes only from long exposure to the messy, contradictory reality of scientific research [4].
When Machines Do the Synthesising
The emergence of LLMs and generative AI has collapsed the distance between question and answer. Where search engines once returned lists of sources, LLMs now return synthesised narratives. They don’t require the user to read widely, compare perspectives, or evaluate evidence [5][6]. They present a coherent, confident response that feels like understanding, even when it is not.
This shift has profound implications for scientific literacy, epistemic humility, and the development of expertise. Cognitive scientists have long warned that easy access to information inflates people’s perception of their own knowledge. The now‑infamous ‘Google effect’ demonstrated that individuals offload memory to external systems and become more confident even as their internal knowledge weakens [7]. More recent studies have extended this concern to AI systems, demonstrating that LLM‑generated explanations can create an illusion of comprehension, leading users to believe they understand complex topics more deeply than they do [8].
The problem is not simply that LLMs can be wrong. It is that they are convincingly wrong. They generate fluent, coherent, and authoritative prose, even when the underlying reasoning is flawed or the citations are fabricated. This fluency amplifies overconfidence. Research has shown that people are more likely to trust explanations that are linguistically smooth, irrespective of their accuracy [9]. LLMs are the used‑car salesmen of science: perfect exploiters of cognitive bias.
Skills Lost in the Age of Instant Answers
The loss of skill associated with widespread LLM use is subtle but significant. One of the first casualties is the ability to search effectively. When a single prompt produces a synthesised answer, users lose the habit of exploring the literature, comparing sources, or tracing ideas back to their origins. Studies on digital cognitive offloading show that when individuals rely on external systems for information retrieval, their internal search strategies deteriorate over time [10]. The skill of constructing a precise query, once central to scientific work, quietly atrophies.
A second casualty is the ability to read deeply. Reading scientific literature is not merely about extracting facts; it is about absorbing the structure of arguments, recognising methodological limitations, and developing a sense for what constitutes robust evidence. LLMs short‑circuit this process by presenting a distilled narrative that hides the underlying complexity. Research on comprehension shows that summarisation tools can reduce engagement with primary sources, leading to shallower understanding and poorer retention [11]. When the machine does the synthesis, the user loses the opportunity to build the mental scaffolding that supports true expertise.
A third casualty is critical thinking. The apprenticeship model of science trains researchers to question assumptions, evaluate evidence, and recognise when something ‘feels wrong’. These skills develop only through repeated exposure to conflicting data, ambiguous results, and methodological nuance. LLMs, however, present a smooth surface. They rarely express uncertainty unless prompted, and they often resolve contradictions by ignoring them. This can erode the user’s ability to detect flaws or inconsistencies. Studies have shown that reliance on AI‑generated explanations can reduce analytical vigilance, making users less likely to question incorrect outputs [12].
A fourth casualty is epistemic humility. When answers arrive instantly and confidently, users become more confident too. The 2015 article warned that the instant expert was already prone to overconfidence, mistaking access for understanding. LLMs amplify this tendency dramatically. They provide not just information but the appearance of mastery. Research on metacognition shows that people struggle to assess their own understanding when information is presented fluently, leading to inflated self‑assessments and reduced willingness to seek further learning [13]. The modern instant expert is thus doubly misled: they believe they understand because the machine sounds like it understands.
The lost opportunities are equally significant. In the old model, searching the literature exposed you to serendipity [4]. You encountered adjacent papers, contradictory findings, or unexpected insights. LLMs remove this ‘philosophical wandering.’ They give you the destination without the journey. They compress the landscape of knowledge into a single path, eliminating the side roads where new ideas often emerge. Scholars have noted that this narrowing effect may reduce creativity and hinder interdisciplinary discovery [14].
Hubris Upon Hubris: The New Instant Expert
It feels like we are heading towards a new kind of epistemic fragility: a tissue‑paper‑thin understanding of ever more complex issues that collapses under pressure. Users will speak fluently about topics they have never studied, supported by machine‑generated synthesis that mimics expert reasoning without possessing any underlying conceptual grounding. This is not merely hubris; it is hubris amplified by automation. The user believes they understand because the machine sounds authoritative. The machine sounds authoritative because it has been trained to imitate authority. The feedback loop is complete.
In this sense, the modern instant expert represents hubris upon hubris. The first layer is the belief that access to information is equivalent to understanding. The second is the conviction that machine‑generated synthesis is equivalent to expertise. The third is the certainty that fluency is equivalent to truth. Each layer reinforces the others, creating a self‑sustaining illusion of mastery.
Is the solution to reject AI? No. These tools have extraordinary potential to accelerate discovery, support research, and democratise knowledge. The challenge is to use them without surrendering the skills that make expertise possible. A recent article in Psychology Today noted that adults are losing skills to AI while children will never build them in the first place [15]. We have to preserve the apprenticeship model, cultivate scepticism, and teach the next generation that synthesis is not understanding, fluency is not mastery, and confidence is not competence. True expertise still requires time, humility, and the willingness to wrestle with complexity.
References
1. Hardman TC. Instant Expertise: Cognitive Limits to Rapid Learning. 2015.
2. Hardman TC. Conducting effective literature searches. 2017.
3. Hardman TC. Scientific Documentation: Mindful Archives vs. Instant Access. 2025.
4. Hardman TC. Artificial Intelligence vs. PubMed: The Future of Literature Searching. 2025.
5. Hardman TC. Medical Writing vs Artificial Intelligence: Threat, Tool, or False Debate? 2025.
6. Hardman TC. Medical Writing 2026: Adapting to AI and Rising Complexity. 2026.
7. Sparrow B, Liu J, Wegner DM. Google effects on memory: cognitive consequences of having information at our fingertips. Science. 2011;333(6043):776–8.
8. Ji Z, Lee N, Frieske R, et al. Survey of hallucination in natural language generation. ACM Comput Surv. 2023;55(12):1–38.
9. Oppenheimer DM. Consequences of erudite vernacular utilized irrespective of necessity: problems with using long words needlessly. Appl Cogn Psychol. 2006;20(2):139–56.
10. Storm BC, Stone SM. Saving-enhanced memory: the benefits of saving on the learning and remembering of new information. Psychol Sci. 2015;26(2):182–8.
11. Schmid RF, Telaro G. Concept mapping as an instructional strategy for high school biology. J Educ Res. 1990;84(2):78–85.
12. Logg JM, Minson JA, Moore DA. Algorithm appreciation: people prefer algorithmic to human judgment. Organ Behav Hum Decis Process. 2019;151:90–103.
13. Rozenblit L, Keil F. The misunderstood limits of folk science: an illusion of explanatory depth. Cogn Sci. 2002;26(5):521–62.
14. Bender EM, Gebru T, McMillan-Major A, Shmitchell S. On the dangers of stochastic parrots: can language models be too big? FAccT. 2021;610–23.
15. Cook T. Adults Lose Skills to AI. Children Never Build Them. Psychology Today. 2026.