To the disappointment of the advertising industry, it seems that many of us are capable of ‘tuning out’, making it difficult for advertisers to insert their ‘selling messages’ into our social narrative. Marketing therefore shifted its focus to creating seemingly educational content that delivers promotional messaging – aka content-driven marketing. The goal: earn trust while slipping in brand narratives and sales messages. This methodology is so subtle that readers often mistake marketing for enlightenment. It’s no accident that pharmaceutical companies adopted Good Publication Practice guidelines to avoid appearing to disguise marketing as science [1].
Going digital
The advent of the digital age has transformed the way we communicate, share information, and consume content. Everyone now has access to powerful channels of communication. Everyone can be an author, everyone has a story to tell. In certain circles your perceived ‘worth’ is set by your social presence and many have successfully commercialised their own social narrative.
Platforms like LinkedIn have become central to professional networking, brand building (business and individual), and information sharing. To keep their brands visible in crowded social media feeds, influencers, individuals and marketers alike are under unprecedented pressure to produce a deluge of content. Everyone is desperate to have their voice heard above the background noise and to retain their place in the collective consciousness. The ‘old ways’, whereby your marketing team spent weeks developing key messages to be included in carefully crafted ‘articles’ (message vehicles) and disseminated through high-return channels, are over. Your multi-million campaign will be swamped into irrelevance by Kimberley posting about how she will be attending the super sexy ‘BioTwit’ conference in Berlin – seven times a day (Kimberley, please, please stop!).
The focus on quantity of posts has long come at the expense of quality, leading to an overload of poorly researched, generic, or recycled articles. Many individuals and business models follow Joseph Stalin’s philosophy, “Quantity has quality of its own” (1941), rather than Benjamin Franklin’s: “The bitterness of poor quality remains long after the sweetness of low price is forgotten.”
The advent of AI
Just imagine how a tool that could generate a new article every minute would answer the dreams of marketers everywhere… Yep! The rise of large language models (LLMs) like ChatGPT has poured rocket fuel on the fire – it’s clickbait on steroids. We have entered the ‘AI slop’ era. Coined in the 2020s, the term has a pejorative connotation akin to spam emails [2].
Creating long-form content has never been easier. A 12-year-old with a laptop and an internet connection can flood platforms with advice on how to deal with your mid-life crisis. AI tools can produce articles, social media posts, and comments in seconds, reducing the need for human input and increasing the volume of shallow content [3]. The text created is coherent and contextually relevant, and many don’t care that it lacks depth or originality while drowning out valued content [4]. The adoption of AI by content creators, marketers, and individual users has seen marked increases in automated or semi-automated posts [5]. Catchy headlines and captivating thumbnails (also AI-generated) are the currency of popularity. Posts lure readers in with titles such as “10 Ways to Revolutionize Your Career” or “The Secret to Success That Nobody Tells You,” only to provide superficial content that recycles trite, worn-out ideas. Sensational or misleading headlines are designed to attract clicks [6]. Feel free to get some advice on sexy titles from our Insider’s Insight [7].
Posts will often include vague statements that apply to almost anyone and fail to address specific challenges (no better than a latter-day newspaper horoscope) [8]. Claims by self-anointed ‘experts’ are designed to resonate (buzzword warning) with your deepest desires but are rarely supported by data, case studies or real-world examples. Research has shown that the proliferation of shallow content is driven by the algorithms used by social media platforms, which prioritize engagement metrics such as likes, shares, and comments [9].
Research shows that low-quality information can go viral, driven by the audience’s limited attention span and cognitive overload [10]. Algorithms amplify shallow posts by prioritising content that generates high engagement, irrespective of quality [11]. It is now a race to the bottom, where users and content creators are incentivised by ‘views’ to produce content that is easily consumable but lacks substance [12]. And don’t forget the overuse of buzzwords like ‘synergy’, ‘innovation’ or ‘disruption’, often deployed in the absence of any meaningful application.
As LLMs have become more sophisticated, they have gained the ability to do more than simply enhance your research, accelerate drafting and overcome writer’s block. Obviously, every ‘author’ using ChatGPT claims that they only use these tools to do the groundwork (after which, they say, they provide the originality and audience relevance). They insist that the core ideas, critical analysis, structure, and final voice remain distinctly those of the author [13]. Believe it or not, this is something high school teachers are hearing a lot. One wonders whether such authors feel the same pride in their creations as the great authors did [14]? Would Charles Dickens have toured the world reading from his Great Expectations if it had been written ‘with the aid’ of an AI? Possibly. A network connection of mine proudly admitted to me last week that he had created a series of LinkedIn posts in a matter of minutes, despite having no actual clinical experience in the field.
Consequences
Widespread adoption of the bait-and-switch approach has consequences. I am not the only person clicking on articles hoping to find in-depth analysis, unique perspectives or actionable takeaways only to be disappointed. I am not the only one left scrolling through a sea of underwhelming content, fostering a growing sense of disillusionment. My issue is not so much irritation that my questions are not being answered (I am more than capable of using Google), it is just a depressing realisation that we could all do better.
The rise of shallow social media posts has significant psychological and societal implications regarding misinformation, polarisation, and decreased trust in online content, leading to scepticism and disengagement [15]. The use of AI-generated content erodes the authenticity of social media interactions, as users may struggle to distinguish between human and machine-generated posts [16].
Repeated exposure to shallow, low-quality content leads to cognitive overload, reduced attention spans, and increased frustration [17]. This phenomenon, known as “information fatigue,” is associated with decreased well-being and productivity [18]. Content that lacks factual accuracy and quotes fallacious references ‘imagined’ by its AI author contributes to the spread of misinformation and disinformation [8]. The prioritisation of engagement over quality can lead to the formation of echo chambers, where users are exposed only to content that reinforces their existing beliefs and biases [19, 20].
What’s to be done?
Are there any strategies you can adopt to help you regain control, avoid repetitive, vague or superficial content and find more meaningful insights from LinkedIn or other platforms? We all recognise that social media platforms prioritise content that generates the most clicks, shares, and engagement. You can always unfollow accounts that consistently post low-quality or clickbait content (though this feels a little incongruous on a networking platform). You might also give feedback to your colleagues and fellow creators, asking them to prioritise depth, authenticity, and well-researched material (good luck with that).
Resist the urge to click on sensational headlines unless you’re confident the source is reliable. Subscribe to trusted sources like newsletters, blogs, or platforms known for producing thoughtful, high-quality content. If you really want regular positive insights into the psychology of life you can always try www.psychologytoday.com. At Niche we try to provide valuable content in our Insider’s Insights. Alternatively, use Google to find long-form articles on topics that interest you, rather than relying on social media algorithms. Remember to ‘share’ and amplify articles or posts (like mine) that genuinely provide value.
Do we know what meaningful content looks like, and can we actually create it ourselves? The issues with irritating content are the same whether or not you use ChatGPT. Quick-and-dirty documents rarely reference case studies, citations, surveys, or data analysis to support claims and provide fresh perspectives. Neither do they aim to encourage meaningful discussion, preferring to initiate superficial interactions like likes and shares. You can only ‘humanise’ your posts, and stand out, by sharing insights drawn from personal experience or your organisation’s unique expertise.
Quo Vadis
There is hope, of sorts. The current situation is little more than a passing phase, and we may already have reached ‘peak irritation’. Don’t get me wrong, the pressure to win ‘share of voice’ amid the raging cacophony is sure to continue. The internet has an insatiable appetite, and it will only grow. Pressure on marketeers to pump out ever more ‘content’ will only serve to drive the production of more and more meaningless clickbait. The genie’s promise – augmenting those efforts with computer-generated content – will be too great a temptation to resist. However, AI is coming to our rescue.
Unlike the Kimberleys of the world, attempting to grab their few minutes in the spotlight, AI does not see ‘the audience’ as a commodity to be exploited. Algorithms are beginning to recognise that shallow, low-quality posts are a growing source of irritation (it’s not just me) [16]. Research suggests that planned interventions aimed at improving the quality of online content will reduce irritation and enhance user satisfaction [21]. These technologies have the potential to enhance creativity and productivity as platforms begin to incentivise the creation and sharing of high-quality content by rewarding depth, originality, and accuracy (in the form of prioritised ‘airtime’) [11]. They are also watching what you are watching, thereby ensuring that everything you see is novel and builds on your previous interactions (nothing at all worrying about that) [5]. It seems that ‘the noise’ will eventually be consigned to the dumpster. To Kimberley I can only say, make hay while the sun shines.
References
1. DeTora LM, et al. (2022). Good Publication Practice (GPP) guidelines for company-sponsored biomedical research: 2022 update. Ann Intern Med, 175(9), 1298-1304.
2. AI slop. Wikipedia.
3. Floridi, L. (2020). The Ethics of AI. Oxford University Press.
4. Brown, TB, et al. (2020). Language models are few-shot learners. Advances in Neural Information Processing Systems, 33, 1877-1901.
5. Brynjolfsson, E, McAfee, A. (2014). The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. W.W. Norton & Company.
6. Chen, Y, et al. (2015). Online social interactions: A natural experiment on word of mouth versus observational learning. Journal of Marketing Research, 52(2), 238-257.
7. Niche Science & Technology (2017). Putting your best foot forward: An Insider’s Insight into what makes a great title. https://www.niche.org.uk/asset/insider-insight/Insider-Titles.pdf
8. Pennycook, G, Rand, DG. (2019). Fighting misinformation on social media using crowdsourced judgments of news source quality. Proceedings of the National Academy of Sciences, 116(7), 2521-2526.
9. Bakshy, E, et al. (2015). Exposure to ideologically diverse news and opinion on Facebook. Science, 348(6239), 1130-1132.
10. Avram, M, et al. (2020). Exposure to social engagement metrics increases vulnerability to misinformation. arXiv:2005.04682 [cs.CY].
11. Napoli, PM. (2019). Social Media and the Public Interest: Media Regulation in the Disinformation Age. Columbia University Press.
12. Tufekci, Z. (2015). Algorithmic harms beyond Facebook and Google: Emergent challenges of computational agency. Colorado Technology Law Journal, 13(2), 203-218.
13. Hwang, AH-C, et al. (2024). “It was 80% me, 20% AI”: Seeking authenticity in co-writing with large language models. arXiv:2411.13032 [cs.HC].
14. Draxler, F, et al. (2023). The AI ghostwriter effect: When users do not perceive ownership of AI-generated text but self-declare as authors. arXiv:2303.03283 [cs.HC].
15. Fletcher, R, Nielsen, RK. (2018). Are people incidentally exposed to news on social media? A comparative analysis. New Media & Society, 20(7), 2450-2468.
16. Chesney, R, Citron, D. (2019). Deep fakes: A looming challenge for privacy, democracy, and national security. California Law Review, 107(6), 1753-1819.
17. Lanier, J. (2018). Ten Arguments for Deleting Your Social Media Accounts Right Now. Henry Holt and Co.
18. Bawden, D, Robinson, L. (2009). The dark side of information: Overload, anxiety and other paradoxes and pathologies. Journal of Information Science, 35(2), 180-191.
19. Pariser, E. (2011). The Filter Bubble: How the New Personalized Web Is Changing What We Read and How We Think. Penguin Books.
20. Qiu, X, et al. (2017). Limited individual attention and online virality of low-quality information. arXiv:1701.02694 [cs.SI].
21. Resnick, P, et al. (2013). Bursting your (filter) bubble: Strategies for promoting diverse exposure. Proceedings of the 2013 Conference on Computer Supported Cooperative Work, 95-100.