By Dr Warren Doudle, Director of the Security & Intelligence Research Group, Edith Cowan University, Australia
Abstract
In just a few short years, artificial intelligence has gone from science-fiction curiosity to ever-present productivity partner. Its promise of more automation, better problem-solving, deeper insights and higher efficiency once sounded like a straight path to progress.
But there is a darker side to our growing dependence on machine-generated answers. Over-reliance on AI can breed complacency, dilute human expertise, and trap us in a loop of regurgitated content. The more AI ingests its own output, the more it risks becoming like the infamous “human centipede” from the cult 2009 horror film: each segment feeding on the waste of the segment before it. This has profound implications for cyber security, where effective defence still depends on sharp human judgment, rigorous verification and the ability to detect subtle anomalies in a sea of plausible-sounding data.
In data terms, it’s garbage in, garbage out on a permanent circular track.
From reading the classics to crawling the internet
The story starts out innocently enough. Early AI models were trained on material we widely regarded as rich and authoritative: literature, academic journals, high-quality reference works. Feed the system the best of human thought, and it learns style, context, nuance. That was the theory.
Once that well was tapped, the net had to be cast wider. Developers turned to mainstream media, digital archives, blogs, and eventually the messier corners of the internet and social media. These repositories hold far more information than any human could read in a lifetime, but also orders of magnitude more misinformation, vitriol and noise.
For every well-educated, articulate content creator, there are many more voices pushing half-baked ideas, conspiracies and outright hate. The models don’t “know” the difference in the way humans do; they ingest it all and smooth it out statistically.
Then comes the twist: AI begins to cannibalise its own output. As more of the internet is populated by AI-generated text, the next generation of models is trained on content produced by previous systems. Each cycle blends, paraphrases and re-presents the same material, further blurring the line between original and recycled.
At the same time, humans are becoming more reliant on these tools. If the machine can draft the email, summarise the article and write the essay, why struggle with the source material? We get faster answers, but we also get lazier. Our capacity to spot errors, omissions and bias is eroding just as the systems are churning out more of them.
The human centipede parallel
In The Human Centipede, people are surgically connected so that each depends on the previous for “nourishment”. It’s a grotesque image, but it works as a metaphor for what happens when AI systems keep training on their own output.
Every step in that chain:
Distorts facts – Small inaccuracies in one generation become “truths” in the next. Over time, minor errors compound into significant falsehoods.
Kills originality – Instead of offering new perspectives, AI becomes an echo chamber of rearranged, half-remembered content.
Amplifies bias – A slight skew in the training data gets magnified each time the model re-feeds on itself.
The result is a closed feedback loop of content: polished, fluent, plausible and increasingly detached from reality.
Dumbing down in academia and cyber security
One of the most alarming manifestations of this cycle is in universities.
Students are now using AI tools to write essays, structure arguments, solve problem sets and even “conduct” research. Many skip the hard work of grappling with primary sources, testing ideas and wrestling with complex texts. Instead, they prompt an AI, skim the answer and paste. The same patterns are emerging in cyber security teams. Analysts, engineers and managers are increasingly leaning on AI tools to summarise threat reports, generate incident communications and even suggest detection logic.
This can have several consequences:
Nuance is lost
When a system summarises a 40-page article into three neat paragraphs, the difficult parts often disappear. Subtle arguments, caveats and methodological limitations are smoothed over.
Critical skills erode
If answers arrive instantly in clean prose, there is less incentive to interrogate the logic, check the facts or compare multiple sources. The ability to discern credible sources from questionable ones quietly atrophies.
Everything starts to sound the same
AI outputs trained on similar patterns produce essays that are eerily alike: formulaic structure, generic phrasing and “wank words” arranged in respectable-looking prose. Assessors are increasingly reading assignments that are technically correct but say very little.
In a cyber context, this sameness can be dangerous. If playbooks, advisories and incident reports are largely stitched together from generic AI outputs, organisations risk converging on similar blind spots. Adversaries, by contrast, are experimenting aggressively with AI to craft more convincing lures, mutate malware and probe defences at scale. A homogenous defensive mindset facing a highly adaptive threat landscape is not a good match.
Graduates risk leaving with degrees but limited capacity for independent, critical and creative thought. The traditional graduate attributes that once distinguished scholars from the crowd are being diluted by homogenous, AI-mediated work.
Prompting without validation
The irony is that AI can be a powerful amplifier of human intellect, if it is used correctly.
The problem is how end users, both students and professionals, actually behave. We are wired for convenience and instant gratification. Presented with a fluent, well-formatted answer, most people assume it must be right and move on. Very few take the time to verify sources, cross-check with independent material, or push back on the reasoning.
Over time, this reliance leads to a quiet form of mental atrophy. If you can ask the machine for everything from basic definitions to strategic advice, there is less and less left for you to figure out. Skills like problem-solving, creativity and logical reasoning simply don’t get exercised.
There is also a broader systemic risk. If everyone in a profession leans on the same small group of commercial AI tools, diversity of thought collapses. Organisations may decide they no longer need the “best and brightest” if the real work is just refining prompts and tidying outputs. Until, of course, the AI starts writing its own prompts and we discover how fragile the human contribution has become.
We have all seen Terminator and Idiocracy. The future probably won’t look exactly like either film, but the underlying warning is the same: outsourcing thinking has consequences.
Implications for cyber security practice
For cyber security professionals, the temptation to “let the machine think” is especially strong. Why hand-craft a phishing simulation when an AI can generate dozens of variants? Why spend hours writing an incident report when a model can turn bullet points into polished narrative?
Used thoughtfully, these tools can save time and expand what small teams can achieve. But if defenders stop cross-checking underlying indicators, verifying technical details, or testing whether the narrative actually matches the evidence, they become vulnerable to subtle errors and fabricated detail. Over time, this erodes the investigative mindset that underpins good detection and response.
There is also a cultural risk. If junior staff are trained to “ask the AI first”, they may never fully develop the pattern-recognition skills and scepticism that experienced analysts rely on when something just doesn’t look right on the screen.
A call to arms: reclaiming authentic thinking
AI is not going away. Used responsibly, it can be a valuable ally: helping us explore more ideas, sift large datasets, draft first cuts and surface blind spots. The question is not whether we use AI, but how.
To avoid a “human centipede” future of AI endlessly consuming its own output, we need to act on three fronts.
We need to set higher standards for training data
Institutions, companies and AI developers must take data integrity seriously. That means robust filtering, clear provenance, and active measures to limit heavy reliance on synthetic (AI-generated) text as training material, particularly in tools used for security operations, risk assessment and threat intelligence.
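As a purely illustrative sketch of what “provenance first” can mean in practice, the snippet below filters candidate training records by source metadata and a synthetic-content estimate before they enter a corpus. The field names, source categories and threshold are hypothetical, and real provenance tracking and synthetic-text detection are far harder problems; the point is only that filtering decisions should be explicit and auditable.

```python
# A minimal illustrative sketch (not a production pipeline): filter candidate
# training records by provenance metadata before they enter a corpus.
# The fields ("source", "synthetic_score") and the threshold are hypothetical.

from dataclasses import dataclass

@dataclass
class Record:
    text: str
    source: str             # e.g. "journal", "reference", "forum", "unknown"
    synthetic_score: float   # 0.0-1.0 estimate from an upstream detector (assumed to exist)

TRUSTED_SOURCES = {"journal", "reference", "news"}
MAX_SYNTHETIC_SCORE = 0.3    # arbitrary cut-off chosen for this sketch

def keep(record: Record) -> bool:
    """Keep only records with known provenance and a low synthetic estimate."""
    return record.source in TRUSTED_SOURCES and record.synthetic_score <= MAX_SYNTHETIC_SCORE

corpus = [
    Record("Peer-reviewed analysis of lateral movement techniques...", "journal", 0.05),
    Record("Auto-generated listicle about 'top 10 hacks'...", "unknown", 0.92),
]

filtered = [r for r in corpus if keep(r)]
print(f"kept {len(filtered)} of {len(corpus)} records")
```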
We need to teach validation as a core skill
In universities and workplaces, we need to move beyond a token line about “checking sources”. Critical thinking, evidence-based reasoning and source verification must be explicitly taught, practised and assessed. Using AI should be framed as the start of the process, not the end. For cyber teams, that includes sanity-checking AI-generated detections, narratives and code against lab tests, logs and known-good references.
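To make “validation as the start of the process” concrete, here is a minimal, hypothetical sketch of testing an AI-suggested detection pattern against labelled log samples before anyone relies on it. The regular expression and log lines are invented for illustration; the habit being demonstrated is treating the model’s output as a hypothesis to be tested, not an answer to deploy.

```python
# A minimal sketch of "validate before you trust": exercise an AI-suggested
# detection pattern against known-bad and known-good samples first.
# The pattern and log lines are invented for illustration only.

import re

# Pattern an assistant might propose for spotting encoded PowerShell commands
suggested_pattern = re.compile(r"powershell(\.exe)?\s+.*-enc(odedcommand)?\s+\S+", re.IGNORECASE)

known_bad = [
    "powershell.exe -NoP -Enc SQBFAFgAIA==",               # should match
]
known_good = [
    "powershell.exe -File nightly_maintenance.ps1",         # should NOT match
    "user searched knowledge base for 'powershell -enc'",   # benign text mention
]

misses = [line for line in known_bad if not suggested_pattern.search(line)]
false_alarms = [line for line in known_good if suggested_pattern.search(line)]

print(f"missed detections: {len(misses)}")
print(f"false positives:   {len(false_alarms)}")
# A human still has to judge whether any failures matter; the test simply
# forces the AI's suggestion to earn its place in the detection stack.
```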
We need to reward original thought and the ability to create
Assessment and performance systems need to value genuine analysis, reflection and creativity, not just polished output; for example, recognising analysts who uncover non-obvious attack paths or novel detection opportunities rather than those who simply produce clean AI-assisted reports. If an AI can produce a near-perfect generic essay or report, then our tasks, marking guides and hiring practices need to change to focus on the uniquely human contribution.
Avoiding the closed loop
AI offers an unparalleled opportunity to expand our knowledge and augment our capabilities. But the lure of convenience can blind us to its pitfalls. Unchecked reliance on machine-generated text will erode creativity, dull critical analysis and turn us into passive consumers of recycled content.
Like the human centipede, each step in a closed circuit of self-consumption degrades the entire system. If we allow AI to keep feeding on its own output while we switch off our brains, we shouldn’t be surprised when the quality of thinking in classrooms, boardrooms and public debate slides.
The good news is that this future is not inevitable. By setting high standards, questioning AI outputs, and committing to human oversight, we can keep the machine as a tool rather than a master. The choice is simple: remain active thinkers, or become well-fed pets of systems that no longer need us to think at all.

