The Battle for Knowledge
Knowledge is precious and vulnerable
If the recent past was characterised by an unprecedented influx of data – data that, with the assistance of the internet, was transformed into information – then the present moment is something else entirely. We are now fully engaged in a power struggle over knowledge itself. Who controls the words controls the narrative. Who controls the narrative controls the minds and culture. And who controls the minds controls what is seen as true, legitimate, or even thinkable.
It would be easy to argue that the oversimplification of vocabulary has reduced people’s ability to think. That shrinking language leads to shrinking thought, and that this plays out at the level of entire populations. That argument is not wrong. But it is incomplete.
There is another dynamic at work: the overcomplication of thinking. The relentless bombardment of decisions. The exhaustion that comes not from ignorance, but from having to constantly choose, interpret, configure, and assess.
[Side rant: I cannot even start my “smart” oven without cognitive burden. How do I turn the oven on? (One dark button.) What do I want to do? (Normal cooking, or something pre-programmed? Another button.) What type of heating do I need? (Multiple, scrolling options.) What temperature? How much time? Where is the start button? The oven is simultaneously “smart,” but so very dumb. And the experience is exhausting.]
This anecdote is a small-scale example of a broader condition: decision fatigue as a feature of modern knowledge systems. When everything requires configuration, interpretation, and judgment, cognitive bandwidth is consumed before any meaningful thinking begins. Even when you don’t need meaningful thought – just a pre-heated oven.
Oversimplification dulls thought; overcomplication overwhelms it. Both are tools of control.
The result is not only fatigue, but misdirection. This happens within, around, and to science. Attention is pulled toward what is easiest to measure or understand, rather than toward what is most consequential.
When we look at contemporary discussions of research security, for example, the focus is typically narrow, measurable, and ultimately insufficient. We hear about dual affiliations. We worry about undeclared ones – rightly so, as those are often the ones that matter most. We hear about dual-use technologies: research that could be a boon for any country, but that might also give a strategic advantage to an economic or geopolitical competitor. Chip manufacturing is the canonical example.
These are real concerns. But they represent a very specific lens. Research security, as it is commonly framed, focuses on inputs and outputs: who is involved, where funding comes from, and how results might be misused. What it rarely interrogates is research as knowledge infrastructure – as a system that shapes narratives, norms, and legitimacy.
Now consider other communities that explicitly study power and control. And notice what is missing.
Look at intelligence communities. MI5 and MI6 focus on cyberattacks, espionage, art theft, economic sabotage. Again, all important. MI5 and the FBI have woken up to foreign attacks on universities. But research? Scholarly publishing? Peer review manipulation? Citation networks? Almost entirely absent.
Research appears boring. Slow. Esoteric. It does not feel like a lever of power.
That is precisely why it is one.
The published research paper holds value because it holds vetted knowledge. Knowledge that has passed, however imperfectly, through peer review. Knowledge that has been edited, contextualised, and debated. Knowledge that other researchers cite, critique, and build upon. This vetting is tenuous, yes. It is flawed, yes. But it is still one of the few large-scale mechanisms we have for producing trusted knowledge.
And trusted knowledge is powerful.
Once something is “in the literature,” it acquires authority. It becomes referenceable. It becomes teachable. It becomes policy-relevant. It can be mobilised to justify decisions, shape regulations, or reinforce narratives about what is normal, safe, innovative, or inevitable.
Control the production of that knowledge, distort its evaluation, or flood the system with strategically generated noise, and you do not need to censor anything outright. You simply change what rises to the top, what is amplified, and what is quietly sidelined.
This is the battle for knowledge as it exists today. Not a single front, but a diffuse struggle across infrastructures, incentives, and attention. It plays out in peer-review systems stretched beyond capacity, in metrics that reward volume over validity, in AI systems trained on unexamined corpora of “authoritative” text.
The battle for knowledge plays out in the fatigue of editors, reviewers, and readers who must constantly decide what to trust, what to ignore, and what they no longer have the energy to question. It plays out for universities, which bear responsibility for their researchers, and for valiant librarians, the underappreciated guardians of knowledge.
Research integrity, research security, and public trust in science are not separate issues. They are different entry points into the same conflict.
Forensic Scientometrics (FoSci blog, FoSci Paris Declaration) exists because the battle for knowledge control through science is already underway – largely unnoticed, poorly theorised, and unevenly contested. If knowledge is both precious and vulnerable, then defending it requires more than good intentions. It requires forensic attention to how knowledge is produced, validated, circulated, and weaponised. And it requires expertise, research, intelligence, and counter-intelligence.
The battle for knowledge is not coming. We are already in it. FoSci exists to ensure it is not fought blindly.
Welcome to 2026.