Meandering through the International War Memorial in London in late 2023, I wrestled with the unsettling thought that information warfare does not leave science untouched. One of the ways it reaches science is through disinformation. But what should be done about this uncomfortable thought?
When discussing disinformation, much has been written about social media, politics, and elections. But what about the world of scholarly research and publishing? What about the unusual patterns we’ve uncovered—the ones where editors are bribed, peer reviews are fake, patents are fabricated, and citation cartels thrive? Some of these patterns might simply reflect individual ambition, paper mills grinding out junk science for career advancement. But what if there’s more? What if destabilizing science itself serves broader political and economic agendas?
Science isn’t isolated from the messy world we live in — it never has been. From the infiltration of the Manhattan Project by spies like Klaus Fuchs to today’s complex web of paper mills, fraudulent journals, and governmental incentives for patents and publications, the landscape has shifted. And with it, the tactics used to manipulate trust.
But what do we do about the manipulations we see? This questioning led us to realise we needed words, and that led to thinking about a taxonomy to organise those words.

“We swim in language. We think in language. We live in language,” said James Carroll, to which Michiko Kakutani added years later: “And this is why authoritarian regimes throughout history have co-opted everyday language in an effort to control not just how people communicate but also how they think….” (The Death of Truth, p. 88)
Will White and I were preparing for the 2024 World Conference on Research Integrity (WCRI) in Athens, Greece, where we would unveil a framework we called the Taxonomy of Disinformation: an attempt to bring clarity of language and structure to what we saw unfolding in scholarly communication.
We developed the Taxonomy of Disinformation because words matter. If we can’t name the tactics used to corrode trust in science, how can we fight them? Our taxonomy doesn’t just catalog bad behaviors; it frames them in a way that helps researchers, policymakers, and the public see the bigger picture — not just isolated incidents, but patterns that threaten the foundation of scientific integrity.
This work is ongoing. It's uncomfortable. It's messy. And often, we don't even have the words — yet. But by beginning to name what we see, we push back against confusion, against manipulation, and against the quiet erosion of trust that authoritarian forces throughout history have sought to exploit.
Words matter. Truth matters. So does science.
What is our Taxonomy of Disinformation?
At the heart of it, uncovering and naming disinformation is like solving a mystery – it just requires the oldest questions of investigation: Who did it? Where did it happen? How was it done?
Sure, sometimes it’s a professor – but probably not Professor Plum in the library with a candlestick (no matter how much we love Clue/do).
To make sense of disinformation in science, we break it down into three parts:
Actors – the people, organizations, or governments behind the disinformation.
Outlets – where disinformation appears: journals, conferences, media, institutions.
Methods – the tricks used to spread it: gaming peer review, hijacking media attention, weaponising the legal system.
Actors can’t act without outlets. Outlets amplify methods. Methods erode trust.
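To show how the three axes combine in practice, here is a minimal sketch in Python. The category names and the example incident are illustrative assumptions of mine, not the taxonomy's actual published labels.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical categories for illustration; the taxonomy's real labels may differ.
class Actor(Enum):
    INDIVIDUAL = "individual researcher"
    ORGANISATION = "organisation or paper mill"
    GOVERNMENT = "government or state-aligned group"

class Outlet(Enum):
    JOURNAL = "journal"
    CONFERENCE = "conference"
    MEDIA = "media"
    INSTITUTION = "institution"

class Method(Enum):
    GAMED_PEER_REVIEW = "gaming peer review"
    HIJACKED_MEDIA = "hijacking media attention"
    LEGAL_WEAPONISATION = "weaponising the legal system"

@dataclass
class Incident:
    """One observed pattern, described by who (actor), where (outlet), and how (method)."""
    actor: Actor
    outlet: Outlet
    method: Method
    notes: str = ""

# Hypothetical example: a peer-review ring operating through a journal.
example = Incident(
    actor=Actor.ORGANISATION,
    outlet=Outlet.JOURNAL,
    method=Method.GAMED_PEER_REVIEW,
    notes="Reviews submitted from accounts controlled by the authors.",
)
print(f"{example.actor.value} via {example.outlet.value}: {example.method.value}")
```

The point of the sketch is simply that every incident gets a value on each of the three axes; the axes are independent, but a single incident only makes sense when all three are named together.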
This taxonomy isn’t about blame for its own sake – it’s about building a sharper, clearer map of how disinformation worms its way into science, so we can start pushing the corruption back out.
Disclaimer
We built this Taxonomy of Disinformation to name patterns, not to name and shame people or institutions. However, I suspect some will try to weaponise it. (Oh, and we have noted ways that might happen, but we are not publishing them here.)
Science is messy. Honest mistakes happen. New ideas feel uncomfortable. Not everything that looks strange is a sign of corruption, and not every act of corruption looks strange.
This taxonomy is a tool, not a weapon. It’s here to help us see more clearly, ask better questions, and spot real manipulation, not to hand out easy labels or political talking points.
If you’re tempted to use this framework to score points, shut people down, or twist the story, don’t. You’re part of the problem.
Use this with care — or maybe don’t use it at all.
Thank you for putting this taxonomy together. The idea of a taxonomy will help guide the debate about mis- and disinformation. As a researcher in this space (national security and public safety domains), I have written several policy notes on this issue.
One issue I did not resolve is whether disinformation is a substitute for calling something a lie. And is misinformation a confounding of facts, whether accidental or intentional?
I am bringing this perspective up for several reasons. Are we losing the ability to express criticism constructively and with meaningful intention? I see a dilution of language, which appears to be making us conservative in our critique and keeping us from speaking up in clear, articulate ways.