Fraud, Blackmail, and the Weaponization of Integrity
When Guardians of Science Face Retaliation
When I explain the research ecosystem—how it functions, where it breaks, and how bad actors exploit it—I often meet a mix of curiosity and disbelief. Most people understand corruption in politics, crime in business, or fraud in the art world. But when I suggest that research and knowledge itself—a pursuit of truth—can also be a battleground for deception, extortion, and career-ending threats, the reaction is usually shock.
For those who may not know, there is a loosely connected group of individuals—sometimes called sleuths, forensic metascientists, or, in this blog's case, FoSci-fians—who dedicate their time to investigating research misconduct, scientific manipulations, and the like. Some people do this independently; others are part of organizations or universities with a vested interest in maintaining research integrity. They examine issues like falsified data, fake peer reviews, image manipulation, and complex citation rings. Their work is crucial in exposing fraud and nefarious practices, but it also puts them in the crosshairs of those who have something to hide—and sometimes, those who want to use them as a weapon.

The Research Integrity Battlefield
In the research ecosystem, there are many players: researchers, institutions, funding bodies, and journals, all relying on a fragile web of trust. But, as in any human endeavor, some will act dishonestly. And sometimes, institutions or individuals trying to do good get caught in webs of deception and manipulation.
Take, for example, an academic sleuth investigating a high-profile case of research inconsistencies.
For those unfamiliar, PubPeer is a post-publication discussion forum where researchers can comment on published papers, highlighting potential flaws or even outright misconduct.
At a recent ASCE meeting, I was asked what I thought would happen to PubPeer in the future. My response: PubPeer is invaluable as a tool for transparency, but I fear it could increasingly be weaponized—not to improve science but to threaten scientists. Science is inherently messy, and open discussion helps refine understanding. Fraud should be exposed. But we must be cautious. In the wrong hands, platforms meant to ensure integrity can become tools of extortion.
From Investigation to Intimidation
This brings us to a disturbing example: Sylvain Bernès, an investigator who has built a reputation for carefully exposing misconduct, receives an email from someone urging him to investigate a particular research group. He chooses not to pursue it—perhaps the case lacks merit, or maybe he doesn't have the bandwidth. It doesn't matter, since he conducts these investigations on his own time. But then comes the second email:
"Dear Sylvain, I am waiting for your response. If not, I will put all your papers on PubPeer in order to obtain their retractions."
In other words: Investigate who I tell you to, or I'll fabricate accusations against you.
This isn't about integrity anymore—it's a direct career threat, an act of blackmail using the same tools designed to uphold research credibility. Suddenly, the investigator becomes the target.
Academic Dishonesty Becomes a Security Threat
The intersection of academic misconduct and security threats came to the forefront in 2024 at the University of Sydney. After the university uncovered widespread contract cheating—where students purchased model answers from external providers—the investigation triggered a bomb threat that forced a campus evacuation. The investigators were looking not only into the students who purchased the answers but also into the providers. And the providers did not like having their racket disrupted.
This incident reveals how seemingly small-scale misconduct can escalate into genuine security threats. The providers of these illicit services, often operating internationally, have evolved from preying on vulnerable students (particularly international ones) to employing increasingly aggressive tactics, including blackmail, intimidation of academic staff, and now threats of violence.

What Happens Next?
This is the dilemma we face in the world of forensic scientometrics and research integrity: How do we ensure that tools meant to protect science aren't twisted into weapons? How do we protect both whistleblowers and legitimate researchers from bad-faith attacks?
Science needs transparency. It also needs safeguards—not just against fraud, but against those who exploit the fight for integrity itself.
"Dear Sylvain, I am waiting for your response. If not, I will put all your papers on PubPeer in order to obtain their retractions."
Oh dear.
Several points arise.
1. PubPeer does not work that way. It is not magic, nor is it a conduit straight into the black heart of nefarious career damage.
To do this, you would first have to (a) find something to say about all said papers, (b) go to the trouble of writing them all up and submitting them to PubPeer, (c) hope the PubPeer moderators had lost their damned minds and forgotten their usual standards of probity, (d) somehow navigate everyone laughing at your transparent ploy, (e) hope that the relevant journals / editors / co-authors et al. actually paid attention to whatever you'd managed to concoct, and then finally (f) wait, probably for one or two years. Long PubPeer records of REAL AND OBVIOUS problems persistently fail to reach the point where any formal research integrity action is taken!
Conclusion: this is not an *actionable* threat, even if it were a *real* threat.
2. I would never, ever, ever answer an email that literally contained the phrase "I don't like these researchers." I doubt anyone else would either. That's not how this works, and it's incredibly suspicious. I don't give a thimbleful of cold lemur piss who you like or don't like, and mentioning it instantly disqualifies you from being a reliable interlocutor.
3. Genuine requests for help look nothing like this. There's almost no information here. Real emails lay out inconsistencies and facts, or enquire about what might be done about an RI problem. They do NOT discuss motives, and they certainly do not say 'I would be happy if the first author loses his job'.
Working in a space like this, there is often a conferral of trust *at some point* - whoever is in the 'whistleblower' role eventually starts to explain who they are, why they know what they know, and investigation becomes somewhat mutual.
In other words: when you bring something like this to any table, the process dictates that you become involved in what essentially amounts to a research project on a local level - me and you, we talk. I know your name and where you work and what you do (and vice versa). This is a terrible position from which to try to hoodwink some poor muffin in research integrity into doing your dirty work for you!
(It is possible, of course, to stay anonymous the whole time during an investigation... but that has happened to me *twice* out of dozens of emails like this. One of those cases is in the All Time Top 10 list; in the other, the researcher ended up being sanctioned for misconduct-adjacent reasons. In both cases, it was very, very, very obvious that there were massive amounts of real problems at play and that the researcher involved was scared to death. I cannot fully express how utterly dissimilar both situations were to someone showing up yelling 'hurt who I say to hurt, or I'll hurt you!')
All of this aside, the point is still well taken. Every system is porous; every system is manipulable. Vigilance is always required at all levels when outcomes are serious - even if this is not a good example of someone adept at exploitation.