Fudging or Fraud?
Where Research Misconduct Becomes a Crime
The term “publish or perish” in academia describes the insidious dilemma that links professional recognition and awards to the publication of peer-reviewed research papers in journals. Competition for results can be cutthroat, and every delay in publication runs the risk that someone else will publish the results you’ve spent months or years achieving. Given that a track record of published articles is required for everything in academic life – from job applications to teaching appointments to grant funding – it is only human nature that academics would be tempted to bolster their track records.
The practices of research misconduct vary in their sophistication and scale. Individual researchers might falsify images of results or engage in statistical manipulation of surveys (including selective reporting and “p-hacking”, re-running analyses until the results cross the threshold of statistical significance and therefore appear more notable or impressive). Grant applicants might add “ghost researchers” – academics with impressive publication histories but who will do no work on the grant – to improve their chances of success. Organisations might turn a blind eye to paper mills or predatory publishers, who churn out massive numbers of publications with little academic value.
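The arithmetic behind p-hacking is worth making concrete. The snippet below is a toy calculation (not drawn from any study discussed here): it computes the family-wise false-positive rate, i.e. the chance that at least one of k independent tests on pure noise comes back “significant” at the conventional 0.05 threshold.

```python
# Family-wise false-positive rate: the probability that at least one of k
# independent tests on pure noise is "significant" at level alpha.
# This is why running many analyses and reporting only the one that "worked"
# (p-hacking / selective reporting) is so effective.

def familywise_error_rate(k: int, alpha: float = 0.05) -> float:
    """P(at least one false positive) across k independent tests at level alpha."""
    return 1 - (1 - alpha) ** k

print(familywise_error_rate(1))   # ~0.05: one honest test
print(familywise_error_rate(20))  # ~0.64: twenty tries at the same question
```

With twenty bites at the apple, a spurious “discovery” is more likely than not, which is why selective reporting is misconduct even when each individual test was performed correctly.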

In criminal law, fraud is generally defined as obtaining some kind of benefit through lying, deception, or manipulation. From that perspective, a researcher who wins a grant or a tenured position because of faked research is committing an act of fraud. Benefits derived by fraud do not necessarily need to be monetary, so those who publish fraudulent or manipulated results for general reputational “acclaim” are just as guilty. That raises an obvious but interesting question – why aren’t more researchers in front of judges for fraud-by-fudging?
The first answer is cultural. For some time, research misconduct has been considered an “administrative” or “disciplinary” problem; that is, researchers who do it might have breached a code of ethics or other acceptable standard of conduct, but are not criminally liable for their conduct. In many cases, papers are retracted or withdrawn, or a warning is added to suggest that the veracity of the research has been questioned. In others, the researchers involved resign their positions, avoiding the reputational or financial implications of their conduct.
The second answer to why researchers don’t face jail time is legal. Fraud prosecutions are incredibly complex, and that complexity is compounded in cases involving scientific research, where the line between established facts, theories, and even educated guesses is razor-thin. In many cases, the researcher involved simply doesn’t have the “intent” to defraud; in other words, they did something deceptive to gain a benefit, but they didn’t mean to gain or receive that benefit dishonestly. In other cases, police and prosecutors aren’t as technically qualified as the researchers they are investigating, making it hard for them to spot acts of fraud.
As a result, instances of criminal prosecution of researchers are few and far between (but they do exist):
In Thailand, a government official was convicted of fraud in 2012 after plagiarising 80% of his PhD thesis, yet still kept his job in the National Innovation Agency;
In the United States, former Assistant Professor Dong-Pyou Han at Iowa State University was jailed for five years in 2015 after he “spiked” samples of rabbit blood in an HIV vaccine study;
In Australia, two doctors – Bruce Murdoch and Caroline Barwood of the University of Queensland – were convicted of fraud in 2016 after an investigation by the University (and later, the Crime and Corruption Commission) found that research on Parkinson’s disease reported in published articles by the pair had not occurred;
In China, three researchers were jailed in 2019 after making illegal modifications to human embryos, which led to the birth of three genetically modified babies.
A third reason is that research misconduct investigations take a long time to resolve, and are often internal to the university and completely confidential. For example, a psychology researcher at Harvard University, Marc Hauser, was originally accused of misconduct in 2007 – the report of his activities was not made public until 2011, and he was allowed to resign without penalty in 2012. Another researcher, Dipak K. Das of the University of Connecticut, was accused of misconduct in 2007. Five years later, a review board published a 60,000-page report alleging 145 instances of fabricated data, which was used to support the termination of Das’ employment (though no criminal charges were laid).
So what can be done to fix the problem of scientific fraud being “everywhere”? Some have suggested that the best way to tackle these cultural deficits in research misconduct is to re-brand serious conduct as “research fraud”, thus calling the actions exactly what they are:
…fabrication, falsification or deception in performing or reporting research results. Research fraud deceives employers, funders, the research publishers and readership (and ultimately the general public) by attempting to publish research that is misleading, has been fabricated in some way, has not even been conducted in the first place or has already been published elsewhere.
This has prompted the counter-argument that a focus on “research fraud” runs the risk ‘of giving the false impression that dubious practices falling outside the legal regulation “do not count”’.
Some have also suggested that integrity investigations should be made public. In that vein, the growing field of “forensic scientometrics” (FoSci) has emerged as a discipline to counter research fraud. That field engages in ‘quantitative analysis of scientific publications and research outputs in this larger context’; in other words, looking for anomalies in scientific publications. Although it remains an emerging field, FoSci has the potential to help investigators spot instances of fraud more easily.
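One concrete example of the kind of anomaly-hunting such a field performs is the GRIM test (granularity-related inconsistency of means), which checks whether a reported mean is even arithmetically possible given the sample size. The sketch below is a minimal illustration of the idea, not part of any particular investigation described here.

```python
def grim_consistent(reported_mean: float, n: int, decimals: int = 2) -> bool:
    """GRIM test: could a mean of n integer-valued responses, rounded to
    `decimals` places, actually equal `reported_mean`?

    The true mean must be s/n for some integer sum s, so we check the
    integer sums nearest to reported_mean * n. (Caveat: Python's round()
    uses banker's rounding, which matters only at exact .5 boundaries.)
    """
    target = round(reported_mean, decimals)
    nearest = round(reported_mean * n)
    return any(round(s / n, decimals) == target
               for s in (nearest - 1, nearest, nearest + 1))

# A mean of 3.44 from 25 integer responses is possible (86 / 25 = 3.44),
# but a mean of 5.19 from 28 responses is not: 145/28 = 5.18, 146/28 = 5.21.
```

A reported statistic that fails this check cannot have come from the data as described, which is exactly the kind of red flag that can trigger a closer look at a paper.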
Others have indicated that prosecutors and law enforcement should focus on data in investigations involving academia. Again, FoSci has the potential to help these government bodies translate complex concepts into legally admissible evidence, as well as to sort through a vast number of data points for evidence of potential illegality.
Irrespective of the approaches taken, research fraud will continue to pose a problem for universities and the higher education sector as a whole. There is much to be said for the threat of potential criminal prosecution, given that it can shape researcher behaviour ex ante and appropriately punish malfeasance ex post. However, to best utilise that mechanism to encourage honesty and transparency in research, we need to develop a deeper understanding of research fraud, and that requires transparency, publicity and – dare we say it – a little bit of infamy.




Full transparency isn't even enough, as journals and ethics committees and sometimes reviewers are just as complicit in my experience. I've been involved with three cases, two obvious, one less so.
One of these wrote about an outbreak of disease A, but genome data shows it to be a more common organism B. I first mailed the authors (no response), then raised this with the editor of the paper, who escalated to the head editor of the journal; it disappeared for 3 months, and came back with an ethics committee referral. They think that the original authors and I should write a letter and let the audience decide, i.e. shifting the burden of proof off the journal and onto the reader, essentially making a case that journals aren't actually useful for publishing factual data.
Second case, a bit more nuanced. A partial genome is presented that is statistically impossible: it covers 30% of a genome that coincidentally contains 98% of all critical diagnostic loci. Secondly, it has reads that are identical to high-coverage samples in their study, suggesting cross-talk or spiking. We were not allowed to raise this problem with the journal, because collaborators were involved. So we are shielding "friends" from clearly dubious data that is the foundation of their paper.
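To show why I call that "statistically impossible", here's a back-of-the-envelope binomial calculation with purely illustrative numbers (the real locus counts aren't given here): if 30% of a genome is recovered at random, each diagnostic locus has roughly a 0.3 chance of being covered, so hitting 98 out of 100 of them by chance is:

```python
from math import comb

def binom_tail(n: int, k: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p): chance of covering at least k of n loci."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Illustrative numbers only: 100 diagnostic loci, each covered with
# probability 0.3 if recovery were uniform at random.
p = binom_tail(100, 98, 0.3)
# p is on the order of 1e-48, i.e. effectively impossible without
# targeted enrichment, cherry-picking, or spiking.
```

Any honest random 30% recovery would cover roughly 30 of those loci, not 98, which is the whole point.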
Last, a paper publishes disease A from region B. They also show several other signals in region B, but only partial genome recoveries for disease A. It only took a day to identify that they used the wrong software to identify the partial genomes, and the wrong software to visualize the accuracy of the calls, because in fact there was no signal, just incorrect use of software (like throwing darts, having everything land on the wall around the board, and still calling it a bullseye). Secondly, their sample with full genome recovery has human DNA. Mapping that human DNA revealed it is not from region B at all, but from the other side of the world. Given that their entire story hinged on the presence of disease A in region B (it's in their title, intro and conclusion), you'd expect the authors to acknowledge the issue or the journal to pull it.
However, first we wanted to include it in a supplement for our own paper, and were told by the chief editor to remove it. Then we wanted to submit a matters arising, but were told that they didn't see value in it. After a second group identified the same issue and raised it with the chief editor, we were asked to quickly send our matters arising manuscript. After that, we were left in the dark for more than a year. This year we finally got a response: one reviewer argued that the human DNA was likely just contamination, even though it contained a fully covered 18x mitochondrial genome, but then suggested the disease was valid. Another reviewer hesitantly agreed with us but essentially asked for months of additional work from our end to prove we were correct. The authors themselves redid their libraries and now did find human DNA from region B, mixed with region C, so they could not determine the validity. They also redid the low-coverage ones, and were only able to obtain 600 reads across the 3 samples, and all of them were in conserved genes.
Yet somehow they argued that this did not matter at all, because we didn't have an ethical agreement with the native population of region B to even be allowed to look at the human DNA. Lastly, the journal gave us a week to respond, and that was a hard deadline. Now it is going to be published as an opinion piece rather than a document attached to a retraction.
Overall, I see no accountability at any level. I see protectionism from top to bottom. I strongly doubt that a system like this will last much longer, and only see a way forward with a fraud database: something that funding bodies can just type a name into and see how many red flags an author has. Because the system doesn't seem to be able to self-correct. The fraudulent authors want to save face, the supervisors don't want the scandal, the university doesn't want the lawsuit, the funders don't want the label of being unable to tell the difference between good and bad projects (plus they need to fill their quotas), the journals don't want to lose their income and status despite not fulfilling their core function, and no one indirectly involved wants to be associated with drama, so no one will stick their neck out to say anything.
Gather round kids and grab your muskets, the Qui Tam Clan ain't nuthin' to fuk wit.
Qui Tam prosecution is a little-known provision still available in some jurisdictions (notably the US) which allows private individuals to bring and pursue a fraud case themselves on behalf of the government; in the US this takes the form of a civil suit under the False Claims Act. It is sometimes referred to as "private prosecution".
The relevance here is that you can skip over all the challenges of getting the police (or some other agency) to understand and prosecute scientific fraud by just directly prosecuting scientific fraud yourself.
Even better, if your prosecution is successful, you win a cut of whatever civil penalties are recovered.
The one caveat is that this provision only lets you pursue people defrauding the government, not fraud in general. But given that practically all research is funded (in part or in whole) by government grants, pretty much any fraudulent paper you identify likely gives you grounds to sue.
At some point, you FoSci folks / sleuths / whatever are going to realize there's an upper bound on the effectiveness of any system based on reporting things to an authority. Doesn't matter if it's a research integrity office, a district attorney, a regulatory body, etc.
At a fundamental level, the premise of an office where "you report it, we do something about it" has pretty much never worked well in any context. Anyone who attempts to navigate such a system either gives up or learns so much along the way, just to get marginal success, that they've effectively done a law degree without getting one.
What I'm saying is, some of you are going to unwillingly become fake lawyers along your journey. You should embrace that fact now. Living in denial about it just means being willfully blind to a whole world of highly effective levers you could be pulling.
The court registry is a path to certain abilities some consider unnatural.