Doctored Knowledge
Doctored Images, Corrupted Knowledge: How manipulated images in scientific papers can feed AI and amplify harm
Scientific sleuth René Aquarius (a co-author of this blog) has spent the past year raising concerns about “problematic images” in almost 100 papers, recently prompting a U.S. health sciences university to conduct a “review” of the allegations concerning one of its academics. The potential damage of such cases extends far beyond academic embarrassment, and the issue of image manipulation is much broader than any single researcher. In the age of artificial intelligence, doctored images pose an additional risk: accelerating disinformation.

Why This Matters Beyond Academia
Most of the articles flagged in this case focus on preclinical studies of hemorrhagic stroke, a severe condition that causes death and disability in patients who should be enjoying time with family, friends, and work. In health sciences, research must be trustworthy to develop effective solutions for patients: solutions that aim to reduce deaths and improve the quality of life for survivors.
People facing the terrifying aftermath of a stroke deserve better than "solutions" built on sand. The science of stroke treatment must rest on reliable data. If it doesn't, we waste time, money, and, most crucially, lives.
The ripple effects are extensive:
Stroke patients and their families deal with the consequences of this condition every single day. If researchers are producing erroneous work, they aren't contributing solutions; they're creating obstacles. When the scientific community fails to detect these problems swiftly, we compound the harm.
Society funds research through taxes and deserves transparency about how those investments are used. When public money supports erroneous work, it's a betrayal of public trust.
Scientists invest time, effort, and money to replicate or build upon these results. Careers are sometimes built on flawed foundations, creating cascading reproducibility problems that can take years to untangle.
Why These Problems Persist
Detection and reporting consume enormous resources. In this case, the research was published in well-respected journals, and two previous retractions for image-related issues should have been red flags well before the current concerns were raised.
Detecting, documenting, and reporting these problematic papers cost René a year of early mornings of unpaid investigative work. This imbalance is unsustainable: the output of a single group can take months to investigate.
What should be done? Swift detection, swift publisher action, swift institutional response. Use only trusted research when making critical decisions. While we'd prefer researchers simply not to publish erroneous work, accountability must shift to publishers to retract, institutions to investigate, and funding agencies to conduct proper due diligence.
The AI Amplification Problem
Here's where the story gets more dangerous: erroneous articles can end up in systematic reviews and meta-analyses, which repackage them as high-level evidence.
Instead of slowing down to make sure each article included in a systematic review is properly assessed, researchers and companies are more interested in speeding things up. The latest trend is to adopt AI to accelerate the systematic review workflow, and companies are now selling products that let AI do the heavy lifting. The promise? “Systematic reviews in hours, not months.” However, increased speed doesn't automatically produce trustworthy output. A hundred problematic papers may seem negligible in the vast ocean of academic literature, yet every one of them can serve as input to a systematic review, whether screened manually or ingested by an AI tool.
Things get even scarier when you realize that paper mills are probably also using AI to write more papers in less time. We are on the brink of a perfect storm: articles generated by AI, interpreted by AI, and meta-analyzed by AI. This isn't a more efficient path to improving humanity; it's a faster route to the erosion of reliable knowledge.
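To make the failure mode concrete, here is a minimal sketch in Python of such a speed-first screening loop. Everything in it is hypothetical (the names, the stubbed relevance check); it is not any vendor's actual pipeline. Note what the loop never asks: whether a paper's figures are trustworthy.

```python
# Hypothetical sketch of an AI-accelerated systematic review screen.
# The "LLM" is stubbed out; a real product would call a language model.
from dataclasses import dataclass

@dataclass
class Paper:
    title: str
    abstract: str
    flagged_on_pubpeer: bool = False  # integrity signals exist, but are never consulted

def llm_relevance_screen(abstract: str) -> bool:
    """Stand-in for an LLM call that judges topical relevance only."""
    return "hemorrhagic stroke" in abstract.lower()

def build_evidence_base(papers: list[Paper]) -> list[Paper]:
    # Optimized for throughput: include everything the model deems relevant.
    # Missing entirely: image forensics, retraction checks, sleuth reports.
    return [p for p in papers if llm_relevance_screen(p.abstract)]

corpus = [
    Paper("Study A", "Outcomes after hemorrhagic stroke in a rat model."),
    Paper("Study B", "Hemorrhagic stroke treatment effects.", flagged_on_pubpeer=True),
]
print([p.title for p in build_evidence_base(corpus)])  # both pass: doctored data becomes 'evidence'
```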
The Bigger Picture
We're both deeply concerned about the state of academic communication and trusted scholarship, but we're not advocating for the dismantling of the system. Instead, we're committed to building better, more trusted scientific practices throughout the ecosystem, and we believe that change is not only possible but necessary.
The risk is real: we're slowly moving toward a society where AI generates articles, AI peer-reviews articles, AI interprets those articles, and AI conducts meta-analyses of AI-generated content. At some point, human expertise is entirely squeezed out of the loop.
This is why forensic scientometrics isn't just academic housekeeping – it's infrastructure protection for the age of AI. Every compromised paper that slips through becomes a potential source of disinformation in countless AI applications, from medical decision support to research synthesis tools.
One bad actor can do immense damage, but what happens when systematic fraud meets machine amplification? Research integrity isn't academic idealism anymore — it's securing critical infrastructure.
Bonus - More on Forensics
Using our Taxonomy of Scientific Manipulation framework, we can systematically map cases to understand how a process may be manipulated. To stop this activity, we need to understand who is involved, where it happens, and how it occurs. The why is interesting, but that can wait. For example (sketched in code after this list):
Who: Prolific scholar with institutional backing
Where: Established journals with peer-review processes
How: Systematic alterations through image falsification
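As a minimal sketch of what such a mapping could look like in code (the schema and field names are our illustration, not the published framework's):

```python
# Hypothetical representation of a who/where/how case mapping.
from dataclasses import dataclass

@dataclass
class ManipulationCase:
    who: str        # actor profile
    where: str      # publication venue characteristics
    how: list[str]  # manipulation techniques observed

case = ManipulationCase(
    who="Prolific scholar with institutional backing",
    where="Established journals with peer-review processes",
    how=["systematic image falsification"],
)
print(case)
```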
Image manipulation has drawn attention from both named and anonymous sleuths working to uncover these inconsistencies. Evidence of the scope can be seen in PubPeer discussions, where René has flagged numerous concerns across this researcher's work.
Want to learn more about image manipulation detection? Check out these trainings from Jana Christopher.
In six months, the traditional 'sleuths' will be out of their jobs: new, AI-generated images will be unique and will no longer be caught by image comparisons. Sleuths can still analyze old papers, but going forward, it will be a whole different ballgame.
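To see why uniqueness defeats detection, here is a minimal sketch of the kind of image comparison this refers to: perceptual hashing, using the third-party Pillow and ImageHash packages (the file paths are hypothetical). Duplicated or lightly edited figures yield nearly identical hashes; a freshly generated image matches nothing in the corpus.

```python
# Requires: pip install Pillow ImageHash. Paths below are hypothetical.
from PIL import Image
import imagehash

def looks_duplicated(path_a: str, path_b: str, threshold: int = 5) -> bool:
    """Flag likely figure reuse: a small Hamming distance between
    perceptual hashes survives recompression and light edits."""
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    return hash_a - hash_b <= threshold  # ImageHash's '-' is Hamming distance

# A unique, AI-generated fake produces a large distance to every known
# image, so this comparison never fires -- exactly the problem above.
# print(looks_duplicated("fig2_paper1.png", "fig4_paper2.png"))
```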