In 6 months, the traditional 'sleuths' will be out of their jobs. The new, AI-generated images will be unique and can no longer be caught by image comparisons. Sleuths can still analyze old papers, but going forward, it will be a whole different ballgame.
‘Traditional’ methods of sleuthing may be outdated, but that does not mean there won’t be new ways of checking. Yes, if those checking on science do not update their methods, they may be out of their (to date unpaid) jobs. But new methods of checking are emerging as well.
I don't even think it will matter; there are numerous traditional papers with "obvious" issues that take years for editors to act on. The system is so slow to self-correct that AI cannot really make it worse.
Just as an example, we dealt with a paper that claims to have found a pathogen in archeological remains from Brazil... however, the haplotype and SNPs from the human DNA suggest it is a European individual. The editor received a full report, but it took almost a year just to attach a tag saying "we have been notified of issues". (https://www.nature.com/articles/s41586-023-06965-x)
Now we are almost a year further on, and nothing has happened. And I have a couple more cases like this. Editors don't want to acknowledge they let something slip, and journals don't want the bad press of a retraction. Authors either knowingly published incorrect data, or are too proud or embarrassed to admit it. AI is not a new disease; it's just another symptom of a broken system.
"The system is so slow to self-correct that AI cannot really make it worse." You think? Imagine AI freely roaming the various publicly available databases, making up its own "raw data" as the crooks demand it, and then producing fake figures that nobody can tell apart, because they are unique. Then it writes the whole thing in plausible language. And the paper mills, as usual, sell it to customers. If the journal asks for the source data, hey presto, it can be produced. If they ask for additional histo figures, not just the representative ones in the paper, hey presto, AI will generate them.

IT CAN GET WORSE and it WILL. Quickly. Journals will be inundated with fake material that looks better and better, visually and stylistically. I talked to one major journal's editor, and their possible solution starts with actually checking on the authors of a paper, talking to them on Zoom. That way at least they can see that there are humans on the other end. (Of course, those humans can still be AI-using cheaters as well.)
I agree with you Csaba. It's going to be very tricky regarding newly submitted manuscripts.