
Journal editors and peer reviewers are being flooded with AI-generated papers that are almost impossible to detect.

Last summer, Peter Degen’s postdoctoral supervisor came to him with an unusual problem: One of his papers was being cited too much. Citations are the currency of academia, but there was something odd about these. Published in 2017, the paper had assessed the accuracy of a particular type of statistical analysis on epidemiological data and had received a respectable few dozen citations in other research papers over the years. Now it was being referenced every few days, hundreds of times in total, placing it among the most cited papers of his career. Another professor might be thrilled. Degen’s adviser asked him to investigate.

Degen, a postdoctoral researcher at the University of Zurich Center for Reproducible Science and Research Synthesis, found that the citing papers all followed a similar pattern. Like the original, they were analyzing the Global Burden of Disease study, a publicly available dataset compiled by the Institute for Health Metrics and Evaluation at the University of Washington. But they were using the dataset to churn out a seemingly endless supply of predictions: about the future likelihood of stroke among adults over 20 years old, of testicular cancer among young adults, of falls among elderly people in China, of colorectal cancer among people who eat minimal whole grains, of disease X among population Y, and so on.

...

Yeah, I'm pretty concerned. I'm not an editor, but as an author I feel that my papers are getting shorter attention from editors, and thus quicker, less informative, less engaged decisions.


Just read it. I'm pretty concerned, too. Yet, maybe this is just the necessary catalyst to help the rotten publish-or-perish system break down faster. But what's the alternative?

Selfishly, I just hope I get tenure before the collapse happens and my last 20 years of hard work/publications become worthless~~


https://arstechnica.com/science/2026/05/preprint-server-arxiv-will-ban-submitters-of-ai-generated-hallucinations/ talks about the same problem (and potential solution) on arXiv.

Dietterich also notes that all authors of a manuscript are responsible for its content. So, if they carelessly submit material generated by an AI that violates these guidelines—Dietterich cites “inappropriate language, plagiarized content, biased content, errors, mistakes, incorrect references, or misleading content”—then they’re responsible, not the AI. Should violations be discovered, all of the manuscript’s listed authors will now receive a one-year submission ban, and any future manuscripts will only be accepted after they’ve been through peer review by a journal.

But also... more worrisome:

The unfortunate problem is that, like most other things, the system can be gamed—people could submit flawed content that lists people as authors who have never been involved. Fortunately, its moderation system includes an appeal process.

Go ahead, get your science nemesis on the arXiv black list~~


imo, even more worrisome than submitting a fake article under someone else's name is harassing someone by continually flagging their work as AI-generated content, given that we don't have reliable detection methods


hope you get tenure soon. the system really does feel rotten and at a breaking point
