This is old (1982!), but I wasn’t aware of it before and I’m guessing you weren’t either. Peters & Ceci took a dozen psychology papers published in leading, highly selective journals, substituted fake author names and affiliations, and resubmitted the papers to the same journals 18–32 months after the originals were published. Only three of the twelve papers were detected as duplicates, and eight of the remaining nine were rejected, in many cases on grounds of serious methodological flaws.
We’ve talked in the past about other approaches to quantifying the randomness in pre-publication peer review. And I’ve argued that pre-publication review is, like democracy, the worst system except for all the others. So I’m not surprised at these results, or even hugely bothered by them. The grounds cited for rejecting those eight papers are somewhat troubling, though: if papers that passed review the first time around were later rejected for serious methodological flaws, that suggests limiting pre-publication review to judgments of “technical soundness” would not actually make it more reproducible.
But it’s a small dataset, from a different field, so I wouldn’t hazard a guess as to what the data would look like if someone were to do the same experiment in ecology. It would also be good to try it with PLOS ONE as well as with selective journals. And to try it with rejected papers as well as accepted ones, though of course obtaining a random sample of rejected papers would be tricky.
Anyone want to try the experiment? It wouldn’t be that hard…
(HT Leonardo Saravia, via Twitter)