In a recent post, I examined what’s happened since Mayfield & Levine (2010) rebutted the idea that one can infer contemporary coexistence mechanisms just by plotting coexisting species on a phylogeny. I concluded that M&L were having only modest influence.
At the time, I didn’t realize that, compared to other rebuttal efforts, M&L was if anything more successful than average. Writing recently in Ecosphere, Banobi et al. (open access) review the impact of rebuttals on subsequent citation of seven prominent papers in fisheries ecology. They find that there’s basically no impact: rebutted papers continue to be cited many, many times more often than the rebuttals, aren’t cited any less often post-rebuttal than you’d have expected if they’d never been rebutted, and are rarely cited critically, even by papers that also cite the rebuttals. Worst of all, sometimes the rebuttals are cited as supporting the papers they rebutted! And these results can hardly be attributed to the rebuttals not being noticed, as they hold even for a paper (Worm et al. 2006 Science) that attracted numerous rebuttals in the same journal, and that was superseded by a subsequent “consensus” paper by the same authors in the same journal.
The results of Banobi et al. also accord with this old post analyzing citation rates of papers supporting the zombie idea of the intermediate disturbance hypothesis (IDH), vs. papers rebutting that idea. Citations of rebutted IDH papers have continued to accumulate as if the rebuttals were never written, while the rebuttals are cited more than an order of magnitude less often than the rebutted papers.
Put all that together with some other points noted by Banobi et al., and you get a depressing picture of the ability of the scientific literature to correct itself after an article is published:
- In marine biology, fully 24.2% of all citations do not clearly support the assertions they were cited in support of (Todd et al. 2010)
- Retracted biomedical articles are frequently cited, and mostly cited as correct, even years after the retraction, and even though the MEDLINE database explicitly links corrections and retractions to the original article (Budd et al. 1999 Bull. Medical Library Assoc.)
- Biomedical articles which are superseded by a corrected article by the same authors continue to be cited more often than the correction even years later (Peterson 2010 J. Medical Library Assoc. 98:135-139)
- UPDATE: Commenter Mike Fowler points out another paper I should’ve known about, Todd et al. 2007, showing that ecologists’ citation practices are exactly as terrible as those of marine biologists. Todd et al. 2007 also reviews work from other fields giving the same results, and points out a study of the propagation of misprinted citations (Simkin and Roychowdhury 2003 Complex Syst. 14:269) which inferred that as many as 80% of cited papers weren’t actually read by the authors citing them.
All this explains why I’m very leery of relying on post-publication peer review of any sort as a substitute for pre-publication review (complement, yes; substitute, no). The data show that scientists rely on pre-publication peer review, to the exclusion of post-publication review. Once something has passed pre-publication peer review, the scientific community mostly either accepts it uncritically, ignores it entirely, or else miscites it as supporting whatever conclusion the citing author prefers (UPDATE: or perhaps most commonly, cites it without reading it!).
So for all its flaws, pre-publication peer review is our only hope for filtering out flawed ideas and errors. Unteaching ecology is hard, so we need to get things right before publication. For the vast majority of papers, pre-publication review is the only time they’re exposed to a critical reading.

Advocates of post-publication peer review often miss this, arguing that, post-publication, papers can be exposed to many “reviewers” rather than merely two or three. That’s wrong. Post-publication, papers are exposed to many readers, but the evidence shows that those readers are not critical in the slightest, at least not critical enough to have any detectable effect on citation patterns. I wish it were otherwise, but it’s not, and I see no way to change it. There may be few incentives to do pre-publication peer review, but at least there are professional norms obliging each of us to do it. There are neither incentives nor norms obliging us to do post-publication peer review. For better or worse, the only time most of us read like reviewers is when we’re acting as reviewers.

Plus, pre-publication is the only time authors are obliged to pay attention to criticism. If anything, I’d like to see pre-publication peer review get even more critical than it already is. For instance, I’d like to see peer reviewers get in the habit of checking and correcting authors’ citations (I actually do this as a reviewer, but most reviewers don’t).
It’s not easy to criticize the work of others, because that often seems like criticizing the people who did the work, and nobody but a jerk enjoys criticizing other people. Pre-publication peer review is an institutionalized practice that gets around this very human desire to want to think well of one’s peers, and to have them think well of you. That’s why, as frustrated as I (and probably all of you) often get with pre-publication peer review, I’d like to see it reformed rather than replaced. And that does seem to be happening.
p.s. Before anyone points it out, yes, I’m well aware of cases where post-publication review rapidly corrected the scientific record. The data say that such cases are the exception, not the rule. And much as I might wish it otherwise (and I do wish it otherwise), I don’t see much reason to think that the advent of blogs–even blogs as super-duper-awesome as this one–will make such exceptional cases into the rule. Although perhaps we’re about to find out…
HT Eric Larson for pointing out Banobi et al. in comments on a recent post. I’m embarrassed I didn’t know this paper and hadn’t already blogged about it!