In praise of pre-publication peer review (because post-publication review is hopeless) (UPDATED)

In a recent post, I examined what’s happened since Mayfield & Levine (2010) rebutted the idea that one can infer contemporary coexistence mechanisms just by plotting coexisting species on a phylogeny. I concluded that M&L were having only modest influence.

At the time, I didn’t realize that, compared to other rebuttal efforts, M&L was if anything more successful than average. Writing recently in Ecosphere, Banobi et al. (open access) review the impact of rebuttals on subsequent citation of seven prominent papers in fisheries ecology. They find that there’s basically no impact: rebutted papers continue to be cited many, many times more often than the rebuttals, aren’t cited any less often post-rebuttal than you’d have expected if they’d never been rebutted, and are rarely cited critically, even by papers which also cite the rebuttals. Worst of all, sometimes the rebuttals are cited as supporting the papers they rebutted! And these results can hardly be attributed to the rebuttals not being noticed, as they hold even for a paper (Worm et al. 2006 Science) that attracted numerous rebuttals in the same journal, and that was superseded by a subsequent “consensus” paper by the same authors in the same journal.

The results of Banobi et al. also accord with this old post analyzing citation rates of papers supporting the zombie idea of the intermediate disturbance hypothesis (IDH), vs. papers rebutting that idea. Citations of rebutted IDH papers have continued to accumulate as if the rebuttals were never written, while the rebuttals are cited more than an order of magnitude less often than the rebutted papers.
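(To make the logic of these comparisons concrete, here is a minimal sketch, in Python, of the sort of before-and-after trend comparison both analyses rest on. It is not Banobi et al.’s actual code, and every number in it is hypothetical; the point is just that a rebuttal “worked” only if post-rebuttal citations fall below the pre-rebuttal trend.)

```python
# Minimal sketch: fit a linear trend to a paper's pre-rebuttal citation
# counts, extrapolate it past the rebuttal, and ask whether the observed
# post-rebuttal citations fall below it. All numbers are hypothetical.
import numpy as np

years = np.arange(2000, 2012)
annual_cites = np.array([5, 12, 20, 25, 31, 38, 40, 44, 47, 52, 55, 61])
rebuttal_year = 2006  # hypothetical rebuttal publication date

pre = years < rebuttal_year
slope, intercept = np.polyfit(years[pre], annual_cites[pre], 1)
expected = slope * years[~pre] + intercept  # trend if the rebuttal never existed

# A rebuttal with bite would produce a positive deficit; the studies
# discussed above find deficits indistinguishable from zero.
deficit = expected - annual_cites[~pre]
print(f"mean post-rebuttal citation deficit: {deficit.mean():+.1f} per year")
```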

Put all that together with some other points noted by Banobi et al., and you get a depressing picture of the ability of the scientific literature to correct itself after an article is published:

  • In marine biology, fully 24.2% of all citations do not clearly support the assertions they were cited in support of (Todd et al. 2010)
  • Retracted biomedical articles are frequently cited, and mostly cited as correct, even years after the retraction, and even though the MEDLINE database explicitly links corrections and retractions to the original article (Budd et al. 1999 Bull. Medical Library Assoc.)
  • Biomedical articles which are superseded by a corrected article by the same authors continue to be cited more often than the correction even years later (Peterson 2010 J. Medical Library Assoc. 98:135-139)
  • UPDATE: Commenter Mike Fowler points out another paper I should’ve known about, Todd et al. 2007, showing that ecologists’ citation practices are exactly as terrible as those of marine biologists. Todd et al. 2007 also reviews work from other fields giving the same results, and points out a study of the propagation of misprinted citations (Simkin and Roychowdhury 2003 Complex Syst. 14:269) which inferred that as many as 80% of cited papers weren’t actually read by the authors citing them.

All this explains why I’m very leery of relying on post-publication peer review of any sort as a substitute for pre-publication review (complement, yes; substitute, no). The data show that scientists rely on pre-publication peer review, to the exclusion of post-publication review. Once something has passed pre-publication peer review, the scientific community mostly either accepts it uncritically, ignores it entirely, or else miscites it as supporting whatever conclusion the citing author prefers (UPDATE: or perhaps most commonly, cites it without reading it!).

So for all its flaws, pre-publication peer review is our only hope for filtering out flawed ideas and errors. Unlearning flawed ideas is hard, so our only hope is to get it right pre-publication. For the vast majority of papers, pre-publication review is the only time they’re exposed to a critical reading. Advocates of post-publication peer review often miss this, arguing that, post-publication, papers can be exposed to many “reviewers” rather than merely two or three. That’s wrong. Post-publication, papers are exposed to many readers, but the evidence shows that those readers are not critical in the slightest, at least not critical enough to have any detectable effect on citation patterns. I wish it were otherwise, but it’s not, and I see no way to change it. There may be few incentives to do pre-publication peer review, but at least there are professional norms obliging each of us to do it. There are neither incentives nor norms obliging us to do post-publication peer review. For better or worse, the only time most of us read like reviewers is when we’re acting as reviewers. Plus, pre-publication is the only time authors are obliged to pay attention to criticism. If anything, I’d like to see pre-publication peer review get even more critical than it already is. For instance, I’d like to see peer reviewers get in the habit of checking and correcting authors’ citations (I actually do this as a reviewer, but most reviewers don’t).

It’s not easy to criticize the work of others, because that often seems like criticizing the people who did the work, and nobody but a jerk enjoys criticizing other people. Pre-publication peer review is an institutionalized practice that gets around this very human desire to want to think well of one’s peers, and to have them think well of you. That’s why, as frustrated as I (and probably all of you) often get with pre-publication peer review, I’d like to see it reformed rather than replaced. And that does seem to be happening.

p.s. Before anyone points it out, yes, I’m well aware of cases where post-publication review rapidly corrected the scientific record. The data say that such cases are the exception, not the rule. And much as I might wish it otherwise (and I do wish it otherwise), I don’t see much reason to think that the advent of blogs (even blogs as super-duper-awesome as this one) will make such exceptional cases into the rule. Although perhaps we’re about to find out.

HT Eric Larson for pointing out Banobi et al. in comments on a recent post. I’m embarrassed I didn’t know this paper and hadn’t already blogged about it!

29 thoughts on “In praise of pre-publication peer review (because post-publication review is hopeless) (UPDATED)”

  1. This is also why I’m in favor of a preprint culture. Yes, pre-pub closed peer review can catch SOME things…but not as efficiently as having preprints out there for the world to see BEFORE pieces are accepted by journals, the final arbiters (as it were).

    • Certainly can’t hurt. How much it will help, I’m not sure. You really need a strong culture of reading preprints *as if you were reviewing them*. That culture already exists in some fields, like economics, where exchange of pre-prints (“working papers”) is a long-standing important practice. But how you create that culture where it doesn’t already exist, I have no idea. And I’m pessimistic that it’s possible.

      I think it’s more likely that ecologists will take up preprint sharing but will read those preprints like we read published journal articles. People just read the abstracts, or skim the results, looking to “get the gist”, or looking for ideas they can use in their own work, or etc., rather than reading carefully and critically, the way we read as reviewers. After all, there’s no incentive to read preprints any other way.

      I’m not saying people will start citing preprints as if they’d been peer reviewed, or putting preprints on their cv’s as if they were peer-reviewed papers. I just don’t think people are likely to read preprints in the way you need to in order to catch flaws and mistakes. But as you say, it wouldn’t hurt to try.

  2. I’m pretty much in agreement with all that’s said above, including the benefits of pre-publication dissemination and the unlikelihood of it making much difference.

    What I really want to know is why Jeremy picked on the poor Marine Biologists, when it looks like Ecologists are just as guilty. At least, according to Oikos (where Jeremy allegedly does some editing…). I know at least one of my own papers has been cited in a way that would most generously fall into the “that’s not at all what we said” category.

    • Thanks for the link to Todd et al. 2007. I wasn’t aware of that paper but should’ve been. You get the background research you pay for on this blog. 😉 Todd et al. 2007 confirms my implicit assumption (which I’m confident was clear enough in the post) that ecologists are no better than marine biologists in terms of our citation practices.

      I know you were just teasing me about allegedly editing at Oikos. But just FYI, I’m no longer on the editorial board at Oikos. I resigned from the board as well as the Oikos Blog when I started this blog. And just so no one reads anything into that decision, here’s why I resigned from the board. I’d been on the board at Oikos a long time, and it felt like it was time to give someone else a chance. I needed some time to be semi-selfish and spend less time reviewing and editing (I’m living off my accumulated “PubCred balance” for a little while). And I was afraid it might be difficult to avoid getting sucked back into involvement with the Oikos Blog if I remained an editor. In no way should my decision to resign from the Oikos editorial board be interpreted as reflecting negatively on Oikos or anyone involved with Oikos.

  3. The data presented in this post are excellent, but provide no support for the conclusions drawn. Thanks for an excellent summary of the data demonstrating that the publication of rebuttal papers rarely impacts future citation rates.

    It seems the data also support the conclusion that pre-publication peer review frequently fails to reject inadequately supported conclusions, and further that the process is not as reliable an indicator of scientific soundness as most scientists think it is. Hardly great praise.

    The problem of course is that most of pre-pub peer review has nothing to do with the question of filtering out flawed ideas. In the current system, with PLoS ONE perhaps the only exception, peer review has the double task of assessing “importance/potential impact” as well as “validity.” To a first approximation you might say it focuses only on “potential impact”, which necessitates pointing out the more obvious flaws in reasoning that would hurt the impact of the paper just as much as a boring topic or a trivial conclusion. Despite this emphasis of pre-publication review on what is “important”, it does only a middling job of it, and probably at considerable cost to publishing “what is valid”.

    Post-publication review does an excellent job of the “what is important” question, including in the examples you present, in which rebuttals draw more attention to a paper. Hardly surprising that “importance” is better assessed by the many rather than the few, while technical validity is better assessed by the few. Why is it then that we place greatest value on journals that place the greatest fraction of the effort on the “importance” side of the peer-review duty, and least emphasis on journals that dedicate that effort entirely to the “what is valid” side?

    Why are both tasks part of pre-publication review anyway? The answer has nothing to do with science and everything to do with economics: publishing used to be expensive. From the era of the printing press through the era of television, publishers could only afford to publish stuff that got the most impact. In an era of internet publishing, it’s simply a spandrel. I again think it’s mostly economics forcing us slowly to the new equilibrium, and we will get there, but we aren’t there yet.

    Post-publication review is mostly holding up its end of the bargain, identifying what is important. Pre-publication review is spending most of its energy competing with that objective, and perhaps doing an inadequate job on the other side, which seems to have motivated this post. What did I miss?

    • Sorry Carl, going to mostly disagree with you here. Pre-publication review is not failing to hold up its end of the bargain, has not outlived its usefulness in this new online world, is not interfering in any important way with post-publication review, and should not be abandoned.

      Yes, pre-publication review also misses flaws, at an unknown rate. But given how many flaws referees catch in a typical review, I don’t think pre-publication review does a poor job of catching flaws, and I certainly think it does a much better job than post-publication review. Unless you have data refuting this, I think the conclusion of the post stands.

      Do pre-publication reviewers focused on “importance” thereby do a poorer job than they should at catching flaws? I don’t know but I doubt it. I suppose you could try to study this by looking at error rates in PLoS ONE papers vs. papers in other journals, or (better) by submitting lots of papers with known errors to both PLoS ONE and other journals.
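      (For concreteness, here’s a minimal sketch, with purely invented counts, of how that second study design could be analyzed: compare the fraction of seeded errors caught by reviewers at the two journals with a simple 2×2 test.)

      ```python
      # Minimal sketch of analyzing a hypothetical "seeded errors" experiment:
      # submit manuscripts containing known errors to two journals and compare
      # the proportions of errors that reviewers catch. All counts are invented.
      from scipy.stats import fisher_exact

      caught_a, missed_a = 40, 10  # hypothetical: journal A catches 40 of 50 errors
      caught_b, missed_b = 28, 22  # hypothetical: journal B catches 28 of 50 errors

      table = [[caught_a, missed_a],
               [caught_b, missed_b]]
      odds_ratio, p_value = fisher_exact(table)
      print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
      ```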

      Yes, pre-publication evaluations of “importance” don’t necessarily do a great job of predicting which papers will be most cited. But since the majority of citations are perfunctory (see the papers linked to in the post), and a non-trivial fraction of the rest are miscitations, all citation metrics are pretty poor measures of “impact”, “influence”, or any other aspect of “quality”. And as far as I’m aware, all studies of how poor pre-publication review is at predicting “impact” are studies of how poor pre-publication review is at predicting citations. If you know of studies of this issue based on more valid metrics of importance, I’d be interested to know of them. (as an aside, I doubt there are any, as I don’t think there are any valid metrics of “importance” or “impact”, either single metrics or complementary combinations of them. Those concepts, as important as they are, are too multifaceted and necessarily vaguely-defined to be captured in more than the broadest-brush way by quantitative metrics.)

      You also raise the issue of pre-publication review based on “importance” as a filter on the literature. This is something I’ve touched on before: https://dynamicecology.wordpress.com/2012/07/16/citation-concentration-filtering-incentives-and-green-beards/ Briefly, I’d say that, while this sort of filter is far from perfect, I actually don’t think it works too badly. Further, given that we need *some* filters or other (nobody has time to even read the titles of everything), I remain to be convinced that any post-publication filtering system you care to name (or can conceive of) would be significantly better. The old post discusses my reasons for my views on this. I do think various sorts of post-publication filters are useful complements to pre-publication filtering, but not replacements.

      • Thanks for the reply! I think you’ve misrepresented some of my statements. I have not suggested pre-publication review is not useful, or that it does a poor job of finding flaws. Nor have I suggested that post-publication review is better or even decent at finding flaws. I have not suggested that we don’t need pre-publication review (rather the opposite). I think we agree on all these points.

        Your post highlights a problem (unappreciated rebuttals) without proposing a solution (though hinting that it isn’t post-review, and looks more like pre-pub review). I agree. (though one might instead suggest teaching scientists not to trust/cite all published stuff as if it were true. Of course we already do plenty of the former and not the latter due to incentive structures).

        We do seem to disagree on whether most journals filter more on validity or on impact. What is clear is that journals compete based heavily on the quality of their impact filter, not on the quality of their validity filter. A little selection theory tells us where this leads.

      • Hi Carl,

        apologies for misreading you, my bad.

        It’s true I don’t have any solution to unappreciated rebuttals, beyond doing our best to ensure that rebuttals aren’t needed in the first place. I mean, there are things one can try to do post-publication, and that wouldn’t hurt to try, I just don’t see any reason to think they’d succeed.

        Re: the filters on which journals compete, Oikos used to have a reputation as someplace that cared first and foremost if your work was interesting, and cared somewhat less whether it was a complete, rigorous, and technically-sound story. Whereas Ecology had the reputation of caring first and foremost if your work was rigorous and technically sound. So Oikos had the reputation of being interesting-but-maybe-wrong, while Ecology had the reputation of being correct-but-boring. The landscape has shifted since the days when I was a grad student, and so I’m not sure if it’s still the case anymore that journals differentiate themselves along multiple axes in that way. I’m not even sure they could if they wanted to. These days, I think that authors increasingly decide where to submit based on one criterion: “which journal would look best on my cv?”

  4. “You really need a strong culture of reading preprints *as if you were reviewing them*.” …Is there any other way to read a paper? Seriously, though – I think more scientists read critically than you say here. If not, then I am very sad for science.

    We know that post-publication peer review (including revisions, letters, retractions, etc., as well as blogs and such) is a complement at best to pre-publication peer review. We also know that current pre-publication peer review works most of the time, but still manages to allow some truly horrible garbage through.

    It seems to me that the solution would be to combine tactics. We need the multi-person post-publication review to be happening pre-publication, in addition to the formal pre-publication review process. Why not post manuscripts for an open comment period before publishing? There are details that would have to be worked out to protect the authors, of course, but it could be done. Things like a registration system requiring real names; limiting access to only certain groups, like tenured faculty or only those with relevant expertise; a non-disclosure statement; and similar legal protections for the authors.

    • Sorry Anastasia, but in my experience, post-publication readers mostly don’t read as if they’re reviewers. Seriously, they don’t. People mostly read abstracts, or skim the figures. If they read the paper at all. There are obvious exceptions. But that’s what most post-publication scientific reading is like.

      Afraid I don’t see how having a pre-publication comment period would help. Nobody would be obliged to comment or have any incentive to comment, there’s no culture of providing such comments (any more than there is for post-publication comments), and there’s no professional norm that says you should comment. So nobody would.

      It’s nothing to do with legal protections for authors or commenters, or registration systems, or who should be allowed to comment. No matter what you did with any of that stuff, nobody would bother to comment under the system you suggest.

      On a happier note, thanks for reading and commenting! (seriously) 🙂

  5. Miller et al. (2005). A Critical Review of Twenty Years’ Use of the Resource-Ratio Theory. American Naturalist 165:439-448.
    They found 1333 citations to R* theory (at that time). Of these, only 26 actually tested the predictions of the theory.

    • I’m unclear what you’re getting at here. Are you saying R* theory has been uncritically accepted? R* theory is actually well tested in the model systems where it’s easiest to test, and in those tests it has an incredible predictive track record. And it’s quite normal that lots of citations of it would not come from tests of it, because it’s a simple illustration of key concepts in ecology. That Miller et al. paper was kind of a strange one; I never really got what they were complaining about. So no, in this case I don’t think the citation data you cite indicate that ecologists are accepting R* theory uncritically.

