Post-publication review is here to stay–for the scientific 1%

Another high-profile case of post-publication review is in the news, this one concerning a Nature paper claiming to demonstrate an easy method for creating pluripotent stem cells. The paper’s dramatic claims immediately received heavy scrutiny from a lot of people, and it looks like they may be walked back or even retracted. See here for an overview of the post-publication discussion from PubPeer (a website dedicated to post-publication review).

Advocates of post-publication review like to trumpet such cases, understandably and rightly so. Thanks to new online tools, science can now self-correct instantly! I’m all in favor of finding new, effective ways for science to self-correct (see also this). In the linked piece above, the folks at PubPeer suggest that post-publication review is “here to stay”.

To which I’d respond, that’s fine, but let’s be more precise. Post-publication review of very high-profile papers is here to stay. Post-publication review of most other papers not only isn’t here to stay, it doesn’t exist.

Think of the cases of post-publication review everyone talks about: arsenic life, stem cells, the papers on dinosaur growth rates and network analyses discussed in this old post… Such papers are the scientific 1%. Actually, probably 0.01%, or some other really small number. Mostly it’s only very high-profile papers, published in Science, Nature, and a few other top journals, that receive close post-publication scrutiny from lots of readers. The vast majority of papers don’t receive any post-publication “review” at all, because relatively few people read them, and the people who do read them mostly just read the abstract or skim.*

Which isn’t necessarily a criticism of post-publication review, actually! Indeed, one could argue that some sort of “hybrid” system is close to ideal. Because in a hybrid system, every paper gets careful pre-publication review from 2-3 people, and then the small fraction of papers that lots of people really care about gets post-publication review from lots of people.

But it’s important to recognize the essential role that pre-publication review plays in this hybrid system: it ensures every paper gets scrutinized by somebody. You need some mechanism to ensure that, because left to their own devices most people just pay attention to the same small fraction of stuff everyone else pays attention to.** That’s why distributions of “attention concentration” for everything from citations, to downloads of scientific papers, to website traffic, to YouTube video views, to book sales, to movie box office grosses, are highly skewed.

In a recent post, I suggested that post-publication review will be most effective when, like pre-publication review, it’s based on agreed norms and practices everyone buys into. One aspect of pre-publication review is that it forces redistribution of attention. It’s a practice that ensures at least some close attention is paid to every single paper–namely, the attention of the pre-publication reviewers. I don’t see any sign of any norm or practice that would force redistribution of attention in post-publication review. So if you want post-publication review to serve as a complete replacement for the current functions of pre-publication review (and I recognize that that may be a big “if” for some), then I think you need to suggest some mechanism to redistribute the attention of post-publication reviewers.***
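
To be clear about what I mean by “mechanism”, here’s a minimal sketch of one conceivable option, purely my illustration rather than a worked-out proposal: a lottery that assigns every new paper to a couple of randomly drawn post-publication reviewers, so that no paper goes entirely unread. All names and numbers are hypothetical.

```python
# Minimal sketch of a hypothetical attention-redistribution lottery:
# every paper gets k randomly drawn post-publication reviewers.
import random

def assign_reviewers(papers, reviewers, k=2, seed=0):
    """Map each paper to k distinct, randomly drawn reviewers."""
    rng = random.Random(seed)
    return {paper: rng.sample(reviewers, k) for paper in papers}

# Hypothetical example usage
papers = [f"paper_{i:03d}" for i in range(5)]
reviewers = [f"reviewer_{i:02d}" for i in range(10)]
for paper, assigned in assign_reviewers(papers, reviewers).items():
    print(paper, "->", assigned)
```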

In summary, advocates of post-publication review are rightly impressed with high-profile cases in which it works. It’s very rare for the scientific record to rapidly self-correct, so we should be glad when it happens. But in its current form, post-publication review is only for “elite” papers–it doesn’t scale to the whole scientific literature. It’s actually pre-publication review, or some hybrid of pre- and post-publication review, that works at scale.

p.s. Please don’t push back against this post by noting that pre-publication reviewers sometimes do a careless job, or sometimes just miss things. That’s true, but irrelevant to the point of the post.

*That’s why the large majority of papers attract no post-publication comment at all, on any of the many commenting systems that have been tried–PubPeer, PubMed Central, journal-based commenting systems, blogs, etc. At least as far as I know–if you have data to the contrary, I’d love to hear about it. I know some folks chalk this up to the technical design of previous commenting systems, and believe that if only we get the design right, most every paper will attract comments. I doubt it, but I’ll happily change my mind if the data demand it.

**In the current hybrid system, pre-publication review also contributes to a second function: concentrating the attention of post-publication “reviewers” on a small fraction of papers. Post-publication review mostly falls on papers published in high-profile journals, and pre-publication review goes a long way towards determining what gets published in high-profile journals. Of course, if pre-publication review ceased to exist, post-publication review would still concentrate on a small subset of papers to the exclusion of most others. It’s just that the concentration would occur via some other mechanism(s), which probably would be just as stochastic and subjective as current attention-concentrating mechanisms. See this old post.

***Alternatively, you could argue that some current functions of pre-publication review don’t need replacing–that science would work as well, or better, if everyone just published everything without review, and readers then chose to read, comment on, and use those preprints however they saw fit. Which is a much larger and different discussion, I think. I hesitate to dive into it too deeply, as that hypothetical world is just too different from the one I’m used to for me to really wrap my mind around it. For what it’s worth, there certainly are disciplines (economics, physics) in which unreviewed preprints are a much more important part of professional discourse than is currently the case in ecology. But even in those disciplines, peer-reviewed journals still play various important roles: validating the outcomes of pre-publication discourse, scrutinizing and improving papers that might not otherwise be closely scrutinized at all, and other functions. For instance, see this editorial by a physicist who’s a strong supporter of the arXiv preprint server, or this recent post from Andrew Gelman. So I think the examples of economics and physics certainly are suggestive, and indeed I have argued that ecology could learn something from how economists communicate. But I don’t think that economics and physics provide already-existing examples of disciplines where post-publication review has replaced pre-publication review.

19 thoughts on “Post-publication review is here to stay–for the scientific 1%”

  1. Hi Jeremy,

    I broadly agree with your points, but I’m not sure the problems you bring up are really about pre/post-publication review as such.

    Pre/post-publication review, broadly speaking, is distinguished by whether papers are made available by a journal before or after review. There are journals, such as Biogeosciences, that make papers available when they go into review, although the papers are still reviewed by 2-3 reviewers chosen by the editor – so this is post-publication review, but with editorial oversight.

    Your concern seems to be more about whether reviewers can choose freely which papers to review, or whether there is some system to match reviewers and papers. The former seems more natural for post-publication review, but there are also examples of pre-publication review where reviewers have a choice (e.g. PoS). In general I agree with you that some sort of reviewer distribution mechanism (pre or post) is necessary to make sure that all papers are reviewed.

    That being said, I feel you are dismissing post-publication review too readily, based on a straw man set up by badly designed post-publication systems. A hybrid system, where pre-publication review is cut back to checking whether a paper is technically sound, for example, could be a sensible compromise. Once technical correctness is established, the discussion of whether a paper is important could then be left to the wider community in an open post-publication process, and journals could then pick up those recommendations.

    There will likely be flaws and loopholes in any system, but I just don’t see why the current system – slow, and built around the technology of the last century – should still be ideal at a time when we have the ability to distribute information and generate interactions and feedback much more rapidly.

    • I just thought I should add that Biogeosciences has a kind of pre-publication review as well: after submission, the reviewers do a fast screening, and if this is positive, the paper is published as a discussion paper and goes into open review.

  2. Pre-pub review serves a host of other important functions, related to the points you make above, which make our scientific lives an awful lot easier.

    Simply, it allows us to develop an acceptable level of trust in the published literature and all that it is built upon. If I actually had to go and check every paper listed in the bibliography of an article or grant proposal I’m reading or reviewing, just to see if it was a valid piece of work, I’d stop wanting to be a scientist pretty quickly. (That would include, e.g., the UK’s Research Excellence Framework, which determines how much baseline government money departments/unis will receive – no idea if/how other nations do this, but it’s an enormous exercise over here.) I suspect most other serious academics would too.

    Getting rid of pre-publication peer review would be unworkable for this simple reason. This is not to say that post-pub review doesn’t have a valuable place – it does IMO. But if we had to check the post-pub reviews (or carefully read the actual text, heaven forbid) of each article that was ever cited, rigorous science would slow down, not speed up.

      • I guess many people are unhappy with pre-publication peer review because there are so many ways it can go wrong. Most people complain that it annoys them because it keeps them from publishing, but I would like to point out that there is also a danger that pre-publication peer review gives us a false sense of certainty about the correctness of presented findings. After all, a paper usually has only 2 to 3 reviewers, and everyone knows how variable the quality of reviews is. Furthermore, with the increasing interdisciplinarity of research it becomes less and less likely that two people can sufficiently validate all aspects of a paper. You may trust that a reported finding is correct, but perhaps that trust is dangerous (http://www.plosmedicine.org/article/info:doi/10.1371/journal.pmed.0020124).

  3. Are the stem cell and arsenic cases really examples of post-publication “review”? Are they not about reproducibility of results? Review and reproducibility seem to me to be different things: I can provide a review or comment on a published paper without trying to replicate its findings.

    • Well, they both certainly started out as review. People initially just didn’t find the evidence presented in the papers to be convincing, and said so.

      As you say, people subsequently went further, trying and failing to reproduce the results. But even here I do think post-publication review played a role. My sense is that it was one important motivation for people to take the next step and try to reproduce the results.

      • OK, fair point. The implication, though, is that the 1% drops to perhaps 0.1% or less when review is taken to the next stage of trying to reproduce the work. Reproducibility is one of those things we pay lip service to in science but actually rarely attempt, such is the emphasis on novelty rather than confirmation.

  4. If papers were easier to build upon, I think we would see that 1% (or whatever it is) increase. Currently it is quite difficult to build on a paper in a meaningful way — as Mike Fowler says, a typical citation just references the idea (the kind of thing you could take away by skimming the abstract), because digging down further for every paper is unworkable.

    That’s notably not true of methods papers, particularly when the method is implemented in software. When lots of scientists apply the method, rather than just citing the paper to support some claim, errors or bugs in the implementation can easily surface. These don’t always take the form of what you’re calling post-publication peer review — the software is just updated, or sometimes/eventually replaced by a better method and science progresses. Importantly, this isn’t a binary process of “trust-worthy” and “not trust-worthy”, since every paper has limitations and many have something valuable to build on, provided it’s easy enough to do so. The harder it is, the fewer papers go in that 1% category.

    Making more elements of the paper easy to build upon would, I believe, increase both the potential impact of a paper and (thus) the post-publication peer review it receives. Publishing data is another avenue for that to happen. Like building on the methods, this is doubly more robust than simply citing a claim — first, because trusting the claims a paper makes means trusting the data, the analysis, and everything in between, whereas building on the published data means trusting just the data; and second, because it tempts more eyeballs to go beyond the abstract. If the only value of your paper to me is some claim (because it’s the only part I can build upon without undue effort), I have little incentive to do more than skim your papers, and only the 1% I don’t skim can be part of a real post-publication review.

    Notably, we trust methods that have actually been widely reused, not just any method that’s passed peer review. Such papers make up the current 1% of papers that have really been (implicitly) post-publication peer reviewed, so it’s little surprise that methods papers are generally the most cited publications of the past few decades. If other parts of the research process were likewise easy to build upon, the whole system might be a bit more robust, a bit more scalable, and a bit, well, more useful to read beyond the abstract?

    • Hmm…can you elaborate a bit on what you mean by making work easier to build on? Because in many respects I think that’s out of the author’s hands. For instance, basically nobody’s ever going to try to build on my microcosm experiments, not because I haven’t shared the raw data or whatever, but because few people work in that system and those who do all have their own lines of research on other topics. Much the same could be said of lots of other work in ecology and evolution–different people either ask different questions in the same system, or the same question in different systems. In the majority of cases in ecology and evolution, I think the only person likely to build on your work is *you* (or future members of your lab), at some later date.

      • Sure, sorry if I was unclear. If I describe a method in a long mathematical appendix it is easier to build upon if I also publish software implementing the method alongside the paper. If I publish an analysis of the dataset, it is easier to build on if I also publish the data in a standard format with good metadata. Other things that go in the same box of making a paper easier to reproduce would also make it easier to build upon.
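
        To illustrate the kind of thing I mean, here’s a minimal sketch: the data written as plain CSV alongside a machine-readable JSON sidecar describing each field. The dataset, filenames, and fields here are all hypothetical; the point is just the pattern of data plus metadata.

        ```python
        # Minimal sketch: publish data as CSV plus a JSON metadata sidecar.
        # The dataset, filenames, and field descriptions are hypothetical.
        import csv
        import json

        rows = [
            {"site": "A", "year": 2013, "abundance": 42},
            {"site": "B", "year": 2013, "abundance": 17},
        ]

        with open("abundance.csv", "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=["site", "year", "abundance"])
            writer.writeheader()
            writer.writerows(rows)

        metadata = {
            "title": "Example abundance survey (hypothetical)",
            "fields": {
                "site": "Site identifier (categorical)",
                "year": "Survey year (integer)",
                "abundance": "Individuals counted per plot (integer)",
            },
            "license": "CC0",
        }
        with open("abundance.metadata.json", "w") as f:
            json.dump(metadata, f, indent=2)
        ```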

        You seem to be raising a separate question of who would want to build on the paper in the first place. You might as well ask, who would cite the paper? I’m not positing that every paper would be built upon, any more than I would argue that every paper will be cited. I only mean to suggest that some number of papers greater than what we’re calling the “1%” would get the closer scrutiny of post-publication peer review if it were easier to engage with the details of the paper in the first place. Would you agree?

  5. One thing I’d like to mention is that, whatever its form, a solution is desperately needed, and will be forthcoming—much to the chagrin of some. As you point out, the most high-profile papers receive the most attention, but to my mind that suggests that if all papers received that kind of scrutiny, we would find that a very large percentage of published (i.e., supposedly vetted, validated science) scientific studies are actually quite poor—they only aren’t being re-examined because their claims aren’t particularly grandiose or sensational. So, there appears to be a very significant problem (well, lots of problems) with current peer review. It’s going to change. The change is coming. The discussion we need to have is how it should look.

    • “the most high-profile papers receive the most attention, but to my mind that suggests that if all papers received that kind of scrutiny…”

      Isn’t that kind of a big if? I mean, there’s only so much reviewing effort to go round. Subjecting all papers to the intense, widespread post-publication scrutiny to which high-profile papers are currently subjected would seem to imply a big increase in the total amount of time and effort that scientists collectively put into reviewing. What would we all do much less of, in order to be able to devote much more time and effort to reviewing? (Presumably, doing research…) Or am I misunderstanding your suggestion here?

      “we would find that a large percentage of scientific studies are actually quite poor.”

      Maybe, not sure. It’s an empirical question, obviously, though I think the answer will depend a lot on what you mean by “poor”. Different sorts of problems vary a lot in terms of how easily reviewers can identify them (and the extent to which reviewers will agree that they’re even problems). And then there’s the question of what lengths we’re all prepared to go to to vet the work of others. After all, flaws of any sort are always going to slip through at some non-zero rate, no matter how much effort is put into peer review. Peer review also will *introduce* flaws at some non-zero rate (peer reviewers, individually and collectively, aren’t infallible any more than authors are). So you kind of have to decide what error rates you’re willing to live with. Which then gets back to the time and effort allocation issue I raised above. How much time and effort would it be best to spend on doing science vs. reviewing it? I have no idea, really.

      “much to the chagrin of some”

      Not sure what you mean by that aside – perhaps you can elaborate? I had thought that the point of peer-review reform of any sort was to maximize learning and scientific progress, not schadenfreude (to borrow a phrase I believe I first read on Data Colada…). Indeed, as this old post notes, it’s going to be hard for post-publication review to work effectively if it’s seen by reviewers and/or authors as primarily a way to attack authors.
