Two stage peer review of manuscripts: methods review prior to data collection, full review after (UPDATED x2!)

In this post, I’m going to propose a new approach to peer review that I think could be used in addition to the current, traditional approach. I will admit at the start that I think it’s unlikely to take off, but I think it’s worth throwing out there, anyway.

The basic idea is this: split up peer review into two stages.

Stage 1: Review of an introduction (which sets up why the question is important and interesting) and methods (which give the details about what would be done, and how data will be analyzed). The goal of this round of review would be to make sure that you have proposed something interesting and novel (or at least worth repeating) and, just as importantly, that the methods you’re proposing to use are appropriate for the question.

Once through that round of review, you would go and do the experiment. Maybe you’d do it three times, because everything died the first two times for no explicable reason. Then, once that was done, you’d write up the results and discussion, add an addendum to the methods (e.g., to say that you’d changed to using a quasi-Poisson distribution due to overdispersion), and send it back off to the same set of reviewers.

Stage 2: Review of the entire manuscript, including the methods addendum, results, and discussion.

Major advantages of this approach:
1. It catches potential methodological problems when it’s still (relatively) easy to fix them. Surely it’s much better to be able to point out pseudoreplication or the omission of an essential control prior to the study being done, right? And, even if there aren’t glaring errors like that, sometimes there are other tweaks that would improve a study design. Right now, my student and I are trying to design an experiment for a somewhat complicated question. There’s a potential confounding factor that we’re trying to figure out how to account/control for. There are a few options, and we’re currently trying to decide which is best. We’re doing this by asking people who seem like they would have good ideas. As peer review stands now, if we get substantial feedback on the design from some people, we’d put them in the acknowledgments; that would then make it likely that the people who would be the most useful peer reviewers would no longer be asked to review the final manuscript. That seems pretty unfortunate to me.

(Note: I realize some people will say that grant proposal review should do this. However, given how little room there is in proposals, you can never come close to fully explaining methods in proposals. Plus, many studies are not subjected to review by an NSF panel or similar body prior to being carried out.)

2. Related to the above: It allows the reviewers to say “Have you considered Smith et al. 1990?” at a time when you can still do something about it if you missed an important earlier study that is relevant.

3. Two stage peer review would encourage more publication of negative results. You’re already halfway there, right? It lowers the barrier to getting the results out there, which is important given time constraints. If the question was interesting in the first place, then the answer should be interesting, too, even if it is a negative result.

4. It would help people avoid getting into trouble with exploratory analyses and post-hoc tests. You couldn’t pretend that the study was all about pattern X all along, because the reviewers would know that you were originally planning on studying pattern Y. You would instead write up what happened related to pattern Y, and then would have to make it clear in the manuscript that pattern X is interesting and noteworthy, but resulted from post hoc analyses. Maybe we’d become more like artists.

As I said at the beginning, I think it’s unlikely that this approach will take off. If it does take off, it seems like it would be most likely to happen at one of the Open Access journals (like PLoS ONE or Ecology and Evolution) where the decision is made based on the soundness of the science, rather than the novelty/sexiness of the results. But I don’t see why it shouldn’t be possible for society journals like Ecology, AmNat, etc. As I said above, if there is agreement that a particular question is really important, shouldn’t we want to know the answer to that question, even if it is a negative result? Journals could still choose based on impact/sexiness; they would just have to evaluate the potential impact of the question, rather than the results. And I don’t expect that all manuscripts would get reviewed this way – just that it would be nice to have this as an option.

There would need to be some way to prevent people from abusing the system by going through the first round of review at one journal (say, PLoS ONE), getting useful feedback on their experiment, and then deciding the results were particularly sexy (maybe based on post hoc tests) and sending them off to another journal (say, Science) instead. I suppose one thing that might keep that in check is that the people who did the initial rounds of review would see the paper eventually (either as a peer reviewer, or once it came out), and would know what had been done; keeping that in mind might reduce the temptation on the part of some people to jump ship halfway through the process.

Do you think this approach would be worth pursuing? Do you have thoughts on how to improve it? Could you see yourself trying out this sort of approach? Please let me know in the comments!

UPDATE: Thanks to twitter, I’ve learned that Chris Chambers proposed something similar for the neurosciences. His post on this raises lots of the same issues that I raised in mine, especially those related to getting in trouble with exploratory analyses. As he put it, “The moment we incentivize the outcome of science over the process itself, other vital issues fall by the wayside.” Chris has created something called “Registered Reports” as part of the journal Cortex. It will be really interesting to see how it works out!

UPDATE 2: James Waters recently proposed a very similar idea in the comments of a recent post by Ethan White. (ht Ethan White on twitter)

Related post
Jeremy’s praise of pre-publication peer review

20 thoughts on “Two stage peer review of manuscripts: methods review prior to data collection, full review after (UPDATED x2!)”

  1. I think methodological concerns or incorrect analyses are the most common reasons I recommend rejection of a ms. But I wonder if journals (/editors/reviewers) would feel pressured to accept a ms whose methods they had already “approved” (whether implied or stated explicitly). We already tend to see this when papers go back for “Round 2” (either with an editor or back to the original reviewers), which strengthens my belief that peer review should be more discursive rather than 2 rounds of point/counterpoint.

  2. Why are we restricting ourselves to the standard paper format, i.e., all of the paper coming out at once? If the methods get approved (say, after revision), then why not make them available on the journal website as part 1 and, once the experiment is done and has gone through more peer review, include a part 2? Of course, sometimes a part 2 would never appear, but that can be useful information as well.

    It would eliminate the journal jumping problem, and let you more evenly distribute your work. This could also allow, for instance, one group to produce part 1 and then one or more groups (maybe including the original, maybe not, but obviously give them first dibs) to do part 2. This would also be especially useful for avoiding accidental scooping, where a long term experiment is being done by more than one group without knowing others are doing it, and then one ‘winning’ the race to publication and the rest throwing their work away.

    • When I first read this post, this was almost exactly my thought. I think publishing each step in turn would smooth out the publication process considerably. This would be great for students or early career researchers who need to be able to demonstrate work that can pass reviews.
      It also seems like having a published, peer-reviewed method out would be useful for large collaborative projects like NutNet, as you could recruit more people by having a vetted, standardized paper to point new participants to.

  3. While I see some wisdom in here (I’ve also been curious about the filing of a priori hypotheses before testing), does this also take away from the ‘surprise results’ that we so often encounter? Experiments gone awry or revealing something novel can often be just as informative, if not more so, than getting results that fall within one’s original intellectual framework.

    • These would still happen, but would just have to be acknowledged as surprise results. Is your concern that people might not find those results or they might not get enough prominence in the paper? I could see both of those being potential problems, but presumably, if you write the title and abstract at the end, those could still be highlighted.

  4. I have to admit I’m in the “isn’t this what grant review is for” camp. To go beyond that, maybe consultation with peers is good. But a formal process…. my brain shrieks MORE ADMIN!!! and shuts down 🙂

    Possibly it’s a field-dependent thing but pre-experiment review, at a more detailed level than the project review of a grant, just wouldn’t work for people like me (who do single neuron recording). A lot of what we do is exploratory, until such time as a clear and interpretable finding emerges in which case it becomes frantic data collection to nail the effect and then try and understand it. General predictions can be made, and are in grant proposals, but the details are usually surprising and nothing like what anyone had expected a priori.

    Your proposal might be good for things like clinical trials, however, where the problem is clear and the results more predictable.

    • I’m inclined to agree with you katejeffery. I can see the merits of this argument – it’s very sad when one has to reject a ms because the experiment was flawed (especially when that experiment is enormously time-consuming and consumes a lot of resources). However, it massively increases the amount of peer review. We all know that reviewing the methods isn’t going to take half the time of reviewing the whole ms. And, more importantly, it is a blatant acceptance that the grant review stage has failed. It would make more sense to me to have a more elaborate review process at the grant stage, i.e., before the money has been given to the researchers.

      For research that is either start-up or pilot work for a major grant, isn’t getting feedback on experimental design something one could get from peers within the department or from existing collaborators? Students are encouraged to give talks on their plans and seek feedback; I sometimes wonder if PIs forget to do this themselves. I think we should be really pushing informal peer-review. As a research fellow in a UK institution (for US readers, this falls between a postdoc and an Assist. Prof in the US system), I am enormously grateful for the chat in the pub where three or four PIs talk through ideas. In my opinion, structured informal peer review (i.e., where colleagues critique expt design and don’t just say “yeah, that sounds like a great idea – I hope it gets funded”) has a much bigger role to play.

      • I would say that getting feedback from peers within the department isn’t always sufficient. It is surely helpful, but for some things, the input of a specialist in the field is likely to be particularly important.

    • Interesting point. I would agree if reviewers got a copy of the grant and were asked to check to make sure the methods matched. That is not what happens in practice. Also, many projects are not part of grants. Especially ones that don’t involve a lot of material/extra-labor costs.

  5. I think this is an interesting idea, but as mentioned by others I think that it applies best to certain kinds of research. Experiments benefit most clearly from this kind of initial review, and planned analysis of existing datasets would also benefit (though I suspect that the addendum to the methods sections would be more substantial), but for the development of new theory, natural history observations, and happening across interesting events in long-term surveys this would seem to be less useful. As a result, implementing this at a journal would need to be undertaken carefully to avoid influencing the kind of science being conducted as opposed to how we conduct it.

    This is also clearly an idea that a lot of people are thinking about since James Waters also suggested something similar a few weeks ago

    • Yes, I agree that this will be more useful in some cases than in others. I definitely do not think this should be how all papers should be reviewed. But it seems like it could be nice to have it as an option.

  6. Another related old post, on the idea of pre-registering study designs:

    https://dynamicecology.wordpress.com/2012/11/21/do-we-need-a-registry-for-ecological-experiments/

    I saw pre-registration of study designs mostly as a way to deal with your point #4; your other points hadn’t occurred to me. I guess because one key difference between a registry of study designs and two-stage peer review is that there wouldn’t be any peer review of registered study designs.

    Re: there being little room in grant proposals to fully explain one’s methods: you don’t know the half of it, Yankee! (said the Canadian) 🙂 I agree that stage one of the sort of two stage review you’re proposing can’t really happen at the grant application stage, because too much work isn’t funded by grants in which the methods are spelled out in sufficient detail.

  7. I’m a big fan of this idea, especially if this scheme leaves open the possibility of noting that a given pattern wasn’t one you set out to test or look for, but is still interesting as a possibility to look at for follow-up.

    I’d be curious what sort of standard would arise for how detailed your methods would have to be in the initial submission, though. Would it be along the lines of “we will regress y on x1 and x2”, or “we will use generalized additive regression with a Poisson link function and spatially autocorrelated errors to account for non-linearities and spatially structured residuals”?

    • Good question. I think probably you would want to specify things in as much detail as possible, but obviously things will need to get changed sometimes depending on how things go. So, I think it would be more along the lines of your latter example, but then the addendum is likely to make changes to some of those specifics.

  8. Interesting proposal! I’m a bit unclear how phase 1 differs substantially from the function and objectives of the reviews of _grants_ that would fund the research in the first place. I realize this is much more granular than the level of a grant proposal, but perhaps that is an argument for grant proposals to be more granular, rather than adding a second stage to the manuscripts. (Conversely, reasons that grants should not be more granular might apply here too.)

    I am curious whether, if an editor asked certain reviewers to focus on only a part of the paper — say, the methods — and other reviewers to focus only on other aspects of the paper, reviewers would do a better job addressing the issues behind the nasty cover on the Economist this week? (e.g. http://t.co/7wBP9n8e2N )

    • Saw the Economist cover article, will have some brief comments in the Friday links. Basically, yeah, the cover is pretty provocative. But at least as I read it the article itself doesn’t really say anything–about the problems or the remedies–that hasn’t been said already by a lot of scientists. Including but by no means limited to Brian, Meg, and me. I think the importance of that story is more that it’s The Economist saying it. People with real power take notice when The Economist says something.

  9. Well, so long as this new method does not decrease the chances of manuscript acceptance, it is a welcome idea. But if it increases the chances of manuscript rejection, I think it should rather be left out.

  10. Pingback: Friday links: two-stage peer review, economists vs. economics website, and more | Dynamic Ecology

  11. Pingback: Friday links: Montpellier > everywhere else (apparently), how to be a journal editor, and more | Dynamic Ecology
