In this post, I’m going to propose a new approach to peer review that could be used in addition to the current, traditional approach. I’ll admit at the start that it’s unlikely to take off, but I think it’s worth throwing out there anyway.
The basic idea is this: split up peer review into two stages.
Stage 1: Review of an introduction (which sets up why the question is important and interesting) and methods (which give the details about what will be done and how the data will be analyzed). The goal of this round of review would be to make sure that you have proposed something interesting and novel (or at least worth repeating) and, just as importantly, that the methods you’re proposing to use are appropriate for the question.
Once through that round of review, you would go and do the experiment. Maybe you’d do it three times, because everything died the first two times for no explicable reason. Then, once that was done, you’d write up the results and discussion, add an addendum to the methods (e.g., to say that you’d switched to a quasi-Poisson distribution due to overdispersion), and send it back off to the same set of reviewers.
Stage 2: Review of the entire manuscript, including the methods addendum, results, and discussion.
Major advantages of this approach:
1. It catches potential methodological problems when it’s still (relatively) easy to fix them. Surely it’s much better to be able to point out pseudoreplication or the omission of an essential control before the study is done, right? And, even if there aren’t glaring errors like that, sometimes there are other tweaks that would improve a study design. Right now, my student and I are trying to design an experiment for a somewhat complicated question. There’s a potential confounding factor that we’re trying to figure out how to account/control for. There are a few options, and we’re currently trying to decide which is best. We’re doing this by asking people who seem like they would have good ideas. As peer review stands now, if we get substantial feedback on the design from some people, we’d put them in the acknowledgments; as a result, the people who would be the most useful peer reviewers would likely no longer be asked to review the final manuscript. That seems pretty unfortunate to me.
(Note: I realize some people will say that grant proposal review should do this. However, given how little room there is in grant proposals, you can never come close to fully explaining the methods there. Plus, many studies are not subjected to review by an NSF panel or similar body prior to being carried out.)
2. Related to the above: It allows the reviewers to ask “Have you considered Smith et al. 1990?” at a time when you can still do something about having missed an important, relevant earlier study.
3. Two-stage peer review would encourage more publication of negative results. You’re already halfway there, right? It lowers the barrier to getting the results out there, which is important given time constraints. If the question was interesting in the first place, then the answer should be interesting, too, even if it is a negative result.
4. It would help people avoid getting into trouble with exploratory analyses and post hoc tests. You couldn’t pretend that the study was all about pattern X all along, because the reviewers would know that you were originally planning on studying pattern Y. You would instead write up what happened related to pattern Y, and then would have to make it clear in the manuscript that pattern X is interesting and noteworthy, but resulted from post hoc analyses. Maybe we’d become more like artists.
As I said at the beginning, I think it’s unlikely that this approach will take off. If it does take off, it seems like it would be most likely to happen at one of the Open Access journals (like PLoS ONE or Ecology and Evolution) where the decision is made based on the soundness of the science, rather than the novelty/sexiness of the results. But I don’t see why it shouldn’t be possible for society journals like Ecology, AmNat, etc. As I said above, if there is agreement that a particular question is really important, shouldn’t we want to know the answer to that question, even if it is a negative result? Journals could still choose based on impact/sexiness; they would just have to evaluate the potential impact of the question, rather than the results. And I don’t expect that all manuscripts would get reviewed this way – just that it would be nice to have this as an option.
There would need to be some way to prevent people from abusing the system by going through the first round of review at one journal (say, PLoS ONE), getting useful feedback on their experiment, and then deciding the results were particularly sexy (maybe based on post hoc tests) and sending them off to another journal (say, Science) instead. I suppose one thing that might keep that in check is that the people who did the initial rounds of review would see the paper eventually (either as a peer reviewer, or once it came out), and would know what had been done; keeping that in mind might reduce the temptation for some people to jump ship halfway through the process.
Do you think this approach would be worth pursuing? Do you have thoughts on how to improve it? Could you see yourself trying out this sort of approach? Please let me know in the comments!
UPDATE: Thanks to Twitter, I’ve learned that Chris Chambers proposed something similar for the neurosciences. His post on this raises lots of the same issues that I raised in mine, especially those related to getting in trouble with exploratory analyses. As he put it, “The moment we incentivize the outcome of science over the process itself, other vital issues fall by the wayside.” Chris has created something called “Registered Reports” as part of the journal Cortex. It will be really interesting to see how it works out!
UPDATE 2: James Waters recently proposed a very similar idea in the comments on a post by Ethan White. (ht Ethan White on Twitter)
Related post
Jeremy’s praise of pre-publication peer review