Reviewing is something that brings out my imposter syndrome, and I know I’m not alone. Being asked to review implies that someone views us as having expertise in a given area, which means that, if you screw up the review, you will reveal yourself as an imposter (or so our brains tell us). And, for journals that copy reviewers on the decision letter, one way to tell if you’ve messed up and are an imposter is by comparing your review to that of the other reviewer(s). Rarely, I’ve been unable to figure out which was my review, because the reviews were so similar. (Phew, not an imposter!) But what about when the other reviewer notes things I missed? Clearly that means I’m an imposter!
For a long time, I viewed it as a failure on my part if the other reviewer caught something I missed. I felt like it indicated that I hadn’t been careful or critical enough. If we aren’t super critical, we aren’t good scientists, right? (I’m being facetious. I don’t actually believe that being harsh = being a good scientist. And it is definitely not the case that the harshest review is the best review!) But what about cases where the other reviewer raises concerns or criticisms that seem important and insightful and constructive? If I missed those, I failed as a reviewer, right?
Again, not necessarily. The reason relates to something covered in a recent blog post by Stephen Heard, where he talks about finding reviewers. In it, he says he only uses one of the reviewers suggested by the authors, and explains that this is because:
I’m aiming to represent perspectives the authors might not have thought of. Those could be diversity perspectives – when authors name a bunch of senior people, those lists can be quite unrepresentative – or they could be disciplinary or organism-centric perspectives (the latter may be the most common thing I’m trying to broaden). I think peer review is more useful if the reviewers are thinking from different places.
I also seek to have diversity in my reviewers. Let’s start with the type of paper I might handle at AmNat: for a manuscript about an experimental test of virulence evolution theory, I might look for one reviewer who is a theoretician who works on the evolution of virulence and another who is an empiricist (ideally, one who knows both about the study system and about virulence, though I might need to choose between the two). Or, for Ecology & Evolution, where I’m the Associate Editor for submissions to their Academic Practice section, for a manuscript that focuses on mentoring programs for assistant professors, I might seek one reviewer who is an assistant professor in ecology & evolution who I know has thought about mentoring, and one whose research expertise is in mentorship (though not necessarily in ecology & evolution). Or, for a manuscript that interviews students at the beginning and end of the semester to ask them about their views on evolution, I might seek one reviewer who is an expert in qualitative analysis, and one who is an expert on how to teach evolution.
For AmNat, where I send in a list of six possible reviewers to the journal office, this means that my emails can end up looking like this:
- Start with Person A; if they’re not available, please ask Person B.
- Person C; if not available, please move on to Person D.
- If still looking for reviewers, Person E.
- If still looking, Person F.
In other words, I set things up to try to ensure that the manuscript is read by reviewers with different backgrounds and areas of expertise. To me, this means it’s completely expected that the two reviews will have differences. Because of the way I choose reviewers, reviews that differ from one another are a feature of the review process, not a bug.
This doesn’t mean that I hope the reviews will be completely opposite of each other. Though, even then, it could be that both reviewers are reacting to the same problematic aspect of a manuscript and offering different solutions. In that case, part of my job as an Associate Editor is to offer the authors guidance about a path forward. (I do a full review of manuscripts I handle as an AE, reading them as carefully as I do when I am reviewing a manuscript for another journal.)
In short: if your review differs from the other reviewer’s, it doesn’t necessarily mean you did a bad job. I think it’s good to read the other person’s review and to reflect on what they saw in the manuscript. Maybe you really did miss something, or maybe their review will suggest an approach you didn’t know about but that is relevant to your work. But, if the other person notices something you didn’t, that doesn’t mean you’re an imposter.
Finally, I wrote this from the perspective of the reviewer, since the motivation for this post came from my own experiences as a reviewer and from a conversation I had recently with someone who reviewed a paper I was handling as AE. But it also applies to situations where we’re the authors, not the reviewers. If the two reviews don’t agree with each other, it doesn’t necessarily mean one was way off base. (In this case, though, we’d be tempted to assume the less critical one was the “good” one!) It might just be that one of the reviewers came from a background or perspective that let them see something in the manuscript that is problematic or needs more explanation. In that case, hopefully their review was written constructively (passing the Poulin test!), and hopefully the paper will ultimately be stronger for having addressed it.
Nice post, I fully agree with this. The flip side of other reviewers seeing things that you didn’t is, of course, that you saw things that they didn’t. It works both ways.
Yes to all this!
Thanks for a really useful discussion. From the point of view of an author, it helps explain the sometimes frustrating experience of getting divergent/contradictory opinions from the reviewers. When this happens it can be hard to figure out the best way forward, since many editors don’t go to the effort of providing guidance. But usually it highlights that some more thought needs to go into that particular aspect. I struggle at times with knowing how far I can push back against something I strongly disagree with. For example, I once had a paper rejected because I didn’t add in statistical tests that I felt were inappropriate (we explained why in our response letter), so on reflection it would have been easier to suck it up and add the analyses to an appendix. It’s a bit of a cost-benefit analysis: balancing the risk of rejection and the effort of rewriting to a different journal’s criteria against having content in the paper that you aren’t entirely happy with.
From the point of view of a reviewer, your post will give me more confidence to accept reviews on topics where I don’t consider myself an ‘expert’. I occasionally accept reviews for papers that I’m really interested in but that are a bit outside of my expertise, and have enjoyed the experience. I’m up front with the editors if there are aspects where my comments may be a consequence of my naivety, but my imposter syndrome does kick in and make me worry that I’ll come across as an idiot. This isn’t helped when the editor’s decision doesn’t match my recommendation, which can make me feel like I might have mucked up the review. I find it extremely useful to read the other reviewers’ comments, and think it helps me develop as a reviewer. Not all journals share the other reviews, and I wish they did. In particular, I want to be able to see the other review(s) and the authors’ response to them if I am re-reviewing a paper after major revisions. I had a recent experience of this where I ended up requesting them from the editor. This swayed my decision, as the authors had chosen to ignore much of my advice, but I didn’t want to be unduly harsh on them (see thought above from the author’s point of view). When I saw that the other reviewer had made some really sensible suggestions that the authors had also chosen to ignore, it became much easier for me to recommend that the journal reject the paper.
“From the point of view of an author, it helps explain the sometimes frustrating experience of getting divergent/contradictory opinions from the reviewers. When this happens it can be hard to figure out the best way forward, since many editors don’t go to the effort of providing guidance.”
As an author, that frustrates me too. Good editors will always give authors guidance as to what are the most important points to respond to, and how to address any issues on which the reviewers disagree. That’s the whole point of having an editor! If all the editor did was count the reviewers’ “votes” as to whether the ms should be accepted or not, and then tell the authors “respond to the reviews” with no elaboration, we wouldn’t need editors.
Agree with Jeremy that the editor should resolve conflicting advice. Of course, that doesn’t mean they always do – too many editors these days treat editing like a video game: go into the dashboard, skim the comments, and say yes or no in a few boilerplate sentences. But good journals don’t allow this (or at least it should be rare).
If it does happen, it is appropriate for the author to email the AE and ask for guidance on how to resolve the conflict.
I love this way of putting it!