As a peer-reviewer are you gatekeeping or editing?

Dynamic Ecology has had a couple of recent posts relating to peer review roles (reviewer, associate editor) that seem to have struck a nerve. I want to provide some thoughts on the two fundamental roles of peer review: gatekeeping and editing.

I think the two notions are fairly clear, but briefly:

  • Gatekeeping – the focus is on deciding (or advising others who are deciding) what is “good enough” to go forward (e.g. to go to the next round of review or be published in a given journal, or to be considered good enough for the awarding of a PhD degree).
  • Editing – the focus is on giving advice to improve the manuscript. Shorten the discussion. Reorient the introduction around X, which is really your main point. Add this additional analysis. It is probably important to note that, to me, editing is mostly about big-picture advice like the examples just given. A list of 50 typos, grammar mistakes, missed citations, and ways to improve word choice is fine, but in my opinion it is not much of an editing job without the big-picture advice.

It is my belief that these roles are fundamentally orthogonal, or independent of each other. One can do neither, just one, or both of these roles at the same time. I’m not sure what one is doing if one is neither gatekeeping nor editing, but I’ve certainly seen a few reviews that have achieved this in my time. And of course gatekeeping may be done in a selfish fashion (this paper contradicts my work) or an altruistic one (honest advice to a journal editor about whether the readers of that journal will find the paper interesting). I suppose editing can also be selfish (cite my paper, pay homage to my idea even though it is not really central to your paper) or altruistic (this is my honest best assessment of what would take your paper up another notch).

The relative proportions of gatekeeping vs editing probably naturally vary depending on one’s role. As a PhD adviser reading a student’s first draft, editing is foremost. As an external examiner on a PhD committee or as a reviewer for Science or Nature, gatekeeping is probably foremost (indeed, in my experience a lot of people get so into the gatekeeping role when invited to review for Science or Nature that no paper would ever be good enough to make it through the gate).

But in my experience, a lot of people are innately gatekeepers or editors and don’t do a great job of wearing both hats at the same time, even though wearing both hats simultaneously is probably what is called for in most circumstances (a PhD committee member reading a thesis, a reviewer or associate editor peer reviewing a manuscript). I can tell you as an Editor-in-Chief (or previously as an associate editor) that a reviewer who could wear both hats won my eternal gratitude. Tell me whether the paper is fundamentally sound or unsound and how exciting and novel it is (with some rationale and detail behind those arguments). And even if you think the paper doesn’t belong in the current journal, tell me and the authors the 3-5 most important things they can do to improve their paper (in part because it is kind to the authors and in part because your opinion might get overridden and the paper could go ahead at the current journal).

Unfortunately, I think the world is shifting more and more to gatekeeping. The symptoms are a reviewer writing a very short review mostly arguing why the paper doesn’t belong in the journal, or an associate editor saying “because I have a negative reviewer I cannot recommend the paper goes forward.” But every once in a while you will see a reviewer or an AE who has that rare gift of recognizing the diamond in the rough: a nugget of something really cool, so buried that probably even the authors don’t see it. And perhaps they recognize that the author is a student. And then they are willing to work through three rounds of revision with the authors to completely turn the paper inside out and upside down and then polish and tighten it in its new version to become a great paper. The times I have done that as an AE are among my most satisfying experiences as a peer reviewer (and something I value highly in associate editors as an Editor-in-Chief). It would be inappropriate to name them, but I still remember very clearly some examples and take almost as much pride in those papers as if I had written them myself.

No journal with 50 associate editors is going to be completely uniform in the balance of gatekeeping vs editing. But there are journals out there that have a history and reputation of focusing on editing as well as gatekeeping. I hope you think about that as a trait every bit as important as turnaround times and impact factors. Editing often directly conflicts with turnaround times (aside from the fact that one cannot always think deep thoughts on a schedule, journals focused on turnaround times often strongly limit the number of revision rounds they will go through, as these count against the key statistic of time from first submission to publication).

Do you think gatekeeping and editing are the main roles of peer review, or do you see others? Which do you focus on more? As an author or peer reviewer, do you remember an example where good, patient editing really turned a paper into something much better than the original? Any journals you think are unusually good at editing and not just gatekeeping? Do you think the world is swinging towards gatekeeping at the cost of editing?

38 thoughts on “As a peer-reviewer are you gatekeeping or editing?”

  1. Very nice post. I’ve always tried to wear both hats. Nevertheless, in recent years I have started to be more careful when playing the gatekeeper role, because even the slightest hint of a negative opinion can destroy a very good paper in the most competitive journals. I’ve witnessed cases in which a manuscript received very positive reviews from all reviewers, but just one small criticism made the editor reject the study, even if the issue was easy to solve. Maybe the publishing system has become so sensitive to criticism because the waiting lines are now too long and editors have a hard time avoiding traffic jams in their journals. Too many scientists for too few top journals…

  2. Nice post. These two roles are each important, although as a reviewer I find gatekeeping much less interesting than editing. Like you, my more rewarding editorial experiences have involved reviewers (and me) working with authors to help a paper have the impact it deserves.

    I’ve mentioned the two roles in a number of posts, including https://scientistseessquirrel.wordpress.com/2016/02/15/the-best-acknowledgements-section-ever/ and https://scientistseessquirrel.wordpress.com/2015/05/18/the-dumbest-thing-i-ever-said-to-a-reviewer/. These complement your analysis – which I think is spot on.

  3. Pingback: Como revisar um artigo para uma revista | Sobrevivendo na Ciência

  4. Very nice post. Personally, I find the gatekeeping role a bit problematic because it is so subjective. Many people may agree on what sound science is and how a manuscript can be improved. But opinions diverge widely (and may change over time) as to whether a manuscript is interesting and suitable for a specific journal. Ultimately, fewer than a handful of people have a say on whether a manuscript should be published in a specific journal – but a ‘yes’ or ‘no’ here can have a profound influence on academic careers.

    • Rainer – I agree with you that gatekeeping is subjective. It’s the main reason that at GEB we have two EiCs decide which 50% get passed on to review even though it’s twice as much work – but it’s an effort to reduce the subjectivity.

      However, just because it’s subjective, I’m not sure we can avoid gatekeeping. The logical extreme of that argument is that we should never judge scientists on the quality of their output. I can’t agree with that.

      I also think that whether a paper is scientifically correct or not is every bit as subjective as whether it is important. I’ve had papers rejected at PLOS ONE, which supposedly only rejects things that are scientifically flawed. As somebody who has published 70 papers and reviewed 400, I didn’t think the paper was flawed, but somebody else did. And just in the reviews of my papers, more often than not the opinions converge, but a significant fraction of the time you can get diametrically opposite reviews on the scientific validity.

      I expect we may have to agree to disagree about gatekeeping, but I would really like to puncture the myth that scientific correctness is objective.

      • I didn’t mean to say that one should get rid of gatekeeping. I also see the advantages of such a system. I just think that too much emphasis is put on it at the moment (and I guess we agree on that). Getting or not getting into a prestigious journal can decide which way a career goes, and that is in my view problematic. But the problem is maybe not the gatekeeping itself; the problem is that people rely on journal titles too much when evaluating researchers.
        I agree that there is a grey zone when it comes to scientific correctness. But I still think it’s more objective than gatekeeping. There are certain procedures for how to correctly conduct experiments (having controls, a certain sample size, etc.) that probably many people would agree upon (admittedly, I might be wrong here). I’ve also seen manuscripts that I thought were scientifically correct but another reviewer pointed out flaws. In many such cases I had just overlooked those flaws or didn’t have enough expertise to judge that particular experiment.

      • I would disagree that the logical extreme of a de-emphasis on gatekeeping is that there is no judgment on quality – I would actually expect something like the opposite in the limit (where journals are dead and everything is an arxiv post, or whatever). In this case, without many of the traditional signifiers of ‘quality’ we might be forced to rely more heavily on content.

        Consider the Andrew Gelman thought-experiment, where everything is published (http://andrewgelman.com/2015/09/02/to-understand-the-replication-crisis-imagine-a-world-in-which-everything-was-published/). What is the downside here? Well, you have to wade through crap to find anything good, but that’s no different from now – it’s just a lake rather than a swimming pool. We already depend heavily on automated methods to find worthwhile papers (or at least I do).

        Under this everything-is-published world, I think that peer review is even more important. And what you call the ‘gatekeeping’ role, I would argue, is intact – with the requirement that the peer reviews be published right alongside the paper (anon if necessary). The difference is that to figure out if the paper is worth it, the reader will turn first to the reviews, for a quick distillation of what the paper’s good for. It’s totally nonsensical that we have highly trained people do this pre-digestion and evaluation and then DON’T give it to the reader (at least in most cases). The main problem under this model is convincing reviewers to review papers in the first place; they will have to be a lot choosier, and many papers will have to make do with fewer reviewers. You could argue that a gatekeeping role might consist of simply deciding which papers are important enough to review in the first place.

        This would resolve your differences with Rainer I think. Subjectivity wouldn’t kill papers, and the necessary role as an evaluator of the merits of work is intact. How to make this work in a journal is tougher (it may not work), but I think that it’s the logical next step for dissemination of research.

      • Fair enough. It’s a logically consistent system. Not the one I want to live in, though. I don’t have time to dredge a whole lake looking for the papers I really need to read. I’m happy to have some pre-filtering (however imperfect) done for me.

  5. Great post. Like Stephen, I think that both roles are important, but not only is gatekeeping less interesting, it is also ideally sort of incidental to the process of editing. Similar to the balance between formative and summative assessment in teaching. As an educator (and a reviewer is in part an educator – we educate each other as peers by providing multiple perspectives on a problem), the formative work is what actually helps the students to learn, but our summative work is important for maintaining institutional standards and societal educational norms and to let students know where they stand relative to those standards.

  6. Maybe it’s a difference of fields and cultures (I’m biophysics and computer science), but, absolutely, gatekeeping. I’m not aware of anyone in my fields who thinks it’s their job as a reviewer to provide any sort of advice regarding how the material is presented, only to advise the editorial process regarding whether the material is sufficiently accurate, and presented well enough, to be interesting to the community and worthy of publication.

    Minor comments regarding clarification or structure are often appreciated by authors (that I know) in my fields, but if grammar comments go beyond “you really need to find someone who is a native speaker to help you with the language in this”, everyone thinks the reviewer is being a jerk.

    • Interesting cultural differences. I wonder if in part it is because the scientific correctness of a paper is much more objective (at least in computer science – I imagine a biophysics experiment can be every bit as fuzzily correct as an ecological experiment).

      • That certainly seems like a reasonable guess at an origin. And I have to admit that I’ve been leaning more and more towards the computational side of biophysics in recent years, so my experience on the biophysics side is with a restricted sub-culture as well.

        I find it fascinating that reviewers in your domain(s) want to be, and feel that they are, part of the “design team” regarding how a body of work is going to be presented. It’s hard for me to express how alien that feels. I’d like to contemplate whether I think I’d like (as an author) to be the recipient of that kind of review process, and I just can’t get my head far enough around it to have a valid thought yet!

    • Really interesting perspective. What about feedback not just on grammar, but on the quality of the narrative and whether it communicates the underlying reasoning effectively? Too hands on?

      • I’m not sure I’ll have time to compose a completely coherent reply, so if this is so disjointed or confusing as to be opaque, please do ask for clarifications. I actually think this is a really interesting window into different publication cultures, but my brain is not completely with me today…

        So – the review culture that I know would find feedback regarding whether a presentation is clear to be valuable and worthwhile feedback. At the same time, we’d find it outside the norm to suggest, or receive suggestions in any detail, regarding how clarity could be improved.

        I don’t know if I’m expressing that quite right. As an example, it would be fair game, and appreciated, to point out that the authors have made a conceptual leap that the reader is unlikely to follow. It would feel quite strange to be making suggestions regarding how that conceptual leap could be bridged. At the same time, it would be acceptable to point out that there is a step that is inadequately explained in an algorithm, or a leap in the simplification of an equation that will trip up the reader, and to provide more concrete advice regarding how that deficit could be corrected.

        Trying to analyze what seems like a sort of instinctual understanding, I think it has something to do with aspects of style and voice, versus aspects that are more concretely factual. As reviewers, we perceive our charge to be informing the editorial process regarding whether the material is correct, whether the science has been properly conducted, whether the authors are presenting something novel and interesting, and whether it’s presented well enough that the expected audience will be able to understand and use the publication. We see our charge with respect to the authors as aiding in pointing out where they might have missed the mark on some of those aspects. We’d feel it’s ok to provide advice regarding “what”, but not advice regarding “how”.

        I can’t find the right words to say this, and I don’t at all mean to suggest that what other cultures are doing is wrong, but the thought of making a review comment that suggested that I had an opinion on /how/ the material is presented generates a mental recoil reaction in me that is very similar to what I feel if I _try_ to litter. It’s simply such a deeply ingrained “don’t do that” thing, that it’s almost physically hard to think about!

        I also think there might be a difference in how we think about gatekeeping. With only a few situational exceptions, I don’t believe reviewers from my culture believe that they are in any position to make decisions regarding what should, or shouldn’t, appear in the literature. Our job is solely to advise the editors regarding whether the manuscript is _ready_ to appear in the literature.

        I think it would be really interesting to compare the content of actual reviews from our respective fields. I think I could dig up some that would be acceptable to share from my side…

        I’m finding this a fascinating topic, so please do inquire further if I’ve failed to be clear.

  7. Gatekeeping drives everyone nuts – it’s just as unpleasant as an editor to recommend that a scientifically sound paper stay on the other side of the gate as it is to be an author denied entry through the gate. But if a journal will publish 5%, 15% or even 40% of submissions (= almost all journals), then this is a critical function of the reviewers + handling editor, regardless of whether we find it interesting. The unpleasantness stems from the inescapably subjective nature of gatekeeping. In my experience, a small fraction of submissions clearly “belong” in journal X, another small fraction (that make it past the EIC) are clearly not suitable, which leaves a huge fraction that could go either way depending on the roll of the reviewer/editor dice. If things were otherwise, it seems logical to suppose that journals wouldn’t receive submissions in such excess relative to what they will publish.

    In terms of editing, as Brian says, some reviewers will just stop once they’ve shut the gate (why bother with editing if it won’t be accepted?), but it’s important to remember that any given manuscript will very likely be published somewhere eventually, so big-picture editing advice is a service even in the case of rejection.

    • As an editor and author I hate it when reviewers stop at what they see as the first fatal mistake. It’s not even good gatekeeping. What if you’re wrong to think the mistake is fatal, or even a mistake at all?

    • “it’s important to remember that any given manuscript will very likely be published somewhere eventually, so big-picture editing advice is a service even in the case of rejection.”

      Well said!

  8. Thanks for a nice post! Gatekeeping at the small ecological journal where I work (Folia Geobotanica) means opening the gate for those who, with some help (sometimes great effort from reviewers and associate editors), can publish their results. It is difficult but rewarding, as those authors will probably go to bigger journals next time thanks to our help.

  9. It’s sequential. When I’m wearing an AE hat, the first cut is gatekeeping, and then facilitation and editing. Sometimes it’s done in one letter for a clearly good or clearly flawed paper, but often in two and sometimes more rounds of revision, where after the first in-or-out decision, subsequent efforts are focused on improvement. I realize that in doing so, I shift from being a dispassionate gatekeeper to a facilitator/adviser. The latter is more commonly the role of book/monograph editors. When wearing a Reviewer hat, same thing, except the opening sentence is for the gatekeeper function and the rest for honing. When wearing an Author hat, I just want to quietly slip through the gate, although after the fact I appreciate the extra advice, mostly.

    It’s not a perfect approach for me. It takes time and is only mostly appreciated by authors, based on their open feedback. I had one blow up in my face and reverberate for months, which goes back to the recent discussions over signing reviews. The AE is the only name the authors have, and when they aren’t happy, justifiably or not, they can really lay into the AE, both openly and behind one’s back. I have to remind myself that this is the authors’ paper, and my job is really just to look out for the journal’s interests. Once past gatekeeping, I think these usually align.

    I look askance at the “mouse-click editors.” These gatekeeper editors are probably much more common than true editors. Just select one of those Reject, Revise, or Accept buttons and presto, the manuscript management software will write your letter and attach the reviews, all for you. The editor’s valet. No need to even read the paper. That’s for the reviewers to do, hopefully.

  10. Sometimes I worry that the initial gatekeeping (i.e. reject w/o review, or reject based on a very short, cursory review) is where the most (unconscious?) bias happens – i.e. authors who are not ‘big names’ or have minority or female names could get chucked before a deeper read of the paper. Obviously this doesn’t always happen, but I wonder if the increasing pressure on the gatekeeping part of things could be perpetuating some of these known biases in scientific publication…

    • The concerns about bias are common. There is quite a bit of published research on this that Jeremy has linked to. My quick read of the literature is that there is not much evidence for a large effect of gender on these decisions, there is evidence of a pretty good-sized effect of seniority (your big names), and I’m not as clear on the overall effect for minority authors or, more commonly, authors from Asia, Africa and South America, but it definitely cuts both ways (many are more inclined to give the benefit of the doubt to such authors). But despite all that, even the possibility of these concerns is why so many people are talking about double-blind review these days.

      • Except the concern of bias from the commenters was directed towards editors, especially for initial desk rejects. In double blind reviews, the editor is not blinded.

      • You’re right – I missed that. Double blind won’t fix that. I do think some of the same studies I mentioned apply to the early editorial decisions as well which is maybe good news for gender bias but not for seniority bias.

      • I hope I’m wrong, but I don’t think there’s actually that much research on reviewer or referee bias. Even studies on the mechanics of reviewing or decision ratios, like those of Fox et al. or Campos-Arceiz (2015 in Biological Conservation), are rare. There’s a little more that is indicative of citation biases, and lamentably Michal is in a tough spot – there definitely appears to be a bias against ecology articles by Polish authors in terms of both citations and journal placement. http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0109195

        If there is quite a bit of literature assessing bias by referees or editors, I’d be glad to see it, as I’ve just spent several days looking for it.

    • Again, coming from a different review culture here, but a lot of the reviews in my field/s (biophysics and computer science) are conducted double-blind. We’re pretty much all LaTeX users, and the “review” document style for most typical venues omits the author and affiliation blocks.

      That being said, when author and affiliation data is available, it’s not something that I ever bother looking at, unless the language usage is sufficiently peculiar that it piques my curiosity regarding its cultural origins.
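      In case the mechanics are unfamiliar to readers outside these fields, here is a minimal sketch of the double-blind setup I mean – assuming, purely as one example, a venue class along the lines of ACM’s acmart; other venues’ review styles differ in the details:

        \documentclass[sigconf,review,anonymous]{acmart}
        % The 'anonymous' option suppresses the author and affiliation blocks
        % in the compiled PDF; 'review' adds line numbers for referees.
        \title{A Minimal Anonymized Submission}
        \author{Jane Doe} % hypothetical author, hidden from reviewers
        \affiliation{\institution{Example University}\city{Anytown}\country{USA}}
        \begin{document}
        \maketitle
        Reviewers see the title and body of the submission, but not the
        author and affiliation information declared above.
        \end{document}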

  11. As an early-career scientist, I find gatekeeping really difficult, as the criteria for it are pretty vague and similar at most journals. Further (as somebody already wrote), a paper can usually fit in several journals, and I guess many publications would have ended up in different journals with different reviewers and editors. Often, I honestly do not know whether a given work is suited to the journal, even if I think it is great.
    Thus, I focus on editing and on pointing out to the editor the novel aspects of the manuscript and what it adds to current knowledge, and I avoid any clear recommendation on the decision.

  12. A very wise adviser once gave me this advice: when reviewing a manuscript look for reasons it should be published rather than reasons it shouldn’t. As an early career scientist, this changed the way I thought about the gatekeeping aspect of peer review. Certainly studies in which there are fundamental flaws in experimental design or data collection cannot be accepted, but for me it shifted the focus from “is it good enough?” to “is it scientifically sound and does it contain information that would benefit the field?”

    • My PhD adviser advised similarly. In fact in graduate reading seminar courses he taught, he made everybody in the room say something they liked about the paper before we launched into dissecting it. His point was that we are much better at training people to identify flaws than to identify the contributions that a paper makes (and every paper is imperfect so it is always about the balance of contributions vs imperfections which is hard to weigh if you are not recognizing the contributions).

    • Yes, I start off my reviews with this philosophy. I optimistically assume that if someone went to the trouble to conduct a study, write up the results, and submit a paper to a professional journal, they likely have something of value to convey. So I start off thinking about how best to get value from the paper: what could be presented better, what could be analyzed better, what additional interpretations might improve the message, etc. But ultimately I have a role as a gatekeeper in recommending against publication of submissions that seem to have no redeeming value after I’ve considered ways they might be improved short of starting over. I have made far fewer recommendations of “reject” than of “needs major revisions”.

  13. I think there is a difference between peer review requests for grant applications and for manuscripts. In the first case, I have often felt as though the grant agency is asking my opinion about whether the project (as currently described) is good enough as-is, and does not necessarily want feedback on whether it could be made better. In the second case, I have instead felt that the editors and authors are both generally interested in feedback on how to make the project better, if it is not already good.

    I would like to see more efforts to help reviewers be editors/improvers in both scenarios.

  14. Pingback: An introduction to writing a peer review | Small Pond Science

  15. Pingback: Reviewing with imposter syndrome | Scientist Sees Squirrel

  16. Pingback: 3 differences between peer review and academic developmental editing -
