Why the peer-review process is so slow

Jeremy’s post this morning about personalized review requests from the NSF is a perfect lead-in to something I’ve wanted to discuss: why it takes so long for manuscripts to be reviewed. Here are some thoughts from an associate editor that aren’t at all original but bear repeating.

The typical process: A manuscript is submitted (cross those fingers!). If it passes a brief technical check for formatting, the editor-in-chief assigns it to an associate editor (AE) to handle. The AE gives it a quick read to screen out clearly inappropriate manuscripts and identify a list of potential reviewers. The AE then invites reviewers until two or three have accepted the assignment. Reviewers read the manuscript and submit their reviews, which the AE uses to guide their decision.

So there are three places to lay the blame for slow turnaround: the editorial office, the AE, and the reviewers. The editorial office is staffed by paid professionals and isn’t usually a major source of delay. Sometimes a volunteer AE falls asleep on a paper, but that’s unusual, and the editorial office should keep an eye on them.

That leaves reviewers to gum up the works, which they can do in at least four different ways:

1) Not agreeing to review. Personalized invitations do help, but they’re no panacea. It’s not uncommon to have to invite eight to ten people just to find two who agree.

2) Not agreeing OR disagreeing to review. Even worse than saying no is saying nothing. Or taking ten days and multiple invitations to say no. If the odds of any reviewer saying yes are 25%, those week-long delays add up fast.

3) Turning in the review late. Journals typically give reviewers two to four weeks to read the paper and submit their reviews. Very few succeed in meeting this goal.

4) Going AWOL. The worst situation is when a reviewer agrees to review, but then disappears from the face of the earth. Should the AE wait for a review that may never materialize? Should they try to find another reviewer? Should they act as a reviewer themselves and go with one other review? In any case, the time to decision just went up by at least two months.

Things often work smoothly, but sometimes it’s a comedy of errors. And that’s why it took four months for your manuscript to emerge from the peer-review system.

How to improve the process? As Jeremy alluded to, Mark McPeek and other editors wrote a great editorial calling for the application of the Golden Rule. In practical terms, here’s what we all could do:

1) Say YES when asked to review.

2) If you can’t do it in a reasonable amount of time, say no immediately. Don’t let that invitation sit in your inbox.

3) Suggest colleagues who might be good alternatives, preferably non-obvious ones like postdocs who might not have already been invited.

4) Don’t take the deadline as a suggested time to start reviewing.

5) Keep the editorial office informed. If you’re going to need an extra week, let the AE know, and try hard to stick to that new deadline.

My own personal suggestion is inspired by the announcer at Schiphol airport: “Passenger X traveling to Barcelona. You are delaying the flight. Immediate boarding at gate C14 or we will proceed to offload your luggage.” When all reviews except one are in, and that one is late, the tardy reviewer gets an automated email letting them know that they are delaying the review process. Chop chop! Unfortunately the review process is strictly voluntary, so we have no luggage to unload.

If any of you have further suggestions, leave them in the comments.

36 thoughts on “Why the peer-review process is so slow”

  1. I should be reading Jeremy’s post on SEM resources, but this is really timely for me.

    Two days ago I received a message from the editorial office of a journal that I completed a review for in late July (everything I relate in this story is going to remain anonymous). The message was simply to inform me that the manuscript had been rejected. There was only one other reviewer, and his/her review was included with the message.

    My review had been supportive, saying the work was fundamentally sound and a useful addition to the subject area (bias in the taxonomic composition of trees from land surveyors’ data in the 1800s), and then detailing some weaknesses I thought should be fixed. The manuscript was quite nice – tightly written and nicely integrating both simulation analyses and application of a new technique to a large set of empirical data. My review was a mix of general comments and line-by-line specific comments.

    The other review consisted of two short paragraphs of about 3 sentences each. The reviewer was completely wrong in all comments, and did not even seem to grasp the most basic goal of the paper. It was a complete travesty, just a hodgepodge shotgun assortment of wrong statements and confusion. Pretty clearly, this review had just come in, almost two months after mine, and on its basis the paper was rejected. I should note that this field has been plagued by a number of long-standing analytical problems that both the author of the manuscript and I have been trying to correct for several years now. The nature of these problems would make people here cringe, they are so basic. I won’t get into that here though.

    This was something of the straw that broke the camel’s back for me.

    I immediately composed a letter that I sent to the coordinating editor, explaining that the other reviewer’s review was a travesty, that he/she was either grossly negligent or incompetent. I requested an explanation for exactly why the paper was being rejected, and that my response letter be passed anonymously to the other reviewer, before I would consider any further reviews for the journal. I also made it clear that I did not know the author and had never met him/her and that my concerns were based 100% on the scientific issues at hand. Which is 100% true.

    I cc’ed this letter to the author, whom I think was kind of shocked, but pleasantly surprised, and who responded with his/her own letter detailing some of the specifics of why the reviewer was egregiously wrong.

    We’ll see what happens, but as for me, I’ve just decided that I’m not going to take this kind of blatant malfeasance anymore. I’ve seen too much of it, too many times. In fact, I’ve decided that I’m going to detail the most egregious of the examples I’ve experienced on my new blog and/or wherever else seems appropriate. If you don’t shine a light on this stuff, too many people just quietly accept it like good little boys and girls, and so the same behavior gets repeated over and over again.

    • I once found myself in the position of reviewing a paper criticising one of my own papers. I was as sympathetic as possible and only corrected some misunderstandings of what I’d actually said, but emphasised that the author had come up with his own genuine idea and that – with the corrections – it would therefore be worth publishing. It got published without any correction. The BS of the system is unfathomable.

  2. Professional ethics seems to be slowly breaking down under the pressure of the incentives academics face. I don’t see how to change the incentives, so I’d suggest that we need to replace professional ethics with binding rules. Hardin’s “mutual coercion, mutually agreed upon.” That was the motivation for PubCreds:

    https://dynamicecology.wordpress.com/2011/04/22/an-oikos-editor-and-a-former-editor-are-fixing-the-peer-review-system/

    Worth noting that something like the PubCreds idea (basically, require people to review in appropriate proportion to how much they submit, on pain of not being allowed to submit) has been incorporated into two major new for-profit peer review and publishing initiatives: PeerJ and Peerage of Science:

    https://dynamicecology.wordpress.com/2012/06/14/great-minds-think-alike-when-theyre-trying-to-fix-peer-review/

    Of course, there are other worthwhile ideas out there. Sharing of reviews among journals is one that seems to be gaining traction. Usually as part of a “peer review cascade” whereby a publisher makes it easy for papers rejected from a flagship journal to be transferred (along with their reviews) to an unselective, author-pays open access journal in order to collect the resulting publication fees. That’s the model I’d like to see ESA pursue with Ecosphere, and they seem to be moving in that direction.

    An off-the-wall idea, which I’ll be posting on in the near future, is to offer authors a “no revisions” option. That is, when authors submit, they can specify that they want the ms either accepted as is or rejected entirely. This could make it easier to attract reviewers by reducing the burden on them. They don’t have to give detailed comments; they only have to say enough to explain their recommendation to accept or reject.

    And of course there are less worthwhile ideas. Increasing the rate of rejection without review is one most leading ecology journals seem to have adopted. I find it increases the stochasticity (and the perceived stochasticity) of the review process, at least when journals are aggressive about it. But it’s an easy way for journals to respond to increasing numbers of mss and the increasing difficulty of finding reviewers, so it’s probably here to stay.

    Most ideas I’ve seen for incentivizing peer reviewers don’t really seem very plausible. For instance, the incentives journals typically offer (like 6-12 months of free online access to the journal) aren’t valuable to academics with access to institutional subscriptions.

    Much more discussion of these issues can be found by following my first link.

  3. I should add that this incident coincides with a decision rendered on a manuscript I submitted to PNAS earlier this summer, which was a travesty of similar proportions and on which I am similarly forcing the issue. Combining these with an unbelievable experience I had with Geophysical Research Letters a couple of years ago, plus some other incidents, I have pretty much fully lost confidence in the review process as any sort of high-quality and defensible procedure. Indeed, I don’t trust it anymore. It’s too full of personal subjectivity and unprofessionalism. I think my experiences are too many and too egregious to be unreflective of the general state of affairs. Perhaps I’m just less willing to put up with such things than I used to be, but that’s not going to change, that’s for sure.

    I think we need a completely new and different review process, especially in this internet age where everybody can easily access and read things. If PNAS rejects my appeal and complaints, I may very well put the entire R code, the manuscript and its detailed supplement, and the many results and diagnostic measures, online and available for anyone who wants to evaluate. And I guarantee you, that given the contentiousness of the climate change issue, and the various methodological problems in paleoclimatology in particular, that there will be a whole bunch of people very interested in the study and its conclusions, which are not favorable.

    Or we can continue to allow two or three people to anonymously review a paper and rip it to shreds and disallow publication for no good reason, should they so choose.

    • Jim, I can only sympathise with your bad reviewing experience. As Jeremy has said, here and elsewhere, we pretty much all suffer from what we perceive to be bad reviews, and some folks are trying to come up with ideas to improve the system. I’ve also thought about publishing some of my choicer (ahem) reviews on a blog, along with my detailed responses, especially after an appeal fails. Unfortunately, the small print often points out that the review process is confidential, and the reviews (or copyright on them) are possibly even ‘owned’ by the journal.

      But, if you’re going to

      “put the entire R code, the manuscript and its detailed supplement, and the many results and diagnostic measures, online and available for anyone who wants to evaluate”,

      then just cut out the middle man: submit to PLOS One, BMC Ecology or another open access publisher that focuses on technical/methodological quality (with all your code, appendices, etc.). Get your work out there and let the community evaluate how useful it is, safe in the knowledge that it is technically correct.

      • Re: PLoS ONE and co., the definition of “technically sound” is apparently the subject of ongoing debate among editors. I’ve been rejected from PLoS ONE, as has a colleague who, like me, mostly aims to publish in quite selective journals. I freely admit this is a very small sample size. But I don’t think one can be guaranteed a positive review experience anywhere. And of course, publishing in PLoS ONE or journals with a similar financial model costs money authors may or may not have…

      • That’s exactly what I want to do Mike, so thanks for those suggestions. The question is how best to go about it, and there are a number of considerations, not of my making, that truly put me between a rock and a hard place on this. The topic I addressed in the paper is of such importance that I’m no longer particularly concerned about where it gets published, but that the information just get out there.

        As for owning the rights to the reviews, I don’t see how they can claim such a thing if an article is not published, and I’m more than willing to force the issue on that as well. Very literally, the more I feel I’m being backed into a corner or not dealt with fairly, the fiercer I am going to fight. I’m tired of this crap and I mean business, and if they don’t think I mean business, let them think that.

      • Jeremy, I’m glad to hear there’s still a discussion about the ‘quality’ issue. Having read quite a few of your articles, I’m not surprised though 😉

        I’ve come to (grudgingly) accept, or at least not beat myself up over, the subjective element of peer review over time. But if there are elements of ‘technical correctness’ that still appear debatable, then this is a subject that really deserves to be debated out in the open. I know PLOS run a range of blogs, perhaps this has been/can be brought up over there.

        As for finances, PLOS will waive fees if you or your institute doesn’t have the resources. This may be harder to argue from a tenured position in a N. American/European university, where the department/library should have resources to help pay page charges/publication fees in this day and age, but I have colleagues who have successfully applied to have the fees dropped by PLOS One. Spain is not a country that always supports its scientists very well from a financial perspective. The UK (and the NIH in the USA, I think), on the other hand, is now mandating that scientists who receive substantial government funding make their published research Open Access within certain, short time-scales, based on the argument that the public should have ready access to the research they pay for. This has the added benefit of allowing international researchers to access (and cite and develop) all published UK (and NIH) research.

        Jim, good luck if you do choose to go down the Open Access publication route. There are even OA journals that don’t charge any publication fees e.g., this one if that is an issue.

        It’s a fine, and important, balance to strike between getting information out there and ensuring that the scientific record isn’t swamped by poor or pseudo-science.

      • Re: open access author fees, yes, Univ. of Calgary has an open access author fees fund–*if* the author certifies that they have no funds of their own to pay the fee. I drew on this fund once, but frankly mine was a borderline case. I had to argue that my grant funds were fully committed to other projects. Which in a way they were (there’s always science I can spend my money on), but in a way they weren’t (NSERC grantees are free to spend their money for any permitted purpose; there’s no firm budget to which they must stick).

      • It’s fundamentally wrong to suggest we should ‘democratize’ scientific peer review. Just because 99 out of 100 people like your idea doesn’t make it correct. We shouldn’t dilute the (already massive) scientific record with popular, but flawed ideas.

        Imagine a world without refereeing

        I can’t help but imagine a world where creationists are peeing themselves in glee at their new found ability to flood the ‘marketplace’ with whatever flavour of shite they want that day, then insisting the government listen to their arguments because they’ve got more ‘like’ clicks than a paper about control and modification of Hox genes in GM organisms. (apologies for the language if this is a family blog. I’ll try not to use the ‘C’ word again)

        Yes, bad papers get published under the current model, and good papers get held up. But the alternative ‘free-market’ idea doesn’t work in economics (think of all the extra regulation that’s required to control everything, then read Adam Smith), and there’s no evidence to suggest it would work in scientific publication. At least PLOS One carries out a technical check first.

        All in all, I’d say that Wasserman’s piece is very poorly thought through. It fails to acknowledge so many realities of the scientific community, including those that can feedback and inform government policy, that would make a free-market publication model unworkable. He basically relies on argumentum ad populum rather than any evidence or explicit proof. Luckily, the internet already exists for the publication of such nonsense (and blogs to take them down), so we don’t need to throw out a scientific approach to filtering and publication.

      • Wow…people really disagree on this topic! The difference in reaction to Wasserman between Jim and Mike is like night and day. I still don’t know where I stand. Probably somewhere in the middle, as usual. No time to think it through now though…I’ve got reviews to do and manuscripts to write! 😉

      • I’m not sure exactly where I stand on Wasserman either. My instincts are to disagree, but that’s because I’m old and the world he envisions would break my filters on what to read. I think it’s an empirical question, with a non-obvious answer, how well the wheat and the chaff would get separated in his world vs. our current world.

        I have an old post that’s somewhat relevant here. In Wasserman’s world, I do think a small fraction of self-published or arXived papers would rise to the top. But not just because they were the “best” – because they were the ones that went “viral”. In Wasserman’s world, what rises to the top, I think, is whatever stuff manages to get popular enough so that a positive feedback loop can kick in and people start reading it because other people have already read it. After all, in science you actually have a strong incentive to read what everyone else is reading; you can’t afford to just ignore what your colleagues are thinking about. And even if that weren’t the case, how are you going to filter a world in which everybody self-publishes, except by reading the most popular (most linked-to, most tweeted, etc.) stuff? Popularity of YouTube videos is a good analogy here, I think. Really popular ones usually have something going for them – but so do lots of videos that don’t go viral. Of course, that sort of viral dynamic happens in a world with conventional pre-publication peer review too – we have bandwagons in ecology.

        I also wonder if, in mathematics and physics, Wasserman isn’t underrating a bit the role that journals still play in acting as a final seal of approval on stuff that was first posted on arXiv. Yes, mathematicians and physicists make heavy use of arXiv–but if you asked them if they wanted to do away with their peer reviewed journals, what would they say? Similarly, in economics there’s a long (many decades) tradition of sharing pre-prints (known as working papers), which are the basis for most of the active discussion and debate in the field. But you can’t get hired just based on working papers–you need a publication or two in a selective, prestigious economics journal. Is there any field that’s gone completely down the road Wasserman advocates?

        Whether Wasserman’s world is one in which it would be harder to tell science from non-science, that’s a good question to which I’m not sure of the answer.

      • Wasserman is not advocating truth by popularity there Mike–I doubt that anybody would be for that. I think he makes the point quite well that the cream will still rise to the top, but just by a different process, a more inclusive one than we presently have, which creates so many problems.

        There would however need to be some mechanism for deciding which studies were the most important and relevant when it comes to making policy decisions.

      • See my reply to Steve, Jim. Yes, something would rise to the top in Wasserman’s world. Whether it would be more or less creamy than in our current world, I’m honestly not sure. I think it’s an empirical question.

      • Jeremy, in my view, the whole filtering process would still be heavily reputation based. The filters would be largely based on the quality of work one has produced in the past. Granted that this raises the question of how “newbies” would get their reputation started, but I don’t see why the current method for doing so–collaboration with people who are already known in the field–would necessarily not also work in that system. But yes, I agree that it is not going to be easy in general to wade through a huge number of studies. It will be the price we pay in my view.

        Stepping back, however, it’s clear to me that the first order of business is raising a ruckus regarding the current state of affairs, as it usually is when you’re trying to reform something.

      • Yes, absolutely, I don’t think that’s a point that gets made enough. A Wasserman-type world wouldn’t necessarily look so different than ours, at least early on, if readers used filters like “read stuff written by people who got famous back in the pre-Wasserman world of selective journals and pre-publication review”.

      • I must be becoming a fogey like Jeremy, but I’ll take the status quo over a world without refereeing. The Wasserman article reminds me of this XKCD cartoon.

        The peer-review system is a division of labor. I review a few papers in my area of expertise, you review some in yours. This guarantees all published papers have been read by at least three people (two reviewers plus AE).

        In the alternative world, most papers wouldn’t be carefully read by anyone. You’d have to go through each one yourself. It’s impossible to keep up with the literature as it is! Some distributed review process could be superimposed, but if it’s hard to get reviewers now, it would be much more difficult if there was no editor asking you.

        Everyone has papers rejected. It sucks, and it’s worse when it’s on mistaken grounds. But there are plenty of journals out there. Reformat and resubmit elsewhere. Be sure to address the reviewers’ comments, even the most misguided, because they indicate points that readers may also be mistaken about. And they might end up reviewing your manuscript again in the new journal.

        By the way, some journals have a distinction between “reject” and “reject without possibility of resubmission”. Sometimes “reject” really means “major revision — don’t get your hopes up but don’t give up either”. If the language of the decision letter isn’t clear, ask the editor. Politely. They almost assuredly don’t have it in for you personally, and antagonism will get you absolutely nowhere.

      • I don’t agree with much of this at all Chris.

        First, I wish people would stop misrepresenting the Wasserman article. He’s not advocating a popularity contest and he’s not advocating a “world without refereeing”. Those are distortions of his position. He’s advocating unlimited access to draft manuscripts, which is not the same thing. I cannot for the life of me understand how anyone could argue that having your paper reviewed by 2-3 reviewers, instead of potentially many times that number, is somehow a preferred situation.

        Second, it’s no small task, in many cases, to “reformat and resubmit”. It can easily be a major piece of work, and it represents a major slowdown and gumming-up of the publication process. It’s completely inefficient. In my recent PNAS submission, I literally committed myself to the specific format of that journal, and re-submitting it elsewhere is going to require many days of work just in re-organization. Unless you just submitted it to the wrong journal, which is easily identifiable without a review, why should it be rejected at one journal and accepted at another? Is it a legitimate piece of science or not?

        Your revision comment also implies that the reviewers were necessarily right in their assessment. What if they weren’t? What if they were blatantly wrong? What if you have strong reason to believe something more nefarious is going on in their review (if you want examples, I’ll give you some)? Are you supposed to just change things to please these people so you can get it published? I for one am not going to do that. Ever.

        The other problem with re-submission is that, depending on the specificity of the topic, you may in fact get one or more of the same reviewers, and if they have it in mind that they don’t like the implications of what you have to say, they’ll shoot it down again if they can.

        As for the utility of antagonism, I don’t know what you mean by that term, but as to your conclusion there, I’ve found that it’s acquiescence or accepting of decisions that you don’t agree with that “gets you absolutely nowhere” and is completely detrimental to progress, and more importantly, to your own mental health and sense of power. You either fight things you think are wrong, or you get defeated by people who could not care less about the impact of their arbitrary decisions on you and your life. All social structures in which problems exist but are not dealt with depend on this expected passivity of the impacted for their continued stability.

        I also think you have no basis for the statement that “most papers wouldn’t be carefully read by anyone”. How do you know that?

      • Jim, with all due respect, it is not correct that other commenters are “misrepresenting” Wasserman. He does in fact want a “world without referees”. He explicitly asks his readers to imagine precisely that, and uses that phrase as the title for an entire section of his paper. He also says that, in his vision, refereed journals will have the same role as punchcards (i.e. no role). He is explicitly imagining arXiv (or self-publishing of preprints) not as an intermediate step in the scientific publishing process, but as the final step. He is explicitly envisioning a world where everyone just puts everything they have to say online and lets the “crowd” sort it out. We can argue whether that’s a good thing, a bad thing, or somewhere in between–but that is what Wasserman is envisioning, and it’s not a misrepresentation to say so.

        And while Wasserman would probably deny that in his world, science is a popularity contest, believing that the cream would inevitably rise to the top, I would argue that there is likely to be a popularity contest-like element to such a world (more precisely, a bandwagon-y or “viral” element). Because in that world (as in our world), one important filter for many readers is likely to be “I’ll read what lots of other people are reading”.

        As for the basis of the claim that most papers won’t be carefully read by anyone, well, the vast majority of articles in PLoS ONE or other open access journals attract no comments. So there’s one big piece of evidence. Happy to provide links to data on that if you like.

      • I think this is largely a semantic issue Jeremy. The “crowd” that you mention ARE the reviewers in my reading of what Wasserman means. There thus *are* still reviewers (if people are reading)–they’re just dispersed and are not part of any formal review process like we have now. Now if nobody’s reading at all and making decisions about what they think are good/bad papers (and why exactly), then yeah that’s a real problem. I also think you are right that some papers will be become popular for reasons not strictly related to their content. However, it seems to me that we have that problem already.

        I read Wasserman’s overall point (or one of them anyway) to be that the present situation limits the full expression of possible ideas. Some stuff just never sees the light of day, without good reason. With an open process, that doesn’t happen.

      • IIRC, Nature also tried an experiment with reader comments for online versions of articles. Hardly anyone did it, so they concluded it was not a popular option and dropped it. Part of the problem was a lack of publicity (I only heard about this experiment after it had ended, and I blogged at Nature Network while it was running!). Therefore lack of online comments on a paper do not necessarily mean people aren’t reading the paper. I think, instead, it reflects the conservative nature of scientists, not wanting to be openly critical of others’ furniture, while sitting in their house.

        PLOS One has a respectable IF (higher than Oikos!), which suggests that papers in Oikos do worse than random at being cited. People are ‘commenting’ on PLOS One papers in the same way scientists have always commented on papers – in their own manuscripts.

        Apart from that, I interpreted Wasserman pretty much the same as Jeremy and Chris did. There are apparently problems in the way we perceive peer review to currently be working. But baby, bathwater, etc.

    • Similar to your experiences, I’m a coauthor on a paper recently rejected despite two quite positive reviews. The paper seemed to be rejected based on the third “review” which, similar to your first comment, consisted of about a paragraph that said merely that we hadn’t considered X. The paper wasn’t about X at all.
      We wrote back to the Editors and have heard nothing since.
      I wonder how typical this behaviour is.

      • If you think an unfair decision was made, which it sounds like it was, I urge you to not just take it passively, if you are the lead author.

        I’m going to start collecting incidents of unfair reviews witnessed as either an author or a reviewer. They can be as anonymous or detailed as one wishes. Anybody who wants to share anything can contact me at jrbouldin@ucdavis.edu or bouldinjr@gmail.com

  4. Pingback: Problems with the scientific publication review process | Ecologically Orientated

  5. Two to four weeks. Goodness! If only. My area regularly gives reviewers 8-12 weeks to complete a review. Editors insist that this is just how long it takes. Obviously that’s not actually true. The paper usually sits for 7 to 11 weeks and then is reviewed in a flurry. Why not just cut down on the number of weeks given? It could even be done incrementally over the course of a few years.

  6. A question for Chris, Jeremy and the other current/former AEs out there: is there a notable trend in the quality of a review vs. the time required to receive it? I would imagine that the ones that take a really long time to come in might be bimodal – some really good ones where the person was waiting to find enough time to do a thorough job, and some really crappy ones where the person eventually just did a rush job to get it in and off his/her desk. Is that what you see?

    • Sometimes it’s really worth the wait for a late review, but I’m not sure it rises to bimodality. Even short reviews are useful as long as they’re not completely frivolous.

    • In my experience it's not the length of the review (in # of words or time taken), it's how objective (i.e. the reviewer not pushing their own agenda) and how thoughtful the review is. Most reviewers take their duties quite seriously, but there are a few (maybe 5%) who just don't want to see a paper published and make up five sentences on why it shouldn't be (if anything, these people get their reviews in on time). A good editor should weigh that review accordingly. Unfortunately, in this day and age, when there is pressure on editors to push down acceptance rates and editors are limited on time too, this doesn't always happen like it should – it's too easy to say "one bad review is enough to kill a paper at our journal" and be done. When I first started publishing 15 years ago, I had several cases where an editor went out and got a third review that overrode the bad one. But that never happens now (too much pressure for fast turnaround on papers, which I think has been overdone).

      • I think the best editors just use the reviews to inform their own judgment. As an editor, I never just “counted votes”, and so never saw a need for “tiebreaker” reviews. I pretty much only cared about the reviewers’ comments, not their votes on whether to accept, revise, or reject.

  7. My question is how appropriate it is to e-mail the AE with a question about the status of our manuscript if the peer review was completed a month ago and its current status in the online system reads "Required Review Completed". I understand that the AE needs some time to read the reviews and to make a decision on rejection, revision, or acceptance, but how long can that take in reality? One day? An hour? A month? Half a year?

    The worst thing is the diplomacy – you don't want to ruin things by rushing the editors. On the other hand, I have experienced an AE simply falling asleep and forgetting to deal with the manuscript. A simple question about the manuscript's status suddenly produced a response with forwarded reviews. How do you deal with it? How long are you able to wait for that damned e-mail? Do you email the editors? If yes, how do you formulate such messages? Thank you, gals and guys 🙂

    • Because of the risk of annoying the editor, I basically never email them about this. Even if the ms status has been “all reviews completed” for a couple of months. I suppose if it was several months, I might email the managing editor (not the handling editor, and probably not the EiC) very politely, and let the managing editor do any prodding that needs to be done. But that’s just my suggestion, it’s not like there’s any universally-agreed protocol on how to handle this situation.

      I agree that this kind of thing is incredibly frustrating for authors, and rightly so. A good editor will never let this happen (and in my view, ought to resign if it happens with any regularity). Although you do also have to keep in mind the (perhaps unlikely, but non-zero) possibility that the editor really does have a good excuse, like say they got very ill, or went into the field for a month the day before the last review on your ms was submitted.

      EDIT: Oh, and in case it doesn’t go without saying: you should basically never email asking about ms status if all the reviews aren’t in yet. It’s not the journal’s or the editor’s fault if reviewers are slow. Indeed, they find slow reviewers just as frustrating as you do! And it’s not as easy as you might think to speed up the process by giving up on waiting for reviewer X and going and asking someone else instead.

    • I’d suggest going through the editorial office with a general inquiry about your manuscript’s status. Let them harass the AE rather than you. Of course things come up and the AE might be busy, but one month is time to start shaking the tree.

  8. Thank you, guys. Of course the whole situation is annoying and frustrating, especially when it happens every now and then and because it usually is truly an editor's laziness. Authors spend time designing experiments, doing the actual research, obtaining the results, analysing them, and writing the manuscript, and then someone just doesn't give a shit and delays the whole thing. I mean, how long does it take to make a decision after reading the MS yourself and then receiving two reviews? An hour? A year?
