Author-suggested reviewers and their effects: data from Functional Ecology

In an old post, I advised authors of mss to always suggest referees in their cover letters to journal editors, and gave advice on how best to do it. I think you should suggest referees because

  • many journal editors want you to
  • suggesting referees helps the editor understand the audience for your paper, and so helps the editor identify appropriate referees even if the editor doesn’t use your suggestions. For instance, one reason I suggest referees for my protist microcosm papers is to keep editors from mistaking my work for microbial ecology and having it reviewed by microbial ecologists, who aren’t the audience for my work and who are more likely to misunderstand it.

But one issue I didn’t talk about is whether suggesting referees affects the fate of your paper. Put bluntly: is your paper more likely to be accepted if you suggest referees? Put cynically: can you game the system by suggesting referees who will like your paper, or by asking that people who won’t like your paper be excluded as referees? Charles Fox et al. in press address this and other questions using 10 years of very detailed data from Functional Ecology (ht Small Pond Science). It’s an interesting exercise, following up on their earlier work on gender balance of peer review outcomes at Functional Ecology. Below the fold is a brief summary of some key findings, along with some commentary.

tl;dr: I think these data were well worth looking at, but they don’t allow you to isolate the effect of suggesting referees on editorial decisions.

Some of the headline results, with some commentary (click through for much more, including a thorough review of other data on these topics):

  • 50-70% of authors suggested referees before this became required in mid-2010. I’m very surprised it was that high. I was an editor at Oikos at that time, and very few authors suggested referees for the papers I handled. So I bet this number varies a lot among journals. Just guessing, but I bet the number is higher at more selective and prestigious journals. In contrast, <10% of authors suggest non-preferred reviewers; that jibes with my experience as an editor at Oikos.
  • About 50% of author-preferred reviewers are used by editors, up from about 30% back in 2004 (non-preferred reviewers are hardly ever used). I’m sure a big part of this is that editors find it increasingly difficult to line up reviewers, so they’re increasingly happy to take up authors’ suggestions. But while it may be convenient to take authors’ suggestions, it’s not necessarily any more effective: author-suggested reviewers and other reviewers agree to review at the same rate (a rate which has been dropping rapidly, btw).
  • Suggested reviewers are mostly male, but the proportion of women among suggested reviewers has been increasing steadily and is now 25%. Notably, 25% is fairly close to the proportion of women among Functional Ecology’s first, last, and corresponding authors (33%). That suggests to me that the gender balance of author-preferred reviewers basically reflects the gender balance of ecology faculty (note that you’d expect women to comprise a slightly greater proportion of authors than preferred reviewers, because ecology grad students are more likely than ecology faculty to be female, and are more likely to be authors than to be named as preferred reviewers).
  • Other than the previous bullet, there aren’t many gender gaps here, and the ones that exist are small. Male and female authors suggest preferred reviewers at similar rates, for instance. And women suggested as preferred reviewers were actually more likely to be chosen as reviewers than were men, though the initially-substantial gender gap here has declined to zero in the last couple of years.
  • Before suggesting referees became mandatory, the probability that a paper would be sent out for review was independent of whether authors suggested referees. But papers for which authors suggested non-preferred referees were more likely to be sent out for review and more likely to have a revision invited rather than being rejected. The differences are bigger than I’d have expected: on the order of 8-10 percentage points. Fox et al. chalk this up to exclusion of non-preferred reviewers reducing the likelihood of negative reviews. I have a different, not mutually exclusive thought. I suspect that editors are more likely to send controversial papers out for review, and less likely to reject controversial papers after review. Not because they want to court controversy in order to draw readers to the journal–journal editors don’t think like that. Rather, they want to make sure controversial papers get a full and fair hearing and that controversial scientific issues are decided in public rather than via private editorial decisions. Which would explain why papers for which non-preferred referees are suggested are more likely to be sent out for review and invited for revision, if authors of controversial papers were particularly likely to suggest non-preferred referees.
  • Finally, the big result: author-preferred reviewers score mss substantially higher on average than do other reviewers. That’s in line with every previous study, apparently. As an aside, note that the scores of author-preferred reviewers and those of other reviewers have exactly the same-shaped distributions, except the former distribution is shifted in a positive direction. So it’s not as if author-preferred reviewers always hand out positive scores. And mss reviewed by author-preferred reviewers are substantially more likely to be invited for revision rather than rejected, to a degree that’s only partially explainable by the higher scores author-preferred reviewers give mss. Fox et al. hypothesize that differences in review tone may explain the remainder of this difference.

That last bullet needs discussion. It’s tempting to jump to the conclusion that recommending referees is just a way for ecologists to game the system. That author-recommended reviewers are just giving undeserved positive reviews to their friends. I wouldn’t jump to that conclusion for two reasons, one statistical and the other based on personal experience.

First, authors aren’t randomly assigned to recommend reviewers or not. So whether or not a paper has author-preferred reviewers could be–I’d say likely is–confounded with other features of the paper and its authors. In particular, I suspect that authors who recommend reviewers in their cover letters tend to be good, experienced, established ecologists, or students or postdocs of such ecologists. I wouldn’t be surprised if they submit to and publish in Functional Ecology more often than authors who don’t recommend reviewers. I suspect that they’re more likely than average authors to have a good sense of what sort of paper is a good fit for Functional Ecology. And frankly, I wouldn’t be surprised if they write better papers on average than authors who don’t recommend reviewers. So if you were to have papers with author-preferred reviewers re-reviewed by other reviewers, I suspect those papers would still score higher and fare better on average than papers lacking author-preferred reviewers. I’m speculating of course, and I could be wrong. I doubt this confounding fully explains the results in the last bullet. But I wouldn’t be surprised if it explains some non-trivial fraction of those results. There’s no way to say for sure.
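To make the statistical point concrete, here’s a toy simulation (purely hypothetical numbers, not anything estimated from the Fox et al. data). In this little world, review scores depend only on paper quality plus noise, so suggested reviewers are completely unbiased; the only twist is that authors of stronger papers are more likely to suggest reviewers:

```python
import math
import random

random.seed(1)

def simulate(n_papers=100_000):
    """Toy world: review score = latent paper quality + noise.
    Reviewers are completely unbiased; the only twist is that stronger
    papers are more likely to come with author reviewer suggestions."""
    scores = {True: [], False: []}
    for _ in range(n_papers):
        quality = random.gauss(0.0, 1.0)              # latent paper quality
        p_suggest = 1.0 / (1.0 + math.exp(-quality))  # better paper -> more likely to suggest
        suggested = random.random() < p_suggest
        score = quality + random.gauss(0.0, 1.0)      # unbiased review score
        scores[suggested].append(score)
    return (sum(scores[True]) / len(scores[True]),
            sum(scores[False]) / len(scores[False]))

mean_with, mean_without = simulate()
print(f"mean score with suggestions:    {mean_with:+.2f}")
print(f"mean score without suggestions: {mean_without:+.2f}")
# Typically prints a gap of roughly 0.8 points between the two groups,
# even though no reviewer in this toy world favors anyone.
```

Every reviewer in that toy world is perfectly fair, yet the papers with suggested reviewers come out well ahead, purely because of confounding with paper quality. That doesn’t prove confounding drives the Functional Ecology result, but it does show why you can’t read bias straight off the raw gap.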

Second, insofar as authors do manage to steer mss to reviewers disposed to like those mss, I don’t think there’s anything wrong with that. It’s just a side effect of something perfectly reasonable that we wouldn’t want to change. When I suggest reviewers, which I always do, I do not think to myself “Who would really like this ms?” or “Which of my best friends can I get to review this?” Instead, I ask myself, “Who are the best people working on this topic and/or in this system?” That is, I’m thinking about the primary audience for the ms, and then suggesting as reviewers top people from that audience. Now, are those people more likely than a randomly-chosen ecologist to like my ms? Almost certainly! And rightly so, because they’re part of the intended primary audience. That people in the intended primary audience for an ms will tend to like it better than people outside the intended audience just shows that different ecologists have different interests, expertise, and goals, and so make different professional judgments about which mss are best (by any criteria). In other news, the sun rose in the east today. The question is how to best review mss in light of the obvious and unalterable fact that ecologists don’t all think alike.*

When I was an editor at Oikos, I answered that question by never getting more than one review from referees suggested by the author. I typically wanted reviews from a range of ecologists to inform my decision: ideally, one or two from specialists on the topic and/or study system, plus one or two from broad-minded ecologists not specialized on the topic and/or system. For what it’s worth, I think the data in Fox et al. highlight the wisdom of this approach. Since 2010, mss that only get one review from author-preferred reviewers have been only slightly more likely to be invited for revision than mss getting no reviews from author-preferred reviewers. It’s the rarer mss getting 2-3 reviews from author-preferred reviewers that are much more likely to be invited for revision (in most years). As an editor at a general ecology journal like Functional Ecology, I don’t think you want to give too much weight to what a relatively narrow segment of ecologists thinks of any given paper. Which is what you’re probably at high risk of doing when most or all of your reviews come from author-preferred reviewers.

As always, looking forward to your comments.

*You can change where disagreement among ecologists is manifested during the review and publication process, but you can’t substantially reduce disagreement.

27 thoughts on “Author-suggested reviewers and their effects: data from Functional Ecology”

  1. Interesting stuff. I am curious what percentage of the suggested reviewers are in conflict. In my experience as an editor at Applied and Environmental Microbiology, a majority of the suggested reviewers have no track record with the journal or scientifically. Of the remaining ones, if I pick one then frequently they’ll decline because they’re in conflict. I always reply to thank them and tell them that they were recommended by the author, in the hope that there is some justice in the world. To the point: I’m pretty cynical at this point as to the value of suggested reviewers. I also suspect that the people I recommend for my papers are far harder on my papers than a random draw would be.

    • Just guessing, but I’d be surprised if conflicts of interest are all that common. Then again, I’m surprised to hear that it’s common at Appl Envi Microb for authors to have conflicts of interest with the reviewers they suggest.

      Also surprised to hear that authors would be suggesting reviewers who have no scientific track record, either with the journal or elsewhere. That wasn’t my experience at Oikos at all.

      Curious: do authors generally explain in their cover letters why they’re suggesting the reviewers they are? I always do that, but I’m guessing it’s not common. Maybe lots of people just give in to the temptation to fill in names on the online submission form, without any explanation? Maybe lots of authors are just filling in the first names to pop into their heads, without much thought, on the assumption that editors don’t really want any reviewer suggestions and won’t use them? (“Why does the journal make me fill in all this irrelevant information? Why can’t I just upload my ms and be done with it?”)

      Or maybe this is something that varies a lot among fields?

      • While I’m surprised that someone would suggest reviewers with no scientific track record, I think it is a good idea to suggest at least a few good* early career researchers (with potentially only a few publications) because they are more likely to accept [at least this is the general perception around my office]. Do you factor in the probability that the reviewer will decline to review in your listing of preferred reviewers? Do you think at all about career stage? I force myself to name at least one or two ECRs despite mostly having more established scientists come to mind when submitting a paper.

        *good can be assessed from seeing a talk at a conference, or having read their papers.

      • “Do you factor in the probability that the reviewer will decline to review in your listing of preferred reviewers? Do you think at all about career stage?”

        I don’t think much about career stage. I do think a bit about the probability that someone will accept the invitation to review. So I’ve stopped suggesting really famous senior people, maybe unless it’s a Nature or Science paper (because I suspect even famous senior people will often make time to review for Nature or Science). And there are a few people I know of who aren’t senior but who always decline to review, so I never suggest them.

        Lately I have started trying to mix in suggestions of good ECRs because they’re more likely to say yes (not because I’m trying to help ECRs become better known to editors or anything like that). But that’s harder unless I know them or know their supervisor. It’s not always easy to say whether a total stranger with only a few publications is part of the primary audience for your paper or not.

      • We were recently discussing in my lab whether to suggest Big Name Senior Person or one of that person’s postdocs as a preferred reviewer. It seemed like BNSP would almost surely turn it down (based on presumably getting a gajillion review requests), but, when doing so, might suggest one of his postdocs. So, one option was to just cut to the chase and suggest a postdoc from that person’s lab who seemed appropriate.

        Speaking of postdocs: when looking for people to review papers I’m handling as an AE, I often go to the webpage of a well known person in that area and look at the list of current postdocs (or recent grad students). In my experience, postdocs generally do a very good job as reviewers.

        It never occurred to me to explain why I’m suggesting someone as a reviewer, but I should start doing that. I like that idea. One thing I do sometimes is include a list of people who would otherwise be qualified and obvious reviewers, but with whom I am in conflict due to collaborations. My hope is that this speeds up the process, avoiding the time it takes for the editor to ask that person and that person to turn it down.

      • “Speaking of postdocs: when looking for people to review papers I’m handling as an AE, I often go to the webpage of a well known person in that area and look at the list of current postdocs (or recent grad students). In my experience, postdocs generally do a very good job as reviewers.”

        Yeah, I used to do that. It’s also good for looking for coauthors and collaborators of the well known person you were thinking of.

        “It never occurred to me to explain why I’m suggesting someone as a reviewer, but I should start doing that.”

        It never fails to surprise me who finds our advice useful.🙂

      • 🙂

        I’d been told to suggest a reason for excluding reviewers, but not for the ones I was suggesting!

      • My reasons for suggesting reviewers are usually pretty brief. Something like “Jane Doe is a leading expert on topic X” or “John Doe recently reviewed the literature on topic Y”. Basically, just say enough to show that you’re not trying to game the system! (Though in light of Brian’s comments, I bet there’s somebody out there who’s bold/silly enough to lie and say that their best friend is a leading expert on the topic of the ms…)

  2. Another alternative hypothesis: Could it be that authors who list “non-preferred reviewers” tend to be older, well known scientists who have had the time and exposure to develop “academic adversaries” (or knowledge about who tends to give unfair reviews), and that such established academics may tend, on average, to write better (or better received by reviewers) papers in general? The result that papers with listed non-preferred reviewers are more likely to be invited for revision might be due to many factors.

    • Could be, though I think it’s actually fairly rare for people to develop academic adversaries. But in general I agree that, as with the listing of preferred reviewers, listing of non-preferred reviewers could be confounded with lots of author attributes.

    • As a grad student I listed several non-preferred reviewers on all my manuscripts. One was someone who, for no good reason, trashed everything of my advisor’s that they reviewed, starting when my advisor was a grad student. The others were people in the same small research area who had shown themselves to be poor reviewers (and whom other people I know also list as non-preferred). In my case, we learned they were unfair reviewers from my manuscripts rather than from my (mid-career) advisor’s earlier work.

      Maybe our research area was small enough to learn about the poor reviewers before one becomes senior.

      I didn’t realize it is useful to list preferred reviewers in the cover letter, though, so will do that from now on.

  3. Here’s a thought: since the average author-preferred reviewer score at Functional Ecology exceeds the average score of other reviewers by a pretty consistent amount (about 0.3 on the FE scale), maybe FE should automatically subtract 0.3 from the score of every review by an author-preferred reviewer?

    Hmm…but if it became widely known that FE was doing that, some reviewers might try to compensate for it. Plus, a good editor mostly cares about the detailed content of the review, and there’s no way to adjust that. So in practice, I doubt that adding a correction factor to the scores provided by author-preferred reviewers would make any difference.
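    For concreteness, here’s a toy sketch of what that correction would look like (the 0.3 offset and the scores below are hypothetical stand-ins, not FE’s actual scale):

```python
# Toy sketch of the "subtract a correction" idea; all numbers hypothetical.
CORRECTION = 0.3

def adjusted_score(score: float, author_preferred: bool) -> float:
    """Deflate scores from author-preferred reviewers by a fixed offset."""
    return score - CORRECTION if author_preferred else score

# (review score, was this an author-preferred reviewer?)
reviews = [(4.0, True), (3.2, False), (3.8, True)]

raw_mean = sum(s for s, _ in reviews) / len(reviews)
adj_mean = sum(adjusted_score(s, p) for s, p in reviews) / len(reviews)
print(f"raw mean {raw_mean:.2f} -> adjusted mean {adj_mean:.2f}")
# Of course, reviewers who knew about the offset could just add 0.3 back,
# and no arithmetic correction can adjust the *content* of a review.
```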

  4. Hey Jeremy, I’m a bit confused about your position. On one hand, it’s *good* for recommended reviewers to review, because presumably they’re the intended audience and less likely to misunderstand the paper! On the other hand, we don’t want *too much* of that sort of reviewing, so editors should only use no more than one recommended reviewer! You write from both author and editor positions, but you seem to take different stances as each. Are authors and editors in conflict? Shouldn’t we want people who will most ‘get’ a paper to review it for technical soundness? And shouldn’t it be up to the editor(s) to make judgments about fit and breadth of audience? Really curious about your views here, as I’ve never been an editor and find the whole process a bit mysterious still…

    • Fair questions Margaret, sorry if the post wasn’t clear on this.

      As an author, there’s a part of me–the narrowminded part–that just wants my papers to get accepted. I’m like everyone that way. So there’s a part of me that would be thrilled if editors always used all of my suggested reviewers! That same part of me thinks it would be ideal if editors never rejected any of my papers.🙂

      But there’s another part of me–the more broadminded, smarter part of me–that knows that editors, not authors, are generally the best judges of fit to the journal, breadth of audience, and relative quality of the paper compared to all others submitted to the journal. That same part of me knows that it’s often to my benefit as an author to have my paper rejected, often on grounds that force me to considerably improve it and write it for a different and/or broader audience than I would have had I been left to my own devices. So as both editor and (broadminded) author, I think it’s usually for the best if editors get a review from someone the author suggests, plus a couple of people the author didn’t suggest. That generally gives a good mix of reviews from people who are *definitely* part of the target audience for the ms, and people who *might* be depending on how good the ms is.

      And yes, it’s for the best if all of this is ultimately left up to the editors! As an author, all you’re doing is making suggestions to the editor, which the editor is totally free to take or leave. Editors themselves are generally the best judges of what sort of reviews they need for any given ms. That doesn’t mean I always agree with editors’ choices of reviewers in individual cases. For instance, I have had microcosm papers of mine rejected in part because of negative reviews from microbial ecologists who predictably didn’t “get” the paper, because they weren’t at all the intended audience. But in general, it would be much worse for science as a whole–and even for me as an author, I think–for reviewers to be chosen by some other means besides having an independent editor choose them. That’s why personally I’m leery of peer review reforms which involve having mss reviewed solely by reviewers the author arranges, or solely by whoever happens to decide to review them.

      Does that make sense?

      • It might also help to clarify that, depending on the paper, my intended target audience as an author might be pretty broad. For instance, the intended target audience for Fox (2006 Ecology) was “ecologists interested in biodiversity and ecosystem function” (a *lot* of ecologists doing a *very* wide range of stuff in a *very* wide range of systems, and including many people who don’t even work on BDEF but just keep an eye on that literature) plus “people who know about the Price equation” (most of whom aren’t even ecologists). So depending on the paper, I might suggest as referees people whom I see as exceptionally sharp and broadminded ecologists. People like Bob Holt, Peter Morin, Tony Ives, and Gary Mittelbach are kind of my “search image” for this sort of ecologist.

  5. My own experience as an editor is that there are two types of suggested reviewers. I don’t know the exact ratio between the two types, but 50/50 is in the ball park. Type 1 is what I call sincere suggestions – these are the author’s honest opinion of who the top experts in the field are; often they list the author who wrote the paper identifying the concept the current paper is trying to test, etc. The second type is what I call gaming-the-system suggestions. They are often people close to the author, or at least who have seen the manuscript and expressed appreciation for it. As an editor it is my job to figure out which type it is, use the first, and ignore the second. I don’t actually find it that hard to distinguish. The sincere list usually has some names I recognize and overlaps with the people who come to my mind when I think of who should review the paper. The gaming list mostly has people I haven’t heard of or who seem far from the topic at hand, and often a quick glance at institutions will show overlaps (e.g. same part of the country or same small country, overlap with the previous postdoc institution of the author, etc.).

      “The gaming list mostly has people I haven’t heard of or who seem far from the topic at hand, and often a quick glance at institutions will show overlaps (e.g. same part of the country or same small country, overlap with the previous postdoc institution of the author, etc.).”

      Another data point supporting my contention that little ways of gaming the system are a bad idea even on ruthlessly Machiavellian grounds, because they’re easily seen through!

      I am surprised and depressed to hear that the ratio of these two types of referee suggestions is somewhere in the neighborhood of 50/50 in your experience. Not trying to toot my own horn here, but it honestly would never have occurred to me to make anything other than an honest suggestion. So I find it hard to imagine that anyone–never mind lots of people!–would try to game the system with their reviewer suggestions. Which just shows something about the limits of my imagination, obviously.

      • I should say that my comment below is based on the belief that readers of Dynamic Ecology are always in the sincere category!

  6. It’s very helpful if authors suggest a few reviewers and offer a few words of explanation of why each person is being suggested. Since we allow only one of the reviewers to be author-suggested, it’s not helpful if the authors suggest a large number! We do conflict of interest checks and check the web, so I can’t say we’ve run into questionable situations.

    So, in answering whether it’s an advantage to an author–I have my doubts that it clearly leads to a more positive outcome. I’ll have to check on that. The way it might work is that a paper that’s on the cusp with some good reviewer suggestions might be more likely to tip onto the review side rather than the desk reject side, but we’ve had a steady percentage of papers that go into review for at least 20 years, so I’m not sure it’s a strong impact.

    I do think there is one clear advantage to authors. Sometimes an associate editor is stretching to cover all aspects of a paper, especially here at Am Nat where a paper might bridge several areas, so it’s sometimes very helpful to provide a starting point. In that case, it can speed up the review process. The AE isn’t spending as much time struggling in unfamiliar territory and people who feel they have the right expertise are more likely to say yes to the invitation.

    • Interesting to get the Am Nat perspective Trish. As I said in the post, I think some aspects of this probably vary a lot among journals.

      Interested to hear that you think author suggested reviewers might sometimes be more likely to accept review invitations from Am Nat. That effect doesn’t show up in the FE data. I’d have thought it would.

    • GEB has the same policy of at most one recommended reviewer.

      I obviously can’t speak to AmNat, but I also think the benefits of suggesting reviewers have probably increased over time. I can’t recall if the study addressed that or not. But it has objectively gotten harder to get reviewers. As an AE I used to list 4-5 people and expect to get 2; now 8-10 is probably closer to the norm to get 2. And I think AEs are more prone to grab off the recommended list in a less than careful way, in desperation, after they’ve already come up with 8 independent names.

      Like you Trish, I have the intuition that at GEB we are careful and are not being outsmarted, and I would not expect a big advantage to listing suggested reviewers, even gamed ones. But I haven’t run the statistics. And I wouldn’t have any reason to think we run our journal better than Functional Ecology. It makes me very curious.

      I wonder if suggested reviewers might score papers higher for valid reasons (i.e. reviewers off the sincere list rather than the gamed list). For example, the author has probably put more time into thinking about who would be interested in and understand their paper than the AE has.

  7. Author-suggested reviewers can be very helpful, so it’s a shame that, because of the many cases of ‘fake reviews’ that have come to light since 2012, an increasing number of journals have stopped asking authors to suggest reviewers. (For anyone not familiar with this problem, see the ‘faked emails’ archive at Retraction Watch.) There are now around 300 retractions due to ‘fake reviews’ (done mostly by authors but also by third-party service providers), so it’s a major issue, and it has highlighted the lack of appropriate potential-reviewer checking. I’d just urge any editors to make sure they know what the checking arrangements are at their journals, and, if it’s supposed to be their responsibility, that they’re aware of this and know what to do. I have come across cases where everyone assumes it’s someone else’s responsibility, or where it’s mentioned to new editors in the induction material they get when they join editorial boards but then that information is never reinforced or refreshed.
    ps, much prefer the term ‘author-suggested reviewers’ to ‘author-preferred reviewers’ – they mean different things, and the former better describes what is being asked for/provided.

    • Re: the fake reviews problem, all of the ones I know of are at incredibly obscure journals, and 300 retractions is a drop in the bucket compared to the well over 1 million peer reviewed papers published every year. I confess I find it hard to get too worried about this.

      • These are just the cases that have been picked up; there may be many more. Some of the ones that have been identified are in journals from the major publishers, and include respectable established journals – e.g. yesterday’s RW post on another case of ‘compromised’ peer review, caused by third-party actions, at the British Journal of Clinical Pharmacology (comments are worth a look!).
        I think everyone has to be alert to what’s happening and be careful and cautious. Making sure appropriate checks and processes are in place – in other things besides assigning reviewers, e.g. when authorship changes are requested – is one way to help avoid problems.
