In an old post, I advised authors of mss to always suggest referees in their cover letters to journal editors, and gave advice on how best to do it. I think you should suggest referees because
- many journal editors want you to
- suggesting referees helps the editor understand the audience for your paper, and so helps the editor identify appropriate referees even if the editor doesn’t use your suggestions. For instance, one reason I suggest referees for my protist microcosm papers is to keep editors from mistaking my work for microbial ecology and having it reviewed by microbial ecologists, who aren’t the audience for my work and who are more likely to misunderstand it.
But one issue I didn’t talk about is whether suggesting referees affects the fate of your paper. Put bluntly: is your paper more likely to be accepted if you suggest referees? Put cynically: can you game the system by suggesting referees who will like your paper, or by asking that people who won’t like your paper be excluded as referees? Charles Fox and colleagues address this and other questions in a paper now in press, using 10 years of very detailed data from Functional Ecology (ht Small Pond Science). It’s an interesting exercise, following up on their earlier work on gender balance of peer review outcomes at Functional Ecology. Below the fold is a brief summary of some key findings, along with some commentary.
tl;dr: I think these data were well worth looking at, but they don’t allow you to isolate the effect of suggesting referees on editorial decisions.
Some of the headline results, with some commentary (click through for much more, including a thorough review of other data on these topics):
- 50-70% of authors suggested referees before this became required in mid-2010. I’m very surprised it was that high. I was an editor at Oikos at that time, and very few authors suggested referees for the papers I handled. So I bet this number varies a lot among journals. Just guessing, but I bet the number is higher at more selective and prestigious journals. In contrast, <10% of authors suggest non-preferred reviewers; that jibes with my experience as an editor at Oikos.
- About 50% of author-preferred reviewers are used by editors, up from about 30% back in 2004 (non-preferred reviewers are hardly ever used). I’m sure a big part of this is that editors find it increasingly difficult to line up reviewers, so they’re increasingly happy to take up authors’ suggestions. But while it may be convenient to take authors’ suggestions, it’s not necessarily any more effective: author-suggested reviewers and other reviewers agree to review at the same rate (a rate which has been dropping rapidly, btw).
- Suggested reviewers are mostly male, but the proportion of women among suggested reviewers has been increasing steadily and is now 25%. Notably, 25% is fairly close to the proportion of women among Functional Ecology’s first, last, and corresponding authors (33%). That suggests to me that the gender balance of author-preferred reviewers basically reflects the gender balance of ecology faculty (note that you’d expect women to comprise a slightly greater proportion of authors than preferred reviewers, because ecology grad students are more likely than ecology faculty to be female, and are more likely to be authors than to be named as preferred reviewers).
- Other than the previous bullet, there aren’t many gender gaps here, and the ones that exist are small. Male and female authors suggest preferred reviewers at similar rates, for instance. And women suggested as preferred reviewers were actually more likely to be chosen as reviewers than were men, though the initially-substantial gender gap here has declined to zero in the last couple of years.
- Before suggesting referees became mandatory, the probability that a paper would be sent out for review was independent of whether authors suggested referees. But papers for which authors suggested non-preferred referees were more likely to be sent out for review and more likely to have a revision invited rather than being rejected. The differences are bigger than I’d have expected: on the order of 8-10 percentage points. Fox et al. chalk this up to exclusion of non-preferred reviewers reducing the likelihood of negative reviews. I have a different, not mutually exclusive thought. I suspect that editors are more likely to send controversial papers out for review, and less likely to reject controversial papers after review. Not because they want to court controversy in order to draw readers to the journal–journal editors don’t think like that. Rather, they want to make sure controversial papers get a full and fair hearing and that controversial scientific issues are decided in public rather than via private editorial decisions. Which would explain why papers for which non-preferred referees are suggested are more likely to be sent out for review and invited for revision, if authors of controversial papers were particularly likely to suggest non-preferred referees.
- Finally, the big result: author-preferred reviewers score mss substantially higher on average than do other reviewers. That’s in line with every previous study, apparently. As an aside, note that the scores of author-preferred reviewers and those of other reviewers have exactly the same-shaped distributions, except the former distribution is shifted in a positive direction. So it’s not as if author-preferred reviewers always hand out positive scores. And mss reviewed by author-preferred reviewers are substantially more likely to be invited for revision rather than rejected, to a degree that’s only partially explainable by the higher scores author-preferred reviewers give mss. Fox et al. hypothesize that differences in review tone may explain the remainder of this difference.
That last bullet needs discussion. It’s tempting to jump to the conclusion that recommending referees is just a way for ecologists to game the system. That author-recommended reviewers are just giving undeserved positive reviews to their friends. I wouldn’t jump to that conclusion for two reasons, one statistical and the other based on personal experience.
First, authors aren’t randomly assigned to recommend reviewers or not. So whether or not a paper has author-preferred reviewers could be–I’d say likely is–confounded with other features of the paper and its authors. In particular, I suspect that authors who recommend reviewers in their cover letters tend to be good, experienced, established ecologists, or students or postdocs of such ecologists. I wouldn’t be surprised if they submit to and publish in Functional Ecology more often than authors who don’t recommend reviewers. I suspect that they’re more likely than average authors to have a good sense of what sort of paper is a good fit for Functional Ecology. And frankly, I wouldn’t be surprised if they write better papers on average than authors who don’t recommend reviewers. So much so that, if you were to have papers with author-preferred reviewers re-reviewed by other reviewers, those papers would still score higher and fare better on average than papers lacking author-preferred reviewers. I’m speculating of course, and I could be wrong. I doubt this confounding fully explains the results in the last bullet. But I wouldn’t be surprised if it explains some non-trivial fraction of those results. There’s no way to say for sure.
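To make the confounding point concrete, here’s a toy simulation (every number in it is invented for illustration, not drawn from Fox et al.’s data). Suppose authors who suggest reviewers write slightly stronger papers on average, and suppose every reviewer scores papers with no bias at all, just true quality plus noise. Even then, papers with author-suggested reviewers come out with higher average scores:

```python
import random

random.seed(2)

def review_score(quality):
    # an unbiased reviewer: true quality plus random noise
    return quality + random.gauss(0, 1)

# Assumption (invented numbers): authors who suggest reviewers write
# papers of slightly higher average quality (5.5 vs 5.0 on some scale).
suggesters     = [review_score(random.gauss(5.5, 1)) for _ in range(50_000)]
non_suggesters = [review_score(random.gauss(5.0, 1)) for _ in range(50_000)]

mean = lambda xs: sum(xs) / len(xs)
gap = mean(suggesters) - mean(non_suggesters)
print(f"score gap with perfectly unbiased reviewers: {gap:.2f}")
```

The simulated gap tracks the assumed quality difference, with no reviewer favoritism anywhere in the model. That’s all confounding means here: you can’t tell from the observed gap alone how much comes from reviewer leniency and how much from who chooses to suggest reviewers in the first place.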
Second, insofar as authors do manage to steer mss to reviewers disposed to like those mss, I don’t think there’s anything wrong with that. It’s just a side effect of something perfectly reasonable that we wouldn’t want to change. When I suggest reviewers, which I always do, I do not think to myself “Who would really like this ms?” or “Which of my best friends can I get to review this?” Instead, I ask myself, “Who are the best people working on this topic and/or in this system?” That is, I’m thinking about the primary audience for the ms, and then suggesting as reviewers top people from that audience. Now, are those people more likely than a randomly-chosen ecologist to like my ms? Almost certainly! And rightly so, because they’re part of the intended primary audience. That people in the intended primary audience for an ms will tend to like it better than people outside the intended audience just shows that different ecologists have different interests, expertise, and goals, and so make different professional judgments about which mss are best (by any criteria). In other news, the sun rose in the east today. The question is how to best review mss in light of the obvious and unalterable fact that ecologists don’t all think alike.*
When I was an editor at Oikos, I answered that question by never taking more than one review from author-suggested referees. I typically wanted reviews from a range of ecologists to inform my decision. Ideally, one or two from specialists on the topic and/or study system, plus one or two from broad-minded ecologists not specialized on the topic and/or system. For what it’s worth, I think the data in Fox et al. highlight the wisdom of this approach. Since 2010, mss that only get one review from author-preferred reviewers have been only slightly more likely to be invited for revision than mss getting no reviews from author-preferred reviewers. It’s the rarer mss getting 2-3 reviews from author-preferred reviewers that are much more likely to be invited for revision (in most years). As an editor at a general ecology journal like Functional Ecology, I don’t think you want to give too much weight to what a relatively narrow segment of ecologists thinks of any given paper. Which is what you’re probably at high risk of doing when most or all of your reviews come from author-preferred reviewers.
As always, looking forward to your comments.
*You can change where disagreement among ecologists is manifested during the review and publication process, but you can’t substantially reduce disagreement.