One oft-voiced concern about the peer review system is that it’s getting harder to find reviewers these days (e.g., this editorial from the EiCs of various leading ecology journals). Which isn’t surprising, given that academics have strong incentives to submit papers, but much weaker incentives to review them.
A few years ago, Owen Petchey and I proposed a reform known as PubCreds, the purpose of which was to oblige authors to review in appropriate proportion to how much they submit. For instance, if each paper you submit receives two reviews, then arguably you ought to perform two reviews for every paper you submit.
Owen and I pitched PubCreds to various individuals and groups who might've had some power to make PubCreds happen, and we didn't really get much traction. One reason among many we didn't get much traction is that people questioned the need for PubCreds. Heck, even some of the authors of the editorial linked to above questioned the need for PubCreds! This was somewhat frustrating to Owen and me, but in retrospect I can understand it. The fact was, we didn't have much hard data demonstrating the breakdown of the existing peer review system, at least not a breakdown so serious as to make major reform a matter of urgency.
So Owen and I decided to go get some data. The online ms handling systems that most journals have used for years compile data on how often individuals submit, how often they're invited to review, and how often they agree to review. So we approached the EiCs and managing editors of something like 30 ecology journals, asking if they'd be willing to share (anonymized) data. The only positive responses that led anywhere came from Population Ecology (which didn't have enough data to be useful) and the journals of the British Ecological Society (BES), of which Lindsay Haddon was managing editor at the time. Thanks to Lindsay's hard work extracting the relevant data from Manuscript Central, we were able to compile an anonymized dataset on how often individuals submitted to, reviewed for, and were invited to review for, the four BES journals from 2003 to 2010. Our paper analyzing the data has just been published (Petchey et al. 2014; open access).
Here are the headline results (read the paper for details; it’s short).
- Our main question of interest was whether individuals' reviewing and submission activities were in balance. In our dataset, "in balance" meant "doing 2.1 reviews for every paper you submitted", since the average paper in the dataset received 2.1 reviews. (UPDATE: defining "in balance" relative to the mean number of reviews per paper corrects for rejection without review. If you were to do a similar analysis for, say, Science and Nature, you'd presumably find that the average paper receives far fewer than 2.1 reviews, because many papers are rejected without review. UPDATE #2: In the comments, Owen jogs my memory, reminding me that we included in the analyses only submissions sent out for review.) For 64% of individuals in our dataset, the answer is "no": they either did at least twice as many reviews as needed to balance their submissions, or less than half as many. So the majority of individuals are either "overcontributors" or "undercontributors" to the peer review system.
- The relative abundance of over- vs. undercontributors depends on the assumptions you make about how to distribute "responsibility" for multi-authored papers (e.g., if you're the corresponding author on a multi-authored paper, does that mean you personally should do 2.1 reviews to balance that submission?). Depending on assumptions, 12-44% of individuals did at least twice as many reviews as needed to balance their submissions, while 20-52% did less than half as many.
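To make the classification concrete, here's a minimal sketch of the bookkeeping involved, assuming the 2.1-reviews-per-submission baseline and the "at least twice / less than half" thresholds described above. The function name, the toy numbers, and the `share` parameter are invented for illustration; this is not the paper's actual analysis code.

```python
# Hypothetical sketch of the over-/under-contributor classification.
# Toy numbers and names are invented; not the paper's actual code.

REVIEWS_PER_PAPER = 2.1  # mean reviews per submission in the dataset

def classify(reviews_done, papers_submitted, share=1.0):
    """Classify an individual given reviews performed and submissions.

    `share` is the fraction of "responsibility" assigned to this person
    per multi-authored paper (1.0 = full responsibility, as one might
    assume for a corresponding author; smaller values split it among
    coauthors).
    """
    needed = REVIEWS_PER_PAPER * papers_submitted * share
    if needed == 0:
        return "no submissions"
    ratio = reviews_done / needed
    if ratio >= 2:
        return "overcontributor"
    if ratio < 0.5:
        return "undercontributor"
    return "in balance"

print(classify(10, 2))       # 10 reviews vs. 4.2 needed -> overcontributor
print(classify(3, 2))        # 3 vs. 4.2 -> in balance
print(classify(2, 3))        # 2 vs. 6.3 -> undercontributor
print(classify(2, 3, 0.33))  # shared responsibility -> in balance
```

The last call illustrates why the 12-44% and 20-52% ranges are so wide: the same person can switch categories depending on how much responsibility you assign them for multi-authored papers.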
- Undercontributors mostly didn’t agree to do all the reviews they were invited to do. So undercontributors mostly didn’t undercontribute due to lack of opportunity to review, at least not completely.
- Researchers who submitted more were more likely to accept invitations to review.
Obviously, few ecologists submit to, and review for, only the BES journals. But there's no reason to think that this biases our results, as far as we can see. So we're reasonably confident that our results wouldn't change if someone were somehow able to compile a larger dataset from more journals. (UPDATE: I'm sure some people who are undercontributors to BES journals would be in balance, or even overcontributors, if you accounted for their reviewing and submitting for other journals. But I'm sure some people who are overcontributors to BES journals would be in balance or even undercontributors if you accounted for their reviewing and submitting to other journals. And I'm sure some people who are in balance in our dataset would not be in balance if you had data from many more journals. So when I say our results are unbiased as far as we can tell, what I mean is that Owen and I can't see any reason why ecologists would tend to overcontribute to BES journals compared to ecology journals as a whole. Or tend to undercontribute to BES journals compared to ecology journals as a whole. Or tend to make more balanced contributions to BES journals than to ecology journals as a whole. But if you can see a reason why BES journals might represent a biased sample, please say so; I really do want to hear people's thoughts on this!)
(UPDATE #4: In the comments, Douglas Sheil suggests a potential source of bias in our estimate of the proportion of people who are in balance. Briefly, the fact that we’re working with a sample of journals rather than a census of all journals might cause our data to underestimate the proportion of ecologists who are in balance, even if our data are a random sample (i.e. people react towards BES review requests the same way they react towards non-BES review requests). I just did a quick and dirty simulation to check out this suggestion and it looks like there might be something to it. Hard to say much more than that without a much more thorough simulation study. And even then it might not be possible to say more, since I’m sure that the existence and strength of any bias will be sensitive to the assumptions one makes, and many of those assumptions can’t be checked with the data we have. I doubt I’ll be able to make time to really look into this thoroughly, but if somebody wanted to pick up this idea and run with it, I could try to pitch in…)
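For anyone curious what such a simulation might look like, here's a rough sketch of the idea (not the simulation described above, and every parameter is an invented assumption): simulate people who are perfectly in balance across all journals, observe only the fraction of their activity that happens to land at a sampled subset of journals, and ask how many still appear in balance.

```python
# Rough sketch of the sampling-bias idea. People are constructed to be
# exactly in balance across ALL journals; we then observe only the share
# of their submissions and reviews that land at sampled journals. All
# parameters are invented assumptions.
import random

random.seed(1)
REVIEWS_PER_PAPER = 2.1
SAMPLE_FRACTION = 0.25  # share of activity captured by the sampled journals

def looks_in_balance(subs_seen, revs_seen):
    if subs_seen == 0:
        return None  # person never appears in the sampled journals
    ratio = revs_seen / (REVIEWS_PER_PAPER * subs_seen)
    return 0.5 <= ratio < 2  # neither over- nor undercontributor

results = []
for _ in range(10000):
    subs_total = random.randint(1, 20)
    revs_total = round(REVIEWS_PER_PAPER * subs_total)  # in balance overall
    # each submission/review independently lands in the sample or not
    subs_seen = sum(random.random() < SAMPLE_FRACTION for _ in range(subs_total))
    revs_seen = sum(random.random() < SAMPLE_FRACTION for _ in range(revs_total))
    r = looks_in_balance(subs_seen, revs_seen)
    if r is not None:
        results.append(r)

frac = sum(results) / len(results)
print(f"fraction still appearing in balance: {frac:.2f}")
```

Because each person's sampled counts are noisy, some genuinely balanced people will look over- or undercontributing in the sample, which is the direction of bias Douglas Sheil suggests. How strong the effect is would depend heavily on the assumptions, which is why a quick sketch like this can't settle the question.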
Overall, I was pleasantly surprised by the results. I was too cynical–I thought we’d find a very large proportion of people reviewing very little relative to how often they submit, balanced by a small proportion reviewing a lot relative to how often they submit. In fact, overcontributors aren’t that rare relative to undercontributors, and might even be more common than undercontributors. I was also pleasantly surprised that, on average, people who submit more are more likely to accept invitations to review. So score another victory for data over anecdotal impressions.
But having said that, undercontributors aren’t rare in an absolute sense, and so the only thing that keeps the system from breaking down is that the undercontributors are balanced by a sufficient number of overcontributors. Obviously, our data don’t give us any basis for predicting whether or how the relative abundance of over- and undercontributors might change in future.
UPDATE #3: Let me emphasize a point made in the paper, which I should’ve emphasized more in the post. We have no information on why individuals over- or under-contribute. There could be many reasons, some better than others. For instance, some undercontributors may serve as editors, and so decline requests to review because they contribute to the peer review system via their work as editors. Whatever the reasons for individuals over- or undercontributing, it’s of practical interest to know how common such individuals are.
Owen and I haven’t done much with PubCreds for a while now, but I still find these data interesting in their own right, and hope others will as well. The dataset is on Dryad for anyone who wants to explore it.
Apologies for the self-promotional post; I don't ordinarily post on my own papers. I'm only doing it because it's related to a topic I've blogged about in the past, and because I'd be very interested to hear what folks think of our results. Looking forward to your comments.