Functional Ecology just published a bunch of data from the past 10 years (i.e., for as long as data are available) on correlations between gender and various aspects of the peer review process (ht Retraction Watch). The headline results that most caught my eye (click through for much more):
- 1/3 of authors were women. Compared to their overall frequency, women were slightly underrepresented as sole authors (26%) and last authors (25%), but overrepresented as first authors (43%).
- The proportion of women among reviewers and editors increased over time, starting from a very low base in the case of editors. There were only four handling editors, all men, in 2004; now 24/64 editors are women.
- Male editors selected fewer women as reviewers (20-25%) than did women editors (30-35%). This difference was driven by differences between late-career men and women. Early career editors chose women as reviewers at similar rates, independent of editor gender. The proportion of women chosen as reviewers decreased with increasing seniority of male editors, but increased with increasing seniority of women editors.
- Women were slightly but consistently more likely to decline invitations to review. This was one of a number of gender-imbalanced results that were small in magnitude but very consistent across years.
- Men and women acting as reviewers scored papers identically on average.
- Papers with women as authors (whether first, senior, or corresponding) were equally likely to be sent out for review, were scored identically by reviewers on average, and were equally likely to be accepted.
The overall conclusion is that gender imbalance in some aspects of the peer review process nevertheless led to gender-neutral outcomes. And gender imbalances in some aspects of the peer review process are shrinking over time thanks mostly to changes in the composition of the editorial board.
A few thoughts:
- Kudos to Charles Fox, Sean Burns, and Jennifer Meyers for writing this paper. I know from personal experience that extracting these sorts of data from online ms handling systems is a lot of work. And it’s a really thorough paper. They also seem to have had a very close look at the literature on gender and peer review, to the point of catching a study that claims a non-significant result as its main conclusion.
- As the paper notes, one needs to be cautious in speculating about the reasons behind these results. There are multiple plausible interpretations that can’t be teased apart with the data available. So with that caveat in mind: I suspect many of the results reflect social networks (a possibility the paper notes). Editors are inviting reviewers whom they happen to know, or know of, and reviewers are taking into account whether they know the editor when deciding whether or not to review. And depending on your gender and seniority, your social network is likely to have a different gender mix. That’s certainly what I used to do as an editor at Oikos: if someone immediately came to mind whom I knew would be likely to agree to review and who would do a good job, I’d ask them to do it. Only if I couldn’t think of several good names off the top of my head (which in practice was fairly often) would I look at the paper’s reference list and do some googling to identify other potential reviewers. One gender-independent sign of the importance of social networks in the Functional Ecology data is that editors tend to choose reviewers from their own geographic region.
- The paper reviews the literature on gender bias in peer review outcomes and notes that most (not all) previous studies find that reviewer scores and editorial decisions are gender neutral. FWIW, gender neutrality of peer review outcomes jibes with my own anecdotal experience as an editor.
- It would be interesting to see similar analyses for other ecology journals. I suspect Functional Ecology is typical, but it would be interesting to know.
- The existence of small but consistent gender imbalances surprised me. You've got a growing editorial board of changing composition, inviting hundreds of reviews from a changing mix of hundreds of reviewers every year. But year after year, women are always 2-3 percentage points less likely to accept invitations to review? Year after year, women always submit their reviews 1-2 days slower than men on average? Year after year, men are (almost) always 1-5 percentage points less likely to agree to review if the invitation comes from a woman? Huh. I'd have thought that small average effects like these would be a lot noisier in magnitude and even direction. Anyone else surprised by this?
- The paper suggests blinding prospective reviewers to the name of the editor inviting them to review, as a way to eliminate the small tendency for men to decline invitations from women. I wouldn’t favor that, because as the paper notes, it likely would have the side effect of causing more men and women to decline to review.* My hope is that calling attention to this and other gender imbalances in the peer review process will help eliminate them.
- In light of these results, I suspect that Am Nat's experiment with double-blind reviewing won't make any difference to the gender mix of its authors. Or if it does make a difference, it will be for other reasons besides reducing gender bias in review outcomes, since those outcomes seem to be gender-neutral. For instance, the experiment might lead more women to submit to Am Nat. Or the experiment might increase the proportion of grad students among accepted authors, thereby increasing the proportion of women among Am Nat authors, because the percentage of women is higher among grad students than among postdocs or faculty. Note that I still think Am Nat's experiment is worth doing.
*A striking result having nothing to do with gender: in 2004, 70% of invitations to review for Functional Ecology were accepted. That number has been dropping steadily ever since and is now 47%. And remember, that decline is happening even though (I presume) Functional Ecology is making increasing use of rejection without review. This is why journals quite rightly worry about any policy change that might make it any harder to recruit reviewers than it already is. Indeed, I'll be curious to see if Am Nat's experiment with double-blind reviewing is making it harder to recruit reviewers. (UPDATE: it's not making it harder; see the comments)