Gender and peer review at Functional Ecology (CORRECTED, UPDATED)

Functional Ecology just published a bunch of data from the past 10 years (i.e., as far back as peer review data are available) on correlations between gender and various aspects of the peer review process (ht Retraction Watch). The headline results that most caught my eye (click through for much more):

  • 1/3 of authors were women. Compared to their overall frequency, women were slightly underrepresented as sole authors (26%) and last authors (25%), but overrepresented as first authors (43%).
  • The proportion of women among reviewers and editors increased over time, starting from a very low base in the case of editors. There were only four handling editors, all men, in 2004; now 24/64 editors are women.
  • Male editors selected fewer women as reviewers (20-25%) than did women editors (30-35%). This difference was driven by differences between late-career men and women. Early career editors chose women as reviewers at similar rates, independent of editor gender. The proportion of women chosen as reviewers decreased with increasing seniority of male editors, but increased with increasing seniority of women editors.
  • Women were slightly but consistently less likely to decline invitations to review. This was one of a number of gender-imbalanced results that were small in magnitude but very consistent across years.
  • Men and women acting as reviewers scored papers identically on average.
  • Papers with women as authors (whether first, senior, or corresponding) were equally likely to be sent out for review, were scored identically by reviewers on average, and were equally likely to be accepted.

The overall conclusion is that gender imbalances in some aspects of the peer review process nevertheless led to gender-neutral outcomes. And those imbalances are shrinking over time, thanks mostly to changes in the composition of the editorial board.

A few thoughts:

  • Kudos to Charles Fox, Sean Burns, and Jennifer Meyers for writing this paper. I know from personal experience that extracting these sorts of data from online ms handling systems is a lot of work. And it’s a really thorough paper. They also seem to have had a very close look at the literature on gender and peer review, to the point of catching a study whose main conclusion rests on a non-significant result.
  • As the paper notes, one needs to be cautious in speculating about the reasons behind these results. There are multiple plausible interpretations that can’t be teased apart with the data available. So with that caveat in mind: I suspect many of the results reflect social networks (a possibility the paper notes). Editors are inviting reviewers whom they happen to know, or know of, and reviewers are taking into account whether they know the editor when deciding whether or not to review. And depending on your gender and seniority, your social network is likely to have a different gender mix. That’s certainly what I used to do as an editor at Oikos: if someone immediately came to mind who I knew would be likely to agree to review and who would do a good job, I’d ask them to do it. Only if I couldn’t think of several good names off the top of my head (which in practice was fairly often) would I look at the paper’s reference list and do some googling to identify other potential reviewers. One gender-independent sign of the importance of social networks in the Functional Ecology data is that editors tend to choose reviewers from their own geographic region.
  • The paper reviews the literature on gender bias of peer review outcomes and notes that most (not all) previous studies find that reviewer scores and editorial decisions are gender neutral. FWIW, gender neutrality of peer review outcomes jibes with my own anecdotal experience as an editor.
  • It would be interesting to see similar analyses for other ecology journals. I suspect Functional Ecology is typical, but it would be interesting to know.
  • The existence of small but consistent gender imbalances surprised me. You’ve got a growing editorial board of changing composition, inviting hundreds of reviews from a changing mix of hundreds of reviewers every year. But year after year women are always 2-3 percentage points more likely to accept invitations to review? Year after year, women always take 1-2 days longer than men, on average, to submit their reviews? Year after year, men are (almost) always 1-5 percentage points less likely to agree to review if the invitation comes from a woman? Huh. I’d have thought that small average effects like these would be a lot noisier in magnitude and even direction (see the quick sampling sketch after this list). Anyone else surprised by this?
  • The paper suggests blinding prospective reviewers to the name of the editor inviting them to review, as a way to eliminate the small tendency for men to decline invitations from women. I wouldn’t favor that, because as the paper notes, it likely would have the side effect of causing more men and women to decline to review.* My hope is that calling attention to this and other gender imbalances in the peer review process will help eliminate them.
  • In light of these results, I suspect that Am Nat’s experiment with double-blind reviewing won’t make any difference to the gender mix of its authors. Or if it does make a difference, it will be for other reasons besides reducing gender bias in review outcomes, since those outcomes seem to be gender-neutral. For instance, the experiment might lead more women to submit to Am Nat. Or the experiment might increase the proportion of grad students among accepted authors, thereby increasing the proportion of women among Am Nat authors because the percentage of women is higher among grad students than among postdocs or faculty. Note that I still think Am Nat’s experiment is worth doing.
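
The quick sampling sketch promised above, as a rough Python simulation. The invitation counts and acceptance rates below are made-up round numbers for illustration, not figures from the paper; the point is just how much year-to-year noise sampling alone would produce in a 2-point gap estimated from a few hundred invitations per gender.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical numbers, NOT from the paper: suppose the journal invites
# ~600 men and ~300 women each year, and women's true acceptance rate is
# 2 percentage points higher (0.57 vs 0.55).
n_men, n_women = 600, 300
p_men, p_women = 0.55, 0.57

# Simulate 10 years of observed gaps (women minus men).
gaps = [rng.binomial(n_women, p_women) / n_women
        - rng.binomial(n_men, p_men) / n_men
        for _ in range(10)]
print([f"{g:+.3f}" for g in gaps])
# The standard error of each yearly gap is about 0.035, so sampling noise
# alone should often flip the sign of a 2-point true difference; a gap that
# sits at +2-3 points every single year is steadier than chance would suggest.
```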

*A striking result having nothing to do with gender: in 2004, 70% of invitations to review for Functional Ecology were accepted. That number has been dropping steadily ever since and is now 47%. And remember, that decline is happening even though (I presume) Functional Ecology is making increasing use of rejection without review. This is why journals quite rightly worry about any policy change that might make it any harder to recruit reviewers than it already is. Indeed, I’ll be curious to see if Am Nat’s experiment with double-blind reviewing is making it harder to recruit reviewers. (UPDATE: it’s not making it harder; see the comments)

16 thoughts on “Gender and peer review at Functional Ecology (CORRECTED, UPDATED)”

  1. From what I can see women were consistently more likely to ACCEPT reviews (when they responded), but at the same time less likely to respond to invitations (see fig 3 for these effects). Taken together, these two effects resulted in females being slightly more likely to accept invitations.

    “These two effects largely counteracted each other such that the probability that an invitee responded and agreed to a review invitation was at most slightly (~2%) higher for female than male reviewers (χ² = 3·70, P = 0·054).”

    • “From what I can see women were consistently more likely to ACCEPT reviews (when they responded), but at the same time less likely to respond to invitations (see fig 3 for these effects). Taken together, these two effects resulted in females being slightly more likely to accept invitations.”

      Thanks, fixed.

  2. Just to note, 10 years is as far back as we have peer review data for, but the journal has existed for 29 years. For me, another interesting point was the effect of editor seniority on reviewers’ tendency to agree to review: specifically, reviewers were less likely to respond, and less likely to agree to review, if the editor was a more senior academic (regardless of how long they had been on the board). There are a few possible reasons for this, but it could be due to the social network effect: people suggesting reviewers whom they happen to know better, with editors who are more senior in their careers selecting reviewers who are also more established, and who are generally less likely to agree to review.

    • “10 years is as far back as we have peer review data for, but the journal has existed for 29 years.”

      Thank you, fixed now, that was silly of me, don’t know why I said the journal’s only been around for 10 years. I read the journal, I know how old it is!

      My guess is the same as yours re: why reviewers more often decline invitations to review from senior editors–senior people invite reviews from people they know, who are more senior and so more likely to decline.

  3. Hi Jeremy, Thanks for the post, and thanks to the study’s authors for providing some real analysis.

    I did an update on how double blind has been going at Am Nat here:
    http://comments.amnat.org/2015/11/update-on-modified-double-blind.html

    To answer your question, comparing January 1 to November 18 of this year against 2014, the rate at which reviewers accepted invitations has stayed the same (actually, a bit of an uptick, though that’s probably normal fluctuation). As I mention in the comments, a few *very* disgruntled reviewers have refused!

    We too seemed to be gender neutral when trying to come up with data for the Women in Science initiative at Evolution 2014 across almost all aspects of journal and society activity. Reviewer participation was also the stickiest wicket for us. But gender isn’t the only bias. When we’ve had Meet the Editor sessions in the past, it’s been clear that there are many concerns about unconscious bias–institutional affiliation, career stage, ethnicity, as well as gender–so, it seemed worth a try across a number of fronts.

    If you haven’t seen it, I like this video from the Royal Society that just came out about Understanding Unconscious Bias: https://royalsociety.org/topics-policy/publications/2015/unconscious-bias/

    • Cheers for this Trish, thanks for pointing out the update on the double blind expt at Am Nat, will link to that in the next linkfest.

      “We too seemed to be gender neutral when trying to come up with data for the Women in Science initiative at Evolution 2014 across almost all aspects of journal and society activity. ”

      That’s great to hear. Was there a presentation with slides that you’d be able to share? Or have the analyses already been published somewhere?

      “When we’ve had Meet the Editor sessions in the past, it’s been clear that there are many concerns about unconscious bias–institutional affiliation, career stage, ethnicity, as well as gender–so, it seemed worth a try across a number of fronts.”

      That’s a good point. It would be very useful to compile data on this. I’m cautiously optimistic that it would turn out that the peer review outcomes are unbiased–that the editors and reviewers are judging mss fairly, based on their content. Note that I could see the analysis getting a little tricky if there’s some reason why “true ms quality” is correlated with some author attribute. I’m thinking for instance of how academics in some countries get paid big bonuses for publishing in Nature. That’s caused a flood of low-quality submissions to Nature. And so if you looked for an association between author country and review outcome, you’d find one–but not because of an editorial bias against authors from those countries. Not saying that specific example is relevant at Am Nat, obviously–as far as I know, no country gives financial rewards for Am Nat papers. It’s just the first example that occurred to me in which there could be a correlation between an author attribute and review outcome without there being any bias against authors with that attribute.
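
      To make that last point concrete, here is a minimal simulation sketch (hypothetical groups and numbers, nothing from Am Nat’s actual data) of an attribute correlating with review outcomes even though the decision rule is completely blind to it:

      ```python
      import numpy as np

      rng = np.random.default_rng(1)

      # Hypothetical setup, nothing from Am Nat's actual data: group B submits
      # lower-quality mss on average (e.g. because of publication bonuses), but
      # the accept/reject rule looks only at quality, never at group.
      n = 10_000
      group = rng.choice(["A", "B"], size=n)
      quality = np.where(group == "A",
                         rng.normal(0.0, 1.0, n),   # group A: baseline quality
                         rng.normal(-0.5, 1.0, n))  # group B: lower mean quality
      accepted = quality > 1.0                      # blind to group membership

      for g in ("A", "B"):
          print(f"group {g} acceptance rate: {accepted[group == g].mean():.3f}")
      # Prints roughly 0.16 for A and 0.07 for B: the attribute predicts the
      # outcome even though the decision never looked at it.
      ```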

      “To answer your question, comparing January 1 to November 18 of this year against 2014, the rate at which reviewers accepted invitations has stayed the same (actually, a bit of an uptick, though that’s probably normal fluctuation). As I mention on the comments, there have been a few *very* disgruntled reviewers refuse!”

      That’s good news, I think. It’s unfortunate that a few people are seriously disgruntled. But any policy change (including “keep the status quo”) would annoy *somebody*.

  4. I wish we knew something about the 42% of MSes that were declined without review. I would hypothesize that unconscious gender bias is more likely to arise at this stage than during the review process.

    But a super paper. Thanks for pointing it out.

    • “I wish we knew something about the 42% of MSes that were declined without review. I would hypothesize that unconscious gender bias is more likely to arise at this stage than during the review process.”

      In light of the data from other stages of the peer review process and my own anecdotal experience as an editor, my instinct is that there wouldn’t be any detectable gender bias at that stage either. But obviously I could be wrong, and I agree it would be useful to check.

      (UPDATE: see comments below, and the original post. As the OP states, the paper found no gender bias in decline-without-review decisions.)

      • “In light of the data from other stages of the peer review process and my own anecdotal experience as an editor, my instinct is that there wouldn’t be any detectable gender bias at that stage either.”

        Very possible. I just said “more likely” — not necessarily that either would be detectable. Also, I think it would be important to slice and dice by reputation, which you could approximate by professional age (years since PhD or whatever). I would hypothesize that for well-known authors, their reputation would outweigh unconscious bias. But for relatively unknown authors, any bias that might exist would be more detectable.

      • “Also, I think it would be important to slice and dice by reputation, which you could approximate by professional age (years since PhD or whatever). I would hypothesize that for well-known authors, their reputation would outweigh unconscious bias. But for relatively unknown authors, any bias that might exist would be more detectable.”

        Yes, that’s worth looking into.

        A broader remark: One’s analyses (say, of an author seniority x gender interaction) ought to be prespecified if hypothesis testing is the goal. Decide what hypotheses you’re going to test, and how, before you see the data. Because there are *lots* of subgroups and interaction terms here that one might want to look at. If you just explore the data in an open-ended way, you basically chuck your ability to do inference. Which isn’t to say we shouldn’t do exploratory analyses, of course, just that you can’t combine hypothesis generation and testing on the same data. Brian has a good post on this: https://dynamicecology.wordpress.com/2013/10/16/in-praise-of-exploratory-statistics/
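
        To illustrate the problem, here’s a tiny simulation (the subgroups are invented, and there is no real effect in any of them): test enough subgroups and some will come out “significant” by chance.

        ```python
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)

        # Illustrative only: 20 post-hoc subgroups (seniority x gender x year
        # cells, say), with NO true difference in any of them.
        n_tests, n_per_group = 20, 200
        hits = 0
        for _ in range(n_tests):
            men = rng.binomial(1, 0.5, n_per_group)    # same true rate
            women = rng.binomial(1, 0.5, n_per_group)  # for both groups
            _, p = stats.ttest_ind(men, women)
            hits += p < 0.05

        print(f"'significant' subgroup differences: {hits} / {n_tests}")
        # At alpha = 0.05 you expect ~1 spurious hit per 20 tests, so an
        # open-ended trawl will usually turn up "effects" even in pure noise;
        # prespecifying the comparison keeps the error rate at its nominal level.
        ```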

        And to be clear, I know you know all this, my remarks here are aimed at any readers who happen not to be familiar with this statistical issue.

      • Thank you Charles. And rereading the original post, I see that I mentioned this already in the last bullet point of my summary of your work.

      • Right you are. Bad on me for skimming while running code, instead of paying proper attention. A very nice paper. Thanks for all the hard work. (And it’s clearly a lot of work to put together that data set.)

  5. Hi Jeremy – FYI, we found similar results for the New Zealand Journal of Ecology (although we had a much less comprehensive dataset): http://newzealandecology.org/system/files/articles/3136.pdf.

    One point we thought was important to make here was that although acceptance outcomes didn’t differ for men and women, being selected as a reviewer can be an important career- and skill-building opportunity, particularly for early-career scientists, so we argued that we should all work to avoid gender bias at this stage in the process too.

    • Cheers for this Hannah, very interesting and encouraging to see that the FE result on unbiased outcomes generalizes. Interesting to see that the result that male editors have a slightly greater tendency to select male reviewers generalizes as well. Hopefully increased awareness and greater use of reviewer databases will help address this.

  6. Pingback: Friday links: The Cubs victory tweet prediction scam, and more | Dynamic Ecology
