Choosing reviewers, recognition not recall, and why lists like DiversifyEEB are useful

Last week, I was assigned a paper to handle as an Associate Editor at American Naturalist. After reading through the paper and deciding it should go out for review, I began the task of finding potential reviewers. There were two people who immediately stood out to me as qualified reviewers. But AmNat likes to have a list of six potential reviewers to work from, so I continued through my standard process: 1) try to think of another person; 2) struggle with that; 3) think “surely I can write the editorial office folks and tell them we should just go with these two I already thought of because they’re perfect”; 4) decide I need to try harder before giving up; 5) after some more effort, end up at a list of six (or, in this case, seven) people who would be good reviewers, ranked in the order in which I’d like them to be asked. After going through that process, the two people I originally thought of were still on my list, but they were numbers 4&5. For some other papers, the initial people I thought of didn’t end up on my final list at all. And I have never had the first two people I thought of end up being the first two people on my list of six.

In other words: there were really good options who I only thought of after working at it for a while; those people were better options for this task than the people who initially occurred to me. To me, this is striking, but not really surprising. It’s what motivated the DiversifyEEB list that I created with my colleague, Gina Baucom. We all have biases, and those make it so that the people we think of first aren’t necessarily the best ones. And, moreover, our biases make it so that we’re more likely to think of well-known white men. That’s just how our brains work.

As I thought through this on my walk home, it reminded me of a story Kay Gross told me shortly after DiversifyEEB launched. Kay said that, many years back, she had a conversation with Margaret Davis. What Margaret told Kay is that, when she got a phone call asking her to recommend people for something, she would say, “Let me think about that and get back to you”. She did this because she had noticed that the first set of names she thought of were always men. But, if she thought it over more, she came up with more names and more diverse names. I found it especially interesting to learn that Margaret Davis had created a set of cards, adding a new card whenever she met an interesting woman scientist; during the time between getting the call and getting back to the person, she consulted that set of cards (her own personal DiversifyEEB list!) to think through people who were well-suited but who didn’t initially occur to her.

Back to the specific topic of finding reviewers: Charles Fox and colleagues have done a set of really interesting studies related to gender and the publication process. In one, they found that just 25% of the reviewers suggested by authors were women. In another, they found that only ~27% of the reviewers invited by associate editors were women. I initially thought that perhaps one solution to the problem of lack of gender diversity in reviewers would be to have more journals ask for lists of 6 potential reviewers — perhaps thinking longer about who should review something would increase the diversity of who they think of? But it turns out that Functional Ecology already asks their AEs to come up with 6 potential reviewers, so clearly that, on its own, will not solve the gender balance problem.

After more reflection, perhaps it makes sense that the lists are still pretty biased, even if they have more people on them: these potential reviewer lists still rely a lot on recall (that is, who I think of as I think about a particular topic), not recognition (that is, choosing from a list of names that might be suitable). And the original motivation for DiversifyEEB was learning (from Joan Strassmann) about psych research showing that the best way to come up with more diverse groups is to rely on recognition, not recall. (If you remember nothing else from this post, remember “recognition, not recall” as a strategy for increasing diversity!)

So, if you are an associate editor for a journal (or, really, in any other position where you are trying to come up with a list of scientists for something):

  1. It’s worth the effort to try to come up with a longer list. In that process, you are likely to think of people who are better options. This will lead to better reviews (or a better seminar series or candidate list or whatever it is you’re trying to do).
  2. Once you have your list, consider the diversity of it. Does it include diversity in terms of race, gender, career stage, and institution type (including non-academic ones)? In some cases, your list might intentionally be lacking a form of diversity (e.g., a candidate list for an endowed chair probably won’t include many early career folks). But, in most cases, a lack of diversity will reflect our inherent biases. (We all have them! The key is to recognize them and work to counter them.)
  3. If your list seems to be lacking in diversity, try to find lists that will give you more ideas. DiversifyEEB is one, but you can also look other places (e.g., if you are trying to think of Darwin Day speakers, a scan of the editorial boards of journals like Evolution, AmNat, J. Evolutionary Biology, etc. might give you ideas). Another great strategy, especially for looking for reviewers, is to go to the webpage of the person you first thought of and look at their grad students & postdocs. This includes looking at recent grads who have moved on to other positions.

As I said above, the key is being aware of the biases we have, recognizing when outcomes indicate biases are at work, and working to counter them. Lists like DiversifyEEB are one way to try to do that, and I love knowing that Margaret Davis had created her own version of a DiversifyEEB list long ago! I’d love to hear from readers about what strategies you use to try to increase diversity when coming up with potential reviewers, seminar speakers, etc!

27 thoughts on “Choosing reviewers, recognition not recall, and why lists like DiversifyEEB are useful”

  1. Thanks for the post, they are great suggestions. I wonder whether we should also consider potential unintended negative consequences for female or minority academics during the transition to equity. As an example, I think the Australian Research Council tries to get roughly equal numbers of male and female academics to assess grant applications. On the face of it, this sounds great. But what if, because of past gender biases, there are fewer senior female academics, and as a result, the female academics are given a greater average number of proposals/papers to review compared to their male counterparts? Is there a risk that extra time allocated to ‘service to discipline’ could take time away from their research to the detriment of their own track record and future chances of funding success?

    • I wonder about this issue as well. I’m an AE for three journals and I try hard to diversify my lists of suggested reviewers but I don’t want to overrepresent the underrepresented in this unrecognized but vital part of science. I personally am not missing out on nearly unlimited opportunities to review papers. We may need to be careful we don’t send even more requests to the limited # of senior women in the field in our effort to diversifyEEB.

      One thing I do try to do is recommend young female reviewers when I turn down ad hoc reviews (for papers that look like they will be interesting). I like Margaret Davis’s cards idea and plan to adopt it so that I can do this better.

      Thanks for this post. This will definitely make me a more thoughtful AE.

      • Yes, I agree with the concern and with Emily’s suggestion that calling on younger reviewers is a good solution!

  2. Thanks — interesting and useful as usual. Can someone (Meg is the first person who comes to mind…) recommend a link to a good pithy summary of the “recognition not recall” business?

  3. I spent a while looking into this back when we were creating DiversifyEEB and was struggling to find the literature. I ended up emailing these really illustrious psych folks which felt super awkward (even though they all ended up being very generous!) What I was told about recognition, not recall was: “The fact that it’s easier to recognize than recall people is so well-established it wouldn’t be in studies anymore. It would be in cognitive psych or memory textbooks.” This is one foundational study on the topic:

    (PDF: Loftus.1971.pdf)

    and this is a textbook chapter that covers it:

    (PDF: science-12.pdf)

    There’s also a whole literature on “false fame” and the impact of gender stereotypes on false fame (where fame = prestige). I find this literature harder to fully follow, to be honest (but also have never managed to find the time to dive as deeply into it as I’d like; I’ve put off writing a post on the subject for a year and a half because I keep waiting until I manage to find that time!) Papers like this review by Greenwald & Banaji:

    (PDF: 1995_Greenwald_PR.pdf)

    show gender bias in fame judgments resulting from implicit social cognition (that is, without the person who is doing it realizing that it is happening).

    This Banaji & Greenwald paper is a foundational one in the area:

    (PDF: 21fa38619123007779c7c900b3fb50fc527a.pdf)

    It shows that there was greater attribution of fame/prestige to male names than to non-male names.

    So, I think the general idea is we’re more likely to think of men as famous/prestigious, and it’s hard to recall names (but we’re more likely to recall famous people), which then leads to gender bias when we rely on recall rather than recognition.

    If other folks have read up on this area, I’d love to get more recommendations for what to read and more insights into what studies have found!

    • Lately I’ve started defaulting to skepticism about any result in social psychology that hasn’t stood up to a big preregistered replication. Unfortunately, the Open Science collaboration’s recent preregistered replications of many social psychology results seem not to have attempted to replicate that paper of Banaji & Greenwald’s.

      A quick google turns up Steffens et al. 2005, which argues that the original Banaji & Greenwald paper is incorrect because the famous male names used in the paper were more famous than the famous female names, and presents experimental results supporting an alternative interpretation. But the Steffens et al. experiments weren’t preregistered and the sample sizes weren’t massive.

  4. DiversifyEEB is specifically *NOT* useful for this purpose, because the list is exclusionary by design. White, heterosexual, cis-gendered, able-bodied males are not eligible for the list.

    If you want to use a list for “recognition”, that’s fine, but it should be an *INCLUSIONARY* document that allows *EVERYONE* to sign up. Just because someone is a white male, it doesn’t mean that their name is on the tip of every editor’s tongue.

    The whole idea of DiversifyEEB as an exclusionary document is wrong-headed. But I guess identity politics rule the day.

    • Boborino,

      DiversifyEEB is designed to promote inclusion because pretty much the whole academic universe is structured to foster the success of “white, heterosexual, cis-gendered, able-bodied males,” as you put it.

      This list is useful because people want to make a point to reach out to include people who are traditionally underrepresented and traditionally excluded from full participation. I don’t have to show you statistics for you to grasp this concept. It’s pretty straightforward how folks who meet the criteria to be listed in DiversifyEEB are members of group(s) that are marginalized and underrepresented in our field.

      Because DiversifyEEB is specifically designed for folks who meet those criteria for underrepresentation, then clearly the people going to that list are trying to find people who meet those criteria. Do you get mad at a vegetarian restaurant because they don’t serve meat?

      I’ve got good news for you, though: your idea of an inclusive list exists. It’s the membership directory for the Ecological Society of America.

  5. Quick thoughts:

    1) Does AmNat ask authors to submit names of suggested referees? I find those can be useful in this context, but I’m dismayed by how few authors actually submit names of suggested referees.

    2) You focus on what subject editors can do – which is appropriate, since they are the ones actually inviting referees – but I personally think the EIC should lead the way, both by selecting members of the board that reflect the diversity we seek and guiding SEs in how to hone their craft as editors.

    3) @hormiga: lolz.

  6. Thank you for this. Very thoughtful comments. I also like to especially include women who I don’t cite in the paper. Usually, I assume the Editor will go straight to the reference section to find reviewers, so I like to leave all the female names there open for the editor to choose. This may help maximize the probability that the editor asks women. Given that most editors will likely end up going with more men than women anyway, I hope this can compensate a little. Not sure if this is a good idea, but thought I would mention it.

    Diversity EEB is a very useful list indeed!

  7. Great post Meg, and I agree with you: if you are busy it is so easy just to go with the names that pop into your head, and that means, in my case, people who are white, getting on in years, and male, or former PhD students. Thanks to Twitter and reading blogs I have over the last few years become very much aware of this problem and now resist the temptation to go with the recall option; instead I go onto Web of Science and scour through the recent papers looking for potential referees from the pool of ECRs (and as far as I know I am gender neutral, relying entirely on the appropriateness of the discipline).

    Unfortunately a lot of these ECRs turn down the invitation to review, as do many of the suggested referees (again usually young post-docs or final year PhD students) that my first choice referees suggest as alternatives from their labs if they are too busy to do it themselves. There seems to be a view among some of the ECRs that reviewing is not worth doing or that they are not obliged to do it. We at Insect Conservation & Diversity have written an editorial about this problem – the zero sum reviewer.

    • I’ve also noted that ECRs increasingly turn down reviewing opportunities, which I don’t understand. I was advised to take every opportunity I got at that stage (once you’re doing more than say 5-10/year it’s a different conversation). As a general rule in academia, everything from papers to grants to awards occurs by peer review. And the best possible way to learn what reviewers are looking for, and how your work appears to them, is to be on the other side evaluating.

      • Yes, I learned (and continue to learn) so much from doing reviews, and seeing what other reviewers said about the mss I reviewed.

        I wonder if ECRs are increasingly turning down reviewing opportunities because they’re getting more of them? I’m recalling Functional Ecology’s data on how they’re expanding their reviewer pool to include many more ECRs. Perhaps at least some ECRs are now doing 5+ reviews/year?

      • I’m an ECR (postdoc) and have done >5 reviews already this year, and did >5 last year too, so it does happen. I don’t find it that onerous; I both enjoy giving feedback on manuscripts and consider it an important professional activity. I accept several more review invitations than I turn down, but did turn down a couple this year. One just came at a bad time when, given other obligations, I wouldn’t have been able to meet the journal’s requested timeline (they requested a quick turnaround). Another I turned down because it came from a potentially predatory open access journal (on Beall’s list) that I had never before heard of, and I don’t want to support those practices.

        I 100% agree that being a reviewer has made me better at writing manuscripts and grants, and wholeheartedly agree that ECRs should take (nearly) every opportunity to do so.

      • Huh, I haven’t noticed ECRs turning down a lot of review opportunities, but my sample size is much smaller!

  8. My impression while checking in new mss is that almost all suggest reviewers. It’s not a required field. The author-suggested lists however also tend toward the usual suspects. We do have a rule that we only invite one author-suggested reviewer at a time.

    I’ll add that, if an AE hasn’t listed a specific order of preference, we make a point of asking under-represented researchers on the list. (the system defaults to alphabetical–so we can work around the recall problem already for the order).

    I’ll also add that the only area we were able to find clear bias–back when we crunched through many aspects of the journal and the society–was with reviewers. All other areas reflected the percentage of women who participate in general (e.g., the percentage of women first authors submitting to the journal is the percentage that gets in). We were only looking at women because we were producing numbers for a particular project.

    We are distributing Meghan’s post to the board to reinforce the emphasis the editors have had since the number crunching showed a problem.

    A piece also recently came out about this problem in geoscience.

  9. “Yes, I learned (and continue to learn) so much from doing reviews, and seeing what other reviewers said about the mss I reviewed”. Learning from other reviewers was one of the best parts of reviewing. In my field, however, journals are increasingly not sharing the reviews. You submit a review, and they never share with you the decision or the other comments. Sometimes if the paper is revised and they need a follow-up review, then they will share things. Essentially, it’s all about their needs. It used to be a few snobby journals doing this. But it’s spreading. I don’t know why. Any idea?

    About keeping a list of reviewers: I have been keeping a list for about 10 years now. At first I did it as a quick way to have affiliation and contact info handy; I had to fill these in on forms and I hated hunting them down. As the list grew longer, I realized that I usually found someone there who had not originally come to mind. In my case I tended to forget clinician scientists from abroad, especially Asia and South America. They don’t attend the same conferences I do, and their English is limited, so they don’t participate in some discussions or ask questions. Because of their clinical obligations, they also tend to publish less (or at least fewer research-intensive papers). Hence, they are not on my mind as much.

    • “In my field, however, journals are increasingly not sharing the reviews. You submit a review, and they never share with you the decision or the other comments. Sometimes if the paper is revised and they need a follow-up review, then they will share things. Essentially, it’s all about their needs. It used to be a few snobby journals doing this. But it’s spreading. I don’t know why. Any idea?”

      No idea, sorry. I haven’t noticed that trend in ecology.

  10. Pingback: “How do we diversify our seminar series?” | Small Pond Science

  11. Pingback: How (as an editor) I choose lists of reviewers | Scientist Sees Squirrel

  12. Pingback: Balance for Better: DiversifyEEB – The Applied Ecologist's Blog

  13. Pingback: Poll: guess the gender balance of N. American EEB seminar series | Dynamic Ecology

  14. Pingback: Scientific web apps – i am become computational
