Redacted ecology faculty search (UPDATED)

Duke is hiring an ecologist. And in an attempt to avoid bias, the initial stage of the search will be conducted on redacted applications. Applicants are asked to provide two copies of their CVs, research statements, and teaching statements: a normal copy, and a copy from which the following information is redacted:

  • All mentions of the applicant’s name, date of birth, birthplace, citizenship, ethnicity, and gender.
  • The names of all co-authors on publications. The only authorship information to be provided is the number of authors and the applicant’s place in the author list. So for instance, your CV would list publications like this: “Second of two authors. 1974. Disturbance, patch formation and community structure. Proceedings of the National Academy of Sciences USA 71:2744-2747.”
  • The names and contact details of references.

Presumably, you’d also need to redact the names of PIs and co-PIs from grants, the names of co-authors of conference presentations, and various other bits of information, but the ad doesn’t say that.

I think this is an interesting experiment to address an important issue, and I think it’s a credit to the folks at Duke that they take the issue sufficiently seriously to be willing to take a step like this (EDIT: To be clear, I don’t necessarily agree that the specific step they’re taking is the right one. But that they’re willing to take this step is a sign of how seriously they take the issue, and it’s good that they take it seriously). It’s not an unprecedented step. UConn EEB did a version of this for a couple of searches a couple of years ago, though I hear through the grapevine that they’ve now gone back to doing conventional searches (anyone know more about that?) (UPDATE: Mark Urban from UConn has commented on UConn’s experience; thanks very much to Mark for sharing this.)

Some thoughts and questions below the fold.

  • In contrast to UConn, Duke is making applicants provide the redacted materials. UConn redacted the materials themselves.
  • As noted in Meg’s post on the UConn experience, you have to be careful when redacting materials, because a single mistake can give away information. Which means that, ironically, some members of underrepresented groups might have to redact some useful information. For instance, if you’d held a fellowship reserved for members of an underrepresented group, such as a SEEDS fellowship, presumably you’d have to redact the name of the fellowship, or else omit it from your CV entirely? And if you attended a women’s college as an undergrad, presumably you’d have to redact the name of the college?
  • Just curious: what will Duke do with application materials that accidentally reveal an applicant’s identity, gender, ethnicity, etc.?
  • Even if the redactions are perfect, some of them may be seen through by some search committee members. This creates the potential for blinding to be broken non-randomly with respect to applicant attributes. For instance, more prominent applicants might be both more likely to be recognized, and more likely to be white or male. Which potentially can have perverse consequences–creating biases rather than (or as well as) weakening them. We’ve discussed this before in the context of double-blind peer review. How many and which applicants get recognized by which search committee members will depend a lot on the idiosyncrasies of the applications and who happens to be on the search committee. For instance, back when I served on an environmental microbiology search committee, I doubt I would’ve seen through Duke-style blinding on any application, since I’m not a microbiologist. But I’m sure some of the microbiologists on the search committee would’ve seen through blinding on some of the applications. In general, I think it’s hard to say how much to worry about the possibility that blinding will be seen through non-randomly and so create or exacerbate biases rather than reduce them.
  • Will search committee members be forbidden from looking up the applicants’ papers during the initial stage? That of course would be a trivially easy way to break the blinding. If looking at papers is forbidden, then that means the initial stage of the search will be conducted without the information that would be provided by looking at papers and associated information like how often those papers have been cited, etc. Which means that the remaining information, such as publication venue, takes on correspondingly greater importance.
  • Related to the previous remark, redacting the identities of the applicants’ supervisors, references, and collaborators removes useful information about the applicants, even in the absence of reference letters. Doing a search blind necessarily means doing it based on less information, and so putting more weight on the remaining information.
  • Whether the redactions will actually have any effect on the search outcome seems likely to depend a lot on what constitutes the “initial stage” of the search. If the initial stage just involves cutting the applicant pool down to anyone who has a ghost of a chance of being hired, I doubt blinding will matter. In my admittedly-anecdotal experience with conventional searches, nobody potentially competitive gets eliminated at an early stage, before reference letters are requested or phone/skype interviews scheduled. But at some point, the search committee quite rightly will want the information provided by reference letters, publications, supervisor and collaborator identities, citations, interviews, etc. After all, if you didn’t need that information you’d just make the hiring decision based on the redacted applications. In this way, faculty searches seem to me to be importantly-different from the example of professional orchestra auditions. You can listen to orchestra auditions blind and still get all the information needed to make an informed hiring decision, and so you absolutely should conduct the audition blind. You can’t do that for faculty positions. So given that at some point the application process is going to be unblinded, what if anything is gained by blinding an early stage? Can’t you just pass through to the unblinded stage everyone who has even a remote chance of being hired? Because in my experience you can identify those applicants objectively from unblinded application materials.
  • Bottom line: it’s a hard nut to crack. I don’t have any answers. The final stages of the faculty job application process seem to me like the stages at which subtle implicit biases would most affect the outcome, because those are the stages at which the objective differences among the remaining applicants are smallest. But the final stages are also the stages at which you want as much information as possible–and you simply can’t get that information without knowing the applicants’ identities. So I dunno. I think it’s ultimately an empirical question whether Duke’s approach leads to better, fairer search outcomes than something like using a score sheet to rank all the applicants, or (my own preferred solution at the moment) giving the search committee members training in recognizing implicit biases and implementing appropriate procedures to minimize the influence of those biases.

Looking forward to your comments, as always.

24 thoughts on “Redacted ecology faculty search (UPDATED)”

  1. The people at my alma mater have clearly lost their minds. Perhaps the winning job candidate should just be selected at random. Absolutely no bias there. Just like the lottery…

    • I wouldn’t have put it so starkly; there’s no suggestion that anyone at Duke wants to go to the extreme of a lottery. But yes, they are prepared to give up a fair bit of information at the initial stage of the search. As I said in the post, I’m not convinced it’s worth it. That UConn tried something like this and then went back to the traditional approach also makes me suspect that this is an idea that sounds better in theory than it works in practice.

      For what it’s worth, there are those who would suggest going even further and redacting things like journal titles at the initial stages.

      I would definitely regard that as removing too much information, and forcing the initial stage of the search to rely too much on the remaining bits of information (basically, publication counts).

      • This procedure is complete nonsense. In order to rank applicants even in the first stages of the search, I, as a dutiful search committee member, must do the hard work of actually reading some of their papers. Either that, or I must already know some of the papers. In either case the publications ARE the basis for ranking.

      • I think this gets back to what’s meant by the “initial stage”. I don’t find that reading papers helps me until later stages.

        But people vary on this. Same for reference letters–some people (like me) only find them helpful after making a first cut, others like to look at them from the get go.

        So yes, an alternative approach to Duke’s is “give thorough consideration from the get-go to all the information about every applicant”. That’s the rationale behind the increasing use of score sheets.

        It’s interesting to me that, in the name of “objectivity” and reducing “bias”, some universities are moving in the direction of obliging search committee members to consider all the information available for every applicant (via score sheets). And others are moving in the direction of “give search committee members as little information as possible about the applicants”. Like you, I prefer the first approach and am surprised anyone would prefer the second.

      • I would accept some training to help eliminate implicit bias [your proposed ‘solution’], but I would never serve on a search committee that restricts my access to legit indicators of quality. And while I might not read papers from every applicant at the early stages, judging quality based only on the journals & titles seems too much. And let’s face it, it takes maybe 15 seconds to find the actual authors of any publication in a good journal; even the reference list of other pubs will do it. Heck, an applicant in any of the areas I have worked in with any scientific impact would appear in the reference list of OTHER papers I would know: what am I expected to do…quit reading the literature?

  2. Duke will presumably get applicants who have written field-changing papers. Blinding doesn’t do much to conceal the identity of the first author of that two-author paper from 1953 about the structure of DNA. Seems to me this introduces a bias in favor of the applicant who has written a single glam paper over the one who has written a series of high-quality-but-not-widely-famous papers.

    • Yes, I find it hard to imagine how the search will work in practice, given that anyone who’s written a high-profile paper is likely to be recognized. The argument I’ve seen from advocates of doing things blind is that, sure, blinding will sometimes be seen through, but unless it’s seen through 100% of the time it at least cuts down on bias. The counterargument is the one you make–that when blinding is seen through non-randomly, it can create or exacerbate biases rather than reduce them.

      Of course, it’s not guaranteed that being recognized will help a particular applicant. For instance, if you’re recognized as being from a high-profile lab that often publishes in Science and Nature, that can raise the question of whether you have your own ideas or whether you’re just a “cog in the machine”, as it were. Which gets back to the point that faculty job applicants are, and should be, evaluated holistically on the basis of all the information in their applications.

      • Shouldn’t the career stage of the hire be key here? Many (but not all) authors of ‘field-changing’ papers are already at associate or full professor level. It seems like using a ‘blind’ initial step in an assistant professor search is likely to work well, but could be more difficult for senior-level searches. Duke’s search is for an assistant professor.

      • If by “field changing” paper you mean “any paper whose authors are likely to be recognized by someone on the search committee”, then I think a number of asst. professor candidates might well have such papers. Offhand, I’d say any recent ecology paper in Nature, Science, PNAS, EcoLetts, and other high-profile selective journals is a candidate to quite possibly be recognized by someone on the search committee. But as I said in the post, how often this happens is probably going to be very sensitive to exactly who’s on the search committee, who happens to apply, etc.

  3. I agree with everything you’ve written, from “I’m glad this is taken seriously” to all your questions about the actual process.

    One thought I had where it might matter (just a tiny bit) is that in making the “first round” (whatever that is) blind, the committee’s initial impression of individuals doesn’t depend on identity. And that might be important (or not; no data).

    For fun, I went through my CV to see how hard it would be to redact it. And though I’d have to remove or revise statements about advocating for “early career women ecologists” ( –> “early career ecologists”?) and “new mothers” (–> “new parents”?), it would be doable. (Or would I? A man could advocate for those things, too, but presumably it would be obvious I was female.) What about my mention of blogging? If my blog is known (not necessarily likely in a search committee), then mentioning it identifies me. Would I need to remove its mention? I guess the “outreach” section of one’s CV is a bit tricky… but then again, no one is getting kept or cut based on that section…

    Anyway, if I redacted my identity, a search committee would almost surely get the first impression that I’m a dude from my CV, due to my computer science cross-over. (I’ve only ever met one other woman with both a CS and Ecology degree…) I think that would be a good thing for me, knowing implicit bias, even though later on my identity would be revealed. That first impression might be important.

    • Good point about possibly also having to redact activities in which men or women could engage, but which in practice are more likely to be engaged in by women. That’s an awkward case, not sure what to do about it. To my mind, that’s another example of possibly having to redact real information that might well be helpful to the search committee even at an early stage.

      Your comments here highlight the opposition between two different ways of dealing with bias. You can remove information, and so accept greater randomness in the decision for the sake of less bias. Or you can try to take full account of all the available information. So for instance, taking it as points in an applicant’s favor that he or she is a member of an underrepresented group, has held a SEEDS fellowship, runs workshops for women in science, writes a great blog, etc.

      Re: the possibility that first impressions might carry over to affect the search later, once it’s unblinded, I don’t know either. My own suspicion is not, because *so much* additional information is going to be revealed at the unblinding stage.

  4. I think many scientists think they can be objective in this process (I don’t know if others have the same delusion). I wish that there was a simple solution to the problem. We usually only have the opportunity to test the quality of the one candidate that gets the job. It is difficult to assess how many equal or superior candidates get left behind due to “fit” issues, or first impressions of personality traits.

    • Well, without wanting to be difficult, it depends in part on what you mean by “objective”. There’s always scope for legitimate disagreement as to who the “best” candidate is, disagreement that doesn’t reflect “bias” on anyone’s part, just legitimate differences in professional judgement. See this old post for some discussion.

      I agree that there’s no simple solution here. The goal is to get the best person for the job (again, recognizing that “best” can’t be “objectively” measured in the same way as someone’s height). Which ideally means making an unbiased decision based on all relevant information. But if fully eliminating all possibility of bias means throwing away lots of relevant information, well, there are going to be difficult decisions to make. Personally, I’m still struggling to see how Duke’s approach helps, given that at some point they’re going to be unblinding everything. If at the end of the day you’ve decided you want to base your decision on all the information available (and I agree that that’s best), then why not just make all the information available from the get-go? But I’m sure the folks at Duke asked themselves exactly that question before deciding to do a blinded search (and I’m sure folks at other places have asked themselves exactly that question and decided *not* to do a blinded search.)

      Re: only ever getting to hire one candidate, and so never knowing for sure if someone else might’ve been better, yes. But that possibility always exists and there’s no eliminating it, because no one has a crystal ball. Even a totally unbiased decision based on full information can sometimes turn out badly, because even full information doesn’t perfectly predict the future.

      Also, just as you only ever get to hire one candidate, you only ever get to run any given search one way. Duke’s experiment necessarily is unreplicated and uncontrolled. I wouldn’t say it’s thereby totally uninformative. But it might be difficult to say if the blinding made a difference–if the search would’ve come out differently had it been run conventionally. Or if it would’ve come out differently had it been run using some other approach to minimizing bias (e.g., score sheets).

      As noted in the post, UConn EEB tried doing things this way for a while, and it’s my understanding they’ve gone back to doing things traditionally. I’d be very interested to hear comments from someone at UConn about their experience with blinded searches and why they’ve decided to go back to traditional searches. I’ve heard a bit through the grapevine about why they went back to the old way, but I don’t want to spread n-th hand “information”… (UPDATE: And Mark Urban from UConn has now commented to share their experience.)

  5. Worth noting that even the fact that certain information is redacted can give away information about the applicant. For instance, if you redact the name of your college because you went to a women’s college or a historically-black college, then you’ve given away that the name of your college would be highly informative about some attribute of yours. Which in itself is somewhat informative. It tells the search committee you’re a member of an underrepresented group. A white guy wouldn’t have to redact the name of his college to avoid giving away any information about his gender or race.

    In light of this, should Duke have told all applicants to redact the names of their undergraduate colleges? Should they have asked everyone to redact the names of any fellowships they might’ve held (since some fellowships are reserved for members of underrepresented groups)?

    • I emailed the head of the search committee at Duke, asking her to clarify the expectations for the redacted CV in terms of ‘hidden’ identification of, say, gender and/or underrepresented groups. Perhaps he/she will reply directly on DE…I told her/him a discussion was underway. Oops, her email address discloses her gender.
      My UNM dept has 4 big, competitive scholarship programs for underrepresented minorities, and winning one is a clear signal of high quality. Can it be mentioned? I don’t know.

  6. Cindi Jones and I came up with the initial idea of blind searches at UConn EEB a couple of years ago after reading the literature on evidence for implicit bias against minorities and women in job application reviews. The blind review only occurs for the initial round of review, to get to a short list of 20-30 from hundreds of applications. After that, applications were unblinded in order to get to the interview list. The idea is that implicit bias is likely to play a role here, and at this stage few committee members are reading papers yet, but rather looking at publication and grant records and ‘fit’ with the job ad.

    In our first attempt, we redacted indicators of gender and race from the applications ourselves. This took a lot of resources, and ultimately the process missed words, and committee members were clever at figuring out gender anyway (e.g., a redacted ‘he’ is smaller than a redacted ‘she’).

    We ran a search the next time where we asked candidates and letter writers to redact their own letters. This process was much more successful. I had committee members guess the gender of blind applications, and they were frequently wrong even when they were sure they were right.

    Many critics point out gaps in this approach, such as that eventually gender and minority information are revealed. I think that eliminating some bias is better than none at all. Also, a side effect was that all potential applicants knew that UConn was serious about promoting diversity, and many candidates noted that as a reason to apply.

    In our last two searches, we did not redact applications, although I fought strongly to keep the process in place. Some reasons were mundane – late approvals for searches and not wanting to hold up the process, and more recently, our university requesting that applicants submit a plan for how they will promote diversity on campus, which would be difficult to write without revealing personal aspects. A more important issue raised was whether redaction could actually work against a woman or a minority, although I do not know of any empirical evidence to support this. This objection also raises the question of whether we even want an unbiased search at all, or if we want to be biased toward underrepresented groups in our assessment.

    I don’t know what the best way is to give everyone an equal chance in the job market. The evidence is strong that we all judge people based on gender and race. I think removing that information for as long as possible is a logical way to move forward given our human flaws. Carefully constructed experiments to test the effectiveness of this approach would be desirable, but replication is difficult when you have one job search a year.

    • Cheers for this Mark, thanks very much for sharing the UConn experience. This is very interesting.

      I’m interested in your remark that UConn was worried about implicit bias at an early stage of the search. As opposed to at the later stages? As I said in the post, my own instinct (having never sat on a search committee running a blinded search) is that implicit biases would matter most at later stages, when the remaining applicants are more closely matched.

      Do you think you’d have chosen a different top 20-30 had the search been unblinded from the get-go? If so, how different–just a few applicants swapped in or out of the top 20-30, or many? And do you think that you might have ended up hiring someone different had the search been unblinded from the get-go?

      To my mind, your most important point is the opposition between two different ways to promote equity and diversity–blind the search committee to attributes like gender and race as much as possible, vs. explicitly taking those attributes into account so as to better promote equity and diversity. I tend to lean towards the latter myself, but it’s a hard issue on which there’s definitely scope for a range of reasonable views.

      Re: the possibility that redaction would work against a woman or a minority, my own view is that that’s not a far-fetched hypothetical possibility. For instance, in a recent comment, Ric Charnov notes that his university has four very prestigious, highly competitive fellowships reserved for members of underrepresented groups. Or think of the various highly competitive minority postdoctoral fellowship programs NSF has. Obliging a minority candidate to leave such a fellowship off of their cv, or obliging them to just call it something vague like “competitive fellowship”, seems like it could quite well cut against that candidate at the initial stage of the search.

      Re: candidates citing UConn’s blinded search as a reason to apply, I confess I find it really difficult to weigh the value of any given bias-mitigation and diversity-promotion measure as a public signal of taking bias and diversity seriously, vs. its actual effectiveness at mitigating bias and promoting diversity. For instance, we recently linked to a review of the literature on double-blind peer review by Bob O’Hara over at Methods.Blog. As Bob notes, double-blind review is quite popular in surveys, and so I suspect it’s quite effective as a public signal. If your journal goes to double-blind review, it’s going to make many authors, especially women and minorities, very happy that your journal takes the issue of bias very seriously. It’s an honest signal of taking the issue seriously. And in some hard-to-quantify way, your journal’s decision to go to double-blind review is going to help create and sustain a climate in which people are thinking and talking about bias and how best to mitigate it. But there’s next to no empirical evidence that double-blind reviewing actually improves fairness and quality of peer review. Plus there are examples of journals achieving fully gender-neutral outcomes without using double-blinding (Functional Ecology is one). I dunno. My own instinct is to give far more weight to whether a given bias-mitigation or diversity-promotion policy is actually effective (both in an absolute sense, and compared to alternative policies) than to whether it’s seen to be effective, or whether it helps create a climate in which people think a lot about bias and what to do about it. In part because I think there are other ways to create that climate, and other ways to signal that you take issues of equity and diversity seriously.

      • “Obliging a minority candidate to leave such a fellowship off of their cv, or obliging them to just call it something vague like “competitive fellowship”, seems like it could quite well cut against that candidate at the initial stage of the search.”

        This sort of thing doesn’t concern me much. Many minority fellowships don’t scream “minority fellowship” in their official title. (I have two minority fellowships on my CV, but I bet you can’t tell which they are.) And if they do, changing a fellowship name from “NSF Minority Postdoctoral Research Fellowship” to “NSF Postdoctoral Research Fellowship” certainly isn’t going to change my mind about prestige.

      • I’d be more worried about the sort of demographic effects that happen when you blind a search rather than explicitly think about bias. If minorities are less likely to apply, and you’re going from hundreds to 20 in a long list, stochasticity (or small differences among candidates) may leave you with proportionally fewer minorities in your long list. Or if women tend to have slower (but equal quality) research output due to childbearing, you might tend to eliminate them more readily when blinded. All the bias in academia that comes *before* the search will exist in written form in the CVs, and a committee cognizant of bias might be able to adjust for it, but would need more information than is on redacted CVs to be able to do so. I wish there were more data…

      • Yes, delays in research due to childbearing are an issue that just occurred to me. How do you deal with that at the initial stages of a blinded search?

        EDIT: and very good point about search committees possibly wanting to allow for the effects of past circumstances and history that are now baked into applicants’ CVs. You can’t really do that with redacted applications.

  7. Pingback: Recommended reads #88 | Small Pond Science

  8. The Ecology search committee at Duke has been reading the responses to this Dynamic Ecology blog posting with a lot of interest – thanks also for the free publicity, Jeremy! To respond, even just briefly, we would first like to reiterate that implicit bias in faculty hiring exists and has been clearly documented. We recognize that there are multiple different approaches that can mitigate implicit bias, many of which have been mentioned in this blog. Duke already has set up training for all faculty serving on search committees to recognize and avoid implicit bias. Our intent with this Ecology search (besides hiring a fantastic ecologist) is to experiment with the effects of going beyond this training. We know that evaluation of redacted documents is not a foolproof method, but we do think that conducting early stages of assessment of applications based on record of achievement in as unbiased a manner as possible is a worthwhile endeavor. There is not much precedent to go on (except the UConn experience), so our plan during this process is also to evaluate the process itself. We plan to report back to Dynamic Ecology, and to the broader community of ecologists, on our experience after the search is complete to describe what we learned from doing the search in this way.
    With best wishes, Katia Koelle, on behalf of the Duke Ecology search committee

    • Thanks very much for taking the time to comment Katia. Thank you as well for your willingness to share your experiences after the search is complete. That’s really going above and beyond, and I’m sure I speak for many readers when I say how much I look forward to hearing about how the search worked. Send me an email when the search is complete; we’d be very happy to give you a guest post.
