You can’t estimate your odds of getting a faculty job from common quantitative metrics (UPDATED)

The 2016-17 ecology & evolution jobs compilation includes a spreadsheet on which anonymous job seekers can list some common quantitative metrics summarizing their qualifications: year of PhD, number of years as a postdoc, number of peer-reviewed publications (first-authored and total), h-index, number of major grants held, and number of courses taught (not counting TA positions). Job seekers also can list the number of positions for which they’ve applied this year, the number of interviews they’ve received (phone/skype and on-campus), some personal attributes such as gender, and other information. The purpose presumably is to allow job seekers to determine how competitive they are for faculty positions.

As of Dec. 19, 2016, 73 people had listed their information. Not a massive sample of current ecology & evolution job seekers. Also surely a statistically-biased sample in various ways. But it’s many more current job seekers than anyone not currently sitting on a search committee is likely to have personal knowledge of. So I checked how well quantitative metrics like number of publications and h-index predict the number of interviews job seekers receive. For comparison, I also compiled data on the h-indices of 84 North American ecologists recently hired as assistant professors.

Faculty job seekers understandably want any information they can get on how competitive they are. But how competitive any given individual is for any given position depends on many factors, many of which are only captured coarsely or not at all by common quantitative metrics. You can’t put numbers on fit to the position, quality of your science, strength of your reference letters, and so on. So I suspect that many job seekers tend to overrate the importance to search committees of things you can put numbers on: publication count, h-index, etc. It’s an instance of “looking under the streetlight”. Hence my question: Can you estimate your odds of being interviewed for, or obtaining, a faculty position in ecology and evolution just from common quantitative metrics?

Short answer: No. For the details, read on.

Here’s my copy of the spreadsheet of qualifications of anonymous job seekers, which I had to clean a bit. You can easily compare my cleaned version to the first 73 entries on the original to see what I did. Briefly, I recoded a few non-numerical entries, for instance recoding “<1” for number of years as a postdoc as 0.5 years. I dropped some variables that seemed less useful or redundant. And blanks in the two columns for # of interviews were recoded as zeroes. Obviously, I have no way to check the accuracy of the original data. All I can say is that the data don’t contain any obvious errors or implausibilities. Also, the data could change a bit as job seekers get more interview offers, publish more papers, etc.
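
(For concreteness, here’s a minimal R sketch of the sort of cleaning steps just described. The column names are hypothetical stand-ins, not the actual spreadsheet headers.)

    # Hedged sketch of the cleaning steps described above; column names
    # are hypothetical stand-ins for the actual spreadsheet headers.
    d <- read.csv("anon_quals.csv", stringsAsFactors = FALSE)

    # Recode the few non-numerical entries, e.g. "<1" year as a postdoc -> 0.5
    d$years.postdoc[d$years.postdoc == "<1"] <- "0.5"
    d$years.postdoc <- as.numeric(d$years.postdoc)

    # Recode blanks in the two "# of interviews" columns as zeroes
    d$phone.skype.interviews[is.na(d$phone.skype.interviews)] <- 0
    d$on.campus.interviews[is.na(d$on.campus.interviews)] <- 0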

I also haphazardly used Google Scholar to look up the h-indices of 84 people recently hired to tenure-track assistant professor positions in ecology or an allied field (hired in 2015 or later, or in a few cases in 2014). I looked at Google Scholar h-index purely because it was convenient. Do not read anything else into my choice of metric here, and definitely do not take my choice here as an endorsement of using the h-index for any serious purpose.

Without further ado, the results, followed by some discussion. I’m only going to show results for phone/skype interviews. Results for on-campus interviews were very similar.

Year of PhD doesn’t predict number of interviews, except that interviews are rare for the very few people in the dataset who have yet to receive their PhD or who received their PhD before about 2008-9:

[Figure: number of phone/skype interviews vs. year of PhD. Size of point increases with the number of identical observations.]

The same is true for number of years as a postdoc (which of course covaries tightly with year of PhD). The few people who’ve been postdocs for a year or less, or for >6 years, report receiving few interviews, but otherwise there’s no correlation:

[Figure: number of phone/skype interviews vs. number of years as a postdoc]

Same for number of first-authored papers (which of course covaries with the previous variables). The few people with >~20 first-authored papers mostly report receiving few or no interviews, but otherwise there’s no correlation:

[Figure: number of phone/skype interviews vs. number of first-authored papers]

Same for total number of publications (which of course covaries with the previous variables). The few people with <~10 or >~35 papers report receiving few or no interviews, but otherwise there’s no correlation:

[Figure: number of phone/skype interviews vs. total number of publications]

Same for h-index. (Which of course covaries with the previous variables. It’s also associated with the impact factors of the journals in which you publish.) The few people with h-indices <~4 report receiving few or no interviews, but otherwise there’s no correlation:

[Figure: number of phone/skype interviews vs. h-index]

Number of major grants held does not predict number of interviews:

[Figure: number of phone/skype interviews vs. number of major grants held]

Number of classes taught does not predict number of interviews, save that the few people who report having taught >6 classes report receiving few or no interviews (aside: before you go leaping to the surely-incorrect conclusion that “too much” teaching experience is bad for your chances, read the rest of the post):

[Figure: number of phone/skype interviews vs. number of classes taught]

You know what does predict the number of interviews you’ll get? The number of positions for which you applied:

[Figure: number of phone/skype interviews vs. number of positions applied for]

Now, I know what you’re thinking: maybe none of these metrics predicts number of interviews very well on its own, but maybe all of them together do? Nope. I did a multiple regression of number of phone/skype interviews on years as a postdoc, first-authored publications, total publications, h-index, number of classes taught, and number of major grants held. The R^2 was only 0.03, and the p-value was 0.92. And before you hassle me about possible data transformations or nonlinear regression or something, c’mon: look at those graphs above! Every single one except the one for “number of positions applied for” looks like a shotgun blast. There is no signal in these data of search committees selecting heavily on any of these quantitative metrics.
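
(If you want to check the multiple regression yourself, it’s one line of R on the cleaned data; a sketch, again with hypothetical column names.)

    # Multiple regression of phone/skype interviews on the six metrics.
    # In the data described above, R^2 was ~0.03 with p ~ 0.92.
    m <- lm(phone.skype.interviews ~ years.postdoc + first.author.pubs +
              total.pubs + h.index + classes.taught + major.grants, data = d)
    summary(m)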

And before anyone asks, no, scaling number of publications (or number of first-authored publications, or h-index, or etc.) relative to number of years as a postdoc does not predict number of interviews either. (Results not shown).

Finally, the h-indices of the 84 recent ecology hires were all over the map. The mean was 9.8 (median 9), the middle 50% of the distribution was 5.25-10, and the full range was 1-27. Before you ask, no, this wide spread is not because research universities only hire people with high h-indices while other places only hire people with low h-indices. The 42 recent hires at R1 universities had a mean h-index of 11.8, median 11, middle 50% 9-14.75, full range 3-27. The 42 recent hires at non-R1s had a mean h-index of 7.9, median 8, middle 50% 5.25-10, full range 1-21. So, h-indices of recent R1 hires do tend to run higher, but there’s lots of overlap between the R1 and non-R1 distributions. Heck, even within a single R1 department recent hires in ecology had h-indices ranging from 3-17, and in another the range was 5-27. And within a single non-R1 department recent hires in ecology had h-indices ranging from 1-12. Even if you restrict attention to tiny teaching colleges with no graduate programs, I found one that recently hired an ecologist with an h-index of 12, and another that recently hired an ecologist with an h-index of 2. Just to give you a sense of how big those ranges are, Jeff Ollerton reports that, over the course of his career, his own h-index typically increases by ~1/year. Assuming that Jeff’s a fairly typical ecologist, that means that even just the “typical” range of h-indices of recent hires in ecology (4.75, the interquartile range) is roughly equivalent to more than 4 years’ worth of growth in the h-index of a typical ecologist.
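
(For anyone repeating this exercise with their own list of recent hires, the summary statistics above are just quantiles computed overall and within institution type; a sketch, assuming a hypothetical data frame “hires” with an h-index column and a logical R1 indicator.)

    # Summary statistics for h-indices of recent hires, overall and by
    # institution type. 'hires', 'h.index', and 'R1' are hypothetical names.
    summary(hires$h.index)                    # min, quartiles, median, mean, max
    tapply(hires$h.index, hires$R1, summary)  # split by R1 vs. non-R1
    IQR(hires$h.index)                        # the interquartile range (4.75 here)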

As an aside, the distribution of h-indices of recent ecology hires overlaps a lot with the distribution of h-indices of the anonymous ecology job seekers (mean 8.2, median 7.5, full range 1-19). Note as well that the current h-indices of people hired in 2015-16 (or in a few cases in 2014) will in many cases slightly exceed their h-indices at the time they were hired. Finally, note that recently-hired ecologists with Google Scholar profiles may well be a non-random subset of all recently-hired ecologists with respect to their h-indices. I suspect that researchers with low h-indices are less likely to maintain Google Scholar profiles. And when I was compiling these data, I found that recent hires at teaching colleges were less likely to maintain Google Scholar profiles than recent hires at research universities.

A few comments:

  • I’m not at all surprised by these results. They confirm what I said in my old post on how faculty position search committees work. Search committees evaluate every remotely-competitive candidate in a holistic way. And they place a lot of weight on things that can’t be quantified and that don’t necessarily correlate tightly with things that can be easily quantified. Things like “fit” to the position, their own evaluations of the quality of the candidates’ previous and planned research, reference letters, etc. And different search committees look for different non-quantifiable things. (aside: just because those things can’t be quantified doesn’t mean they’re purely “subjective”, at least not in the same sense that a preference for chocolate ice cream over vanilla is purely subjective. Rather, they’re matters of professional judgment.) Bottom line: faculty search outcomes are not primarily driven by crude quantitative metrics, or by any factor even loosely correlated with crude quantitative metrics.
  • The partial exceptions to the previous bullet are if you are very inexperienced, or very experienced. That’s why people with very few or very many publications, very few or very many first-authored publications, etc. report receiving few interviews. Very roughly, “very inexperienced” means no PhD yet, or perhaps <1 year of postdoc. “Very experienced” means, roughly, 6 years or more as a postdoc.
  • The one clear-cut piece of advice I think job seekers should take away from all this is “apply widely”. Don’t get me wrong, it’s totally fine to restrict your search geographically or by type of institution or whatever. Just make sure you have good reasons for doing so, meaning that you’d rather not get a faculty position than search more widely. Because it’s obvious but true: the more positions you apply for, the more interviews you’re likely to get. Anecdotally, many people who think they have a “dealbreaker” (“I could never be happy in institution type X/geographic region Y/etc.”) later discover it wasn’t a dealbreaker at all. For instance, I didn’t realize I could be really happy in a big city, in another country far from my family, until I did it for my postdoc.
  • If you’re a job seeker, I can’t tell you whether these results should make you happy or sad. That’s up to you. You could choose to be happy about these results because they tell you not to worry about what you might’ve thought of as some deficiency of your cv. Hooray–your future isn’t determined by how many papers you have! 🙂 Or, you could choose to be sad about these results because they leave you very uncertain as to how competitive you’ll be for any particular position. Boo–your future isn’t determined by how many papers you have! 😦
  • In an old post Brian and I offered some advice on how to decide whether to keep pursuing a tenure-track faculty position. It’s a difficult and personal decision–which you shouldn’t make by focusing on crude quantitative metrics like how many papers you have.
  • Nothing in this post is a criticism of the folks who put together or contributed to the “anonymous qualifications” page on the ecology & evolution jobs spreadsheet.
  • Coincidentally, after I wrote this post but before it was published, someone else had the exact same idea, reporting the same results on the “general discussion” tab of the ecology & evolution jobs spreadsheet.

Finally, just to satisfy my own curiosity, I hope you’ll complete the two polls embedded in the original post (one asking how surprising you find these results, the other asking how they make you feel).

UPDATE: on Mar. 3, 2017, I downloaded updated data from the ecology jobs spreadsheet and ran the numbers again. There are now over 100 people who’ve added their information, and many people have updated their information. More people have more interviews now. But the main conclusion is unchanged–no crude quantitative metric predicts number of interviews received other than number of positions applied for.

Also, just out of curiosity, I split the data by gender and didn’t find any substantial differences. Almost exactly equal numbers of men and women have chosen to report their anonymous qualifications. On average, the men and women in this self-selected group have nearly identical experience (mean of 2.6-2.7 years as a postdoc), nearly identical h-indices (8.08-8.09), have applied for similar numbers of jobs (17 for women, 14.8 for men), and received similar numbers of phone/skype interviews (3.1 for women, 1.8 for men) and on-campus interviews (2.3 for women, 1.7 for men). And there is of course substantial variation around the averages among both men and women.

77 thoughts on “You can’t estimate your odds of getting a faculty job from common quantitative metrics (UPDATED)”

  1. Unsurprisingly, a commenter on the “general discussion” tab of the jobs spreadsheet reports having done the same analysis with last year’s anonymous qualifications data and obtaining the same results.

    • That same comment thread also mentions that at least some of the people who initially added to the qualifications tab have not been updating their information. I also wonder whether there is any sort of self-selection in who lists themselves.

      • Maybe. But frankly, the updated information would have to change a *lot*, and in *systematic* ways, to cause relationships to appear where there are none in the current dataset. *Randomly* adding a bunch of interviews would only strengthen my conclusion that these quantitative metrics have no predictive power.

        I’m sure there is some self-selection in who lists themselves. Totally guessing, but I suspect some grad students and brand-new postdocs who are applying for some faculty jobs but are too inexperienced to be competitive aren’t listing themselves. But there could be other more subtle sources of self-selection too. It’s hard for me to imagine that self-selection would be sufficiently strong, and sufficiently non-random *with respect to the patterns I was testing for*, to cause patterns that actually exist in the world to not show up in this sample. But I can’t say, obviously.

        Also, I’ll repeat again what I noted in an earlier comment: on the ecoevojobs spreadsheet, someone just noted that they did the same analysis with last year’s data, and got the same answers I did. Last year’s data of course should suffer less from information that hasn’t been updated, interview invitations that haven’t yet gone out, etc.

  2. Jeremy, some of those “shotgun blasts” look suspiciously triangular, suggesting that maybe quantile regression would be more appropriate than “regular” regression. In other words, and consistent with your discussion, the pattern (if there is one) is more about some unusual extremes that do or don’t get more interviews than average. Such an analysis might reveal that candidates with lots of grants and papers get fewer interviews. Which might indicate “staleness on the market”, or only that with more experience they are more expensive (in all senses) to hire. Of course, such an analysis might also reveal that my eye, like all human eyes, is really good at seeing patterns that aren’t there 🙂

    I have a soon-to-appear anti-CV post in which I assert a similar thing: you should never be offended by any particular job rejection, because the fit of opening to applicant is far too complex to send any real message. I can now link to you for data to support this!

    • I was waiting for someone to wonder if there’s an intermediate optimum for these metrics.

      I mean, you’re welcome to fit a spline to the 75th quantile or something. But I doubt you’ll find much. That visual impression of a triangular distribution is driven by (1) a very small number of people who happen to have gotten many interviews, (2) the few very inexperienced or very experienced people who’ve gotten few or no interviews (noted in the post), and (3) the fact that most people have intermediate values on the x-axis, so that there’s a larger sample size and thus a greater chance of sampling someone with an intermediate h-index (or whatever) who’s gotten lots of interviews.
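
      If you really want to try it, the quantreg package will fit that in a couple of lines (a sketch, with hypothetical column names):

        # Hedged sketch of a 75th-percentile spline fit, using the quantreg
        # and splines packages. Column names are hypothetical.
        library(quantreg)
        library(splines)
        q75 <- rq(phone.skype.interviews ~ bs(h.index, df = 3),
                  tau = 0.75, data = d)
        summary(q75)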

      Re: your anti-CV post, yes. It’s understandable but always totally unjustified when someone complains that they were passed over for job X in favor of someone with fewer publications or fewer first-authored papers or whatever. It’s unfortunate when job seekers’ understandable frustration at how competitive the job market is leads them to lash out at undeserving targets, or see unfairness or capriciousness where there is none. As a job seeker, you have no useful information on the applicant pool for any position for which you applied, and only crude information at best on fit. So you’re *never* in a position to second-guess the outcome of any search.

  3. I don’t think anyone has ever accused me of being “a fairly typical ecologist” 🙂 Seriously though, really nice analysis and interesting post. A couple of initial thoughts/comments:

    – what if you were to construct some kind of multiplicative index using these metrics rather than using multiple regression? Would you expect the same result? E.g. h-index x grants x teaching experience = that more holistic measure.

    – that h-index post was far and away my most viewed post last year (and second-most viewed post in 2015), suggesting that there’s still a huge “appetite” for the h-index, in terms of understanding what it is and how it can/should (or cannot/should not) be used. I was planning to update it and give examples where I think it might be useful (e.g. understanding when to apply for a promotion in the UK system, which is a less transparent process than in North America, or at least that’s my impression).

    – in my experience, I agree, the longest and most productive CV in the world is not going to get you a job unless you “fit” with the department, and that can be as simple as giving the presentation that was asked for, not the one you thought the interview panel ought to hear….

    • I don’t see any reason to think a multiplicative model would explain much more variation. But don’t let me stop you from trying it. ☺
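
      For the record, that’d be something like the following sketch (hypothetical column names; the +1 offsets just keep zero grants or zero classes from zeroing out the whole index):

        # Hedged sketch of a multiplicative index; all choices arbitrary.
        d$multi.index <- d$h.index * (d$major.grants + 1) * (d$classes.taught + 1)
        cor(d$multi.index, d$phone.skype.interviews, use = "complete.obs")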

      I worry a little that this post contributes in some small way to appetite for the h-index and other crude quantitative metrics. Even though the whole point is that those metrics are useless in this context. But maybe that’s a silly worry.

      • No, I don’t think you should worry about that – you’ve been very clear about the lack of correlation with the h-index. However there’s a proviso to that: as you say yourself, this is a highly self-selecting and biased data set. Perhaps your findings are not representative at all?

      • Yes, impossible to say how representative the people on the “anon quals” page are of all applicants for tenure-track ecology jobs in N. America. At a wild guess, I suspect they’re fairly representative. But I don’t know.

      • OK, I played around with the numbers on the spreadsheet and, as you suggested, there’s no sign of any correlation with more complex indices. However two things struck me:

        1. Did you remove those zero-interview data points where the individual had not applied for any positions? I counted at least 10. Sorry if that’s an obvious suggestion!

        2. How trustworthy are these data, and should oddities/outliers be removed? For example, one individual who is not due to finish their PhD until 2017 claims to have published 19 papers, 9 of them as first author, to hold 2 >$100k grants and to have taught 3 classes. I suppose that’s not impossible but seems highly implausible (at least from a UK perspective where the number and size of grants a PhD student could apply for is very limited).

      • @Jeff:

        Re: 1, as you can see if you look closely at my final graph, nobody with zero applications is included on the graph.

        I’m sure there are some errors in the data. But for instance, depending on the subfield you work in, you absolutely could have 19 papers including 9 first authored before finishing a PhD. Physiologists in particular publish a *lot* of really short papers. And there are ecologists who publish a *lot* of natural history notes in little local journals in addition to the usual number of papers in more selective journals. In noting this, I don’t intend any criticism of those publication practices. Just noting that yeah, it’s possible for someone without a PhD to have that many papers.

        As for someone without a PhD having held multiple large grants, yeah, depends what you mean by “held”. Possibly, someone is counting a grant they helped write. Or a grant on which they were listed as a co-PI but didn’t actually control the budget. Or perhaps most likely, it’s someone who got a couple of very valuable graduate student fellowships and is counting them as “grants”. For instance, Vanier graduate fellowships in Canada are worth over 100K.

      • @Jeff:

        No, they’re not excluded because some people who left “# of applications” blank report receiving one or more interviews. So I assumed that people who left “# of applications” blank all applied for at least one job but just didn’t specify how many they applied for.

        I just checked if dropping the people who left “# of applications” blank changes any of the other analyses, and as you’d expect it doesn’t.

      • I hope I’m replying to Jeff’s comment below by hitting ‘reply’ to this post…

        Anyway, it’s definitely possible (albeit rare) to have that kind of productivity as a grad student in the U.S., especially if he/she did an M.S. beforehand and had research experience prior to starting a Ph.D., or works in a particularly collaborative and well-funded lab. Keep in mind that it’s not unusual to take 7 years to finish a Ph.D. here. Furthermore, I’m sure people are lumping NSF and university fellowships into the >$100k grant pot.

  4. This post reminded me of the movie Moneyball (no, I didn’t read the book), which highlighted the lack of correlation between the high-value players predicted by the Jonah Hill character and the high-value players picked by the team’s recruiting staff. And at least in the movie, the quant models were better predictors of success than whatever process the recruiting staff was using. So maybe search committees *should* use some of these metrics to choose whom to interview 😉 Or maybe using these metrics is more like the metrics used by the recruiting staff and there is an undiscovered set of metrics out there. Sounds like an open niche for someone to start a company that sells quant models predicting faculty success to universities.

  5. I wonder if using impact factor* instead of h-index would reveal something? One can imagine that the relationship between number of publications and number of interviews is not strongly correlated because quality of publications is so important. For example, a person with 3 pubs in top tier journals may have a better shot at an interview than someone with 15 pubs in much lower tier journals.

    *Yes, I’m aware IF is a far from perfect metric, but I can’t think of a better way to quantify “quality of publications” at the moment.

    Thanks for a great post.

    • Yes, you could perhaps do a bit better if you had that information. Though even there, it’s not that search committees are actually looking at the impact factor of the journals you publish in.

      There are unfortunately a lot of myths out there about how and why the venues in which you publish matters. For instance, the laughable myth that you have to have a paper in Science/Nature/PNAS to get a job at an R1 (no you don’t, as evidenced by the fact that the large majority of ecologists recently hired at R1s don’t have Science/Nature/PNAS papers).

      My old post on how faculty position search committees work has some discussion of how and why publication quality or venue matters.

      • I actually served on a search committee (at a major Univ. in your country!) where they did tally up the IF of all the competitive candidates and the scores played an important role in decisions! Note, I am not advocating this approach.

    • I think more useful information comes from non-obvious minimum qualifications. Regression analysis inevitably fails and the spreadsheet has to be taken with great caution. For instance, I only provided data for some fields (I do not think I filled in the “how many positions have you applied for” field, for instance).

      I did a similar analysis years ago on ERC Starting grant winners and except for 1 winner, all others had either a Science or a Nature publication as first author. Which is a non-obvious (almost) minimum qualification.

      I do not think you can find a newly hired Assistant Professor in Ecology at an R1 with no publications as first-author in a journal with IF > 4 (maybe even 5).

      I can find others.

      • “I do not think you can find a newly hired Assistant Professor in Ecology at an R1 with no publications as first-author in a journal with IF > 4 (maybe even 5).”

        Well, yes, I’m sure you can find a sufficiently low bar that *almost* everyone recently hired by a research university has jumped. Like already having a PhD, as noted in the post. And yes, “at least one first-authored paper in a journal at least as selective and widely-read as Oikos/Oecologia/etc.” might be another such low bar for research universities.

        I have no idea what ERC starter grants are like, but I’m guessing that they’re very prestigious and so very, very few people get them? That gets into a quite different situation than the one discussed in the post. *Very* few recently hired ecologists have Science or Nature papers. Even at R1 universities.

      • ERC starting grants are typically multi-year grants (up to 2 million euros per project, I think) for young (up to 7 years post-PhD) researchers in Europe. Very prestigious. Minimum non-obvious qualifications.

        And yes, “at least one first-authored paper in a journal at least as selective and widely-read as Oikos/Oecologia/etc.” – more selective, actually: I used IF 4 (or better, 5) on purpose (Ecology/Ecology Letters/Molecular Ecology territory). My hypothesis is that if your top-IF paper is in Oikos, your chances of getting hired are extremely low (Oikos is an excellent journal, by the way). Don’t you think so?
        If we disagree, it is an interesting test.

      • Simone, new data in tomorrow’s post prove both you and me wrong! You can get hired as a TT asst. prof at an R1 with no first-authored publications in a journal with IF > 3 (never mind 4 or 5)!

  6. I would mention that number of phone/skype interviews probably misses a lot. First, many universities go straight to on-campus invitations. Second, many universities have not yet sent out invitations for phone/skype or on-campus interviews for the 2016/2017 season. These factors are likely adding considerable noise to your analysis. Finally, given the correlation between # of jobs applied for and # of interviews, you need to account for this in your likelihood when you fit a Poisson or zero-inflated Poisson to these data (very possibly more zeros here than a Poisson can capture). What you really want to know is the probability of landing an interview, which requires accounting for the total number of applications, not merely the total number of interviews.

    • @colinaverill:

      Yes, many universities go straight to on-campus interviews. But as I said in the post, campus interviews also are uncorrelated with all the predictors, save for a positive correlation with # of positions applied for.

      Yes, there is surely some measurement error in the results because some interview invitations haven’t gone out yet. But as I said in an earlier comment, someone on ecoevojobs.net reports having done the same analysis for last year’s job searches (all of which are now complete) and gotten the same results. So I doubt that the results will change dramatically as more interview invitations go out.

      Sorry, but if you think that I overlooked a true relationship because I didn’t assume a poisson or zero-inflated poisson error term, you’re going to need to prove it to me by doing the reanalysis yourself. I’m open to being convinced. But with respect, when I look at the data, I don’t see even a hint of a signal beyond what I discussed in the post. So my own feeling is that the alternative analysis you suggest is statistical machismo.

      • Reran the model as a binomial in glm, accounting for the number of applications in the probability of receiving a phone/skype interview (R code below). No effects, as you have found previously. Only interesting thing I can add is that, on average (intercept only model), 7% of applications result in a phone/skype interview.

        Though, the fact that the results are not different is not a good case for fitting the “wrong” model. You want success rate, not the total number of interviews landed.

        #load the cleaned data
        d <- read.csv('/path/to/data.csv')

        #generate a two-column response: successes (phone/skype interviews)
        #and failures (applications that didn't yield one).
        #Note: glm() drops rows with NA in either column by default, so
        #people who left "# of applications" blank fall out of this fit.
        y <- cbind(d$number.of.phone.skype.interviews,
                   d$number.of.applications - d$number.of.phone.skype.interviews)

        #Fit a model. Here is intercept-only; substitute any IVs you like.
        m1 <- glm(y ~ 1, family=binomial(link='logit'))

        #get the average probability of receiving a phone interview.
        #Each application has a ~6.9% probability of landing a phone/skype interview.
        require(boot)
        inv.logit(coef(m1))

      • “Though, the fact that the results are not different is not a good case for fitting the “wrong” model. ”

        In drawing the conclusions I did, I’m not relying primarily on that multiple regression, I’m relying primarily on just looking at the graphs. It’s obvious from just looking at the graphs that there’s no relationship to find no matter what you assume about the distributions the data were sampled from, or whether you focus on interviews/application, or etc. Thanks for taking the time to confirm this.

        Part of the art of doing statistics is knowing when the “wrong” model, or no model at all, is just fine. 🙂

  7. Can I comment entirely by linking to previous posts?

    Scientists like to think it can all be quantified (https://dynamicecology.wordpress.com/2016/02/29/we-arent-scientists-because-of-our-method-were-scientists-because-we-count/) but productivity is a complex high-dimensional concept poorly represented by any metric (https://dynamicecology.wordpress.com/2016/06/21/impact-factors-are-means-and-therefore-very-noisy/) and more importantly productivity is only one goal – fit (https://dynamicecology.wordpress.com/2015/12/22/why-fit-is-more-important-than-impact-factor-in-choosing-a-journal-to-submit-to/) and collegiality (https://dynamicecology.wordpress.com/2013/07/23/are-deans-committing-the-same-error-as-hen-breeders/) are at least as important and nobody has figured out (or bothered) to measure those quantitatively.

    I know I’m saying the same things you are. And I agree that tweaking your analyses is not going to identify anything new and is kind of missing the point (not that I could help myself from suggesting additional analyses when you shared preliminary data with Meg & me).

    In my experience the productivity/work quality aspect becomes increasingly less important as the process goes on. It is very important in the initial filter. Then, in narrowing to the 10 or so skype interviews, there are more productive people than you can interview, so fit of subject area becomes increasingly important. And then by the time you get to the on-campus interview, everybody is productive enough for your university (you’ve just taken the top 3 people out of a couple of hundred applications), so it is almost entirely about collegiality and fit in terms of skill sets/courses teachable, etc.

    Just as a concrete example, in a recent search I was on, one individual clearly had the highest productivity (20+ papers at the postdoc stage, with strong reference letters; the next person had 12 papers and most had 5-6), but the fit in subject area was weak. This was enough to get him/her to the skype interview phase, although not without significant discussion due to fit. But during the skype interview the person managed to demonstrate a complete lack of collegiality and he/she was out.

    • “In my experience the productivity/work quality aspect becomes increasingly less important as the process goes on. It is very important in the initial filter. Then, in narrowing to the 10 or so skype interviews, there are more productive people than you can interview, so fit of subject area becomes increasingly important. And then by the time you get to the on-campus interview, everybody is productive enough for your university (you’ve just taken the top 3 people out of a couple of hundred applications), so it is almost entirely about collegiality and fit in terms of skill sets/courses teachable, etc.”

      Yes, exactly.

  8. Unhappy. It’s a lottery. There are so many great ecologists out there. Not enough positions to support them all. I’m personally gunning for that very satisfying data point on the positions-applied-for vs. interviews graph: (1,1)

    • The fact that those measures do not predict much of the chances of getting hired does not mean it is a lottery.
      Clearly luck has a role, but there are a few “stars” who are getting multiple interviews at R1s and many “earth-bound” who get zero interviews. What are the stars doing to be stars? Important labs, trendy topics of research, and publications in certain journals are the first that come to mind.

      For instance, if you come from a top lab (PhD and/or postdoc) and have some pubs in (say) Ecology/Ecology Letters and up, with strong rec letters from your PI, what are your chances of getting hired at an R1?
      I’d say very high (also because I have been interested in these things for years) and, apart from the rec letters, quantifiable by someone who’s willing to spend some time doing it. It is clustering and not regression, but the model is part of the modeling.

      In synthesis, I think that by using the right measures, there is more signal and less noise than what’s coming from the blog post.

      • @Simone and Margaret:

        Simone is correct: job searches are not a “lottery” in the sense that, if you gave the same search committee the same applicant pool again, they likely would pick all or mostly the same people to interview, and would have a non-trivial chance of hiring the same person. And it’s also true (though to a degree that’s difficult to quantify) that two different search committees at two different but similar universities that are searching for similar jobs have non-negligible odds of deciding to interview and make offers to at least some of the same people.

    • Fair enough. Though surely you were aware already that there are more good ecologists in the world than there are tenure-track jobs in ecology, right? So why are you more unhappy after having read this post than you were before? Honest question. Is it just that the post really drove home to you an unhappy point you were already aware of?

      • Jeremy, I thank you for the blog post, the topic is very interesting. I am aware that there are more good ecologists in the world than there are tenure-track jobs in ecology.

        In particular, my comments on the analyses are (and I understand this is a blog post):

        – self-reported data. 1) There are some pub numbers, grants, etc. that look very suspicious, 2) people report only certain fields (like me) or lose interest through time, 3) self-selection in every direction. For instance, the whole decade-long debate on calories and weight loss was fueled by self-reported data. 200 lb people were gaining weight on 1,000-calorie-a-day diets. The problem was that the 1,000 a day was fantasy.

        – However, I think that using other data (same variables, but not self-reported) would give the same results using the same models. It is not driving home to me an unhappy point, I have been very aware for a long time that those are just potential predictors and never made the jump to predictors.

        – As I explained above, I believe (I do not want to say “I know”) that changing the modeling approach and using other variables would provide more meaningful answers to the question (very interesting especially before starting the PhD) “What should you do to increase your chances of landing interviews/positions at an R1/SLAC etc.?”. For me, some of those are:
        a) important labs (history of positions landed by PhDs or postdocs of the PI + rec letters), b) publishing in Ecology and up, c) trendy topics (for instance, afaik community ecology > population ecology).
        I did a very brief analysis of ERC starting grant winners years ago; when I talked about the Science/Nature results, I observed a terrific, textbook example of confirmation bias, while the fact was that all winners had a Science/Nature paper, which was clearly a minimum qualification.

        I would say that checking a-b-c would (I am being conservative) give prob > 0.75 of landing a job at an R1 (and of course a-b covary with “being good”).

      • @Simone:

        The undoubted limitations and imperfections of the data seem to me an argument for simpler stats, or avoiding stats entirely, not an argument for more or fancier stats.

        Re: Science/Nature papers being a minimum qualification for ERC grants, I can’t really comment much, not having seen the data. But as I noted in my old post on how search committees work, having a Science/Nature paper tends to correlate with lots of other strengths. And Science/Nature papers themselves often report very interesting and important results. So “having a Science/Nature paper” often is synonymous with “having done some truly outstanding science”. I guess I don’t quite see why “having done some truly outstanding science” is a problematic feature to have in ERC early career grant recipients. Recognizing of course that I’m sure many people who didn’t get those grants also did truly outstanding science.

        Yes, publishing in good journals matters for many positions.

        As for who you’ve worked with, yes, people who’ve done a lot of outstanding work often have worked with and/or trained with other people who’ve themselves done outstanding work. Does that potentially lead to a bit of a Matthew Effect? Yes. But that’s also how you do good science and become a good scientist yourself–you train with and collaborate with good people (and of course, there are a lot of good people out there, many of them not employed by R1s).

        You can of course make the argument that context-independent differences in “scientific ability” (whatever those might be!) among investigators are sufficiently small and hard to estimate, and major scientific advances sufficiently hard to anticipate, and investigators with big grants sufficiently time- and attention-limited (rather than money-limited), that grant funding should be spread thinly across many investigators rather than being heavily concentrated on a small number of “stars”. I agree with that argument.

      • Jeremy, if I understand correctly, it seems you agree with what I wrote (and I was not calling for fancier statistics with the data you have).

        I also provided some testable predictions (which I do not expect anyone to test). The main point for me is that there is more signal if you look at other data. Intangibles etc. certainly have a role, but if there are people (and there are) who are very marketable, there is certainly more signal than what’s coming from the analysis; it just comes from other, available data.

        Grant funding is tricky.

      • @Simone,

        Apologies for misunderstanding the thrust of your comments. Yes, I agree that there are other data, in particular publication venue data, that would give you at least a bit of predictive power, though I may disagree with you a bit on just how much predictive power one could expect to obtain.

      • Not surprised or *more* unhappy. Just completely unsurprised and unhappy. That’s all. Didn’t mean to imply that your post changed my mood.

  9. With 75ish poll responses in, it looks like only a very small fraction of respondents are very surprised by the results, but there’s a roughly even split of the remaining responses among the other three surprise levels. And most respondents’ feelings either aren’t affected by the results (36%) or are mixed (31%), with 21% feeling happy and 11% feeling sad.

    I’m slightly and pleasantly surprised that very few readers are very surprised by the results. I didn’t expect that response to be most common but I expected more than a few percent of respondents to choose it.

    I’m disappointed but not surprised that 70% of poll respondents were at least slightly surprised by the results. As you can probably tell from the comments, these results are totally unsurprising to the faculty I happen to know. So on the principle that “what’s familiar to me and my friends surely is familiar to most people, right?”, there’s a part of me that can’t quite believe these results would be even slightly surprising to anyone. But I’ve now seen enough violations of that principle on this blog to expect it to be violated. Indeed, off the top of my head I can’t recall that principle ever holding true for me. 🙂

    I’m not surprised that “no effect on my feelings” and “mixed feelings” are the most common responses to the second poll question. I am surprised and interested that “these results make me happy” is outrunning “these results make me sad” by 2:1. I’d have expected an even split, or perhaps a slight predominance of “sad”.

    UPDATE: And now with 260+ responses in, the numbers haven’t shifted too much. Only 4% very surprised by the results, with a pretty even split between the other three surprise levels. 40% have mixed feelings about the data in the post, 33% say their feelings aren’t affected, 15% are happy about the result in the post, 12% are sad. So the split between “happy” and “sad” is now close to the even split I expected.

    • Jeremy, I enjoyed the post and comment thread. Maybe I can provide a little personal context for why these results make me happy as a current job-seeker. To me, these results imply that once you’re accomplished enough to be seen as “qualified” by most search committees, things like your cover letter and research/teaching statements (your way of conveying “fit” pre-interview) are potentially more important than chasing that next high-impact pub or major grant. Not that I’m going to stop doing that, mind you, but given the amount of effort, time horizons on pubs/grants, and difficulties of peer review, having excellent job materials seems a far wiser investment and is something I have far more control over. I suspect that others who are happy about this news might have similar feelings.

  10. Via Twitter, a reminder of a paper I linked to in an old linkfest. At CNRS in France, newly-hired evolutionary biologists are more experienced and productive, by various quantitative metrics, than they were even a decade ago. But there’s still substantial variation among the evolutionary biologists hired in any given year:

    http://link.springer.com/article/10.1007%2Fs11192-015-1534-5

    Unfortunately, it’s a small dataset (only 54 researchers in total), and the data are plotted as means and standard errors by year hired, implying that many of those means and standard error bars are probably based on 5 observations or fewer. So my interpretation is that, besides revealing a temporal trend I couldn’t test for (and that doesn’t exist in N. America, based on other data sets I’ve seen and my own anecdotal experience), those data are broadly in line with those in the post. Lots of variation among recent hires in terms of # of papers, experience, etc.

  11. I also did roughly the same thing last year to procrastinate, though I’m not the commenter on the wiki. At the time, my thought was that part of the relationship between applications and interviews is that there just aren’t enough really high-profile jobs to populate that upper quadrant by themselves. So if you send in 30+ applications to jobs you are even nominally qualified for (i.e. not microbial ecology when you work with vertebrates), you are almost certainly applying to some/many less-competitive jobs with smaller applicant pools.

    • Dunno. Could be.

      In general, I don’t like the understandable tendency to judge “competitiveness” for any given job by the number of applicants. A large fraction of the 300 people (or whatever) applying for that ecology job at Famous U are totally non-competitive. As in, they’re not even nominally qualified or are obvious poor fits. And some fraction of the others aren’t obviously uncompetitive to outsiders, but are obviously uncompetitive to the search committee (i.e. they’re uncompetitive given who the other applicants are and all the needs and desires of the hiring department that couldn’t be expressed in the ad.) It’s a bit like the NY marathon, which is very competitive–but not because 25,000 people run it.

    • Yes, I was just scrolling down to leave a similar comment and I think this is a really important point for how these data are interpreted (setting aside all the data quality issues discussed above). If individual metrics are associated with selectivity of the job search, you could see the (lack of) correlations even if the metrics are actually highly predictive of success. For example, a highly productive ecologist might apply to 10 jobs while a less productive ecologist applies to those same 10 jobs, plus an additional 30 jobs, many of which are ‘less’ competitive. Maybe the first candidate gets 1 interview out of 10 from their list while the second gets 0 out of 10 from the top schools, but 5 out of 30 from the additional schools.

      Note that I’m not necessarily saying the conclusions in the post here are wrong, but it seems that this mechanism is plausible and would lead to very different conclusions about the predictive power of these metrics (though admittedly not in a way that would make them any more helpful to job seekers given the information available). This mechanism could drive both the correlation between # of applications and interviews and the lack of correlation between raw metrics and interviews (for those plots you’d have two competing forces driving the correlation in opposite directions).
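
      To make that mechanism concrete, here’s a toy simulation sketch (all numbers invented): productivity raises per-application odds, but more productive applicants target fewer, more competitive jobs, and with these made-up parameters the two forces roughly cancel in the raw metric-interview correlation.

        # Toy simulation of the selection mechanism; all numbers invented.
        set.seed(1)
        n <- 1000
        metric <- rnorm(n)                                 # standardized metric
        napps  <- pmax(1, rpois(n, exp(3 - 0.5 * metric))) # productive = selective
        # per-application odds: helped by the metric, hurt by targeting more
        # competitive jobs (proxied here by applying to fewer of them)
        p    <- plogis(-2 + metric - 20 / napps)
        ints <- rbinom(n, napps, p)                        # interviews received
        cor(metric, ints)  # weak: the two forces roughly offset
        cor(napps, ints)   # positive: more applications, more interviews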

      • @Conor:

        Except that on the jobs spreadsheet, the folks who report their anonymous quals report applying to different sorts of jobs. It’s not that the people who apply to few jobs are only applying to a nested subset of the jobs applied for by people who applied for many jobs.

        Ok, I suppose it’s possible that number of jobs applied for covaries with the competitiveness of the applicant for the jobs for which he or she applied. And sure, competition for some jobs could be stronger in some meaningful sense, and I suppose it’s possible that # of positions applied for covaries with average competitiveness of jobs applied for. I dunno. This all sounds like a bit of a stretch to me–the sort of hypothesis you come up with if you feel like these crude quantitative metrics just *have* to have *some* predictive power. But I’m just going on instinct, I don’t know.

        And in any case, as you say, even if your hypothesis is right it’s not one that would help any given individual better estimate their own competitiveness for any given position.

      • @ Jeremy

        I take your point that the nested applicants is an exaggeration of what is likely the reality. Even with the different types of jobs though, it doesn’t seem a stretch to suggest that applicants applying to just a few are being more selective and likely to be choosing the most desirable, and therefore most competitive jobs in their search (by whatever metric desirable/competitive are defined, I agree that total number of applicants probably isn’t the best measure here). Those could be the most competitive SLAC, R1, etc. Whether that pattern is strong enough to obscure relationships, I don’t know. Personally, I’m not particularly attached to the idea that these metrics ‘*have* to have *some* predictive power’, but I admit that I am a bit surprised at the complete lack of signal. We agree that it doesn’t make any practical difference for job seekers anyway.

        It would be really interesting to do a similar analysis within the applicant pool for a few individual job searches, to ask whether any of these metrics predict the interview list. That would get around a lot of problems with the jobs wiki data. You wouldn’t be able to get at the # of applications pattern, but I’d have a lot more faith in the conclusions re: the other metrics. Alas, those data are probably only accessible if you are on a search committee, and then they probably cannot be shared.

  12. This post confirms my experience and the advice I was given as an applicant – apply early and often to any position that seems like maybe a decent fit and is geographically acceptable and pay close attention to the ‘soft’ parts of your application – letters, your cover letter, statements, interviews if you get them, etc, as they really matter (and get lots of feedback on them from different people). I was told early on that ‘you have no idea what a hiring committee is actually looking for’ and that certainly proved true for me.
    From the applicant perspective I would also say it’s similarly near-impossible to predict where you’ll get an interview or an offer. I was at a postdoc workshop years ago where the people who’d gotten faculty jobs *all* said the job they’d gotten was one they applied for thinking ‘Well, there’s no way this’ll work, but we’ll see anyway…’ If I had ranked the applications I sent out the year I got a job based on likelihood of success, the job I actually got would have been close to the bottom, but it worked out and it’s great.
    I think this shows that academic hiring committees aren’t robots, which is mostly a good thing, as long as they’ve also had some training in implicit bias, etc.

    • “From the applicant perspective I would also say it’s similarly near-impossible to predict where you’ll get an interview or an offer.”

      That was my experience (as someone who’s had 12 or 13 interviews in my life*). Well, I wouldn’t say it was totally impossible to predict where I’d get an interview. McGill once advertised for a microcosm ecologist; I was confident I’d get an interview for that one, and I did. But I also once got an interview for a microbial ecology job even though I am obviously not a microbial ecologist. And someone who’s basically a biochemist ended up getting hired. And my very first interview, as a barely second-year postdoc with only 6 papers to my name, was at Yale. Had I done what some people unfortunately do and waited until I thought I was “ready” to apply to a place like Yale, I’d never have gotten that interview.

      *p.s. Aside to anyone worried that you must be Doing It Wrong if you’ve gotten some interviews but no offers yet: I’ve had 12 or 13 on-campus interviews in my life. I’ve only ever been offered the job I currently hold, and I was only offered this one after someone else turned it down. No, this doesn’t show I suck at interviewing (or that I don’t suck). It shows that my life (or anyone’s life) is a small sample size. Don’t beat yourself up over it if you’re getting interviews but not offers. Just do the only thing you can do: your best. And keep trying, until such point when you’d be happier doing something else rather than keep trying.

  13. Pingback: What’s the point of the h-index? | Jeff Ollerton's Biodiversity Blog

    • The paper by the folks who made that website came up in an earlier comment.

      Yes, it’s a biomedical dataset, so I doubt that it generalizes to EEB. And I haven’t looked at the paper in enough detail (or recently enough) to recall how much variance they’re able to explain. As I recall, they have a huge dataset. That possibly allows them to build a predictive model that’s statistically significant but that only explains a small fraction of the variation.

  14. I’ve been on several search committees in the past few years. The process certainly would be a lot less labor intensive if we took the number of publications and multiplied it by the impact factor and then added the total grant funding! But that definitely would not have yielded the interview lists that we had for our positions. As you said, the process is much more holistic than that. At Michigan, training in implicit bias and holistic review of applications is required of all search committee members.

  15. I might have an odd viewpoint, coming from a small teaching institution (~650 undergrad majors, plus two MS programs in Biology with about 20 combined students). I think that, inherently, this discussion is a 30,000 ft analysis of an issue that isn’t one-size-fits-all. In the last decade, I’ve been either on the search committee or involved as a faculty member of the department in 12-15 searches.
    Here are a few things I’ve seen:
    1) We never do a search without a good a priori idea of the qualifications for which we are looking. For us, that means a specific part of the subfield that complements what we have to offer, or the habitats we have available. It also means demonstrated ability (through either papers or experience) to teach a specific class/classes. The specifics are different for each hire though, so the same candidates can rarely re-apply (though we did get one person who submitted an application for every position we had, from cell biology to aquatic ecologist, in a three-year window).
    2) I’ve also seen a lot of applications from people who don’t know what they want in a position and so apply to the full spectrum of schools. This gives them a high number of applications, but if they don’t have the content focus, or publication focus (as previously mentioned), they aren’t really competitive regardless of their counts. Admittedly, we are concerned about people with a lot of research experience but no teaching experience. So while that might help predict success applying to a department like mine (we don’t want to lose someone after the search is over because they can’t do enough research to stay happy), the opposite might be true for a department at an R1. So perhaps a dataset where people indicated whether they had a preference for a specific type of institution would be useful (obviously that would have to be part of the criteria in the initial collection, which wasn’t possible here).
    3) We also get people who see a buzzword and apply, but don’t really seem to read the whole job announcement. The title might be aquatic ecologist, but if the “preferred” section is experience in ichthyology, aquatic macroinvert people aren’t likely to make it through the first round. The number of “reject immediately” applications we have is phenomenal, because they miss required qualifications that were spelled out in the advertisement.
    4) As mentioned above, getting to a phone interview just means you look like you meet, on paper, the criteria the committee is looking for; but (at least for us) the phone interview matters far more. Communication skills and presentation skills are really important. We’re looking for someone who will fit well with the department, both in terms of skill set and in terms of collaboration. We don’t want to have to re-run the search for a position, so we look at the candidate as a possible colleague for a long time to come. That means that interpersonal skills are critical.

    This got longer than intended. Sorry about that. Just thought it might be interesting to have some input from the hiring side of the equation.

    • Cheers for this Jeff, and no apologies needed!

      Yes, the point of the post is intentionally quite narrow. I didn’t intend the post to be a complete guide to what search committees are or aren’t looking for. I was just looking to alert job seekers that the “anon quals” page on ecoevojobs.net isn’t really useful. Judging from the poll results, this seems to come as at least a modest surprise to a decent fraction of readers, so hopefully the post has done a bit of good. And hopefully those readers have gone on to read your comment and Brian’s above and so gotten some insight into what search committees actually are looking for.

      Your point 1 reinforces the point of the post, I think. It’s a big reason one can’t use crude quantitative metrics to predict one’s odds of getting an interview or getting hired (either for a specific job, or for some job or other).

      Re: your point 2, many people who list their qualifications anonymously on ecoevojobs.net do indicate what sort of job they’re searching for. I didn’t break the analysis down that way because the sample sizes would’ve been too small to be useful. But I strongly suspect that crude quantitative metrics would not have much predictive power even if we had a much bigger sample size and restricted attention only to applicants applying for the same sorts of jobs. As you suggest, you’d perhaps be able to detect the obviously-uncompetitive extremes (e.g., someone with a whole bunch of research experience and no teaching experience not being competitive for a job at a teaching institution), but probably nothing more refined.

      Re: your #3, yes, my experience also is that obvious lack of fit is more common than many people perhaps realize. This is another big reason why crude quantitative metrics aren’t predictive–they don’t measure fit. And it’s why I worry that some applicants are unduly discouraged by stories of positions getting massive numbers of applicants. Many of those applicants will be obviously poor fits or otherwise uncompetitive, so their presence in the applicant pool doesn’t actually affect the chances of any of the competitive applicants.

      Re: your #4, yes, absolutely. Brian noted this in his comment as well: that once one gets to the phone interview stage (and even more so at the campus interview stage), what matters most is that the people doing the hiring see you as a desirable colleague–someone they’re going to want to have around for decades. This is something else not captured by crude quantitative metrics, and is a reason why crude quantitative metrics don’t predict on-campus interviews or offers.

