The 2016-17 ecology & evolution jobs compilation includes a spreadsheet on which anonymous job seekers can list some common quantitative metrics summarizing their qualifications: year of PhD, number of years as a postdoc, number of peer-reviewed publications (first-authored and total), h-index, number of major grants held, and number of courses taught (not counting TA positions). Job seekers also can list the number of positions for which they’ve applied this year, the number of interviews they’ve received (phone/skype and on-campus), some personal attributes such as gender, and other information. The purpose presumably is to allow job seekers to determine how competitive they are for faculty positions.
As of Dec. 19, 2016, 73 people had listed their information. Not a massive sample of current ecology & evolution job seekers. Also surely a statistically-biased sample in various ways. But it’s many more current job seekers than anyone not currently sitting on a search committee is likely to have personal knowledge of. So I checked how well quantitative metrics like number of publications and h-index predict the number of interviews job seekers receive. For comparison, I also compiled data on the h-indices of 83 North American ecologists recently hired as assistant professors.
Faculty job seekers understandably want any information they can get on how competitive they are. But how competitive any given individual is for any given position depends on many factors, many of which are only captured coarsely or not at all by common quantitative metrics. You can’t put numbers on fit to the position, quality of your science, strength of your reference letters, and so on. So I suspect that many job seekers tend to overrate the importance to search committees of things you can put numbers on: publication count, h-index, etc. It’s an instance of “looking under the streetlight”. Hence my question: Can you estimate your odds of being interviewed for, or obtaining, a faculty position in ecology and evolution just from common quantitative metrics?
Short answer: No. For the details, read on.
Here’s my copy of the spreadsheet of qualifications of anonymous job seekers, which I had to clean a bit. You can easily compare my cleaned version to the first 73 entries on the original to see what I did. Briefly, I recoded a few non-numerical entries, for instance recoding “<1” for number of years as a postdoc as 0.5 years. I dropped some variables that seemed less useful or redundant. And blanks in the two columns for # of interviews were recoded as zeroes. Obviously, I have no way to check the accuracy of the original data. All I can say is that the data don’t contain any obvious errors or implausibilities. Also, the data could change a bit as job seekers get more interview offers, publish more papers, etc.
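For the curious, the cleaning steps above amount to a couple of recodes. Here’s a rough pandas sketch on a made-up mini-version of the spreadsheet (the column names and values are hypothetical, just to illustrate the recodes; the real cleaning was done on the full spreadsheet):

```python
import numpy as np
import pandas as pd

# Hypothetical mini-version of the spreadsheet, just to show the recoding steps.
df = pd.DataFrame({
    "years_postdoc": ["<1", "3", "2.5"],
    "phone_interviews": ["2", "", "1"],
    "campus_interviews": ["", "1", ""],
})

# Recode "<1" years as a postdoc as 0.5 years, then convert to numeric.
df["years_postdoc"] = pd.to_numeric(df["years_postdoc"].replace("<1", "0.5"))

# Blanks in the two interview columns become zeroes.
for col in ["phone_interviews", "campus_interviews"]:
    df[col] = pd.to_numeric(df[col].replace("", np.nan)).fillna(0).astype(int)

print(df)
```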
I also haphazardly used Google Scholar to look up the h-indices of 83 people recently hired to tenure-track assistant professor positions in ecology or an allied field (hired in 2015 or later, or in a few cases in 2014). I looked at Google Scholar h-index purely because it was convenient. Do not read anything else into my choice of metric here, and definitely do not take my choice here as an endorsement of using the h-index for any serious purpose.
Without further ado, the results, followed by some discussion. I’m only going to show results for phone/skype interviews. Results for on-campus interviews were very similar.
Year of PhD doesn’t predict number of interviews, except that interviews are rare for the very few people in the dataset who have yet to receive their PhD or who received their PhD before about 2008-9:
The same is true for number of years as a postdoc (which of course covaries tightly with year of PhD). The few people who’ve been postdocs for a year or less, or for >6 years, report receiving few interviews, but otherwise there’s no correlation:
Same for number of first-authored papers (which of course covaries with the previous variables). The few people with ~20 first-authored papers mostly report receiving few or no interviews, but otherwise there’s no correlation:
Same for total number of publications (which of course covaries with the previous variables). The few people with <~10 or >~35 papers report receiving few or no interviews, but otherwise there’s no correlation:
Same for h-index. (Which of course covaries with the previous variables. It’s also associated with the impact factors of the journals in which you publish.) The few people with h-indices <~4 report receiving few or no interviews, but otherwise there’s no correlation:
Number of major grants held does not predict number of interviews:
Number of classes taught does not predict number of interviews, save that the few people who report having taught >6 classes report receiving few or no interviews (aside: before you go leaping to the surely-incorrect conclusion that “too much” teaching experience is bad for your chances, read the rest of the post):
You know what does predict the number of interviews you’ll get? The number of positions for which you applied:
Now, I know what you’re thinking: maybe none of these metrics predicts number of interviews very well on its own, but maybe all of them together do? Nope. I did a multiple regression of number of phone/skype interviews on years as a postdoc, first-authored publications, total publications, h-index, number of classes taught, and number of major grants held. The R^2 was only 0.03, and the p-value was 0.92. And before you hassle me about possible data transformations or nonlinear regression or something, c’mon: look at those graphs above! Every single one except the one for “number of positions applied for” looks like a shotgun blast. There is no signal in these data of search committees selecting heavily on any of these quantitative metrics.
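If you want to see what that kind of check looks like, here’s a minimal numpy sketch run on made-up data in which the predictors carry no signal by construction (all numbers below are invented; the real regression used the spreadsheet data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 73 "job seekers" with six random metrics and
# interview counts that are, by construction, unrelated to the metrics.
n = 73
metrics = rng.poisson(lam=[3, 8, 15, 7, 2, 1], size=(n, 6))
interviews = rng.poisson(2, size=n)

# Multiple regression via ordinary least squares; R^2 measures how much of
# the variation in interview counts the metrics jointly explain.
X = np.column_stack([np.ones(n), metrics])  # add an intercept column
beta, *_ = np.linalg.lstsq(X, interviews, rcond=None)
resid = interviews - X @ beta
r2 = 1 - resid @ resid / ((interviews - interviews.mean()) ** 2).sum()
print(round(r2, 3))  # small when the predictors carry no signal
```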
And before anyone asks, no, scaling number of publications (or number of first-authored publications, or h-index, or etc.) relative to number of years as a postdoc does not predict number of interviews either. (Results not shown).
Finally, the h-indices of the 83 recent ecology hires were all over the map. The mean was 9.7 (median 9), the middle 50% of the distribution was 5.25-10, and the full range was 1-27. Before you ask, no, this wide spread is not because research universities only hire people with high h-indices while other places only hire people with low h-indices. The 41 recent hires at R1 universities had a mean h-index of 11.8, median 11, middle 50% 9-15, full range 3-27. The 42 recent hires at non-R1s had a mean h-index of 7.9, median 8, middle 50% 5.25-10, full range 1-21. So, h-indices of recent R1 hires do tend to run higher, but there’s lots of overlap between the R1 and non-R1 distributions. Heck, even within a single R1 department recent hires in ecology had h-indices ranging from 3-17, and in another the range was 5-27. And within a single non-R1 department recent hires in ecology had h-indices ranging from 1-12. Even if you restrict attention to tiny teaching colleges with no graduate programs, I found one that recently hired an ecologist with an h-index of 12, and another that recently hired an ecologist with an h-index of 2. Just to give you a sense of how big those ranges are, Jeff Ollerton reports that, over the course of his career, his own h-index typically increases by ~1/year. Assuming that Jeff’s a fairly typical ecologist, that means that even just the “typical” range of h-indices of recent hires in ecology (4.75, the interquartile range) is roughly equivalent to more than 4 years’ worth of growth in the h-index of a typical ecologist.
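(In case the “middle 50%” figures are unfamiliar: they’re just interquartile ranges, which numpy computes in one call. The h-indices below are invented for illustration, not the actual data on recent hires:)

```python
import numpy as np

# Hypothetical h-indices for a handful of recent hires (made-up numbers).
h = np.array([3, 5, 7, 8, 9, 10, 11, 14, 21])

# 25th, 50th, and 75th percentiles: the middle 50% runs from q1 to q3.
q1, med, q3 = np.percentile(h, [25, 50, 75])
print(f"mean {h.mean():.1f}, median {med}, middle 50% {q1}-{q3}, "
      f"full range {h.min()}-{h.max()}")
```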
As an aside, the distribution of h-indices of recent ecology hires overlaps a lot with the distribution of h-indices of the anonymous ecology job seekers (mean 8.2, median 7.5, full range 1-19). Note as well that the current h-indices of people hired in 2015-16 (or in a few cases in 2014) will in many cases slightly exceed their h-indices at the time they were hired. Finally, note that recently-hired ecologists with Google Scholar profiles may well be a non-random subset of all recently-hired ecologists with respect to their h-indices. I suspect that researchers with low h-indices are less likely to maintain Google Scholar profiles. And when I was compiling these data, I found that recent hires at teaching colleges were less likely to maintain Google Scholar profiles than recent hires at research universities.
A few comments:
- I’m not at all surprised by these results. They confirm what I said in my old post on how faculty position search committees work. Search committees evaluate every remotely-competitive candidate in a holistic way. And they place a lot of weight on things that can’t be quantified and that don’t necessarily correlate tightly with things that can be easily quantified. Things like “fit” to the position, their own evaluations of the quality of the candidates’ previous and planned research, reference letters, etc. And different search committees look for different non-quantifiable things. (aside: just because those things can’t be quantified doesn’t mean they’re purely “subjective”, at least not in the same sense that a preference for chocolate ice cream over vanilla is purely subjective. Rather, they’re matters of professional judgment.) Bottom line: faculty search outcomes are not primarily driven by crude quantitative metrics, or by any factor even loosely correlated with crude quantitative metrics.
- The partial exceptions to the previous bullet are if you are very inexperienced, or very experienced. That’s why people with very few or very many publications, very few or very many first-authored publications, etc. report receiving few interviews. Very roughly, “very inexperienced” means no PhD yet, or perhaps <1 year post-PhD, and “very experienced” means 6 years or more post-PhD.
- The one clear-cut piece of advice I think job seekers should take away from all this is “apply widely”. Don’t get me wrong, it’s totally fine to restrict your search geographically or by type of institution or whatever. Just make sure you have good reasons for doing so, meaning that you’d rather not get a faculty position than search more widely. Because it’s obvious but true: the more positions you apply for, the more interviews you’re likely to get. Anecdotally, many people who think they have a “dealbreaker” (“I could never be happy in institution type X/geographic region Y/etc.”) later discover it wasn’t a dealbreaker at all. For instance, I didn’t realize I could be really happy in a big city, in another country far from my family, until I did it for my postdoc.
- If you’re a job seeker, I can’t tell you whether these results should make you happy or sad. That’s up to you. You could choose to be happy about these results because they tell you not to worry about what you might’ve thought of as some deficiency of your cv. Hooray–your future isn’t determined by how many papers you have! 🙂 Or, you could choose to be sad about these results because they leave you very uncertain as to how competitive you’ll be for any particular position. Boo–your future isn’t determined by how many papers you have! 😦
- In an old post Brian and I offered some advice on how to decide whether to keep pursuing a tenure-track faculty position. It’s a difficult and personal decision–which you shouldn’t make by focusing on crude quantitative metrics like how many papers you have.
- Nothing in this post is a criticism of the folks who put together or contributed to the “anonymous qualifications” page on the ecology & evolution jobs spreadsheet.
- Coincidentally, after I wrote this post but before it was published, someone else had the exact same idea, reporting the same results on the “general discussion” tab of the ecology & evolution jobs spreadsheet.
Finally, just to satisfy my own curiosity, I hope you’ll complete the following two polls:
UPDATE: on Mar. 3, 2017, I downloaded updated data from the ecology jobs spreadsheet and ran the numbers again. There are now over 100 people who’ve added their information, and many people have updated their information. More people have more interviews now. But the main conclusion is unchanged–no crude quantitative metric predicts number of interviews received other than number of positions applied for. The only conclusion I might modify a bit is that it no longer looks all that rare for people with <1 year of postdoctoral experience to get interviews.
Also, just out of curiosity, I split the data by gender and didn’t find any substantial differences. Almost exactly equal numbers of men and women have chosen to report their anonymous qualifications. On average, the men and women in this self-selected group have nearly identical experience (mean of 2.6-2.7 years as a postdoc), nearly identical h-indices (8.08-8.09), have applied for similar numbers of jobs (17 for women, 14.8 for men), and received similar numbers of phone/skype interviews (3.1 for women, 1.8 for men) and on-campus interviews (2.3 for women, 1.7 for men). And there is of course substantial variation around the averages among both men and women.
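That gender split is just a group-and-average; a toy pandas version (with invented numbers, not the figures reported above) looks like:

```python
import pandas as pd

# Hypothetical mini-dataset, just to show the shape of the comparison.
df = pd.DataFrame({
    "gender": ["F", "F", "M", "M"],
    "years_postdoc": [2.0, 3.0, 2.5, 3.5],
    "phone_interviews": [4, 2, 1, 3],
})

# Mean of each numeric column, separately for each gender.
means = df.groupby("gender").mean(numeric_only=True)
print(means)
```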