Over at The Contemplative Mammoth, Jacquelyn Gill suggests that it would be really helpful for young academics and grad students to see “shadow CVs” of established researchers, as a way of benchmarking their own career progress. Your “shadow CV” includes all your rejected papers and grant applications, all the jobs you applied for but didn’t get, etc. It’s also called a “CV of failures,” but personally I prefer “shadow CV.”
I think this is a really neat idea. Much as publication biases cause the published literature to present a distorted view of the results of science as a whole, our real CVs give a distorted view of the results of our careers as a whole.
Now, I think you’d want to look at many shadow CVs, since every one will have unique features. You’d also want to see shadow CVs from people holding many different positions at many different places. You don’t just want to compare yourself to, say, the very top people in your field, as that’s not the standard by which you’ll be judged.
I also think you’d want to complement your inspection of shadow CVs with hard data. Public funding agencies publish their grant rejection rates, and in the US and Europe those rates are really high: the vast majority of applications get rejected. Cassey and Blackburn (2004) surveyed successful ecologists and found that 22% of their papers were rejected at least once, and 72% had at least one paper they’d never been able to publish. If you get rejected, you’re in good company.
Worth noting that standards change over time. I landed a plum postdoc, and then was regularly interviewed for faculty positions, despite having fewer than five publications to my name. Now, they were all first-authored publications in good journals, and things like reference letters matter a lot. But I do wonder if times have changed to the point where my 2000-era self would’ve had difficulty even getting a postdoc, much less a faculty position. I honestly don’t know the answer.
Anyway, here, as best as I can recall it (since I didn’t keep track of a lot of this stuff at the time) is my shadow CV. For context, my real CV, which needs a bit of updating, is here.
Undergrad colleges I didn’t get into
I didn’t get into Duke.
Graduate programs I didn’t get into
I didn’t get into Washington or Oregon State. There might’ve been one or two others I applied to but didn’t get in.
Jobs I didn’t get
Near the end of my postdoc, I applied for a second one at NCEAS. I didn’t get it (and quite rightly; my proposal was a fishing expedition and was spotted as such).
I’ve interviewed for 13 or 14 faculty positions in my life. I’ve only ever been offered the job I currently hold.
I applied for many more faculty jobs than the ones I interviewed for. I don’t recall how many; over 50, I’d guess. I used to know about how many it was, because I kept all the rejection letters from the ones that bothered to send rejection letters. When I finally got the Calgary job I took all the rejection letters and ripped them to pieces.
Grants I didn’t get (and one I sort of did)
I haven’t applied for large numbers of grants in my career, because of the way my postdoc was funded and the way the Canadian funding system works.
As a postdoc, I once helped some colleagues write an NSF grant on which I was to be the postdoc if it was funded. I obviously couldn’t be listed as a co-PI on that grant, so it’s not on my CV even though it was funded (I didn’t take the postdoc because I’d gotten the Calgary job by that point).
I’ve had a grant application to the James S. McDonnell Foundation rejected. That one stung a bit; my collaborator and I had worked hard on it, and we had every reason to think we had a decent shot.
I didn’t get the only NSERC Research Tools and Instruments grant for which I applied.
I once submitted a long-shot preliminary proposal to an agency I can’t recall (it was a long time ago). No full proposal was invited.
Sabbatical fellowships I didn’t get
I applied to spend a year at the Wissenschaftskolleg zu Berlin; my application was declined.
Papers that were rejected
Below is a lower bound on my paper rejections; I’m sure I’m forgetting at least some, and quite possibly many.
Hausch et al. in press Ecology was rejected by Ecology Letters and Ecology before eventually getting into Ecology following resubmission.
Fox et al. 2017 was rejected by Nature (after review and revision) and Ecology Letters (after review) before getting into Nature Ecology & Evolution. Still bummed about that, as I think very highly of that paper. But not everyone does.
Hausch et al. 2017 was rejected by two or three leading EEB journals before getting into Ecology & Evolution.
I think Fox and Kerr 2012 was rejected at least once before being published, but I can’t recall for sure. They say memory is the first thing to go. 😉
Fox et al. 2010 was rejected without review from a leading ecology journal before being published in an equally-leading journal with the best reviews I’ve ever gotten in my life. Go figure.
Vasseur and Fox 2009 was initially rejected by Nature with an invitation to resubmit with additional data. We collected the additional data, resubmitted, and were accepted.
Fox 2007 was rejected by at least three leading ecology journals, despite substantial revision after each rejection. I gave up and re-framed the paper to address a completely different issue than I’d originally intended and it was finally published.
Fox 2006 was rejected by Nature before being published in Ecology. The Nature reviewers were quite positive, but wanted to see some illustrative applications to real data. Which I’d thought about including in the original submission, but decided against it because I was having trouble squeezing them in and felt like they didn’t add enough to the central conceptual insights. Still kicking myself over that one.
Fox et al. 2000 was rejected after (pretty good) review by Science before being published in Ecology Letters, if memory serves.
I did a modeling paper on how you can get transient overyielding, and non-monotonic changes in overyielding over time, even in really simple competition models. The point was to provide a theoretical contrast to empirical papers that were coming out at the time, showing that overyielding in experiments with terrestrial plants generally increases over time. It was rejected a couple of times, I think, with rather negative reviews, so I just dropped it. Later Brad Cardinale published a paper on the same topic (but based on different models and coming to a different conclusion), which I took as vindication that the basic question at least was a good one. (Just to be clear, I’m sure Brad was working totally independently of me.)
I have a paper on experimental evolution of bacteria in response to different C:N ratios that’s never been published. Several leading journals rejected it. So did PLoS ONE (!)* I gave up on it for years, then revisited it and heavily revised it for BMC Evolutionary Biology. Was offered the chance to revise further, but I didn’t have good ways to address the reviewers’ (quite reasonable) major concerns, so I never submitted the revision. Am contemplating whether to bother trying to publish a “here’s what I found, it’s intriguing but hard to interpret and needs follow up” paper somewhere just to get the data out there. I’m not likely to follow it up myself, my research has gone in a different direction.
I have an experimental paper on higher order interactions that was rejected years ago.
*I have a colleague who does amazing work who’s also been rejected from PLoS ONE. PLoS ONE of course says that it judges papers only according to “technical soundness,” which means that in theory getting rejected from PLoS ONE should be embarrassing. Personally, I’m not embarrassed, as I hear that the PLoS ONE editors are constantly debating what it means for a paper to be “technically sound.” I even hear that some of their editors just ignore the journal’s stated policy of publishing anything that’s technically sound. I freely admit that’s pure hearsay, but it does jibe with my own, and my colleague’s, admittedly-limited experience with PLoS ONE.