Also this week: a story of a successful tenure track job search in evolutionary biology, how the other half
lives does experiments, what it’s like to work for an environmental non-profit, and more.
These glass art marine animals are beautiful! Clearly we need a freshwater version.
Chris Blattman worries that an increasing emphasis on randomized controlled experiments is steering social science in the wrong direction. Basically, he worries that demands for rigorous experimental design and high power will lead to undue focus on experiments with few treatments, conducted at one site, that can be done inexpensively, and that address tractable but unimportant questions. In the comments over there I suggested that distributed experiments like NutNet address some (not all) of these concerns. Anyway, I have no idea if Chris is right to worry or not. I just find it interesting to read about people in other fields thinking through the same issues that ecologists think about.
I’m very late to this, but here’s Sergey Kryazhimskiy’s epic post on his (successful) search for a tenure track job in evolutionary biology at a research university. Includes anecdata on the predictors of getting an interview and an offer. Also lots of good advice. Complements my own epic post on how the faculty search process works.
Stephen Heard on why you almost certainly should not appeal when a journal rejects your paper. There are good reasons to appeal, but they're rare. It's much more common for authors to think they have grounds for appeal when in fact they don't. He omits one reason not to appeal rejections, at least not routinely: you'll get a bad reputation.
Nathan Johnson on his first 9 months working for a small environmental non-profit. Our own series of guest posts on non-academic careers for ecologists starts here.
Preregistration of experiments is no panacea if the experimenters just deviate from the preregistered plan without explanation–as a large fraction of preregistered clinical trials published in top medical journals apparently do.
Speaking of apparent panaceas that aren't: a suggestion that every paper be required to devote a section of the discussion to "the most damning result"–the result that most disfavors the authors' preferred hypothesis. I can see the motivation for this. In ecology, one often reads papers with mixed results, in which the favorable results are highlighted in the abstract and discussion, while the unfavorable results are de-emphasized. But I can also imagine lots of ways to game this, plus I think that if every paper were required to include such a section, readers would ignore it. Like how everyone flying on a plane routinely ignores the safety briefing.