From Meg:
Anne Jefferson recently retweeted an old post of hers at Highly Allochthonous, in which she urges grad students to show their data to their advisor early and often. Good advice!
Joan Strassmann has a post on how to set up your Google Scholar profile. I definitely agree that doing so is a good idea. I put the link to my Google Scholar profile on my CV to try to make it easier on, oh, say potential tenure letter writers.
And, to go with one from our own archives, in talking with new grad students recently, I’ve suggested that you could do worse than to use Jeremy’s post on the most cited papers from the 70s, 80s, and 90s as a list of papers to read as a new grad student.
From Jeremy:
Can’t believe I wasn’t aware of this, but back in June The Silwood Circle was published. The book traces the history of Silwood Park, the branch campus of Imperial College London that, starting in the late 1960s, became home to one of the greatest and most influential groups of ecologists in the world. The key people all shared a particular approach to science, and they succeeded in making that approach hugely influential in Britain and elsewhere. I’m proud to have done my postdoc there and so made my own small contribution to the history of Silwood Park. I’ve asked for a copy of the book for my birthday and I promise I’ll review it as soon as I get the chance. In the meantime, Andrew Read has a review at his lab group’s blog.
A while back, Andrew Hendry did a post criticizing the use of parsimony (“Occam’s razor”) and null models in evolution (I’ve said much the same in an ecological context). Andrew’s post was prompted by some exchanges he had with a graduate student, Njal Rollinson, while serving as an external examiner for Njal’s defense. Pretty vigorous exchanges, from the sound of it. And now Njal, to his credit, has done a post laying out his point of view. It’s very thoughtful and I can appreciate where he’s coming from. Njal’s post also starts with some very forthright comments on the nature of grad student-faculty conversations and debates. Kudos to Njal and Andrew for sharing a fine example of a vigorous professional debate (no less fine for being vigorous, or for not leading to agreement).
Jeremy Yoder links to a video clip of an interview with the late, great John Maynard Smith, relating a very funny anecdote. It involves JBS Haldane and a car on fire…What, are you still here? Haven’t you clicked through yet? 🙂 And here’s Haldane’s own comment on the tale, via a tweet from beyond the grave. 🙂
BioDiverse Perspectives interviewed Bob Paine. He talks a lot about his time as a grad student and offers some opinions on current topics (he thinks NEON is “a waste of money”). And he shares a good anecdote about the demand for reprints of his keystone species paper.
From a new preprint (not yet peer-reviewed) from Barraquand et al.: survey data indicate that young ecologists want more quantitative training, probably more than many ecology programs offer. One caveat: it was an online survey, and one wonders if that introduces a sampling bias towards ecologists who want more math, although the authors did make efforts to disseminate the survey widely. For what it’s worth, it reinforces the more anecdotal responses of my own ecology undergraduates when I asked them to reflect on their mathematical training. (HT Theoretical Ecology)
Should paper titles in ecology be more specific? Are authors using over-general titles in an attempt to signal the generality of their papers?
And finally, when a student asks for an extension on the grounds that a grandparent just passed away, do you believe them? If not, maybe you should: Frances Woolley crunches the numbers and finds that, if you teach a large class, the odds are that somebody’s grandmother is going to pass away during the term. Of course, as the saying goes, “trust everyone, but cut the cards”: Frances still expects students to provide a copy of the obituary or death certificate before the end of the term.
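Woolley’s point is just arithmetic on small probabilities multiplied across many students. Here is a minimal back-of-the-envelope sketch in Python; the class size, the number of living grandparents per student, and the per-term mortality probability are all hypothetical placeholders, not Woolley’s actual figures.

```python
# Back-of-the-envelope sketch with hypothetical numbers (not Woolley's
# actual figures): how likely is it that at least one student in a class
# loses a grandparent during the term?

n_students = 100          # assumed class size
grandparents_each = 2     # assumed living grandparents per student
p_death_per_term = 0.01   # assumed per-grandparent chance of dying in one term

# Assuming independence, P(no deaths) = (1 - p)^(total grandparents);
# the chance of at least one death is the complement.
total_grandparents = n_students * grandparents_each
p_at_least_one = 1 - (1 - p_death_per_term) ** total_grandparents

print(f"P(at least one grandparent death this term) = {p_at_least_one:.1%}")
```

Even with these modest placeholder numbers the probability comes out above 85%, which is the gist of the calculation: in a large class, a bereaved student per term is the expected outcome, not a suspicious coincidence.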
Hoisted from the comments:
Readers have had field experiments destroyed by volcanoes, bomb squads, drunk coatis, and the study organisms themselves. 🙂
Are authors using over-general titles in an attempt to signal the generality of their papers?
Of course they are! Taking a very specific thing and pitching it as something general is how papers are pitched to high-tier journals. Sometimes the authors of the papers even believe their own claims.
Interestingly, the pressure can go in the opposite direction at lower-tier journals. The first paper of my master’s thesis, published in the Journal of Great Lakes Research, was originally titled “Vertical distribution of larval fish in pelagic waters of Lake Michigan: Implications for growth, survival, and dispersal”. However, for better or worse, at the demand of a referee the title was changed to: “Vertical distribution of larval fish in pelagic waters of SOUTHWEST Lake Michigan: Implications for growth, survival, and dispersal”
Even then I resorted to spicing up the title with a colon, a dirty habit I can’t seem to shake.
Just a word on the quantitative training survey for Jeremy:
You mention correctly that there are biases with online voluntary surveys (voluntary being more important than online), which we of course acknowledge in the preprint. The post suggests, however, that we attempted to correct this by disseminating the survey widely, which is not at all what we did. Here’s why:
With a low response rate, wide internet dissemination can result in a large sample size and yet a strongly biased survey. Instead, we dealt with the sample-composition bias by using control questions designed to assess to what extent the results are influenced by a respondent’s activity and opinions. In effect, we check whether the survey composition reproduces that of the ecological community and, when it does not, how that affects the results. We initially expected huge differences between modellers and non-modellers, or between those who like using equations and those who do not.
Remarkably, we did not find these huge differences. Early-career ecologists want more math and stats (and programming) irrespective of whether they are fond of them or not. We had a “Feeling” question: “Rate your feeling for equations (To construct a mathematical, statistical, or computational model)”, on a scale from 1 to 5 (1 was “you really dislike it” and 5 “you really like it”). Among categories 1 and 2 (>200 respondents), who cannot seriously be considered model-lovers, 90% still want more math and 95% more stats! [I don’t think we state it this clearly in the preprint, by the way; perhaps we should.] Thus, more than a quarter of our respondents say their favorite topics within ecology are really not statistical analyses, and yet 9 out of 10 of them suggest that quantitative training should be increased (as does the “average” ecologist in our sample). Occasionally a comment suggested “improved” rather than “increased”.
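To make that kind of robustness check concrete, here is a minimal sketch of the cross-tabulation involved, with made-up responses rather than the actual survey data (the scores and answers below are purely illustrative):

```python
# Minimal sketch of a survey-composition robustness check, using made-up
# data (not the actual survey responses): does "wants more math training?"
# hold up across levels of the "feeling for equations" control question?

from collections import defaultdict

# Each respondent: (feeling score 1-5, wants more math training? True/False)
respondents = [
    (1, True), (1, True), (1, False), (2, True), (2, True),
    (3, True), (3, False), (4, True), (5, True), (5, True),
]  # ...a real survey would have hundreds of entries

by_feeling = defaultdict(list)
for feeling, wants_more in respondents:
    by_feeling[feeling].append(wants_more)

for feeling in sorted(by_feeling):
    answers = by_feeling[feeling]
    share = sum(answers) / len(answers)
    print(f"feeling={feeling}: {share:.0%} want more math (n={len(answers)})")
```

If the share is similar in every feeling category, as we found, then the result does not hinge on the survey having attracted a disproportionate number of equation-lovers.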
Other questions show that we are not predominantly reaching theoreticians and statisticians (ECOLOG has been the main dissemination hub). After our analyses of the survey composition, and of how respondent categories affect their responses, we have no reason to believe that a strong survey-composition bias is driving the results, as you speculate. Note that other psychological biases affect voluntary response samples in general, e.g. overrepresentation of people with strong opinions (fortunately, in this case many comments seem quite balanced).
While it might seem preferable at first to round up several hundred ecologists using stratified random sampling (thumbs up if somebody does), one should keep in mind that a non-anonymous survey might introduce other sources of bias. There would then be other “caveats”.
Thank you for elaborating on the methods you used to identify possible sampling biases. I think it’s great that you thought things through like that.
When I said that you made efforts to disseminate the survey widely, I thought I was merely summarizing what you yourself said in the paper about how the survey was disseminated. You stated that you disseminated the survey through the INNGE network, Ecolog-L (which you noted has over 13,000 subscribers), the Indian YETI mailing list, the French Ecological Society, Twitter, and blogs, including a post on Oikos blog back when I was there. I think it’s fair to say that you made efforts to disseminate the survey widely, and to imply that this makes your sample less biased. Had you merely, say, surveyed INNGE members, or just disseminated the survey via Twitter, or whatever, you’d surely have had a much more biased sample of young ecologists.
Thanks for the feedback. It’s certainly true that wide dissemination is better and avoids some biases, but not what is called nonresponse bias (http://en.wikipedia.org/wiki/Non-response_bias). Even with wide dissemination, voluntary response surveys can be highly biased because only a small fraction of the people who see the survey respond to it (a rough upper bound of 10-20% here, I think); small differences in the probability of responding can then strongly bias survey composition.
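To see how strong this effect can be, here is a tiny simulation, with all population sizes and response probabilities invented for illustration (they are not estimates from our survey):

```python
# Minimal simulation of nonresponse bias (all numbers invented for
# illustration): two equally sized groups see the survey invitation, but
# math-keen ecologists are somewhat more likely to respond. The sample is
# large, yet its composition misrepresents the population.

import random

random.seed(42)

group_size = 10_000          # assumed readers of the invitation, per group
p_respond_math_keen = 0.15   # assumed response probability, math-keen group
p_respond_other = 0.05       # assumed response probability, other group

math_keen_n = sum(random.random() < p_respond_math_keen
                  for _ in range(group_size))
other_n = sum(random.random() < p_respond_other
              for _ in range(group_size))

total = math_keen_n + other_n
print("True share of math-keen ecologists in population: 50%")
print(f"Share among respondents: {math_keen_n / total:.0%} (n={total})")
```

With these invented numbers the respondents come out about 75% math-keen despite a fifty-fifty population, and a sample of roughly two thousand does nothing to fix that.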
Hence the survey-composition questions; in our opinion, the survey composition and the robustness of the responses across categories are even more important than sample size or wide dissemination for providing meaningful results (actually, now I wish we had asked a few more composition questions, to provide more detailed answers…). I was just trying to make readers aware that we tackled such problems, and they can read the preprint for more!