Based on some conversations I’ve had with colleagues recently, I’m starting to wonder whether I should do more intensive mentoring of the students in my lab, especially related to long-term goals and whether they’re on track to achieve those goals.
To start with what I currently do: almost all of the students who work in my lab are paired with a grad student or a postdoc. That person does the day-to-day mentoring on a particular project. In addition, I meet with students more sporadically, with those meetings focusing on bigger-picture things – what projects they are working on, what their career goals are, applying for summer research positions, applying to grad school, etc. It’s tailored to the student’s interests, but, at the same time, I’m starting to wonder if it’s not specific or intensive enough.
Many ecologists, including me, want to discover generalities. We want to see the forest for the trees. That often means abstracting away from certain details so as to focus on features shared by all cases of interest.
But is there such a thing as too much generality, or the wrong kind of generality? It’s a good thing to step back and see the forest for the trees, but what if you step back too far (into deep space, say)? Don’t you lose sight of the forest, or end up mistaking the forest for something else?
Also this week: the boy who cried wolf vs. type I errors, pre-registered replication vs. stereotype threat, update on double-blind reviewing at Am Nat, myths of scientific software, scientific texts vs. Google Ngrams, and more.
tl;dr: Making scientific debate faster can be a good thing, but only in combination with other good things. But the combination of “speed plus other good things” may not be a stable combination, because changes in technology and norms of scientific practice that promote speed also tend to inhibit those other good things.
Functional Ecology just published a bunch of data from the past 10 years (i.e., for as long as the journal has existed and data are available) on correlations between gender and various aspects of the peer review process (ht Retraction Watch). The headline results that most caught my eye (click through for much more):
When I teach, I often note to students when there is a word that is used differently in ecology than in everyday speech, since I think this can contribute to confusion for students. So, I found this tweet really interesting:
This would be really interesting to try with ecology students!
I think one of the biggest ones is “competition”. In my experience, this is one of the most difficult concepts for students to grasp, and I think it’s because, for most students, “competition” evokes thoughts of a basketball game or boxing match or something like that. I think this leads to two sources of confusion. First, competition is often subtle (at least, from our human perspective). As I told my students last week, you might not look at a field of plants and think, “Whew! That is some fierce competition going on out there!”, but the competition is, indeed, fierce. Second, in competition, both players suffer, even when we talk about one species “winning”. My guess is that thinking of competition as something like the Super Bowl, a contest with a clear winner, is part of why that idea is hard to grasp.
One thing I like about blogging as a form is that it’s natural to revisit topics you’ve discussed before. A blog is a record of your evolving thoughts.
Or sometimes, your “living fossil” thoughts that haven’t evolved at all. Earlier this week, I posted on questions you should ask yourself if you’re thinking of starting a blog. It was a really easy post to write; I banged it out quickly. Presumably because, as I just discovered, I’ve written it before.
They say the memory is the first…wait, what was I talking about? :-)
That recent post was pretty popular despite being a total rerun. Which illustrates how any old post that doesn’t come up high in common Google searches gets flushed down the internet’s collective memory hole. I’m now tempted to take a break from writing new posts and just repost old ones, without telling anyone I’m doing it. Then I’ll wait and see if anyone notices. I’m betting nobody will, since after all I just did it and even I didn’t notice! :-)
Also this week: prediction markets vs. replicability, Photoshop vs. Bill Nye, Marc Cadotte on Chinese science, honest Student, what to get Rich Lenski, Meg, and Ben Bolker for Christmas, and more!
Charley Krebs suggests we call a moratorium on microcosm studies in ecology, because their results don’t generalize to nature. He refers to this as “Volkswagen Syndrome”, claiming that, like Volkswagen cars, microcosms don’t perform the way their real-world versions do.
I have huge respect for Charley, but he’s way off base on this. He’s making a common mistake: assuming that the purpose of all microcosm studies is to reproduce or predict the behavior of some particular natural system, or “nature” in general. And implicitly, he’s making the equally common mistake of assuming that that’s the only possible useful purpose of microcosm studies. In fact, microcosm experiments have various useful purposes (just like experiments in general have various useful purposes). Charley’s objection to microcosms only applies to certain microcosm experiments, conducted for certain purposes (e.g., to estimate the value of some rate parameter in some specific natural system), and then only if the experiment in question is in fact insufficiently realistic.
Further, Charley appears to have overlooked cases in which the results of microcosm studies do generalize to nature. For instance, Fox (2007 Oikos) shows that the average strength of trophic cascades in protist microcosms is almost scarily similar to their average strength in field experiments. And Smith (2005 PNAS) shows that microcosm and mesocosm phytoplankton communities fall on exactly the same species-area curve as natural phytoplankton communities.
In passing, Charley notes that ecologists can’t wait around for another century’s worth of data to test many predictions of interest. Which is a puzzling point to raise in this context. One motivation for microcosm and mesocosm studies of small organisms with short generation times is to collect many generations of long-term data in a reasonable human time frame. Better some long-term data from studies of organisms with short generation times than no long-term data at all, surely?
Here’s my old post on objections to microcosms in ecology, and their answers, which anticipates Charley’s objection (and others). Many of the points I make in that post have been made in older peer-reviewed literature as well (e.g., Lawton 1996 Ecology). More broadly, here’s an old post of mine arguing for the value of model systems in ecology. I’d welcome the opportunity to discuss these issues with Charley in the comments, as they’re near and dear to my heart, and because comments on his blog seem to be closed now. I’ve enjoyed our previous exchanges on related issues.
Note from Jeremy: this is a guest post by Peter Adler.
Is it important to have a well-attended, stimulating department seminar series? And if an existing seminar isn’t working well, can it be saved?
Here’s a totally, completely, absolutely hypothetical scenario: A large state university has a cross-campus ecology program with a great seminar series run by graduate students. That seminar series brings in a nationally-recognized speaker each month to give a pair of talks, accompanied by a reception, meetings with students, and organized discussions or workshops. The same university also has a College of Natural Resources (NR) that runs its own seminar series during the other three weeks of each month. The NR series isn’t so great: it does not have a big budget to bring in speakers from across the continent, the quality of the talks is inconsistent, and attendance by both faculty and graduate students is poor.
Should a hypothetical NR professor try to do anything to improve this seminar series?