Over the years, I’ve heard people talk about mentoring plans and individual development plans (IDPs), and always thought they sounded like they could be worth trying sometime. But I never made it a high priority, and so never actually did them with my lab. I got as far as starting an IDP for myself to test the process, but never got further than that. Then, last year, I had to do a mentoring plan with one of my students as a requirement of her graduate program. As soon as I did that one with her, I realized I needed to be doing these with everyone in my lab: grad students, postdocs, technicians, and undergrads. Here, I’ll describe what we include in our mentoring plans, talk about some of the ways they’ve been helpful, and ask for ideas on some things I’d like to add or change.
Dan Bolnick just had a really important – and, yes, brave – post on finding an error in a published study of his that has led him to retract that study. (The retraction isn’t official yet.) In his post, he does a great job of explaining how the mistake happened (a coding error in R), how he found it (someone tried to recreate his analysis and was unsuccessful), what it means for the analysis (what he thought was a weak trend is actually a nonexistent trend), and what he learned from it (among others, that it’s important to own up to one’s failures, and there are risks in using custom code to analyze data).
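Dan’s bug was in R and specific to his analysis, so this is purely a hypothetical illustration (sketched in Python) of the general risk he describes: a quiet off-by-one in custom analysis code produces no error and no warning, just a slightly different answer, and it’s an independent re-implementation, like the one that caught Dan’s error, that exposes it. All function names and data here are invented for the example:

```python
import random

def slope_custom(xs, ys):
    """Hypothetical 'custom analysis code': ordinary least-squares slope.

    BUG (deliberate, for illustration): n is off by one, so the last
    observation is silently dropped -- the code runs without complaint
    and returns a plausible-looking number.
    """
    n = len(xs) - 1  # should be len(xs)
    mx = sum(xs[:n]) / n
    my = sum(ys[:n]) / n
    num = sum((xs[i] - mx) * (ys[i] - my) for i in range(n))
    den = sum((xs[i] - mx) ** 2 for i in range(n))
    return num / den

def slope_reference(xs, ys):
    """Independent re-implementation of the same estimate, used as a cross-check."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Made-up data: a weak trend plus noise.
random.seed(1)
xs = [float(i) for i in range(20)]
ys = [0.1 * x + random.gauss(0, 1) for x in xs]

# An independent re-analysis is what exposed Dan's error; here the two
# implementations disagree, flagging that one of them has a bug.
print(slope_custom(xs, ys), slope_reference(xs, ys))
```

The point isn’t this particular bug; it’s that custom code fails silently, so checking your result against an independent implementation (or a well-tested library routine) is one of the few ways to catch it before publication.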
This is a topic I’ve thought about a lot, largely because I had to correct a paper. It was the most stressful episode of my academic career, and during that period my anxiety was as high as it has ever been. In the past, a few people have suggested I write a blog post about it, but it still felt too raw – just thinking about it was enough to cause an anxiety surge. So, I was a little surprised when my first reaction to reading Dan’s post was that maybe now is the time to write about my similar experience. When Brian wrote a post last year on corrections and retractions in ecology (noting that mistakes will inevitably happen because science is done by humans and humans make mistakes), I still felt like I couldn’t write about it. But now I think I can. Dan and Brian are correct that it’s important to own up to our failures, even though it’s hard. Even though correcting the record is exactly how science is supposed to work (and I did correct the paper as soon as I discovered the error), it is still something that is very hard for me to talk about.
If you didn’t know, in economics and political science, people are hired for faculty positions based in large part on their “job market paper”. As in, one paper, ordinarily from their Ph.D. work and often not even published yet. Number of publications matters relatively little (though apparently it matters more in political science than in economics). Economics even has a centralized repository of job market papers; that’s how much they matter.
I am curious to hear what you think of this, and whether you think this approach or something like it could be an improvement on current practices in ecology. Personally, I think current faculty hiring practices in ecology are mostly pretty reasonable (see also), and so don’t think this would be a net improvement on current practices in ecology. But I think it’s not so obviously a bad idea as to be uninteresting to think about. I find it useful to think about the practices of other fields and whether they’d transfer to ecology. It helps me look at standard practice in ecology with fresh eyes. A few thoughts to get the ball rolling:
Last year, when I wrote a post with advice on strategies (and reasons) for working more efficiently, the first strategy on my list was:
- Recognize what is “good enough”. As the saying goes, perfect is the enemy of good. And recognize that “good enough” will vary between different tasks. It’s okay if the email you are sending to your lab about lab meeting isn’t perfectly composed.
In this post, I want to dig into that idea more, since I think it’s really important (and since it’s one I need to continually remind myself of!).
Preface: This post is a bit different from a typical post for me (or any of us here at DE!). It relates to an interesting bit of Daphnia biology that I find myself describing a lot when I talk to people about my research more generally. People seem to find it surprising and interesting, so I decided to write a post on it in the hopes that others find it interesting, too.
If I put a bunch of different Daphnia under a microscope in front of you, you’d probably think they all look pretty much the same.* As an example, when keying out the species I’ve done the most work on, Daphnia dentifera**, using the excellent online Haney et al. key, these are two of the first traits you need to focus on:
Those aren’t exactly traits that are overwhelmingly obvious, are they?
I think it is because of their morphological similarity that it is then very surprising to most people when they learn just how old the genus Daphnia is. It’s really old.
When submitting a paper to a journal, you ordinarily want to suggest one or two editors who would be well-qualified to handle the paper. Many journals require you to do this. This makes it much easier for the EiC to assign your paper to the most appropriate editor.
Journals can help authors do this by listing some keywords for their editors, or, even better, by organizing the editors into broad subject areas. For instance, here’s BMC Ecology’s nicely-categorized list of editors. This is SO helpful! As someone who does not have a mental Rolodex of every single ecologist and evolutionary biologist in the world, I cannot always just glance at an alphabetical list of approximately eleventy-thousand editors and instantly recognize an appropriate name. I mean, yes, I always do know the names of some people who I think would be good candidates to handle my paper. But in the fairly likely event that none of them happens to be on a given journal’s board, I need a fallback. And it is not feasible to google all eleventy-thousand editors, or to click links to eleventy-thousand personal websites.
Less commonly, there’s such a thing as too much information. I’m looking at you, Journal of Ecology. Your editorial board is excellent. But the only reason the online list of editors exists is so authors can quickly skim it to identify promising candidates to handle their papers. So I’m sorry, but a whole paragraph on every editor’s research is too much information to easily skim. Well, except for the various J Ecol editors for whom there’s no information at all…
In the grand scheme of things, this isn’t a big deal. But it’s not a big deal to fix either. So Brian, remember when you asked what you can do as EiC to encourage authors to submit to your journal? Here’s a suggestion: add some keywords to your list of editors. 🙂
Ecologists, especially community ecologists, are always looking for ways to infer process from pattern, cause from effect. Ideally, they’d like some way to do this that:
- Is based on previously-collected or easily-obtained observational data
- Is “off the shelf”, meaning that it can be implemented in a routine, “crank the handle” way, without the need for much customization or even thought from the user.
- Can be used in any system
Examples of previously- or currently-prominent ways to infer process from pattern in ecology include:
- randomization of species x site matrices to infer interspecific competition
- plotting coexisting species onto a phylogeny to infer contemporary coexistence mechanisms
- plotting local vs. regional species richness to infer whether local communities are closed to invasion, or whether local species richness and composition is just a random draw from the regional “species pool”
- using the shape of the species-abundance distribution to infer whether communities have neutral dynamics
- using ordination to infer the process dominating metacommunity dynamics
- the use of power law distributions of movement lengths to infer whether foraging animals follow Levy walks
- using body size ratios of co-occurring species to test for limiting similarity
- attractor reconstruction and convergent cross-mapping
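To make the first item on that list concrete, here is a minimal (and deliberately naive) sketch, in Python, of a null-model randomization of a species × site presence/absence matrix. The observed matrix is a toy example, and the null model here simply reshuffles presences among cells; published analyses typically use more constrained nulls (e.g., fixing row and column sums), so treat this as a cartoon of the logic rather than a usable method:

```python
import itertools
import random

def c_score(matrix):
    # Mean number of "checkerboard units" across all species pairs
    # (after Stone & Roberts' C-score); higher values are read as
    # species co-occurring less often than expected by chance.
    pairs = list(itertools.combinations(range(len(matrix)), 2))
    total = 0
    for i, j in pairs:
        ri = sum(matrix[i])
        rj = sum(matrix[j])
        shared = sum(a and b for a, b in zip(matrix[i], matrix[j]))
        total += (ri - shared) * (rj - shared)
    return total / len(pairs)

def shuffle_matrix(matrix, rng):
    # Simplest possible null model: reshuffle all presences among cells,
    # preserving only the total fill. Real analyses constrain the
    # randomization (e.g., fixed row/column sums).
    cells = [v for row in matrix for v in row]
    rng.shuffle(cells)
    ncol = len(matrix[0])
    return [cells[k * ncol:(k + 1) * ncol] for k in range(len(matrix))]

# Toy species x site matrix (rows = species, columns = sites).
observed = [
    [1, 0, 1, 0, 1],
    [0, 1, 0, 1, 0],
    [1, 1, 0, 0, 1],
]

rng = random.Random(42)
obs = c_score(observed)
null = [c_score(shuffle_matrix(observed, rng)) for _ in range(999)]
# One-tailed "p-value": fraction of null matrices at least as segregated
# as the observed one.
p = (1 + sum(n >= obs for n in null)) / (1 + len(null))
print(obs, p)
```

The appeal is obvious: observational data in, p-value out, no system-specific thought required. That’s exactly the “crank the handle” property described above, and also where the trouble starts, since the conclusion hinges entirely on whether the null model is the right one.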
The above approaches to inferring process from pattern all have something in common: none of them work, either in theory or in practice. Which leads to my question:
Has any widely applicable “off the shelf” method to infer process from pattern in ecology ever worked? Can anyone name one?
My first paper was from my undergraduate honors thesis; it was a protist microcosm experiment (Fox and Smith 1997). Almost 20 years later, protist microcosms are still my main study system, because they remain the system best suited for answering the questions I want to ask.
Which as best I can tell makes me almost the longest-tenured “microcosmologist” in the history of ecology, and one of a very few to spend my entire career using microcosms as my main study system.
Which is a bit surprising. After all, protist microcosms have some features that you’d think would make them broadly attractive to a lot of people. They’re cheap and easy to learn, set up, and run. You can get long-term data (hundreds of generations) in a single summer. Etc. And a decent number of people have dabbled in them. So why don’t more people make a career out of them? More broadly, what makes for a “fruitful” study system in which lots of people will spend their entire careers?
Via Twitter, Diogo Provete noted that he’s cited our blog posts at least three times during peer review. Thanks Diogo! I’ve cited some of Brian’s posts on statistical machismo and model selection in a peer review. Which got me wondering: is citing blog posts in peer review becoming a Thing? To collect some anecdata, here’s a little poll. Looking forward to your responses!
Yes, it’s another of my patented non-timely book reviews. At the long-ago suggestion of frequent commenter ~~Jeff Ollerton~~ Artem Kaznatcheev, I just read David Kaiser’s How the Hippies Saved Physics. Here’s my review, which as usual is less about the book and more hopefully-interesting thoughts inspired by the book.
Yes, I know this is useful to like minus-seven of you. Whatever. If all our posts were useful, you’d forget how useful the useful ones are. You’d get tired of ~~winning~~ reading useful posts.* 🙂
tl;dr: It’s a fun and thought-provoking book; you should totally read it. Click through if you care why I say that, or if you want to read my half-baked thoughts on the non-tradeoff between creativity and rigor in science, the challenges of pursuing theory-free research programs, and whether there’s really such a thing as a “productive mistake”.