Also this week from Meg: reviewers vs. Rich Lenski, a good ending to a bad week, how to value your time, Emily Dickinson vs. plants, flipped classroom failures, the Kermoji-McKendrick model, and more! Did Meg read All The Things? ¯\_(ツ)_/¯ And from Jeremy: the PhD jobs crisis that isn’t, against open peer review, Jane Lubchenco vs. Jim Estes, bald eagles vs. bison, Dr. Seuss vs. Nietzsche, and more! Will you be able to have a productive Friday with all these links tempting you? ¯\_(ツ)_/¯
Lately, I’ve seen a few posts/stories where ecologists and evolutionary biologists describe their path to science/ecology/evolutionary biology. These stories can be compelling, in part because they show the diversity of experiences. No two paths to a career as a scientist are the same! I think it would be useful to have the links all compiled in one place, hence this post. If you know of others, let me know in the comments or on Twitter (@duffy_ma). And, if you don’t have a blog but want to share your story, you can post that in the comments, too!
Thanks to the influence of Robert MacArthur and other key figures, and external factors like the rise of environmentalism, ecology in the late 1950s and 1960s transitioned from an obscure and mostly descriptive discipline to a modern science concerned with general principles and hypothesis testing, one attracting significant public attention and support. It was recognized at the time that this transition was both needed and already underway (see the ESA’s 1965 report on the state and future of ecology, which is a fascinating read).
But how did that transition come about, exactly? Was it really a matter of a few revolutionary geniuses like MacArthur coming along and demonstrating a totally new way to do ecology, which replaced the old-fashioned stuff? It sure looks that way if you look at the list of Mercer Award winning papers. Some of the Mercer Award winners from that time are foundational classics of modern ecology, while others are old-fashioned papers that are now forgotten. By about 1971, the “competitive replacement” looks complete, and from then on the Mercer Award always goes to recognizably modern work.
But if you look more closely within the Mercer Award winning papers, a rather different picture emerges…
(UPDATE: to my embarrassment, everything that follows is just an inferior recapitulation of stuff Mike Kaspari said much better in the ESA Bulletin several years ago. I’d either read Mike’s piece and forgotten it, or missed it, but either way it’s embarrassing. Thanks very much to Mike for sharing his piece in the comments. You should all click through and read it, it’s full of insight.)
Also this week: how to email your professor, advice on when to start a family, Greenpeace vs. Ray Hilborn, The R Objects That Shall Remain Nameless (all of them, apparently), and more.
Can I just say that I love that many journals let reviewers see the other reviews of the ms after the decision is made? I learn so much from comparing my own evaluation of a ms with those of others. Did other reviewers pick up on the things I picked up on? Did other reviewers pick up on things I missed? Do I seriously disagree with anything the other reviewers said? Did all the reviewers pick up on the same things but disagree on how to weight them or what to do about them? Etc.
I also learn a lot from seeing the editor’s decision letter, in those cases where it explains the editor’s thinking (as it should; decision letters shouldn’t be form letters). Particularly when the reviewers disagree.
I confess I’m proud that my reviews are rarely far out of line with the other reviews, and that when they are, the editor generally agrees with my review. I take this as reassuring evidence that I am the thoughtful, careful reviewer I try to be.
As Hannah Gay astutely points out in The Silwood Circle, two things that separate the best scientists from others are (i) heightened willingness to pass judgment (including negative judgment) on the quality, interest, and importance of the work of others, and (ii) heightened yet selective attention to what other scientists are thinking and doing. One of the best ways to acquire and maintain both those traits is to serve as a reviewer for selective journals, and then to read the reviews of others who evaluated the same papers. It hones your judgment.
It’s for this reason that I wouldn’t want to live in a world in which everything was published in non-selective journals that evaluated mss only on technical soundness.* In that hypothetical world (which I don’t think will ever come about, but which some people are calling for), I’d feel cut off from the evaluative judgments of others and so would worry about my own judgment atrophying. And before you ask, no, I wouldn’t consider social media or “post-publication review” a substitute. Social media mostly only exposes you to the judgments of your friends rather than the much broader group of people comprising your field.** Social media also mostly exposes you only to people’s positive judgments of other people’s papers, and mostly doesn’t expose you to the reasons behind people’s judgments. Nobody retweets or Facebook shares papers they don’t like, and people rarely spell out their thinking at length on Twitter. As for post-publication review, it doesn’t exist for most papers. And when it does it’s mostly checks for image manipulation and other misconduct, non-substantive comments, comments that aren’t actually about the paper in question, or abuse,*** so mostly doesn’t let you compare your evaluative judgments to those of others.
*Don’t misunderstand, journals like PLOS ONE have their place. I’ve published in PLOS ONE.
**That’s also why things like lab groups and journal clubs are only an imperfect substitute for exposure to the evaluative judgments of other peer reviewers.
***This statement describes the 20 most recent comments on PubPeer as I’m typing this.
Via Twitter, Andrew MacDonald asks a good question:
Do you ever read @DynamicEcology and feel like you’ve never done ecology correctly, and neither do most other people?
I can totally appreciate where this question comes from, and it’s one readers have had before. So I made a rare foray onto Twitter and replied via tweetstorm. But I don’t know how to storify tweets, plus nobody would read the storify unless I blogged about it, so I figured I’d just blog my response.
tl;dr: I don’t think that most ecologists are doing it wrong! Indeed, I think that, collectively, we’re better at ecology than we’ve ever been. But I can totally see why my blogging might give some people a different impression. So click through if you want a navel-gazing post on what I actually think about ecology and why I blog as I do.
Ecologists often want to study the relative importance or strength of different variables or factors. Which is stronger: top down or bottom up effects? Where does community X fall on a continuum from “drift dominated” to “niche dominated”? What’s the relative importance of density-dependent vs. density-independent factors in explaining temporal variance in species’ abundances? What’s the relative importance of ecological vs. evolutionary determinants of species’ range limits? Etc.
Questions about the relative importance of different variables or factors often are very sensible questions to ask in a multicausal world (see also). But if you’re not careful, it’s very easy to ask what seems like a sensible question, but actually makes no sense at all. In general, just because dependent variable Y is affected by more than one thing does not mean that it makes sense to ask about the relative strength or importance of those things!
Here are some common pitfalls in studies of the relative importance or strength of different variables in ecology:
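One classic pitfall of this sort can be illustrated with a quick simulation (a minimal sketch, not taken from the post: the variable names and the standardized-coefficient approach are my own illustration). If “relative importance” is measured via standardized regression coefficients, the answer depends on how much each predictor happens to vary in your sample, not just on the underlying biology, so two studies of the same system can rank the same factors differently:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
b1, b2 = 1.0, 1.0  # identical true effects of x1 and x2 per unit change

def std_coefs(sd1, sd2):
    """Standardized slopes from an OLS fit of y on x1 and x2,
    where x1 and x2 are sampled with the given standard deviations."""
    x1 = rng.normal(0.0, sd1, n)
    x2 = rng.normal(0.0, sd2, n)
    y = b1 * x1 + b2 * x2 + rng.normal(0.0, 1.0, n)
    X = np.column_stack([np.ones(n), x1, x2])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    # standardized coefficient = raw slope * sd(x) / sd(y)
    return beta[1] * x1.std() / y.std(), beta[2] * x2.std() / y.std()

# Same underlying effects, different sampled ranges of x1:
print(std_coefs(1.0, 1.0))  # the two factors look roughly equally "important"
print(std_coefs(3.0, 1.0))  # x1 now looks far more "important", for no biological reason
```

The point of the sketch is just that the question “which factor is more important?” has no design-independent answer here, even though y really is affected by both factors.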
Inspired by Meg’s great find of “page not found” errors as explained by economists, here are “page not found” errors as explained by ecologists and evolutionary biologists.
Also this week: Aristotle on trolling, naturalists, self-funding your research, Audubon the prankster, Obama vs. bison, and more. And this week’s funniest webpage is one that doesn’t exist.