Also: everybody gets rejected (a lot), how many people read Plos One, is biodiversity science political, how to review NSF grants, and more.
From Jeremy:
Andrew Hendry shares the data on how often his papers are rejected. Answer: far more often than many of you (especially students) probably imagine. Everybody gets rejected. Related: my shadow CV, which includes an incomplete list of my rejected papers (I’m not the compulsive record-keeper Andrew is, so I went by memory…)
Andrew also risks the wrath of a large chunk of the science intertubes by showing data on how little his Plos One papers have been cited, relative to other papers he published around the same time and that he considers to be equally good. Sorry to those of you who wish it were otherwise, but I think Andrew’s right. If you want your paper to be noticed, your best chance is to publish it in a venue people read (which might of course include a selective open access venue like Plos Biology). Because like it or not, the #1 way people decide what papers to read is by reading what’s published in leading journals. And because, just like Andrew, lots of authors use Plos One only for papers that have been rejected by selective journals. Yes, data from Wardle (2012) suggest that ecology papers in Plos One get cited as often as papers in leading ecology journals. But Andrew looks at a larger sample (in an admittedly crude way) and gets a different answer than Wardle did. See the comments on Andrew’s post for some pushback from David Wardle and David Skelly.
Straight from the horse’s mouth: the NSF DEBrief blog with advice on how to write reviews for the NSF DEB. Really, it’s advice on how to do any review. I particularly like the advice to substantiate your claims, and to be self-aware and self-critical about your criticisms. Kudos to the NSF DEB staff for their ongoing efforts with DEBrief; I think it’s great how they’re using it, and I say that even though I can’t even apply to them for money.
This is a lot of fun: You know how we often refer to our mathematical models of food webs, ecosystems, etc. as models of “stocks” and “flows”? Or more colloquially as “bucket models”? Well, macroeconomists do the same with their models. And back in 1949 one of them went so far as to build a physical model of the economy that literally consists of a bunch of interconnected buckets. It’s now been restored by an engineer. Behold: the hydraulic economy! The link goes to a video of the model in action; here’s some discussion of the history, and here’s more. This thing would be a great teaching tool for driving home the difference between stocks and flows, and helping students understand what “equilibrium” means. It’s fun to imagine building a hydraulic ecological model–has anyone ever done it?
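To make the stocks/flows distinction concrete in code as well as plumbing, here’s a minimal sketch (all parameter values invented for illustration) of a single “bucket”: a stock filled by a constant inflow and drained in proportion to its level. Run it and the stock settles at the equilibrium where inflow equals outflow:

```python
# A minimal stock-and-flow ("bucket") sketch; the parameter values are
# made up for illustration.
inflow = 2.0      # flow in: units entering per time step
drain_rate = 0.1  # fraction of the stock that flows out each time step
stock = 0.0       # the stock: how much is currently "in the bucket"

for t in range(100):
    outflow = drain_rate * stock  # flow out depends on the current stock
    stock += inflow - outflow     # the stock integrates the net flow

print(f"Stock after 100 steps: {stock:.2f}")
print(f"Equilibrium stock (inflow/drain_rate): {inflow / drain_rate:.2f}")
# At equilibrium the stock stops changing because inflow equals outflow,
# not because the flows have stopped; that's exactly what the buckets
# make visible.
```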
You can always count on Charley Krebs to tell you what he really thinks, without mincing words. Like in this post, where he accuses much work on biodiversity of being a mix of science, unsubstantiated belief, and political advocacy, to the detriment of the science. Related: this post of Brian’s, asking whether scientists need to present a “united front” on policy issues.
Krebs follows that up by accusing ecologists of framing their research in terms of unanswerable non-questions like “How vulnerable or resilient are boreal North American ecosystems to environmental change?” He argues that this question contains too many vague, non-operational terms (“vulnerable”, “resilient”, “environmental change”) to be an answerable scientific question. I’d say that’s true but misses the point: this kind of question is what Krebs calls a “broad-brush agenda” rather than a specific research question, and unlike him I think you lose something valuable if you don’t have one. It’s important that we recognize the connections between studies of different variables in different systems. Having said that, he’s right that ecology runs into problems if ecologists aren’t careful about how they operationalize their broad-brush research agendas (see here, here and here).
Francis et al. (2014) report that psychology papers published in Science tend to find results supportive of the authors’ hypotheses much more often than would be expected by chance, given the reported effect sizes and sample sizes. That is, even on the assumption that the reported effect size estimates are correct, you wouldn’t expect these papers to so often report such “successful” combinations of results. The implication is that the results and/or hypotheses were generated in such a way as to create high odds of “success”, which is quite possible to do unintentionally. Here’s one way that can happen: HARKing (Hypothesizing After the Results are Known). Here’s another (satirical) explanation: psychologists are psychic. I wonder what you’d find if you did similar analyses of ecology papers. I suspect ecology would look better than psychology, but that’s just a guess. (ht Retraction Watch)
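If you want to see the underlying logic, here’s a rough sketch, not Francis et al.’s exact procedure: estimate each experiment’s power from its reported effect size and sample size, then ask how probable it is that every experiment in the paper came up significant. The (effect size, sample size) pairs below are invented, it assumes the statsmodels package is available, and I believe the cutoff typically used in this literature is 0.1.

```python
# Rough sketch of an excess-significance check; assumes statsmodels is
# installed. The (Cohen's d, per-group n) pairs below are hypothetical,
# standing in for the experiments reported in one multi-study paper.
from statsmodels.stats.power import TTestIndPower

studies = [(0.5, 40), (0.45, 35), (0.6, 30), (0.5, 45)]

analysis = TTestIndPower()
powers = [analysis.power(effect_size=d, nobs1=n, alpha=0.05) for d, n in studies]

# Probability that all experiments reach significance, taking the reported
# effect sizes at face value and treating the experiments as independent.
p_all_significant = 1.0
for p in powers:
    p_all_significant *= p

print("Per-study power:", [round(p, 2) for p in powers])
print(f"P(all {len(studies)} studies significant): {p_all_significant:.3f}")
# A small value means uniform "success" is improbable even under the
# paper's own effect size estimates.
```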
Terry McGlynn just applied for promotion to full professor. Here’s what he said about his blog in his promotion application. I’d say similar things, though with some minor differences (see here). Terry’s more worried than I am about blogging being seen as a net negative by people who aren’t regular blog readers. I’d also emphasize that Terry and I (and Meg and Brian) have the advantage of being able to point to very large-sounding traffic numbers. In my anecdotal experience, even people who never read blogs, or who are dismissive of blogs in general, tend to be impressed if you tell them that your blog gets thousands of unique visitors and pageviews per week. If your blog doesn’t draw that much traffic, you might not want to bother mentioning it in tenure and promotion applications, grant applications, etc.
Top economics journals publish much less debate than they used to, as measured by the proportion of papers with “comment”, “reply”, or “rejoinder” in the title. By this measure, debate in economics peaked way back in the early 1970s. I wonder if you’d find any trends if you did the same analysis for ecology. I’d guess not, but I have no idea really. I also wonder about the extent to which these data capture changes in the field itself, vs. changes in publishing practices. For instance, in ecology nobody really publishes papers in Roman numeral-numbered series any more, but that’s just a change in how we happen to title our papers. It doesn’t mean we’re no longer writing multiple papers on the same topic. (ht Marginal Revolution)
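The measure itself would be trivial to compute for ecology journals if someone scraped the titles; here’s a sketch with made-up records:

```python
# Sketch of the "debate papers" measure: the share of titles per year
# containing "comment", "reply", or "rejoinder". The records are invented.
import re
from collections import defaultdict

KEYWORDS = re.compile(r"\b(comment|reply|rejoinder)\b", re.IGNORECASE)

papers = [  # hypothetical (year, title) records from a journal archive
    (1971, "A comment on optimal foraging theory"),
    (1971, "Density dependence in natural populations"),
    (2010, "Coexistence in fluctuating environments"),
    (2010, "Reply to Smith: coexistence revisited"),
]

counts = defaultdict(lambda: [0, 0])  # year -> [debate papers, total papers]
for year, title in papers:
    counts[year][1] += 1
    if KEYWORDS.search(title):
        counts[year][0] += 1

for year in sorted(counts):
    debate, total = counts[year]
    print(f"{year}: {debate}/{total} = {debate / total:.0%} debate papers")
```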
Trim-and-fill, a common method for correcting for publication bias in meta-analyses, works terribly.
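For anyone who hasn’t met the method: trim-and-fill assumes publication bias makes the funnel plot asymmetric, estimates how many studies are “missing” from one side, trims the most extreme studies on the other side, and fills in mirror-image counterparts before re-estimating the mean effect. Here’s a deliberately stripped-down, one-pass, unweighted sketch of that idea (real implementations iterate and weight studies by precision; the effect sizes are invented), just to make clear what’s being criticized:

```python
# One-pass, unweighted sketch of the trim-and-fill idea (Duval & Tweedie);
# real implementations iterate and use inverse-variance weights.
import numpy as np

# Invented effect sizes with a heavy right tail, as if small effects
# went unpublished.
effects = np.array([0.2, 0.25, 0.3, 0.35, 0.8, 0.9, 1.0])
n = len(effects)

mean = effects.mean()                        # naive pooled estimate
dev = effects - mean
ranks = np.abs(dev).argsort().argsort() + 1  # rank 1 = smallest |deviation|
t_n = int(ranks[dev > 0].sum())              # rank sum right of the mean

# Duval & Tweedie's L0 estimator of the number of missing studies.
l0 = max(0, round((4 * t_n - n * (n + 1)) / (2 * n - 1)))

# "Trim" the l0 most extreme right-side studies, re-estimate the center,
# then "fill" in their mirror images on the left.
trimmed = np.sort(effects)[: n - l0]
center = trimmed.mean()
filled = 2 * center - np.sort(effects)[n - l0:]
adjusted = np.concatenate([effects, filled]).mean()

print(f"Estimated missing studies (L0): {l0}")
print(f"Naive mean {mean:.3f} -> adjusted mean {adjusted:.3f}")
```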
And finally, rare footage of me blogging. Well, sort of me; my ears aren’t actually that big. And sort of blogging. 🙂
Most popular link so far: Andrew Hendry on how often he gets rejected. Surprisingly, it’s much more popular than his post on how little his Plos One papers are cited. And I’m bummed that hardly anyone’s clicking through on that hydraulic model link; it’s a really cool piece of engineering.
I don’t know about hydraulic ecosystem models, but Howard Odum famously developed an electrical ecosystem model using batteries, resistors and voltmeters. I kind of regret not living in the parallel universe where differential equation solvers are programmed like vintage analog synths, with patch cables and knobs to twist!
Odum, H. T. (1960). Ecological potential and analogue circuits for the ecosystem. American Scientist, 48(1), 1–8.
Neat! I probably should’ve heard of that before, but I hadn’t.
Electric fish-controlled Christmas tree at Michigan State University
This came in too late for me to suggest it for the Friday links, but it is too good not to share: https://www.youtube.com/watch?v=K406K_VK2JY&feature=youtu.be
Thanks, that’s cool!